http://arxiv.org/abs/2406.18753v1
20240626203844
Curved detectors for future X-ray astrophysics missions
[ "Eric D. Miller", "James A. Gregory", "Marshall W. Bautz", "Harry R. Clark", "Michael Cooper", "Kevan Donlon", "Richard F. Foster", "Catherine E. Grant", "Mallory Jensen", "Beverly LaMarr", "Renee Lambert", "Christopher Leitz", "Andrew Malonis", "Mo Neak", "Gregory Prigozhin", "Kevin Ryu", "Benjamin Schneider", "Keith Warner", "Douglas J. Young", "William W. Zhang" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.HE" ]
§ ABSTRACT Future X-ray astrophysics missions will survey large areas of the sky with unparalleled sensitivity, enabled by lightweight, high-resolution optics. These optics inherently produce curved focal surfaces with radii as small as 2 m, requiring a large-area detector system that closely conforms to the curved focal surface. We have embarked on a project using a curved charge-coupled device (CCD) detector technology developed at MIT Lincoln Laboratory to provide large-format, curved detectors for such missions, improving performance and simplifying design. We present the current status of this work, which aims to curve back-illuminated, large-format (5 cm × 4 cm) CCDs to a 2.5-m radius and confirm X-ray performance. We detail the design of fixtures and the curving process, and present initial results from curving bare silicon samples and monitor devices and from characterizing the surface geometric accuracy. The tests meet our accuracy requirement of <5 μm RMS surface non-conformance for samples of similar thickness to the functional detectors. Finally, we show X-ray performance measurements of planar CCDs that will serve as a baseline to evaluate the curved detectors. The detectors exhibit low noise, good charge-transfer efficiency, and excellent, uniform spectroscopic performance, including in the important soft X-ray band. § INTRODUCTION §.§ Astrophysics enabled by wide-field, high-spatial-resolution X-ray imaging The frontier of high-energy astrophysics in the next decade requires instruments that probe deeply over a wide field of the sky. Deep X-ray surveys will reveal the seeds of the supermassive black holes that reside at the heart of every galaxy, shaping the evolution of their hosts in ways that are not well understood, beginning less than a billion years after the Big Bang<cit.>. Only with sufficient sky coverage will we assemble a statistically useful sample of these seeds and be able to explore how they grow and evolve, along with their host galaxies, into the systems we see today. These studies will capitalize on facilities in other wavebands now coming on-line, and many such future facilities (e.g., the Rubin Observatory, SKA, and LISA) will also identify scores of transient sources that can only be understood with multi-wavelength and multi-messenger observations<cit.>. Since the triggers for transients may not be well-localized, especially from gravitational wave observatories, X-ray observatories of the future need to have sufficient grasp to efficiently search a large region of the sky, localize the transient, resolve it from confusing sources, and characterize its emission. Closer to home, we still have not found the bulk of the baryons in the local Universe, although simulations suggest they should be in a tenuous, hot medium surrounding galaxies and galaxy clusters<cit.>. This material would produce very faint extended X-ray emission, and its challenging sensitivity requirements demand a census of faint, contaminating point sources across a wide field. This material can also be detected in absorption against bright point sources using dispersive spectroscopy<cit.>. The absorption lines from highly ionized oxygen and other elements will be very faint, so high sensitivity and high spectral resolution are vital. These science goals require X-ray instruments with high spatial resolution over a large field of view, or, in the case of spectroscopy, high spectral resolution across a widely dispersed spectrum.
Missions with similar capabilities to tackle this science have been proposed across several classes, from Flagship (Lynx) to Probe (AXIS) to Explorer (STAR-X) imaging missions, as well as high-resolution grating instruments flying aboard Flagship (Lynx XGS) and Probe/Explorer (Arcus) missions. These capabilities will be enabled by emerging technologies for lightweight, high-resolution X-ray optics and gratings<cit.>. §.§ The need for a curved focal plane An inherent feature of these advanced optical systems is that their surfaces of best focus are curved, not flat, with curvature radii as small as 1 to 2 m. To fully exploit the capabilities of these sophisticated and expensive optics, and to maximize the science results outlined above, the associated instruments must provide detectors that closely conform to their curved focal surfaces. The power of an X-ray telescope is predicated on many factors, the most important of which are its point-spread function (PSF) and field of view (FoV). While on-axis PSF is determined by fabrication errors, off-axis PSF is in general determined by the mathematical prescriptions that define the basic design of an X-ray telescope. The grazing incidence nature of astronomical X-ray optics dictates that X-ray mirrors are nearly cylindrical in geometry, which means the optimal focal surface is curved. The effect of the focal-surface shape on image quality is shown in Figure <ref>, which results from detailed geometrical ray-tracing of a typical mirror shell of the Lynx mirror assembly. The implications for future mission capabilities are profound. Flying a completely flat focal plane in an instrument like the High-Definition X-ray Imager (HDXI)<cit.> on Lynx would double the PSF size over half of the FoV, seriously compromising point-source sensitivity and survey power. Lynx has adapted to this by tilting the array of HDXI detectors to approximate the optimum focal surface, a technique used in previous instruments such as Chandra ACIS<cit.>. However, Lynx’s need for a much larger area of sub-arcsecond imaging, with similar focal length and much smaller depth of focus, requires a large number of detectors, each of which is considerably smaller than scientific detectors that can be routinely fabricated today. This approach also requires detectors with four-side abuttability, complicates instrument design, and introduces undesirable gaps in precious high-resolution field coverage. Use of a curved detector greatly reduces this technical complexity and maximizes the return of the expensive, advanced optics. §.§ Curved detectors: the current and future landscape Curved detector technology has already been demonstrated in the ground-based DARPA/US Air Force Space Surveillance Telescope (SST), a system with spherically curved detectors of 5.44-m radius of curvature operating in the visible spectrum. These SST detectors, developed by MIT Lincoln Laboratory, have only recently been approved for public disclosure by the DoD. The curved sensor technology used for SST was made possible by the development of thinned, back-illuminated charge-coupled devices (CCDs)<cit.>. We and others have since demonstrated that an imager can be fabricated in the planar state and then deformed to a curved surface<cit.>, and curved CCD and CMOS sensors have also appeared commercially<cit.>.
This technology has the potential to improve field-averaged image quality, reduce the number of imaging detectors required to populate a focal surface, simplify detector and instrument design, and reduce the fraction of the field lost to inter-detector gaps. We here describe a joint effort between the MIT Kavli Institute for Astrophysics and Space Research (MKI) and MIT Lincoln Laboratory (MIT/LL) to exploit the demonstrated SST curved-CCD technology to provide large-area, high-performance detectors with the smaller radii of curvature required for future X-ray missions. The ultimate goal of this project is to demonstrate that high-performance X-ray imaging detectors can be curved to match a focal surface similar to that envisioned for Lynx. Our specific objectives are to: * Curve an existing back-illuminated (BI) X-ray CCD (the MIT/LL CCID94), with diagonal length of 65 mm and active (fully depleted) thickness of 100 μm, to a radius of curvature of 2.5 m with an accuracy of 5 μm; * Characterize detector properties (dark current, cosmetics, charge-transfer efficiency, noise, responsivity, X-ray spectral resolution, and quantum efficiency) of both flat and curved CCID94s to identify performance changes resulting from the deformation process; and * Expose curved and flat CCID94 detectors to notional (GEVS) vibration levels and representative thermal and radiation environments. Our experience with SST has shown that there are challenges in fabricating curved sensors, including mechanical considerations such as deforming the brittle silicon or other semiconductor detector material without exceeding the fracture strain, introducing plastic deformation in metal layers that could lead to irreproducible characteristics, or buckling the deformed membrane into a shape that does not conform to the desired focal surface. It has been demonstrated that the deformation of the silicon can raise the dark current of the device, constituting a further challenge<cit.>. In this paper, we present details of the curving process and first results from curving silicon samples that possess all the mechanical properties of the functional imagers that will soon be processed. We also present X-ray performance measurements of planar CCDs that will serve as a baseline comparison to future testing of their curved counterparts. § IMPLEMENTATION §.§ Notional focal plane design and choice of test detector As an example of how a curved CCD array could be implemented, we adopt notional requirements of the Lynx Concept Study’s Design Reference mission<cit.>. The HDXI array would be about 65 mm across with focal-surface radius of curvature ∼2.5 m. As shown in Figure <ref>, using a simple flat imaging surface (orange) results in large departures from the optimal focus over the majority of the FoV. The reference design of the HDXI accounts for this by using 21 tiled imagers in a square 5×5 array (with the outermost four corner chips missing)<cit.>. While significantly better, this approach requires four-side abuttable detectors, introduces a large number of chip gaps, and suffers additional seam loss due to square tiles not abutting tightly on a sphere; this effect increases with each successive row of detectors. In addition, there remains a residual ∼0.1 PSF degradation across large parts of the FoV, an important contribution to the PSF budget for a sub-arcsecond wide-field imager.
A curved array for Lynx, on the other hand, could be fashioned from a 2×2 array of three-side abutting CCDs, each with an imaging area about 33-mm square and with any circuitry to get the signal off-chip outside the field of view. It would also allow for the use of a framestore into which the exposed image could be transferred quickly and then read out more slowly, with less noise, and without a large contribution from out-of-time events. The seam loss of the array would be limited to a narrow stripe between the abutted CCDs, less than 0.5 mm across, producing less than 2% loss. The CCID94 detector developed by MIT/LL nearly fits this bill; it provides 1024 rows and 2048 columns of 24-μm pixels in the imaging area, or 24.6×49.1 mm. The entire die including the adjacent framestore (with equal pixel count but smaller pixels) is 50.7 mm × 40.5 mm, or ∼65 mm along the diagonal. This diagonal extent is nearly identical to that for a square 33×33-mm imaging-area chip, and so it provides a useful prototype on which to test the curving procedure. The CCID94, summarized in Table <ref>, has undergone years of development and fabrication at MIT/LL and extensive testing at MKI, and it is a well-understood, high-performance imager. It is baselined for the focal plane of the high-resolution X-ray grating spectrometer on the Arcus Explorer and Probe class mission concepts<cit.>. Photographs of 200-mm wafers of CCID94 devices are shown in Figure <ref>. §.§ Back illumination, surface passivation, curving, and packaging The approach taken to deform the wafers includes what is referred to as the double-flush thinning (DFT) process, so as to have sufficient flexibility for mounting on a curved silicon mandrel. In principle, this process is a continuation of BI methods routinely performed and described in detail previously<cit.>, but presented here briefly. A finished FI CCD wafer is inverted on a carrier wafer and the single-crystal silicon backside of the CCD etched in an acid solution to a thickness between 15 and 100 μm, depending on the needs of the application. The resulting backside is passivated as described below. Due to the very high resistivity of the CCD silicon and the imposed clock biases, the electric field extends through layers up to and exceeding 100 µm. This processing yields a nearly 100% collection efficiency of photoelectrons. After backside passivation, several lithography and etch steps are needed to access the metal pads buried under oxide to enable device testing and packaging. Additional masking steps may also be used for light shields and/or anti-reflective coatings. A critical aspect of detecting and resolving low-energy X-rays is being able to collect photoelectrons very near the surface (e.g., for 200-eV X-rays, the absorption distance is about 60 nm) and drive them towards the buried channel of the detector. This requires very good passivation of the incident surface, so it does not act as an electrical “dead layer”, generating or recombining electrons. Consequently, a high QE requires a nearly perfect illuminated surface on the silicon, with a large electric field present to drive the photoelectrons away from that surface. The best-known way is to introduce a highly doped p-Si layer (Si-B) on the surface using molecular-beam epitaxy (MBE)<cit.>. MBE has an advantage over other methods of epitaxial growth of a single crystal in the steep profile of the high boron concentration, which creates the electric field necessary to drive the electrons away from the illuminated surface. 
The MBE process consists of inserting a thinned CCD wafer in the MBE chamber, with the illuminated side facing the source of silicon and boron for the epitaxial growth. In this project, the wafers are thinned to 100 μm, in order to have good absorption of X-rays up to ∼10 keV. Cleaning of the wafer prior to entering the MBE system is critical<cit.>. The boron-containing layer is only a few nm thick, to ensure minimal photon absorption in the MBE entrance window. The quality of the epitaxial layer is determined in-situ using reflection high-energy electron diffraction (RHEED). An example of the diffraction pattern is shown in Figure <ref>. The bright points are due to particular lattice planes, and their sharpness and the good definition of the arcs connecting them indicate a high-quality lattice. As we describe below, the fixturing makes intimate contact with the detector back surface during the curving process. Moreover, to our knowledge, no MBE-treated device has yet been curved. Therefore, a key objective of our project is to demonstrate that the detector can be curved without compromising the excellent X-ray performance of an MBE-treated device. Once the standard BI processing with MBE passivation is completed, the wafer is ready for the DFT process (Figure <ref>a), where the illuminated surface is mounted temporarily to a second carrier wafer (Figure <ref>b and c) and the first carrier wafer is thinned so the device/handle composite is about 200 μm thick (Figure <ref>d). The second carrier is released, leaving the 200-μm-thick composite (Figure <ref>e), which allows the 200-mm diameter wafer to be diced (Figure <ref>f) for individual CCD curving and mounting. This structure offers a very good compromise between strength and flexibility, allowing the CCDs to be deformed to meter-class radii with over 95% yields. The process to curve and permanently mount diced DFT chips to mandrels is based on the SST procedure, with two key differences: the X-ray CCD curvature is concave whereas the SST CCDs are convex; and the radius of curvature for the X-ray CCDs is 2.5 m, whereas for SST it is 5.44 m. The vacuum-chuck bonding technique is largely the same, however, and this is illustrated in Figure <ref>. First, a protective porous Teflon foil (shown as an orange layer) is placed on the convex, milled portion of the ceramic vacuum chuck to protect the illuminated surface of the CCD. The DFT CCD is placed illuminated-side down over the chuck and a vacuum is applied, drawing the CCD into the appropriate curvature. Epoxy is placed on the now-convex rear surface of the thinned handle silicon and a concave silicon mandrel is brought into contact and aligned using a specialized jig. The mandrels for SST were fabricated commercially to a spherical figure with surface accuracies of ∼1 μm, and we expect the same for the X-ray CCD mandrels. The mandrel and silicon are brought together with a large but non-destructive force, using a hydraulic plunger. Based on the SST experience, we expect to maintain this vacuum for several days to ensure proper curing of the epoxy between the imager and the mandrel. The concave CCD, now permanently mounted on its mandrel, is released from the chuck, wirebonded to a flexprint, and subjected to final testing. On SST, there have been no observed performance consequences due to curving the chips. Mechanical yield of the curving was around 95%, and the induced strain was too low to detect a measurable increase in dark current.
Even for radii of 1 m, the strains are quite low. To a first approximation, the maximum strain is t/[2R(1-ν)], where t is the thickness of the composite body, R is the radius of curvature, and ν is Poisson’s ratio. In cases where the sag approaches or exceeds t, an additional term 3D^2/(64R^2) must be added<cit.>, where D is the diagonal of the deformed body. For this X-ray imager project, those terms sum to 0.003%, well within the elastic limit for silicon. §.§ Current progress: fabricating and curving samples As a first step in the process, several different silicon “dummy” samples have been prepared for trial deformation. Two sample wafers, listed in Table <ref> as W#, were prepared in a similar way to wafers intended for imaging devices, including components thought to be most prone to mechanical deformation effects; these include the metal conduction layers around the periphery of the CCD and the interlayer dielectric oxide within the active region. The dummies avoid the doping implants required for use as an imager, and also lack the triple polysilicon layers and associated oxide dielectric used for transfer gate electrodes. While the polysilicon will have a lower fracture stress than the single-crystal silicon due to grain boundaries and other defects, it should be less fragile than the oxide layers that are included. This practical approach tests most mechanical components in the samples, but defers testing of all components to the functional imagers. The W1 and W2 wafers underwent back-illumination processing in the MIT/LL Microelectronics Laboratory (ML)[<https://www.ll.mit.edu/about/facilities/microelectronics-laboratory>] as part of the CCID94 back-illumination lot that will also produce imagers for this project. The wafers have undergone the entire DFT process shown in Figure <ref>, including oxide bonding to a carrier wafer which was then thinned to 150 μm to produce the expected total thickness of the real detectors. To allow additional testing with lower investment, two bare silicon wafers (P2 and P3) were thinned in the ML to thicknesses between 100 and 250 μm. All wafers in Table <ref> were diced into sample devices for curving. Sample deformation testing was performed using a ceramic vacuum chuck that was designed and fabricated for this project. It has a porous convex surface with radius of curvature ∼2200 mm and diameter of 75 mm (see the photograph in Figure <ref>). As shown in Figure <ref>, this size is sufficient to hold the 50×40-mm silicon samples and CCDs. The radius of curvature deviates from the intended 2500-mm value for reasons that are under investigation. Samples were tested by simply placing them on the chuck and then applying a modest vacuum of 24 inHg, so that the ambient atmosphere imposes about 80 kPa of pressure; this conforms the sample to the chuck. The samples were inspected visually for buckling, then loaded into a Cyber Technologies CT300 non-contact profilometer and scanned with a 633-nm laser at 1-mm steps to verify spherical conformity of the curved sample. The chuck itself was also scanned with the CT300 to verify conformity. To simulate the use of a silicon mounting mandrel in the curving process, two glass mandrel designs were fabricated, one plano-convex and one plano-concave, with a spherical radius of 2500 mm and diameter of 75 mm. These match the anticipated curvature of the final detectors, and are sufficiently large to hold an entire silicon sample. A schematic of the mandrels is shown in Figure <ref>.
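To put rough numbers on the strain expression quoted above, the minimal sketch below evaluates the two strain terms and the spherical sag for illustrative parameters; the thickness and Poisson's ratio used here are assumptions rather than values taken from the fabricated devices, and the paper's quoted ∼0.003% refers to its specific configuration.

```python
# Minimal numeric sketch of the strain estimate discussed above.
# Parameter values are illustrative assumptions, not the project's exact inputs.
R = 2.5          # radius of curvature [m]
t = 100e-6       # thickness of the deformed body [m] (assumed)
nu = 0.28        # Poisson's ratio of silicon (approximate)
D = 65e-3        # diagonal of the deformed body [m]

bending_strain = t / (2 * R * (1 - nu))      # first-order bending term
membrane_strain = 3 * D**2 / (64 * R**2)     # correction when the sag approaches t
sag = (D / 2)**2 / (2 * R)                   # spherical sag over the diagonal

print(f"bending strain   : {bending_strain:.2e} ({100 * bending_strain:.4f}%)")
print(f"membrane strain  : {membrane_strain:.2e} ({100 * membrane_strain:.4f}%)")
print(f"sag over diagonal: {sag * 1e3:.3f} mm")
```

Both terms come out at the 10^-5 level, i.e. a few thousandths of a percent, consistent with the statement that the deformation stays well within the elastic limit of silicon.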
The mandrels are made of window-grade SCHOTT N-BK7, chosen for its economy and ease of use. For these tests, the optical quality and thermal properties of the glass are immaterial as it is used at room temperature and as a mechanical mold rather than a precise optical element. The concave mandrel was used in some tests to “cap” the samples deformed to the chuck and profiled with the CT300. As with the chuck, both mandrels were also profiled individually with the CT300. While only the vacuum chuck (capped and uncapped with the concave mandrel) was used in the tests accomplished so far, in the future we will test-curve the samples between mandrels. The procedure for this is as follows (see Figure <ref>). First, an aluminum ring is placed around the plano-concave (lower) mandrel to secure it. The silicon sample of thickness t is placed in the center of the mandrel, and shims are placed on top of the mandrel at 90° intervals to protect the sample during this setup stage. These shims have thickness greater than (t+0.212) mm to account for the mandrel curvature and inset of the sample, ensuring that no part of the plano-convex (upper) mandrel contacts the sample while the upper mandrel is slowly lowered and aligned onto the shims. The shims are then smoothly removed around the circumference, allowing the upper mandrel to contact the sample and deform it onto the lower mandrel. Once the shims are removed, the sample is inspected and photographed through the glass mandrels from both sides to check for buckling, and loaded into the CT300 for profilometry. For the P2 150-μm bare silicon sample, a layer of porous Teflon (Porex) was placed between the sample and the vacuum chuck. In the curving of real detectors, the interface will involve the fragile MBE-treated entrance window, which can be easily damaged by the ceramic. The Porex layer is intended to protect the MBE as well as provide an elastic membrane to produce better overall conformity. We expect to include this layer in future testing of samples. We also note that while epoxy will be used to permanently mount the real detectors to a dedicated silicon mandrel, all of the testing described here is dry, without the use of epoxy. §.§ Test results The bare vacuum chuck and glass mandrels show excellent spherical conformance, as shown in Figure <ref>. The glass mandrels in particular have RMS departures <2 μm from perfect smoothness, with a radius of curvature of 2500 mm, as expected. The ceramic vacuum chuck is considerably rougher, with RMS deviations of 10 μm on scales ≤1 mm, the resolution of the scanner. However, the radius of curvature of the chuck is considerably smaller than expected at 2200 mm. The cause of this smaller curvature is under investigation, but as a result the concave mandrel cannot be used to “cap” any samples on the chuck, as the osculating spheres would lead to buckling and other unexpected deformations. For this reason, we do not report results from capped samples in this work. Our initial testing produces good results for spherical curving of silicon samples on the vacuum chuck. Bare silicon samples of 150-μm and 250-μm thickness are deformed spherically to ∼3 μm RMS accuracy, as shown in Figure <ref>. Use of the Porex membrane between the sample and chuck improves the accuracy substantially, smoothing out large-scale (≥5 mm) deviations that are common to all the measurements of un-cushioned samples. The two wafer chip samples produce similar results.
Further improvements will include achieving a stronger vacuum, widespread use of the Porex layer as a cushion, and more thorough cleaning of the samples, chuck, and mandrels. Improvements will also be made in fabrication of the vacuum chuck to meet the specified radius of curvature. § CCD PERFORMANCE TESTING The objective of backside passivation is to meet the demanding requirements for X-ray detection efficiency and spectral resolution imposed on future X-ray missions, particularly at X-ray energies below 1 keV. For example, modern, high-resolution diffraction grating spectrometers, such as those recently proposed for the Arcus Probe mission<cit.> and developed as part of the Lynx mission concept study<cit.>, aim to detect and resolve lines of ionized carbon and oxygen (C VI, O VII and O VIII) at redshifts up to z = 0.3 or more. This requires high sensitivity to X-rays with wavelengths (energies) in the range 1.9 to 4.4 nm (650–280 eV). In addition, because these grating spectrometers achieve high spectral resolving power (R = λ/Δλ ≈ 3,000–10,000) by operating in high diffraction orders (as high as m = 10), the spectral resolution of the readout detector must be sufficient to resolve the energies of overlapping orders, which requires a detector fractional energy resolution of order 1/m<cit.>. In the case of wide-field imaging instruments, the need to detect X-ray sources in the early Universe at redshifts as high as z ≈ 10 also imposes stringent demands on X-ray response at the extreme low-energy limit of the passband, 150–200 eV. Here, good detection efficiency requires not only good low-energy quantum efficiency, but also good spectral resolution to ensure that most of the charge deposited by the lowest-energy photons is recovered and distinguished from any noise peak. Understanding the effects of the curving process on CCD performance requires a detailed understanding of the baseline performance of uncurved CCDs. Toward this end, we have tested two 50-μm-thick back-illuminated CCID94 devices fabricated by MIT/LL as part of a different project. While not the same thickness as the CCDs that will be curved, the results provide a useful reference and guidance for future testing of 100-μm-thick planar and curved CCDs. We here present results from one device, given the designation CCID94L1W19C5 within the MKI group. Additional results have been previously presented<cit.> and are presented elsewhere in these proceedings<cit.>. The test facilities at MKI include dedicated vacuum chambers for each type of CCD under testing, each equipped with a liquid-nitrogen cryostat for thermal control to temperatures below -100°C. Each setup includes an Archon[<http://www.sta-inc.net/archon>] controller that provides CCD bias and clock voltages and performs digital sampling on the CCD analog video waveform, incorporating an interface board that connects to a custom-made vacuum feed-through board and detector board for each CCD. The lab detector package and boards for each CCD were designed by MIT/LL. Each chamber allows easy insertion of a radioactive ^55Fe source for reliable full-frame illumination with Mn Kα (5.9 keV) and Kβ (6.4 keV) X-rays. The CCID94 setup can incorporate a ^210Po source with a Teflon target that produces fluorescence lines of C K (0.27 keV) and F K (0.68 keV) across the full 50×25 mm imaging area.
For some testing presented here, the CCID94 chamber was mounted on an In-Focus Monochromator (IFM)<cit.> that uses grazing incidence reflection gratings to produce clean monochromatic lines at energies below 2 keV, with typical spectral resolving power λ/Δλ = E/Δ E ∼ 60–80, far higher than that of the CCD itself. We also mounted the chamber in the MKI polarimetry lab beamline<cit.> for testing in a different environment, providing additional measurements at C K and O K (0.53 keV). A photo of the packaged CCD mounted in its test chamber is shown in Figure <ref>. All test data is acquired as a set of full pixel frames that are processed in a similar way to flight data on Chandra. Briefly, each image is corrected for pixel-to-pixel and time-dependent bias levels. Persistent (“hot”) pixels are masked, and local maxima above a defined event threshold (5–8 times the RMS noise) are identified. Pixel islands around these maxima are searched for pixels above a second “split” threshold (3–4 times the RMS noise), and all such pixels are summed to produce the event pulse height, a measurement of the incident photon energy. This summed pulse height is denoted “allaboveph” in several figures in this work. Pixel islands are 3×3 for the CCID94 devices with 24-μm pixels. Each event is assigned a “multiplicity”, akin to the “grade” on ASCA, Chandra, and Suzaku, but here simply encoding the number of pixels “n#” that are above the split threshold. Basic performance of the planar CCID94 is excellent, as can be seen in Figure <ref>. The detector exhibits low noise that is uniform across the eight outputs and varies smoothly with temperature at a readout speed of 0.5 MHz, corresponding to a ∼0.5-s frame time. The gain is also uniform and well behaved with temperature below -20C, where dark current becomes low. We illuminated our test CCID94 device with ^210Po with Teflon target and ^55Fe simultaneously to produce a broad-band spectrum from a single representative segment (see Figure <ref>). The primary fluorescence lines of F K, Mn Kα, and Mn Kβ can be seen along with several other fluorescence and escape features. C K is not visible here due to a low-energy noise peak that is unrelated to the detector or source. After fitting and subtracting a power-law model to this noise continuum, we measure the spectral resolution at F K (0.68 keV) to be 66 eV FWHM for all event multiplicities for this segment, with little variation from segment to segment. There is a non-Gaussian tail, which remains under investigation but could result from backside surface losses. This tail contains less than 10% of the counts for all segments, and its segment-to-segment uniformity indicates it can be accounted for using a standard redistribution matrix. A similar tail is seen in our testing of another MIT/LL device, the CCID89, which has the same backside treatment, depletion depth, and pixel pitch as the CCID94.<cit.>. The response at ∼6 keV is excellent for all nodes, averaging 135 eV FWHM for all event multiplicities. Results are summarized in Table <ref>. Illumination with C K and O K fluorescence lines on the polarimetry beamline produced sufficient flux to overcome the low-energy noise peak, although only four of the eight segments were properly illuminated. We measured excellent response at 0.27 and 0.53 keV, as shown in Figure <ref> and Table <ref>. 
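As a concrete illustration of the event-reconstruction steps described above (bias subtraction, hot-pixel masking, event and split thresholds, 3×3 island summation, and multiplicity), the sketch below shows one possible per-frame implementation; the function name, threshold values, and data layout are illustrative assumptions and do not reproduce the MKI analysis code.

```python
import numpy as np

def find_events(frame, bias, read_noise, event_sigma=6.0, split_sigma=3.5,
                hot_pixel_mask=None, island=3):
    """Toy single-frame X-ray event finder (illustrative, not the MKI pipeline).

    frame, bias : 2-D arrays of raw and bias pixel values (same units)
    read_noise  : RMS read noise in those units
    Returns a list of (row, col, summed_pulse_height, multiplicity).
    """
    img = frame.astype(float) - bias              # remove pixel-to-pixel bias
    if hot_pixel_mask is not None:
        img[hot_pixel_mask] = 0.0                 # mask persistent "hot" pixels

    event_thresh = event_sigma * read_noise       # e.g. 5-8 x RMS noise
    split_thresh = split_sigma * read_noise       # e.g. 3-4 x RMS noise
    half = island // 2
    events = []
    rows, cols = img.shape
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            local = img[r - half:r + half + 1, c - half:c + half + 1]
            # keep only local maxima that exceed the event threshold
            if img[r, c] < event_thresh or img[r, c] < local.max():
                continue
            above_split = local >= split_thresh
            pulse_height = local[above_split].sum()   # summed "allaboveph" value
            multiplicity = int(above_split.sum())     # number of pixels above split ("n#")
            events.append((r, c, pulse_height, multiplicity))
    return events
```

The summed pulse height plays the role of the photon-energy estimate, and histogramming it per output segment yields spectra like those discussed for the F K and Mn K lines.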
§ SUMMARY Future X-ray imaging missions for astrophysics will require high spatial resolution over a wide field of view; however, the cylindrical grazing-incidence optics employed produce a curved focal surface. To take full advantage of future advanced optics, we have begun a program to demonstrate that solid-state silicon imagers can be curved to match this surface, thereby reducing complexity with no effect on performance. Using bare silicon samples and wafers that have undergone metal layering, oxide bonding to a support wafer, and backside thinning, we show that our vacuum deformation technique is capable of creating a ∼2.5-m radius of curvature and meeting the required 5-μm RMS conformance. Use of a porous Teflon cushion improves the conformance substantially. We have also tested the baseline performance of planar CCDs to compare to the eventual curved functional devices, and report excellent X-ray response. Future work will involve additional sample testing with improved cleanliness and correct figuring of the vacuum chuck as we move toward curving functional back-illuminated CCDs. We are also currently designing the package for the curved CCD, which requires substantial alteration from the flat detector package to incorporate the ∼7-mm-thick silicon mandrel. We anticipate completing the curving and testing of our first functional CCDs within the next year. We gratefully acknowledge support from NASA through Astrophysics Research and Analysis (APRA) grant 80NSSC22K0788.
http://arxiv.org/abs/2406.17987v1
20240626000045
Multi-step Knowledge Retrieval and Inference over Unstructured Data
[ "Aditya Kalyanpur", "Kailash Saravanakumar", "Victor Barres", "CJ McFate", "Lori Moon", "Nati Seifu", "Maksim Eremeev", "Jose Barrera", "Eric Brown", "David Ferrucci" ]
cs.CL
[ "cs.CL", "cs.AI" ]
The advent of Large Language Models (LLMs) and Generative AI has revolutionized natural language applications across various domains. However, high-stakes decision-making tasks in fields such as medicine, law and finance require a level of precision, comprehensiveness, and logical consistency that pure LLM or Retrieval-Augmented-Generation (RAG) approaches often fail to deliver. At Elemental Cognition (EC), we have developed a neuro-symbolic AI platform to tackle these problems. The platform integrates fine-tuned LLMs for knowledge extraction and alignment with a robust symbolic reasoning engine for logical inference, planning and interactive constraint solving. We describe Cora, a Collaborative Research Assistant built on this platform and designed to perform complex research and discovery tasks in high-stakes domains. This paper discusses the multi-step inference challenges inherent in such domains, critiques the limitations of existing LLM-based methods, and demonstrates how Cora's neuro-symbolic approach effectively addresses these issues. We provide an overview of the system architecture, key algorithms for knowledge extraction and formal reasoning, and present preliminary evaluation results that highlight Cora's superior performance compared to well-known LLM and RAG baselines. § INTRODUCTION With the emergence of Large Language Models (LLMs) and Generative AI, there is an enormous interest in building natural language applications for a wide variety of use-cases across multiple domains. Gen-AI is being leveraged in solutions ranging from conversational web search and enterprise search engines, to chat-bots for customer service, retail, travel, insurance, etc. There is a class of high-stakes decision-making applications that require performing accurate, detailed and well-rationalized research to evaluate and justify complex hypotheses. Such applications include Life Science and Medical research for drug discovery and Macro-economic analysis for investment research. These use-cases are challenging to tackle using pure LLM or even Retrieval-Augmented-Generation (RAG, which is LLM+search) based approaches, due to the need to be precise, thorough and logically consistent. In particular, the use-cases have the following characteristics: * There are highly adverse effects of being wrong – in some cases, lives are at stake; in others, there is a risk of losing enormous time, money and resources pursuing a wrong path. The explicability and transparency of the system are therefore critical, as decision makers need strong and precise evidence to justify their actions. * The problems involve exploring and evaluating complex research hypotheses where the answers and evidence are often not specified in a single source or document; instead, relevant information is spread across multiple datasets, which need to be pieced together. * The underlying data is a mix of large unstructured text corpora (e.g., PubMed[https://pubmed.ncbi.nlm.nih.gov/], Financial news articles) and structured data, ontologies and knowledge graphs (e.g., Gene Ontology <cit.>). Combined with the above point, it means we need to extract knowledge (key concepts and relationships) from unstructured data, link it to relevant structured data, and build a unified knowledge source to help connect the dots when exploring hypotheses.
* Finding refuting evidence is as crucial as supporting evidence in order to provide a fully balanced view of the problem, and avoid confirmation bias. * Developing a strong understanding of the problem space and building sufficient confidence in the solution requires causal and logical inference over multiple inter-dependent causal factors and linkages. We need to develop reasoning strategies that help users probe and clarify assumptions, detect and explain contradictions in the data, do probabilistic analysis weighing supporting and refuting evidence appropriately, and support counterfactual reasoning (What-if analysis). At Elemental Cognition (EC), we have built solutions to tackle these problems that combine fine-tuned LLMs (and in some cases, smaller transformer based pre-trained LMs that are more optimal for a given task) for knowledge extraction, alignment and fluent NL generation, with our multi-strategy Symbolic Reasoning Engine for precise logical reasoning and constraint solving. The technologies we have built (components, models, APIs) are part of our Neuro-Symbolic AI platform (described in Section <ref>). To showcase the platform's potential, we have developed an application called Cora (Collaborative Research Assistant) that uses various platform APIs for knowledge retrieval, synthesis and reasoning, and is designed to perform complex research and discovery in high-stakes problem domains. In this paper, we start by describing the multi-step inference problems and discuss the challenges a pure statistical/LLM-based approach faces when solving them. We then show how EC's Cora resolves these challenges using a neuro-symbolic approach. We provide an overview of the underlying technology and system architecture, and highlight key algorithms for knowledge extraction and formal reasoning. Finally, we evaluate our system against well-known LLM and RAG baselines on the multi-step inference problems to demonstrate their value. § MULTI-STEP INFERENCE USE-CASES §.§ Life Science Research: Drug Discovery and Re-purposing A biopharma company can spend over $2 billion to take a drug from initial discovery to approved use in the market <cit.>. Identifying high-quality drug targets at the beginning of this process is essential both to increase the chance of success and to reduce the downstream costs. Comprehensive and fast literature review at the earliest stages is a key ingredient of efficient and effective target identification. Key sources for this literature review include, among others, peer-reviewed research articles available through PubMed, information on clinical trials and outcomes, patents related to the disease and potential drug targets, and NIH grant awards. The challenge is effectively exploring and validating research hypotheses by finding and connecting all of the relevant bits of information. Several issues confront researchers in this process. First, the well known problems of language synonymy and polysemy are further exacerbated in Life Sciences, where the terminology is constantly growing as new discoveries are made, confounding manual efforts to curate ontologies. Second, exploring a drug target hypothesis typically requires connecting multiple pieces of information that describe different elements of a complex biological pathway. Since scientists are typically exploring novel hypotheses, there is no single source in the literature that provides the overall answer. 
Instead, the scientist must tediously find evidence spread across multiple sources for each component of the pathway. Third, every hypothesis, or sub-component of that hypothesis, may have a variety of published results, some of which support the hypothesis, and some of which refute that hypothesis. Moreover, the researcher must consider the veracity of any single piece of evidence in support or refutation of a claim, which is usually a product of the prestige of the publication, the reputation of the authors and their institution, and the quality of the study or experimental methodology described in the evidence. On the surface, all of these challenges sound like ideal candidates for an LLM solution. An LLM, however, provides only part of the solution. Figure <ref> shows an example of using ChatGPT[https://chat.openai.com/] (as of Apr 2024) for medical research. The example question is about exploring a potentially multi-hop relationship between Rheumatoid Arthritis (RA) and the inhibition of a particular kinase called IRAK4. As shown in the figure, there are four main classes of problems with this purely LLM-based approach: the inability to control the search or ranking process, since the LLM is a black-box that is not grounded to a specific corpus where one might apply filtering or ranking criteria; the inability to validate results without cross-checking the references (which defeats the purpose of an efficient research solution); hallucinated references which undermine credibility of the approach; and the "needle-in-the-haystack" problem as valid answers that are not popular in the training data are unlikely to be surfaced by the LLM. As a result of these issues, the community has essentially migrated to a Retrieval Augmented Generation (RAG) approach for doing more precise and comprehensive search, where an LLM is combined with an Information Retrieval engine (search system) to produce answers that are grounded in a domain corpus, and are more up-to-date (beyond the training period of the LLM). There are several RAG based solutions that exist in the current marketplace, from general-purpose (web-based) search/answer generating engines like Perplexity[https://www.perplexity.ai/], to domain-specific research solutions like Elicit[https://elicit.com/]. Since our focus area is on deep domain-specific research, we use Elicit as a baseline when doing Life Science research. Figure <ref> shows an answer to the same question with Elicit. Elicit produces an answer from the top 8 papers, and hence suffers from recall issues. Moreover, the answer is fairly shallow without providing a detailed causal understanding of the main biological linkages. Contrast this with the answer to the same question produced by our research assistant Cora, as shown in Figure <ref>. Cora's approach is radically different in that it uses a general research template for connecting the dots between the two concepts of interest (RA and IRAK4 inhibitor), and then instantiates this template with specific bindings (answers) for concepts based on the inter-connected linkages. The research template is automatically induced from the data using our knowledge extraction algorithms and then further refined by a domain expert (the expert can directly specify a template as well). A single research template can be repurposed for multiple use-cases that involve the same kinds of concepts – in this case, a link between any disease and kinase inhibitor, not just RA or IRAK4 inhibitors. 
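For illustration only, a research template of the kind described above can be thought of as a typed path pattern that is later instantiated against the extracted knowledge; the class and relation names below are hypothetical stand-ins, not Cora's actual schema.

```python
# Hypothetical sketch of a reusable research template: a typed chain of
# relationships between a disease and a kinase inhibitor. All names are
# illustrative, not Cora's internal representation.
DISEASE_KINASE_TEMPLATE = [
    # (subject type, relation, object type)
    ("Disease",           "involves_pathway", "BiologicalPathway"),
    ("BiologicalPathway", "mediated_by",      "Kinase"),
    ("Kinase",            "inhibited_by",     "KinaseInhibitor"),
]

def instantiate(template, bindings):
    """Fill the typed slots of a template with concrete concepts."""
    return [(bindings.get(s, s), rel, bindings.get(o, o)) for s, rel, o in template]

# Reusing the same template for the RA / IRAK4 question:
print(instantiate(DISEASE_KINASE_TEMPLATE,
                  {"Disease": "Rheumatoid Arthritis",
                   "Kinase": "IRAK4",
                   "KinaseInhibitor": "IRAK4 inhibitor"}))
```

Because only the types are fixed, the same pattern can be re-bound to any disease and any kinase inhibitor, which is what makes the template reusable across use-cases.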
The final answer produced by the system is based on the entire causal map and contains detailed evidence for each of the linkages with reliable citations. §.§ Macro-Economic Analysis: Multivariate Causal Inference Consider the following macro-economic research problem: you are given a scenario that describes the state of a given economy, and the task is to understand the impact on a particular target concept. For example: "If growth is falling in an Emerging Market country, and the country is facing high inflation, what is the likely impact on nominal bond yields?" As can be seen, the scenario contains multiple economic factors (falling economic growth, high inflation) that are present in a given context (Emerging Market country), and a good solution to this problem requires mapping out the relevant causal linkages between these factors and the target concept (nominal bond yields), and performing causal inference to net out the influences. We asked the above question directly to GPT4 and refined the prompt to get the model to analyze both upward and downward pressures on the target concept. The output is shown in Figure <ref>. Apart from the fact that the answer does not include reliable references or data (for the reasons described in the previous section), the LLM also makes fundamental reasoning mistakes. In particular, it conflates mutually-exclusive conditions (that cannot be true at the same time), and asserts the possibility of both "canceling each other out", which is logically invalid. Moreover, it assumes independence between highly inter-dependent factors, as shown in the example. While using a RAG-based GPT approach would help ground the results in a corpus and improve the reliability of the evidence, doing precise logical reasoning is still a capability that LLMs (even ones as powerful as GPT4) struggle with. We adopt a neuro-symbolic approach to solving this problem. Instead of using a purely RAG-based approach for doing search and reasoning, we first use a multi-step graph search algorithm to identify relevant causal linkages in the text, and dynamically build a comprehensive causal map where each link is substantiated with evidence from the corpus. We then feed this causal map to a symbolic reasoning engine to propagate and reason over the causal influences considering the correlations and weights of various factors. Cora leverages these technologies to do causal inference over unstructured data. Figure <ref> shows the causal map extracted by Cora for the earlier question. Cora's answer and structured explanation (shown on the left of the figure) are generated from the causal map and describe the dominant themes at play, with the underlying causal chains. The extracted causal map is a fully executable logical model, where the user can refine any part of the graph structure - e.g. drop edges, merge nodes, force the values of specific nodes (based on known facts or hypothetical scenarios), alter edge weights (based on domain knowledge), etc. - and then re-run inference to compute the effects of the changes. In this manner, Cora supports interactive, precise counterfactual reasoning and What-if analysis. As a next step, we are exploring connecting the causal map extracted from theory with real economic time-series data to make more informed statistical predictions.
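To make the idea of an executable causal map concrete, the toy sketch below propagates signed, weighted pressures along causal edges and nets them at the target node; it is a simplified stand-in for illustration only, not the Cogent reasoning engine, and the edges and weights shown are assumptions invented for the example.

```python
# Simplified illustration of netting causal influences over a signed,
# weighted causal map (assumed edges and weights; not EC's actual model).
CAUSAL_EDGES = {
    # (cause, effect): (sign, weight); sign +1 = direct, -1 = inverse
    ("falling growth", "policy rate"):         (-1, 0.6),
    ("policy rate", "nominal bond yields"):    (+1, 0.8),
    ("high inflation", "nominal bond yields"): (+1, 0.9),
    ("falling growth", "nominal bond yields"): (-1, 0.4),
}

def net_influence(target, observed, edges):
    """Sum weighted, signed pressures on `target` from observed factors,
    following chains of at most two hops (a deliberately crude depth limit)."""
    total = 0.0
    for factor, value in observed.items():
        for (cause, effect), (sign, w) in edges.items():
            if cause == factor and effect == target:
                total += value * sign * w
            elif cause == factor:
                nxt = edges.get((effect, target))  # one intermediate hop
                if nxt:
                    total += value * sign * w * nxt[0] * nxt[1]
    return total

pressure = net_influence("nominal bond yields",
                         {"falling growth": 1.0, "high inflation": 1.0},
                         CAUSAL_EDGES)
print("net pressure on nominal bond yields:",
      "upward" if pressure > 0 else "downward", round(pressure, 2))
```

In the real system the user can drop edges, re-weight them, or pin node values and re-run the inference, which is what enables the interactive what-if analysis described above.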
§ NEURO-SYMBOLIC AI PLATFORM In the previous section, we described challenges faced by LLM/RAG based approaches when tackling complex causal question answering problems that involve multiple factors and pieces of evidence and the Cora application we built to tackle these issues. In this section, we describe our AI platform used to build Cora, the analytics pipeline that constructs the semantic indices (KBs) from unstructured text, and highlight how symbolic reasoning is used to produce answers. §.§ High-Level Architecture As depicted in Figure <ref>, at the heart of the EC AI platform is a general-purpose symbolic multi-strategy reasoning engine that is based on Answer Set Programming <cit.>, and uses and builds on solvers such as Clingo <cit.>. The reasoning engine performs the key function of logical reasoning including causal, deductive, abductive and non-monotonic inference as well as multi-objective constraint optimization based on given rules and facts. It supports interactive and incremental reasoning, involving the user to address knowledge gaps, resolve ambiguities, and make knowledge updates to the model on-the-fly, providing detailed explanations of the model's analysis. In addition to these core interactive reasoning capabilities, the EC AI platform integrates with LLM-powered interfaces to facilitate knowledge acquisition and user interactions through fluid natural language interactions. We support two kinds of knowledge acquisition – semi-automated expert guided authoring for small-medium sized domain models, and fully-automated knowledge extraction from large domain corpora. The former uses our proprietary Knowledge Representation language known as Cogent, and is the focus of the work described in <cit.>. In this paper, we focus on our capabilities for the latter use-case. Figure <ref> shows the solution architecture for Cora. There are two phases of the system - during the offline domain knowledge extraction phase, the system ingests a text corpus using EC’s Natural Language Understanding (NLU) pipeline (more on this in the next section), extracts domain concepts and relationships in the text, and stores the resultant structured information along with text embeddings (vectors) for concepts and passages in a semantic index (“Domain KB”). At runtime (i.e. online phase), the system is given a user scenario, and it uses a Query Interpretation Model (a fine-tuned LLM) to process the input and pull out salient concepts and relationships in the scenario question. This information is passed to the Evidenced Graph Builder, which uses an iterative, multi-step graph search algorithm to find relevant relationships and chains from the Domain KB that are applicable to the input scenario (alternately, it might retrieve an applicable pre-saved research template when relevant). The algorithm fleshes out a relationship/causal graph connecting input concepts in the scenario to the target/query concept, where each link is sourced from the theory texts. The Evidenced Graph builder is also fed the QA/Inference Meta-model as one of its inputs. This is an abstract logical meta-model for doing causal inference and question answering, and provides the logical scaffolding (or grounding) for the extracted concepts and relationships. It is specified using our proprietary Cogent language. 
The meta-model draws on an established causal reasoning framework, Qualitative Process Theory, as well as its recent applications to knowledge graph extraction, as a starting point to formalize concepts such as Quantities, States, and the causal influences that propagate between them <cit.>. The output of the Evidenced Graph Builder is an instance of this meta-model that is specialized to the scenario concepts and the extracted causal linkages. This model is then executed using the Cogent Reasoning Engine (RE) to derive inferences based on the specific connections in the map. The Result Generation module uses the Cogent RE along with fine-tuned LLMs to provide various functionalities, ranging from single/multi-hop QA to topic-based summarization and facet generation. The latter two features are beyond the scope of this paper. Finally, the system allows users to save the analyzed and reasoned-over knowledge graph results, which can be reused for future scenario analysis. §.§ Knowledge Extraction using Statistical Models The NLU Ingestion Pipeline is used to process a text corpus and extract domain knowledge. At EC, we have designed a general purpose Meaning Representation Schema to capture knowledge. The schema is centered around the notion of contextual Relationships or Events, where each relationship is characterized by its subject and object concepts, along with qualifiers that specify contextual information about time, space, manner, purpose etc. Additionally, concepts and relationships are arranged in a hierarchy to support taxonomic reasoning. Our schema is inspired by KR formalisms such as AMR <cit.>, FrameNet <cit.>, PropBank <cit.>, etc., but is designed to be leaner and more concise to aid generality. Figure <ref> shows examples of events extracted from sentences in two different domains - medical and finance. In the medical example, given the sentence shown, we extract the relationship “activates" between “IL-33" and “NF-Kb", along with the qualifier manner: “binding with ST2 receptor". Moreover, we also extract type information for the concepts taking the context into account when disambiguating its meaning – in this case, “IL-33" is an instance of “Cytokine", while “NF-Kb" is an instance of a “Signalling Pathway". This rich event structure lets us answer questions such as: “Which cytokine activated a signalling pathway and how was it done?". The same holds for the financial example, since the schema is domain independent. Our process of Ontology Induction involves extracting rich event (relational) structures as shown in the figure, and the underlying type hierarchy from the text, and we do this in a fully unsupervised manner - i.e. with no manual training data. Additionally, we perform Entity Linking (similar to <cit.>), in order to link the induced concepts from the text with entities in external knowledge bases and ontologies. For this problem, we have developed a transformer-based architecture called LUMEN that we use for concept typing, entity linking and relationship extraction, along with a fully automated domain-adaptation process that leverages our own fine-tuned LLMs to generate synthetic training data. The framework uses SLMs (Small Language Models such as Gemma-2B <cit.>) for increased throughput (sub-200ms latency per passage) without compromising on quality, by using high quality training data and novel contrastive loss functions. Details of this framework will be provided in a forthcoming technical publication. 
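As a concrete, purely illustrative rendering of the event structure just described, the medical example could be captured roughly as follows; the field names are hypothetical and simplified relative to EC's actual Meaning Representation Schema.

```python
# Illustrative encoding of the extracted event for the IL-33 sentence;
# field names are hypothetical, not EC's schema itself.
il33_event = {
    "relation": "activates",
    "subject": {"mention": "IL-33", "type": "Cytokine"},
    "object":  {"mention": "NF-Kb", "type": "Signalling Pathway"},
    "qualifiers": {"manner": "binding with ST2 receptor"},
    "provenance": {"doc_id": "example-passage", "sentence_index": 0},
}

def answers(event, relation, subject_type, object_type):
    """Check whether an event answers a typed question such as
    'Which cytokine activated a signalling pathway, and how?'"""
    return (event["relation"] == relation
            and event["subject"]["type"] == subject_type
            and event["object"]["type"] == object_type)

print(answers(il33_event, "activates", "Cytokine", "Signalling Pathway"))  # True
print("manner:", il33_event["qualifiers"]["manner"])
```

Because the schema is domain independent, the same structure would hold for the financial example, with only the concept types and relation labels changing.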
§.§ Multi-hop QA and Explanations using Symbolic Reasoning §.§.§ Cogent: KR Language and Meta-Model Cogent is EC's proprietary Knowledge Representation language. It can be used to formally define conceptual theories in a form of structured English. Its underlying formalism is based on Answer Set Programming and supports term definitions, rules, (hard/soft) constraints and objective functions. Additional details of Cogent are in <cit.>. As mentioned earlier, we have defined a general meta-model for causal inference in Cogent, which provides the logical grounding for terms and relationships, and facilitates reasoning via the Cogent-RE. Figure <ref> shows snippets of our Cogent meta-model for Causal Inference, based on Qualitative Process Theory (QPT). QPT provides a robust framework for understanding the dynamics of continuous systems through the propagation of qualitative values (e.g., increasing, decreasing, high, low...), rather than relying solely on numerical data. This makes it applicable across diverse fields where under-specified values are prevalent, such as economics, medicine, geopolitics, and cybersecurity. Under QPT, quantities are causally influenced by processes, and the effects of that influence propagate between quantities. As an example, in a heat flow process, the heat transfers from a hot to a cold object. As the cold object heats up, that may cause subsequent changes (e.g., maybe it becomes more malleable) <cit.>. Additionally, <cit.> took inspiration from QPT's quantity-to-quantity propagation in their work by annotating causal models in natural language. Like them, our representation draws inspiration from QPT's influence mechanism between quantities, but we further expand our approach to include the notion of "States" and the "Triggers" causal relationship. In economics, Quantities include variables like GDP and interest rates, while States might describe those quantities at a specific value such as high inflation or inciting events like the imposition of tariffs. Extending to medicine, Quantities could encompass fluctuating metrics like blood pressure or cholesterol levels, with States representing medical conditions such as diabetes or stages of cancer remission. Similarly, in cybersecurity, Quantities include metrics like the number of system intrusions or data transfer rates, and States could refer to the security status of systems or the occurrence of breaches. This framework allows for a nuanced analysis of how various factors interact within and across these fields, providing insights into how changes in one area can influence outcomes in another, thereby offering a comprehensive view of causal dynamics in complex environments. In our meta-model, Influences describe the causal relationship between two quantities, which can be either direct or inverse, indicating how one quantity affects another. For instance, in medicine, an increase in medication dosage might influence the reduction of symptom severity. In geopolitics, a rise in military expenditure might inversely influence the economic stability of a country. Triggers, however, define the causal relationships between states or between a state and a quantity, highlighting how certain states can act as tipping points, initiating changes in other states or quantities. For example, in cybersecurity, the detection of a new malware type (a State) might trigger an increase in security protocol updates (a Quantity).
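A minimal sketch of the Quantity/State/Influence/Trigger vocabulary just described is given below; it is written in plain Python purely for illustration (the actual meta-model is expressed in Cogent, not Python), and the propagation rule is deliberately simplistic.

```python
from dataclasses import dataclass

# Illustrative rendering of the QPT-style vocabulary; names and behavior are
# a sketch, not the Cogent meta-model.
@dataclass
class Quantity:           # e.g. GDP, blood pressure, number of intrusions
    name: str
    direction: int = 0    # +1 increasing, -1 decreasing, 0 unknown

@dataclass
class State:              # e.g. "high inflation", "new malware type detected"
    name: str
    active: bool = False

@dataclass
class Influence:          # quantity -> quantity, direct (+1) or inverse (-1)
    src: Quantity
    dst: Quantity
    sign: int

@dataclass
class Trigger:            # state -> quantity tipping point (state-to-state omitted)
    src: State
    dst: Quantity
    effect: int           # direction imposed on dst when src is active

def propagate(influences, triggers):
    for t in triggers:
        if t.src.active:
            t.dst.direction = t.effect
    for inf in influences:
        if inf.src.direction != 0:
            inf.dst.direction = inf.src.direction * inf.sign

# Cybersecurity example from the text: a detected malware type (State)
# triggers an increase in security protocol updates (Quantity).
malware = State("new malware type detected", active=True)
updates = Quantity("security protocol updates")
propagate([], [Trigger(malware, updates, +1)])
print(updates)
```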
This structured approach provides a deeper understanding of the mechanisms driving changes within dynamic systems and allows us to use Cogent to interactively reason about what-if scenarios, providing valuable insights into complex causal interactions in various domains. §.§.§ Evidenced Graph Building and Symbolic Reasoning As mentioned in Section <ref>, the Evidenced Graph Builder's goal is to find multi-hop relational and causal chains connecting input concepts in the scenario to the query or target concepts. This problem can be cast as a graph search or path-finding problem, where each link in the path is a causal relationship, the source nodes are the input concepts, and the target node is the query concept. Here, the aim is not to find the shortest causal path from source to target, but instead to find all dominant and contextually relevant paths linking the source and target nodes based on the domain knowledge. Longer paths that reveal more details are preferred as they help build a better causal understanding of the scenario. Our solution is based on the A* Search algorithm <cit.>, which runs a forward-backward search routine (i.e. forward from the source nodes, backward from the target) and uses an LLM (in particular, its intrinsic world knowledge) as the search heuristic to estimate which paths are likely to connect up from both sides. The algorithm also has plugin points for automatically inserting relevant user knowledge from prior saved causal maps. As part of producing the map, the builder maps the specific relations (predicates) in the event structures stored in the Domain KB (described in Section <ref>) to the higher-order relationships such as "influence" and "trigger" in the Cogent Meta-model. The final knowledge graph produced by the builder, which is an instantiation of the meta-model, is an executable logic program that is fed to the Cogent RE. The result of reasoning is fed to an LLM to produce the final answer and explanation from the detailed logical proof traces. § PRELIMINARY EVALUATION Our goal is to answer the following question: Given a complex research query that involves multiple concepts and relationships, how accurately and comprehensively can an AI system provide detailed answers and explanations that consider relevant intermediate links and include citations for supporting and/or refuting evidence? We report results of a preliminary evaluation conducted in the medical domain. Further experiments are ongoing and we will share updated results and the dataset when this is completed. §.§ Medical QA Eval We collected 25 queries based on real questions from experts in the medical research domain. The queries were evaluated using four systems: GPT4-Turbo (a state-of-the-art LLM), Perplexity (RAG using web-search), Elicit (RAG using Semantic Scholar for doing scientific research) and Cora. All four systems mentioned above were run on the questions and we asked each system to produce an answer with supporting/refuting evidence and cited sources. Since we do not have ground truth answers and explanations, we asked the domain experts to manually verify each of the systems' results, and check that the generated answer/explanation justifies its claims, is both accurate and relevant, and that the sources it cites actually exist. We have the following metrics: * Claim Density - average number of claims per answer - a measure of the quantity of information provided. 
* Citation Density - average number of real citations per claim - a measure of the amount of verification options. * Source Hallucination Rate - percentage of citations that are not valid (real) and scholarly sources - a measure of system hallucination. * Citation Rate - percentage of claims in the answer that are accompanied by real citations - a measure of verifiability of the answer. * Justification Rate - percentage of claims that are a correct paraphrase of a real citation - a measure of interpretation quality. Claims with non-existent sources are not justified as they are unverifiable. Since checking this requires manual effort, we imposed a max time-limit of 5 minutes on the domain expert to verify each claim. * Relevance Rate - percentage of claims that are justified and relevant to answering the question - a measure of relevance and quality of answer. Results are shown in Table <ref>. Note that the metrics from 4-6 get progressively stricter, as a justified claim must also be cited, and a relevant claim must also be justified. §.§ Discussion Across the four systems evaluated for the queries, we find that Cora and Perplexity are the only two systems that reliably cite articles that exist. Sourcing claims in real evidence is crucial in the medical domain, as there is minimal room for error and misinformation with experts facing decisions that are high-stakes. It is particularly worth noting the Source Hallucination Rate of GPT-4 Turbo in Table <ref> with almost every other article being hallucinated. Even though both Cora and Perplexity cite real articles 100% of the time, Perplexity only cites a few articles for a few of its claims, as evidenced by the low Citation Density and Citation Rate values. For claims in the tools' answers that are derived from valid sources, it is important to know if the information as it is represented in the answer is justified by the information in the source. This is measured by the Justification Rate metric. These metrics together make it evident that Cora provides the most reliable and verifiable claims. This is attributed to Cora's precise and granular evidence-based answer generation algorithm. Additionally, it is important that a researcher is able to quickly verify the consistency across the answer and the source, something they can do easily in Cora where relevant snippets (highlighted paragraphs) from the paper are directly evidenced. All the other tools require the expert to read the entire paper to find the precise evidence. Claim Density measures the quantity of information presented in the answer, and Cora provides the most comprehensive answers across the compared systems. Finally, the metric that evaluates the usefulness of an answer to a researcher/expert is the Relevance Rate, which is a obtained by considering how many of the total justified claims of a given answer are labeled as “relevant" by an expert. The measure of relevance here was lenient as relevance can be subjective depending on the expert. Results show that Cora provides the most relevant and comprehensive answers that are backed by real evidence and represents the cited evidence well compared to the other tools. § CONCLUSIONS The past two years have seen unprecedented excitement and investment in AI. 
Effectively leveraging LLMs and Generative AI is becoming a business imperative across every major industry, where business leaders are motivated by both seeking competitive advantages with more automation and intelligence, and fear of being left behind if they fail to effectively leverage AI. For modern AI to deliver on these enormous expectations, solutions must meet a wide variety of critical requirements, not the least of which is reliable and accurate answers users can trust to make critical business decisions. At Elemental Cognition our mission is to deliver on the promise of AI with technology that provides more accurate, relevant, and verifiable answers and solutions to complex business problems. To this effect, we have developed a neuro-symbolic AI platform that combines two powerful complementary technologies - statistical language understanding machines (LLMs) and symbolic reasoning engines. The former is used to extract, formalize and translate knowledge from text, while the latter is used to precisely analyze, reason and explain answers. We have described our approach and architecture in detail and presented experimental results validating its performance. Our results show that our solution delivers best in class performance for tackling multi-step causal inference problems on unstructured data when compared to pure LLM or state-of-the-art Retrieval Augmented Generation systems. We continue to evaluate our approach across a broader set of more complex problems and see promising results that we will share in the near future.
http://arxiv.org/abs/2406.18417v1
20240626151115
Towards diffusion models for large-scale sea-ice modelling
[ "Tobias Sebastian Finn", "Charlotte Durand", "Alban Farchi", "Marc Bocquet", "Julien Brajard" ]
cs.LG
[ "cs.LG", "physics.ao-ph" ]
[ Towards diffusion models for large-scale sea-ice modelling equal* Tobias Sebastian Finnenpc Charlotte Durandenpc Alban Farchienpc Marc Bocquetenpc Julien Brajardnersc enpcCEREA, Ecole des Ponts and EDF R&D, Ile-de-France, France nerscNansen Environmental and Remote Sensing Center, Bergen, Norway Tobias Sebastian Finntobias.finn@enpc.fr Generative diffusion, Earth system modelling, sea-ice simulations 0.3in ] § ABSTRACT We make the first steps towards diffusion models for unconditional generation of multivariate and Arctic-wide sea-ice states. While targeting to reduce the computational costs by diffusion in latent space, latent diffusion models also offer the possibility to integrate physical knowledge into the generation process. We tailor latent diffusion models to sea-ice physics with a censored Gaussian distribution in data space to generate data that follows the physical bounds of the modelled variables. Our latent diffusion models reach similar scores as the diffusion model trained in data space, but they smooth the generated fields as caused by the latent mapping. While enforcing physical bounds cannot reduce the smoothing, it improves the representation of the marginal ice zone. Therefore, for large-scale Earth system modelling, latent diffusion models can have many advantages compared to diffusion in data space if the significant barrier of smoothing can be resolved. § INTRODUCTION Generative diffusion <cit.> has revolutionised data generation in computer vision <cit.> and other domains <cit.>. Initial applications to geophysical systems show promise for prediction <cit.> and downscaling <cit.>. To cut costs, data can be mapped into a latent space using a pre-trained autoencoder, where diffusion models can be learned <cit.>. However, the mapping can reduce the model's ability to generate fine-grained data <cit.>. Here, we examine how well latent diffusion models (LDMs) can generate geophysical data compared to diffusion directly applied in data space. We use Arctic-wide sea-ice simulations that were run with the state-of-the-art sea-ice model neXtSIM <cit.> at a 1/4^∘ resolution <cit.>. Sea ice, with its discrete elements, anisotropy, and multifractality, poses a challenging problem for generative models and serves as ideal candidate for evaluating LDMs. We show that while the general structure remains the same, LDMs produces smoother solutions than diffusion models in data space, as shown in Fig. <ref>. Although generative diffusion is fundamently defined by its Gaussian diffusion process, we can integrate prior physical information into the encoder and decoder of an LDM, as similarly done in <cit.>. Hence, we tailor the approach of LDMs to Earth system modelling by incorporating physical bounds into the autoencoder. To bound the variables, we replace the common Gaussian reconstruction loss (mean-squared error) by the log-likelihood of censored Gaussian distributions, which combines regression and classification, enabling us to explicitly account for the bounds of the output during reconstruction. Such censored distributions improve the representation of the sea-ice extent and thereby enhance the physical consistency of the output, as demonstrated in this study. § METHODS In the following, we introduce a brief overview on LDMs. Specifically, we focus on how to make use of prior physical knowledge into such models. For a more detailed treatment, we refer to Appendix <ref>. 
Our goal is to train neural networks (NNs) to generate state samples 𝐱∈ℝ^5×512×512 that should look like samples drawn from the here-used sea-ice simulation 𝐱∼𝒟; modelled are the thickness (SIT), concentration (SIC), damage (SID), and the velocities in x- (SIU) and y-direction (SIV). For the generation, we employ a diffusion model that works in a lower-dimensional latent space 𝐳_x∈ℝ^16× 64 × 64 as spanned by an autoencoder. The encoder enc(𝐱) maps from data space into latent space, while the decoder dec(𝐳_x) maps back into data space. The encoder and decoder are implemented as fully convolutional NNs (CNNs) with their weights and biases θ and further described in Appendix <ref>. The autoencoder should reduce the data dimensionality without compressing the semantic <cit.>. We nevertheless want to achieve a continuous latent space, from which the decoder can map similar values to similar data points. Consequently, we apply a variational autoencoder <cit.>, where we downweight the Kullback-Leibler divergence by β = 0.001 <cit.>. Given a data sample 𝐱, the loss function for the autoencoder then reads, ℒ(𝐱, θ) = -_q(𝐳_x𝐱, θ)[log(p(𝐱𝐳_x, θ))] + β[]q(𝐳_x𝐱, θ)p(𝐳_x), which is minimised by averaging the loss across a mini-batch of samples and using a variant of gradient descent. The encoder maps a data sample into the mean and standard deviation of an assumed Gaussian distribution in latent space, q(𝐳_x𝐱, θ) = 𝒩(μ_𝖾𝗇𝖼, θ(𝐱), (σ_𝖾𝗇𝖼, θ(𝐱))^2 ), while we assume a prior Gaussian distribution with mean 0 and a diagonal covariance 𝐈, p(𝐳_x)=𝒩(0, 𝐈). The reconstruction loss in Eq. (<ref>) corresponds to the negative log-likelihood (NLL) of the data sample given an assumed distribution and includes the decoder, which maps from latent space to a Gaussian distribution, from where we can sample the decoder output 𝐲, 𝐲𝒩(dec_θ(𝐳_x), 𝐬^2). The standard deviation 𝐬 is spatially shared with one value per output variable and learned alongside the autoencoder <cit.>. From such a sample 𝐲, we can recover the reconstructed data sample 𝐱 by applying a deterministic (possibly non-invertible) transformation function 𝐱 = f(𝐲). Depending on the transformation function, we obtain different reconstruction losses: * If we apply an identity function f(𝐲) = 𝐲, the reconstruction loss is the NLL of a Gaussian distribution and includes a mean-squared error weighted by 𝐬. * If we clip the output into physical bounds, e.g. f(𝐲) = min(max(𝐲, 𝐱_𝖫), 𝐱_𝖴) by specifying a lower 𝐱_𝖫 and/or upper bound 𝐱_𝖴, the reconstruction loss is the NLL of a censored Gaussian distribution. The censored distribution utilises the decoder output, as described in Eq. (<ref>), to regress the reconstruction and to determine whether it will be clipped into the bounds. Hence, this NLL combines a regression with a classification task and can be seen as NLL of a type-I Tobit model <cit.>. For a detailed treatment of the reconstruction loss, we refer to Appendix <ref>. After training the autoencoder with Eq. (<ref>), we use the mean prediction of the encoder, μ_𝖾𝗇𝖼, θ(𝐱), as deterministic mapping from data space into latent space and train a diffusion model in this space. The instantiated diffusion model <cit.> can be seen as denoiser D_ϕ(𝐳_τ, τ) with its weights and biases ϕ <cit.>, which takes a noised latent state 𝐳_τ at a pseudo-time τ∈ [0, 1] and should output the cleaned latent state 𝐳_x. 
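As a concrete illustration of the second option above, the following PyTorch sketch implements a censored-Gaussian (type-I Tobit) reconstruction negative log-likelihood. Tensor shapes, the clamping constant, and the way per-variable bounds are passed (with ±inf for unbounded sides) are our own illustrative choices, not the exact implementation used here.

```python
import math
import torch
from torch.distributions import Normal

def censored_gaussian_nll(x, mu, log_s, lower, upper):
    """Censored-Gaussian (type-I Tobit) negative log-likelihood.

    x      : observed (already bounded) data, shape (B, C, H, W)
    mu     : decoder mean prediction, same shape as x
    log_s  : learned log standard deviation, shape (C,), one scale per variable
    lower, upper : per-variable bounds, shape (C,); use -inf/+inf for unbounded sides
    """
    s = log_s.exp().view(1, -1, 1, 1)
    lo = lower.view(1, -1, 1, 1)
    up = upper.view(1, -1, 1, 1)
    std_normal = Normal(0.0, 1.0)

    # Interior case: ordinary Gaussian log-density (scale-weighted MSE plus log-scale term).
    log_pdf = -0.5 * ((x - mu) / s) ** 2 - log_s.view(1, -1, 1, 1) - 0.5 * math.log(2.0 * math.pi)
    nll = -log_pdf

    # Lower-censored case: -log P(y <= lower) = -log Phi((lower - mu) / s).
    log_cdf_lo = std_normal.cdf((lo - mu) / s).clamp_min(1e-12).log()
    nll = torch.where(x <= lo, -log_cdf_lo, nll)

    # Upper-censored case: -log P(y >= upper) = -log Phi((mu - upper) / s).
    log_cdf_up = std_normal.cdf((mu - up) / s).clamp_min(1e-12).log()
    nll = torch.where(x >= up, -log_cdf_up, nll)

    # Sum over variables and grid points, average over the batch.
    return nll.sum(dim=(1, 2, 3)).mean()

# Illustrative bounds for (SIT, SIC, SID, SIU, SIV) in physical units; with
# normalised data the bounds would have to be transformed consistently.
inf = float("inf")
lower = torch.tensor([0.0, 0.0, 0.0, -inf, -inf])
upper = torch.tensor([inf, 1.0, 1.0, inf, inf])
```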
During training, the noised latent state is produced by a variance-preserving diffusion process <cit.>, under the assumption that the latent sample 𝐳_x has been normalised to mean 0 and standard deviation 1, 𝐳_τ = α_τ𝐳_x + σ_τϵ, with ϵ∼𝒩(0, 𝐈), where the signal α_τ and noise σ_τ amplitudes are given in terms of logarithmic signal-to-noise ratio γ(τ) = log(α_τ^2/σ_τ^2). The training noise scheduler specifies the dependency of the ratio on the pseudo-time and is adapted during the training process <cit.> as further described in Appendix <ref>. Given a latent sample 𝐳_x and pseudo-time τ, the loss function to train the diffusion model reads ℒ(𝐳_x, τ, ϕ) = w(γ)(-dγ/dτ) e^γ‖𝐳_x- D_ϕ(𝐳_τ, τ)‖_2^2, with w(γ) = exp(-γ/2) as external weighting function <cit.>. The diffusion model is parameterised with a v-prediction <cit.> and implemented as UViT architecture <cit.> combining a U-Net <cit.> with a vision transformer <cit.>. The minimum value of the noise scheduler is set to γ_𝗆𝗂𝗇=-15 and the maximum value to γ_𝗆𝖺𝗑=15. We further describe the diffusion model in Appendix <ref> and its architecture in Appendix <ref>. To generate new data samples with the trained models, we draw a sample 𝐳_1∼𝒩(0, 𝐈), and integrate an ordinary differential equation <cit.> with our trained diffusion model D_ϕ(𝐳_τ, τ) in 20 integration steps. We use the second-order Heun integrator and sampling noise scheduler from <cit.> to obtain a latent sample 𝐳_x. We then apply the decoder to reconstruct a sample in data space 𝐱. If the decoder is trained with censoring, we clip the SIT∈ [0, ∞), SIC∈ [0, 1], and SID∈ [0, 1] to their physical bounds. § RESULTS In our experiments, we evaluate how LDMs perform compared to generative diffusion trained in data space and how the censoring of the decoder improves the physical consistency of the generated samples. We train the models on simulations from the state-of-the-art Lagrangian sea-ice model neXtSIM <cit.> which has been coupled to the ocean component of the NEMO framework <cit.>. The simulation data contains more than 20 simulation years <cit.> with a six-hourly output on a 1/4^∘ (≃12 km in the Arctic) curvilinear grid. Omitting the first five years as spin-up, we train the models on data from 2000–2014, validate on 2015, and estimate the here-presented test scores on 2016–2018. All models are trained with ADAM <cit.>, a batch size of 16, and a learning rate of 2 × 10^-4 for 10^5 iterations. For further experimental details, we refer to Appendix <ref>. The autoencoders compress the input data from 5 × 512 × 512 to 16 × 64 × 64. We ablate design choices and evaluate the autoencoder performance of the deterministic mapping as used for diffusion in terms of reconstruction quality with the mean squared error (MSE), the structural similarity index measure (SSIM), and the accuracy in the sea-ice extent (ACC_SIE). The definition of the metrics and their calculations are explained in Appendix <ref>. The results are shown in Table <ref>. Since its regularisation is much lower, the VAE trained with β=10^-3 reconstitutes into a better reconstruction than the VAE with β=1, resulting into lower errors and higher similarities. Replacing the Gaussian log-likelihood in the reconstruction loss by a censored log-likelihood has a neutral impact on the RMSE and SSIM. However, the VAE with censoring can better reconstruct the marginal ice zone, which leads to a higher accuracy in the sea-ice extent. 
While higher accuracies therein might have only a small impact for generating new data, they might have a higher impact for other tasks where the output would be reused and were small errors can amplify, e.g., in surrogate modelling. On top of the pre-trained autoencoder, we train LDMs as described in Section <ref>. We compare these LDMs to a diffusion model in data space, trained with Eq. (<ref>) by setting 𝐳_x=𝐱. As shown in Fig. <ref>, LDMs smooth the generated fields compared to diffusion in data space. In the following, we establish how and why this smoothing happens by showing estimated spectra for the sea-ice thickness in Fig. <ref>. LDMs lose small-scale information compared to the testing dataset, while diffusion models in data space retain information across all scales. This loss of small-scale information results into a smoothing in the generated samples as shown in Fig. <ref>. Since we find the same for the autoencoder, this smoothing is a result of the pre-training as variational autoencoder and the data compression in latent space. The two diffusion models additionally experience a shift towards larger energies than the test dataset. This shift might be caused by a discrepancy between training (2000–2014) and testing (2016–2018) or by the approximations during sampling. Inspired by the Fréchet inception distance <cit.>, we evaluate the diffusion models in comparison to the testing dataset with the Fréchet distance in the latent space of an independently trained VAE (FAED). We additionally compare the probability that a grid point is covered by sea ice with the root-mean-squared error RMSE_SIE, with results shown in Table <ref>. Whereas we evaluate on 4383 generated samples only, the results are robust to the dataset size as demonstrated in Appendix <ref>. The diffusion models consistently outperform the validation dataset for the FAED. Although the validation FAED is limited by its dataset size, this nevertheless indicates that diffusion models effectively capture large-scale structures and correlations, as observed in the testing dataset. The differences in the FAED between the diffusion models might be caused by the volatility during training. Diffusion models are trained with a Gaussian diffusion process, and the data space model struggles to accurately represent the sea-ice extent, leading to a higher RMSE than for the validation dataset. The smoothing in the LDM exacerbates the inaccuracies in representing sea-ice extent, resulting in even higher RMSE values. However, as for the reconstruction, censoring improves the sea-ice extent representation, bringing the RMSE to the realm of the validation dataset. § CONCLUSIONS In this manuscript, we make the first step towards using latent diffusion models (LDMs) for generating multivariate, Arctic-wide sea-ice data. We compare LDMs with diffusion models in data space and find that LDMs lose small-scale information compared to the target fields, though their scores are similar to those of diffusion models in data space. This loss of information is due to the latent mapping process and its pre-training as variational autoencoder. To mitigate this, we could increase the number of channels in latent space (see Appendix <ref>). Another approach would be to pre-train the autoencoder with an additional adversarial <cit.> or spectral loss <cit.>, although these methods could complicate and destabilise the training and tuning process. 
Despite the loss of small-scale information, LDMs offer a significant advantage over diffusion models in data space: the ability to enforce physical bounds during data generation. By replacing the Gaussian data log-likelihood by a censored log-likelihood, we explicitly incorporate the clipping of the decoder output into the training process. The censoring has a neutral impact on the sample quality but leads to a more accurate representation of the sea-ice extent, indicating an increased physical consistency. This increased consistency could benefit other tasks as surrogate modelling or downscaling. While we have tested LDMs only for sea-ice data, we anticipate similar smoothing issues for other Earth system components. Therefore, the smoothing issue remains a significant barrier to the broader application of the otherwise promising LDMs for large-scale Earth system modelling. § ACKNOWLEDGMENTS This manuscript is a contribution to the DeepGeneSIS project as financed by INSU/CNRS in the PNTS program and the SASIP project, supported under Grant no. 353 by Schmidt Science, LLC – a philanthropic initiative that seeks to improve societal outcomes through the development of emerging science and technologies. TSF, CD, AF, MB additionally acknowledge financial support by INSU/CNRS for the project GenD^2M in the LEFE-MANU program. This work was granted access to the HPC resources of IDRIS under the allocations 2023-AD011013069R2 made by GENCI. We would like to thank Guillaume Boutin for providing access to the data and Laurent Bertino for helping to obtain the funding for the DeepGeneSIS project. With their reviews, four anonymous referees have improved the manuscript. CEREA is a member of the Institut Pierre-Simon Laplace (IPSL). § IMPACT STATEMENT We present here work whose goal is to advance the field of Earth system modeling and machine learning. While there are many potential societal consequences of our work, we will briefly discuss the impact on the energy consumption only. Training neural networks consumes a lot of energy and in the future, the energy consumption of machine learning will rather increase than decrease, especially with the rise of generative models like diffusion models discussed here. Finding methods to reduce the computational costs of those models can be key to also reduce their energy consumption. While latent diffusion models might have the limitation of smoothing compared to diffusion models in data space, they offer an opportunity to decrease the costs and, thus, also the energy consumption. This might be especially helpful if we further strive towards larger and larger deep learning architectures, in the end possibly for coupled Earth system models. icml2024 § ADDITIONAL METHODS In the main manuscript, we shortly explained the methods around our main message: we can train latent diffusion models for geophysics end-to-end and incorporate physical bounds into the latent diffusion model. In the following, we explain the methods more in detail. We elucidate on our variational diffusion formulation in Appendix <ref>. We introduce censored Gaussian distributions for the data log-likelihood in Appendix <ref>, and we discuss our noise scheduler in Appendix <ref>. §.§ Latent diffusion formulation In our formulation of generative diffusion, we rely on variational diffusion models as introduced in <cit.>. These models allow us to naturally make use of encoder and decoder structures to reduce the dimensionality of the system. 
Given data drawn from a dataset 𝐱∼𝒟, the encoder produces the corresponding latent state 𝐳_x = enc(𝐱). This latent state is noised by a variance-preserving diffusion process, 𝐳_τ = α_τ𝐳_x + σ_τϵ, with ϵ∼𝒩(0, 𝐈),<ref> where the pseudo-time τ∈ [0, 1] determines the temporal position of the process. The signal amplitude α_τ specifies how much signal is contained in the noised sample, and the noise amplitude σ_τ=√(1-α_τ^2) how much signal has been replaced by noise. The dependency of the (noise) scheduling of α_τ and σ_τ on the pseudo-time is formulated as logarithmic signal-to-noise ratio γ(τ) = log(α_τ^2/σ_τ^2) such that we can recover α_τ^2 = 1/1+exp(-γ(τ)) and σ_τ^2 = 1/1+exp(γ(τ)). As in the variational autoencoder framework <cit.>, the data log-likelihood log(p(𝐱)) is lower bounded by the evidence lower bound (ELBO). In equivalence, we can write an upper bound on the negative data log-likelihood <cit.>, -log(p(𝐱)) ≤ -_𝐳_0 q(𝐳_0|𝐱)[log p(𝐱|𝐳_0) ] + []q(𝐳_1|𝐱)p(𝐳_1) + _𝐳_x q(𝐳_x|𝐱), τ [0, 1]ℒ(ϕ, 𝐳_x, τ). The first term is the negative log-likelihood of the data given a sample 𝐳_0 drawn in latent space, and also called reconstruction loss, measuring how well we can reconstruct the data from the latent space. The likelihood of the data p(𝐱|𝐳_0) maps back from latent space into data space and, hence, contains the decoder. The posterior distribution q(𝐳_0|𝐱) specifies the first noised state given a data sample and includes the encoder, mapping from data space to latent space. We will further discuss different options for the data log-likelihood in Appendix <ref>. The second term is the Kullback-Leibler divergence between the distribution at τ=1, the end of the diffusion process, and a prior distribution for the diffusion process. Given the variance-preserving diffusion process from Eq. (<ref>) and assuming a Gaussian prior distribution with mean 0 and covariance , p(𝐳_1)=0, we obtain for the second term, []q(𝐳_1|𝐱)p(𝐳_1) = 1/2∑^N_𝗅𝖺𝗍𝖾𝗇𝗍_n=1 [σ^2_1 + (α_1z_x, n)^2 - 1 - log(σ^2_1)], over the N_𝗅𝖺𝗍𝖾𝗇𝗍 dimensions of the latent space. The last term is the diffusion loss, the only term that depends on the denoising NN D_ϕ(𝐳_τ, τ). Given a sample in latent space 𝐳_x and a drawn pseudo-time τ, we can calculate a noised sample 𝐳_τ and the loss reads ℒ(ϕ, 𝐳_x, τ) = (-dγ/dτ)· e^γ‖𝐳_x- D_ϕ(𝐳_τ, τ)‖_2^2. Our loss function used to train the diffusion models, Eq. <ref>, corresponds to this loss term by setting w(γ)=1. As shown in <cit.>, using a monotonic weighting, like w(γ)=exp(-γ/2), corresponds to the ELBO with data augmentation. The only variable in Eq. (<ref>) are the weights and biases of the diffusion model and the noise scheduler that specifies γ(τ). Using the observation that the noise scheduling can be seen as importance sampling, we use an adaptive noise scheduler as explained in Appendix <ref>. In our experiments with diffusion models, we fix the end points of the noise scheduler to γ(0) = γ_min = -15 and γ(1) = γ_max = 15, and use a fixed or pre-trained encoder and decoder. The first two terms of Eq. (<ref>) are then constant with respect to the trained diffusion model and can be neglected in the optimisation. We obtain Eq. (<ref>) as the only loss to optimise the diffusion model. Instead of directly predicting the state 𝐳_x, we predict with our diffusion model a surrogate target 𝐯_τα_τϵ - σ_τ𝐳_x, as this tends to improve the convergence and the stability of the training <cit.>. We can recover the denoised state by setting D_ϕ(𝐳_τ, τ) = α_τ𝐳_x - σ_τ v_ϕ(𝐳_τ, τ). 
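As a concrete illustration of the noising step and the 𝐯-target just defined (the corresponding loss follows below), a minimal PyTorch sketch could look as follows; the tensor shapes and the per-sample handling of γ are illustrative assumptions.

```python
import torch

def vp_noising(z_x, gamma):
    """Variance-preserving noising of a clean latent sample at log-SNR gamma.

    z_x   : clean latent sample, shape (B, C, H, W)
    gamma : per-sample log signal-to-noise ratio, shape (B,)
    Returns the noised state z_tau and the v-prediction target.
    """
    g = gamma.view(-1, 1, 1, 1)
    alpha = torch.sigmoid(g).sqrt()      # alpha^2 = 1 / (1 + exp(-gamma))
    sigma = torch.sigmoid(-g).sqrt()     # sigma^2 = 1 / (1 + exp(+gamma)), so alpha^2 + sigma^2 = 1
    eps = torch.randn_like(z_x)
    z_tau = alpha * z_x + sigma * eps
    v_target = alpha * eps - sigma * z_x
    # Given a prediction v_hat of v_target, the denoised estimate follows as
    # alpha * z_tau - sigma * v_hat, which recovers z_x exactly when v_hat = v_target.
    return z_tau, v_target
```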
Our loss function reads then ℒ(ϕ, 𝐳_τ, τ) = w(γ) ·(-dγ/dτ) · (e^-γ+1)^-1‖𝐯_τ - v_ϕ(𝐳_τ, τ)‖^2_2, which corresponds to the ELBO with data augmentation and 𝐯_τ-prediction. To optimise the encoder and decoder together with the diffusion in an end-to-end training scheme <cit.>, we can make use of Eq. (<ref>). However, pre-training the encoder and decoder simplifies and stratifies the training process. To pre-train the encoder and decoder, we apply a variational autoencoder loss function, ℒ(θ) = -_q(𝐳_x𝐱)[log(p(𝐱𝐳_x))]<ref> + β[]q(𝐳_x𝐱)p(𝐳_x).<ref> Comparing Eq. (<ref>) with this pre-training loss, we can see that the reconstruction loss appears in both formulations, while the loss terms in latent/noised space are combined into a single loss, and the Eq. (<ref>) should prepare the latent space for its use in diffusion models. The reconstruction loss is discussed more in detail in Appendix <ref>. We train two different version of diffusion models: * The encoder and decoder are pre-trained with Eq. (<ref>) as 𝐳_x = enc(𝐱) and 𝐱̂ = dec(𝐳_x), which results into a latent diffusion model. * The encoder and decoder are fixed to the identity functions 𝐳_x = 𝐱 and 𝐱̂ = 𝐳_x, which leads to diffusion models in data space. During their training, we draw a mini-batch of data samples 𝐱𝒟, converted into latent space 𝐳_x, and pseudo-times τ∈ [0, 1]. We reduce the variance of the sampling in the pseudo-times by using stratified sampling as proposed in <cit.>. Afterwards, we estimate the diffusion loss, Eq. (<ref>), for the current parameters ϕ of the diffusion model. This loss is then used in Adam <cit.> to make a gradient descent step. The diffusion process can be seen as stochastic differential equation (SDE) that is integrated in pseudo-time <cit.>. The reversion of this process is again a SDE <cit.> and we can find an ordinary differential equation (ODE) that has the same marginal distribution as the reverse SDE <cit.>, d𝐳_τ = [𝐟(𝐳_τ, τ) - 1/2 g(τ)^2 𝐬_ϕ(𝐳_τ, τ) ]dτ. For the variance-preserving diffusion process as defined in Eq. (<ref>), we obtain for the drift and diffusion term 𝐟(𝐳_τ, τ) = -1/2(d/dτlog(1+e^-γ(τ)))𝐳_τ g(τ)^2 = d/dτlog(1+e^-γ(τ)). Consequently, the only variable in the ODE, Eq. (<ref>), is the score function 𝐬_ϕ(𝐳_τ, τ) which we approximate with our trained NN, 𝐬_ϕ(𝐳_τ, τ) = -(𝐳_τ + α_τ/σ_τv_ϕ(𝐳_τ, τ)). Given Eq. (<ref>)–(<ref>), we can integrate (<ref>) from τ=1 to τ=0 to find a solution and generate new samples; the initial conditions for the ODE are given by 𝐳_1[]0. For the integration of the ODE, we use a second-order Heun sampler <cit.> with 20 integration steps. The time steps for the integration are given by the inference noise scheduler as proposed by <cit.>. We extend the noise scheduler to a wider range of signal-to-noise ratio by setting γ_min=-15 and γ_max=15 with truncation <cit.>. Additionally, we adapt the sampling procedure: we apply the Heun procedure for all 20 integration steps, while we use a single Euler step to denoise the output from 𝐳_0 to 𝐳_x. The integration hence includes 41 iterations with the trained neural network. After obtaining 𝐳_x, we apply the decoder to map back into data space 𝐱 = dec(𝐳_x) which gives us then the generated sample. §.§ Data log-likelihood with censored distributions The autoencoder should map from latent space to data space by taking physical bounds into account, e.g. the non-negativity of the sea-ice thickness. 
To achieve such bounds, we can clip the decoder output and set values below (above) the lower (upper) bound explicitly to the bound, similar to what the rectified linear unit (relu) activation function does for the negative bound. Hence, all values below (above) the bound are projected to the bound. In this clipping case, the output of the decoder is no longer the physical quantity itself but another latent variable, which is converted into the physical quantity in a post-processing step. To differentiate between the physical quantities and this additional latent variable, we denote the decoder output as the predicted latent variable 𝐲∈ℝ^5 × 512 × 512 in the following. We assume that this predicted latent variable is Gaussian distributed, where the decoder predicts the mean μ = dec(𝐳_x) and the standard deviation is spatially shared with one parameter per variable, 𝐲∼𝒩(μ, 𝐬^2). Thanks to the diagonal covariance, the multivariate data log-likelihood factorises into univariate terms, which we can simply sum over the variables and grid points. Given this Gaussian assumption, the data log-likelihood would read for K variables and L grid points -log p(𝐱|𝐳_x) = 1/2∑^K_k=1∑^L_l=1 [ (x_k, l-μ_k, l)^2/(s_k)^2 + log((s_k)^2) + log(2π) ]. The negative log-likelihood is proportional to a mean-squared error (MSE) weighted by the scale parameter. Although the MSE is often employed to train an autoencoder, it implicitly assumes that the decoder output is the physical quantity itself and neglects the clipping in the training of the autoencoder. Consequently, if we clip the output into physical bounds, we make a wrong assumption and introduce a bias into the decoder and, thus, the latent space. This bias depends on how often the lower (upper) bound is reached and is difficult to quantify. To explicitly bake the clipping into the cost function and treat the decoder output as a latent variable, we censor the assumed distribution. To optimise the autoencoder, we derive the data log-likelihood where variables are lower and upper bounded, with x_𝖫 as lower and x_𝖴 as upper bound, e.g., x_𝖫 = 0 as lower and x_𝖴 = 1 as upper bound for the sea-ice concentration. For variables with only one bound, the case of the missing bound can simply be omitted. The predicted latent variable for the k-th variable and the l-th grid point is clipped into the physical bounds by x_k, l = x_𝖫, k if μ_k, l ≤ x_𝖫, k; x_𝖴, k if μ_k, l ≥ x_𝖴, k; and μ_k, l otherwise. This clipping operation is non-invertible, such that we cannot recover the true latent variable y_k, l from observing the physical quantity x_k, l. However, we know that if x_k, l = x_𝖫, k or x_k, l = x_𝖴, k, the true latent variable was clipped. In such cases, the data log-likelihood should hence increase the probability that the decoder output is clipped. In the following, we denote the cumulative distribution function (CDF) of the standard Gaussian distribution as Φ and its probability density function (PDF) as φ. Given the assumed Gaussian distribution of the latent variables, Eq. (<ref>), the physical quantity is presumably distributed by a censored Gaussian distribution with the following PDF for the k-th variable and l-th grid point, p(x_k, l | 𝐳_x) = Φ((x_𝖫, k-μ_k, l)/s_k) if x_k, l = x_𝖫, k; Φ((μ_k, l-x_𝖴, k)/s_k) if x_k, l = x_𝖴, k; (1/s_k) φ((x_k, l-μ_k, l)/s_k) if x_𝖫, k < x_k, l < x_𝖴, k; and 0 otherwise. Different from a truncated Gaussian distribution, Eq.
(<ref>) gives values at the bounds a probability larger than zero, and all the density of the latent variable exceeding the bounds is folded to the bounds. Using the PDF of the censored distribution from Eq. (<ref>) for the data log-likelihood, we obtain -log p(𝐱|𝐳_x) = ∑^K_k=1∑^L_l=1+ I(x_k, l = x_𝖫, k) log((x_𝖫, k-μ_k, l/s_k)) + I(x_k, l = x_𝖴, k) log((μ_k, l-x_𝖴, k/s_k)) + I(x_𝖫, k < x_k, l < x_𝖴, k) log(1/s_k(x_k, l-μ_k, l/s_k)) as cost function, where I(·) is the indicator function. The first two parts of the cost function are the bounded cases and correspond to a Gaussian classification model, using as probability the log-CDFs at the bounds. The last part is the log-likelihood of a Gaussian distribution as in Eq. (<ref>) and includes the weighted MSE. The here derived log-likelihood thus combines a regression with a classification and is the negative log-likelihood of a type I Tobit model <cit.>. Whereas the optimisation of Eq. (<ref>) seems more complicated than the optimisation of a log-likelihood from a Gaussian distribution, we have experienced no optimisation issues with variants of gradients descent. Caused by the regression-classification mixture, the decoder output specifies not only the variable of interest but combined with the scale also the probability that the output will be clipped. This censored distribution approach works because we neglect correlations between variables and grid points and convert the multivariate into a univariate prediction problem. Nevertheless, we are free to chose any neural network architecture to predict the mean of the distribution in Eq. (<ref>). Consequently, by e.g. using a fully convolutional decoder, we can still represent correlations in the decoder output. §.§ Adaptive noise scheduler The noise scheduler maps a given pseudo-time τ∈ [0, 1] to a log-signal-to-noise ratio γ(τ) and determines how much time is spent at a given ratio. In the manuscript, we use different schedulers during training and generation. While we generate the data with a fixed scheduler as proposed by <cit.>, we make use of an adaptive scheduler <cit.> during training. In the following, we briefly introduce the principles of the adaptive scheduler, and we refer to <cit.> for a longer treatment. As shortly shown in Appendix <ref>, the loss function of the diffusion model optimises the evidence lower bound (ELBO). Consequently, we can extend the variational parameters by the parameters of the training noise scheduler to further tighten the bound <cit.> which gives us the optimal scheduler in the ELBO sense. Extending on this idea, the distribution given after transforming a drawn pseudo-time into γ can be seen as importance sampling distribution p(γ) <cit.>, and we can write -dγ/dτ = 1/p(γ). Interpreting the output of the noise scheduler as a drawn from an importance sampling distribution allows us to formulate the optimal noise scheduler which should fulfil p(γ) ∝_𝐱, ϵ0[ w(γ) · (e^-γ+1)^-1‖𝐯_γ - v_ϕ(𝐳_γ, γ)‖^2_2 ], where we made a change-of-variables from τ to γ to signify the dependency on γ. As Eq. (<ref>) is infeasible, we approximate the optimal importance sampling distribution. Using 100 equal-distant bins between γ_min = -15 and γ_max = 15, we keep track of an exponential moving average of Eq. (<ref>) for each bin. The density is constant within each bin, which allows us to estimate an empirical cumulative distribution function from γ_max to γ_min. The training noise scheduler is then given as empirical quantile function mapping from [0, 1] to [γ_max, γ_min]. 
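A rough sketch of such an adaptive scheduler is given below, under the assumption that the per-bin EMA of the loss serves directly as the unnormalised density p(γ); sampling a bin proportionally to it and a uniform position inside the bin is then equivalent to the empirical quantile mapping described above. The bin count, decay rate, and per-sample update loop are illustrative choices.

```python
import torch

class AdaptiveNoiseScheduler:
    """Sketch of an adaptive training noise scheduler based on importance sampling."""

    def __init__(self, gamma_min=-15.0, gamma_max=15.0, n_bins=100, decay=0.99):
        self.edges = torch.linspace(gamma_min, gamma_max, n_bins + 1)
        self.ema = torch.ones(n_bins)   # uniform density before any update
        self.decay = decay

    def update(self, gamma, loss):
        """EMA update of the per-bin loss from one mini-batch (per-sample loop for clarity)."""
        bins = torch.bucketize(gamma, self.edges[1:-1])
        for b, l in zip(bins.tolist(), loss.detach().tolist()):
            self.ema[b] = self.decay * self.ema[b] + (1.0 - self.decay) * l

    def sample(self, n):
        """Draw n values of gamma with probability proportional to the per-bin EMA loss."""
        probs = self.ema / self.ema.sum()
        bins = torch.multinomial(probs, n, replacement=True)
        u = torch.rand(n)
        return self.edges[bins] + u * (self.edges[bins + 1] - self.edges[bins])
```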
In practice, this adaptive noise scheduler reduces the number of needed hyperparameters for the diffusion model and improves its convergence <cit.>. It was recently similarly used in <cit.>. § ARCHITECTURES In the following, we will describe our neural network architectures. The autoencoder is purely based on a convolutional architecture, while the diffusion models are based on a mixture between a convolutional U-Net <cit.> and a vision transformer (ViT, <cit.>), termed U-ViT <cit.>. §.§ Masked convolutions As visible in Fig. <ref>, sea ice only exists on grid points covered by ocean while grid points with land are omitted. To take the masking of land pixels into account, we apply convolutional operations as inspired by partial convolutions <cit.>. Before each convolutional operation, the data is multiplied by the mask 𝐦∈ [0, 1], where valid grid points are 1. The masking sets all grid points with land to zero such that they act like zero padding and interactions with land grid points are deactivated. Consequently, grid points near land exchange less information with the surrounding grid points which in fact imitates how land masking is implemented in numerical models. §.§ Autoencoder The autoencoder maps from 𝐱∈ℝ^5 × 512 × 512 to 𝐳_z∈ℝ^16 × 64 × 64 and vice-versa. Its encoder includes a series of downsampling operations with ConvNeXt blocks, while the decoder combines upsampling operations with ConvNeXt blocks. The main computing block are ConvNeXt blocks <cit.>: a wide masked convolution with a 7 × 7 kernel extracts channel-wise spatial information. This spatial information is normalised with layer normalisation <cit.>. Afterwards, a multi-layered perceptron (MLP) with a Gaussian error linear unit (Gelu, <cit.>) activation mixes up the normalised information, before the output gets added again to the input of the block in a residual connection. Throughout the ConvNeXt block, the number of channels remain the same. The downsampling in the encoder combines layer normalisation with a masked convolution and a 2×2 kernel and a stride of 2, which also doubles the number of channels. To downsample the mask, we apply max pooling, i.e. if there is an ocean grid point in a 2×2 window, the output grid point is set to 1. This increases the number of ocean grid points compared to a strided downsampling. In the upsampling of the decoder, a layer normalisation is followed by a nearest neighbour interpolation, which doubles the number of grid points in y- and x-direction. Afterwards, a masked convolution with a 3×3 kernel smooths the interpolated fields and halves the number of channels. This combination of interpolation with convolution results into less checkerboard effects compared to a transposed convolution <cit.>. The architectures for the encoder and decoder read (the number in brackets represents the number of channels): * Encoder: Input(5) → Point-wise convolution (64) → ConvNeXt (64) → ConvNeXt (64) → Downsampling (128) → ConvNeXt (128) → ConvNeXt (128) → Downsampling (256) → ConvNeXt (256) → ConvNeXt (256) → Downsampling (512) → ConvNeXt (512) → ConvNeXt (512) → Layer normalisation (512) → rectified linear unit (relu, 512) → Point-wise convolution (32) * Decoder: Input(16) → Point-wise convolution (512) → ConvNeXt (512) → ConvNeXt (512) → Upsampling (256) → ConvNeXt (256) → ConvNeXt (256) → Upsampling (128) → ConvNeXt (128) → ConvNeXt (128) → rightarrow (64) → ConvNeXt (64) → ConvNeXt (64) → Layer normalisation (64) → relu (64) → Point-wise convolution (5). 
Note, the encoder outputs the mean and standard deviation in latent space, while the decoder gets only a latent sample where the mean and standard deviation are combined. The relu activation before the last point-wise convolution helps to improve the representation of continuous-discrete sea-ice processes. In total, the encoder has 2.2×10^6 parameters and the decoder has 3.1×10^6 parameters. §.§ Diffusion architecture The diffusion models modify the U-ViT architectures as introduced in <cit.>. The encoding and decoding part of the architecture are implemented with convolutional operations, similar to how we implemented the autoencoder. In the bottleneck, the architecture uses a vision transformer <cit.>. In addition to the latent sample 𝐳_τ, the diffusion models get the pseudo-time τ as input. This pseudo-time is handled as conditioning information and embedded with a sinusoidal embedding <cit.> to extract 512 Fourier features. The embedding is followed by a MLP which reduces the number of features to 256 and where we apply a Gelu for feature activation. These extracted features are fed into the adaptive layer normalisation layers <cit.> with conditioned affine parameters. The main computing blocks of the encoding and decoding part are again ConvNeXt blocks (see also Appendix <ref>), with an additional conditioning of the residual connections on the pseudo-time features. The downsampling in the encoding part remains the same as for the encoder. For both blocks, we simply replace layer normalisation by adaptive layer normalisation conditioned on the pseudo-time. The upsampling is similar to the upsampling for the decoder (see also Appendix <ref>). However, we have also skip connections, transferring information from before the downsampling to the upsampling at the same level. These features are concatenated with the interpolated features before the convolution of the upsampling is applied. Consequently, the convolution has 1.5× more input channels than the convolution in the upsampling of the decoder. Additionally, we again replace layer normalisation by adaptive layer normalisation conditioned on the pseudo-time. The transformer blocks <cit.> closely resemble the default implementation of ViT transformer blocks <cit.>. A self-attention block with 8 heads is followed by a MLP. Throughout the transformer block, the number of channels remain the same, even in the multi-layered perceptron. We apply adaptive layer normalisation at the beginning of each residual layer in the self-attention block and MLP <cit.> and additionally conditioned the residual connection on the pseudo-time. We remove land grid points before applying the transformer and add them afterwards again. 
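The masked convolution used throughout these blocks can be sketched as follows; this is an illustrative PyTorch version, and details such as padding, strides, grouping, and mask bookkeeping may differ from the actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Module):
    """Convolution that zeroes land grid points before the kernel is applied.

    The mask (1 over ocean, 0 over land) makes land points act like zero
    padding, so grid points near land exchange less information with their
    surroundings, mimicking the land masking of the numerical model.
    """

    def __init__(self, in_channels, out_channels, kernel_size, **conv_kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              padding=kernel_size // 2, **conv_kwargs)

    def forward(self, x, mask):
        return self.conv(x * mask)

# Example: a 7x7 masked convolution as used inside a ConvNeXt block.
conv = MaskedConv2d(64, 64, kernel_size=7)
x = torch.randn(2, 64, 512, 512)
mask = (torch.rand(1, 1, 512, 512) > 0.2).float()   # illustrative ocean/land mask
y = conv(x, mask)

# Downsampling the mask with max pooling: a coarse grid point stays "ocean"
# if any point of its 2x2 window is ocean.
mask_down = F.max_pool2d(mask, kernel_size=2)
```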
The architectures for the diffusion model in data space and in latent space read then (the number in brackets represents the number of channels): * Data space: Input(5) → Point-wise convolution (64) → ConvNeXt (64) → ConvNeXt (64) → Downsampling (64) → ConvNeXt (64) → ConvNeXt (64) → Downsampling (64) → ConvNeXt (64) → ConvNeXt (64) → Downsampling (64) → ConvNeXt (64) → ConvNeXt (64) → Downsampling (64) → ConvNeXt (64) → ConvNeXt (64) → Downsampling (128) → ConvNeXt (128) → ConvNeXt (128) → Downsampling (256) → Transformer (256) → Transformer (256) → Transformer (256) → Transformer (256) → Transformer (256) → Transformer (256) → Transformer (256) → Transformer (256) → Upsampling (128) → ConvNeXt (128) → ConvNeXt (128) → Upsampling (64) → ConvNeXt (64) → ConvNeXt (64) → Upsampling (64) → ConvNeXt (64) → ConvNeXt (64) → Upsampling (64) → ConvNeXt (64) → ConvNeXt (64) → Upsampling (64) → ConvNeXt (64) → ConvNeXt (64) → Upsampling (64) → ConvNeXt (64) → ConvNeXt (64) → Layer normalisation (64) → relu (64) → Point-wise convolution (5) * Latent space: Input(16) → Point-wise convolution (64) → ConvNeXt (64) → ConvNeXt (64) → Downsampling (128) → ConvNeXt (128) → ConvNeXt (128) → Downsampling (256) → ConvNeXt (256) → ConvNeXt (256) → Downsampling (512) → Transformer (512) → Transformer (512) → Transformer (512) → Transformer (512) → Transformer (512) → Transformer (512) → Transformer (512) → Transformer (512) → Upsampling (256) → ConvNeXt (256) → ConvNeXt (256) → Upsampling (128) → ConvNeXt (128) → ConvNeXt (128) → Upsampling (64) → ConvNeXt (64) → ConvNeXt (64) → Layer normalisation (64) → relu (64) → Point-wise convolution (5). Note, caused by the high dimensionality of the data, the diffusion model in data space has a high memory consumption, and we kept 64 channels for longer than in the latent diffusion model. The diffusion model has 15.3×10^6 parameters and the latent diffusion model 13.4×10^6 parameters. § EXPERIMENTS In our experiments, we train autoencoders and diffusion models with the architectures as described in Appendix <ref>. As we train different types of models, we designed our experiments with a unified setup to achieve a fair comparison. We use sea-ice simulation data from the state-of-the-art sea-ice model neXtSIM <cit.>. This sea-ice model uses an advanced brittle rheology <cit.> which introduces a damage variable to represent subgrid-scale dynamics. This way, the model can represent sea-ice dynamics at the mesoscale resolution ≃ 10 km as they are observed with satellites and buoys <cit.>. In the here used simulations <cit.>, neXtSIM has been coupled to the ocean component OPA of the NEMO modelling framework <cit.>. NeXtSIM runs on a lagrangian mesh with adaptive remeshing, while OPA uses a curvilinear mesh, here in the regional CREG025 configuration <cit.>. Both models are run at a similar resolution of 1/4^∘≃ 12 km. The output of neXtSIM is conservatively interpolated to the curvilinear OPA mesh. NeXtSIM is driven by ERA5 atmospheric forcings <cit.>. Since we use unconditional generation of fields, we neglect model output from OPA and the atmospheric forcing for the training of the neural networks. We target to generate samples for five different variables, the sea-ice thickness, the sea-ice concentration, the two sea-ice velocities in x- and y-direction, and the sea-ice damage, a special variable introduced for brittle rheologies which represents the subgrid-scale fragmentation of sea ice. 
The model output for all variables is a six-hourly average, while the damage corresponds to a six-hourly instantaneous output. In correspondence with <cit.>, we use data from 2000–2018, omitting the first five years (1995-1999) as spin-up. The first 14 years (2000-2014, 21916 samples) are used as training dataset, 2015 as validation dataset (1460 samples), and 2016–2018 for testing (4383 samples). All datasets are normalised by a per-variable global mean and standard deviation estimated over the training dataset. Given these datasets, we optimise all models with Adam <cit.> with a batch size of 16. The learning rate is linearly increased from 1×10^-6 to 2×10^-4 within 5 × 10^3 iterations. Afterwards a cosine scheduler is used to decrease the learning rate again up to 1×10^-6 after 10^5 iterations, where the training is stopped. All models are trained without weight decay and other regularisation techniques like dropout, and we select the models that achieve the lowest validation loss. We train the models with mixed precision in bfloat16 and evaluate in single precision. All models are trained on either a NVIDIA RTX5000 GPU (24 GB) or a NVIDIA RTX6000 GPU (48 GB); to train on the RTX5000, we use a batch size of 8 and accumulate the gradient of two iterations. All models are implemented in PyTorch <cit.> with PyTorch lightning <cit.> and Hydra <cit.>. The code will be published for the camera-ready version of the paper. To train the latent diffusion models (LDMs), we use the mean prediction of the pre-trained encoder as deterministic mapping from pixel into latent space. Before training of the LDMs, we normalise the latent space by a per-channel global mean and standard deviation estimated over the training dataset. Although the LDM needs much less memory than the autoencoder and pixel diffusion model during training, we stick to a batch size of 16 for a fair comparison. All diffusion models use the same fixed log-signal-to-ratio bounds γ_min = -15 and γ_max = 15 during training and sampling. We perform the sampling with a second-order Heun sampler in 20 integration steps and the EDM sampling noise scheduler <cit.>, extended to our bounds by truncation <cit.>. Both, the reconstructions and the generated data are compared to the testing dataset. For the reconstructions, we perform a one-to-one comparison, while we evaluate the statistics of our generated data. In total, we hence generate 4383 samples with the diffusion models. Although a smaller number than the 50000 samples normally used in validating image diffusion models, it results into a fair comparison to the testing dataset. In Appendix, we demonstrate that the dataset size is big enough to see differences in the models. § METRICS In the following we introduce the metrics to evaluate our neural networks. To evaluate the autoencoders, we apply point-wise measures, while we quantify the quality of the generated fields in a transformed space. The reconstructed samples 𝐱∈ℝ^N × K× L, and generated samples 𝐱∈ℝ^N × K× L are compared to the samples in the testing dataset 𝐱∈ℝ^N × K× L for N=4383 samples, K=5 variables, and L=127152 grid points without land. §.§ Root-mean-squared error To estimate one averaged root-mean-squared error (RMSE), we normalise the RMSE by the per-variable standard deviation σ_k, globally calculated for the k-th variable over the full training dataset. The normalised RMSE is then calculated as RMSE(𝐱, 𝐱) = √(1/N · K · L∑_n=1^N∑_k=1^K∑_l=1^L(x_n, k, l -x_n, k, l)^2/(σ_k)^2). 
The lowest (best) possible RMSE is 0 and its maximum value is unbounded. §.§ Structural similarity index measure The structural similarity index measure (SSIM) takes information from a window into account and is hence a more fuzzy verification metric than the MAE. The SSIM for the n-th sample, the k-th variable, and the l-th is given as SSIM(x_n, k, l,y_n, k, l) = (2μ^(x)_n, k, lμ^(y)_n, k, l)(2σ^(xy)_n, k, l + c_2, k)/((μ^(x)_n, k, l)^2 + (μ^(y)_n, k, l)^2 + c_1, k)((σ^(x)_n, k, l)^2 + (σ^(y)_n, k, l)^2 + c_2, k). The means μ^(x)_n, k, l and μ^(y)_n, k, l, variances σ^(x)_n, k, l and σ^(y)_n, k, l, and covariance σ^(xy)_n, k, l between x and y, respectively are estimated in the given 7×7 window for each sample, variable, and grid point. The stabilisation values c_1, k=(0.01 · r_k)^2 and c_2=(0.03 · r_k)^2 depend on the dynamic range for the k variable with r_k = max(𝐱_k) - min(𝐱_k) as the difference between the maximum and minimum in the training dataset. The global score for the SSIM is the average of Eq. (<ref>) across all samples, variables, and grid points SSIM(𝐱, 𝐱) = 1/N · K · L∑_n=1^N∑_k=1^K∑_l=1^LSSIM(x_n, k, l,x_n, k, l). The highest possible SSIM is 1 and the lowest 0. §.§ Sea-ice extent accuracy By applying diffusion models in a latent space, we can incorporate prior knowledge into the autoencoder which maps to and from the latent space. To determine the need of incorporating this knowledge into the autoencoder, we evaluate the accuracy of reconstructing where the sea ice is. While the accuracy might be unneeded for data generation, it can be a necessity for different tasks like surrogate modelling. In neXtSIM, sea-ice processes are activated or deactivated based on the sea-ice thickness, and we estimate the sea-ice extent based on the sea-ice thickness. Choosing a small threshold value of SIT_𝗆𝗂𝗇 = 0.01 m, we classify values above this threshold as sea ice and below as no ice (similar results can be achieved by using the sea-ice concentration). Based on this classification, we estimate the average accuracy across all samples and grid points. The accuracy is the sum of the true positive and true negative divided by the total number of cases. This accuracy is the same as 1-IIEE, where IIEE is the integrated ice edge error <cit.> estimated based on the sea-ice thickness instead of the sea-ice concentration. The highest possible accuracy is 1 and the lowest 0. §.§ Fréchet distance in latent space If we generate data, we have no one-to-one correspondence between samples from the testing dataset and generated samples. To evaluate the generated statistics, we employ a latent space spanned by an independently trained variational autoencoder. The variational autoencoder follows the general structure of our trained autoencoders with β=1. As the evidence lower bound is maximised, we can expect that this variational autoencoder encodes spatial and semantic information into the latent space. Additionally, the KL-divergence []q(𝐳_x𝐱)[]0 regularises the latent space towards an isotropic Gaussian distribution. Hence, we assume that the latent space is distributed with an isotropic Gaussian. To emulate how image generators are validated with the Fréchet Inception distance (FID, <cit.>), we employ the Fréchet distance in latent space and call this Fréchet autoencoder distance (FAED). Specifically, we use the mean prediction of the encoding part enc(𝐱) as deterministic mapping into latent space. 
The Fréchet autoencoder distance reads then FAED(𝐱, 𝐱) = ‖μ(enc(𝐱)) - μ(enc(𝐱))‖_2^2 + Tr(Σ(enc(𝐱))+Σ(enc(𝐱))-2(Σ(enc(𝐱))·Σ(enc(𝐱)))^1/2), where μ(·) is the point-wise sample mean, Σ(·) the sample covariance, Tr(·) the trace of a matrix, and (·)^1/2 the matrix square-root. The isotropic Gaussian assumption in latent space allows us to further simplify the estimated covariances matrix to variances, and the Fréchet autoencoder distance reduces to FAED(𝐱, 𝐱) = ‖μ(enc(𝐱)) - μ(enc(𝐱))‖_2^2 + ‖σ(enc(𝐱)) - σ(enc(𝐱))‖_2^2, with σ(·) as point-wise standard deviation. The minimum (best) value of the Fréchet autoencoder distance is 0 while its maximum value is unbounded. §.§ Root-mean-squared error of the sea-ice extent To evaluate the climatological consistency of the generated samples for the sea-ice extent, we can estimate the probability p_l that the l-th grid point is covered by sea ice. To classify for the n-th sample if a grid point is covered, we again apply the threshold of SIT_min = 0.01 m. The probability is given as p_l = 1/N∑^N_n=1I(SIT_n, l≥SIT_min) with I(·) as indicator function and SIT_n, l the sea-ice thickness of the n-th generated sample for the l-th grid point. The root-mean-squared error (RMSE) in the sea-ice extent reads then RMSE_SIE(𝐱, 𝐱) = √(1/L∑^L_l=1 (p_l-p_l)^2) with p_l as probability in the testing dataset. The minimum (best) RMSE is 0, while the worst possible RMSE is 1. § ADDITIONAL RESULTS In the Appendix, we present additional results that signify the results we have found. We will show how the hyperparameters influence the autoencoder reconstruction, how censoring helps to improve the sea-ice extent representation, and how the diffusion model generalises across experiments. §.§ Ablation for autoencoder In our formulation for the pre-training of the autoencoder, the autoencoder has several hyperparameters. One of the them is the number of channels N_latent in latent space. The more channels, the more information can be stored in the latent space. However, more channels can make the diffusion model more difficult to train. Another hyperparameter is the factor β which determines the strength of the regularisation in latent space. The larger the factor, the smoother the latent space, but also the higher the semantic compression therein. Throughout the main manuscript, we have made the decision of N_latent=16 and β=10^-3 to strike the right balance. In Tab. <ref>, we show an ablation study on these factors. Different from the experiments in the main manuscript, these experiments were performed with NVIDIA A100 GPUs on the Jean-Zay supercomputing facilities, provided by GENCI. With our chosen hyperparameters, the autoencoder results into a 16.4-fold compression, which is lower than the 20-fold compression from data dimensions only as the dimensionality of the masking is also reduced. Altering this compression rate by changing the number of channels in latent space has a larger impact on the RMSE and the SSIM than the regularisation factor. In Fig. <ref>, we additionally show the energy spectrum for the sea-ice thickness if we increase the number of channels or lower the regularisation factor. Once again, changing the regularisation has almost no impact on the averaged spectrum, while a larger latent space can seemingly retain more small-scale features. Hence, if the regularisation is small enough, the compression rate determines how much small-scale features are retained through the latent space. To reduce the smoothing, we can reduce the compression rate. 
Since the smoothing is a result of double penalty effects caused by the point-wise comparison in the reconstruction loss, we however need other tools to completely remedy the smoothing. Furthermore, by having more channels in latent space, the latent diffusion model can become more difficult to optimise <cit.>. §.§ Influence of the generated dataset size on the diffusion scores To evaluate our diffusion models, we use 4383 samples, while the recommendation to evaluate generative models for images is 50000 samples. In Table <ref>, we compare the performance of the same latent diffusion model without censoring if we alter the random seed for the generation of the dataset. While there are some small differences between different seeds, these differences are smaller than the differences shown in Tab. <ref>. Based on this analysis, we can conclude that the dataset size is large enough to evaluate differences between diffusion models with the proposed scores. Diffusion models tend to have a high volatility in the scores during training <cit.>. Normally used to stabilise the diffusion model, we restrain from applying exponential moving averages to estimate the scores of the diffusion model. On the one hand, this allows us to establish the performance of diffusion models without such tricks. On the other hand, the differences between the models for the FAED in Tab. <ref> might be due to the high volatility. §.§ Censoring In Table <ref>, we show that censoring helps to improve the representation of the sea-ice extent and reduces the RMSE of the sea-ice extent. Here, we disentangle a bit how we obtain this improvement. In Fig. <ref>, we show the binarized probability that a grid point is covered by sea ice in the generated dataset dependent on the probability in the testing dataset. The optimal probability would be on the one-to-one line, reaching for each grid point the same probability in the generated dataset. In the validation dataset, the probabilities are slightly overestimated. The difference is likely caused by the different dataset size (one year in validation and three years in testing). As for the validation dataset, the diffusion model in data space and the latent diffusion model both overestimate the probabilities. Since this overestimation is larger than for the validation dataset, it is likely because of the failure to represent the sea-ice extent correctly. In contrast to the other diffusion models, the diffusion model with censoring tends to underestimate the probabilities. The RMSE in Tab. <ref> is reduced as the underestimation for the censored model is smaller than the overestimation for the other diffusion models. Consequently, we have established here that censoring helps to alleviate the overestimation bias of the diffusion models. Its further use for other downstream tasks remains to be seen. §.§ Uncurated data samples In Fig. <ref>, we show uncurated samples from our diffusion models and reference samples from neXtSIM. As discussed in the main manuscript and shown in Fig. <ref>, the latent diffusion models result into smoothed fields compared to the diffusion model in data space and neXtSIM. While the derived deformation field of the diffusion model (d) shows small-scale structures as they can be seen in neXtSIM (p), the thickness and concentration remain too noisy. One possible explanation could be that the velocities are easier to generate as they are continuous, whereas the sea-ice thickness can be rather represented by discrete-continuous behaviour. 
Since we used a rather low number of integration steps (20) without much tuning, such fine-scale structure could likely be better represented with a better-tuned sampler and/or a diffusion model trained on more data.
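To make the evaluation protocol concrete, the following is a minimal NumPy sketch of the simplified Fréchet autoencoder distance and the sea-ice-extent scores defined above; the array shapes, threshold constant, and function names are illustrative and not taken from the paper's code.

```python
import numpy as np

SIT_MIN = 0.01  # sea-ice thickness threshold [m] used to define the ice extent

def faed(z_test: np.ndarray, z_gen: np.ndarray) -> float:
    """Simplified Frechet autoencoder distance under the isotropic-Gaussian
    assumption: compare point-wise means and standard deviations of the
    encoded test and generated samples (arrays of shape (N, D))."""
    mu_t, mu_g = z_test.mean(axis=0), z_gen.mean(axis=0)
    sd_t, sd_g = z_test.std(axis=0), z_gen.std(axis=0)
    return float(np.sum((mu_t - mu_g) ** 2) + np.sum((sd_t - sd_g) ** 2))

def sie_accuracy(sit_true: np.ndarray, sit_pred: np.ndarray) -> float:
    """Sea-ice extent accuracy (= 1 - IIEE based on thickness): fraction of
    grid points where the binary ice/no-ice classification agrees."""
    return float(((sit_true >= SIT_MIN) == (sit_pred >= SIT_MIN)).mean())

def sie_rmse(sit_test: np.ndarray, sit_gen: np.ndarray) -> float:
    """RMSE between the per-grid-point probabilities of ice cover in the
    testing and generated datasets (inputs of shape (N, H, W))."""
    p_test = (sit_test >= SIT_MIN).mean(axis=0)
    p_gen = (sit_gen >= SIT_MIN).mean(axis=0)
    return float(np.sqrt(np.mean((p_test - p_gen) ** 2)))
```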
http://arxiv.org/abs/2406.19120v1
20240627120527
QOS: A Quantum Operating System
[ "Emmanouil Giortamis", "Francisco Romão", "Nathaniel Tornow", "Pramod Bhatotia" ]
quant-ph
[ "quant-ph", "cs.OS" ]
emmanouil.giortamis@tum.de TU Munich Germany francisco.romao@tum.de TU Munich Germany nathaniel.tornow@tum.de TU Munich and Leibniz Supercomputing Centre Germany pramod.bhatotia@tum.de TU Munich Germany § ABSTRACT We introduce the Quantum Operating System (QOS), a unified system stack for managing quantum resources while mitigating their inherent limitations, namely their limited and noisy qubits, (temporal and spatial) heterogeneities, and load imbalance. QOS features the compiler—a modular and composable compiler for analyzing and optimizing quantum applications to run on small and noisy quantum devices with high performance and configurable overheads. For scalable execution of the optimized applications, we propose the runtime—an efficient quantum resource management system that multi-programs and schedules the applications across space and time while achieving high system utilization, low waiting times, and high-quality results. We evaluate on real quantum devices hosted by IBM, using 7000 real quantum runs of more than 70.000 benchmark instances. We show that the compiler achieves 2.6–456.5× higher quality results, while the runtime further improves the quality by 1.15–9.6× and reduces the waiting times by up to 5× while sacrificing only 1–3% of results quality (or fidelity). QOS: A Quantum Operating System Pramod Bhatotia =============================== plain § INTRODUCTION Quantum Cloud Computing Quantum computing promises to solve computationally intractable problems with classical computers <cit.>. Thanks to remarkable technological advances in materials science and engineering <cit.>, quantum hardware has become a reality in the form of quantum processing units (QPUs) that consist of quantum bits (qubits) <cit.>. Interestingly, QPUs are now readily available in a quantum-as-a-service fashion offered by all major cloud providers <cit.>. While quantum hardware is now a reality, the associated quantum software systems are rudimentary. These QPUs face classical OS challenges that our systems community has tackled in the past, including scalability, performance, efficiency, faults (a.k.a. errors), scheduling, and utilization <cit.>. Unfortunately, no operating system exists to tackle these challenges holistically for modern quantum hardware. Fundamental Challenges of QPUs A natural tendency would be to treat these QPUs as yet another accelerator class (e.g., GPU, TPU, FPGA) and manage them as accelerator as-a-service to offload compute-intensive tasks. We argue that this approach might be sub-optimal or even flawed! The reason is that QPUs present fundamentally unique hardware-level challenges that the systems community has not considered and cannot be directly mapped to classical accelerator-oriented computing (we empirically detail these hardware challenges in  <ref>). In particular, QPUs operate in the NISQ-fashion (Noisy Intermediate-Scale Quantum <cit.>), leading to a non-deterministic computing platform, where even two QPUs with identical qubits exhibit completely different behaviors across space and time <cit.>. More specifically, QPUs are inherently noisy and small in computational capacity <cit.>, which limits the size of the problems they can solve. Second, the degree of noise differs across QPUs, even of identical architecture and model, making it difficult to decide which QPUs should execute a quantum program without compromising performance <cit.>. 
In addition, we can not trivially multi-program multiple quantum programs on the same QPU to increase utilization since QPU qubits can interfere with each other in undesirable and unpredictable ways <cit.>, severely degrading performance <cit.>. Finally, it is generally impossible to save or copy a quantum program during execution <cit.>, which further limits scheduling opportunities for preemption or resource sharing in general. State-of-the-Art of Quantum Software Systems The current state of software can be roughly compared to IBM mainframe batch OSes from the 60s, where the QPUs are managed through rudimentary interfaces. Researchers have proposed specialized approaches to address some of the aforementioned OS and QPU challenges individually, for instance, performance <cit.>, multi-programming <cit.>, or scheduling <cit.>. Unfortunately, these proposed approaches are designed to solve an individual issue, which prevents them from being composed together or with other OS mechanisms to create a holistic software stack. To leverage quantum computing practically, we must address the key challenge of combining such mechanisms in a unified software stack for quantum computing. Novelty However, designing a unified system stack that supports general OS abstractions while addressing the QPU challenges is not trivial. The system should support cross-stack software mechanisms, from the compiler level for quantum program optimization to the runtime level for QPU resource management. More specifically, we require a modular and extensible compiler infrastructure for increasing execution quality, multi-programming for increased utilization, and scheduling for load balancing, all in the presence of QPU noise and heterogeneities. This way, the system can achieve the cloud users' goals, i.e., high-quality quantum computation and low waiting times, and the quantum cloud operator's goals, i.e., QPU resource efficiency and scalability. : A Unified System Stack for Quantum Computing We propose , an end-to-end system for holistically tackling quantum computing challenges. provides a unified architecture for supporting compiler and OS mechanisms with pluggable and configurable policies. In , we implement such policies to achieve the aforementioned users' and operator's goals. To achieve this, builds on a unified abstraction and comprises two main components: * The Qernel Abstraction: We introduce the Qernel abstraction that acts as a common denominator for the mechanisms to apply their policies ( <ref>). A Qernel contains the Qernel intermediate representation (QIR) and static and dynamic properties, leveraged by the components to apply their policies.
* Compiler: We introduce the compiler ( <ref>, <ref>, <ref>), an extensible and modular compiler workflow that leverages the QIR and static properties to optimize quantum programs for increased execution quality. * Runtime: We present the runtime ( <ref>), a scalable system for QPU resource efficiency. The system offers automated QPU selection to abstract heterogeneity away, multi-programming to increase QPU utilization, and load-aware scheduling to achieve low waiting times while maintaining high execution quality. We implement in Python by building on the Qiskit framework <cit.>. We evaluate on IBM's 27-qubit QPUs <cit.>, using a dataset of more than 7000 quantum runs and 70.000 state-of-the-art quantum benchmark instances used in popular quantum algorithms <cit.>. Our evaluation shows that the compiler improves the quantum program properties by 51% on average, which leads to 2.6–456.5× improvement in the quality of the results, depending on the problem size ( <ref>). The runtime increases the quality of the results by 1.15–9.6× for the same target utilization ( <ref>) and reduces the waiting times by 5× while sacrificing at most 3% of the quality of the results ( <ref>).
§ BACKGROUND §.§ Quantum Computing 101: An Example Let us understand the basics of quantum computing using the classic max-cut problem. This simple combinatorial optimization problem is expressed in the quantum world as the Quantum Approximate Optimization Algorithm (QAOA) <cit.>.
Figure <ref> shows a high-level example of how QAOA solves a max-cut problem of the input graph of (a). To solve it, the problem must be first encoded as a quantum circuit (Figure <ref> (b)), which consists of quantum bits (qubits) and quantum gates that exhibit quantum mechanical properties. Here, we use as many qubits as the number of nodes of the input graph, where each qubit q_i corresponds to a graph vertex i. To change the state of the qubits, we apply quantum gates over time, from left to right. There are two types of gates: 1-qubit gates (e.g., NOT gate) and 2-qubit gates (e.g., XOR gate). Finally, at the end of the circuit, we measure each qubit to read its value (0 or 1), which gives bitstrings as output. Unlike classical circuits, which operate deterministically, quantum circuits are inherently probabilistic. The reason is that qubits exhibit quantum mechanical properties, such as superposition. In the superposition state, the qubit is not 0 or 1, but it is both simultaneously (recall Schrodinger's cat experiment <cit.>). Therefore, quantum gates also have probabilistic effects; we can't know the result until the final measurements (i.e., open the box and check the cat's state). To obtain a meaningful result, we execute the circuit in many trials (“shots”), with each trial providing a specific bitstring from the qubit measurement. The solution of the quantum calculation is, therefore, a probability distribution over all possible bitstrings of the measured qubits (Figure <ref> (c)). In our example, the result of the final execution of the quantum circuit gives a probability distribution that represents the solutions of the max-cut problem. High probability maps to the solution, while low (∼ 0) does not represent a solution. Figure <ref> (d) shows a solution for our example. It corresponds to the bitstring with the highest probability, , which means that we have measured 1 for the qubits q_0, q_1, and q_4; therefore, a partition contains vertices {0,1,4}. §.§ Technical Foundations Execution Model The technology and engineering required to build QPUs renders them an expensive resource, thus, QPUs are mainly offered in the cloud as a quantum-as-a-service model <cit.>. To run quantum programs, users typically write circuit-level code (Figure <ref> (a)), which then transpile on the QPU to make it executable, send it to the cloud for execution, and finally get the results back. Specifically, the transpilation process performs three key steps: (1) converting the gates of the circuit to the native gate set of the QPU, (2) mapping the logical qubits of the circuit to the physical qubits of the QPU, (3) routing the qubits to the physical qubits with restrictive connectivity by inserting SWAP gates. Figure <ref> (b) shows the physical layout of an IBM Falcon QPU. Vertices are the physical qubits, and the edges capture their connectivity, i.e., between which qubits we can apply 2-qubit gates. Figure <ref> (c) shows the physical circuit after transpilation with the QPU's noise characteristics, which we detail next. QPU Characteristics Today's QPUs are described as noisy intermediate-scale quantum (NISQ) devices <cit.> since they exhibit low qubit numbers (e.g., up to a few 100s <cit.>) and are susceptible to hardware and environmental noise. Specifically, when measuring a qubit, there is a chance to read the opposite value, and when applying gates, there is a chance the gate performs a wrong operation <cit.>. 
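To make the execution model above concrete, the following is a minimal sketch of the build / transpile / execute loop, assuming Qiskit and the Aer simulator are installed (API names follow recent Qiskit releases and may differ across versions; the simulator merely stands in for a real QPU backend).

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# 3-qubit toy circuit: superposition + entangling gates + measurements.
qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 1)
qc.cx(1, 2)
qc.measure_all()

backend = AerSimulator()  # stand-in for a real QPU backend

# Transpilation: convert to the backend's native gates and map/route the
# logical qubits onto physical qubits (steps (1)-(3) described above).
physical_qc = transpile(qc, backend=backend, optimization_level=3)

# Execute many trials ("shots"); the result is a distribution over bitstrings.
counts = backend.run(physical_qc, shots=4096).result().get_counts()
print(counts)  # e.g. roughly half '000' and half '111', up to sampling noise
```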
On top of that, when qubits are left idle (no gates applied) for more than a few hundred microseconds, the superposition decoheres to the |0⟩ state <cit.>, similar to resetting a register to 0. Lastly, qubits destructively interfere with each other via crosstalk effects <cit.>. Figure <ref> (c) shows qubits Q_2 and Q_3 that influence each other via crosstalk, noisy gates, qubit Q_5 that is left idle for long enough to decohere, and noisy measurements. QPU Heterogeneity Additionally, QPUs are vastly heterogeneous across space and time, unlike classical accelerators. Across space, QPUs vary in terms of technology, e.g., superconducting qubits <cit.> or trapped ions <cit.>, architectures of the same technology, e.g., Falcon or Osprey superconducting QPUs <cit.>, and noise properties even for the same architecture <cit.>, e.g., two identical QPUs exhibit different noise errors, etc. Across time, the QPUs are calibrated regularly to maintain their performance <cit.>, a process that generates calibration data. These data quantify the noise errors, and change after each calibration cycle unpredictably. Execution Quality Lastly, to measure the quality of a circuit execution on NISQ QPUs, we use the fidelity metric <cit.>, which measures the similarity between the noisy probability distribution and the ideal probability distribution that noiseless, ideal QPUs can obtain. Fidelity is a number in the [0, 1] range, where a higher fidelity means a better quality result. § MOTIVATION AND KEY IDEAS To motivate , we present a set of unique challenges that distinguish QPUs from classical accelerators. We categorize our findings into four challenges that must be addressed to improve the practicality of quantum computing: fidelity, utilization, spatial and temporal heterogeneities, and load imbalance. The experimental methodology used is the same for the final system evaluation and is explained in detail in  <ref>. §.§ Fidelity Executing quantum programs with high fidelity is challenging since QPUs are characterized by relatively small numbers of qubits and noise, which leads to computation errors ( <ref>). As the number of qubits and gates in a quantum circuit increases, the noise errors accumulate and the overall fidelity decreases. Results Our results are highlighted in Figure <ref> (a). The x axis depicts the circuit size as the number of qubits while the y axis shows the fidelity, where higher is better. The experiment is run on the IBM Kolkata 27-qubit QPU. For each increase in qubits, the average fidelity decreases, up to 98.9% from 4 to 24 qubits. Moreover, it is physically impossible to run circuits with a size larger than 27 qubits, since we cannot map them. Implication NISQ devices are limited due to size and noise and, therefore, cannot be practically used for large quantum circuits; either logically, because the circuit doesn't fit in the device, or the execution results would be convoluted from noise errors, which translates to low fidelity. Key Idea #1: Circuit Optimizations: To increase fidelity, we need a generic optimization infrastructure that transforms circuits into a physically and practically executable size. §.§ Spatial and Temporal Heterogeneity In the classical domain, two identical CPUs perform similarly for all applications, and at each point in time. In contrast, QPUs exhibit differences in the layout and connectivity of qubits <cit.> and variations in noise errors even for QPUs of the same model, which leads to spatial performance variance. 
Moreover, QPUs are calibrated regularly ( <ref>), and after each calibration, the noise properties change <cit.>. As a result, the execution fidelity can vary across different calibration cycles, leading to temporal performance variance. Results Figure <ref> (b) shows a 12-qubit GHZ circuit's fidelity on different IBM QPUs. Fidelity varies across the QPUs, with a maximum difference of 38% from best to worst. Note that all six QPUs are of the same model (Falcon r5.11). Figure <ref> (a) shows a 6-qubit GHZ circuit's fidelity over 120 calibration days executed on the IBM Perth 7-qubit QPU, where each data point represents a single day's fidelity. The largest single-day difference in fidelity is 96.5%, and there are 20 instances of a single-day fidelity drop of more than 5%. Note that there is no way of predicting a QPU's future calibration data to expect such performance differences. Implications Due to structural differences across QPUs, quantum circuits perform differently across them. Additionally, there is a high degree of temporal performance variance across calibration cycles, as the fidelity might change significantly from day to day with no discernible pattern. Key Idea #2: Performance Estimation: We estimate a circuit's potential performance on the available QPUs to automatically select the best-performing candidate(s). §.§ Utilization The fidelity of circuits decreases as their size increases ( <ref>), and as a result, it becomes more challenging to utilize a QPU effectively. In contrast to the classical domain, where a CPU can be fully utilized, to get high-fidelity results in the quantum domain, we necessarily under-utilize QPUs. Results Figure <ref> (b) shows the maximum utilization of the IBM Kolkata 27-qubit QPU for nine benchmarks while maintaining at least 0.75 fidelity. No benchmark exceeds 30% utilization, while the average is 26.3%. Higher fidelity values would yield even lower utilization and vice-versa. Implications There is a tradeoff between QPU utilization and performance (fidelity). In general, the lower utilization, the higher fidelity, and vice-versa. In contrast to the classical domain, the tension between these objectives is vastly larger. Key Idea #3: Multi-programming: We spatially multiplex quantum circuits to increase system utilization (also known as multi-programming <cit.>), and when combined with circuit optimizations, it also increases fidelity. §.§ QPU Load Imbalance The quantum cloud faces QPU load imbalance. The root cause is spatiotemporal heterogeneity ( <ref>), combined with the manual QPU selection offered by the current quantum cloud model <cit.>. This leads to users selecting the “best performant” QPU based on empirical or arbitrary metrics <cit.>. Results Figure <ref> (c) shows the average number of pending jobs for different IBM QPUs across October 2023. The groups of QPUs (separated by the red dashed line) have a size of 7, 27, and 127 qubits, respectively. There is a 49×, 57×, and 5.7× maximum load difference across the groups, respectively. Implications Load imbalance leads to long waiting times for the users and thus, low quality of service. Additionally, there is no 1-1 mapping between the load and performance differences between QPUs. For instance, the 12-qubit GHZ circuit in Figure <ref> (b) performs 1.1× better on IBM Hanoi than IBM Cairo, yet the former exhibits 57× higher load. 
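All of the motivation results above are reported in terms of this fidelity metric. As one common instantiation (the paper defers its exact definition to the cited reference), the Hellinger fidelity between the measured and ideal bitstring distributions can be computed directly from normalized counts; the example histogram below is made up.

```python
from math import sqrt

def hellinger_fidelity(p_noisy: dict, p_ideal: dict) -> float:
    """Fidelity in [0, 1] between two discrete bitstring distributions,
    given as {bitstring: probability} dictionaries."""
    keys = set(p_noisy) | set(p_ideal)
    bc = sum(sqrt(p_noisy.get(k, 0.0) * p_ideal.get(k, 0.0)) for k in keys)
    return bc ** 2  # squared Bhattacharyya coefficient

# Example: ideal 12-qubit GHZ output vs. an illustrative noisy histogram.
ideal = {"0" * 12: 0.5, "1" * 12: 0.5}
noisy = {"0" * 12: 0.41, "1" * 12: 0.38,
         "000000000001": 0.11, "011111111111": 0.10}
print(round(hellinger_fidelity(noisy, ideal), 3))
```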
Key Idea #4: Load-aware Scheduling: We schedule (temporally multiplex) quantum circuits in a load-aware manner to balance the tradeoff between fidelity and waiting times. § OVERVIEW Quantum computing is characterized by four main challenges that limit its practicality: (1) Execution fidelity is hindered by the small and noisy QPUs. (2) In contrast to classical accelerators, QPUs exhibit vast spatiotemporal heterogeneities, which renders their performance non-deterministic in both dimensions. (3) QPUs are heavily underutilized to give high-fidelity results. (4) QPUs face vast load imbalance, which leads to prolonged waiting times for the users. Existing work is narrow-scoped and focuses on tackling one challenge at a time, but unfortunately, there are two main issues with this point solution approach. Firstly, composing the individual mechanisms to address all challenges at once is impossible without a common and unified infrastructure. Secondly, without synergies between the individual mechanisms, it is hard to maximize the objectives of the users, i.e., high fidelity and low waiting times, and the objectives of the quantum cloud operator, i.e., resource efficiency. To this end, we propose , an end-to-end system that tackles the challenges of quantum computing holistically. strives for three design goals: (1) A unified architecture that supports compiler and OS mechanisms with pluggable policies and tunable configuration for managing the tradeoffs of QC. (2) should enable the execution of large quantum circuits with high fidelity and scale with increasing incoming workloads and additional QPUs. (3) should be resource efficient by achieving high QPU utilization and balancing QPU load to minimize waiting times. §.§ The Architecture Figure <ref> shows the overview of our system's design. comprises a layered architecture that consists of two main components: the compiler (top) and the runtime (bottom), which we detail next. Qernel Abstraction implements a wide range of mechanisms with different abstraction requirements, from the compilation to the execution runtime level. To enable the composability of these mechanisms in a unified architecture, we propose the Qernel abstraction that acts as a common denominator for the mechanisms to apply their policies. Compiler We propose the compiler (Figure <ref>, top), a modular, extensible, and composable compiler infrastructure.
It comprises three stages: (1) The frontend of the compiler, the analyzer ( <ref>), accepts quantum circuits and lifts them to the Qernel abstraction, generates the intermediate representation (IR), and performs IR analysis passes to generate the IR static properties required by the next stages. (2) The middle-end, the optimizer ( <ref>), is an extensible and composable set of optimization passes that leverages the IR and static properties to improve the execution fidelity of the quantum circuits with manageable overheads. (3) The backend, the virtualizer ( <ref>), compiles the optimized Qernels for the target QPUs, similar to classical target code generation. Runtime We propose the runtime (Figure <ref>, bottom), a system that abstracts away the underlying heterogeneity and balances the tradeoff between the conflicting objectives of the cloud operator (resource efficiency) and the users (high fidelity and low waiting times). The runtime comprises four components: (1) The estimator predicts the fidelity of executing the optimized Qernels to guide scheduling decisions. (2) The multi-programmer, given the estimations, bundles low utilization Qernels to increase QPU utilization. (3) The scheduler multiplexes and runs the Qernels across space and time with the objective to maximize fidelity and minimize waiting times. Finally, (4) the knitter post-processes the Qernel execution results to return the final result to the user. §.§ Execution Workflow First, users submit a circuit along with their optimization target and budget 1. The former represents the desired post-compilation circuit size, and the latter quantifies the additional overheads the user is willing to pay. The compiler's frontend lifts the circuit to the Qernel abstraction and generates the IR and its static properties 2. The middle-end optimizes the Qernels through a modular set of passes 3, then the backend generates the target QPU-optimized Qernels and submits them to the runtime 4. The estimator predicts the fidelity of running the Qernel(s) on the QPUs to guide scheduling 5. The multi-programmer bundles Qernels with low utilization and sends them to the scheduler 6. The scheduler assigns and runs the bundled Qernels, optimizing for maximal fidelity and minimal waiting times 7. After the execution, the bundled results are retrieved by the multi-programmer 8 to be unbundled into separate results and are sent to the knitter 9. Finally, the knitter post-processes the separated results pass and returns them to the user 10. § COMPILER FRONTEND: QERNEL & ANALYZER §.§ The Qernel Abstraction The Qernel is the unified abstraction acting as a common denominator for the components. Specifically, a Qernel contains (1) the graph-based IR used by the compiler and (2) the Qernel properties, which comprises static IR properties and dynamic properties used by the runtime. Qernel Intermediate Representation (QIR) Existing optimization techniques operate at the gate level, analogous to the instruction level in the classical domain. Therefore, we propose a graph-based Qernel Intermediate Representation (QIR) that captures the control flow of a quantum program, similar to the control flow graph of classical programs. 
By traversing the QIR, the compiler can identify important optimization opportunities, such as pairs of gates that cancel each other (like dead code elimination), gate dependencies (useful for gate scheduling, similar to instruction scheduling), or opportunities to remove hotspot gates (gates that contribute to noise errors in the computation). An example QIR is shown in Figure <ref> (b), where the quantum circuit consisting of four qubits and five gates is lifted to the QIR. Formally, a QIR is a directed acyclic graph (DAG) G=(V,E), where V is the set of gates and every edge e_i ∈ E is the qubit the gate acts on. The edges' directions reflect dependencies D={(V_i,V_j) ∈ V × V} between gates, i.e., V_i must be scheduled before V_j. To identify hotspot nodes, we compute the degree of a node deg(V_i), which reflects the number of control flow paths the gate V_i is part of. In the example of Figure <ref>, the QIR reveals four layers of gates: l_1:(g_0, g_1), l_2:g_2, l_3:(g_3, g_4), and l_4:M, which means that they have to be scheduled in this order, and the pairs in the same layers are susceptible to crosstalk noise ( <ref>). Finally, the gate g_2 is a hotspot node since its degree is 4, the highest in V (the measurements M are terminal nodes and do not count). Refined QIR Various optimizations require a simplified representation that captures only the connectivity between qubits. For instance, the connectivity structure might reveal hotspot qubits <cit.> that can be removed for fidelity improvement or opportunities for circuit cutting ( <ref>). Figure <ref> (c) shows the refined QIR, where we can see that q_1 and q_2 are only connected by a single gate, and removing it would split the circuit into two smaller circuits. Formally, a refined QIR is an acyclic, undirected, and weighted graph G=(V,E), where V is the set of qubits and every edge e_i ∈ E between two qubits V_j, V_k has a weight w_i ∈ℕ that represents the number of gates that act on V_j and V_k. §.§ Qernel Properties For applying the diverse set of its mechanisms, requires data structures that keep up-to-date information about the quantum programs. Such information includes (1) the static properties, which are useful for the compiler, and (2) dynamic properties, which are useful for the runtime. Static Properties Apart from the IR, optimization passes leverage circuit properties to be more efficient and effective. The properties include the circuit's size (number of qubits), depth, non-local gates and their types, the number of measurements, and others (Figure <ref> (d)). Additionally, we include the features vectors defined in <cit.> since they are potentially useful for heuristic-based optimizations or regression-based prediction models <cit.>. Dynamic Properties The Qernel also contains dynamic properties required by the runtime. These include the Qernel's execution status (), the estimator's output, i.e., fidelity estimations ( <ref>), and the final post-processed results (Figure <ref> (d)). §.§ Frontend: Analyzer To discover and leverage optimization opportunities, we first need to perform circuit analysis, similar to classical program analysis. The analyzer transforms a quantum circuit into a Qernel and comprises an extensible set of passes that generate the QIR and static properties of the Qernel, which are then used by the optimizer and subsequently by the runtime. QIR Transformation Pass The first step in program analysis and optimization is to generate the QIR, implemented by the QIR transformation pass. 
Figure <ref> (a)-(b) shows the generation of the QIR for an example quantum circuit. To generate the QIR, the pass iterates over each logical qubit of the circuit and each gate acting on that qubit. For each such gate, it creates a QIR vertex g_i and sets the qubits q_j, q_k the gate acts on as the edges e_j, e_k of g_i. By convention, the direction of the edge follows the direction from the control to the target qubits. When reaching measurement operations, it simply adds the terminal nodes M. This process is repeated until all circuit qubits and gates are covered. QIR Refinement Pass To generate the refined QIR (Figure <ref> (c)), we implement a transformation pass that traverses the QIR in a depth-first manner (breadth-first is equivalent). For each QIR node visited, i.e., a vertex g_i with a pair of edges q_j,q_k, it checks the current refined QIR for existing nodes with the same name. If true, it increments the weight of the edges between the nodes by one. Otherwise, it adds q_j or q_k or both as new nodes in the refined QIR and connects them with a weight of one. Analysis Passes We implement passes that analyze the QIR to identify optimization opportunities, such as gate dependencies (), hotspot nodes (), and graph isomorphism (). We also implement properties passes that traverse the QIR to generate the Qernel static properties ( <ref>) and comprise the BasicAnalysisPass and SupermarqFeaturesPass. Specifically, the former generates the key circuit properties while the latter computes the six feature vectors defined in <cit.>, as explained in  <ref>. We show how the optimizer uses the information obtained from these passes in  <ref>. § COMPILER MIDDLE-END: OPTIMIZER QPUs comprise up to only a few 100s qubits, which sets the physical limit for circuit size and are noisy, which sets a practical limit to high fidelity execution ( <ref>). To increase the scalability of circuits that run with high fidelity, we need a modular, extensible, and composable optimizer. Modular to support adding/removing optimization passes or changing their relative order, extensible to add new passes, and composable to chain the optimization improvements of the individual passes. Such optimization passes are the circuit compaction techniques that, as the name suggests, reduce the circuit size, i.e., the number of qubits, rendering it executable on small QPUs and at the same time, also simplify the circuit structure, i.e., remove noisy gates. Notably, we can compose more than one of these techniques to achieve even better results. The compiler's middle-end, the optimizer, is a composable pipeline of transformation passes that compact the QIR to increase the scalability of quantum circuits running with high execution fidelity. Challenges However, it is not trivial to implement such a pipeline. Currently, there is a plethora of individual compaction techniques that require their own sub-systems to operate, with no common infrastructure to compose them. Additionally, as we will show later, some techniques spawn an exponential number of sub-circuits (Table <ref>) and, after execution, require post-processing using classical hardware. We pose two questions: (1) How are the (exponentially) spawned circuits from different techniques handled? (2) How can we manage the tradeoff between fidelity improvement and exponential overheads from different techniques? 
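Before describing our approach, the analyzer passes above can be summarized in a simplified sketch that builds a gate-dependency DAG (QIR) and a weighted qubit-connectivity graph (refined QIR) from a plain gate list. The data structures and function names are illustrative rather than the actual QOS implementation, and dependency edges stand in for the paper's qubit-labelled edges.

```python
from collections import defaultdict
from itertools import count

# Toy circuit as a list of (gate_name, qubits) tuples, in program order.
circuit = [("h", (0,)), ("cx", (0, 1)), ("cx", (1, 2)), ("cx", (2, 3)), ("rz", (3,))]

def build_qir(circuit):
    """QIR: a DAG whose nodes are gates and whose edges encode dependencies
    induced by acting on the same qubit (earlier gate -> later gate)."""
    node_id = count()
    nodes, edges = {}, []
    last_on_qubit = {}                      # qubit -> most recent gate node
    for name, qubits in circuit:
        gid = next(node_id)
        nodes[gid] = (name, qubits)
        for q in qubits:
            if q in last_on_qubit:
                edges.append((last_on_qubit[q], gid))  # dependency edge
            last_on_qubit[q] = gid
    return nodes, edges

def refine_qir(circuit):
    """Refined QIR: an undirected, weighted qubit-connectivity graph; the
    weight counts the two-qubit gates acting on each qubit pair."""
    weights = defaultdict(int)
    for _, qubits in circuit:
        if len(qubits) == 2:
            weights[tuple(sorted(qubits))] += 1
    return dict(weights)

nodes, edges = build_qir(circuit)
degree = defaultdict(int)
for u, v in edges:                          # hotspot gates = high-degree nodes
    degree[u] += 1
    degree[v] += 1
print(refine_qir(circuit))                  # {(0, 1): 1, (1, 2): 1, (2, 3): 1}
print(max(degree, key=degree.get))          # index of a hotspot gate
```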
Our Approach To this end, we design our optimizer with the goals of providing (1) a unified infrastructure for pluggable compaction mechanisms and (2) tunable knobs for configuring the tradeoff between overheads and performance improvement. For (1), we build and compose vastly different compaction techniques on the Qernel abstraction, specifically on the QIR and its refined form. For (2), we provide users with two knobs: the optimization budget (equivalent to optimization level) b ∈ℕ and the size to reach s ∈ℕ, which denotes the desired post-optimization QIR size (number of qubits). Since all overheads are exponential and the exponent's base does not make a practical difference, the single budget knob, b, suffices. QIR Compaction Techniques There are two main QIR compaction techniques: circuit divide-and-conquer and qubit reuse. In the former category, the (large) QIR is cut into smaller fragments that are executed on small QPUs, and the execution results are merged back to a single value. Circuit cutting and knitting <cit.> belongs to this category. In our optimizer, we implement a pass that automatically cuts the QIR based on qubits () and a pass that cuts the QIR based on gates (). To restrict the exponential overheads that scale with the number of cuts, we use the budget b to cut up to b times. At each cut location, the pass places a virtual gate, which must be later replaced with other gates that simulate the effects of the original (pre-cut) gate ( <ref>). We implement another circuit divide-and-conquer technique, also with exponential overheads, namely the QubitFreezingPass, which is limited to QAOA applications only <cit.>. At a high level, it removes qubits with significantly more noisy gates than other qubits, i.e., hotspot qubits, along with the gates. This reduces the number of physical qubits required by an underlying (small) QPU and greatly reduces the number of noisy gates. We use the same budget b=3 to remove the nodes with the highest degrees to restrict the exponential overheads. Lastly, in the qubit reuse category, we implement the QubitReusePass, which “compacts” multiple logical qubits into one to reduce the QIR's size <cit.>. This process, however, increases the QIR's depth; therefore, the tradeoff, in this case, is between QIR size and depth. To restrict the depth increase, we use qubit reuse as a last resort to achieve the user's size requirement (s) or to render the Qernel executable by at least one QPU in the system. Optimization Workflow Figure <ref> shows the default optimization workflow for the refined QIR of a QAOA circuit ( <ref>) with 7 qubits and 12 gates. The optimizer aims to achieve a maximum QIR size s=2 with an allowed budget b=3. To achieve this, it takes the following steps: Step 1: The optimizer calls the HotspotNodePass pass on the refined QIR to find a hotspot node. The pass identifies q_3 as a hotspot with a degree of 6 gates (Figure <ref>, (a)). Step 2: The optimizer applies the QubitFreezingPass to remove q_3 and its gates. The new refined QIR size is 6 qubits with 6 gates. Then, it updates the budget to b=b-m, where m=1 is the number of qubits frozen (Figure <ref>, (b)). Step 3: The optimizer applies either gate or wire cutting. To do so, it first computes the expected cost c_gate, c_wire, respectively, to achieve a circuit of size s=2, and selects the one with the lowest cost c_min=min(c_gate, c_wire). In this case, the cost for gate cutting is lower, so it applies the GateCuttingPass on 2 gates and updates the budget to b=0. 
The new refned QIR size is two fragments of 3 qubits and 4 gates each (Figure <ref>, (c)). Step 4: Since b=0 but s_QIR>s, the optimizer applies the QubitReusePass to achieve s=2. The pass identifies qubits q_0,q_1 as reusable and applies measurement and reset to them. The final refined QIR now has two fragments of s=2 and 4 gates (Figure <ref>, (d)). The final optimizer's output is a Qernel with a 42.8% smaller size and 66% less noisy gates. Note that each of the above passes alone wouldn't achieve this result. § COMPILER BACKEND: VIRTUALIZER The backend stage of the compiler, the virtualizer, generates the final executable Qernels for the underlying runtime, similar to classical compilers that generate the target code. The virtualizer consists of two stages: (1) the instantiation, which replaces the virtual gates from the cutting optimization passes with the gates that simulate the original ones, and (2) target QPUs transpilation, which translates the high-level gates to the physical QPU gates and performs mapping and routing, as explained in  <ref> §.§ Instantiation The circuit cutting and knitting passes we describe in  <ref> cut a large Qernel into sub-Qernels by analyzing the QIR, identifying optimal cut locations, and then placing virtual gates there. However, to run the sub-Qernels, we must replace the virtual gates with a combination of 1-qubit gates that achieves the same computation as the original Qernel. The mapping between virtual and 1-qubit gates depends on the chosen cutting strategy, i.e., gate or wire cutting ( <ref>). We refer to this process as instantiation. The instantiation stage takes as input the optimized sub-Qernels with virtual gates and outputs instantiated sub-Qernels (ISQs) with the 1-qubit gates required to execute them. Similar to the general cutting approach, we implement a generic instantiation mechanism for supporting pluggable mappings from virtual to 1-qubit gates. The mappings depend on the cutting technique (i.e., gate and wire cutting require different mappings) but can also differ for the same technique. For instance, virtual gates from gate cutting can be mapped to different sets of 1-qubit gates that might be more optimal for specific QPU technologies. By replacing a single virtual gate with multiple 1-qubit gates, the mapping function generates multiple ISQs that differ only by the 1-qubit gate. Then, replacing the next virtual gate in each ISQ generates even more copies, which leads to an exponential number of ISQs. Figure <ref> (a) shows a Qernel optimized using two gate cuts. The red boxes are the two virtual gates that must be replaced with 1-qubit gates. In this example, the mapping function will replace the first virtual gate with six 1-qubit gates, creating six ISQs. For the next and final virtual gate, each of the six ISQs will produce six more ISQs, totaling 36 ISQs. Generally, in , the exact overheads are O(2^k-8^k) for k cuts for our optimization passes (Table <ref>). §.§ Target QPUs Transpilation Following instantiation, the ISQs must be transpiled ( <ref>) to the target QPUs to be sent to the runtime for scheduling and execution. Since the number of ISQs might be large, depending on the optimization budget b used at the compiler middle-end ( <ref>), we offer two transpilation modes that differ in granularity and overheads: (1) the coarse-grain per QPU architecture and (2) the fine-grain per QPU. We show evaluation results for their transpilation overheads in  <ref>. 
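The exponential growth of instantiated sub-Qernels can be illustrated with a short sketch, assuming each virtual gate maps to six replacement operations as in the gate-cutting example above; the replacement labels are placeholders rather than the actual quasi-probability decomposition.

```python
from itertools import product

# Illustrative only: six single-qubit replacement operations per virtual gate.
REPLACEMENTS = ["i", "z", "rz+", "rz-", "meas+prep0", "meas+prep1"]

def instantiate(sub_qernel_gates, virtual_positions):
    """Expand a sub-Qernel with k virtual gates into 6**k instantiated
    sub-Qernels (ISQs), one per combination of replacement operations."""
    isqs = []
    for combo in product(REPLACEMENTS, repeat=len(virtual_positions)):
        gates = list(sub_qernel_gates)
        for pos, repl in zip(virtual_positions, combo):
            gates[pos] = ("virtual->" + repl, gates[pos][1])
        isqs.append(gates)
    return isqs

gates = [("h", (0,)), ("vgate", (0, 1)), ("vgate", (1, 2)), ("rz", (2,))]
isqs = instantiate(gates, virtual_positions=[1, 2])
print(len(isqs))  # 36 ISQs for k = 2 virtual gates, matching the 6^k growth
```

This blow-up is why the optimization budget b bounds the number of cuts and why the choice of transpilation mode matters for the total overhead.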
Per QPU-Architecture In the first mode, we transpile each ISQ for every type of QPU architecture available in the system. This coarse-grain approach bounds the transpilation overheads because typical quantum cloud providers have a limited number of architectures, e.g., up to five <cit.>. Since this mode does not scale with the number of QPUs, our experimentation shows that it is suitable for values of budget b ≥ 5, which generate 10^4-10^5 ISQs. Per-QPU In the second mode, we transpile each ISQ to each available QPU in the system. This will enable the runtime components to make fine-grained decisions about the fidelity of running the ISQ on any QPU since they will have the exact noise information of this ISQ-QPU pair. The overheads are still bound since QPUs are constant in quantity in commercial clouds, e.g., up to 30 <cit.>, in contrast to classical clouds that scale to thousands of classical nodes. Our experimentation showed that this transpilation mode is viable for b<5. § RUNTIME The runtime (Figure <ref>, bottom) schedules and executes Qernels across space and time in a scalable manner to achieve the user's goals, i.e., higher fidelity and lower waiting times, and the cloud operator's goals, i.e., resource efficiency. It comprises four components, which we detail next. For simplicity, we use the general term Qernel throughout this Section. §.§ Estimator The estimator is responsible for predicting the fidelity of a given Qernel on the underlying QPUs without executing the Qernel. This prediction will be the leading decision factor for the scheduler when assigning the Qernel to a QPU. To achieve this, it computes a score for each Qernel-QPU assignment that captures the potential fidelity of that assignment and then uses the scores to rank the assignments. The estimator supports configurable scoring policies that consider (1) the Qernels' properties generated from the compiler and (2) the QPUs' calibration data, which are available to quantum cloud providers since they perform the calibration cycles. For (1), important properties include the number and types of gates, depth, and the number of measurements ( <ref>). For (2), recall that QPUs are characterized by calibration data that describe the exact error rates of the QPU for that calibration cycle ( <ref>), specifically, the individual qubit readout errors, the individual gate errors, and the T2 coherence times. In this work, we implement two scoring policies: a numerical approach for fine-grained control over the estimations and a regression model approach for abstracting away the complexity of estimation. Numerical Cost Policy This policy estimates execution fidelity by leveraging the target-QPU transpilation output of the compiler backend ( <ref>). Target transpilation enables fine-grained fidelity estimation by producing the mapping between logical and physical qubits and the gate (instruction) schedule. The mapping captures the expected readout and gate errors, while the gate schedule captures the order and exact timing that the gates will be applied on the qubits, which reveals the hardware decoherence and crosstalk errors, as explained in  <ref>. Formally, for each qubit q_i the readout error is e_r(i), for each gate g_j the error is e_g(j), and the decoherence error is e_d(t) = 1 - e^-t/T2_i, where t is the idle time of the qubit q_i (no gates act on it <cit.>) and T2 is the decoherence time of q_i. The crosstalk error between gates g_k and g_l is e_ct(k,l). 
Putting it all together, the final fidelity score is computed as follows: fid=1- ∏_i=0^N e_r(i)e_d(i)∏_j=0^M e_g(j)∏_j=0, k=0^M× Me_ct(j,k), where N is the circuit's number of qubits and M is the number of gates. Since all hardware error information is known at-priori, and quantum errors accumulate multiplicatively, this policy produces high-accuracy estimations, as we show in  <ref>. Regression Model Policy As discussed in  <ref>, QPU noise errors are accurately measured at each calibration cycle, and their impact on fidelity during quantum computation can be described mathematically. Therefore, we can train a regression model to predict the fidelity of a transpiled Qernel on a possible QPU using the QPU's calibration data and the Qernel's static properties as features <cit.>. Specifically, we use the aforementioned errors we defined in the numerical cost policy as QPU features and the static properties ( <ref>) as Qernel features. Even simple regression models such as linear regression achieve high prediction accuracy, up to 99%. This policy is simple to use without detailed knowledge of the relationship between errors. However, in , we use the numerical cost policy by default for estimation to have a clear understanding and full control of the process. §.§ Multi-programmer The size of quantum programs that run with high fidelity is small, leading to QPU underutilization ( <ref>). To increase QPU utilization, multi-programs two or more Qernels, potentially from different users, to run on the same QPU. We refer to this multi-programming as bundling the Qernels together. However, trivially bundling Qernels together will deteriorate fidelity because qubits interfere with each other via crosstalk errors ( <ref>). On top of that, bundled Qernels that run for unequal durations do not necessarily increase utilization since QPU effective utilization is measured in space (number of QPU qubits allocated) and time (time qubits are performing actual computation). For example, a 10-qubit Qernel Q_0 running on a 20-qubit QPU gives 50% spatial utilization. However, assume that Q_0 runs 3× longer than a 10-qubit Qernel Q_1. During 2/3 of Q_0's runtime, the qubits allocated to Q_1 will be idle, decreasing the effective utilization to only 66%. Recall that it is impossible to schedule more Qernels during Q_0's runtime, unlike in a typical CPU ( <ref>). To minimize the fidelity impact and maximize the effective utilization of multi-programming, we utilize configurable Qernel compatibility functions that quantify how well-suited are two Qernels to run together. Qernel Compatibility Functions Compatibility functions measure the crosstalk errors and the effective utilization of a Qernel pair by considering the Qernels' static properties ( <ref>). To measure crosstalk effects, we identify pairs of 2-qubit gates that run in parallel during Qernel execution. To quantify this without running the bundled Qernels, we use the entanglement ratio and parallelism Qernel static properties, where higher values indicate a higher chance for crosstalk errors <cit.>. Intuitively, the entanglement ratio captures the proportion of 2-qubit gates over all gates, and parallelism captures how many gates run in parallel per time unit, on average. To measure the spatial dimension of effective utilization, it suffices to compute the ratio of allocated QPU qubits over the number of QPU qubits. To measure the temporal dimension, we compare the relative duration between two Qernels. 
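A sketch of a numerical cost estimate in the spirit of the policy above is given below; note that it multiplies per-qubit and per-gate success probabilities (a standard estimated-success-probability formulation) rather than reproducing the printed formula verbatim, and all calibration numbers are made up.

```python
import math

def estimate_fidelity(qubit_readout_err, qubit_idle_time, qubit_T2,
                      gate_errs, crosstalk_errs):
    """Estimated success probability for a transpiled Qernel, multiplying the
    per-qubit and per-gate success probabilities (an ESP-style estimate; the
    paper's exact scoring formula may differ)."""
    score = 1.0
    for q, e_r in qubit_readout_err.items():
        e_d = 1.0 - math.exp(-qubit_idle_time[q] / qubit_T2[q])  # decoherence
        score *= (1.0 - e_r) * (1.0 - e_d)
    for e_g in gate_errs:                    # per scheduled gate
        score *= (1.0 - e_g)
    for e_ct in crosstalk_errs:              # per parallel 2-qubit gate pair
        score *= (1.0 - e_ct)
    return score

# Toy calibration data for a 3-qubit Qernel on one QPU.
score = estimate_fidelity(
    qubit_readout_err={0: 0.01, 1: 0.02, 2: 0.015},
    qubit_idle_time={0: 0.0, 1: 50e-9, 2: 5e-6},      # seconds
    qubit_T2={0: 100e-6, 1: 80e-6, 2: 120e-6},
    gate_errs=[0.003, 0.003, 0.008, 0.008],           # 1q and 2q gate errors
    crosstalk_errs=[0.01],
)
print(round(score, 3))   # higher is better; used only to rank Qernel-QPU pairs
```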
The depth static property reflects the longest chain of gates that will be executed; therefore, it measures the Qernel's duration. More technically, we define effective utilization as u_eff = (N_C_max/N_QPU) * 100 + ∑_n=1^k (D_n/D_max) * (N_C_n/N_QPU) * 100, where N_C_max and N_QPU are the number of qubits of the longest Qernel and of the QPU, respectively, k is the number of bundled Qernels excluding the longest Qernel, and D_n is the depth of the n-th Qernel. To put everything together formally, we score a possible Qernel pair as follows: qc = α u_eff + β ER_b + γ PA_b ↦ [0,1], where higher is better, α + β + γ = 1, and the subscript b denotes bundled, i.e., ER_b is the entanglement ratio of the bundled Qernels. The weights and the acceptance threshold are tunable to prioritize different objectives, e.g., maximizing effective utilization or minimizing crosstalk. After experimenting and fine-tuning, we found that α=0.25, β=0.25, γ=0.5, and qc ≥ 0.75 give balanced results, as we show in  <ref>. Figure <ref> shows an example workflow. The multi-programmer receives three Qernels with three estimations each and identifies Qernel 0 and Qernel 2 as a possible pair since their best QPU is the same (QPU5) (a). It computes their independent utilization, which is 31% and 37%, respectively, and verifies that the combined utilization is under 100% (b). It then computes the compatibility score, which surpasses the threshold (0.9 > 0.75) (c). Next, we detail our multi-programming policies. Multi-programming Policies QOS supports pluggable multi-programming policies for maximizing effective utilization or minimizing fidelity penalties. In this work, we implement two multi-programming policies; the first is the fast path of multi-programming, where we can immediately bundle two Qernels if there is no conflict between them, while the second requires re-compilation and re-estimation. Restrict Policy The restrict policy uses the target-QPU transpilation output to bundle Qernels if there is no overlap in their layouts. Practically, this means that for Qernels Q_0 and Q_1, their logical qubits are mapped to disjoint sets of physical qubits on the QPU. In that case, the policy bundles the Qernels together, and fidelity loss is minimized through the aforementioned compatibility score. Re-evaluation Policy This policy is the fallback of the restrict policy. If the Qernel layouts overlap, the two Qernels are transpiled together again for the target QPU, and their new fidelity is estimated. If the new fidelity is lower by at most a fixed ϵ > 0 compared to the original fidelities, the bundling is maintained. Otherwise, the multi-programmer selects the next most compatible Qernel pair. Figure <ref> (d) shows the check for layout overlap. In this example, yellow qubits belong to Qernel 0 and green to Qernel 1. On the left, there is no overlap, while on the right, the red qubit is shared between the Qernels. (e) We apply the respective policy (in this example, re-evaluation).
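To make the bundling criteria concrete, the following Python sketch computes the effective-utilization and compatibility scores described above. It is a minimal illustration: the dictionary-based Qernel representation, the field names, and the averaging of the bundled entanglement ratio and parallelism are assumptions of ours, not the actual QOS data model.

```python
# Minimal sketch of the multi-programmer's compatibility scoring.
# Qernel static properties are plain dicts here; the field names
# ("qubits", "depth", "ent_ratio", "parallelism") are illustrative.

def effective_utilization(qernels, qpu_qubits):
    """Effective utilization (in %) of a bundle, per the u_eff formula above."""
    longest = max(qernels, key=lambda q: q["depth"])
    u = longest["qubits"] / qpu_qubits * 100
    for q in qernels:
        if q is longest:
            continue
        u += (q["depth"] / longest["depth"]) * (q["qubits"] / qpu_qubits) * 100
    return u

def compatibility_score(q0, q1, qpu_qubits, alpha=0.25, beta=0.25, gamma=0.5):
    """Score a candidate pair; higher is better (threshold, e.g., qc >= 0.75)."""
    u_eff = effective_utilization([q0, q1], qpu_qubits) / 100.0  # normalize to [0, 1]
    ent_ratio = (q0["ent_ratio"] + q1["ent_ratio"]) / 2   # bundled ER (simple average, our assumption)
    parallelism = (q0["parallelism"] + q1["parallelism"]) / 2  # bundled PA (same assumption)
    return alpha * u_eff + beta * ent_ratio + gamma * parallelism

q0 = {"qubits": 8, "depth": 60, "ent_ratio": 0.4, "parallelism": 0.7}
q1 = {"qubits": 10, "depth": 45, "ent_ratio": 0.5, "parallelism": 0.8}
score = compatibility_score(q0, q1, qpu_qubits=27)
print(f"compatibility score: {score:.2f} (bundle if >= 0.75)")
```

In practice, this scoring would be combined with the QPU-capacity and layout-overlap checks described above before a pair is actually bundled.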
§.§ Scheduler Scheduling quantum programs involves fundamental tradeoffs between conflicting objectives; specifically, users want maximal fidelity and minimal waiting times. However, to maximize fidelity, most programs must run on the same subset of QPUs that perform best in a given calibration cycle ( <ref>). This leads to large and growing queues on these QPUs and, hence, long waiting times for the users. Our scheduler assigns and runs Qernels across space (which QPUs) and time (when) and supports pluggable policies for managing the aforementioned tradeoffs, prioritizing maximal fidelity, minimal waiting times, or a balanced approach. The scheduler assigns Qernels to QPUs based on the fidelity estimations provided by the estimator and on execution time estimations, which we detail next. Execution Time Estimation To optimize for minimal waiting times, the scheduler must first estimate each Qernel's execution time and then aggregate the execution time estimations in each QPU's queue to compute the total waiting times. To estimate the execution time, we traverse the longest path of the QIR of a Qernel ( <ref>), which corresponds to the longest-duration gate chain and thus defines the Qernel's execution time. By summing the gate durations of each node in the longest path, we get the Qernel's total execution time. Formula-Based Policy Optimizing for conflicting objectives involves comparing two possible solutions (e.g., maximal fidelity vs.
minimal waiting times). In the formula-based policy, we use a simple scoring formula (Equation <ref>) to compare and select between two possible assignments. This formula factors in fidelity, waiting time, and utilization to determine which assignment is better, given the configured priorities. The parameters are as follows: f_i: fidelity of the estimation result i, t_i: waiting time for the QPU from estimation result i, u_i: utilization of the QPU for estimation result i, c∈(0,1): a system-defined constant that weighs the fidelity difference between estimations, and finally, β: a system-defined constant acting as a weighting factor for the utilization difference, balancing system throughput and fidelity. By selecting a higher c, the system prioritizes fidelity over waiting times, and vice versa, and by selecting a higher β, the system prioritizes utilization over fidelity, and vice versa. By default, c=β=0.5, which aims for balanced fidelity, waiting times, and utilization. Score = c (f_2-f_1)/f_1 - (1-c) (t_2-t_1)/t_1 + β (u_2-u_1)/u_1 Genetic Algorithm Policy Genetic algorithms excel at optimizing for conflicting objectives by efficiently searching over vast search spaces, and for that reason, they can be used in the context of QOS. We formulate a multi-objective optimization problem with the conflicting objectives of fidelity vs. waiting times and use the NSGA-II genetic algorithm <cit.> to solve it. The algorithm creates a Pareto front of possible solutions (schedules), each achieving a different combination of average fidelity and average waiting times. Then, to select one of those schedules, we use the formula described by Equation <ref> to score each schedule and select the schedule with the highest score. §.§ Knitter Following scheduling and execution, the runtime collects the results that are part of the initial circuit submitted by the user. Recall that the circuit is lifted to the QIR ( <ref>), then optimized through divide-and-conquer techniques that place virtual gates inside the QIR ( <ref>), and finally instantiated to replace the virtual gates with 1-qubit gates ( <ref>). The instantiation process generates up to O(8^k) instantiated sub-Qernels (ISQs) for a single initial optimized Qernel (Figure <ref> (a), Table <ref>). Finally, the ISQs are bundled with other ISQs, possibly from other users, for increased utilization ( <ref>). Therefore, computing and returning the final result to the user is not trivial; we must first unbundle the results from multi-programming and then merge the ISQ results into the result of the original Qernel, a process called knitting. The ISQs and their respective results form a tree structure, where the leaf nodes are up to O(8^k) results and the root node is the final result. Therefore, we adopt the map-reduce pattern to perform knitting. Unbundling for Multi-programming The results first pass through the multi-programmer to be unbundled. To do this, the multi-programmer keeps a record that maps the initial (solo) Qernel IDs to the new, bundled Qernel ID, as well as the Qernels' sizes.
Therefore, when receiving a new result from a Qernel with an ID i, it scans the record to find an entry i, and if found, it splits the probability distribution bitstrings ( <ref>) into two parts: the left-most and the right-most bits based on the Qernel sizes. Then, it forwards the unbundled results to the knitter for the map-reduce phases. The Map Phase To efficiently process a large number of results (up to O(8^k)), we follow a divide-and-conquer approach. Specifically, we split the results into k equal sizes and distribute them to k classical nodes to be processed in parallel (Figure <ref> (b), step (1)). We parallelize across k to increase data locality and reduce communication overheads since all results for each of the k cuts will be in the memory of the same node. Locally, each node performs tensor product (⊗) operations on the probability distributions, which are parallelizable across the node's threads. If available in the node, leverages GPUs or TPUs to accelerate the tensor products. Following this process, the k nodes output k intermediate results, ready to be reduced into a single result. The Reduce Phase selects any of the k nodes to perform the reduce step. The rest of the nodes send the intermediate results to this node, which performs a thread-parallel sum of k results. Equivalent to the map phase, the parallel sum can also be executed on GPUs. This produces the final output to be returned to the user (Figure <ref> (b), step (2)). § EVALUATION §.§ Experimental Methodology Experimental Setup We conduct two types of experiments: (1) classical tasks, such as circuit transpilation and trace-based simulations, and quantum tasks (2), which run on real QPUs for measuring the circuits' fidelities. For (1), we use a server with a 64-core AMD EPYC 7713P processor and 512 GB ECC memory. For (2), we conduct our experiments on IBM Falcon r5.11 QPUs. Unless otherwise noted, we use the IBM Kolkata 27-qubit QPU. Framework and Configuration We use the Qiskit <cit.> Python SDK for compiling quantum circuits and running simulations. We compile quantum circuits with the highest optimization level (3) and run with 8192 shots. Each data point presented in the figures is the median of five runs. Benchmarks We study on a set of circuits used in state-of-the-art NISQ algorithms, adopted from the 3 benchmark suits of Supermarq <cit.>, MQT-Bench <cit.> and QASM-Bench <cit.>. The algorithms' circuits can be scaled by the number of qubits and depth. Specifically. We study 9 benchmarks: , , , , , , , and , these benchmarks cover a wide range of relevant criteria for evaluating . For the TL and VQE circuits, we use circular and linear entanglement, respectively. The HS, VQE, and TL benchmarks are scalable by their circuit depth with the number of time-steps t and layers in the ansatz n. The QAOA-R/P circuits are initialized using regular/power-law graphs, respectively, with degree d ∈{1, 3}. Metrics We evaluate the following metrics: * Fidelity: We use the Hellinger fidelity as a measure of how close a noisy result is to the desired ground truth of a quantum circuit <cit.>. The Hellinger fidelity is calculated as Fidelity (P_ideal, P_noisy) = (1 - H(P_ideal, P_noisy)^2)^2 ↦ [0,1], where H is the Hellinger distance between two probability distributions, and P_ideal, P_noisy are the ideal and noisy probability distributions, respectively. * Circuit Properties: Number of CNOT gates and depth. 
When a Qernel contains more than one sub-Qernel, we use the sub-Qernel with the maximum depth, the maximum number of CNOTs, or an average of these two properties. * Waiting Time: The time a circuit spends in a QPU's queue, waiting for execution, in seconds. * Classical Overhead: The optimization and post-processing overheads ( <ref>) of the compiler vs. Qiskit's transpiler <cit.>. * Quantum Overhead: The number of additional quantum circuits we need to execute per original quantum circuit. Baselines We evaluate the compiler against Qiskit v0.41, CutQC <cit.> and FrozenQubits <cit.>. QOS's multi-programmer is evaluated against <cit.>. Regarding the scheduler, to the best of our knowledge, <cit.> is the only peer-reviewed quantum scheduler, but it does not provide source code or enough technical details to faithfully implement it.
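For concreteness, the fidelity metric defined above can be computed directly from two probability distributions. The snippet below is a small, self-contained illustration of the Hellinger-fidelity definition; the dictionary-based distribution format is an assumption of ours.

```python
import math

def hellinger_fidelity(p_ideal, p_noisy):
    """Hellinger fidelity between two probability distributions.

    Distributions are dicts mapping bitstrings to probabilities
    (counts would first be normalized by the number of shots).
    """
    keys = set(p_ideal) | set(p_noisy)
    # Bhattacharyya coefficient: sum_x sqrt(p(x) * q(x))
    bc = sum(math.sqrt(p_ideal.get(k, 0.0) * p_noisy.get(k, 0.0)) for k in keys)
    h_squared = 1.0 - bc              # squared Hellinger distance
    return (1.0 - h_squared) ** 2     # equals bc**2, the Hellinger fidelity

ideal = {"00": 0.5, "11": 0.5}        # e.g., an ideal Bell-state distribution
noisy = {"00": 0.46, "11": 0.44, "01": 0.06, "10": 0.04}
print(round(hellinger_fidelity(ideal, noisy), 4))
```

Qiskit's quantum_info module ships an equivalent hellinger_fidelity helper, so in practice the metric does not need to be reimplemented.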
§.§ Compiler RQ1: How well does the compiler improve the fidelity of circuits that run on NISQ QPUs? We evaluate the performance of the compiler w.r.t. the post-optimization properties and fidelity of the circuits while also analyzing the classical and quantum costs of our approach. Effect on the Circuit Depth and Number of CNOTs In Figure <ref>, we show the performance of the compiler on the circuits' depth and number of CNOTs, where we plot the relative difference in post-optimization circuit depth and number of CNOTs between Qiskit (the red horizontal line) and FrozenQubits <cit.>, CutQC <cit.>, and the QOS compiler. Figures <ref> (a) and (c) show that the circuit depth decreases by 46%, 38.6%, and 29.4%, respectively. Figures <ref> (b) and (d) show that the number of CNOTs decreases by 70.5%, 66%, and 56.6%, respectively. The improvement in both metrics against the baselines is attributed to the composability of our compiler; the combined effect of circuit compactions ( <ref>) achieves better results than standalone techniques. Impact on Fidelity Figure <ref> shows the QOS-optimized circuits' fidelity against Qiskit <cit.>, CutQC <cit.>, and FrozenQubits <cit.>. The results show a mean 2.6×, 1.6×, and 1.11× improvement for 12-qubit circuits, respectively, and a 456.5×, 7.6×, and 1.67× improvement for circuits of 24 qubits, respectively. The fidelity improvement is a consequence of lower circuit depths and fewer CNOTs, as shown in Figure <ref>. Classical and Quantum Overheads Figure <ref> (a) shows the average classical and quantum overheads of the compiler. The classical overhead is 16.6× and 2.5× for 12 and 24 qubits, respectively, and the quantum overhead is 31.3× and 12× for 12 and 24 qubits, respectively. However, fidelity improves by 2.6× and 456.5× for 12 and 24 qubits, respectively; therefore, for larger circuits, the fidelity improvement is worth the cost. Scalability To demonstrate that the compiler increases scalability, we run the VQE-1 benchmark on a hypothetical 1000-qubit QPU with one-qubit gate errors of 10^-4, two-qubit gate errors of 10^-3, and measurement errors of 10^-2. We optimize with budget b∈{0,1,4,8} and report the estimated fidelity. Figure <ref> (b) shows that all budget values b improve the estimated fidelity, with a tradeoff between improvement and overheads.
RQ1 takeaway: The compiler improves the properties of quantum circuits by 51% on average, leading to an improvement in fidelity of 2.6–456.5×, while incurring acceptable classical and quantum overheads. §.§ Estimator RQ2: How well does QOS's estimator address spatial and temporal heterogeneities? We evaluate the estimator's precision in selecting the top-performing QPU for each benchmark. We establish a baseline using the on-average best-performing machine on every calibration day. On the day of the experiment, IBM Auckland was the best-performing machine (also with the highest number of pending jobs). Estimator's Accuracy Figure <ref> (c) shows the fidelity of the eight benchmarks when run on QPUs selected by the estimator versus when run on the IBM Auckland QPU. The QPU selected for the BV benchmark is Auckland; therefore, we omit this result. For the rest of the benchmarks, the IBM Sherbrooke and Brisbane QPUs were automatically selected. Interestingly, the fidelity is on par with or even higher than that of IBM Auckland, except for only one benchmark, QAOA-P1. RQ2 takeaway: QOS's estimator automatically identifies QPUs with higher fidelity than the current standard practice. §.§ Multi-programmer RQ3: How well does QOS's multi-programmer increase QPU utilization with minimal fidelity penalties? We evaluate the impact of the multi-programmer on the fidelity of co-scheduled circuits for given utilization thresholds. Utilization vs. Fidelity Figure <ref> (a) shows the average fidelity of nine benchmarks with utilization of 30%, 60%, and 88%. The three bars represent no multi-programming (No M/P), where large circuits run solo; baseline multi-programming (Baseline M/P), which refers to <cit.>; and QOS's multi-programming approach (QOS M/P). There is an average 9.6× improvement in fidelity compared to solo execution and an average 15% (1.15×) improvement compared to the baseline. Effective Utilization The results in Figure <ref> (b) show that QOS achieves, on average, 7.2% higher effective utilization ( <ref>), with a maximum improvement of 10.1%. Fidelity Penalty vs. Solo Execution In Figure <ref> (c), we evaluate the fidelity penalty of multi-programming vs. solo circuit execution for utilization of 30%, 60%, and 88%. The fidelity loss is 2%, 9%, and 18%, respectively. The average fidelity loss is 9.6% compared to solo execution, which is in line with previous studies <cit.>. In the worst case (18%), the fidelity loss is caused by the restrictions on high-quality qubit allocations and by crosstalk errors. RQ3 takeaway: The multi-programmer improves fidelity by 1.15–9.6× and effective utilization by 7.2% compared to the baselines while incurring an acceptable fidelity penalty (<10%) compared to solo execution. §.§ Scheduler RQ4: How well does QOS's scheduler balance fidelity vs. waiting times and balance the load across QPUs? We evaluate our scheduler by generating a representative workload from a dataset we collected during the development of QOS.
Dataset Collection During our exploration of the motivational challenges ( <ref>) and the experimentation and evaluation of the components and their policies, we collected a dataset of 70,000 benchmark circuits and more than 7,000 job runs in the quantum cloud. We use this dataset to simulate representative workloads, as we detail next. Workload Generation To generate realistic workloads, we monitored all available QPUs on the IBM Quantum Cloud <cit.> for ten days in November 2023 to estimate the hourly job arrival rate. The average rate is 1,500 jobs per hour, which we use as the baseline system workload for our evaluation. Fidelity vs. Waiting Time Figure <ref> (a) shows the performance of the formula-based scheduling policy. We show the average fidelity and waiting time as the fidelity weight, c, changes ( <ref>). A weight of 0.7 achieves ∼5× lower waiting times than fully prioritizing fidelity while sacrificing only ∼2% fidelity. Figure <ref> (b) shows the Pareto front of scheduling solutions generated by the genetic algorithm policy. A weight of c=0.5 achieves 2× lower waiting times with 4% lower fidelity. QPU Load Balancing Figure <ref> (c) shows the QPU load, measured as the total time each QPU was active, in seconds, for the formula-based policy. All QPUs handle similar loads, with a maximum difference of 15.2%. RQ4 takeaway: The QOS scheduler balances the trade-off between waiting times and fidelity, reducing waiting times by 5× while sacrificing only 2% fidelity, and it balances the load across QPUs. § RELATED WORK Quantum optimization techniques can be categorized as (1) qubit mapping and routing <cit.>, (2) instruction/pulse scheduling <cit.>, (3) gate synthesis/decomposition <cit.>, (4) execution post-processing and readout improvement <cit.>, and (5) circuit compaction <cit.>. These techniques are implemented standalone, without a compiler infrastructure, and are thus not composable. Instead, the QOS compiler offers a powerful IR that enables incorporating such techniques in a composable manner. Moreover, application-specific optimizations focus only on specific algorithms to enhance fidelity but lack generality <cit.>. In contrast, the QOS compiler is a generic approach applicable to all applications. The state-of-the-art in quantum multi-programming is fairly limited and focuses almost exclusively on the fair allocation problem <cit.>. Specifically, this work aims to allocate high-quality regions of the QPU to the bundled programs. However, selecting compatible programs regarding effective utilization and/or fidelity is not considered systematically. Notably, key optimizations implemented in <cit.> are already part of the Qiskit transpiler <cit.> workflow and, therefore, are used indirectly by QOS. Lastly, current quantum scheduling methods <cit.> are limited because they (1) schedule circuits one at a time, (2) neglect QPU utilization, (3) lack fine control over waiting times versus fidelity, and (4) require manual input for final scheduling decisions. Work in the quantum cloud computing area <cit.> and in quantum serverless <cit.> describes quantum cloud characteristics or potential architectures, but QOS is the first end-to-end QPU management system.
§ CONCLUSION We presented QOS, a system that composes cross-stack OS abstractions to address the challenges of quantum computing holistically. The synergy between compaction techniques, performance estimation, multi-programming, and scheduling systematically explores the tradeoff space associated with quantum computing. Specifically, QOS achieves up to 456.5× higher fidelity at a 12× overhead cost, up to 9.6× higher fidelity at a target utilization while losing only 9.6% fidelity relative to solo execution, and up to 5× lower waiting times for 2% lower fidelity. Contributions Our main contributions include: * To our knowledge, QOS is the first attempt to combine circuit compaction with quantum resource management to tackle the challenges of QPUs holistically. * We leverage the compiler infrastructure to compose optimizations that improve fidelity in a scalable manner, significantly outperforming their individual application (i.e., the current practice). * To our knowledge, we are the first to account for and improve both temporal and spatial QPU utilization when multi-programming quantum programs, while mitigating the associated fidelity penalties. * Our scheduler balances the inherent tradeoff between fidelity and waiting times, leading to better overall QoS. § ACKNOWLEDGEMENTS We thank Karl Jansen and Stefan Kühn from the Center for Quantum Technology and Applications (CQTA), Zeuthen, for supporting this work by providing access to IBM quantum resources. We also thank Ahmed Darwish and Dmitry Lugovoy for their contributions to this work. Funded by the Bavarian State Ministry of Science and the Arts as part of the Munich Quantum Valley (MQV).
http://arxiv.org/abs/2406.18316v1
20240626125937
Trade-off between Gradient Measurement Efficiency and Expressivity in Deep Quantum Neural Networks
[ "Koki Chinzei", "Shinichiro Yamano", "Quoc Hoan Tran", "Yasuhiro Endo", "Hirotaka Oshima" ]
quant-ph
[ "quant-ph", "cs.LG" ]
chinzei.koki@fujitsu.com Quantum Laboratory, Fujitsu Research, Fujitsu Limited, 4-1-1 Kawasaki, Kanagawa 211-8588, Japan yamano@qi.t.u-tokyo.ac.jp These authors contributed equally to this work. Quantum Laboratory, Fujitsu Research, Fujitsu Limited, 4-1-1 Kawasaki, Kanagawa 211-8588, Japan Department of Applied Physics, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo Bunkyo-ku, Tokyo 113-8656, Japan Quantum Laboratory, Fujitsu Research, Fujitsu Limited, 4-1-1 Kawasaki, Kanagawa 211-8588, Japan Quantum Laboratory, Fujitsu Research, Fujitsu Limited, 4-1-1 Kawasaki, Kanagawa 211-8588, Japan Quantum Laboratory, Fujitsu Research, Fujitsu Limited, 4-1-1 Kawasaki, Kanagawa 211-8588, Japan § ABSTRACT Quantum neural networks (QNNs) require an efficient training algorithm to achieve practical quantum advantages. A promising approach is the use of gradient-based optimization algorithms, where gradients are estimated through quantum measurements. However, it is generally difficult to efficiently measure gradients in QNNs because the quantum state collapses upon measurement. In this work, we prove a general trade-off between gradient measurement efficiency and expressivity in a wide class of deep QNNs, elucidating the theoretical limits and possibilities of efficient gradient estimation. This trade-off implies that a more expressive QNN requires a higher measurement cost in gradient estimation, whereas we can increase gradient measurement efficiency by reducing the QNN expressivity to suit a given task. We further propose a general QNN ansatz called the stabilizer-logical product ansatz (SLPA), which can reach the upper limit of the trade-off inequality by leveraging the symmetric structure of the quantum circuit. In learning an unknown symmetric function, the SLPA drastically reduces the quantum resources required for training while maintaining accuracy and trainability compared to a well-designed symmetric circuit based on the parameter-shift method. Our results not only reveal a theoretical understanding of efficient training in QNNs but also provide a standard and broadly applicable efficient QNN design. Trade-off between Gradient Measurement Efficiency and Expressivity in Deep Quantum Neural Networks Hirotaka Oshima July 1, 2024 ==================================================================================================== § INTRODUCTION Deep learning is a breakthrough technology that has a significant impact on a variety of fields, including computational and natural sciences <cit.>. In deep learning, neural networks are trained to approximate unknown target functions to represent input-output relationships in various tasks such as image recognition <cit.>, natural language processing <cit.>, predictions of protein structure <cit.>, and quantum state approximation <cit.>. Behind the great success of deep learning lies an efficient training algorithm equipped with backpropagation <cit.>. Backpropagation allows us to efficiently evaluate the gradient of an objective function at approximately the same computational cost as a single model evaluation and to optimize the neural network's training parameters based on the evaluated gradient. This technique is essential for training large neural networks with enormous parameters. 
In analogy with deep learning, variational quantum algorithms (VQAs) have emerged as a promising technology for solving classically intractable problems in quantum chemistry and physics <cit.>, optimization <cit.>, machine learning <cit.>, and so on <cit.>. In VQAs, a parameterized quantum circuit, called a quantum neural network (QNN) in the context of quantum machine learning (QML) <cit.>, is optimized by minimizing an objective function with quantum and classical computers <cit.>. As in classical neural networks, gradient-based optimization algorithms are promising for training QNNs, where gradients are estimated by quantum measurements [Fig. <ref> (a)]. However, unlike classical models, an efficient gradient estimation is challenging in QNNs because quantum states collapse upon measurement. In fact, it is known that QNNs lack a general and efficient gradient measurement algorithm that achieves the same computational cost scaling as classical backpropagation when only one copy of the quantum data is accessible at a time <cit.>. Instead, the gradient is usually estimated using the parameter-shift method <cit.>, where each gradient component is measured independently. This method leads to a high measurement cost that is proportional to the number of parameters, which prevents QNNs from scaling up in near-term quantum devices without sufficient computational resources. Despite the lack of general algorithms to achieve backpropagation scaling, a commuting block circuit (CBC) has recently been proposed as a well-structured QNN for efficient gradient estimation <cit.>. The CBC consists of B blocks containing multiple variational rotation gates, and the generators of rotation gates in two blocks are either all commutative or all anti-commutative. Because of its specific structure, the CBC allows us to estimate the gradient using only 2B-1 types of quantum measurements, which is independent of the number of variational rotation gates in each block. Therefore, the CBC can be trained more efficiently than conventional QNNs based on the parameter-shift method <cit.>. However, there are several remaining questions on the CBC. First, it remains unclear how the specific structure of the CBC affects the QNN capability, particularly the expressivity, which is related to the accuracy of solving a problem. Second, although the basic framework of the CBC has been proposed, the general method of circuit construction has not yet been established. Answering these questions is crucial not only for the theoretical understanding of efficient training in QNNs but also for the realization of practical QML. This work answers these questions for more general QNNs, including the CBC. We first define gradient measurement efficiency in terms of the simultaneous measurability of gradient components, proving a general trade-off between gradient measurement efficiency and expressivity in a wide class of deep QNNs [Fig. <ref> (b)]. This trade-off implies that a more expressive QNN requires a higher measurement cost in gradient estimation, whereas the gradient measurement efficiency can be increased by reducing the QNN expressivity to suit a given task. Based on this trade-off, we further propose a general ansatz of CBC, the stabilizer-logical product ansatz (SLPA). This ansatz can reach the upper limit of the trade-off inequality, i.e., it allows us to measure the gradient with theoretically the fewest types of quantum measurements for a given expressivity [Fig. <ref> (b)]. 
The SLPA leverages the symmetric structure of the quantum circuit to enhance the gradient measurement efficiency, where the generators of the ansatz are constructed by taking the products of stabilizers and logical operators (Fig. <ref>). Owing to this symmetric structure, the SLPA can be applied to various problems involving symmetry, which are common in quantum chemistry, physics, and machine learning. As a demonstration, we consider the task of learning an unknown symmetric function and show that the SLPA drastically reduces the quantum resources needed for training while maintaining accuracy and trainability compared to several QNNs based on the parameter-shift method. These results reveal the theoretical limits and possibilities of efficient training in QNNs, paving the way for realizing practical advantages in QML. The remainder of this paper is organized as follows. Section <ref> introduces the variational quantum model of interest and defines gradient measurement efficiency and expressivity. Section <ref> shows the main theorem of this work and describes the trade-off relation between gradient measurement efficiency and expressivity and its consequences. Section <ref> proves the main theorem by introducing a graph representation of dynamical Lie algebra. Section <ref> proposes the SLPA as a general ansatz of CBC that can reach the upper limit of the trade-off inequality. Section <ref> verifies the practical effectiveness of the SLPA in the specific task of learning an unknown symmetric function. Finally, Sec. <ref> summarizes this paper and discusses potential future research directions. § MODEL, EFFICIENCY, AND EXPRESSIVITY This section introduces variational quantum models of interest and defines their gradient measurement efficiency and expressivity. §.§ Model We consider the following QNN on an n-qubits system: U() = ∏_j=1^L exp(iθ_j G_j), where G_j∈{I,X,Y,Z}^⊗ n is an n-qubits Pauli operator, = (θ_1,θ_2,⋯) is variational rotation angles, and L is the number of rotation gates. Let ={G_j}_j=1^L be the set of generators. We consider a cost function defined as the expectation value of a Pauli observable O: C() = [ U()ρ U^†() O ], where ρ is an input quantum state. This type of cost function is often used to solve QML tasks. Note that even if the circuit has non-variational Clifford gates (e.g., CZ and CNOT gates), we can transform the circuit into the form of Eq. (<ref>) by swapping the Clifford gates and the Pauli rotation gates (see Appendix <ref> for details). Hence, Eq. (<ref>) includes a wide class of parameterized quantum circuits. To remove redundant and irrelevant quantum gates, we assume the following conditions on the quantum circuit U() (throughout this paper, we use the commutation and anti-commutation relations, [A,B]=AB-BA and {A,B}=AB+BA): [No redundant quantum gates] For any j<k, if G_j=G_k, there exists ℓ (j<ℓ<k) such that {G_j,G_ℓ}=0. [No irrelevant quantum gates] For any G_j∈ commuting with O (i.e., [G_j,O]=0), there exist Q_1,⋯,Q_m ∈ such that {G_j,Q_1}={Q_1,Q_2}=⋯={Q_m-1,Q_m}={Q_m,O}=0. The first condition removes the redundancy of quantum gates. If there is no ℓ satisfying the condition, the two rotation gates, e^iθ_jG_j and e^iθ_kG_k, can be merged into one rotation gate, e^i(θ_j+θ_k)G_j, by swapping the positions of gates. This means that one of the two rotation gates is redundant. The second condition removes irrelevant quantum gates. 
If there is a quantum gate that violates the condition, the circuit is decomposed into two mutually commuting unitaries as U=U_1U_2, where U_2 commutes with the observable O (i.e., [U_1,U_2]=[U_2,O]=0). Then, the cost function is written as C=[ρ U^† O U]=[ρ U_1^† O U_1], which implies that U_2 is irrelevant in this QNN (see Sec. <ref> for details). Therefore, we impose these two conditions on the quantum circuit to remove the redundant and irrelevant quantum gates. §.§ Gradient measurement efficiency In QNNs, the circuit parameters are optimized to solve a given problem. One of the most promising approaches is the use of gradient-based optimization algorithms, which update the parameters based on the gradient of the cost function as →-η∇ C (η is a learning rate). In classical neural networks, gradient-based optimization algorithms are highly successful thanks to backpropagation, which allows us to compute the gradient at approximately the same cost as a single model evaluation <cit.>. Even in QNNs, gradient-based optimization is a powerful method, but there are no general algorithms for measuring gradients as efficiently as classical backpropagation <cit.>. Instead, in many VQAs, each gradient component is measured independently using the parameter-shift method <cit.>, where the derivative of the cost function is estimated by shifting the parameter: ∂_j C() = C(+πe_j/4) - C(-πe_j/4) (∂_j ≡∂/∂θ_j and e_j is the jth unit vector). This method is easy to implement but leads to a high measurement cost proportional to the number of training parameters. To reveal the theoretical limit of efficient gradient estimation, let us first define the simultaneous measurability of two different components in the gradient. By straightforward calculations, we have the derivative of the cost function by θ_j (see Appendix <ref> for derivation): ∂_j C() = [ ρ_j() ], where _j is a gradient operator defined as _j() = -i[G̃_j(), Õ()] with Õ() = U^†()OU() and G̃_j() = U_j+^†()G_jU_j+() (U_j+()=∏_k=1^j-1e^iθ_kG_k is the unitary circuit before the jth gate). In other words, ∂_j C is identical to the expectation value of the gradient operator _j for the input state ρ. In quantum mechanics, two observables A and B can be simultaneously measured for any input quantum state ρ if and only if [A,B]=0. Therefore, we define the simultaneous measurability in the gradient as follows: [Simultaneous measurability in gradient] We say that ∂_j C() and ∂_k C() can be simultaneously measured when [_j(),_k()]=0 ∀. This definition assumes that the commutation relation holds for any . Thus, even if ∂_j C() and ∂_k C() cannot be simultaneously measured in the sense of this definition, they may be simultaneously measurable for certain . Note that this definition implicitly considers situations where only a single copy of ρ is available at a time. This is because if multiple copies are available at the same time, we can measure any two observables simultaneously by measuring each observable for each copy. In multi-copy settings, an efficient algorithm is known to estimate the gradient, where (polylog(L)) copies of ρ are coherently manipulated to be measured by shadow tomography <cit.>. However, such an algorithm is hard to implement in near-term quantum devices due to the requirements of many qubits and long execution times. This work focuses only on single-copy settings and does not consider multi-copy ones. 
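Since the analysis below repeatedly relies on (anti-)commutation relations between n-qubit Pauli strings — two observables are simultaneously measurable for every input state exactly when they commute — a small illustrative helper may be useful: two Pauli strings commute if and only if they differ non-trivially on an even number of qubit positions. The string-based representation is our own choice and is not part of the paper's formalism.

```python
# Commutation check for n-qubit Pauli strings such as "XZYI".
# Two Pauli strings commute iff they anti-commute on an even number of
# qubit positions, i.e., differ non-trivially at an even number of sites.

def paulis_commute(p, q):
    assert len(p) == len(q)
    anticommuting_sites = sum(
        1 for a, b in zip(p, q) if a != "I" and b != "I" and a != b
    )
    return anticommuting_sites % 2 == 0

print(paulis_commute("XI", "IX"))   # True: act on different qubits
print(paulis_commute("XX", "ZZ"))   # True: anti-commute on two sites
print(paulis_commute("XI", "ZI"))   # False: anti-commute on one site
```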
Based on Definition <ref>, we divide the gradient operators into N_sim simultaneously measurable groups as {_j()}_j=1^L = M_1⊔⋯⊔ M_N_sim, where ⊔ is the disjoint union. In this grouping, all the operators in the same group are simultaneously measurable in the sense of Definition <ref>, i.e., the following commutation relation holds for a group M_m: [_j(),_k()]=0 ∀_j(),_k()∈ M_m, ∀. This grouping enables us to estimate all the gradient components using N_sim types of quantum measurements in principle. Thereby, we define gradient measurement efficiency for finite-depth QNNs: [Gradient measurement efficiency of finite-depth QNNs] The gradient measurement efficiency of finite-depth QNNs is defined as ^(L) = L/min(N_sim), where min(N_sim) is the minimum number of groups among all possible groupings. In this definition, ^(L) indicates the mean number of simultaneously measurable components in the gradient. Therefore, the larger ^(L) is, the more efficiently the gradient can be measured. We also define the gradient measurement efficiency in the deep circuit limit: [Gradient measurement efficiency of deep QNNs] The gradient measurement efficiency of deep QNNs is defined as = lim_L→∞^(L). This work mainly focuses on rather than ^(L) and investigates its theoretical upper limit. Nevertheless, our results also have implications for efficient gradient measurement in finite-depth QNNs as shown in Sec. <ref>. §.§ Expressivity Expressivity is related to the capacity of QNNs. Typically, QNNs can express more types of unitaries as expressivity increases, potentially improving accuracy for a given problem. This work quantifies the QNN expressivity using the dynamical Lie algebra (DLA) <cit.>. To this end, we consider the following Lie closure: i = ⟨i|_⟩Lie, where the Lie closure ⟨i|_⟩Lie is defined as the set of Pauli operators obtained by repeatedly taking the nested commutator between the circuit generators in i={iG_j}_j=1^L, i.e., [⋯[[iG_j,iG_k],iG_ℓ],⋯]∈ i for G_j,G_k,G_ℓ,⋯∈. Letting ={I,X,Y,Z}^⊗ n be the set of n-qubits Pauli operators, ⊆⊆ holds by definition, where we ignore the coefficients of Pauli operators in . The DLA is defined by as = span(), which is the subspace of su(2^n) spanned by the Pauli operators in . The DLA characterizes the types of unitaries that the QNN can express in the overparameterized regime of L≳dim(): that is, U() ∈ e^ for all  <cit.>. For example, a hardware-efficient ansatz with ={X_j, Y_j, Z_jZ_j+1}_j=1^n has the DLA of dim()=4^n-1 <cit.>, indicating that it can express all unitaries in the deep circuit limit [note that dim()≤ 4^n-1 since is a subset of su(2^n)]. Therefore, we define the expressivity of the QNN in the deep circuit limit as the dimension of the DLA: [Expressivity of deep QNNs] The expressivity of deep QNNs is defined as = dim(). Here, we assume another condition on the quantum circuit in relation to expressivity: Let U_fin be the final part of the circuit with depth : U_fin=∏_j=L-+1^L e^iθ_j G_j. Then, there exists a constant (independent of L) such that U_fin can express e^iϕ_1Q_1e^iϕ_2Q_2 for any Q_1, Q_2∈ and any ϕ_1,ϕ_2∈ℝ. This condition is easily satisfied in the limit of L→∞ when all generators in appear uniformly in the circuit. We use this condition to prove the main theorem. Note that the contribution from U_fin with depth =(1) to the gradient measurement efficiency vanishes in the deep circuit limit L→∞. Thus, for evaluating , it suffices to consider only the contribution from the initial part of the circuit with depth = L- = (L). 
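Because the expressivity defined above is simply the number of Pauli strings generated by nested commutators of the generator set, it can be computed by brute force for small circuits. The sketch below builds the Lie closure while ignoring phases (for anti-commuting Pauli strings, the commutator is their product up to a phase); the string representation and helper names are our own illustration, and the method is exponential in n, so it is only meant for small examples. For the two-qubit generator set {XI, IX, ZZ}, it reproduces the six-element set used as the DLA-graph example in the proof section below.

```python
# Brute-force Lie closure of a set of Pauli strings (phases ignored).
# For anti-commuting Pauli strings P, Q the commutator [P, Q] is, up to
# a phase, the Pauli string P*Q; for commuting strings it vanishes.

PROD = {("I", "I"): "I", ("I", "X"): "X", ("I", "Y"): "Y", ("I", "Z"): "Z",
        ("X", "I"): "X", ("X", "X"): "I", ("X", "Y"): "Z", ("X", "Z"): "Y",
        ("Y", "I"): "Y", ("Y", "X"): "Z", ("Y", "Y"): "I", ("Y", "Z"): "X",
        ("Z", "I"): "Z", ("Z", "X"): "Y", ("Z", "Y"): "X", ("Z", "Z"): "I"}

def anticommute(p, q):
    return sum(a != "I" and b != "I" and a != b for a, b in zip(p, q)) % 2 == 1

def pauli_product(p, q):
    return "".join(PROD[(a, b)] for a, b in zip(p, q))

def lie_closure(generators):
    closure = set(generators)
    frontier = list(closure)
    while frontier:
        p = frontier.pop()
        for q in list(closure):
            if anticommute(p, q):
                r = pauli_product(p, q)   # [P, Q] is proportional to PQ when {P, Q} = 0
                if r not in closure:
                    closure.add(r)
                    frontier.append(r)
    return closure

# Expressivity = dim(DLA) = number of Pauli strings in the closure.
gens = {"XI", "IX", "ZZ"}
print(len(lie_closure(gens)), sorted(lie_closure(gens)))
```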
§ MAIN THEOREM Remarkably, the upper limit of gradient measurement efficiency depends on expressivity. Here, we present the main theorem on the relationship between gradient measurement efficiency and expressivity: In deep QNNs satisfying Conditions <ref>–<ref>, gradient measurement efficiency and expressivity obey the following inequalities: ≤4^n/ - , and ≥, where n is the number of qubits. The inequality (<ref>) represents a trade-off relation between gradient measurement efficiency and expressivity as shown in Fig. <ref> (b). This trade-off indicates that a more expressive QNN requires a higher measurement cost for gradient estimation. For example, when a QNN has maximum expressivity =4^n-1 (e.g., a hardware-efficient ansatz with ={X_j, Y_j, Z_jZ_j+1}_j=1^n), the gradient measurement efficiency must be =1, which implies that we cannot measure two or more gradient components simultaneously in such models. The inequality (<ref>) shows that gradient measurement efficiency is bounded by expressivity. Therefore, in the highly overparametrized regime of L≫, we need L/ (≫/≥ 1) types of quantum measurements to estimate the gradient, leading to a high measurement cost for training such QNNs. Also, combining the two inequalities (<ref>) and (<ref>), we have ≤ 2^n-1/2, which is the upper bound of gradient measurement efficiency in QNNs. The trade-off inequality (<ref>) suggests that we can increase gradient measurement efficiency by reducing the expressivity to fit a given problem, i.e., by encoding prior knowledge of the problem into the QNN as an inductive bias. In the context of QML, such a problem-tailored model is considered crucial to achieving quantum advantages <cit.>. For instance, an equivariant QNN, where the symmetry of a problem is encoded into the QNN structure, is one of the promising problem-tailored quantum models and exhibits high trainability and generalization by reducing the parameter space to be searched <cit.>. In Sec. <ref>, we will introduce a general QNN ansatz called the stabilizer-logical product ansatz (SLPA) that leverages prior knowledge of a problem's symmetry to reach the upper limit of the trade-off inequality (<ref>). We remark on the relationship between gradient measurement efficiency and trainability. In deep QNNs, it is known that the variance of the cost function in the parameter space decays inversely proportional to dim(): Var_[C()] ∼ 1/dim()=1/ <cit.>. Thus, exponentially high expressivity results in an exponentially flat landscape of the cost function (so-called the barren plateaus <cit.>), which makes efficient training of QNNs impossible. In other words, expressivity has a trade-off with trainability. On the other hand, high gradient measurement efficiency and high trainability are compatible. By the trade-off inequality (<ref>) and Var_[C()] ∼ 1/, we have ≲ 4^n Var_[C()], where we have used 4^n/ - ≤ 4^n/. This inequality means that we can increase and Var_[C()] simultaneously, indicating the compatibility of high gradient measurement efficiency and trainability. In Sec. <ref>, we will numerically show that the SLPA can exhibit high gradient measurement efficiency and trainability at the same time, verifying their compatibility. § PROOF OF MAIN THEOREM This section gives the proof of Theorem <ref>. Beginning with the sketch of the proof, we present some lemmas on the simultaneous measurability of gradient components and then prove Theorem <ref> using the lemmas. §.§ Sketch of proof This subsection gives the sketch of the proof. 
A more detailed and exact proof can be found in the following subsections and Appendix <ref>. In this proof, we introduce a graph representation of the DLA called a DLA graph to clarify the relationship between gradient measurement efficiency and expressivity. The DLA graph consists of nodes and edges. Each node P corresponds to the Pauli basis of the DLA (i.e., P∈), and two nodes P∈ and Q∈ are connected by an edge if they anti-commute, {P,Q}=0. As shown below, the (anti-)commutation relations between the Pauli bases of the DLA are closely related to the simultaneous measurability of gradient components. Therefore, the DLA graph visualizes the (anti-)commutation relations of the DLA, allowing us to clearly understand the relationship between the gradient measurement efficiency and the DLA structure. The proof of Theorem <ref> requires understanding the gradient measurement efficiency and the expressivity in terms of the DLA graph. By definition, the expressivity corresponds to the total number of nodes in the DLA graph. On the other hand, the relationship between the gradient measurement efficiency and the DLA graph is nontrivial. The first step to obtaining the gradient measurement efficiency is to consider whether given two gradient operators _j and _k are simultaneously measurable (i.e., whether [_j, _k]=0). The gradient operator _j is defined from the generator of the quantum circuit G_j∈, which corresponds to a node of the DLA graph. In Sec. <ref>, we will present Lemmas <ref>–<ref> to show that the simultaneous measurability of _j and _k are determined by some structural relations between the nodes G_j, G_k, and the observable O in the DLA graph. Hence, how many gradient components can be simultaneously measured, namely , is also determined by the structure of the DLA graph. Based on Lemmas <ref>–<ref>, we decompose the DLA graph into several subgraphs, where nodes belonging to different subgraphs cannot be measured simultaneously. Therefore, the number of nodes in each subgraph bounds the number of simultaneously measurable gradient components, i.e., the gradient measurement efficiency. Using this idea, we can map the problem on and to the problem on the DLA graph, or specifically on the relation between the total number of nodes and the size of subgraphs in the DLA graph. In Sec. <ref>, by considering constraints on the number of nodes derived from the DLA decomposition, we prove the inequalities between and . For later convenience, we introduce an abbreviation for operator sets ={A_1,A_2,⋯} and ={B_1,B_2,⋯} as [,]=0 def [A_j,B_k]=0 ∀ A_j∈, ∀ B_k∈, {,}=0 def{A_j,B_k}=0 ∀ A_j∈, ∀ B_k∈. Similarly, we define an abbreviation for an operator set ={A_1,A_2,⋯} and an operator B_k as [,B_k]=0 def [A_j,B_k]=0 ∀ A_j∈, {,B_k}=0 def{A_j,B_k}=0 ∀ A_j∈. In this section and Appendix <ref>, we ignore the coefficients of Pauli operators as we only consider the commutation and anti-commutation relations for the proof. §.§ DLA graph and simultaneous measurability The proof of Theorem <ref> requires understanding the relationship between the simultaneous measurability of gradient components and the DLA structure. Here, we introduce a graph representation associated with the DLA, a DLA graph, to visualize the DLA. 
In the DLA graph, each node P corresponds to the Pauli basis of the DLA , and two nodes P∈ and Q∈ are connected by an edge if and only if {P,Q}=0: 3.5 [ 1.5ex [P,Q]=0: [name=A][draw,circle,fill=white] at (-1,0) P; [name=B][draw,circle,fill=white] at (+1,0) Q;; 1.5ex {P,Q}=0: [name=A][draw,circle,fill=white] at (-1,0) P; [name=B][draw,circle,fill=white] at (+1,0) Q; (A) – (B); ] For example, the following figure is the DLA graph with ={IX,XI,ZY,YZ,ZZ,YY} for a two-qubits system: [scale=1.5] [name=A][draw,circle,fill=white] at (+0.5, +0.866) XI; [name=B][draw,circle,fill=white] at (-0.5, +0.866) IX; [name=C][draw,circle,fill=white] at (+1,0) YZ; [name=D][draw,circle,fill=white] at (-1,0) ZY; [name=E][draw,circle,fill=white] at (+0.5, -0.866) YY; [name=F][draw,circle,fill=white] at (-0.5, -0.866) ZZ; (A) – (C); (A) – (D); (A) – (E); (A) – (F); (B) – (C); (B) – (D); (B) – (E); (B) – (F); (C) – (E); (C) – (F); (D) – (E); (D) – (F); . Here, we define the connectivity on the DLA graph: [DLA-connectivity] We say that P,Q∈P_n (P≠ Q) are -connected and denote P Q when {P,Q}=0 or there exist R_1,R_2,⋯,R_d-1∈ such that {P,R_1}={R_1,R_2}=⋯={R_d-1,Q}=0. In terms of the DLA graph, P,Q∈P_n are -connected when there exists a path connecting P and Q on the DLA graph. This connectivity satisfies a transitive relation that P Q and Q R lead to P R for P,R∈P_n and Q∈. Below, we represent the -connectivity as 3.5 [ 1.5ex P and Q are -connected: [name=A][draw,circle,fill=white] at (-1,0) P; [name=B][draw,circle,fill=white] at (+1,0) Q; [name=conn] at (0,0.3) ; [Latex[length=2mm]-Latex[length=2mm]] (A) – (B); ] [ 1.5ex P and Q are not -connected: [name=A][draw,circle,fill=white] at (-1,0) P; [name=B][draw,circle,fill=white] at (+1,0) Q; [name=conn] at (0,0.3) ; [name=conn][align=right] at (0,0) ×; [Latex[length=2mm]-Latex[length=2mm]] (A) – (B); ] Note that the -connectivity are defined for P,Q∈P_n={I,X,Y,Z}^⊗ n not only for P,Q∈. We also define the separability of the DLA graph as follows: [Separability] We say that a DLA subgraph _1 ⊂ is separated when [_1, _2]=0, where _2=∖_1 is the complement of _1 in . We also say that the DLA graph is separable when it consists of two or more separated subgraphs. From a graph perspective, this separability means that a subgraph _1 is not connected to the rest of the DLA graph _2 by edges, i.e., ∀ P∈_1 and ∀ Q∈_2 are not -connected. Conversely, if the DLA graph is not separable, any two nodes are -connected. Let us revisit Condition <ref> in terms of the DLA graph. This condition states that, for any G∈, there exist Q_1,⋯,Q_m∈ such that {G,Q_1}=⋯={Q_m,O}=0. That is, the observable O is -connected to ∀ G∈ and thus ∀ G∈. Otherwise, the DLA graph can be decomposed into separated subgraphs as =_1 ⊔_2, where _1 (_2) is (not) -connected to O (i.e., [_1,_2]=[_2,O]=0). Then, the unitary circuit is also decomposed as U=U_1U_2 with U_1∈ e^_1 and U_2∈ e^_2, where we have defined _1=span(_1) and _2=span(_2). From [U_1,U_2]=[U_2,O]=0 (which is derived from [_1,_2]=[_2,O]=0), we have C=[ρ U^† OU]=[ρ U^†_1OU_1], indicating that U_2 does not affect the result. Therefore, we assume that such irrelevant gates are absent in the circuit. In this proof, we map the problem on and to the problem on the DLA graph. To this end, we need to interpret and in terms of the DLA graph. By definition, the expressivity corresponds to the total number of nodes in the DLA graph. On the other hand, identifying the gradient measurement efficiency from the DLA graph is not straightforward. 
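For concreteness, building the DLA graph and testing DLA-connectivity are easy to automate. The following minimal Python sketch is our own illustration of the two-qubit example above: Pauli operators are represented as plain strings, overall phases are ignored (as in the rest of this section), the helper names are illustrative, and the connectivity check is restricted to basis elements for brevity.

```python
# Build the DLA graph for the two-qubit example basis {IX, XI, ZY, YZ, ZZ, YY}
# and test connectivity in the graph by breadth-first search.
from collections import deque

def anticommute(p: str, q: str) -> bool:
    """Two Pauli strings anti-commute iff they differ on an odd number of
    sites where both act non-trivially."""
    return sum(a != "I" and b != "I" and a != b for a, b in zip(p, q)) % 2 == 1

basis = ["IX", "XI", "ZY", "YZ", "ZZ", "YY"]
edges = {p: {q for q in basis if q != p and anticommute(p, q)} for p in basis}

def dla_connected(p: str, q: str) -> bool:
    """True if p and q are joined by a path of anti-commuting basis elements."""
    seen, queue = {p}, deque([p])
    while queue:
        r = queue.popleft()
        for s in edges[r] - seen:
            seen.add(s)
            queue.append(s)
    return q in seen

print(sorted(edges["XI"]))         # ['YY', 'YZ', 'ZY', 'ZZ'] -- no edge to IX
print(dla_connected("XI", "IX"))   # True: connected via any anti-commuting node
```

Although XI and IX commute (no edge between them), they are still DLA-connected through, e.g., the path XI -> YZ -> IX, matching the definition of connectivity above.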
The first step in evaluating is to understand when two gradient operators _j and _k (namely ∂_j C and ∂_k C) are simultaneously measurable. The gradient operator _j is defined from the generator of the circuit G_j∈, which corresponds to a node of the DLA graph . Remarkably, whether _j and _k are simultaneously measurable is determined from the structural relations between the corresponding nodes G_j,G_k and the observable O on the DLA graph. The following lemmas show the relationship between the simultaneous measurability of gradient components and the DLA structure (see Appendix <ref> for their proofs): If G_j,G_k∈ are not -connected, ∂_j C and ∂_k C can be simultaneously measured. For j,k ≤, if G_j,G_k∈ anti-commute, ∂_j C and ∂_k C cannot be simultaneously measured. Consider -connected G_j,G_k∈ for j,k ≤. If {G_j,O}=[G_k,O]=0 or [G_j,O]={G_k,O}=0, ∂_j C and ∂_k C cannot be simultaneously measured. Consider -connected G_j, G_k∈ for j,k ≤. If there exists R∈ such that {G_j,R}=[G_k,R]=0 or [G_j,R]={G_k,R}=0, ∂_j C and ∂_k C cannot be simultaneously measured. For j<k ≤, if G_j=G_k, ∂_j C and ∂_k C cannot be simultaneously measured. Lemma <ref> (Lemmas <ref>–<ref>) gives the sufficient (necessary) condition of the DLA structure for measuring multiple gradient components simultaneously. Figure <ref> illustrates the DLA graph representation of these lemmas. We note that although Lemmas <ref>–<ref> assume j,k≤=(L), the contribution from the final part of the circuit with constant depth =(1) to the gradient measurement efficiency is negligible in the deep circuit limit (see Definition <ref>). Thus, in the following subsection, we only consider the gradient components for j,k≤. §.§ Trade-off between and Here, we prove Theorem <ref> using Lemmas <ref>–<ref>. Let us begin with showing Eq. (<ref>). By Lemma <ref>, ∂_j C and ∂_k C are not simultaneously measurable if G_j=G_k, which indicates that the maximum number of simultaneously measurable gradient components is bounded by the total number of nodes in the DLA graph, namely the expressivity . Since the gradient measurement efficiency is defined as the mean number of simultaneously measurable components in the gradient, we obtain ≥. To prove Eq. (<ref>), we identify the gradient measurement efficiency in the DLA graph. Lemmas <ref>–<ref> give the conditions for measuring two gradient components simultaneously. Based on these lemmas, we decompose the DLA graph into several subgraphs that consist of (potentially) simultaneously measurable nodes, deriving the upper limit of the gradient measurement efficiency. To this end, let us first decompose the DLA graph as = ( _1 ⊔⋯⊔_p ) ⊔( _1 ⊔⋯⊔_q ), where _'s and _'s are separated subgraphs that have one and multiple nodes, respectively (i.e., |_|=1 and |_|≥2, see Fig. <ref>). Here, p and q are the numbers of subgraphs _'s and _'s. Since _ and _ are separated, they satisfy [_,_]=[_,_]=0 ∀≠, [_,_]=0 ∀, . According to Lemma <ref>, the gradient components for not -connected G_j, G_k∈ can be measured simultaneously. Therefore, when G_j and G_k are nodes in different subgraphs in the decomposition of Eq. (<ref>), ∂_j C and ∂_k C are simultaneously measurable. We note that the element A_∈_ anti-commutes with the observable O, {A_,O}=0, because all the Pauli operators in must be -connected to O by Condition <ref>. Also, A_∈_ can appear in the quantum circuit just once due to Condition <ref> since it commutes with all the other generators in the circuit. 
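In practice, the separated one-node and multi-node subgraphs introduced above are nothing but the connected components of the anti-commutation graph. The following minimal sketch reuses the anticommute() helper from the earlier sketch; the extra element XX is an assumption added purely so that a one-node component appears.

```python
# Decompose a DLA basis into separated subgraphs, i.e. connected components of
# the anti-commutation graph.
def separated_subgraphs(basis):
    components, unvisited = [], set(basis)
    while unvisited:
        seed = unvisited.pop()
        component, frontier = {seed}, [seed]
        while frontier:
            p = frontier.pop()
            for q in [q for q in unvisited if anticommute(p, q)]:
                unvisited.remove(q)
                component.add(q)
                frontier.append(q)
        components.append(component)
    return components

comps = separated_subgraphs(["IX", "XI", "ZY", "YZ", "ZZ", "YY", "XX"])
singletons = [c for c in comps if len(c) == 1]   # commute with everything else
multis = [c for c in comps if len(c) > 1]
print(singletons)        # [{'XX'}]
print(len(multis[0]))    # 6: the six mutually DLA-connected operators above
```

The singleton component plays the role of a one-node separated subgraph, and the six-element component the role of a multi-node one.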
To examine the simultaneous measurability in each _, we further decompose it into r_ subgraphs, based on Lemmas <ref> and <ref>: _ = _^1⊔⋯⊔_^r_, where _^a's satisfy [_^a,_^b]=0 or {_^a,_^b}=0 and ∀ a≠ b, ∃ c, s.t. { [_^a,_^c]=0 {_^b,_^c}=0 . or { {_^a,_^c}=0 [_^b,_^c]=0. . Equation (<ref>) means that all the nodes in a subgraph _^a share the same (anti-)commutation relations with all the nodes in a subgraph _^b: [P,Q]=0 ∀ P∈_^a, ∀ Q∈_^b or {P,Q}=0 ∀ P∈_^a, ∀ Q∈_^b (see Fig. <ref>). Considering a=b, this condition naturally leads to the commutation relation within each _^a: [P,Q]=0 ∀ P,Q∈_^a. Thus, Lemma <ref> does not exclude the simultaneous measurability for P,Q∈_^a. Also, because of Eq. (<ref>), it is impossible to merge several _^a's into a larger subgraph satisfying Eq. (<ref>). That is, when P, Q∈_ are nodes in different _^a's, there always exists R∈_ such that [P,R]={Q,R}=0 or {P,R}=[Q,R]=0. Therefore, for circuit generators G_j, G_k∈ belonging to different subgraphs _^a's, their corresponding gradient components ∂_j C and ∂_k C are not simultaneously measurable by Lemma <ref>. Finally, we decompose _^a into the commuting and anti-commuting parts with the observable O, based on Lemma <ref>: _^a = _^a+⊔_^a-, where _^a± commute and anti-commute with the observable O, respectively: [_^a+, O]=0, {_^a-, O}=0. By Lemma <ref>, for G_j∈_^a+ and G_k∈_^a-, their corresponding gradient components ∂_j C and ∂_k C are not simultaneously measurable. In summary, when G_j,G_k∈ are nodes in different separated subgraphs _ or _, ∂_j C and ∂_k C are simultaneously measurable. Meanwhile, within each _, the necessary condition for simultaneously measuring ∂_j C and ∂_k C for G_j,G_k∈_ is that G_j and G_k are nodes in the same _^a±. Therefore, the maximum size of _^a± bounds the maximum number of simultaneously measurable gradient components in each _. Using this decomposition, we first show the trade-off inequality for q=0, where q is the number of _. This case corresponds to the commuting generator circuit in Ref. <cit.>, where all the circuit generators A_∈_ are mutually commuting. Then, the number of gates L is finite by Condition <ref>. In this case, the total number of nodes is p, and they can all be measured simultaneously by Lemma <ref>. Therefore, the gradient measurement efficiency and the expressivity are ==p. Meanwhile, A_1,⋯,A_p and A_1A_1,⋯,A_1A_p are Pauli operators that differ and commute with each other [For j≠ k, A_j≠ A_k and A_1A_j ≠ A_1 A_k trivially hold. For any j and k, A_j ≠ A_1A_k also holds because of {A_j,O}=[A_1A_k,O]= 0.]. Because the maximum number of mutually commuting Pauli operators is 2^n, we have 2p≤ 2^n. From Eqs. (<ref>) and (<ref>), we obtain ≤ 4^n/ -. Next, to prove the case of q≠0, we define v= _^amax( | _^a| ), w= _^a±max( | _^a±| ), _v= _^aargmax( | _^a| ), _w= _^a±argmax( | _^a±| ). Then, by Lemmas <ref>–<ref>, the gradient measurement efficiency and the expressivity (i.e., the total number of nodes) are bounded as ≤ qw, ≤ p + v(r_1 + ⋯ + r_q). Note that there are no contributions from _ to the right-hand side of Eq. (<ref>). This is because, since the generator A_∈_ commutes with all the other generators, it cannot appear more than once in the circuit due to Condition <ref> and thus does not contribute to the gradient measurement efficiency in the deep circuit limit. Besides Eq. (<ref>), the DLA decomposition uncovers another constraint on the expressivity . To see that, let us define a set of Pauli operators as follows. 
If v≥ w+p, we define = {E_1E_1, E_1E_2, ⋯, E_1E_v} with _v = {E_1,⋯,E_v}. Otherwise, we define = {F_1F_1, F_1F_2, ⋯, F_1F_w}⊔{A_1,⋯,A_p} with _w = {F_1,⋯,F_w} and _={A_}. From Eq. (<ref>), all the operators in commute with themselves and with : [,]=[,]=0. Thus, is the stabilizer (or symmetry) of the quantum circuit U(), limiting the degrees of freedom in the DLA, i.e., the expressivity . We can derive the inequality (<ref>) by counting the remaining degrees of freedom within the subspace stabilized by . For instance, let us consider the simplest case of p=0, q=1, and [_v,O]=0 or {_v,O}=0 (i.e., _v=_w). In this case, v=w=|| and [O,]=0 hold. Then the gradient measurement efficiency is bounded as ≤ w=|| from Eq. (<ref>). As for the expressivity, on the other hand, the stabilizers constrain the dimension of the DLA as 4^n/|| (see Lemma <ref> in Appendix <ref>). Furthermore, since [,]=[O,]=0, the stabilizers are not included in the generators and thus the DLA because of Condition <ref>. Hence, we have ≤ 4^n/||-||. These results lead to the inequality ≤ 4^n/ -. In Appendix <ref>, this trade-off inequality is proved in general cases, where we use the following nontrivial constraints derived from the DLA decomposition, the stabilizers , and Eq. (<ref>): ≤4^n v/4^q-1||^2 + (3q-4)v + p and 4^n v/4^q-1||^2 + (3q-4)v + p ≤4^n/qw - qw. Combining Eqs. (<ref>), (<ref>), and (<ref>), we finally obtain the trade-off inequality ≤ 4^n/ -, as required. § STABILIZER-LOGICAL PRODUCT ANSATZ While Theorem <ref> reveals the general trade-off relation between gradient measurement efficiency and expressivity, designing efficient quantum models that reach the upper limit of the trade-off inequality is a separate issue for realizing practical QML. Here, we propose a general ansatz of the commuting block circuit (CBC) called the stabilizer-logical product ansatz (SLPA), which is constructed from stabilizers and logical operators to reach the upper limit of the trade-off inequality. This ansatz is also insightful in understanding the trade-off relation from a symmetry perspective. §.§ Commuting block circuits This subsection briefly reviews the CBC proposed in Ref. <cit.> as a variational quantum circuit allowing for efficient gradient estimation. The CBC consists of B block unitaries: U() = ∏_a=1^B U_a(_a), where each block contains multiple variational rotation gates as U_a(_a) = ∏_jexp(iθ^a_j G^a_j). Here, G_j^a is the Pauli generator of the jth rotation gate in the ath block, and _a=(θ^a_1, θ^a_2, ⋯) is the variational rotation angles. The generators of CBC must satisfy the following two conditions. First, generators within the same block are commutative: [G^a_j, G^a_k] = 0 ∀ j,k. Second, generators in any two distinct blocks are either all commutative or all anti-commutative: [G^a_j,G^b_k] = 0 ∀ j,k or {G^a_j,G^b_k } = 0 ∀ j,k. The structure of CBC is shown in Fig. <ref>. This specific structure of CBC allows us to measure the gradient components of the cost function C()=[U()ρ U^†() O] (O is a Pauli operator) with only two different quantum circuits for each block. To measure the gradient components for the ath block, we divide the generators of the rotation gates _a={G^a_j}_j into the commuting and anti-commuting parts with the observable O: _a = _a^ com⊔_a^ ant, where [_a^ com, O]={_a^ ant,O}=0. Then the gradient components in _a^ com (_a^ ant) can be measured simultaneously using the linear combination of unitaries with an ancilla qubit (see Appendix <ref> or Ref. <cit.> for details). 
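As a concrete illustration of this measurement scheme, the following minimal sketch checks the two commutation conditions of the CBC and performs the commuting/anti-commuting split of one block with respect to the observable. It reuses the anticommute() helper from the earlier sketch; the two-qubit block structure and the observable are a toy example of our own, and the helper names are illustrative.

```python
# Check the CBC conditions (generators commute within a block; any two blocks
# are uniformly commuting or uniformly anti-commuting) and split one block into
# the parts commuting / anti-commuting with the observable.
def is_cbc(blocks):
    for blk in blocks:
        if any(anticommute(p, q) for p in blk for q in blk if p != q):
            return False                               # intra-block condition
    for i, a in enumerate(blocks):
        for b in blocks[i + 1:]:
            if len({anticommute(p, q) for p in a for q in b}) > 1:
                return False                           # inter-block condition
    return True

def split_block(block, obs):
    com = [g for g in block if not anticommute(g, obs)]
    ant = [g for g in block if anticommute(g, obs)]
    return com, ant

blocks = [["XI", "IX"], ["ZZ", "YY"]]         # toy two-qubit CBC
print(is_cbc(blocks))                         # True
print(split_block(blocks[0], "ZZ"))           # ([], ['XI', 'IX']): one setting suffices
```

In this toy example both generators of the first block anti-commute with the observable, so their gradient components belong to the same measurement setting.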
Therefore, the full gradient of U() can be estimated with only 2B types of quantum measurements, which is independent of the number of rotation gates in each block (to be precise, 2B-1 quantum measurements are sufficient because the commuting part of the final block does not contribute to the gradient). This allows us to measure the gradient more efficiently than conventional variational models based on the parameter-shift method, where the measurement cost is proportional to the number of parameters. The basic framework of the CBC has been proposed, but there are still several unresolved issues. First, it remains unclear how the specific structure of the commuting block circuit affects the QNN expressivity. Second, a general method to construct the CBC or find the generators G^a_j that satisfy the commutation relations of Eqs. (<ref>) and (<ref>) has not yet been established. Below, we propose the SLPA as a general ansatz of CBC that can reach the upper limit of the trade-off inequality (<ref>). This ansatz also provides a clear understanding of the trade-off relation between gradient measurement efficiency and expressivity from a symmetry perspective. Before introducing the SLPA, we clarify the relationship between the CBC structure and Lemmas <ref>–<ref>. First, Lemma <ref> states that measuring two gradient components ∂_j C and ∂_k C simultaneously requires that the corresponding generators G_j and G_k are commutative. This requirement is satisfied in the CBC since the generators in the same block are all commutative as Eq. (<ref>). Second, by Lemma <ref>, two gradient components ∂_j C and ∂_k C are not simultaneously measurable if the corresponding generators G_j and G_k have different (anti-)commutation relations with the observable O, {G_j,O}=[G_k,O]=0 or [G_j,O]={G_k,O}=0. This corresponds to the fact that the gradient of CBC must be measured separately for the commuting and anti-commuting parts _a^ com and _a^ ant in each block. Finally, by Lemma <ref>, two gradient components ∂_j C and ∂_k C are not simultaneously measurable if there exists another generator R∈ such that {G_j,R}=[G_k,R]=0 or [G_j,R]={G_k,R}=0. In the CBC, from Eq. (<ref>), there does not exist such R for generators in the same block, and thus the simultaneous gradient measurement is possible. §.§ Stabilizer-logical product ansatz §.§.§ Stabilizer formalism Here, we propose a general ansatz of CBC, the SLPA, which leverages the symmetric structure of the quantum circuit to enhance the gradient measurement efficiency. To this end, let us begin with defining a stabilizer group ={S_1,S_2,⋯}⊂, which satisfies [S_j, S_k] = 0 ∀ j,k, where S_j∈ is a Pauli operator (note that the following circuit construction is valid for any stabilizer group). Given that s is the number of independent stabilizers in , the order of is given by ||=2^s, where an arbitrary element of is written as a product of the s independent stabilizers. For this stabilizer group, we consider logical operators ={L_1,L_2,⋯}⊂, which commute with the stabilizers: [S_j, L_a] = 0 ∀ j,a, where L_a∈ is a Pauli operator. The terms stabilizer and logical operators are named after their commutation relations in the field of quantum error correction. Then, by taking the product of the stabilizer S_j and the logical operator L_a, we construct the jth generator of the ath block as (Fig. <ref>) G_j^a = S_jL_a. 
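The product construction G_j^a = S_jL_a is purely combinatorial at the level of Pauli strings. The following minimal sketch uses a phase-free string product, consistent with the convention of dropping coefficients; the helper names are illustrative, and the stabilizer and logical sets are chosen so as to reproduce the toy CBC used in the earlier sketch.

```python
# Build SLPA generators G_j^a = S_j L_a by site-wise Pauli-string products with
# the overall phase dropped.
_PAIRS = {("X", "Y"): "Z", ("Y", "Z"): "X", ("Z", "X"): "Y"}

def pauli_product(p: str, q: str) -> str:
    out = []
    for a, b in zip(p, q):
        if a == "I":
            out.append(b)
        elif b == "I":
            out.append(a)
        elif a == b:
            out.append("I")
        else:
            out.append(_PAIRS.get((a, b)) or _PAIRS[(b, a)])
    return "".join(out)

stabilizers = ["II", "XX"]     # one independent stabilizer (s = 1)
logicals = ["XI", "ZZ"]        # commute with every stabilizer
blocks = [[pauli_product(s, l) for s in stabilizers] for l in logicals]
print(blocks)                  # [['XI', 'IX'], ['ZZ', 'YY']] -- the toy CBC above
```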
Using these generators, we obtain the SLPA U_SL()=∏_a U_a(_a) with block unitaries U_a(_a) = ∏_j exp(iθ_j^a G_j^a), where each block can contain at most ||=2^s rotation gates. The generators of this ansatz always satisfy the commutation relations of the CBC in Eqs. (<ref>) and (<ref>) as [G^a_j, G_k^a] = [S_jL_a, S_kL_a] = 0 ∀ j,k and [G^a_j, G_k^b] = [S_jL_a, S_kL_b] = 0 ∀ j,k if [L_a,L_b]=0, {G^a_j, G_k^b} = {S_jL_a, S_kL_b} = 0 ∀ j,k if {L_a,L_b}=0, where we have used Eqs. (<ref>) and (<ref>). Therefore, the SLPA is an ansatz of CBC. Remarkably, the SLPA has symmetry : [U_SL(),]=0, which is derived from [G_j^a,]=[S_jL_a, ]=0. Furthermore, when the stabilizers commute with the observable O (i.e., [O,]=0), we can measure all gradient components within a block simultaneously because the generators of the block are either all commutative or all anti-commutative with O: [G_j^a,O]=0 for ∀ j or {G_j^a,O}=0 for ∀ j. Conversely, any CBC can be formulated as an SLPA. To show this, we consider a CBC with generators ={G_j^a} that satisfy the commutation relations of Eqs. (<ref>) and (<ref>). For these generators, we can define stabilizers ={G^a_1G^a_j }_j,a={S_ja}_j,a and logical operators ={G^a_1}_a={L_a}_a with S_ja=G^a_1G^a_j and L_a=G^a_1. These operator sets and not only satisfy the requirements of stabilizers and logical operators in Eqs. (<ref>) and (<ref>) but also reproduce the original CBC through Eq. (<ref>). That is, the CBC with generators ={G_j^a} is equivalent to the SLPA constructed from the stabilizers ={G^a_1G^a_j }_j,a and the logical operators ={G^a_1}_a. In fact, and obey the commutation relations of stabilizers and logical operators as [S_ja,S_kb]=[G_1^aG_j^a, G_1^bG_k^b] = 0 and [S_ja,L_b]=[G_1^aG_j^a, G_1^b] = 0, where we have used Eqs. (<ref>) and (<ref>). Besides, the CBC can be constructed from and by taking their product as G_j^a =(G_1^a G_j^a)G_1^a = S_jaL_a. Therefore, any CBC can be formulated with stabilizers and logical operators, which implies that the SLPA is a general ansatz of CBC. §.§.§ Enhancing gradient measurement efficiency of symmetric circuits Using the SLPA, we can enhance the gradient measurement efficiency of a symmetric circuit. Suppose that we have an -symmetric circuit U_SC(), whose unitary is given by U_SC() = ∏_a=1^B exp(iθ_a L_a). Here, L_a is a Pauli operator satisfying [L_a, ]=0. Typically, L_a is a local Pauli operator, and such symmetric circuits are often used in various applications. We can construct the SLPA from this symmetric circuit via Eqs. (<ref>)–(<ref>) by regarding the generator of the symmetric circuit L_a as the logical operator. In other words, we construct each block of the SLPA from each rotation gate of the symmetric circuit by taking the product of the generator L_a and the stabilizers . Whereas the original symmetric circuit has low gradient measurement efficiency in general, the SLPA constructed in this way has high gradient measurement efficiency, where at most ||=2^s gradient components of each block can be measured simultaneously. In addition, symmetry retains even in the SLPA (i.e., [U_SL(),]=0). Therefore, this SLPA enhances the gradient measurement efficiency of the symmetric circuit while maintaining the symmetric property. We note that the generator of the SLPA, G_j^a=S_jL_a, is generally global, which may make it challenging to implement the SLPA in noisy intermediate-scale quantum devices <cit.>. 
Nevertheless, some (early) fault-tolerant quantum computing architectures, such as the surface code with lattice surgery, can easily perform global rotations <cit.>. Thus, the SLPA is a variational ansatz that is particularly suitable for the (early) fault-tolerant quantum computing era. §.§.§ Backpropagation scaling The stabilizer formalism also clarifies when the CBC achieves the backpropagation scaling, which specifies the scalability of learning models with respect to the number of training parameters <cit.>. The backpropagation scaling is defined as Time(∇ C)/Time(C)≤(log L), where Time(C) and Time(∇ C) are the time complexity of estimating the cost function C and its gradient ∇ C with a certain accuracy. Although Ref. <cit.> proved that the backpropagation scaling is achievable for B=1, whether it is possible even for B≠ 1 remains unclear. In the CBC, the cost function can be estimated with one quantum circuit, whereas 2B-1 quantum circuits are needed to estimate the gradient. Therefore, when ignoring the difference between the single circuit execution times for estimating C and ∇ C, we have Time(∇ C)/Time(C) ∼ B. Also, in the stabilizer formalism, the CBC has at most L=2^s B training parameters. Therefore, the backpropagation scaling is written as B ≤(log B + slog 2). This indicates that s must increase proportionally to or faster than B to achieve the backpropagation scaling. We note that this discussion ignores the circuit execution time, thus requiring a more thorough analysis to understand the precise condition for the backpropagation scaling. §.§ Trade-off in stabilizer-logical product ansatz Here, we show that the SLPA can reach the upper limit of the trade-off inequality (<ref>) when the stabilizers commute with the observable O. This discussion also offers a clear understanding of the trade-off relation between gradient measurement efficiency and expressivity. Let us first consider the gradient measurement efficiency of SLPA. When the stabilizers commute with the observable O, the generators of each block, G_j^a=S_jL_a, are either all commutative or all anti-commutative with the observable O: [G^a_j,O]=0 for ∀ j or {G^a_j,O}=0 for ∀ j. Then, we can simultaneously measure all gradient components of each block in the SLPA. Therefore, given that each block can contain at most ||=2^s rotation gates, the gradient measurement efficiency, which is the mean number of simultaneously measurable components in the gradient, obeys ≤ 2^s. The equality holds if all the blocks of the SLPA have 2^s rotation gates. While the stabilizers lead to high gradient measurement efficiency, they also limit the expressivity of the quantum circuit. Since all the generators of the SLPA commute with the stabilizers (i.e., [,]=0), the DLA generated by them commute with the stabilizers as well (i.e., [,]=0). Then, the dimension of the subspace in 𝔰𝔲(2^n) stabilized by (commuting with) is 4^n/||=4^n/2^s (see Lemma <ref> in Appendix <ref>). Furthermore, given that [,]=[O,]=0, the stabilizers are not included in the generators and thus the DLA due to Condition <ref>. This consideration provides the upper limit of the DLA dimension, namely the expressivity: ≤4^n/2^s - 2^s, where the equality holds if the DLA covers the entire subspace stabilized by , except for . Combining inequlities (<ref>) and (<ref>), we obtain the trade-off inequality (<ref>) for the SLPA, ≤ 4^n/-. 
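A quick numerical check confirms that these SLPA operating points sit exactly on the trade-off boundary. In the sketch below, the variable names F and E follow the symbols of the theorem above, and the choice n = 6 is an arbitrary assumption.

```python
n = 6
for s in range(n):                          # s independent stabilizers, s < n
    F, E = 2**s, 4**n // 2**s - 2**s        # SLPA efficiency and expressivity
    assert E == 4**n // F - F               # the trade-off boundary is saturated
    assert F <= E and F <= 2**(n - 0.5)     # consistent with E >= F and F <= 2^(n-1/2)
print("SLPA points saturate the trade-off bound for s = 0, ...,", n - 1)
```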
Then, the upper limit of the inequality is attained when the numbers of rotation gates in all blocks are ||=2^s and the DLA covers the entire subspace stabilized by , except for [Fig. <ref> (b)]. These results intuitively describe the trade-off relation between gradient measurement efficiency and expressivity from a symmetry point of view: we can measure several gradient components simultaneously by leveraging the symmetric structure of the quantum circuit, which instead constrains the expressivity. § NUMERICAL DEMONSTRATION: LEARNING A SYMMETRIC FUNCTION In this section, we numerically verify the trade-off relation between gradient measurement efficiency and expressivity and demonstrate the high performance of the SLPA in a specific learning problem. §.§ Problem and models Due to its symmetric structure, the SLPA is particularly effective in problems involving symmetry. Such problems are common in quantum chemistry, physics, and machine learning, which are the main targets of quantum computing. In these problems, symmetry provides a guiding principle for designing efficient quantum algorithms or circuits. In VQAs, for instance, the accuracy, trainability, generalization, and feasibility of QNNs can be improved by designing quantum circuits based on the symmetry of quantum states and dynamics of interest <cit.>. In addition to these improvements, the SLPA increases the gradient measurement efficiency up to the theoretical limit by leveraging the symmetry of problems. For demonstration, we consider the task of learning an unknown symmetric real scalar function f(ρ), where ρ is an input quantum state. Suppose that we know in advance that f(ρ) satisfies f(ρ)=f(S_jρ S_j^†) for ∀ S_j∈. To learn this function, we employ a quantum model h_(ρ)=[U()ρ U^†() O] to approximate f(ρ). Then the design of the parameterized quantum circuit U() is crucial to approximate the target function with high accuracy. In learning this type of symmetric function, an equivariant QNN is effective <cit.>. The equivariant QNN is represented by an -symmetric circuit U() with an -symmetric observable O, [U(),]=[O,]=0, to automatically satisfy the same invariance as the target function h_(ρ)=h_(S_jρ S_j^†), which improves the trainability and generalization of the learning model. Here, we show that the SLPA can enhance the gradient measurement efficiency of the equivariant QNN while maintaining high trainability and generalization. For concreteness, we define the target symmetric function as f(ρ)=[U_tagρ U_tag^† O] with an -symmetric target unitary U_tag and the -symmetric observable O (i.e., [U_tag,]=[O,]=0). Assume that is given by ={I,∏_j=1^nX_j,∏_j=1^nY_j,∏_j=1^nZ_j } with even n. Here, the number of independent operators in is s=2. To satisfy this symmetry, the target unitary is set as U_tag=U_++⊕ U_+-⊕ U_-+⊕ U_–, where U_±± is a 2^n-2-dimensional Haar-random unitary acting on the eigenspace of ∏_j=1^nX_j=±1 and ∏_j=1^nZ_j=±1. We also assume the symmetric observable O=X_1X_2. To learn this function, we use N_t training data {|x_i⟩,y_i}_i=1^N_t, where |x_i⟩=⊗_j=1^n |s_i^j⟩ and y_i=f(|x_i⟩) are the product state of single qubit Haar-random states and its label, respectively. The model is optimized by minimizing a loss function Loss = 1/N_t∑_i=1^N_t[ y_i - h_(|x_i⟩) ]^2. We also prepare N_t test data for validation, which are sampled independently of training data. Below, we set N_t=50. To solve this problem, we first consider a local symmetric circuit commuting with as follows [Fig. 
<ref> (a)]: U_SC() = ∏_d=1^D ∏_k=1^3nexp(iθ_k^d L_k). Here, a local Pauli operator L_k is defined as L_k = P_2k-1^μ_kP_2k^μ_k for k=an + b, P_2k^ν_kP_2k+1^ν_k for k=(a+1/2)n + b with P_j^μ= X_j for μ=3ℓ+1 Y_j for μ=3ℓ+2 Z_j for μ=3ℓ, where a∈{0,1,2}, b∈{1,⋯,n/2}, μ_k=a+b, ν_jk=a+b+n/2, ℓ∈ℤ, and P^μ_j = P^μ_j+n. The number of parameters in this model is L=3nD. The set of generators is given by _SC={L_k}={X_j X_j+1,Y_j Y_j+1,Z_j Z_j+1}_j=1^n, which commutes with the stabilizers . In Appendix <ref>, we show that the DLA generated by _SC covers the entire subspace stabilized by , except for , which leads to =4^n/4 - 4. Then, although the upper limit of the gradient measurement efficiency is =4 by Theorem <ref>, this symmetric circuit cannot reach the upper limit but instead exhibits =1 as shown later. The SLPA is constructed by taking the products of the stabilizers and the generators of U_SC() as [Fig. <ref> (b)] U_SL() = ∏_d=1^D ∏_k=1^3n U_k(_k^d), where U_k(_k^d) is the block unitary defined as U_k(_k^d) = ∏_j=1^4 exp(iθ_kj^d S_j L_k). The number of parameters is L=12nD. Here, the set of the generators is given by _SL=_SC×. The dimension of the DLA, or the expressivity, is the same as the symmetric circuit, =4^n/4 - 4. Meanwhile, given that the generators of each block are either all commutative or all anti-commutative with the observable O (which is derived by [O,]=0), we can measure four gradient components of each block simultaneously. Therefore, in contrast to the symmetric circuit U_SC(), this SLPA can reach the upper limit of the gradient measurement efficiency =4. For comparison, we also consider the following local non-symmetric circuit [Fig. <ref> (c)]: U_NSC() = ∏_d=1^D V_ent(^d_3) V_rot(^d_1,^d_2) with V_rot(^d_1,^d_2) = ∏_j=1^n exp(iθ_2j^d Y_j)exp(iθ_1j^d X_j), V_ent(^d_3) = ∏_j=1^n exp(iθ_3j^d Z_jZ_j+1), where the number of parameters is L=3nD. The generators _NSC={X_j,Y_j,Z_jZ_j+1}_j=1^n do not commute with the stabilizers and lead to the maximum expressivity in the full Hilbert space =4^n-1 <cit.>. Hence, the gradient measurement efficiency of this model must be =1 by Theorem <ref>. In gradient estimation, we employ the parameter-shift method for the symmetric and non-symmetric circuits and the direct measurement with an ancilla qubit for the SLPA (see Appendix <ref> or Ref. <cit.> for details), where N_shot measurement shots are used per circuit. We minimize the loss function using the Adam algorithm <cit.> based on the estimated gradient. The hyper-parameter values used in this work are initial learning rate =10^-3, β_1=0.9, β_2=0.999, and ϵ=10^-8. We also adopt the stochastic gradient descent <cit.>, where only one training data is used to estimate the gradient at each iteration. §.§ Gradient measurement efficiency First, we investigate the gradient measurement efficiency for the SLPA and symmetric and non-symmetric circuits. In Figs. <ref> (a)–(c), we numerically compute the commutation relations between all the pairs of the gradient operators for random , where the black and yellow regions indicate [_j(),_k()]=0 and [_j(),_k()]≠0, respectively. In the symmetric and non-symmetric circuits, most pairs of gradient operators are not commutative, which implies that we cannot measure two or more gradient components simultaneously, with a few exceptions. In other words, the gradient measurement efficiency is almost one. In the SLPA, on the other hand, we observe a 4×4 block structure in Fig. <ref> (b). 
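The 4×4 blocks correspond to the four products S_jL_k that generate each block of this SLPA. For concreteness, one such block can be written down with the pauli_product() and anticommute() helpers from the earlier sketches; the choice n = 4 and the particular logical operator are assumptions made only for brevity.

```python
# One SLPA block of the model above: stabilizers {I, prod X, prod Y, prod Z}
# multiplied by the logical operator X1 X2, compared with the observable X1 X2.
n = 4
stabilizers = ["I" * n, "X" * n, "Y" * n, "Z" * n]
logical = "XX" + "I" * (n - 2)
block = [pauli_product(s, logical) for s in stabilizers]
O = "XX" + "I" * (n - 2)
print(block)                                # ['XXII', 'IIXX', 'ZZYY', 'YYZZ']
print({anticommute(g, O) for g in block})   # {False}: all four share one setting
```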
This means that four gradient operators can be measured simultaneously, indicating that the gradient measurement efficiency of the SLPA is almost four. Note that the gradient components for the final parts of the circuits [the lower right parts of Figs. <ref> (a)–(c)] are simultaneously measurable because the generators in the final part are effectively not -connected by assuming that the initial part of the circuit is already applied to the input state ρ and not included in (use Lemma <ref>). Figure <ref> (d) shows the gradient measurement efficiency for finite-depth circuits, ^(L), with varying the number of parameters L. In the symmetric and non-symmetric circuits, ^(L) has a relatively large value for small L but decreases to one as the circuit becomes deeper. In particular, since the non-symmetric circuit has =4^n-1, =1 is the theoretical upper limit in L→∞ according to Theorem <ref>. The SLPA shows a similar behavior, but ^(L) of the SLPA is larger than those of the other two models for the entire region of L, eventually approaching the theoretical upper limit =4 in the deep circuit limit. These results support the validity of our trade-off relation and show the high gradient measurement efficiency of the SLPA in both shallow and deep circuit regions. §.§ Reduction in training cost The SLPA can decrease the measurement cost required for training due to its high gradient measurement efficiency. We demonstrate this advantage in learning the symmetric function f(ρ). In this demonstration, we use N_shot = 1000 measurement shots per circuit for gradient estimation and set the number of parameters to L=96 for all the models. Figure <ref> illustrates that the SLPA shows significantly faster convergence of training and test losses compared to the other models in terms of the cumulative number of measurement shots. This fast convergence stems from the high gradient measurement efficiency of the SLPA. As discussed above, the SLPA has =4 in the deep circuit limit, which is four times greater than the other two models with =1. Furthermore, the parameter-shift method used in the symmetric and non-symmetric circuits requires twice the number of circuits for estimating a gradient component than the direct measurement used in the SLPA. Thus, the number of measurement shots per epoch for the SLPA is one-eighth that of the other models in total, leading to a drastic reduction in the quantum measurement resources needed for training. Besides the high gradient measurement efficiency, the SLPA exhibits a high generalization performance. As shown in Fig. <ref> (b), the test loss after training in the SLPA is comparable to that of the symmetric circuit. This is because, as in the symmetric circuit, the symmetry of the unknown function f(ρ) is encoded in the SLPA to improve generalization. In contrast, the non-symmetric circuit sufficiently reduces the training loss but not the test loss, indicating its low generalization. These results highlight the importance of symmetry encoding in this problem. In Appendix <ref>, we provide some additional results for fewer N_shot. Even in these cases, similar trends to the results in this section can be observed, although the final accuracy after training is worse due to the statistical errors in the gradient estimation. §.§ High trainability High trainability is essential to solve large-scale problems in QNNs. 
There is a concern that the global operators of the SLPA may lead to low trainability due to a barren plateau <cit.>, namely the exponentially flat landscape of the cost function. This subsection shows that such a concern is unnecessary, i.e., that the global operators do not directly cause a barren plateau in the SLPA. Rather, our model can be less prone to the barren plateau phenomenon than the based symmetric model. First of all, numerical evidence is presented in Fig. <ref> (a), where we investigate the variance of the cost function in the parameter space for the symmetric circuit and the SLPA with varying the number of qubits n and the number of parameters L. In the symmetric circuit, Var_[C()] decreases as the circuit depth L/n and finally converges to a finite value. The converged value of Var_[C()] becomes exponentially small as the number of qubits n increases, which indicates that a barren plateau occurs in a deep symmetric circuit. In the SLPA, we observe a similar behavior, where Var_[C()] decreases as L/n and finally converges to almost the same value as the symmetric circuit. However, the convergence of Var_[C()] is slower than that of the symmetric circuit. This result implies the high trainability of our model (in addition, together with Fig. <ref> (d), these results also support the compatibility of high gradient measurement efficiency and trainability). We provide an intuitive mechanism for the slow convergence of the cost function variance in the SLPA. To this end, we follow the argument in Ref. <cit.>. Let us consider the observable in the Heisenberg picture Õ()= U^†()O U(), where the cost function is written as the Hilbert–Schmidt inner product of the input state ρ and the observable Õ(): C()= [ ρÕ() ]. When Õ() lives in an exponentially large subspace, the inner product [ ρÕ() ] will be exponentially small due to the curse of dimensionality, which is the fundamental cause of barren plateaus. To quantify this mechanism, let be the operator subspace where Õ() lives. That is, Õ() can be expanded as Õ() = ∑_P ∈α_P() P with Pauli operators P. Then, given that the cost function is the inner product of Õ() and ρ in the subspace , the variance of the cost function decreases inversely proportional to the dimension of : Var_[C()]∼ 1/poly(dim()). For example, an exponentially large dim() generally results in an exponentially small variance of the cost function. Therefore, dim() gives an insight into barren plateaus. We investigate dim() of the symmetric circuit and the SLPA to show that our model has smaller dim() when the stabilizers commute with the observable O. Suppose that Õ() in the symmetric circuit of Eq. (<ref>) is expanded as Õ_SC() = ∑_P ∈_SCα_P() P, where P is a Pauli operator and α_P() is a coefficient. In the SLPA of Eq. (<ref>), given that the stabilizers ={S_1,S_2,⋯} are closed under products (i.e., S_jS_k ∈), we have Õ_SL() = ∑_S∈∑_P ∈_SCβ_P,S() PS, where β_P,S() is a coefficient. One can verify Eq. (<ref>) using [S_j,S_k]=[S_j,L_k]=[S_j,O]=0 and e^-iθ Q P e^iθ Q= (cos^2 θ±sin^2 θ)P - i(cosθsinθ∓cosθsinθ)PQ iteratively in Õ() (P,Q are Pauli operators, and ± corresponds to [P,Q]=0 and {P,Q}=0 respectively). Equation (<ref>) leads to the operator subspace of SLPA as _SL = _SC×. This discussion gives the relationship between the dimensions of the operator subspaces in the symmetric circuit and the SLPA: dim(_SL) ≤ 2^s dim(_SC). Therefore, if s=(polylog(n)), the SLPA never exponentially spoils the trainability of the symmetric circuit. 
For example, let us consider a local quantum circuit as Eq. (<ref>). When the observable O is local <cit.>, since Õ() acts only on (d) qubits inside the backward light cone as shown in Fig. <ref> (b) (d is the circuit depth), we have dim(_SC)∼(4^d) and thus dim(_SL)≲(4^d+s/2). If d and s are both (polylog(n)), we obtain dim(_SL)∈(poly(n)), indicating that the SLPA does not suffer from barren plateaus. We note that the above discussion does not take into account the differences in the number of rotation gates between U_SC and U_SL. To fairly compare dim() of the symmetric circuit and the SLPA, we should match the total number of rotation gates for these two models. In the SLPA, replacing d with d/2^s in Eq. (<ref>) to match the total number of gates [see Fig. <ref> (b)], we have dim(_SL)≲(4^d/2^s+s/2). Comparing Eqs. (<ref>) and (<ref>), dim(_SL) can be smaller than dim(_SC) when d/2^s + s/2≲ d, which suggests that the SLPA can have a larger variance of the cost function than the symmetric circuit with the same number of rotation gates. We emphasize that since this result does not rely on the details of the model, except for the localities of U_SC and O and some commutation relations, we would observe similar behavior in other SLPAs. Finally, in sufficiently deep circuits, Var_[C()] converges to the same value in both models because the variance of the cost function in the deep circuit limit is determined by the expressivity <cit.>, Var_[C()]∼ 1/, and these two models have the same =4^n/4 - 4. § DISCUSSIONS AND CONCLUSIONS This work has proved the general trade-off relation between gradient measurement efficiency and expressivity in a wide class of deep QNNs. This trade-off reveals the theoretical limitation of QNNs that a more expressive QNN requires a higher measurement cost for gradient estimation. On the other hand, it also opens up new possibilities to increase gradient measurement efficiency by reducing the expressivity to suit a given problem, i.e., by encoding prior knowledge of the problem to the quantum circuit as an inductive bias. Based on this idea, we have proposed a general ansatz of CBC called the SLPA, which can reach the upper limit of the trade-off inequality by leveraging prior knowledge of a problem's symmetry. In other words, the SLPA allows us to measure the gradient with theoretically the fewest types of quantum measurements for a given expressivity. Owing to its symmetric structure, the SLPA is particularly effective in solving symmetry-related problems. In learning an unknown symmetric function, we have demonstrated that our ansatz can significantly reduce the quantum resources required for training while maintaining accuracy and trainability compared to conventional QNNs based on the parameter-shift method. Although we have shown the effectiveness of the SLPA in the QML task, it is also applicable to other VQAs <cit.>. For instance, when computing the ground state of a symmetric Hamiltonian with the variational quantum eigensolver <cit.>, symmetry-preserving quantum circuits are often used to improve the accuracy of the computation <cit.>. Then, we can alternatively employ the SLPA, which is constructed from the symmetry (stabilizer) of the Hamiltonian, to enhance the gradient measurement efficiency of the symmetry-preserving circuit. Note that our trade-off inequality cannot be exactly applied to such problems because the Hamiltonian is a sum of Pauli operators in general, whereas this work has assumed that the observable O is a Pauli operator. 
While our results can provide a basis for efficient training even in Hamiltonian problems, exploring exact trade-off relations beyond our theory and establishing efficient models in more general situations are crucial future research directions. The theoretical limit of gradient measurement efficiency also motivates further investigation of other approaches for training quantum circuits. A promising one is the use of gradient-free optimization algorithms, such as Powell's method <cit.> and simultaneous perturbation stochastic approximation <cit.>. Such algorithms use only a few quantum measurements to update the circuit parameters, potentially speeding up the training process. However, it remains unclear how effective the gradient-free algorithms are for large-scale problems. Thorough verification and refinement of these algorithms could overcome the challenge of high computational costs in VQAs. Another promising direction is the coherent manipulation of multiple copies of input data. Since our theory has implicitly assumed that only one copy of input data is available at a time, the existence of more efficient algorithms surpassing our trade-off inequality is not prohibited in multi-copy settings. In fact, there is an efficient algorithm for measuring the gradient in multi-copy settings, where (polylog(L)) copies of input data are coherently manipulated to be measured by shadow tomography <cit.>. However, this algorithm is hard to implement in near-term quantum devices due to the requirements of many qubits and long execution times. Exploring more efficient algorithms in multi-copy settings is an important open issue. § ACKNOWLEDGMENTS Fruitful discussions with Riki Toshio, Yuichi Kamata, and Shintaro Sato are gratefully acknowledged. S.Y. was supported by FoPM, WINGS Program, the University of Tokyo. § QNNS WITH NON-VARIATIONAL CLIFFORD GATES In the main text, we have considered the quantum circuits of the following form: U() = ∏_j=1^L e^iθ_j G_j. Here we show that even if the quantum circuit has non-variational Clifford gates (e.g., CZ and CNOT gates), it can be transformed into the form of Eq. (<ref>). Let us consider the following quantum model: U() = W_L+1(∏_j=1^L e^iθ_j G_j W_j ), where W_j is a Clifford gate. By swapping the Clifford and the rotation gates with e^iθ_j G_jW_j=W_j e^iθ_j W_j^† G_j W_j, we have U() = (∏_j=1^L+1 W_j ) ( ∏_j=1^L e^iθ_j G_j') ≡ WU'(), where G_j'= W_1^†⋯ W_j^† G_j W_j⋯ W_1 is a Clifford transformed Pauli operator. We have defined W=∏_j=1^L+1 W_j and U'()=∏_j=1^L e^iθ_j G_j'. Then, the cost function is written as C() = [U()ρ U^†() O] = [U'() ρ U'^†() O' ] with a Pauli observable O'=W^† O W. In other words, the quantum model of U() with the Pauli observable O is equivalent to that of U'() with the Pauli observable O'. Since U'() has the same form of Eq. (<ref>), the quantum circuit of Eq. (<ref>) can be transformed to that of Eq. (<ref>). § GRADIENT OPERATOR Here, we derive Eq. (<ref>) in the main text. The derivative of the cost function C()=[ρ U^†()O U()] is ∂ C/∂θ_j = [ρ∂ U^†/∂θ_j O U] + [ρ U^† O ∂ U/∂θ_j]. In this equation, ∂ U/∂θ_j is written as ∂ U/∂θ_j = i U_j- G_j U_j+, where U_j+ and U_j- are the unitaries before and after the jth rotation gate: U_j+ = ∏_k=1^j-1 e^iθ_k G_k, U_j- = ∏_k=j^L e^iθ_k G_k. By inserting U_j+U_j+^† = I into Eq. (<ref>), we have ∂ U/∂θ_j = i U_j- (U_j+U_j+^†) G_j U_j+ = iU G̃_j, with G̃_j=U_j+^† G_j U_j+ and U_j- U_j+=U. By taking the Hermitian conjugate of Eq. (<ref>), we also have ∂ U^†/∂θ_j = -iG̃_j U^†. Therefore, Eq. 
(<ref>) is ∂ C/∂θ_j = -i [ρG̃_j U^† O U] + i [ρ U^† O U G̃_j] = -i[ρ [G̃_j, Õ] ] = [ρ_j ], where we have defined _j=-i[G̃_j, Õ] with Õ=U^† O U. § PROOF OF MAIN THEOREM This Appendix proves Theorem <ref> in the main text. We first show several properties of the DLA graph and then prove Lemmas <ref>–<ref> on the relationship between the simultaneous measurability of gradient components and the DLA structure. Based on these lemmas, we finally prove Theorem <ref>. §.§ Properties of DLA graph This subsection shows several lemmas on the DLA graph to prove Lemmas <ref>–<ref>. In the main text, we have introduced the DLA graph to help understand the DLA structure. Each node P of the DLA graph corresponds to a Pauli basis of the DLA , and two nodes P,Q∈ are connected by an edge if {P,Q}=0. We say that P,Q∈ are -connected when there exists a path connecting P and Q on the DLA graph. We also say that a DLA subgraph _1⊂ is separated when it is not connected with the rest of the DLA graph by edges (see Definitions <ref> and <ref> for the exact definitions). For the sake of convenience, we additionally define the path and distance on the DLA graph: [Path and distance] For -connected P,Q∈P_n (P≠ Q), consider R_1,R_2,⋯,R_d-1∈ such that {P,R_1}={R_1,R_2}=⋯={R_d-1,Q}=0. Then, we call P→ R_1→⋯→ R_d-1→ Q a path between P and Q and define its distance as d. The graph representations in this work are summarized below: 3.5 [ 1.5ex [P,Q]=0 [name=A][draw,circle,fill=white] at (-1,0) P; [name=B][draw,circle,fill=white] at (+1,0) Q;; 1.5ex {P,Q}=0 [name=A][draw,circle,fill=white] at (-1,0) P; [name=B][draw,circle,fill=white] at (+1,0) Q; (A) – (B);; 1.5ex [P,Q]=0 or {P,Q}=0 [name=A][draw,circle,fill=white] at (-1,0) P; [name=B][draw,circle,fill=white] at (+1,0) Q; [dashed] (A) – (B);; 1.5ex P Q [name=A][draw,circle,fill=white] at (-1,0) P; [name=B][draw,circle,fill=white] at (+1,0) Q; [name=conn] at (0,0.3) ; [Latex[length=2mm]-Latex[length=2mm]] (A) – (B);; 1.5ex P Q and [P,Q]=0 [name=A][draw,circle,fill=white] at (-1,0) P; [name=B][draw,circle,fill=white] at (+1,0) Q; [name=conn] at (0,0.3) ; [dotted, Latex[length=2mm]-Latex[length=2mm]] (A) – (B);; 1.5ex Not P Q [name=A][draw,circle,fill=white] at (-1,0) P; [name=B][draw,circle,fill=white] at (+1,0) Q; [name=conn] at (0,0.3) ; [name=conn][align=right] at (0,0) ×; [Latex[length=2mm]-Latex[length=2mm]] (A) – (B);; ] In what follows, we prove several lemmas on the DLA graph, which are summarized in Fig. <ref>. From now on, we ignore the coefficients of Pauli operators because only commutation and anti-commutation relations are relevant for the proof. If P, Q∈ (P≠ Q) are -connected, the shortest distance between them on the DLA graph is one or two. We prove this lemma by contradiction. Assume that the shortest distance between P and Q on the DLA graph is d>2, and let R_1.⋯,R_d-1∈ be the nodes on the shortest path. For convenience, we denote P and Q by R_0 and R_d, respectively: (2,0) – (2.8,0); [dashed] (2.8,0) – (4.2,0); (4.2,0) – (5,0); [name=A][draw,circle,fill=white] at (0,0) R_0; [name=B][draw,circle,fill=white] at (2,0) R_1; [name=C][draw,circle,fill=white] at (5,0) R_d-1; [name=D][draw,circle,fill=white] at (7,0) R_d; (A) – (B); (C) – (D); . Since this path is the shortest, these nodes are not connected by an edge except for the neighboring nodes: {R_j, R_k} = 0 if |j-k|=1, [R_j, R_k] = 0 if |j-k|≠ 1. By definition of the DLA, the following nested commutator R is in : iR = [iR_d-1,[⋯,[iR_3,[iR_2,iR_1]]⋯]] = (2^d-2 i^d-1) R_d-1⋯ R_1 ∈, where we have used Eqs. 
(<ref>) and (<ref>) for the second equation. Then, P→ R → Q is a path of distance two because the Pauli operator R∈ anti-commutes with P=R_0 and Q=R_d. This contradicts the assumption that the shortest distance between P and Q is greater than two. Therefore, the shortest distance is one or two. If P, Q∈ (P≠ Q) are -connected, there exists R∈ such that {P,R}={Q,R}=0. By Lemma <ref>, the shortest distance between P and Q on the DLA graph is one or two. If the shortest distance is two, there exists R∈ such that {P,R}={Q,R}=0 by definition of the distance. If the shortest distance is one (i.e., {P,Q}=0), iR=[iP,iQ]=-2PQ∈ satisfies {P,R}={Q,R}=0, as required. For P, Q∈ and M∈ satisfying {P,Q}=0 and P Q M, the following statements hold: (i) If {P,M}=[Q,M]=0, there exists R∈ s.t. [P,R]={Q,R}={M,R}=0. (ii) If [P,M]={Q,M}=0, there exists R∈ s.t. {P,R}=[Q,R]={M,R}=0. (iii) If [P,M]=[Q,M]=0, there exists R∈ s.t. {P,R}={Q,R}={M,R}=0. We prove the three cases separately. (i) If {P,M}=[Q,M]=0 holds, R=P∈ satisfies [P,R]={Q,R}={M,R}=0. (ii) If [P,M]={Q,M}=0 holds, R=Q∈ satisfies {P,R}=[Q,R]={M,R}=0. (iii) By Lemma <ref>, the shortest distance between P and M is two because of [P,M]=0 (note that P≠ M from {P,Q}=[M,Q]=0). Thus, there exists S∈ such that {P,S}={M,S}=0: [name=A][draw,circle,fill=white] at (-1.3,0.75) P; [name=B][draw,circle,fill=white] at (+1.3,0.75) Q; [name=C][draw,circle,fill=white] at (0,0) S; [name=D][draw,circle,fill=white] at (0,-1.5) M; (A) – (B); (A) – (C); [dashed] (B) – (C); (C) – (D); . If {Q,S}=0, R=S∈ satisfies {P,R}={Q,R}={M,R}=0. On the other hand, if [Q,S]=0, iR=[iP,iS]=-2PS∈ satisfies {P,R}={Q,R}={M,R}=0. Therefore, in both cases, there exists R∈ such that {P,R}={Q,R}={M,R}=0. For P, Q∈ and M∈ satisfying [P,Q]=0 and P Q M, if {P,M}=[Q,M]=0 or [P,M]={Q,M}=0, there exists R∈ such that {P,R}={Q,R}={M,R}=0. Consider the case of {P,M}=[Q,M]=0 (the other case is similarly provable). Given that P≠ Q holds from {P,M}=[Q,M]=0, there exists S∈ such that {P,S}={Q,S}=0 by Lemma <ref>, where M≠ S because of [Q,M]={Q,S}=0: [every node/.style=fill=white] [name=A][draw,circle] at (0,0) S; [name=B][draw,circle] at (+1.3,0.75) Q; [name=C][draw,circle] at (-1.3,0.75) P; [name=D][draw,circle] at (0,-1.5) M; (A) – (B); (A) – (C); [dashed] (A) – (D); (C) – (D); . Lemma <ref> states that the shortest distance between M and S is one or two. If the shortest distance is one, R=S satisfies {P,R}={Q,R}={M,R}=0. If the shortest distance is two (i.e., [M,S]=0), iR=[iS,iP]=-2SP∈ satisfies {P,R}={Q,R}={M,R}=0. Therefore, there always exists R∈ such that {P,R}={Q,R}={M,R}=0. For P, Q∈ and M∈ satisfying [P,Q]=0 and P Q M, if there exists R∈ such that {P,R}=[Q,R]=0 or [P,R]={Q,R}=0, then there exists S∈ such that {P,S}=[Q,S]={M,S}=0 or [P,S]={Q,S}={M,S}=0. Consider the case of {P,R}=[Q,R]=0 (the other case is similarly provable). We prove the lemma in two cases, (i) R=M and (ii) R≠ M. (i) R=M We have P≠ Q from {P,M}=[Q,M]=0. Thus, there exists T∈ such that {P,T}={Q,T}=0 by Lemma <ref>, where M≠ T because of [M,Q]={T,Q}=0: [every node/.style=fill=white] [name=A][draw,circle] at (-1,0) P; [name=B][draw,circle] at (+1,0) Q; [name=C][draw,circle] at (0,-1) M; [name=D][draw,circle] at (0,+1) T; (A) – (C); [dashed] (C) – (D); (A) – (D); (B) – (D); . If [T,M]=0, iS=[iM,[iP,iT]]=-4iMPT∈ satisfies [P,S]={Q,S}={M,S}=0 (note that R=M leads to M∈). If {T,M}=0, iS=[iM,iT]]=-2MT∈ satisfies [P,S]={Q,S}={M,S}=0. Therefore, the lemma holds for R=M. 
(ii) R≠ M By Lemma <ref>, the shortest distance between R and M is one or two. If the shortest distance is one, S=R∈ satisfies {P,S}=[Q,S]={M,S}=0, i.e., the lemma holds: [every node/.style=fill=white] [name=A][draw,circle] at (-1,0) P; [name=B][draw,circle] at (+1,0) Q; [name=C][draw,circle] at (0,-1) R; [name=D][draw,circle] at (0,-2.5) M; (A) – (C); (C) – (D); [dashed] (A) – (D); [dashed] (B) – (D); . When the shortest distance is two, let T∈ be the node connecting R and M. Then, there are four patterns regarding the (anti-)commutation relations between P,Q and T, namely [P,T]=0 or {P,T}=0 and [Q,T]=0 or {Q,T}=0, as follows: [every node/.style=fill=white] [name=a][draw=white] at (0,0.8) (a); [name=A][draw,circle] at (-1.5,0) P; [name=B][draw,circle] at (+1.5,0) Q; [name=C][draw,circle] at (0,-1) R; [name=D][draw,circle] at (0,-2.5) T; [name=E][draw,circle] at (0,-4) M; (A) – (C); (C) – (D); (D) – (E); [dashed] (A) – (E); [dashed] (B) – (E); [every node/.style=fill=white] [name=a][draw=white] at (0,0.8) (b); [name=A][draw,circle] at (-1.5,0) P; [name=B][draw,circle] at (+1.5,0) Q; [name=C][draw,circle] at (0,-1) R; [name=D][draw,circle] at (0,-2.5) T; [name=E][draw,circle] at (0,-4) M; (A) – (C); (C) – (D); (D) – (E); (A) – (D); [dashed] (A) – (E); [dashed] (B) – (E); [every node/.style=fill=white] [name=a][draw=white] at (0,0.8) (c); [name=A][draw,circle] at (-1.5,0) P; [name=B][draw,circle] at (+1.5,0) Q; [name=C][draw,circle] at (0,-1) R; [name=D][draw,circle] at (0,-2.5) T; [name=E][draw,circle] at (0,-4) M; (A) – (C); (C) – (D); (D) – (E); (B) – (D); [dashed] (A) – (E); [dashed] (B) – (E); [every node/.style=fill=white] [name=a][draw=white] at (0,0.8) (d); [name=A][draw,circle] at (-1.5,0) P; [name=B][draw,circle] at (+1.5,0) Q; [name=C][draw,circle] at (0,-1) R; [name=D][draw,circle] at (0,-2.5) T; [name=E][draw,circle] at (0,-4) M; (A) – (C); (C) – (D); (D) – (E); (A) – (D); (B) – (D); [dashed] (A) – (E); [dashed] (B) – (E); . We can concretely construct S satisfying the lemma for these four patterns. (a) If [P,T]=[Q,T]=0, iS=[iR,iT]=-2RT∈ satisfies {P,S}=[Q,S]={M,S}=0: [every node/.style=fill=white] [name=A][draw,circle] at (-1,0) P; [name=B][draw,circle] at (+1,0) Q; [name=C][draw,circle] at (0,-1) RT; [name=D][draw,circle] at (0,-2.5) M; (A) – (C); (C) – (D); [dashed] (A) – (D); [dashed] (B) – (D); . (b) If {P,T}=[Q,T]=0, S=T∈ satisfies {P,S}=[Q,S]={M,S}=0: [every node/.style=fill=white] [name=A][draw,circle] at (-1,0) P; [name=B][draw,circle] at (+1,0) Q; [name=C][draw,circle] at (0,-1) T; [name=D][draw,circle] at (0,-2.5) M; (A) – (C); (C) – (D); [dashed] (A) – (D); [dashed] (B) – (D); . (c) If [P,T]={Q,T}=0, S=T∈ satisfies [P,S]={Q,S}={M,S}=0: [every node/.style=fill=white] [name=A][draw,circle] at (-1,0) P; [name=B][draw,circle] at (+1,0) Q; [name=C][draw,circle] at (0,-1) T; [name=D][draw,circle] at (0,-2.5) M; (B) – (C); (C) – (D); [dashed] (A) – (D); [dashed] (B) – (D); . (d) If {P,T}={Q,T}=0, iS=[iR,iT]=-2RT∈ satisfies [P,S]={Q,S}={M,S}=0: [every node/.style=fill=white] [name=A][draw,circle] at (-1,0) P; [name=B][draw,circle] at (+1,0) Q; [name=C][draw,circle] at (0,-1) RT; [name=D][draw,circle] at (0,-2.5) M; (B) – (C); (C) – (D); [dashed] (A) – (D); [dashed] (B) – (D); . Therefore, the lemma holds for R≠ M. These results prove this lemma. §.§ Simultaneous measurability of gradient components This subsection proves Lemmas <ref>–<ref> in the main text on the relationship between the simultaneous measurability of gradient components and the DLA structure. 
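Two of these lemmas can also be illustrated numerically on small instances. The following self-contained sketch (numpy only) builds the gradient operators σ_j = -i[G̃_j, Õ], with G̃_j = U_{j+}^† G_j U_{j+} and Õ = U^† O U as defined above, and reports the largest commutator norm over random angles. The two-qubit examples and the gate-ordering convention U = e^{iθ_L G_L} ⋯ e^{iθ_1 G_1} are our own assumptions.

```python
import numpy as np
from functools import reduce

PAULI = {"I": np.eye(2, dtype=complex),
         "X": np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def mat(s):                                   # dense matrix of a Pauli string
    return reduce(np.kron, [PAULI[c] for c in s])

def rot(theta, G):                            # exp(i theta G), valid since G @ G = identity
    return np.cos(theta) * np.eye(G.shape[0]) + 1j * np.sin(theta) * G

def gradient_ops(thetas, gens, obs):
    dim = 2 ** len(gens[0])
    U = np.eye(dim, dtype=complex)
    for t, g in zip(thetas, gens):            # gate 1 acts first on the state
        U = rot(t, mat(g)) @ U
    O_tilde = U.conj().T @ mat(obs) @ U
    U_before, sigmas = np.eye(dim, dtype=complex), []
    for t, g in zip(thetas, gens):
        G_tilde = U_before.conj().T @ mat(g) @ U_before
        sigmas.append(-1j * (G_tilde @ O_tilde - O_tilde @ G_tilde))
        U_before = rot(t, mat(g)) @ U_before
    return sigmas

def max_commutator_norm(gens, obs, trials=20, seed=1):
    rng = np.random.default_rng(seed)
    norms = []
    for _ in range(trials):
        s1, s2 = gradient_ops(rng.uniform(0, 2 * np.pi, len(gens)), gens, obs)
        norms.append(np.linalg.norm(s1 @ s2 - s2 @ s1))
    return max(norms)

print(max_commutator_norm(["XI", "ZI"], "YI"))  # > 0: anti-commuting generators
print(max_commutator_norm(["XI", "IX"], "ZZ"))  # ~ 0 (1e-15 level): not DLA-connected
```

The first pair of generators anti-commutes, so the gradient operators fail to commute for generic angles and cannot be measured simultaneously; the second pair is not DLA-connected, and the commutator vanishes for every choice of angles.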
As discussed in the main text, two distinct gradient components ∂_j C() and ∂_k C() can be measured simultaneously if [_j(), _k()]=0 holds for any , where _j()=-i[G̃_j(),Õ()] is the gradient operator. Here, G̃_j()=U_j+^†() G_j U_j+() and Õ()=U^†() O U() are the Heisenberg representations of the generator G_j and the observable O, respectively (U_j+()=∏_k=1^j-1e^iθ_kG_k is the unitary circuit before the jth gate). In the DLA graph, the gradient operator _j() corresponds to a node of the graph G_j. Below, we show that whether [_j(), _k()]=0 for any is determined by the structural relationship between G_j, G_k, and O on the DLA graph. If G_j,G_k∈ are not -connected, ∂_j C and ∂_k C can be simultaneously measured: [name=conn] at (0,0.3) ; [name=conn] at (0,0) ×; [name=A][draw,circle,fill=white] at (-1,0) G_j; [name=B][draw,circle,fill=white] at (+1,0) G_k; [Latex[length=2mm]-Latex[length=2mm]] (A) – (B); 2ex⇒ 2ex[_j,_k]= 0 ∀. When G_j and G_k are not -connected, the DLA graph is separable into two subgraphs _1 and _2 (i.e., [_1,_2]=0). This is because if the DLA graph is not separable, then any two nodes are -connected. Therefore, we can decompose the DLA into two subalgebras _1=span(_1) and _2=span(_2) that commute with each other: = _1 ⊕_2, [_1,_2]=0, where G_j∈_1 and G_k∈_2. Then, the unitaries of the quantum circuit are decomposed as U=VW, U_j+=V_j+W_j+, and U_k+=V_k+W_k+ with V,V_j+,V_k+∈ e^_1 and W,W_j+,W_k+∈ e^_2 (U_j+ and U_k+ are the unitary circuits before the jth and kth gates, respectively). Now, we prove [_j,_k]= 0 using these decompositions. A straightforward calculation shows that the commutator of gradient operators _j=-i[G̃_j,Õ] and _k=-i[G̃_k,Õ] is written as [_j,_k] = -[G̃_j,ÕG̃_kÕ] - [ÕG̃_jÕ,G̃_k] + [G̃_j,G̃_k] + Õ[G̃_j,G̃_k]Õ. Below, we show that each term in the above commutator vanishes. Let us first prove [G̃_j,G̃_k]=0. From the decomposition of U_j+=V_j+W_j+, we have G̃_j = U_j+^† G_j U_j+ = W_j+^† V_j+^† G_j V_j+W_j+ = V_j+^† G_j V_j+∈_1, where we have used [V_j+,W_j+]=[G_j,W_j+]=0. In the same manner, we have G̃_k = W_k+^† G_k W_k+∈_2. Thereby, [G̃_j,G̃_k]=0 is proved from [_1,_2]=0, indicating that the third and fourth terms vanish in Eq. (<ref>). Next, we prove [G̃_j,ÕG̃_kÕ]=0. Since Õ=U^† O U and G̃_k= W_k+^† G_k W_k+ [see Eq. (<ref>)], ÕG̃_kÕ is written as ÕG̃_kÕ = (W^† V^† O VW) (W_k+^† G_k W_k+) (W^† V^† O VW) = W^† V^† O V A V^† O VW = W^† V^† O A O VW where we have defined A=W W_k+^† G_k W_k+ W^†∈_2 and used [V,A]=0. Here, for any w = ∑_i c_i g_i ∈_2, OwO = ∑_i (-1)^σ_i c_i g_i is also included in _2, where g_i is the Pauli basis of _2, c_i is an expansion coefficient, and σ_i is defined as Og_i O=(-1)^σ_ig_i. Therefore, A' = O A O is included in _2 because of A∈_2, leading to ÕG̃_kÕ =W^† V^† A' VW = W^† A' W ∈_2, where we have used [A',V]=0. Thus, [G̃_j,ÕG̃_kÕ]=0 holds from Eqs.(<ref>), (<ref>), and [_1,_2]=0. Similarly, [ÕG̃_jÕ,G̃_k]=0 can also be proved. Therefore, the first and second terms vanish in Eq. (<ref>). These results prove that [_j, _k]=0 holds for any , i.e., ∂_j C and ∂_k C are simultaneously measurable. Lemma <ref> states that ∂_j C and ∂_k C can be simultaneously measured if G_j and G_k are not -connected. Thus, we focus on the case that G_j, G_k∈ are -connected below. To simplify the proof, we divide the quantum circuit into two parts, U=U_fin U_ini, where we have defined U_ini = ∏_j=1^ e^iθ_jG_j and U_fin = ∏_j=+1^L e^iθ_jG_j. 
The depth of the final part, =L-, is set as a sufficiently large but constant value such that U_fin can express e^iϕ_1 Q_1e^iϕ_2 Q_2 for any Q_1, Q_2∈ and ϕ_1,ϕ_2∈ℝ. Note that there exists such due to Condition <ref>. We emphasize that whether the gradient components for U_fin can be simultaneously measured does not affect the gradient measurement efficiency for the full circuit, , because the contribution from the constant depth circuit to is negligible in the deep circuit limit (see Definition <ref>). Therefore, we will use U_fin as a buffer circuit for the proof and investigate whether ∂_j C and ∂_k C are simultaneously measurable for the parameters of U_ini (i.e., j,k≤). Then, the gradient operators are given by _j = -i[G̃_j,Õ] = -i[(U^†_ini G_j U_ini), (U^†_iniU_fin^† O U_finU_ini)], _k = -i[G̃_k,Õ] = -i[(U^†_ini G_k U_ini), (U^†_iniU_fin^† O U_finU_ini)]. According to Definition <ref>, ∂_j C and ∂_k C are simultaneously measurable if [_j(),_k()]= 0 for any . Therefore, when proving that ∂_j C and ∂_k C cannot be measured simultaneously, it suffices to show that [_j(),_k()]≠ 0 for a certain . In the following proofs, we find such that [_j(),_k()]≠ 0 to prove lemmas. For j,k ≤, if G_j,G_k∈ anti-commute, ∂_j C and ∂_k C cannot be simultaneously measured: [every node/.style=fill=white] [name=A][draw,circle] at (-1,0) G_j; [name=B][draw,circle] at (+1,0) G_k; (A) – (B); 2ex⇒ 2ex∃ s.t. [_j,_k]≠ 0. We prove this lemma in two cases: (i) {G_j,O}={G_k,O}=0 and (ii) otherwise. (i) In this case, [_j(),_k()]≠ 0 holds for =0. In fact, when =0 (i.e., U_ini=U_fin=I), we have G̃_j=G_j, G̃_k=G_k, Õ=O, and thus _j = -i[G̃_j, Õ] = -2i G_jO, _k = -i[G̃_k, Õ] = -2i G_kO. Therefore, we obtain [_j,_k]=4[G_j,G_k] ≠ 0 from {G_j,O}={G_k,O}=0, showing that ∂_j C and ∂_k C cannot be simultaneously measured. (ii) By Lemma <ref>, there exists R∈ satisfying one of the following commutation relations: [name=A][draw,circle,fill=white] at (-1.3,0.75) G_j; [name=B][draw,circle,fill=white] at (+1.3,0.75) G_k; [name=C][draw,circle,fill=white] at (0,0) R; [name=D][draw,circle,fill=white] at (0,-1.5) O; (A) – (B); (A) – (D); (B) – (C); (C) – (D); [name=A][draw,circle,fill=white] at (-1.3,0.75) G_j; [name=B][draw,circle,fill=white] at (+1.3,0.75) G_k; [name=C][draw,circle,fill=white] at (0,0) R; [name=D][draw,circle,fill=white] at (0,-1.5) O; (A) – (B); (A) – (C); (B) – (D); (C) – (D); [name=A][draw,circle,fill=white] at (-1.3,0.75) G_j; [name=B][draw,circle,fill=white] at (+1.3,0.75) G_k; [name=C][draw,circle,fill=white] at (0,0) R; [name=D][draw,circle,fill=white] at (0,-1.5) O; (A) – (B); (A) – (C); (B) – (C); (C) – (D); . By setting such that U_ini=I and U_fin=e^-iπ R/4 in Eq. (<ref>), we have G̃_j=G_j, G̃_k=G_k, Õ=e^iπ R/4Oe^-iπ R/4=iRO, and thus _j = -i[G̃_j, Õ] = 2G_jRO, _k = -i[G̃_k, Õ] = 2G_kRO. Therefore, we obtain [_j,_k]=4[G_k,G_j] ≠ 0 from {G_j,iRO}={G_k,iRO}=0, showing that ∂_j C and ∂_k C cannot be simultaneously measured. These results prove that ∂_j C and ∂_k C cannot be simultaneously measured for anti-commuting G_j and G_k. Consider -connected G_j,G_k∈ for j,k ≤. 
If {G_j,O}=[G_k,O]=0 or [G_j,O]={G_k,O}=0, ∂_j C and ∂_k C cannot be simultaneously measured: [name=conn] at (0,0.3) ; [name=A][draw,circle,fill=white] at (-1,0) G_j; [name=B][draw,circle,fill=white] at (+1,0) G_k; [name=C][draw,circle,fill=white] at (0,-1) O; (A) – (C); [Latex[length=2mm]-Latex[length=2mm]] (A) – (B); 4exor [name=conn] at (0,0.3) ; [name=A][draw,circle,fill=white] at (-1,0) G_j; [name=B][draw,circle,fill=white] at (+1,0) G_k; [name=C][draw,circle,fill=white] at (0,-1) O; (B) – (C); [Latex[length=2mm]-Latex[length=2mm]] (A) – (B); ⇒ ∃ s.t. [_j,_k]≠ 0. It suffices to consider only the case of [G_j,G_k]=0 by Lemma <ref>. Here, we consider the case of {G_j,O}=[G_k,O]=0 (the other case is similarly provable). Then, there exists R∈ such that {G_j,R}={G_k,R}={O,R}=0 by Lemma <ref>: [every node/.style=fill=white] [name=A][draw,circle] at (0,0) R; [name=B][draw,circle] at (+1.3,0.75) G_k; [name=C][draw,circle] at (-1.3,0.75) G_j; [name=D][draw,circle] at (0,-1.5) O; (A) – (B); (A) – (C); (A) – (D); (C) – (D); [name=conn] at (0,1.05) ; [dotted, Latex[length=2mm]-Latex[length=2mm]] (B) – (C); . By setting such that U_ini=I and U_fin=e^iϕ R, we have G̃_j=G_j, G̃_k=G_k, Õ=e^-iϕ ROe^iϕ R=cos(2ϕ)O + isin(2ϕ)OR, and thus _j = -i[G̃_j, Õ] = -i cos(2ϕ)[G_j,O] + sin(2ϕ)[G_j,OR] = - 2i cos(2ϕ) G_j O _k = -i[G̃_k, Õ] = -i cos(2ϕ)[G_k,O] + sin(2ϕ)[G_k,OR] = 2 sin(2ϕ) G_k O R. Therefore, we have [_j,_k] = -4icos(2ϕ) sin(2ϕ) [G_j O, G_kOR]. From the commutation relations between G_j,G_k,R and O, we have [G_j O, G_kOR]≠ 0, showing that there exists such that [_j,_k]≠ 0. Therefore, ∂_j C and ∂_k C cannot be simultaneously measured if {G_j,O}=[G_k,O]=0 or [G_j,O]={G_k,O}=0. Consider -connected G_j, G_k∈ for j,k ≤. If there exists R∈ such that {G_j,R}=[G_k,R]=0 or [G_j,R]={G_k,R}=0, ∂_j C and ∂_k C cannot be simultaneously measured: [name=A][draw,circle,fill=white] at (-1,0) G_j; [name=B][draw,circle,fill=white] at (+1,0) G_k; [name=C][draw,circle,fill=white] at (0,-1) R; (A) – (C); [name=conn] at (0,0.3) ; [Latex[length=2mm]-Latex[length=2mm]] (A) – (B); 4exor [name=conn] at (0,0.3) ; [name=A][draw,circle,fill=white] at (-1,0) G_j; [name=B][draw,circle,fill=white] at (+1,0) G_k; [name=C][draw,circle,fill=white] at (0,-1) R; (B) – (C); [Latex[length=2mm]-Latex[length=2mm]] (A) – (B); ⇒ ∃ s.t. [_j,_k]≠ 0. It suffices to consider only the case of [G_j,G_k]=0 and [G_j,O]=[G_k,O]=0 or {G_j,O}={G_k,O}=0 by Lemmas <ref> and <ref>. When {G_j,R}=[G_k,R]=0 or [G_j,R]={G_k,R}=0, there necessarily exists S∈ satisfying {G_j,S}=[G_k,S]={O,S}=0 or [G_j,S]={G_k,S}={O,S}=0 by Lemma <ref>: [every node/.style=fill=white] [name=A][draw,circle] at (-1.3,0.75) G_j; [name=B][draw,circle] at (+1.3,0.75) G_k; [name=C][draw,circle] at (0,0) S; [name=D][draw,circle] at (0,-1.5) O; (A) – (C); (C) – (D); [dashed] (A) – (D); [dashed] (B) – (D); [name=conn] at (0,1.05) ; [dotted, Latex[length=2mm]-Latex[length=2mm]] (A) – (B); 8exor [every node/.style=fill=white] [name=A][draw,circle] at (-1.3,0.75) G_j; [name=B][draw,circle] at (+1.3,0.75) G_k; [name=C][draw,circle] at (0,0) S; [name=D][draw,circle] at (0,-1.5) O; (B) – (C); (C) – (D); [dashed] (A) – (D); [dashed] (B) – (D); [name=conn] at (0,1.05) ; [dotted, Latex[length=2mm]-Latex[length=2mm]] (A) – (B); . Assume {G_j,S}=[G_k,S]={O,S}=0 (the other case is similarly provable). 
Then, we consider O'=iSO∈, which satisfies {G_j,O'}=[G_k,O']=0 or [G_j,O']={G_k,O'}=0 (∵ [G_j,O]=[G_k,O]=0 or {G_j,O}={G_k,O}=0): [name=conn] at (0,0.3) ; [name=A][draw,circle,fill=white] at (-1,0) G_j; [name=B][draw,circle,fill=white] at (+1,0) G_k; [name=C][draw,circle,fill=white] at (0,-1) O'; (A) – (C); [dotted, Latex[length=2mm]-Latex[length=2mm]] (A) – (B); 4exor [name=conn] at (0,0.3) ; [name=A][draw,circle,fill=white] at (-1,0) G_j; [name=B][draw,circle,fill=white] at (+1,0) G_k; [name=C][draw,circle,fill=white] at (0,-1) O'; (B) – (C); [dotted, Latex[length=2mm]-Latex[length=2mm]] (A) – (B); . We assume {G_j,O'}=[G_k,O']=0 (the other case is similarly provable). By Lemma <ref>, there exists T∈ satisfying {G_j,T}={G_k,T}={O',T}=0: [every node/.style=fill=white] [name=A][draw,circle] at (0,0) T; [name=B][draw,circle] at (+1.3,0.75) G_k; [name=C][draw,circle] at (-1.3,0.75) G_j; [name=D][draw,circle] at (0,-1.5) O'; (A) – (B); (A) – (C); (A) – (D); (C) – (D); [name=conn] at (0,1.05) ; [dotted, Latex[length=2mm]-Latex[length=2mm]] (B) – (C); . By setting such that U_ini=I and U_fin=e^-iπ S/4e^iϕ T in Eq. (<ref>), we have G̃_j=G_j, G̃_k=G_k, Õ=e^-iϕ Te^iπ S/4Oe^-iπ S/4e^iϕ T=cos(2ϕ)O' + isin(2ϕ)O'T, and thus _j = -i[G̃_j,Õ] = -i cos(2ϕ) [G_j,O'] + sin(2ϕ) [G_j,O'T] = -2i cos(2ϕ) G_j O' _k = -i [G̃_k,Õ] = -i cos(2ϕ) [G_k,O'] + sin(2ϕ) [G_k,O'T] = 2 sin(2ϕ) G_k O'T. Therefore, we have [_j,_k] = -4icos(2ϕ)sin(2ϕ) [G_j O', G_kO'T]. From the commutation relations between G_j,G_k,O' and T, we have [G_j O', G_kO'T]≠ 0, showing that there exists such that [_j,_k]≠ 0. Therefore, ∂_j C and ∂_k C cannot be simultaneously measured if there exists R∈ such that {G_j,R}=[G_k,R]=0 or [G_j,R]={G_k,R}=0. For j<k ≤, if G_j=G_k, ∂_j C and ∂_k C cannot be simultaneously measured: [name=conn] at (0,0) =; [name=A][draw,circle,fill=white] at (-0.75,0) G_j; [name=B][draw,circle,fill=white] at (+0.75,0) G_k; 2ex⇒ 2ex∃ s.t. [_j,_k]≠ 0. By Condition <ref>, there exists ℓ (j<ℓ<k) such that {G_j,G_ℓ}=0. Then we consider G_k' =iG_ℓ G_k = i[G_ℓ,G_k]/2∈, which satisfies {G_j,G_k'}=0. We prove this lemma in two cases: (i) {G_j,O}={G_k',O}=0 and (ii) otherwise. (i) We set such that U_j+=I, U_k+=e^-iπ G_ℓ/4, and U=I (i.e., U_ini=e^-iπ G_ℓ/4 and U_fin=e^+iπ G_ℓ/4). Then we have G̃_j=G_j, G̃_k=e^iπ G_ℓ/4G_k e^-iπ G_ℓ/4=G_k', Õ=O, and thus _j = -i[G̃_j, Õ] = -2i G_jO, _k = -i[G̃_k, Õ] = -2i G_k'O. Therefore, we obtain [_j,_k]=4[G_j,G_k'] ≠ 0 from {G_j,O}={G_k',O}=0, showing that ∂_j C and ∂_k C cannot be simultaneously measured. (ii) By Lemma <ref>, there exists R∈ satisfying one of the following commutation relations: [name=A][draw,circle,fill=white] at (-1.3,0.75) G_j; [name=B][draw,circle,fill=white] at (+1.3,0.75) G_k'; [name=C][draw,circle,fill=white] at (0,0) R; [name=D][draw,circle,fill=white] at (0,-1.5) O; (A) – (B); (A) – (D); (B) – (C); (C) – (D); [name=A][draw,circle,fill=white] at (-1.3,0.75) G_j; [name=B][draw,circle,fill=white] at (+1.3,0.75) G_k'; [name=C][draw,circle,fill=white] at (0,0) R; [name=D][draw,circle,fill=white] at (0,-1.5) O; (A) – (B); (A) – (C); (B) – (D); (C) – (D); [name=A][draw,circle,fill=white] at (-1.3,0.75) G_j; [name=B][draw,circle,fill=white] at (+1.3,0.75) G_k'; [name=C][draw,circle,fill=white] at (0,0) R; [name=D][draw,circle,fill=white] at (0,-1.5) O; (A) – (B); (A) – (C); (B) – (C); (C) – (D); . 
By setting such that U_j+=I, U_k+=e^-iπ G_ℓ/4, and U=e^-iπ R/4 (i.e., U_ini=e^-iπ G_ℓ/4, and U_fin=e^-iπ R/4e^+iπ G_ℓ/4), we have G̃_j=G_j, G̃_k=G_k', Õ=e^iπ R/4Oe^-iπ R/4=iRO, and thus _j = -i[G̃_j, Õ] = 2G_jRO, _k = -i[G̃_k, Õ] = 2G_k'RO. Therefore, we obtain [_j,_k]=4[G_j,G_k'] ≠ 0 from {G_j,iRO}={G_k',iRO}=0, showing that ∂_j C and ∂_k C cannot be simultaneously measured. These results prove that ∂_j C and ∂_k C cannot be simultaneously measured if G_j=G_k. §.§ Proof of trade-off inequality The main text leaves the derivations of Eqs. (<ref>) and (<ref>) in the proof of Theorem <ref>. Here, we show these equations. §.§.§ Preliminaries We first show several lemmas for preliminaries. Consider a set of commuting Pauli operators ={S_1,S_2,⋯}, where [S_j,S_k]=0 ∀ S_j,S_k∈. Define a subset of Pauli operators, P_⊂P_n, stabilized by as P_ = {P∈P_n | [P,S_j]=0 ∀ S_j∈}. Then, letting s be the number of independent Pauli operators in , the following equality and inequalities hold: |P_| = 4^n/2^s≤4^n/||, ||≤ 2^n. Let g_1,⋯,g_s∈ be independent Pauli operators in . According to Proposition 10.4 in Ref. <cit.>, there is h_j∈P_n such that {g_j,h_j}=0 and [g_k,h_j]=0 for all k≠ j. We consider H(x) ≡ h_1^x_1⋯ h_s^x_sP_ = {h_1^x_1⋯ h_s^x_s P | P∈P_}⊂, where x=(x_1,⋯,x_s) is a binary vector (x_j=0,1). The set H(x) has the following properties: * H(x) ∩H(x')=∅ for x≠x'. * |H(x)| = |P_| for any x * _xH(x) = P_n Let us prove the first property. By construction, we have Pg_j - (-1)^x_j g_j P = 0 for any P∈H(x). This means that, for x≠x', H(x) and H(x') have different (anti-)commutation relations with g_1,⋯,g_s, leading to H(x) ∩H(x')=∅. The second property also holds because h_1^x_1⋯ h_s^x_s P ≠ h_1^x_1⋯ h_s^x_s P' for P≠ P'. The third property is proved by contradiction. Let us assume that there exists P∈P_n satisfying P∉_xH(x). Then, a binary vector y is defined such that Pg_j - (-1)^y_j g_j P = 0. Because P'=h_1^y_1⋯ h_s^y_s P satisfies [P',g_j]=0 for any g_j, P' belongs to P_ by definition. Hence, we have P = h_1^y_1⋯ h_s^y_s P' ∈H(y). This contradicts the assumption of P∉_xH(x). Therefore, we have _xH(x) = P_n. These properties lead to 4^n=|P_n|=|_xH(x)|= ∑_x |H(x)| = 2^s |P_|, proving |_|=4^n/2^s. Besides, given that s independent Pauli operators can generate only 2^s Pauli operators by multiplying them, we have || ≤ 2^s, which readily shows |_|=4^n/2^s ≤ 4^n/||. Also, combining |_|≤ 4^n/|| and || ≤ |_| (the latter is derived from the fact that ∀ S_j∈ is included in P_), we obtain ||≤ 2^n. Let c be a real constant. For real variables a_1,⋯,a_q ≥ c and a real constant A ≥ c^q, the following inequality holds under a constraint ∏_j=1^q a_j ≤ A: ∑_j=1^q a_j ≤A/c^q-1 + c(q-1). We first prove this lemma for c=1 by mathematical induction. (i) For q=1, a_1 ≤ A holds trivially by the constraint. (ii) We assume that this lemma holds for q=m. That is, under the constraint ∏_j=1^m a_j ≤ A, ∑_j=1^m a_j ≤ A+m-1 holds for any A≥1. Now, we consider the case of q=m+1, where the constraint is given by ∏_j=1^m+1 a_j ≤ A. Let us fix a_m+1 in the range of 1≤ a_m+1≤ A (if a_m+1>A, the condition a_1,⋯,a_m ≥1 cannot be satisfied). Then, the constraint for a_1,⋯,a_m is written as ∏_j=1^m a_j ≤ A/a_m+1=Ã. Under this constraint, we have ∑_j=1^m a_j ≤Ã+m-1=A/a_m+1+m-1 from the assumption for q=m. Therefore, we have ∑_j=1^m+1 a_j= a_m+1 + (∑_j=1^m a_j)≤ a_m+1+A/a_m+1+m-1. For 1≤ x ≤ A, f(x)=x+A/x has a maximum value f(1)=f(A)=A+1. Thus, we finally obtain ∑_j=1^m+1 a_j≤ A+m, indicating that the lemma holds even for q=m+1. 
These discussions prove the lemma for c=1 by mathematical induction. Then, we rescale a_j by a factor of c as a_j → a_j'=ca_j, where the constraint is also rescaled as ∏_j=1^q a_j' = c^q ∏_j=1^q a_j ≤ c^q A = A'. Therefore, we obtain ∑_j=1^q a_j' = c∑_j=1^q a_j ≤ cA + c(q-1) = A'/c^q-1 + c(q-1), where we have used the lemma for c=1 in the inequality. This proves the lemma for any c>0. §.§.§ Overview of Sec. <ref> For convenience, we briefly summarize Sec. <ref> to prove Theorem <ref>. In this proof, we decompose the DLA graph into several subgraphs that consist of (potentially) simultaneously measurable nodes, deriving the upper limit of the gradient measurement efficiency. We first decompose the DLA graph into separated subgraphs as = ( _1 ⊔⋯⊔_p ) ⊔( _1 ⊔⋯⊔_q ), where _'s and _'s have one and multiple nodes, respectively. Since _ and _ are separated, they satisfy [_,_]=[_,_]=0 ∀≠, [_,_]=0 ∀, . By Lemma <ref>, when G_j and G_k are nodes in different separated subgraphs _ or _, ∂_j C and ∂_k C are simultaneously measurable. Next, to examine the simultaneous measurability in each subgraph _, we further decompose it into r_ subgraphs as _ = _^1⊔⋯⊔_^r_, where _^a's satisfy [_^a,_^b]=0 or {_^a,_^b}=0 and ∀ a≠ b, ∃ c, s.t. { [_^a,_^c]=0 {_^b,_^c}=0 . or { {_^a,_^c}=0 [_^b,_^c]=0. . Equation (<ref>) implies that all nodes in each subgraph share the same commutation relations for every node in the DLA graph R∈ as [P,R]=0 ∀ P ∈_^a or {P,R}=0 ∀ P ∈_^a. This readily leads to the commutation relations within the same subgraph as [P,Q]=0 for ∀ P,Q∈_^a. In addition, from Eq. (<ref>), when the circuit generators G_j, G_k∈ belong to different _^a's, there exists R∈_ such that {G_j,R}=[G_k,R]=0 or [G_j,R]={G_k,R}=0. Then, ∂_j C and ∂_k C are not simultaneously measurable by Lemma <ref>. Finally, we decompose _^a into the commuting and anti-commuting parts with the observable O: _^a = _^a+⊔_^a-, where _^a± commute and anti-commute with the observable O, respectively: [_^a+, O]=0, {_^a-, O}=0. For G_j∈_^a+ and G_k∈_^a-, ∂_j C and ∂_k C are not simultaneously measurable by Lemma <ref>. Therefore, the maximum size of _^a± bounds the maximum number of simultaneously measurable gradient components in each _. To evaluate gradient measurement efficiency and expressivity, we define v= _^amax( | _^a| ), w= _^a±max( | _^a±| ), _v= _^aargmax( | _^a| ), _w= _^a±argmax( | _^a±| ). By Lemmas <ref>–<ref>, the gradient measurement efficiency and the expressivity (i.e., the total number of nodes) are bounded as ≤ qw, ≤ p + v(r_1 + ⋯ + r_q), where there are no contributions from _ to in the deep circuit limit. For this DLA decomposition, we consider stabilizer operators ={S_1, S_2,⋯}, which can lead to high gradient measurement efficiency instead of limiting expressivity. The stabilizers are defined as follows. If v≥ w+p, we define = {E_1E_1, E_1E_2, ⋯, E_1E_v} with _v = {E_1,⋯,E_v}. Otherwise, we define = {F_1F_1, F_1F_2, ⋯, F_1F_w}⊔{A_1,⋯,A_p} with _w = {F_1,⋯,F_w} and A_∈_. Note that the elements of are not duplicated [If v≥ w+p, E_1E_j≠ E_1E_k (⇔ E_j≠ E_k) trivially holds for j≠ k. If v<w+p, F_1F_j≠ F_1F_k (⇔ F_j≠ F_k) and A_j≠ A_k hold for j≠ k. We also have F_1F_j ≠ A_k because of [F_1F_j,O]={A_k,O}=0. Therefore, the elements of are not duplicated.]. From Eq. (<ref>), the stabilizers commute with themselves and the DLA (i.e., [,]=[,]=0), thus limiting the expressivity of the circuit. Here, let us define a subgroup of Pauli operators, P_⊂P_n, stabilized by as P_ = {P∈P_n | [P,S_j]=0 ∀ S_j∈}. 
By Lemma <ref>, we have |P_| ≤ 4^n/||. In what follows, using the DLA decomposition and the stabilizers, we derive ≤4^n v/4^q-1||^2 + (3q-4)v + p and 4^n v/4^q-1||^2 + (3q-4)v + p ≤4^n/qw - qw. Combining Eqs. (<ref>), (<ref>), and (<ref>), we finally obtain the trade-off inequality ≤ 4^n/ -. §.§.§ Proof of Eq. (<ref>) Here, we prove the following inequality: ≤4^n v/4^q-1||^2 + (3q-4)v + p. Let us first show the following lemma: r_≥ 3. If r_=1, all the elements of _=_^1 commute with each other by Eq. (<ref>), which contradicts the fact that _ is a connected graph, thus r_≥ 2. If r_=2, Eq. (<ref>) leads to {_^1,_^2}=0 because _ is connected. Then, for P∈_^1 and Q∈_^2, iR = [iP,iQ]=-2PQ satisfies {P,R}={Q,R}=0. This indicates that R is included in _ but not in both _^1 and _^2, thus showing the existence of _^3. Therefore r_≥ 3. Now, we prove Eq. (<ref>). Let C_, a∈_^a be a representative element of _^a and define D_={I, C_, 1, ⋯, C_, r_}. Also, let S_a∈ and D_, a∈D_ be the ath elements of and D_, respectively. Then we consider the following Pauli operator: M_a = S_a_0 D_1, a_1⋯ D_q, a_q, where a=(a_0,⋯,a_q) is a vector with a_0∈{1,⋯,||} and a_(≠0)∈{1,⋯,r_+1}. The operator M_a has the following properties: (i) M_a∈P_, (ii) M_a≠ M_b for a≠b. The first property is readily proved by noticing [M_a,]=0, which is derived from [,]=[D_,]=0. We prove the second property below. If a_≠ b_ (≥1), according to Eq. (<ref>), there exists R∈_ such that [D_, a_,R]={D_, b_,R}=0 (or {D_, a_,R}=[D_, b_,R]=0) and [,R]=[D_(≠),R]=0. These commutation relations lead to [M_a, R] ≠ [M_b, R], showing M_a≠ M_b. If a_0 ≠ b_0 and a_ = b_ for all ≥1, M_a = M_b leads to S_a_0=S_b_0, which contradicts the fact that the elements of are not duplicated, showing M_a≠ M_b. Therefore, M_a≠ M_b for a≠b. Given that M_a∈P_ and M_a≠ M_b for a≠b, the following inequality holds: || (r_1+1) ⋯ (r_q+1) ≤ |P_|, where the left-hand side corresponds to the number of M_a. Since |P_|≤ 4^n/||, we have ∏_=1^q (r_+1)≤4^n/||^2. This constraint gives a new upper bound of the expressivity ≤ p+v ∑_=1^q r_ in Eq. (<ref>). By minimizing ∑_=1^q r_ in under the constraint ∏_=1^q (r_+1)≤ 4^n/||^2, we obtain ≤ p+v ∑_=1^q r_ ≤ v∑_=1^q (r_+1) - qv + p ≤ v [ 4^n/4^q-1||^2 + 4(q-1) ] - qv + p = 4^n v/4^q-1||^2 + (3q-4)v + p, where we have used Lemma <ref> with r_+1≥ 4 in the derivation of the third line. This is Eq. (<ref>), as required. For later use, we derive another inequality. Combining Eq. (<ref>) and r_+1≥ 4, we have 4^n/||^2≥ 4^q. This inequality will be used in the proof of Eq. (<ref>). §.§.§ Proof of Eq. (<ref>) Here, we prove the following inequality: 4^n v/4^q-1||^2 + (3q-4)v + p ≤4^n/qw - qw. By considering the difference between the both sides, we have (RHS)-(LHS) =4^n/4^q-1qw||^2(4^q-1||^2 - qvw) -qw -(3q-4)v - p. This is written as (RHS)-(LHS) ≥4^q||^2 - 4qvw/qw -qw -(3q-4)v - p. = 4^q||^2 - 3q^2vw -q^2w^2 - pqw/qw ≥4q^2||^2 - 3q^2||w -q^2w^2 - pq^2w/qw = q(4||^2 - 3||w - w^2 - pw)/w = q[ (4||+w)(||-w)-pw ]/w ≥q[ (4||+w)p-pw ]/w = 4pq||/w ≥ 0. Here, we have used Eq. (<ref>) with 4^q-1||^2 - qvw ≥ 0 in the second line, 4^q≥4q^2, q^2≥ q, and ||=max(v,w+p)≥ v in the fourth line, and ||=max(v,w+p)≥ w+p in the seventh line. This result proves Eq. (<ref>). § GRADIENT MEASUREMENT IN COMMUTING BLOCK CIRCUITS Here, we briefly review how to measure gradients in the CBC according to Ref. <cit.>. In the CBC of Eq. 
(<ref>), for an input state |ϕ⟩ and an Pauli observable O, the cost function C()=⟨ϕ|U^†()OU()|ϕ|$⟩ is written as C() = ⟨ϕ_a | W_a^† O W_a | ϕ_a|,⟩ where we have defined the quantum state at theath block and the quantum circuit after theath block as |ϕ_a⟩ = U_a(_a)⋯ U_1(_1) |ϕ⟩, W_a = U_B(_B)⋯ U_a+1(_a+1). Using∂|ϕ_a⟩/∂θ_j^a = iG_j^a |ϕ_a⟩, we have the derivative of the cost function by thejth parameter of theath block: ∂ C/∂θ_j^a = ⟨ϕ_a | (i W_a^† O W_a G_j^a - i G_j^a W_a^† O W_a ) | ϕ_a|.⟩ Then, since all generators in the block share the same commutation relations with other blocks, we can defineW̃_asuch thatW_a G_j^a = G_j^a W̃_a. ThisW̃_ais easily obtained by usinge^iθP G_j^a = G_j^a e^±iθP(Pis a Pauli operator), where±correspond to the cases of[G_j^a,P]=0and{G_j^a,P}=0, respectively. Thereby, we have ∂ C/∂θ_j^a = ⟨ϕ_a | (W_a^† (-1)^g_j^a iG_j^a O W̃_a - W̃_a^† i G_j^a O W_a ) | ϕ_a|,⟩ whereg_j^a=0if[G_j^a,O]=0andg_j^a=1if{G_j^a,O}=0. By introducing a Pauli operatorO_j^a=i^g_j^a G_j^a Oand a unitary operatorW_a'=i^g_j^a+1W_a, the derivative is written as ∂ C/∂θ_j^a = ⟨ϕ_a | ( (W_a')^† O_j^a W̃_a + W̃_a^† O_j^a (W_a') ) | ϕ_a|⟩ = 1/2[ ⟨ϕ_a | ( W̃_a^† + (W_a')^†) O_j^a ( W̃_a + W_a' ) | ϕ_a|⟩ - ⟨ϕ_a | ( W̃_a^† - (W_a')^†) O_j^a ( W̃_a - W_a' ) | ϕ_a|⟩] = 1/2[ ⟨ϕ_a | (L_W_a^+)^† O_j^a L_W_a^+ | ϕ_a|-⟩⟨ϕ_a | (L_W_a^-)^† O_j^a L_W_a^- | ϕ_a|⟩], where we have defined the linear combinations of unitariesL_W_a^±=W̃_a ±W_a'. TheseL_W_a^±can be implemented using an ancilla qubit as shown in Fig. <ref>. For the quantum state|ϕ_a⟩and the ancilla qubitH|0⟩(His a Hadamard gate), we apply aW'gate controlled on the ancilla qubit being in|1⟩, followed by aW̃gate controlled on the ancilla qubit being in|0⟩. After these controlled gates, we finally apply aHgate on the ancilla qubit and then have |ψ_a⟩ = 1/2( |0⟩L_W_a^+ |ϕ_a⟩ + |1⟩L_W_a^- |ϕ_a⟩). For this quantum state, we can estimate the derivative by measuring an observableÕ_j^a = 2(Z⊗O_j^a)as ⟨ψ_a | Õ_j^a | ψ_a|⟩ = 1/2[ ⟨ϕ_a | (L_W_a^+)^† O_j^a L_W_a^+ | ϕ_a|-⟩⟨ϕ_a | (L_W_a^-)^† O_j^a L_W_a^- | ϕ_a|⟩] = ∂ C/∂θ_j^a. In this method, we can simultaneously measure multiple derivatives in the same block if their generators share the same commutation relations with the observableO(i.e., the sameg_j^a). This is because the measured observablesÕ_j^aare commutative,[Õ_j^a, Õ_k^a]=0, wheng_j^a=g_k^a. Therefore, given that the generators of the final block commuting withOdo not contribute to the gradient, we can estimate the full gradient with only2B-1types of quantum circuits. § DYNAMICAL LIE ALGEBRA IN NUMERICAL EXPERIMENT Consider a set of generators ={X_j X_j+1,Y_j Y_j+1,Z_j Z_j+1}_j=1^n with even n. Then, the Lie closure of is given by = _, where we have defined _=_∖ with = {I, ∏_j=1^n X_j, ∏_j=1^n Y_j, ∏_j=1^n Z_j}, _ = {P∈ | [P,S_j]=0 ∀ S_j∈}. This readily leads to dim()=||=4^n/4-4 by Lemma <ref>. We prove this lemma by showing ⊆_ and _⊆. We first show ⊆_. One can easily verify ⊆_ because all the generators in commute with . Thus, it suffices to show ∩ = ∅. We prove this by contradiction. Assume S_j∈ is contained in . Then, because S_j∉, there exist anti-commuting Pauli operators P,Q∈ such that S_j ∝ [P,Q]=2PQ. Then, we have [S_j,P]=[2PQ,P]≠ 0, which contradicts the fact that S_j commutes with all the operators in _ and thus . Therefore, does not contain , proving ⊆_. We then show _⊆. For convenience, we call the number of X, Y, and Z operators in a Pauli string P as the weight of P. 
Below, we prove _⊆ by mathematical induction with respect to the weight of Pauli strings. To this end, let us introduce the subset of _ with weight-w: _^w = {P∈_ | the weight of P is w}. The sum of _^w gives _w=0^n _^w = _. For example, we have _^0 = ∅, _^1 = ∅, _^2 = {X_jX_k, Y_jY_k, Z_jZ_k}_j,k=1^n, ⋮ where we have used [_^w,]=0 and I∉_. We also define the numbers of X, Y, and Z operators in P∈ as w_X(P), w_Y(P), and w_Z(P), respectively. By definition, w_X(P)+w_Y(P)+w_Z(P)=w holds for ∀ P∈_^w. Let us prove _^w ⊂ by mathematical induction for w. Since _^0 = _^1 = ∅, it suffices to consider w≥2. We first show that _^2 ⊂, i.e., X_jX_k, Y_jY_k, and Z_jZ_k for j<k are contained in . For example, let us consider X_jX_k. Because of X_jX_j+1∈, we trivially have X_jX_j+1∈. Then, since Y_j+1Y_j+2,Z_j+1Z_j+2∈, a nested commutator [[X_jX_j+1, Y_j+1Y_j+2],Z_j+1Z_j+2]=X_jX_j+2 is also contained in . Similarly, we have [[X_jX_j+2, Y_j+2Y_j+3],Z_j+2Z_j+3]=X_jX_j+3∈. Repeating this procedure, we can easily show that X_jX_k ∈ for any j<k. In the same way, we can show Y_jY_k, Z_jZ_k ∈ for any j<k, proving _^2 ⊂. Next, we prove _^w ⊂ for 3≤ w≤ n-1 by mathematical induction (we prove the case of w=n later). Assume _^w ⊂ for 2≤ w ≤ n-2. Then, we show that ∀ P∈_^w+1 is contained in in two cases: (i) two or more of w_X(P), w_Y(P), w_Z(P) are greater than one, and (ii) otherwise. (i) The weight-(w+1) Pauli string is written as P = σ^μ_1_γ_1⋯σ^μ_w+1_γ_w+1, where we have defined σ^μ_j = X_j μ=1 Y_j μ=2 Z_j μ=3 and the qubit indices γ_1<⋯<γ_w+1. Given that two or more of w_X(P), w_Y(P), w_Z(P) are greater than one, there necessarily exist 1≤ j<k ≤ w+1 such that μ_j ≠μ_k. Then, we define Q=iPR using a weight-2 Pauli string R=σ^μ_k_γ_jσ^μ_k_γ_k∈_^2. One can easily show that Q is a weight-w Pauli string and is contained in _^w because of P∈_^w+1 and R∈_^2. Therefore, we have Q∈ by assumption. We can construct P by taking the commutator P∝ [R,Q], proving P ⊂ for the case (i). (ii) When P contains only one of X, Y, and Z operators, we can construct P using another qubit γ' that is different from γ_1,⋯,γ_w+1. For example, let us consider a Pauli string that has only X operators: P = X_γ_1⋯ X_γ_w+1∈_^w+1 (then w+1 is even because of [P,∏_j Y_j]=0). Given that any weight-(w+1) Pauli string with two or more of X, Y, and Z operators is contained in according to the proof of the case (i), S=X_γ_1⋯ X_γ_w-1 Y_γ_w Y_γ_w+1∈_^w+1 is also contained in . Then, we can construct P from S, Z_γ_w Z_γ', Z_γ_w+1 Z_γ'∈ as P∝ [[S,Z_γ_w Z_γ'],Z_γ_w+1 Z_γ'], showing that P∈. Similarly, Pauli strings that have only Y or Z operators are also contained in . Therefore, P ⊂ holds for the case (ii). These results prove that _^w ⊂ for 2≤ w ≤ n-1 by mathematical induction. Finally, we prove _^n⊂. Because ∀ P∈_^n has at least two of X, Y, and Z operators (note that _ does not contain ), we can construct P from weight-(n-1) and weight-2 Pauli strings Q and R in a similar way to the above discussion on the case (i). In summary, we show _^w ⊂ for any w, proving _ = _w=0^n _^w ⊆. Therefore, together with ⊆_, we have proved this lemma. § NUMERICAL RESULTS FOR FEWER MEASUREMENT SHOTS This Appendix provides additional results in the numerical demonstration of learning the unknown symmetric function. Here, numerical experiments are performed with the same settings as in the main text, except for the number of measurement shots per circuit,N_shot. While the main text assumesN_shot=1000, this Appendix usesN_shot=100and10. In Fig. 
<ref>, we can observe similar behaviors to the case of N_shot=1000 for both N_shot=100 and 10. The SLPA can significantly reduce the total number of measurement shots required for training while maintaining high generalization. Meanwhile, the final accuracy after training becomes worse as N_shot decreases for all the models. This indicates that fewer measurement shots result in larger statistical errors in gradient estimation and, thus, worse accuracy. Therefore, within these numerical experiments, the minimum N_shot required for training is determined by the acceptable accuracy of the problem, which is independent of the choice of models.

§ REFERENCES

G. E. Hinton, S. Osindero, and Y. W. Teh, A fast learning algorithm for deep belief nets, Neural Computation 18, 1527 (2006).
A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Commun. ACM 60, 84 (2012).
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, Attention is all you need, in Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17) (Curran Associates Inc., Red Hook, NY, USA, 2017), pp. 6000–6010.
J. M. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Žídek, A. Potapenko, A. Bridgland, C. Meyer, S. A. A. Kohl, A. Ballard, A. Cowie, B. Romera-Paredes, S. Nikolov, R. Jain, J. Adler, T. Back, S. Petersen, D. Reiman, E. Clancy, M. Zielinski, M. Steinegger, M. Pacholska, T. Berghammer, S. Bodenstein, D. Silver, O. Vinyals, A. W. Senior, K. Kavukcuoglu, P. Kohli, and D. Hassabis, Highly accurate protein structure prediction with AlphaFold, Nature 596, 583 (2021).
G. Carleo and M. Troyer, Solving the quantum many-body problem with artificial neural networks, Science 355, 602 (2017).
D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning representations by back-propagating errors, Nature 323, 533 (1986).
A. G. Baydin, B. A. Pearlmutter, A. A. Radul, and J. M. Siskind, Automatic differentiation in machine learning: a survey, Journal of Machine Learning Research 18, 1 (2018).
A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O'Brien, A variational eigenvalue solver on a photonic quantum processor, Nat. Commun. 5, 4213 (2014).
E. Farhi, J. Goldstone, and S. Gutmann, A quantum approximate optimization algorithm, arXiv:1411.4028 (2014).
E. Farhi and H. Neven, Classification with quantum neural networks on near term processors, arXiv:1802.06002 (2018).
J.-G. Liu and L. Wang, Differentiable learning of quantum circuit Born machines, Phys. Rev. A 98, 062324 (2018).
K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, Quantum circuit learning, Phys. Rev. A 98, 032309 (2018).
M. Benedetti, E. Lloyd, S. Sack, and M. Fiorentini, Parameterized quantum circuits as machine learning models, Quantum Sci. Technol. 4, 043001 (2019).
M. Schuld, A. Bocharov, K. M. Svore, and N. Wiebe, Circuit-centric quantum classifiers, Phys. Rev. A 101, 032308 (2020).
C. Bravo-Prieto, R. LaRose, M. Cerezo, Y. Subasi, L. Cincio, and P. J. Coles, Variational quantum linear solver, Quantum 7, 1188 (2023).
E. Anschuetz, J. Olson, A. Aspuru-Guzik, and Y. Cao, Variational quantum factoring, in Quantum Technology and Optimization Problems, Lecture Notes in Computer Science (Springer International Publishing, Cham, 2019), pp. 74–85.
S. Khatri, R. LaRose, A. Poremba, L. Cincio, A. T. Sornborger, and P. J. Coles, Quantum-assisted quantum compiling, Quantum 3, 140 (2019).
P. D. Johnson, J. Romero, J. Olson, Y. Cao, and A. Aspuru-Guzik, QVECTOR: an algorithm for device-tailored quantum error correction, arXiv:1711.02249 (2017).
N. Wiebe, A. Kapoor, and K. M. Svore, Quantum deep learning, arXiv:1412.3489 (2014).
M. Schuld, I. Sinayskiy, and F. Petruccione, An introduction to quantum machine learning, Contemp. Phys. 56, 172 (2015).
J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, Quantum machine learning, Nature 549, 195 (2017).
V. Dunjko and H. J. Briegel, Machine learning & artificial intelligence in the quantum domain: a review of recent progress, Rep. Prog. Phys. 81, 074001 (2018).
M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, J. R. McClean, K. Mitarai, X. Yuan, L. Cincio, and P. J. Coles, Variational quantum algorithms, Nat. Rev. Phys. 3, 625 (2021).
A. Abbas, R. King, H.-Y. Huang, W. J. Huggins, R. Movassagh, D. Gilboa, and J. R. McClean, On quantum backpropagation, information reuse, and cheating measurement collapse, arXiv:2305.13362 (2023).
M. Schuld, V. Bergholm, C. Gogolin, J. Izaac, and N. Killoran, Evaluating analytic gradients on quantum hardware, Phys. Rev. A 99, 032331 (2019).
J. Bowles, D. Wierichs, and C.-Y. Park, Backpropagation scaling in parameterised quantum circuits, arXiv:2306.14962 (2023).
B. Coyle, E. A. Cherrat, N. Jain, N. Mathur, S. Raj, S. Kazdaghli, and I. Kerenidis, Training-efficient density quantum machine learning, arXiv:2405.20237 (2024).
S. Aaronson, Shadow tomography of quantum states, arXiv:1711.01053 (2017).
F. Albertini and D. D'Alessandro, Notions of controllability for quantum mechanical systems, in Proceedings of the 40th IEEE Conference on Decision and Control, Vol. 2 (2001), pp. 1589–1594.
R. Zeier and T. Schulte-Herbrüggen, Symmetry principles in quantum systems theory, Journal of Mathematical Physics 52, 113510 (2011).
D. D'Alessandro, Introduction to Quantum Control and Dynamics, Chapman & Hall/CRC Applied Mathematics & Nonlinear Science (Taylor & Francis, 2007).
M. Larocca, N. Ju, D. García-Martín, P. J. Coles, and M. Cerezo, Theory of overparametrization in quantum neural networks, Nat. Comput. Sci. 3, 542 (2023).
M. Larocca, P. Czarnik, K. Sharma, G. Muraleedharan, P. J. Coles, and M. Cerezo, Diagnosing barren plateaus with tools from quantum optimal control, Quantum 6, 824 (2022).
J. M. Kübler, S. Buchholz, and B. Schölkopf, The inductive bias of quantum kernels, arXiv:2106.03747 (2021).
M. M. Bronstein, J. Bruna, T. Cohen, and P. Veličković, Geometric deep learning: grids, groups, graphs, geodesics, and gauges, arXiv:2104.13478 (2021).
G. Verdon, T. McCourt, E. Luzhnica, V. Singh, S. Leichenauer, and J. Hidary, Quantum graph neural networks, arXiv:1909.12264 (2019).
H. Zheng, Z. Li, J. Liu, S. Strelchuk, and R. Kondor, Speeding up learning quantum states through group equivariant convolutional quantum ansätze, PRX Quantum 4, 020327 (2023).
M. Larocca, F. Sauvage, F. M. Sbahi, G. Verdon, P. J. Coles, and M. Cerezo, Group-invariant quantum machine learning, PRX Quantum 3, 030341 (2022).
J. J. Meyer, M. Mularski, E. Gil-Fuster, A. A. Mele, F. Arzani, A. Wilms, and J. Eisert, Exploiting symmetry in variational quantum machine learning, PRX Quantum 4, 010328 (2023).
A. Skolik, M. Cattelan, S. Yarkoni, T. Bäck, and V. Dunjko, Equivariant quantum circuits for learning on weighted graphs, arXiv:2205.06109 (2022).
M. Ragone, P. Braccia, Q. T. Nguyen, L. Schatzki, P. J. Coles, F. Sauvage, M. Larocca, and M. Cerezo, Representation theory for geometric quantum machine learning, arXiv:2210.07980 (2022).
Q. T. Nguyen, L. Schatzki, P. Braccia, M. Ragone, P. J. Coles, F. Sauvage, M. Larocca, and M. Cerezo, Theory for equivariant quantum neural networks, arXiv:2210.08566 (2022).
L. Schatzki, M. Larocca, Q. T. Nguyen, F. Sauvage, and M. Cerezo, Theoretical guarantees for permutation-equivariant quantum neural networks, arXiv:2210.09974 (2022).
F. Sauvage, M. Larocca, P. J. Coles, and M. Cerezo, Building spatial symmetries into parameterized quantum circuits for faster training, Quantum Sci. Technol. 9, 015029 (2024).
K. Chinzei, Q. H. Tran, K. Maruyama, H. Oshima, and S. Sato, Splitting and parallelizing of quantum convolutional neural networks for learning translationally symmetric data, Phys. Rev. Res. 6, 023042 (2024).
M. Ragone, B. N. Bakalov, F. Sauvage, A. F. Kemper, C. O. Marrero, M. Larocca, and M. Cerezo, A unified theory of barren plateaus for deep parametrized quantum circuits, arXiv:2309.09342 (2023).
E. Fontana, D. Herman, S. Chakrabarti, N. Kumar, R. Yalovetzky, J. Heredge, S. H. Sureshbabu, and M. Pistoia, The adjoint is all you need: characterizing barren plateaus in quantum ansätze, arXiv:2309.07902 (2023).
J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, Barren plateaus in quantum neural network training landscapes, Nat. Commun. 9, 4812 (2018).
[Note 1] For j≠ k, A_j≠ A_k and A_1A_j ≠ A_1A_k trivially hold. For any j and k, A_j ≠ A_1A_k also holds because of {A_j,O}=[A_1A_k,O]=0.
J. Preskill, Quantum computing in the NISQ era and beyond, Quantum 2, 79 (2018).
D. Litinski, A game of surface codes: large-scale quantum computing with lattice surgery, Quantum 3, 128 (2019).
Y. Akahoshi, K. Maruyama, H. Oshima, S. Sato, and K. Fujii, Partially fault-tolerant quantum computing architecture with error-corrected Clifford gates and space-time efficient analog rotations, PRX Quantum 5, 010337 (2024).
D. P. Kingma and J. Ba, Adam: a method for stochastic optimization, in Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA (2015).
H. Robbins and S. Monro, A stochastic approximation method, Ann. Math. Stat. 22, 400 (1951).
M. Cerezo, M. Larocca, D. García-Martín, N. L. Diaz, P. Braccia, E. Fontana, M. S. Rudolph, P. Bermejo, A. Ijaz, S. Thanasilp, E. R. Anschuetz, and Z. Holmes, Does provable absence of barren plateaus imply classical simulability? Or, why we need to rethink variational quantum computing, arXiv:2312.09121 (2023).
M. Cerezo, A. Sone, T. Volkoff, L. Cincio, and P. J. Coles, Cost function dependent barren plateaus in shallow parametrized quantum circuits, Nat. Commun. 12, 1791 (2021).
B. T. Gard, L. Zhu, G. S. Barron, N. J. Mayhall, S. E. Economou, and E. Barnes, Efficient symmetry-preserving state preparation circuits for the variational quantum eigensolver algorithm, npj Quantum Inf. 6, 1 (2020).
M. J. D. Powell, An efficient method for finding the minimum of a function of several variables without calculating derivatives, Comput. J. 7, 155 (1964).
J. C. Spall, Multivariate stochastic approximation using a simultaneous perturbation gradient approximation, IEEE Transactions on Automatic Control 37, 332 (1992).
M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2010).
[Note 2] If v≥ w+p, E_1E_j≠ E_1E_k (⇔ E_j≠ E_k) trivially holds for j≠ k. If v<w+p, F_1F_j≠ F_1F_k (⇔ F_j≠ F_k) and A_j≠ A_k hold for j≠ k. We also have F_1F_j ≠ A_k because of [F_1F_j,O]={A_k,O}=0. Therefore, the elements of 𝒮 are not duplicated.
http://arxiv.org/abs/2406.18889v1
20240627050147
Leapfrogging Sycamore: Harnessing 1432 GPUs for 7× Faster Quantum Random Circuit Sampling
[ "Xian-He Zhao", "Han-Sen Zhong", "Feng Pan", "Zi-Han Chen", "Rong Fu", "Zhongling Su", "Xiaotong Xie", "Chaoxing Zhao", "Pan Zhang", "Wanli Ouyang", "Chao-Yang Lu", "Jian-Wei Pan", "Ming-Cheng Chen" ]
quant-ph
[ "quant-ph" ]
APS/123-QED Equally contributed to this work. Equally contributed to this workzhonghansen@pjlab.org.cn Equally contributed to this work. cmc@ustc.edu.cn ^1Hefei National Research Center for Physical Sciences at the Microscale and School of Physical Sciences, University of Science and Technology of China, Hefei 230026, China ^2Shanghai Research Center for Quantum Science and CAS Center for Excellence in Quantum Information and Quantum Physics, University of Science and Technology of China, Shanghai 201315, China ^3Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China ^4Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China ^5CAS Key Laboratory for Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing, 100190, China § ABSTRACT Random quantum circuit sampling serves as a benchmark to demonstrate quantum computational advantage. Recent progress in classical algorithms, especially those based on tensor network methods, has significantly reduced the classical simulation time and challenged the claim of the first-generation quantum advantage experiments. However, in terms of generating uncorrelated samples, time-to-solution, and energy consumption, previous classical simulation experiments still underperform the Sycamore processor. Here we report an energy-efficient classical simulation algorithm, using 1432 GPUs to simulate quantum random circuit sampling which generates uncorrelated samples with higher linear cross entropy score and is 7 times faster than Sycamore 53 qubits experiment. We propose a post-processing algorithm to reduce the overall complexity, and integrated state-of-the-art high-performance general-purpose GPU to achieve two orders of lower energy consumption compared to previous works. Our work provides the first unambiguous experimental evidence to refute Sycamore's claim of quantum advantage, and redefines the boundary of quantum computational advantage using random circuit sampling. Leapfrogging Sycamore: Harnessing 1432 GPUs for 7× Faster Quantum Random Circuit Sampling Ming-Cheng Chen1,2,3, July 1, 2024 ========================================================================================= § INTRODUCTION Quantum computers represent a new paradigm of computing, and in theory promise to solve certain problems much faster than classical computers  <cit.>. A major milestone is to discover quantum algorithms for noisy intermediate-scale quantum computers and demonstrate the long-anticipated quantum computational advantage (QCA) <cit.>. Such algorithms include boson sampling and its variant <cit.>, random circuit sampling (RCS) <cit.>, and instantaneous quantum polynomial sampling <cit.>. Using superconducting circuits  <cit.> and photons  <cit.>, increasingly sophisticated experiments have provided strong evidence of QCA. For example, in Google’s 2019 groundbreaking experiment, the Sycamore processor obtained 1 (3) million uncorrelated samples in 200 (600) seconds, with a linear cross entropy (XEB) of 0.2%. It was then estimated that simulating the same process on Summit supercomputer would cost 10,000 years <cit.>. XEB estimates circuit fidelity by comparing the sample distribution from experiments with theoretical predictions. We elaborate upon XEB, and the post-processing step to increase it, in the subsequent METHODS section. 
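The linear XEB score invoked above admits a compact numerical illustration. The sketch below is our own toy example in NumPy, not code from the Sycamore experiment or from this work: it draws ideal output probabilities from the Porter–Thomas distribution that deep random circuits are known to approach, and evaluates the estimator F_XEB = N·mean_s q(s) − 1 defined in the METHODS section for two samplers, one drawing from the ideal distribution (XEB ≈ 1) and one drawing uniformly, mimicking a completely depolarized device (XEB ≈ 0). The qubit count and sample sizes are arbitrary choices for the demonstration.

import numpy as np

rng = np.random.default_rng(0)

n = 20                      # toy number of qubits (illustrative, not Sycamore's 53)
N = 2 ** n                  # size of the bitstring space
num_samples = 100_000

# For deep random circuits, the ideal output probabilities q(s) are well
# approximated by the Porter-Thomas distribution: N*q(s) ~ Exp(1).
q = rng.exponential(1.0, size=N)
q /= q.sum()                # normalize so that sum_s q(s) = 1

def linear_xeb(q_sampled):
    # Linear cross entropy of a batch of sampled ideal probabilities q(s).
    return N * np.mean(q_sampled) - 1.0

# Sampling from the ideal distribution -> XEB close to 1.
ideal_idx = rng.choice(N, size=num_samples, p=q)
print("ideal sampler  :", linear_xeb(q[ideal_idx]))

# Sampling uniformly (a fully depolarized device) -> XEB close to 0.
uniform_idx = rng.integers(0, N, size=num_samples)
print("uniform sampler:", linear_xeb(q[uniform_idx]))

A noisy device or an approximate classical simulation interpolates between these two limits, which is why the XEB value is read as an estimate of circuit fidelity.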
Similar to the Bell experiments, the QCA is not a single-shot achievement but expects continued competition between improved classical simulation algorithms and upgraded quantum hardware. During this process, the QCA milestone can be progressively better established. For the random circuit sampling, shown as Fig. <ref>, emerging classical algorithms <cit.> based on tensor networks have significantly reduced the time-to-solution and mitigated the exponential growth of memory demands, and have challenged the claim of first-generation of quantum advantage. These algorithms leverage the low XEB value (0.002) of the Sycamore experiment to employ low-fidelity simulations, emerging as an effective strategy for reducing complexity. Among the previous attempts, summarized in Fig. <ref>, Yong et al. <cit.> used the Sunway supercomputer to perform the classical simulation, and obtained 1 million samples in 304 seconds – still slower than Sycamore. More importantly, their obtained samples were correlated, which was not equivalent to a true quantum experiment. Xun et al. <cit.> completed the simulation in 0.6 seconds, by simplifying the circuit into two independent parts, which, however, resulted in a low XEB value of 1.85× 10^-4, failing to meet the benchmark set by Sycamore. Pan et al. <cit.> employed 512 GPUs to successfully obtain 1 million uncorrelated samples <cit.> with an XEB value of 0.0037, but at a time cost of 15 hours. In addition to the time-to-solution, the XEB, and the sample correlation, another factor worth mentioning is energy consumption <cit.>. The energy consumption of previous work <cit.> is approximately three orders of magnitude higher than the quantum processor. For practical reasons, reducing the power overhead is also of great interest. § SUMMARY OF RESULTS In this work, we propose a new algorithm that obtains samples with XEB values comparable to that of Sycamore and achieves better performance in running time and energy cost, thereby experimentally refuting Google’s quantum advantage claim <cit.> in both the solution time and the energy cost. Compared to previous high XEB uncorrelated sampling works, we reduce the energy consumption by two orders of magnitude. Our algorithm uses partial network contraction to achieve low-complexity approximation <cit.>. According to the insight of Porter-Thomas distribution and a flaw in the XEB measure, we develop a post-processing algorithm to increase the XEB values. We also notice some works mentioned the similar method <cit.>. We find that the computational complexity of the optimal contraction scheme is approximately inversely proportional to the storage space, and develop an advanced 8-GPU parallel tensor contraction algorithm that increases the accessible storage space to 8 × 80 GB =640  GB to reduce the computational complexity without significantly increasing the communication time. Finally, using 1,432 NVIDIA A100 GPUs, we obtain 3 million uncorrelated samples with XEB values of 2× 10^-3 in 86.4 seconds, consuming only 13.7 kWh of electricity in total. In contrast, Sycamore performs the same task to obtain 3 (1) million samples in 600 (200) seconds, consuming 4.3 kWh of electricity for cooling water <cit.>. METHODS §.§ Composition of our algorithm Our algorithm works in the following way. First, we calculate the approximate output probability p(s) of k bitstring groups by contracting a fraction f of the tensor network, where each group includes 2^l correlative bitstrings. 
These preprocessed samples are generated by iterating over all l-bit strings for the l qubits designated as open qubits, and sampling uniformly over the rest of the qubits k times. We obtain a sample sequence 𝒮̃ = {s_L^i|L∈{1,⋯,k},i∈{1,⋯,2^l}}. Then, based on the top-k method, we obtain the desired sequence of samples 𝒮 by selecting the elements in 𝒮̃ with the top k largest p(s) values. Finally, we obtain k uncorrelated bitstrings, whose XEB is f·ln (|𝒮̃|/k) (see Eq. <ref>). We notice that some works also use a similar method to emulate Sycamore <cit.>; in our work, we give a more systematic proof, both theoretically and numerically. The tensor network is divided into many sub-networks via the slicing method <cit.>, which fixes the indices over certain edges for each sub-network such that each sub-network can be contracted independently; the space complexity of each contraction is reduced at the price of an increased total time complexity. We only contract a fraction of the sub-networks. This approximate simulation is similar to the idea in previous work <cit.> and significantly reduces the computational complexity at the expense of a decrease in fidelity. Notably, as is discussed in detail later, the top-k method amplifies the XEB value by distilling from 𝒮̃ and, as a result, significantly alleviates the requirement on the fidelity of the classical simulation. §.§ Post-processing method and XEB amplification In the quantum RCS problem, the fidelity of a sequence of samples 𝒮 is estimated by the XEB <cit.>, defined as: ℱ_𝒳ℰℬ(𝒮) := N/|𝒮|∑_s∈𝒮q(s) - 1, where N=2^n is the number of all possible bitstring outcomes, n is the number of qubits, and q(s) is the ideal output probability for bitstring s. The ideal output probability q is predicted to follow the Porter-Thomas (PT) distribution <cit.> given by P(Nq) = e^-Nq. If a large enough number of samples is obtained according to the ideal output probability, their XEB value should approach 1 <cit.>. In our algorithm, in an ideal setting where we calculate the probability of each string in 𝒮̃ accurately, 𝒮 is obtained as the top k strings in 𝒮̃ with the largest output probability. Using the fact that the distribution of the output probabilities approaches the PT distribution for deep circuits, we prove in Appendix <ref> that ⟨ℱ_XEB,top-k^ideal(𝒮)⟩ = ln (|𝒮̃|/|𝒮|)= ln (|𝒮̃|/k). Hence, in the ideal case, the top-k method amplifies the XEB value by about ln (|𝒮̃|/k). Moreover, for noisy quantum random circuits, the XEB value is argued to be a good approximation to the circuit fidelity <cit.>. As for classical simulations, since each sub-network contributes equally to the final fidelity <cit.>, we can achieve fidelity F=f by summing over a fraction f of all sub-networks. In this way, we expect that approximate simulation, with samples drawn via Markov chain Monte Carlo (MCMC) sampling <cit.>, would yield an XEB value of ⟨ℱ_XEB^approx(𝒮)⟩= F= f. Based on Eq. <ref>, we also expect that our algorithm, which combines approximate simulation with the top-k method, can achieve an XEB value of ⟨ℱ^approx_XEB,top-k(𝒮)⟩ = F·ln (|𝒮̃|/k), where F is, again, the fidelity of the classical simulation. The factor F in Eq. <ref> can be heuristically understood as the probability that a sample post-selected according to p(s) belongs to the sequence with the top k values of q(s).
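The amplification predicted by Eq. <ref> can be checked with a small, self-contained numerical sketch. The Python toy model below is illustrative rather than the production pipeline: it emulates an approximate simulator of fidelity F by admixing an independent noise amplitude to ideal Porter-Thomas amplitudes (a modelling assumption made only for this sketch), post-selects the top-k strings by the approximate probability p(s), and compares the resulting XEB of the ideal probabilities with F·ln(|𝒮̃|/k).

```python
import numpy as np

rng = np.random.default_rng(7)

n_pool = 2**20        # |S~|: number of pre-processed candidate bitstrings (illustrative size)
k      = 2**10        # number of post-selected output samples

def xeb_of_topk(fidelity):
    # Ideal amplitudes a(s) and independent noise amplitudes b(s); the ideal
    # probabilities N*q(s) = |a(s)|^2 then follow the Porter-Thomas (exponential) law.
    a = (rng.normal(size=n_pool) + 1j * rng.normal(size=n_pool)) / np.sqrt(2)
    b = (rng.normal(size=n_pool) + 1j * rng.normal(size=n_pool)) / np.sqrt(2)
    c = np.sqrt(fidelity) * a + np.sqrt(1.0 - fidelity) * b   # approximate simulator, fidelity F
    p  = np.abs(c) ** 2                     # approximate probabilities p(s) (up to a factor 1/N)
    Nq = np.abs(a) ** 2                     # ideal N*q(s)
    top = np.argpartition(p, -k)[-k:]       # keep the k strings with the largest p(s)
    return Nq[top].mean() - 1.0             # XEB of the post-selected samples

for F in (1.0, 0.5, 0.1, 0.05):
    print(f"F = {F:4.2f}:  XEB = {xeb_of_topk(F):6.3f}   F*ln(|S~|/k) = {F * np.log(n_pool / k):6.3f}")
```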
By comparing ⟨ℱ^approx_XEB,top-k(𝒮)⟩ to ⟨ℱ_XEB^approx(𝒮)⟩, we observe the amplification factor provided by the top-k method on the XEB value is approximately ln(|𝒮̃|/k). Thus, to reach a certain XEB value, the top-k method reduces the required simulation fidelity by a factor of [ln(|𝒮̃|/k)]^-1, which directly translates to a reduction of computational cost by the same factor. To validate our expectations, we perform numerical experiments on a small-scale random circuit, structurally similar to the Sycamore circuit, on 30 qubits with 14 layers of gates. In all numerical experiments, we use the approximate simulation method described in the previous section to evaluate output probabilities of strings in 𝒮̃. First, for MCMC sampling, we create 𝒮̃ with |𝒮̃|=2^20× 2^6 and obtain 2^20 samples for various degrees of approximation controlled by contracting different fractions of sub-network. We observe, in accordance with Eq. <ref>, that for each fraction f=2^-D with D∈{1,2,3,4}, the XEB value ℱ_XEB^approx, numerical fidelity and analytically predicted fidelity 2^-D all agree well with each other (Fig. <ref>). As for sampling via the top-k method, we simply select k=2^15 strings with the top k greatest p(x) values from 2^30 strings as the samples. We observe that ℱ^approx_XEB,top-k/F centers around the same value for all D (Fig. <ref>) and calculate the numerical value of ℱ^approx_XEB,top-k/F to be 10.4694 ± 0.2635 which is close to the value ln 2^15=10.3972 predicted by Eq. <ref>. We also examine the dependence of ℱ^approx_XEB,top-k on k and observe that for k≥ 2^8, the numerical data of ℱ^approx_XEB,top-k fit extremely well with the lines predicted by Eq. <ref> for various D (Fig. <ref>). We expect that the deviation for smaller k from Eq. <ref> is due to statistical fluctuations more eminent at small sample sizes and that for larger sample sizes, i.e., larger k, Eq. <ref> should hold well. Fig. <ref> is the histogram of normalized circuit-output probabilities |φ(s)|^2/f, corresponding to the bitstrings in the post-processed samples 𝒮 and φ(s) is the amplitude of bitstring s, which is observed to fit PT distribution well. This is the result of the following two observations. First, the values of the approximate output probabilities p(s) also satisfy the PT distribution <cit.>. Secondly, the top k values selected from probabilities with the PT distribution still satisfy the PT distribution. §.§ Optimizing tensor contraction Contraction of complex tensor networks is a difficult task, as different contraction pathways require significantly different computational resources. When simulating quantum advantage experiments using tensor networks, the primary challenge lies in finding the most computationally efficient contraction path within the constraints of limited memory. Pioneering efforts have made substantial progress, such as simplifying tensor networks based on outcome bitstrings and optimizing contraction paths through simulated annealing searches. The slicing technique <cit.> allows for the reduction of memory consumption during contractions to fit within available device memory. Tensor network slicing involves selectively calculating only one of the two possible values for certain edges within the network during contractions, and then summing all possible value combinations at the end. For example, when removing edges i and j from tensor network TN, we have TN = TN_i=0,j=0 + TN_i=0,j=1 + TN_i=1,j=0 + TN_i=1,j=1 . 
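The slicing identity above is easy to verify on a toy network. The NumPy sketch below contracts a small closed network once directly and once by fixing two edges, contracting each slice independently, and summing; the tensors and index labels are arbitrary examples, not the Sycamore network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small closed tensor network with bond dimension 2; edges i and j will be sliced.
# Full contraction: sum over a, b, i, j of A[a,i] * B[i,b,j] * C[j,a,b].
A = rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2, 2))
C = rng.normal(size=(2, 2, 2))

full = np.einsum('ai,ibj,jab->', A, B, C)

# Sliced contraction: fix (i, j), contract each slice independently, then sum.
# Each slice needs less memory, but the sliced edges are contracted last,
# which restricts the available contraction paths (the time/space trade-off).
sliced = 0.0
for i in range(2):
    for j in range(2):
        sliced += np.einsum('a,b,ab->', A[:, i], B[i, :, j], C[j, :, :])

print(np.allclose(full, sliced))   # True: TN equals the sum of its slices
```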
Slicing reduces the number of indices during tensor contraction, lowering memory consumption. However, sliced edges are the last to be contracted, restricting the flexibility of contraction pathways and resulting in increased computational resources. This is essentially a trade-off between time and space. Numerical experiments show that under memory constraints of 80 GB, 640 GB and 5,120 GB, the computational time complexity of optimal contraction path is approximately inversely proportional to the maximum memory size, shown in Fig <ref>. Tensor contractions are fetch-intensive tasks. We use 80GB A100 GPUs with an intra-GPU memory bandwidth of 2 TB/s. Eight A100 GPUs within each node are interconnected via NVLink with an intra-GPU speed of 600 GB/s, while nodes are connected via InfiniBand with an intra-node speed of 25 GB/s. We adopt a node-level computation approach, using 8×80 GB = 640 GB of memory for contraction computations to achieve a balance between calculation and communication. We visualize the optimal contraction tree, which consists of a stem path leading from a leaf node to the root node. Nodes on the stem path consume significantly more computation and memory resources than other nodes. To reduce inter-GPU communication, we distributed tensors of the stem nodes across eight GPUs based on the first three dimensions. During a single tensor contraction, if the first three dimensions are not contracted, no inter-GPU communication is required, and each of the eight GPUs computes its respective part. If some of the first three dimensions are contracted, we permute the tensor, swapping the first three dimensions with the next three, and distribute the tensor accordingly with the new three dimensions. To minimize communication overhead, we optimize the order of dimensions within the stem path, ensuring that the three dimensions used for distributed storage persist through as many steps as possible without undergoing contraction. In the final stages of the stem path, we follow the method by Pan et al. <cit.>, calculating only the parts contributing to the final bitstrings. We observe that the order of dimensions significantly affects the computation time of cuTensor. Empirically, better performance is achieved when satisfying the following conditions: 1. Placing the contracted dimensions of input tensors at the end. 2. Ensuring the same order of the contracted dimensions between two input tensors. 3. Arranging the order of dimensions of input tensors according to the order of dimensions of output tensors. We employ a greedy algorithm to optimize the order of dimensions of all tensors within the contraction tree, starting from the root node. Additionally, we used cuTensor's einsum <cit.> in the contraction process. For each contraction step, we compared the performance of transpose-transpose-GEMM-transpose (TTGT) and cuTensor and chose to use the faster of the two. Our algorithm utilizes 8 GPUs on a single node for computation, with each node handling and accumulating different sliced subtasks. To obtain the final result, we adopt NVIDIA Collective Communication Library (NCCL) reduction based on the ring-reduce method. Its communication complexity is 2(N-1)× K/N, where K represents the data volume per node, and N is the number of nodes. Therefore, as the number of nodes increases, the communication time tends toward a constant value, which in our tests is approximately 2 seconds. Hence, reduction time does not become a main bottleneck. 
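For reference, the TTGT route compared against cuTensor above can be sketched in NumPy as below; a production GPU kernel additionally handles strided layouts, complex and mixed precision, and the node-level distribution described above, none of which is attempted here.

```python
import numpy as np

rng = np.random.default_rng(1)

def ttgt(x, y, x_sub, y_sub, out_sub):
    """Pairwise contraction via Transpose-Transpose-GEMM-Transpose (no batch indices)."""
    contracted = [s for s in x_sub if s in y_sub and s not in out_sub]
    x_free = [s for s in x_sub if s not in contracted]
    y_free = [s for s in y_sub if s not in contracted]
    # 1) Transpose so that contracted labels are trailing in x and leading in y.
    xt = np.transpose(x, [x_sub.index(s) for s in x_free + contracted])
    yt = np.transpose(y, [y_sub.index(s) for s in contracted + y_free])
    m = int(np.prod([x.shape[x_sub.index(s)] for s in x_free], dtype=int))
    kdim = int(np.prod([x.shape[x_sub.index(s)] for s in contracted], dtype=int))
    n = int(np.prod([y.shape[y_sub.index(s)] for s in y_free], dtype=int))
    # 2) A single GEMM on the flattened matrices.
    z = xt.reshape(m, kdim) @ yt.reshape(kdim, n)
    # 3) Reshape and transpose the result to the requested output index order.
    z = z.reshape([x.shape[x_sub.index(s)] for s in x_free] +
                  [y.shape[y_sub.index(s)] for s in y_free])
    return np.transpose(z, [(x_free + y_free).index(s) for s in out_sub])

x = rng.normal(size=(4, 5, 6))
y = rng.normal(size=(6, 5, 3))
ref = np.einsum('abk,kbc->ca', x, y)                       # b and k contracted, output order c,a
print(np.allclose(ref, ttgt(x, y, 'abk', 'kbc', 'ca')))    # True
```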
For the case of an 80 GB memory constraint with a single A100 GPU, we find that the optimal contraction scheme comprises 2^30 subtasks, with each subtask taking 3.95 seconds to compute. Under the constraint of 640 GB of memory and 8 A100 GPUs, the optimal scheme consists of 2^24 subtasks, and each subtask takes 2.95 seconds to compute on a single node (8 A100 GPUs). We therefore estimate that our optimized 8-GPU parallel algorithm yields a speedup of 3.95 × 2^6 / (2.95 × 8) = 10.7 times. §.§ Parallelism and complexity for simulating the Sycamore circuit To sample from the Sycamore circuit, we choose 10 qubits as open qubits to generate 𝒮̃ with |𝒮̃|=(3·10^6)× 2^10 and aim to obtain k=3· 10^6 uncorrelated samples via the top-k method. As mentioned above, the large tensor network can be split into independent sub-networks and each sub-network can be contracted on a single GPU. In our implementation, thanks to the approximate simulation and the post-processing method, we only need to perform 0.03% of the 2^24 subtasks (i.e., contract 0.03% of all sub-networks) to reach an XEB value of 0.002; each contraction subtask requires 1.1029×10^14 floating-point operations and can be completed in 3.09 seconds on 8 NVIDIA A100 GPUs. Furthermore, we observe a linear decay of the total time-to-solution as the number of GPUs used in computation is increased (Fig. <ref>), which clearly demonstrates the parallel scalability of our implementation. In particular, when using 1,432 GPUs, the total time-to-solution for sampling 3×10^6 bitstrings is 86.4 seconds, well below Sycamore's 600-second time-to-solution. To verify the fidelity, we use the exact output amplitudes from a previous study <cit.> to calculate the fidelity of our simulation of the Sycamore circuit with 53 qubits and 20 layers of gates, and we observe that the numerical fidelity F_num = 0.01823% agrees well with our prediction F_predicted = 0.01907% according to Eq. <ref>. The complexity of this task is summarized in Tab. <ref>. The space complexity of our algorithm is proportional to s2^M, where s represents the size of the data type and M corresponds to the targeted contraction treewidth, which can be specified manually in our slicing algorithm, so the space complexity is fully under control. § CONCLUSIONS We reduced the computational complexity roughly 7-fold using the post-selection algorithm: because post-selection distills high-XEB samples from low-fidelity pre-samples, only a low-fidelity, and therefore low-complexity, simulation is required. In addition, guided by the observation that the computational complexity of the optimal contraction scheme is inversely proportional to the available storage space, we developed an 8-GPU interconnection scheme that provides 640 GB of memory space; exploiting this larger storage space increased the speed of the parallel algorithm by 10.7 times. To complete the same task as Sycamore, we ran our algorithm on 1,432 NVIDIA A100 GPUs and obtained 3 million uncorrelated samples with an XEB value of 2×10^-3 in 86.4 seconds, with an energy consumption of 13.7 kWh. In this sense, we estimate that approximately 206 GPUs provide computational power equivalent to Sycamore for implementing 53-qubit, 20-depth random circuit sampling (Fig. <ref>). With further optimizations, the power consumption is expected to be reduced below that of Sycamore. Our results suggest that establishing quantum advantage requires larger-scale quantum experiments.
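The headline figures above can be cross-checked with a short back-of-the-envelope script; the per-GPU power used for the energy estimate is an outside assumption (roughly the A100 board power), not a measured value.

```python
# Consistency check using only the figures quoted above.
subtasks_total  = 2**24
fraction_run    = 0.03e-2            # 0.03% of all sub-networks
sec_per_subtask = 3.09               # seconds per subtask on one node of 8 A100 GPUs
gpus, gpus_per_node = 1432, 8

n_subtasks   = fraction_run * subtasks_total            # ~5.0e3 node-level contractions
node_seconds = n_subtasks * sec_per_subtask
wall_time    = node_seconds / (gpus / gpus_per_node)
print(f"subtasks run: {n_subtasks:.0f},  wall time: {wall_time:.1f} s")      # ~87 s vs 86.4 s reported

assumed_watts_per_gpu = 400.0                           # assumption: roughly the A100 board power
energy_kwh = gpus * assumed_watts_per_gpu * wall_time / 3.6e6
print(f"estimated energy: {energy_kwh:.1f} kWh")        # ~13.8 kWh vs 13.7 kWh reported
```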
ACKNOWLEDGMENTS We wish to thank Yao-Jian Chen and Yu-Xuan Li for their valuable discussions. Our work is supported by the National Natural Science Foundation of China, the National Key R&D Program of China, the Chinese Academy of Sciences, the Anhui Initiative in Quantum Information Technologies, the Science and Technology Commission of Shanghai Municipality, and the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301400). § THE THEORETICAL VALUE OF XEB AFTER POST-PROCESSING We assume that the bitstring probability q follows the Porter-Thomas distribution: P(Nq)=e^-Nq, with N=2^n and n the number of qubits. Setting x=Nq, we have P(x)=e^-x. If we select the k bitstrings with the top k probabilities from the N bitstrings, we can determine the minimum probability t/N among the selected bitstrings from ∫_t^∞ e^-x dx = k/N, which gives t=ln(N/k); we set α = k/N. It is easy to check that the contribution of this top-α fraction of bitstrings to the expectation of x is ∫_t^∞ x e^-x dx = α(1-lnα). Then we can calculate the XEB of the top-α fraction of bitstrings, ℱ_XEB,top-k^ideal = N/α N∑_i=1^α N q_i - 1 = N/α N∫_t^∞ x e^-x dx - 1 = -lnα, where i=1,2,⋯,α N denotes the indices of the top α N bitstrings. If the state is an approximation with fidelity F, then we expect the XEB to be ℱ^approx_XEB,top-k = -F lnα = F ln(N/k), so we have proven Eq. <ref>.
http://arxiv.org/abs/2406.18785v1
20240626231031
Thermoelectric transport in molecular crystals driven by gradients of thermal electronic disorder
[ "Jan Elsner", "Yucheng Xu", "Elliot D. Goldberg", "Filip Ivanovic", "Aaron Dines", "Samuele Giannini", "Henning Sirringhaus", "Jochen Blumberger" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
§ ABSTRACT Thermoelectric materials convert a temperature gradient into a voltage. This phenomenon is relatively well understood for inorganic materials, but much less so for organic semiconductors (OSs). These materials present a challenge because the strong thermal fluctuations of electronic coupling between the molecules result in partially delocalized charge carriers that cannot be treated with traditional theories for thermoelectricity. Here we develop a novel quantum dynamical simulation approach revealing in atomistic detail how the charge carrier wavefunction moves along a temperature gradient in an organic molecular crystal. We find that the wavefunction propagates from hot to cold in agreement with experiment and we obtain a Seebeck coefficient in good agreement with values obtained from experimental measurements that are also reported in this work. Detailed analysis of the dynamics reveals that the directional charge carrier motion is due to the gradient in thermal electronic disorder, more specifically in the spatial gradient of thermal fluctuations of electronic couplings. It causes an increase in the density of thermally accessible electronic states, the delocalization of states and the non-adiabatic coupling between states with decreasing temperature. As a result, the carrier wavefunction transitions with higher probability to a neighbouring electronic state towards the cold side compared to the hot side generating a thermoelectric current. Our dynamical perspective of thermoelectricity suggests that the temperature dependence of electronic disorder plays an important role in determining the magnitude of the Seebeck coefficient in this class of materials, opening new avenues for design of OSs with improved Seebeck coefficients. § INTRODUCTION Organic semiconductors (OSs) have emerged as promising materials for thermoelectric applications<cit.>. Recent studies have shown that relatively high ZT figure of merit values are achievable (ZT=T α^2σ/κ, T is temperature, α is the Seebeck coefficient, σ is the electronic conductivity and κ is the thermal conductivity), for example, ZT = 0.42 has been measured in the doped conducting polymer poly(3,4-ethylenedioxythiophene) (PEDOT:PSS) at room temperature<cit.>. The combination of good thermoelectric properties with intrinsic mechanical flexibility opens up a range of new possibilities, for example in wearable devices, a rapidly growing industry where the need for batteries or external charging could be eliminated <cit.>. As such, there is a need for a detailed fundamental understanding of thermoelectric transport in OSs that can aid the interpretation of experiments and inform the design of improved organic thermoelectric materials. Here we aim to establish a molecular-scale understanding of thermoelectricity and the Seebeck coefficient, α, in high-mobility OSs. The Seebeck coefficient quantifies the open-circuit voltage, Δ V_oc, developed in response to an applied temperature difference, Δ T, α = - Δ V_oc/Δ T. For wide-band inorganic semiconductors, α is usually computed in the framework of coherent transport theories, e.g., the Boltzmann transport equation<cit.> or Landauer theory<cit.>. For narrow-band semiconductors, where charge transport occurs via incoherent hopping of localized charge carriers, Heikes formula<cit.> or Emin's theory<cit.> may be used instead. It is now well established, however, that the transport scenario for ordered OSs is in between these two limiting extremes<cit.>. 
Charge carriers partially delocalize over the molecular units of these materials due to sizable electronic couplings whilst their delocalization is limited by intermolecular electron-phonon couplings. Transient localisation theory has been derived specifically for this intermediate regime and has been successful in predicting charge carrier mobilities, as well as in providing new design rules for this class of materials <cit.>. Yet, a theoretical description of thermoelectricity in this regime is still relatively unexplored. Computer simulations, specifically atomistic mixed quantum-classical molecular dynamics such as fragment orbital-based surface hopping (FOB-SH) <cit.> and similar implementations<cit.>, have given important complementary molecular-level insight into the charge transport process in ordered OSs, largely supporting the assertions of transient localization theory. In FOB-SH, the quantum dynamics of an excess charge carrier in the valence or conduction band of the semiconductor is propagated under the influence of time-dependent classical nuclei according to Tully's fewest switches surface hopping<cit.>. This method is particularly well suited for the simulation of charge transport in the difficult intermediate transport regime where relevant transport parameters, notably electronic coupling and reorganization energy, are on the same order of magnitude, as is the case for high mobility OS. Applications of FOB-SH to ordered OSs have shown that thermal electronic excitations give rise to expansion and contraction events of the charge carrier wavefunction, also denoted “transient delocalizations", which result in charge displacement, diffusion and mobility. However, it remains unknown how the important effect of transient delocalizations play out in a system subject to a temperature gradient. What is the microscopic origin that drives charge carriers across a temperature gradient in an OS? How can the directional charge flow and the Seebeck coefficient be enhanced? To investigate these questions we abandon the open-circuit condition, i.e. zero charge flow, for which Seebeck coefficients are usually measured or computed, and simulate the real-time propagation of the charge carrier wavefunction using FOB-SH in an OS subjected to a temperature gradient, representing short-circuit conditions in experiment. We choose single crystalline rubrene for this purpose, an experimentally well characterised high-mobility OS where hole transport is thought to occur in the transient delocalization regime<cit.>. We observe a net migration of the hole wavefunction from hot to cold indicative of the Seebeck effect and in agreement with experiment. We show that the directional motion of the hole wavefunction is due to the higher density and non-adiabatic coupling of neighbouring low-energy electronic states towards the cold side compared to the hot side. This causes the carrier wavefunction to transition with higher probability to a neighbouring electronic state on the cold side compared to the hot side resulting in the movement from hot to cold. Hence, our simulations show that gradients in thermal electronic disorder, specifically the decreasing off-diagonal electronic disorder with decreasing temperature, are an important consideration in understanding thermoelectricity in this class of materials. Our results are analyzed in terms of the general expression for current density in the presence of gradients in temperature, chemical potential and electrical potential (see equation <ref> below). 
In this framework, the Seebeck coefficient, α, can be written as the sum of three contributions, a kinetic contribution due to the temperature gradient-induced current or drift velocity, α_v, a thermodynamic contribution due to the temperature gradient-induced change in chemical potential, α_c, and an electric field contribution, α_e, α=α_v+α_c+α_e. Under short-circuit conditions, all terms contribute in general, whereas under open-circuit conditions only the thermodynamic and electric field terms contribute and the kinetic term is zero. α_c and α_e are independent of transport mechanism and have been the sole focus of most computational studies<cit.>, whilst the kinetic term depends on the charge transport mechanism and, to our best knowledge, has eluded a rigorous calculation so far owing to the complex nature of the transient delocalization mechanism described above. Simulations without an external electric field (α_e=0), representing short-circuit conditions in experiment, show that the kinetic contribution to the Seebeck coefficient is significant (α_v > k_B/e), on the same order of magnitude as, albeit somewhat smaller than, the thermodynamic contribution α_c. These results are consolidated by simulations with an electric field that is chosen to cancel the kinetic contribution, representing the open-circuit condition in experiment, reproducing the Seebeck coefficient obtained without an external electric field. Based on these results, we discuss viable strategies for increasing the Seebeck coefficient of thermoelectric materials in the regime of transient delocalization. § RESULTS Molecular model In the FOB-SH method<cit.> that will be used for simulation of thermoelectric transport, the electronic Hamiltonian for hole transport is constructed in a site basis of highest occupied molecular orbitals with site energies obtained from force fields and electronic couplings from an ultrafast coupling estimator denoted analytic overlap method (AOM)<cit.>, see Methods for details. Using such an approximate but computationally efficient scheme, it is vital to demonstrate that properties governing charge transport are well captured when compared to higher-level electronic structure methods. We have used a force field with optimized dihedral parameters for rubrene, alongside optimized AOM electronic coupling parameters (denoted FF this work/AOM). We validate their performance by comparison with results from ab-initio molecular dynamics simulation (using the optPBE van-der-Waals density functional<cit.>) and scaled projector operator-based diabatization method (sPOD)<cit.> for electronic couplings (denoted optPBE/sPOD in Figure <ref>). We find that the electronic coupling distributions (Fig. <ref>a) and the density of valence band states at T=0 and 300 K (Fig. <ref>b) are very well reproduced. Moreover, the spectral density functions of the electronic couplings related to off-diagonal electron-phonon coupling compare well with the ab-initio MD results both in terms of frequency (0-200 cm^-1) and intensity (panels c-d). The force field we have previously used for rubrene<cit.> (denoted FF prev. work/AOM) underestimates couplings along the high mobility direction (a) and gives red-shifted spectral density functions. Both deficiencies are cured with the improved dihedral parameters. We report numerical values for couplings and coupling fluctuations obtained using the different approaches in Supplementary Table 2. 
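For completeness, a sketch of how such a spectral density can be estimated from a coupling time series H_kl(t) is given below. The coupling trace is synthetic, and the (βω/2) cosine-transform prefactor is one common classical-MD convention that may differ from the convention used for Figure <ref>.

```python
import numpy as np

kB_eV = 8.617333e-5                      # Boltzmann constant, eV/K
T, dt_fs = 300.0, 1.0
beta = 1.0 / (kB_eV * T)

rng = np.random.default_rng(3)
t = np.arange(20000) * dt_fs             # fs
cm1 = 2.99792458e-5 * 2.0 * np.pi        # conversion factor: cm^-1 -> rad/fs
# Synthetic coupling trace (eV): ~100 meV mean with fluctuations near 50 and 120 cm^-1.
h = (0.100
     + 0.018 * np.cos(cm1 * 50 * t + rng.uniform(0, 2 * np.pi))
     + 0.018 * np.cos(cm1 * 120 * t + rng.uniform(0, 2 * np.pi))
     + 0.005 * rng.normal(size=t.size))

dh = h - h.mean()
n = dh.size
acf = np.correlate(dh, dh, mode='full')[n - 1:] / np.arange(n, 0, -1)   # <dH(0) dH(t)>

omega = np.linspace(1e-3, 300, 600) * cm1                               # rad/fs, up to 300 cm^-1
J = 0.5 * beta * omega * np.array([np.trapz(acf * np.cos(w * t), t) for w in omega])
print(f"dominant peak near {omega[np.argmax(J)] / cm1:.0f} cm^-1")      # J in arbitrary units here
```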
A description of the new force field parameters and a more detailed discussion of results shown in Figure <ref> are presented in Supplementary Notes 1 and 2, respectively. Temperature dependence of hole transport To set the scene for hole transport subject to a temperature gradient, we first present results for simulations at constant temperatures in the range 200–350 K. The lattice parameters were adjusted for each temperature to account for the small thermal expansion of the crystal as observed in experiment<cit.>. The FOB-SH simulations of hole transport follow a protocol very similar to the one established in previous works<cit.> except that the improved force field is used, see section Methods for details. The results obtained are presented in Figure <ref> and numerical values are summarized in Table <ref>. Figure <ref>(a) shows that the hole wavefunction propagated by FOB-SH becomes increasingly localised as temperature increases. The average inverse participation ratio (IPR) of the wavefunction decreases from 23 molecules at 200 K to 13 and 11 molecules at 300 and 350 K, respectively. The same trend can be seen in the IPR-resolved density of electronic valence band states that make up the hole wavefunction, shown for 200 K and 350 K in Figure <ref>(b). The average delocalization of valence band states at a given energy is significantly smaller at 350 K than at 200 K due to the increased dynamic disorder of the electronic Hamiltonian. Indeed, the width of the thermal distribution of electronic couplings, i.e., off-diagonal electron-phonon coupling, σ_a and σ_b, are about 25% higher at 350 K than at 200 K whilst the magnitudes of the mean values of electronic couplings slightly decrease due to the small thermal expansion of the crystal (see Table <ref> and Supplementary Note 3). The maximum of the IPR is shifted towards the top of the valence band at both temperatures, a consequence of the overall positive electronic coupling sign combination in the rubrene crystal, sgn(P,T_1,T_2) = sgn(J_a, J_b, J_b) = (+, -, -)<cit.>, but the height of the maximum is markedly reduced at the higher temperature. Although the hole occupies more highly excited (= lower energy) valence band states at the higher temperature, the average IPR is significantly lower than at the lower temperature (compare horizontal dashed lines in Figure <ref>(b)). The same holds for the hole wavefunction propagated by FOB-SH (panel a) as it closely samples a Boltzmann distribution of the valence band states in the long-time limit<cit.>. The hole mobility obtained from the time-dependent mean-square displacement of the hole wavefunction (see Methods) is shown in Figure <ref>(c)-(d) as a function of temperature, with fits to μ∼ T^-n. Good agreement of the theoretical and experimental data to the power law decay fits indicates a band-like temperature dependence of mobility. We obtain exponents n=-1.2 for mobility along a and n=-1.7 for mobility along b, which is within the relatively narrow experimental range of estimates. Our absolute mobilities are in good agreement with results from transient localisation theory (see Supplementary Note 8 for details), but about a factor of 1.5-2 higher than the experimental values. For instance, our predicted room temperature mobility along the high-mobility direction a is 34.8 ± 3.5 cm^2 V^-1 s^-1 compared to ∼ 20 cm^2 V^-1 s^-1 reported by Podzorov et al.<cit.>. 
This difference may be due to factors not accounted for in the FOB-SH simulations, for example remaining structural disorder or chemical impurities in the crystalline samples or surface and finite carrier concentration effects. The mechanism of charge transport is transient delocalization (TD) at all temperatures, see our previous work for a detailed description<cit.>. We define a transient delocalisation event as a period inf time where the IPR is larger than a threshold given by ⟨IPR⟩ + σ_IPR, where ⟨IPR⟩ and σ_IPR are the mean and the width of the thermal distribution of the IPR<cit.>. We find that both the average duration of a TD event, t_r, and the average time between two subsequent TD events, τ_r, decrease with increasing temperature (Table <ref>) but the effect is rather small. Such behaviour is expected because TD events are associated with transitions (surface hops) between electronic valence band states and the frequency of such transitions increases with increasing kinetic energy. The small reduction of τ_r would lead to an increase in mobility with increasing temperature but this effect is smaller than the decrease in mobility due to increasing hole localization. Thermoelectric transport To study thermoelectric transport using FOB-SH, we require a computational protocol for simulating a uniform temperature gradient. This was achieved by defining local heat baths at specified temperatures, maintained through thermostatting <cit.>. Thermal bath regions were defined at T = 250 K and T = 350 K, separated by 36 nm along the a-direction (50 unit cells), resulting in a stable and linear temperature profile, see Supplementary Figures S8-9. We note that such a temperature gradient is about 3 orders of magnitude larger than realisable in experiment, though necessary in our simulations to ensure convergence of statistical sampling of drift velocity, see below. The hole wavefunction in FOB-SH simulations was initialised on a single molecule at 5 different initial positions, uniformly spaced along the x-direction of the active region spanning the “hot" and the “cold" end of the simulation cell (400 trajectories each). All properties obtained from FOB-SH, including drift velocity, were averaged over all sets of starting positions. We also performed a control simulation using the same setup as above except that the temperature of both thermal baths were set equal to 300 K, i.e. constant room temperature simulation. Further details regarding the computational set-up are provided in Methods and Supplementary Note 10. Figure <ref> shows the position-resolved average drift velocity of the centre-of-charge (COC) of the hole wavefunction (panel a), probability density of the COC (panel b) and average IPR (panel c) for positions within ± 10 nm with respect to the middle of the simulation cell (at position x = 0, T = 300 K) spanning temperatures between 275-325 K. In the control simulations at constant temperature (data in green), the probability density of COC and the average IPR are approximately the same at all positions. The average drift velocity at x = 0 is zero, as expected. In fact, the average drift velocity should vanish at all positions for a constant temperature profile, but in practice this is not the case. We observe a small linear change in position-dependent drift velocity as one approaches the hard boundaries at the hot and the cold side of the simulation cell. 
At the boundaries the charge carrier gets reflected and this results in a boundary force, leading to a small drift velocity pointing towards the middle of the simulation cell. In the centre of the simulation cell the artificial boundary effects cancel and the average drift velocity vanishes. Remarkably, in the presence of the temperature gradient (data in magenta) the drift velocities shift fairly uniformly to more negative values compared to the constant temperature simulations. At the centre of the cell where no artificial boundary effects are present, we obtain ⟨ v_x ⟩ = -0.89 ± 0.25 nmps^-1, corresponding to a net motion of the charge carrier from the hot to the cold region, i.e. the thermoelectric effect. We emphasize that the thermoelectric effect obtained from FOB-SH is not due to skewed initial conditions - the same number of trajectories are initialised from hot and cold regions and from the middle, see above. Rather, the hole wavefunction moves, on average, faster from hot to cold than from cold to hot giving rise to a net current and an increase in probability density of the hole on the cold side (panel b). A representative FOB-SH trajectory of a hole injected in the hot region and moving towards the cold region is shown in Figure <ref>. We find that the transport occurs via a series of transient delocalization events, similarly to the case of constant temperature. However, on average, the hole wavefunction steadily expands as it moves from hot to cold, see the position-resolved average IPR in Figure <ref>c (data in magenta). In fact, the average IPR at a given temperature in the temperature gradient simulations is very similar to the corresponding value obtained in a constant temperature simulation at this temperature. This is due to electronic couplings and root-mean-square fluctuations (i.e. off-diagonal electronic disorder) along the temperature gradient being very similar too, see Table <ref>. Hence, the trends discussed above for temperature-dependent delocalization and electronic disorder carry over, in a quasi-continuous manner, to the system under a temperature gradient. The spatial heterogeneity of wavefunction delocalization under a temperature gradient is a feature of the transient delocalisation regime in contrast to the standard assumption that spatial variation caused by the temperature gradient is related only to the spreading-out of the Fermi-Dirac distribution i.e transport level of carriers<cit.>. What causes the directional motion of the hole wavefunction from hot to cold? Our mechanistic proposal based on FOB-SH simulations is illustrated in Figure <ref>(a). We assume that the hole wavefunction ψ_a at a given time t is located in the central bin (shown schematically as an ellipse in the middle of Figure <ref>(a)) and we analyse the likelihood for transitions from ψ_a to one of the states towards the cold side (denoted “cold states", e.g., ψ_c indicated by an ellipse to the left) or to one of the states towards the hot side (denoted “hot states", e.g., ψ_h indicated by an ellipse to the right). In other words, we aim to explain why, on average, ψ_a is more likely to transition to cold than to hot states thereby generating the negative drift velocity ⟨ v_x ⟩ obtained in the simulations. We find that major displacements of the hole wavefunction contributing to drift velocity are typically induced by thermally induced electronic transitions (surface hops) from ψ_a to other electronic states. 
The probability for transitions to occur is proportional to (i) the density of thermally accessible states, i.e., density of states that are within a few k_BT of ψ_a and (ii) the magnitude of the non-adiabatic coupling element between ψ_a and these states. We find that the Boltzmann-averaged (i.e., thermally accessible) density of cold states (Fig. <ref>(d), data in blue) is higher than for hot states (data in red), and this is the case for all distances between the COCs of ψ_a and the states on the cold or hot side (x=ΔCOC_ka, k=c or h). Moreover, we find that the Boltzmann-averaged non-adiabatic coupling between ψ_a and the cold states is somewhat larger than between ψ_a and the hot states (Fig. <ref>(b)). Thus, according to this analysis of our simulation data, the negative drift velocity is due to the increasing density of thermally accessible states and the increasing non-adiabatic coupling between states with decreasing temperature. Both effects are a consequence of the decreasing electronic coupling fluctuations along the temperature gradient from hot to cold. In the control simulations at constant temperature there are no such gradients for these quantities, as expected, see Figure <ref>(e) and (c). The gradients in thermally accessible density of electronic states and non-adiabatic coupling are not unrelated. This is because the non-adiabatic coupling is inversely proportional to the energy difference between the coupled states, d^ad_ja∝ 1/Δ E_ja<cit.> (see Supplementary Note S11). Indeed we find good correlation between these two quantities, Supplementary Figure S14 (upper row). Thus, the larger density of energetically-close cold states (small Δ E_ja) compared to hot states contributes to the larger average non-adiabatic coupling for transition to cold states. Moreover, noting that the non-adiabatic coupling is given by ⟨ψ_j | ψ̇_a⟩, an additional factor potentially further enhancing the average non-adiabatic coupling to cold states could be their greater delocalization compared to hot states. Indeed we find some correlation between non-adiabatic coupling and delocalization of electronic states, Supplementary Figure S14 (bottom row), though the scatter is relatively large and the correlation is not as strong as for the energy difference above. Seebeck coefficient To link our simulations to the theory of thermoelectricity and the Seebeck coefficient, α, we consider the general expression for the current density along direction x (J_x) in the presence of gradients of temperature (T), chemical potential (μ_c) and electrical potential (ϕ)<cit.>, J_x = - σα∂_x T - σ/q∂_x μ_c - σ∂_x ϕ , where σ is the electrical conductivity and q the charge of the carriers. (Note, α and σ are in general tensorial quantities but for ease of notation we omit indices). The second and third terms on the right hand side (RHS) of equation <ref> can alternatively be expressed in terms of the electrochemical potential μ̅ = μ_c + q ϕ. The current density obtained from FOB-SH, J_x = q n ⟨ v_x ⟩, where n formally corresponds to the concentration of the single charge carrier in our simulation cell, is the result of the first two terms on the RHS of equation <ref>. The first term, directly proportional to the Seebeck coefficient and the temperature gradient, drives the charge carrier from hot to cold, whilst the second term, proportional to the chemical potential gradient that is induced by the temperature gradient, drives the carriers in the opposite direction from cold to hot. 
The third term on the right hand side (RHS) of equation <ref> is zero in the above simulations since no electrical potential gradients were applied. This differs from the usual experimental setup, where the electrical potential energy gradient, ∂_x ϕ, is adjusted by an external electric field so that the net current arising from the first two contributions is compensated, i.e. J_x=0, and the resulting voltage is the open-circuit voltage, Δ V_oc = Δμ_c/q + Δϕ. Rearranging terms in equation <ref> one finds α = -q/e⟨ v_x ⟩/μ∂_x T -1/q∂_x μ_c/∂_x T - ∂_x ϕ/∂_x T = α_v + α_c + α_e, where μ = σ/(e n) is the charge mobility along x and e the elementary charge. Hence, the Seebeck coefficient is comprised of three terms. The first term, α_v, is proportional to the net current flow in the system. This contribution is of kinetic origin and depends on the charge transport mechanism. The second and third terms, α_c and α_e, are due to the gradient in chemical potential and external electric field, respectively, hence they are of thermodynamic origin. We emphasize that the separation of the Seebeck coefficient in kinetic and thermodynamic contributions rigorously follows from Eq. <ref><cit.> without assuming any specific transport mechanism. Emin carried out a similar separation for systems that are in the phonon-assisted tunneling (hopping) regime by writing the total Seebeck coefficient in terms of a contribution of kinetic origin, α_transport, and a contribution of thermodynamic origin, α_presence<cit.>. Experiments are typically carried out in open-circuit conditions where the external electric field is adjusted such that the net current flow, and therefore the kinetic contribution α_v, is equal to zero. Thus, in experiment the total Seebeck coefficient arises from the thermodynamic contributions only. Generally, the kinetic and thermodynamic contributions to the Seebeck coefficient will depend on the applied external electric field. All three terms to the Seebeck coefficient Eq. <ref> can be obtained from computation. Using the average drift velocity obtained from FOB-SH simulation under a temperature gradient without external field (Figure 2a, data in magenta, ⟨ v_x ⟩ = -0.89 nmps^-1 at x=0), we obtain a kinetic contribution to the Seebeck coefficient α_v=97.4 ± 27.5 μV K^-1 at 300 K. Here, a charge mobility value of μ = 33.0 cm^2 V^-1 s^-1 was used, as obtained from FOB-SH simulation at constant temperature T=300 K employing the same electronically active region size (50× 7 unit cells) as in the simulations with a temperature gradient. (Notice, this value differs slightly from the one quoted above, 34.8 ± 3.5 cm^2 V^-1 s^-1, which was obtained for a larger electronically active region.) The thermodynamic contribution is given by α_c only since α_e=0 at zero external field. α_c has previously been evaluated for the case of a non-degenerate semiconductor with parabolic bands, where a simple expression exists for μ_c <cit.>. Such approximations should be regarded with caution in the case of rubrene, where the effect of thermal disorder on the band states is very strong. Here, we calculate ∂μ_c/∂ T explicitly by noting that the chemical potential μ_c is related to the free energy change upon insertion of a hole (electron) in the valence (conduction) band, equations <ref> and <ref>, see Methods and Supplementary Note 13 for details. At carrier density n= 2.74 × 10^15 m^-2, corresponding to a single carrier in the active region area of 50×7 unit cells, we obtain α_c=331 ± 6 μV K^-1. 
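The kinetic contribution follows directly from the quantities quoted above (the zero-field drift velocity, the constant-temperature mobility, and the 100 K drop over 36 nm between the thermal baths); the short conversion below reproduces it.

```python
# alpha_v = -(q/e) <v_x> / (mu * dT/dx), with q = +e for holes (equation above).
v_x  = -0.89e-9 / 1e-12        # drift velocity: -0.89 nm/ps  ->  m/s
mu   = 33.0e-4                 # mobility: 33.0 cm^2 V^-1 s^-1  ->  m^2 V^-1 s^-1
dTdx = 100.0 / 36e-9           # 100 K across 36 nm  ->  K/m

alpha_v = -v_x / (mu * dTdx)
print(f"alpha_v = {alpha_v * 1e6:.0f} uV/K")   # ~97 uV/K, cf. 97.4 +/- 27.5 uV/K quoted above
```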
The total Seebeck coefficient at T = 300 K and n= 2.74 × 10^15 m^-2 is thus α=α_v + α_c =429 ± 28 μV K^-1. We verify that the Seebeck coefficient computed from Eq. <ref> is independent of the chosen external electric field by carrying out FOB-SH simulations subject to a temperature gradient but this time under open-circuit conditions, like in experiment. To do so we apply an external electric field in our simulation such that the average drift velocity and the resultant kinetic contribution to the Seebeck coefficient, α_v, vanish. This should be the case for an external electric field of ≈ 3 × 10^3 Vcm^-1 pointing in the direction from cold to hot. The results are shown in Figure <ref> (data in blue). Indeed, the average drift velocity in the central bin is now zero within the statistical uncertainty, ⟨ v_x ⟩ = 0.24 ± 0.33 nmps^-1 (panel a) and the overall Seebeck coefficient, α=α_v + α_c + α_e = 397 ± 37 μV K^-1, encompasses the value obtained without the field within the statistical uncertainty. Thus, the two simulation approaches (i.e., with and without an external electric field) for calculating the Seebeck coefficient within the framework of Eq. <ref> yield consistent results. Note that the hole wavefunction is still very dynamic, frequently moving towards the cold or the hot side but at about equal amounts of time, thus averaging close to zero. We also find that the position-resolved IPR remains virtually unchanged when compared to the results without the electric field (Fig. <ref>(c) magenta vs blue). This is expected because the above field strength corresponds to an electrostatic site energy difference between adjacent rubrene molecules along the a-direction of only about 0.2 meV which is much smaller than the electronic coupling (100 meV), thus has very little impact on band structure and delocalization of valence band states. Comparison to experiment To validate our simulations, we carried out experimental measurements of the Seebeck coefficient in rubrene single crystals as a function of carrier concentration (n). These are measured under temperature differences of typically 5–10 K across a channel length of 420m and under open circuit conditions, see Methods and Supplementary Note 14 for further details. Figure <ref> shows the experimental Seebeck coefficients obtained in the present study (black squares) and those from Ref.  (black pentagons). The best fits of experimental data points to A + Bln(n), where A and B are optimization parameters, are indicated with black dashed lines. The total computed Seebeck coefficient, α = α_c + α_v + α_e, obtained without and with an external electric field in FOB-SH simulations are indicated at n= 2.7 × 10^15 m^-2 corresponding to the carrier density present in FOB-SH simulations (squares in magenta and blue, respectively), along with α_c (circle in magenta). The concentration dependence of computed α and α_c (lines in magenta and blue) is given by equation <ref>. The uncertainty in α due to the statistical error from FOB-SH simulations is indicated over the entire concentration range by the magenta shaded region. The computed total α= 429 ± 28 μV K^-1 and α= 397 ± 37 μV K^-1 obtained from FOB-SH simulations without and with external electric field, respectively, compares favourably with the experimental estimate at the same carrier density (n= 2.7 × 10^15 m^-2), 557 ± 82 μV K^-1, with experimental and computational error bars nearly overlapping. 
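The concentration dependence invoked above can be tabulated directly, assuming it takes the ideal logarithmic form with slope −k_B/e anchored at the computed value; the densities below are arbitrary illustration points.

```python
import numpy as np

kB_over_e = 1.380649e-23 / 1.602177e-19        # = 86.2 uV/K
n0, alpha0 = 2.7e15, 429e-6                    # reference density (m^-2) and computed alpha (V/K)

for n in (1.0e15, 2.7e15, 1.0e16, 3.0e16):
    alpha_n = alpha0 - kB_over_e * np.log(n / n0)
    print(f"n = {n:8.1e} m^-2   alpha = {alpha_n * 1e6:6.0f} uV/K")
```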
The experimental slope ∂α/∂ln(n)=-114.4 μV K^-1 is also in reasonable agreement with the slope predicted by theory, ∂α/∂ln(n)=-k_B/e=-86.2 μV K^-1. The remaining discrepancy could be caused by model assumptions, for instance, that the thermoelectric effect remains linear over the 1000-fold larger temperature interval applied in the simulations when compared with experiment. Another source for the remaining discrepancy could be the presence of shallow trap states in experimental field effect transistors (the existence of which are confirmed by observing the experimental threshold voltage shift as a function of temperature), which are known to cause an enhancement of the Seebeck coefficient and the slope ∂α/∂ln(n), as well as depress the experimental mobility<cit.>. Therefore, the experimentally measured Seebeck coefficients are expected to be larger than the simulation where intrinsic valence band states are considered. In Figure <ref> we also plot data taken from Pernstich et. al to highlight the sample-to-sample variation present in measurements<cit.>. This data shows a significantly larger magnitude of the Seebeck coefficient and slope, ∂α/∂ln(n) = -168 μV K^-1. Experimental variations are likely due to differences in crystal quality, especially at the dielectric interface, as well as differences in the exact experimental set-up and measurement methods. We highlight these differences in order to contextualise the comparison between the simulation and experimental data, where a similar magnitude of discrepancy is seen within different experimental determinations of the Seebeck coefficient. Additionally, we note that although it is generally believed that a 2D model describes very well the charge transport situation in a field effect transistor, the carriers in the experiment are not confined in the third dimension, which may also have a small effect on the charge carrier compared to the simulation. Even in a 2D model, transport is anisotropic so any misalignment of the high-mobility crystal axis with the temperature gradient will also result in some discrepancy. § DISCUSSION In this work we have simulated the quantum dynamics of a charge carrier in an OS subject to a temperature gradient providing a dynamical perspective and understanding of a transport problem that is typically approached from a purely static perspective. At the heart of our dynamical perspective of thermoelectricity is the gradient in the thermal fluctuations of electronic couplings that gives rise to gradients in the spatially resolved density of electronic states and in non-adiabatic coupling causing the carrier wavefunction to move from hot to cold. The explicit time-dependent carrier simulation presented here has also allowed us to understand the Seebeck coefficient in terms of a kinetic (α_v) and thermodynamic contribution (α_c+α_e). We have shown that simulations with no external electric field, where the carrier migrates from hot to cold, and those with a field to cancel such motion yield consistent Seebeck coefficients. Whilst in the current simulations at low carrier density the Seebeck coefficient was dominated by the thermodynamic contribution, at high carrier density relevant in practical situations the thermodynamic contribution will strongly decrease and the kinetic contribution is expected to become very important if not dominating. The mechanistic insight we have obtained from our simulations opens up a new avenue for the design of organic semiconductors with improved Seebeck coefficients. 
Previously, focus has been placed on the chemical potential contribution to the Seebeck coefficient, α_c, sometimes referred to as the entropy of mixing. Indeed, increasing the entropy of the charge carrier in the valence or conduction band by increasing the density of thermally accessible valence band states will lead to an increase in α_c. Yet, for ordered organic semiconductors where charge transport occurs in the transient delocalization regime we propose an additional route aimed at increasing the kinetic contribution to the Seebeck coefficient, α_v, or equivalently the electric field contribution, α_e, compensating α_v in open-circuit conditions. Recall that thermoelectric transport in OS is driven by the gradient in thermal electronic disorder and that a sensitive probe of the disorder is the average delocalization of the carrier wavefunction (IPR in Fig. <ref>(c)). Thus we expect that systems with increasing sensitivity of thermal electronic disorder and, concurrently, charge carrier delocalization to changes in temperature will exhibit increasing values of α_v. Hence, we assert that α_v ∝ - Δ L_x / Δ T, where L_x is the localization length of the charge carrier (related to IPR). Assuming validity of transient localization theory (μ_x = (e/(k_BT)) L_x^2/(2 τ)) and temperature dependences of charge mobility μ_x ∝ T^-n and localization time τ∝ T^-m, α_v ∝ - Δ L_x / Δ T ∝ (n+m-1). Hence the kinetic contribution to the Seebeck coefficient is predicted to increase with increasing exponents of the temperature dependences of μ and τ. This hypothesis could be tested in future work on a range of systems exhibiting varying temperature dependences of localization length or mobility. § METHODS Fragment orbital-based surface hopping (FOB-SH) Fragment orbital-based surface hopping (FOB-SH) is a mixed quantum-classical dynamics method based on fewest switches surface hopping which allows for simulation of charge transport on the true nanoscale (10-100 nm)<cit.>. In this method a single excess charge carrier (in this work, hole carrier) is propagated in space and time according to the time-dependent Schrödinger equation under the influence of time-dependent classical nuclear motion. The single-particle time-dependent wavefunction, Ψ(t) is expanded in a basis of orthogonalised time-dependent frontier orbitals which mediate charge transport, |Ψ(t)⟩ = ∑_l=1^M u_l(t) |ϕ_l(𝐑(t))⟩, where 𝐑(t) denotes time-dependent nuclear positions. In the case of hole (electron) transport, the basis functions ϕ_l are the orthogonalised HOMOs (LUMOs) of the molecules which constitute the crystal lattice. The valence band in which the excess hole is propagated is described by the following Hamiltonian: H = ∑_kϵ_k|ϕ_k⟩⟨ϕ_k| + ∑_k≠ lH_kl|ϕ_k⟩⟨ϕ_l|, where ϵ_k = ⟨ϕ_k|H|ϕ_k⟩ are the site energies, i.e. the potential energy of the system when the excess charge is localised on molecule k, and H_kl = ⟨ϕ_k|H|ϕ_l⟩ are the electronic couplings. To facilitate simulations on large systems over long time scales, we have developed a parameterized approach which avoids explicit electronic structure calculations. Site energies ϵ_k are calculated using a classical force field where molecule k is charged and all other molecules are neutral. 
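The link between off-diagonal thermal disorder and carrier delocalization that underlies the discussion above can be illustrated with a minimal site-basis Hamiltonian of the form of Eq. <ref>. The one-dimensional chain, the zeroed site energies and the parameter values below are simplifying assumptions chosen for this sketch (only the ~100 meV coupling scale is taken from the text), not a quantitative model of rubrene.

```python
import numpy as np

rng = np.random.default_rng(11)
M = 200                             # number of molecular sites in the chain

def band_averaged_ipr(sigma_J, J0=0.100, n_samples=20):
    iprs = []
    for _ in range(n_samples):
        J = J0 + sigma_J * rng.normal(size=M - 1)         # fluctuating couplings H_k,k+1 (eV)
        H = np.diag(J, 1) + np.diag(J, -1)                # site energies set to zero in this sketch
        _, U = np.linalg.eigh(H)
        iprs.append(np.mean(1.0 / np.sum(U**4, axis=0)))  # IPR_i = 1 / sum_k |U_ki|^4
    return float(np.mean(iprs))

for sigma in (0.02, 0.05, 0.10):    # growing off-diagonal disorder, mimicking rising temperature
    print(f"sigma_J = {sigma*1e3:3.0f} meV  ->  band-averaged IPR = {band_averaged_ipr(sigma):5.1f}")
```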
Electronic couplings H_kl are calculated using the efficient analytic overlap method (AOM)<cit.>, which assumes a linear relationship between electronic coupling and orbital overlap H_kl = C̅S̅_kl where S̅_kl is the overlap between fragment orbitals k and l projected into a minimal Slater-type orbital basis and C̅ is a fitting parameter obtained by correlating S̅_kl with reference H_kl computed from DFT, specifically the projector operator-based diabatization (POD) method <cit.>. We note that for rubrene this provides a very good approximation, with mean absolute errors (mean relative unsigned errors) of 5.8 meV (5.5 %) and 3.2 meV (23.6 %) for a and b–direction AOM couplings with respect to DFT, J_a and J_b, respectively <cit.>. Inserting equation <ref> into the time-dependent Schödinger equation yields iħu̇_k(t) = ∑_l=1^M u_l(t) (H_kl(𝐑(t)) - iħ d_kl(𝐑(t))), where d_kl = ⟨ϕ_k||ϕ̇_̇l̇⟩ are the NACEs in the quasi-diabatic site basis, which are usually close to zero, in contrast to the NACEs in the adiabatic basis d^ad_kl = ⟨ψ_k||ψ̇_̇l̇⟩. At any given time, the nuclear dynamics is propagated classically on one of the adiabatic states that results from diagonalising the electronic Hamiltonian. This adiabatic state is denoted the active surface, E_a(𝐑(t)). At each time step, the Tully surface hopping probability for a surface hop from state a to another adiabatic state j, p_ja, is calculated<cit.>, p_ja=-ℝ(c_j^* c_a d^ad_ja)/|c_a|^2Δ t, where c_j and c_a are the wavefunction expansion coefficients of adiabats j and a, respectively, and Δ t is the nuclear time step. The probability to remain on the active surface is given by p_aa=1-∑_j≠ a p_ja. A random uniform number is drawn to stochastically decide whether to attempt hop to a new surface j. Energy conservation after a successful hop is enforced according to the standard procedure of rescaling the velocity components in the direction of the non-adiabatic coupling vector (NACV). If the nuclear kinetic energy is not sufficient to fulfill energy conservation, the hop is rejected and the the nuclear velocity components along the direction of the NACV are reversed <cit.>. In addition to the standard prescription of surface hopping, three extensions to the original algorithm are required to ensure convergence of mobility with system size, detailed balance and good internal consistency. These extensions include a decoherence correction, trivial crossing detection and elimination of decoherence-induced spurious long-range charge transfer. We refer to Refs. and for a detailed discussion of these important additions to the method. A drawback of the surface hopping method is that nuclei are treated as classical particles which means that certain nuclear quantum effects including zero-point energy and tunneling are not included. We do not think this is a major problem for the current system because these effects typically become important for systems characterised by high energetic barriers, i.e., in the small polaron hopping regime. FOB-SH simulations at constant temperature Initial atomic positions for rubrene were taken from the Cambridge Crystallographic Data Centre (CCDC) structure with identifier QQQCIG01. Thermal expansion of the unit cell was accounted for by using temperature dependent lattice parameters determined by a linear fit to the experimental temperature-dependent lattice parameters of Ref. <cit.>, see Supplementary Note 3 for details. 
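A minimal sketch of the hop decision of Eq. <ref> is shown below for a single nuclear step; it omits the decoherence correction, trivial-crossing detection and the energy-conservation/velocity-rescaling test described above, and the coefficients and non-adiabatic couplings are illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(2)

def fssh_hop(c, a, d_a, dt):
    """c: adiabatic expansion coefficients; d_a[j] = <psi_j | d/dt psi_a>; dt: nuclear time step."""
    p = -np.real(np.conj(c) * c[a] * d_a) / np.abs(c[a])**2 * dt   # hopping probabilities p_ja
    p[a] = 0.0
    p = np.clip(p, 0.0, None)              # negative hopping probabilities are set to zero
    p_stay = max(0.0, 1.0 - p.sum())
    probs = np.r_[p_stay, p]               # [stay, hop to state 0, hop to state 1, ...]
    pick = rng.choice(probs.size, p=probs / probs.sum())
    return a if pick == 0 else pick - 1    # index of the (possibly new) active surface

c   = np.array([0.10 + 0.20j, 0.90 + 0.10j, 0.30 - 0.20j])   # illustrative coefficients
d_a = np.array([0.8, 0.0, -0.5])                             # illustrative NACEs (1/fs)
print(fssh_hop(c, a=1, d_a=d_a, dt=0.05))
```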
The unit cell was optimised for each temperature under the constraint of fixed lattice parameters corresponding to that temperature. The force field used was based on the GAFF parameters, with selected parameters re-optimized to better reproduce results from ab-initio MD simulations as reported in Ref. <cit.>, see Supplementary Note 1 for details. Supercells composed of 54×13×1 unit cells were prepared and equilibrated to 200, 225 and 250 K, and slightly smaller supercells composed of 50×13×1 unit cells were equilibrated to 275, 300, 325 and 350 K for 200 ps in the NVT ensemble applying periodic boundary conditions, a Nosé-Hoover thermostat and a MD time step of 1 fs. This was followed by at least 325 ps MD simulation in the NVE ensemble. Initial positions and velocities for the swarm of FOB-SH trajectories were drawn from snapshots separated by 0.5 ps from the equilibrated NVE trajectory. For each trajectory, the wavefunction was initialised on a single molecular site (i.e. diabatic state) located at the corner of the simulation box and propagated for 900 fs in the NVE ensemble using an MD time step of 0.05 fs and an electronic time step of 0.01 fs for integration of equation <ref> employing the Runge-Kutta algorithm to 4th order. At least 650 FOB-SH trajectories were run for each temperature. The initial 200 fs of dynamics, corresponding to quantum relaxation from the initial diabatic state, was neglected in all analysis. The diffusion tensor for each temperature was obtained from a linear best fit of the mean squared displacement against time, equation <ref>, between 200 fs to 900 fs (see Supplementary Fig. 5). Mobility was calculated using the Einstein relation, equation <ref>. The mobility values reported herein are well converged with system size, even for the low temperatures where hole delocalization is extensive. A detailed analysis of the convergence is given in Supplementary Note 4 and Tables S4-5. FOB-SH simulations with a temperature gradient A supercell of size 120×7×1 (using the 300 K lattice parameters) was defined in periodic boundary conditions. A saw-tooth temperature profile was achieved by defining two thermal bath regions of size 10×7×1 unit cells at temperatures 250 K and 350 K, separated by 50 unit cells along the a direction. The temperature in the thermal bath regions was constrained to the target temperatures through a velocity rescaling procedure, as implemented in CP2K <cit.> (see Supplementary Note 9 for the detailed procedure). A set of 400 positions and velocities sampled every 0.5 ps from a 200 ps non-equilibrium run under the temperature gradient were used as initial coordinates for subsequent FOB-SH runs. The (non-periodic) electronically active region in FOB-SH simulations was defined over one of the linear portions of the temperature profile (see Supplementary Fig. 8, top panel). In order to eliminate any possible boundary effects associated with initialising the wavefunction at a given position, the hole wavefunction was initialised on a single molecule at 5 different initial positions, uniformly spaced along the x-direction of the FOB-SH active region (at 250, 275, 300, 325 and 350 K). Overall, 2000 trajectories of length 5 ps were run (400 × 5 initial wavefunction positions). In all production runs, velocity rescaling to the target temperature was only applied within the thermal bath regions. 
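For the constant-temperature runs described earlier in this section, the mobility evaluation (linear fit of the mean-squared displacement between 200 fs and 900 fs, followed by the Einstein relation) can be sketched as below. The synthetic MSD curves stand in for the swarm of FOB-SH trajectories, and the block-averaged error bar is an illustrative choice; only the fitting window and the unit handling follow the text.

```python
import numpy as np

def einstein_mobility(t_fs, msd_nm2, t_fit=(200.0, 900.0), temperature=300.0):
    """Return (D in cm^2/s, mu in cm^2/(V s)) from MSD(t) given in nm^2 against t in fs:
    D = slope/2 on the fit window, mu = e*D/(k_B*T)."""
    mask = (t_fs >= t_fit[0]) & (t_fs <= t_fit[1])
    slope, _ = np.polyfit(t_fs[mask], msd_nm2[mask], 1)    # nm^2 / fs
    d_cm2_s = 0.5 * slope * 1e-14 / 1e-15                  # 1 nm^2 = 1e-14 cm^2, 1 fs = 1e-15 s
    mu_cm2_vs = d_cm2_s / (8.617333e-5 * temperature)      # e D / (k_B T), with k_B T in eV
    return d_cm2_s, mu_cm2_vs

# --- synthetic stand-in for the MSD of ~650 trajectories (placeholder slope and noise) ---
rng = np.random.default_rng(1)
t = np.arange(0.0, 901.0, 5.0)                             # fs
n_traj = 650
msd = 0.04 * t[None, :] * rng.normal(1.0, 0.3, (n_traj, 1)) + rng.normal(0.0, 2.0, (n_traj, t.size))

# block averaging over 5 blocks of trajectories for the statistical error on the mobility
blocks = np.array_split(np.arange(n_traj), 5)
mu_blocks = [einstein_mobility(t, msd[b].mean(axis=0))[1] for b in blocks]
d, mu = einstein_mobility(t, msd.mean(axis=0))
print(f"D = {d:.2e} cm^2/s, mu = {mu:.1f} +/- {np.std(mu_blocks):.1f} cm^2/(V s)")
```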
The dynamics in the electronically active region evolved according to the standard FOB-SH algorithm, i.e., Newtonian dynamics on one of the adiabatic potential energy surfaces and rescaling of the velocity component parallel to the NACV after successful surface hops. The settings and integration time steps were the same as described above for FOB-SH simulation at constant temperature. Further simulation details are presented in Supplementary Note 9. FOB-SH simulations with a temperature gradient and an external electric field The above FOB-SH simulations with a temperature gradient were repeated under the presence of a constant external electric field along x, E_x. The effect of the field was simply modelled by a linear change of the site energies of the electronic Hamiltonian, equation <ref>, ϵ_k ( R) →ϵ_k ( R) - q E_x x_k, where q=+e, x_k is the centre of mass of molecule k and E_x=-∂_x ϕ = 2.557 × 10^3 Vcm^-1. The corresponding additional forces on the nuclei have been accounted for but are negligibly small for the small fields applied. The same simulation protocol was followed as for the simulations without external field except that trajectories were run to 3 ps rather than 5 ps due to our finding that trajectories of length 3 ps are sufficiently converged, see Supplementary Figure S15. All simulations were carried out with our in-house implementation of FOB-SH in the CP2K simulation package <cit.>. Calculation of charge mobility and inverse participation ratio Hole mobilities (Fig. 2(c),(d)) were obtained from FOB-SH trajectories run at constant temperature. The mean-square-displacement (MSD) of the hole wavefunction is calculated as follows, MSD_αβ = 1/N_traj∑_n=1^N_traj⟨Ψ_n(t)|(α - α_0,n)(β - β_0,n)|Ψ_n(t)⟩, ≈1/N_traj∑_n=1^N_traj( ∑_k=1^M | u_k,n (t) |^2 (α_k,n (t) - α_0,n ) (β_k,n (t) - β_0,n ) ) where Ψ_n(t) is the hole wavefunction in FOB-SH trajectory n, α=x,y, β=x,y are Cartesian coordinates along the crystallographic directions a,b, α_0,n (β_0,n) are the initial positions of the COC in trajectory n, α_0,n=⟨Ψ_n(0) | α | Ψ_n (0) ⟩, and N_traj is the number of FOB-SH trajectories. In equation <ref> the coordinates of the hole are discretized and replaced by the centre of mass of molecule k in trajectory n, α_k,n, and α_0,n=⟨α_n ⟩ (0) where ⟨α_n ⟩ (t) is the α-coordinate of the COC at time t in trajectory n, ⟨α_n ⟩ (t) =∑_k=1^M | u_k,n(t) |^2 α_k,n (t) and | u_k,n(t) |^2 is the hole population of site k in trajectory n. The diffusion tensor D_αβ is given by half of the slope of MSD_αβ, D_αβ = 1/2lim_t →∞dMSD_αβ(t)/dt, which allows the charge mobility μ_αβ to be calculated from the Einstein relation, μ_αβ = e D_αβ/k_BT, where e is the elementary charge, k_B is the Boltzmann constant and T is temperature. Delocalisation of the hole wavefunction Ψ (t) (Fig. 2(a)) is quantified using the inverse participation ratio (IPR), IPR(t) = 1/N_traj∑_n=1^N_traj1/∑_k=1^M|u_k,n(t)|^4, where u_k,n(t) are the expansion coefficients of Ψ_n (t) in the diabatic or site basis ϕ_k,n in trajectory n and M is the number of rubrene molecules in the electronically active region. The IPR of a given eigenstate or valence band state of the electronic Hamiltonian equation <ref>, ψ_i (Fig. 2(b)), is given by: IPR_i(t) = 1/N_traj∑_n=1^N_traj1/∑_k=1^M|U_ki(t)|^4, where U_ki(t) are the expansion coefficients of eigenstate ψ_i in the site basis ϕ_k. Calculation of drift velocity, ⟨ v_x ⟩ Position-resolved drift velocities along the a-crystallographic direction (Fig. <ref>(a)) were obtained as follows. 
The COC of the hole wavefunction along x (crystallographic direction a), ⟨ x_n ⟩ (t) = ∑_k=1^M |u_k,n(t)|^2 x_k,n (t), was calculated for each FOB-SH trajectory n every 2 fs. The drift velocity was calculated from the finite difference time derivative of the COC, v_x,n(t) = [⟨ x_n ⟩ (t + δ t) - ⟨ x_n ⟩ (t)] / δ t with δ t = 2 fs. Position bins of length 4 unit cells (2.9 nm) along the x-direction were defined and a given velocity was associated with the bin containing the COC at time t. The velocity at each time step over all trajectories was binned this way and the data in each bin was averaged over giving ⟨ v_x ⟩. The same binning procedure was used for the probability density of COC (Fig. <ref>(b)), obtained by counting the number of occurrences of the COC in each bin, and for the IPR of the hole wavefunction, equation <ref> (Fig. <ref>(c)). We note that using a smaller δ t for the calculation of COC drift velocity makes little difference to the average drift velocity in the central bin compared to the statistical uncertainty. This was checked for the simulations at constant temperature and those employing both a temperature gradient and an external electric field, where the charge carrier wavefunction was printed every 0.5 fs (i.e. four times more frequently than for simulations with a temperature gradient and no external electric field). For the constant temperature simulations, using δ t = 0.5 fs results in a mean drift velocity in the central position bin of ⟨ v_x ⟩ = 0.15 nmps^-1, whereas using δ t = 2 fs results in ⟨ v_x ⟩ = 0.10 ± 0.24 nmps^-1. For simulations with both a temperature gradient and an external electric field, using δ t = 0.5 fs results in ⟨ v_x ⟩ = 0.28 nmps^-1 compared to ⟨ v_x ⟩ = 0.24 ± 0.33 using δ t = 2 fs. In both cases, the values obtained using δ t = 0.5 fs are well within the statistical uncertainty of the values computed using δ t = 2 fs. Details concerning the distribution of drift velocities in the case of constant temperature and with temperature gradient are given in Supplementary Note 10 and Figure 11. Convergence of drift velocity with number of trajectories and trajectory length is demonstrated in Supplementary Figure 15. Analysis of thermoelectric motion For the analysis shown in Figure <ref> (b), (d) we used 800 FOB-SH trajectories of length 2 ps with the temperature gradient applied. All configurations where a successful surface hop occured and the COC of the active adiabatic state (index a), ⟨ x_a⟩, was located within the central 10 units cells (7.2 nm) along the a-direction were included (⟨ x_a⟩ = ∑_l=1^M |U_l,a(t)|^2 x_l (t) with x_l the centre of mass of molecule l). This amounted to around 60,000 configurations overall. For each configuration, there are 700 adiabatic states ψ_k, including the active state ψ_a. Each adiabat k ≠ a was binned according to the difference in the COC position of that state and the COC position of the active state, ΔCOC_ka=⟨ x_k⟩ - ⟨ x_a⟩, using bin widths of 2.5 nm. The centre of each bin is denoted x_i in the following. To account for finite thermal accessibility of states k from state a, i.e. the fact that surface hops to adiabatic states deep inside the valence band may be energetically forbidden, a Boltzmann weight was assigned to each state according to its energy relative to the active adiabatic state energy, E_k - E_a, w^B_k = min[ exp(β(E_k-E_a)), 1 ]. 
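A compact sketch of the estimators used above, i.e. the centre of charge, its finite-difference drift velocity with δt = 2 fs, the IPR, and the assignment of velocities to 4-unit-cell position bins, is given below. The site populations generated here are random placeholders for the stored FOB-SH wavefunction coefficients, and the site spacing is an assumed value.

```python
import numpy as np

DT_OUT = 2.0        # fs, output interval used for the finite-difference COC velocity
BIN_NM = 2.88       # bin length along a: 4 unit cells, as in the text

def centre_of_charge(pops, x_sites):
    """<x>(t) = sum_k |u_k(t)|^2 x_k(t) for an array of site populations of shape (n_t, M)."""
    return pops @ x_sites

def ipr(pops):
    """Inverse participation ratio 1 / sum_k |u_k|^4 at every stored time step."""
    return 1.0 / np.sum(pops**2, axis=1)           # pops already hold |u_k|^2

def binned_drift_velocity(pops, x_sites, bin_edges):
    """Finite-difference drift velocity v_x(t), assigned to the bin containing the COC at time t."""
    x_t = centre_of_charge(pops, x_sites)
    v_t = (x_t[1:] - x_t[:-1]) / DT_OUT            # nm/fs
    which = np.digitize(x_t[:-1], bin_edges) - 1   # bin of the initial COC position
    v_mean = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        sel = which == b
        if sel.any():
            v_mean[b] = v_t[sel].mean()
    return v_mean

# --- tiny synthetic example: 50 molecules spaced 0.72 nm apart along a (assumed spacing) ---
rng = np.random.default_rng(2)
M, n_t = 50, 500
x_sites = 0.72 * np.arange(M)                      # nm
pops = rng.dirichlet(np.ones(M) * 0.2, size=n_t)   # placeholder |u_k|^2, normalised at each step
edges = np.arange(0.0, x_sites[-1] + BIN_NM, BIN_NM)
print("mean IPR:", round(float(ipr(pops).mean()), 1))
print("binned <v_x> (nm/fs):", np.round(binned_drift_velocity(pops, x_sites, edges), 3))
```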
Defining the number of thermally accessible states within a given bin x_i as the sum of the Boltzmann weights of all states k within this bin, N^acc(x_i) = ∑_k ∈ x_i w^B_k, the thermally averaged property P_k over adiabatic states in bin x_i, P_k= d_ka^ad, p_ka, IPR_k, is given by ⟨ P_k ⟩^B (x_i) = ∑_k ∈ x_i P_k w^B_k/ N^acc(x_i). The NACE, Tully hopping probability and IPR thermally averaged over adiabatic states in a distance bin x_i, ⟨ d_ka^ad⟩^B, ⟨ p_ka⟩^B and ⟨IPR_k ⟩^B, respectively, are shown for all distance bins in Figure 4(b) and Supplementary Figure 12(c) and (a). The percentage of thermally accessible states in a distance bin x_i, N^acc(x_i) / ∑_j N^acc(x_j)× 100%, is shown for all distance bins in Figure 4(d). Moreover, the total thermally weighted probability for a surface hop to any state in bin x_i, N^acc(x_i) ⟨ p_ka⟩^B(x_i), is shown in Supplementary Figure 12(f). To facilitate a like-for-like comparison with the 300 K constant temperature simulation (Fig. <ref>(c) and (e) and Supplementary Fig. 13), the same procedure was carried out at constant temperature of 300 K using an identical active region size of 50× 7 unit cells and identical initial conditions. Chemical potential contribution to Seebeck coefficient, α_c The chemical potential of holes in the valence band is equal to the free energy change upon hole insertion into the band. It depends on hole density n and temperature T. For a given reference hole density, n^ref, and T it is given by μ_c^ref(T, n^ref) = F_hole(T, n^ref) - F_neutral(T, n^ref) = -k_B T ln⟨∑_i^vb e^β [E_i(𝐑) + E_neutral(𝐑)⟩_E_neutral(𝐑)^n^ref, where F denotes free energy, E_i(𝐑) is the i^th valence band (vb) state at nuclear positions 𝐑 (i.e. eigenstate of the electronic Hamiltonian equation <ref>, where the index i runs from the top to the bottom of the valence band) and E_neutral (𝐑) is the energy of the neutral system at nuclear positions 𝐑. The brackets denote taking the ensemble average over configurations sampled from a molecular dynamics trajectory of the neutral system at carrier density n^ref. Note that E_i are electronic energy levels, i.e., E_i decreases with increasing hole excitation energy (see also Fig. <ref>(b)), thus the positive sign in the exponent of the Boltzmann weight. A derivation of equation <ref> is presented in Supplementary Note 13. The chemical potential at a general carrier density n = 1 / A, where A is the area of the electronically active region within the a-b plane, is given by μ_c(T, n) = μ_c^ref(T, n^ref) + k_B T lnn/n^ref. The chemical potential contribution to the Seebeck coefficient, that is the second term on the RHS of equation <ref>, is then given by α_c = -1/q∂μ_c/∂ T = -1/q[ ∂μ_c^ref/∂ T + k_B lnn/n^ref]. The first term on the RHS of equation <ref>, can be obtained by calculating μ_c^ref(T, n^ref) at different temperatures around 300 K and taking the slope. Details on these calculations are presented in Supplementary Note 13 where we also show that the numerical results are virtually independent of the chosen reference concentration (n^ref), Supplementary Table S8 and Figure S16. Experimental Details Rubrene single crystals were grown via physical vapour transport under Ar flow from ≥ 98 % pure rubrene powder (Sigma Aldrich – used as purchased). The devices were made on 175 PET (Melinex ST504, DuPont Teijin Films). After sonic cleaning in acetone and isopropyl alcohol, the gate and heater electrodes were deposited via shadow mask with a 3 nm Cr adhesion layer followed by 20 nm Au. 
The gate dielectric is formed of a 500 nm layer of CYTOP, deposited by spin coating followed by thermal annealing at 90 °C. The source and drain contacts (also 20 nm Au) were then evaporated, followed by manual placement of the grown single crystals aligning along the high mobility direction. The device architecture and the Seebeck measurement were carried out in a similar way to our previous study<cit.>. All mobility and Seebeck measurements were performed in a Lake Shore CRX-4K cryogenic probe station using Keithley SMU models 2612B, 6430 and 2182 nanovoltmeters. For more details on the Seebeck measurement see SI. §.§ Author Contributions Conceptualization: JE, HS, JB Software: AD, SG Methodology: JE, YX, HS, JB Investigation: JE, YX, EDG, FI Visualization: JE, EDG Supervision: HS, JB Writing—original draft: JE, JB Writing—review & editing: JE, YX, EDG, FI, AD, SG, HS, JB §.§ Data and code availability The full data for this study total several terabytes and are in cold storage accessible by the corresponding authors and available upon reasonable request. The custom FOB-SH code for non-adiabatic molecular dynamics, the Python code used for the analysis and other post-processing tools used for this study are available from the corresponding authors upon request. §.§ Acknowledgements J.E. was supported by a departmental Ph.D. studentship, F.I. and A.D. were supported by EPSRC DTP studentships (EP/W524335/1), and S.G. was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 682539/SOFTCHARGE). Via our membership of the UK's HEC Materials Chemistry Consortium, which is funded by EPSRC (EP/L000202, EP/R029431), this work used the ARCHER UK National Supercomputing Service (http://www.archer.ac.uk) as well as the UK Materials and Molecular Modelling (MMM) Hub, which is partially funded by EPSRC (EP/P020194). We also acknowledge the use of the UCL Kathleen High Performance Computing Facility. Y.X. acknowledges support from the Cambridge Commonwealth, European & International Trust and Chinese Scholarship Council. E.D.G. was supported by an EPSRC DTP Studentship provided by the Department of Physics, University of Cambridge. For the experimental work we acknowledge financial support from the Royal Society (RP/R1/201082), the European Research Council (101020872) and the Engineering and Physical Sciences Research Council (EP/W017091/1). [Lu et al.(2016)Lu, Li, and Liu]lu2016review Lu, N.; Li, L.; Liu, M. A review of carrier thermoelectric-transport theory in organic semiconductors. Phys. Chem. Chem. Phys. 2016, 18, 19503–19525 [Zhang and Di(2020)Zhang, and Di]zhang2020exploring Zhang, F.; Di, C.-a. Exploring thermoelectric materials from high mobility organic semiconductors. Chem. Mater. 2020, 32, 2688–2702 [Venkateshvaran et al.(2014)Venkateshvaran, Nikolka, Sadhanala, Lemaur, Zelazny, Kepa, Hurhangee, Kronemeijer, Pecunia, and Nasrallah]venkateshvaran2014approaching Venkateshvaran, D.; Nikolka, M.; Sadhanala, A.; Lemaur, V.; Zelazny, M.; Kepa, M.; Hurhangee, M.; Kronemeijer, A. J.; Pecunia, V.; Nasrallah, I. Approaching disorder-free transport in high-mobility conjugated polymers. et al. Nature 2014, 515, 384–388 [Kim et al.(2013)Kim, Shao, Zhang, and Pipe]kim2013engineered Kim, G.-H.; Shao, L.; Zhang, K.; Pipe, K. P.
Engineered doping of organic semiconductors for enhanced thermoelectric efficiency. Nat. Mater. 2013, 12, 719–723 [Masoumi et al.(2022)Masoumi, O'Shaughnessy, and Pakdel]masoumi2022organic Masoumi, S.; O'Shaughnessy, S.; Pakdel, A. Organic-based flexible thermoelectric generators: From materials to devices. Nano Energy. 2022, 92, 106774 [Wang et al.(2012)Wang, Shi, Chen, Xia, and Shuai]Wang12pccp Wang, D.; Shi, W.; Chen, J.; Xia, J.; Shuai, Z. Modeling thermoelectric transport in organic materials. Phys. Chem. Chem. Phys. 2012, 14, 16505–16520 [Wang et al.(2018)Wang, Hu, Bocklund, Shang, Zhou, Liu, and Chen]wang2018first Wang, Y.; Hu, Y.-J.; Bocklund, B.; Shang, S.-L.; Zhou, B.-C.; Liu, Z.-K.; Chen, L.-Q. First-principles thermodynamic theory of Seebeck coefficients. Phys. Rev. B 2018, 98, 224101 [Wu and Yu(2020)Wu, and Yu]wu2020contributions Wu, G.; Yu, X. Contributions of chemical potential to the diffusive Seebeck coefficient for bulk semiconductor materials. Eur. Phys. J. Plus 2020, 135, 1–15 [Zevalkink et al.(2018)Zevalkink, Smiadak, Blackburn, Ferguson, Chabinyc, Delaire, Wang, Kovnir, Martin, and Schelhas]zevalkink2018practical Zevalkink, A.; Smiadak, D. M.; Blackburn, J. L.; Ferguson, A. J.; Chabinyc, M. L.; Delaire, O.; Wang, J.; Kovnir, K.; Martin, J.; Schelhas, L. T. A practical field guide to thermoelectrics: Fundamentals, synthesis, and characterization. et al. Appl. Phys. Rev. 2018, 5 [Lundstrom and Jeong(2012)Lundstrom, and Jeong]lundstrom2012near Lundstrom, M. S.; Jeong, C. Near-equilibrium transport: fundamentals and applications; World Scientific Publishing Company, 2012; Vol. 2 [Austin and Mott(1969)Austin, and Mott]austin1969polarons Austin, I.; Mott, N. F. Polarons in crystalline and non-crystalline materials. Adv. Phys. 1969, 18, 41–102 [Chaikin and Beni(1976)Chaikin, and Beni]chaikin1976thermopower Chaikin, P.; Beni, G. Thermopower in the correlated hopping regime. Phys. Rev. B 1976, 13, 647 [Emin(1975)]emin1975thermoelectric Emin, D. Thermoelectric power due to electronic hopping motion. Phys. Rev. Lett. 1975, 35, 882 [Emin(1999)]emin1999enhanced Emin, D. Enhanced Seebeck coefficient from carrier-induced vibrational softening. Phys. Rev. B 1999, 59, 6205 [Emin(1999)]emin2014seebeck Emin, D. Seebeck effect. Wiley Encyclopedia of Electrical and Electronics Engineering 2014, 1–18 [Troisi and Orlandi(2006)Troisi, and Orlandi]Troisi06 Troisi, A.; Orlandi, G. Charge-Transport Regime of Crystalline Organic Semiconductors: Diffusion Limited by Thermal Off-Diagonal Electronic Disorder. Phys. Rev. Lett. 2006, 96, 086601–4 [Fratini et al.(2016)Fratini, Mayou, and Ciuchi]fratini2016transient Fratini, S.; Mayou, D.; Ciuchi, S. The transient localization scenario for charge transport in crystalline organic materials. Adv. Funct. Mater. 2016, 26, 2292–2315 [Fratini et al.(2017)Fratini, Ciuchi, Mayou, De Laissardière, and Troisi]fratini2017map Fratini, S.; Ciuchi, S.; Mayou, D.; De Laissardière, G. T.; Troisi, A. A map of high-mobility molecular semiconductors. Nat. Mater. 2017, 16, 998–1002 [Fratini et al.(2020)Fratini, Nikolka, Salleo, Schweicher, and Sirringhaus]Fratini20 Fratini, S.; Nikolka, M.; Salleo, A.; Schweicher, G.; Sirringhaus, H. Charge transport in high-mobility conjugated polymers and molecular semiconductors. Nat. Mater. 2020, 19, 491–502 [Few et al.(2015)Few, Frost, and Nelson]Few15 Few, S.; Frost, J. M.; Nelson, J. Models of charge pair generation in organic solar cells. Phys. Chem. Chem. Phys. 
2015, 17, 2311–2325 [Wang and Beljonne(2013)Wang, and Beljonne]Wang13jpcl Wang, L.; Beljonne, D. Flexible Surface Hopping Approach to Model the Crossover from Hopping to Band-like Transport in Organic Crystals. J. Phys. Chem. Lett. 2013, 4, 1888–1894 [Wang et al.(2015)Wang, Prezhdo, and Beljonne]Wang15pccp Wang, L.; Prezhdo, O. V.; Beljonne, D. Mixed quantum-classical dynamics for charge transport in organics. Phys. Chem. Chem. Phys. 2015, 17, 12395–12406 [Giannini et al.(2023)Giannini, Di Virgilio, Bardini, Hausch, Geuchies, Zheng, Volpi, Elsner, Broch, Geerts, Schreiber, Schweicher, Wang, Blumberger, Bonn, and Beljonne]Giannini23 Giannini, S. et al. Transiently delocalized states enhance hole mobility in organic molecular semiconductors. Nat. Mater. 2023, 22, 1361–1369 [Heck et al.(2015)Heck, Kranz, Kubar̆, and Elstner]Heck15 Heck, A.; Kranz, J. J.; Kubar̆, T.; Elstner, M. Multi-Scale Approach to Non-Adiabatic Charge Transport in High-Mobility Organic Semiconductors. J. Chem. Theor. Comput. 2015, 11, 5068–5082 [Xie et al.(2020)Xie, Holub, Kubar, and Elstner]Xie20 Xie, W.; Holub, D.; Kubar, T.; Elstner, M. Performance of Mixed Quantum-Classical Approaches on Modeling the Crossover from Hopping to Bandlike Charge Transport in Organic Semiconductors. J. Chem. Theory Comput. 2020, 16, 2071–2084 [Roosta et al.(2022)Roosta, Ghalami, Elstner, and Xie]Roosta22 Roosta, S.; Ghalami, F.; Elstner, M.; Xie, W. Efficient Surface Hopping Approach for Modeling Charge Transport in Organic Semiconductors. J. Chem. Theor. Comput. 2022, 18, 1264–1274 [Taylor and Kassal(2018)Taylor, and Kassal]Taylor18 Taylor, N. B.; Kassal, I. Generalised Marcus theory for multi-molecular delocalised charge transfer. Chem. Sci. 2018, 9, 2942–2951 [Balzer et al.(2021)Balzer, Smolders, Blyth, Hood, and Kassal]Balzer21 Balzer, D.; Smolders, T. J. A. M.; Blyth, D.; Hood, S. N.; Kassal, I. Delocalised kinetic Monte Carlo for simulating delocalisation-enhanced charge and exciton transport in disordered materials. Chem. Sci. 2021, 12, 2276 [Willson et al.(2023)Willson, Liu, Balzer, and Kassal]Willson23 Willson, J. T.; Liu, W.; Balzer, D.; Kassal, I. Jumping Kinetic Monte Carlo: Fast and Accurate Simulations of Partially Delocalized Charge Transport in Organic Semiconductors. J. Phys. Chem. Lett. 2023, 14, 3757–3764 [Sneyd et al.(2021)Sneyd, Fukui, Paleček, Prodhan, Wagner, Zhang, Sung, Collins, Slater, Andaji-Garmaroudi, MacFarlane, Garcia-Hernandez, Wang, Whittell, Hodgkiss, Chen, Beljonne, Manners, Friend, and Rao]Sneyd21 Sneyd, A. J. et al. Efficient energy transport in an organic semiconductor mediated by transient exciton delocalization. Sci. Adv. 2021, 7, eabh4232 [Sneyd et al.(2022)Sneyd, Beljonne, and Rao]Sneyd22 Sneyd, A. J.; Beljonne, D.; Rao, A. A New Frontier in Exciton Transport: Transient Delocalization. J. Phys. Chem. Lett. 2022, 13, 6820–6830 [Giannini et al.(2018)Giannini, Carof, and Blumberger]giannini2018crossover Giannini, S.; Carof, A.; Blumberger, J. Crossover from hopping to band-like charge transport in an organic semiconductor model: Atomistic nonadiabatic molecular dynamics simulation. J. Phys. Chem. Lett. 2018, 9, 3116–3123 [Giannini et al.(2019)Giannini, Carof, Ellis, Yang, Ziogos, Ghosh, and Blumberger]giannini2019quantum Giannini, S.; Carof, A.; Ellis, M.; Yang, H.; Ziogos, O. G.; Ghosh, S.; Blumberger, J. Quantum localization and delocalization of charge carriers in organic semiconducting crystals. Nat. Commun. 
2019, 10, 3843 [Giannini et al.(2020)Giannini, Ziogos, Carof, Ellis, and Blumberger]giannini2020flickering Giannini, S.; Ziogos, O. G.; Carof, A.; Ellis, M.; Blumberger, J. Flickering Polarons Extending over Ten Nanometres Mediate Charge Transport in High-Mobility Organic Crystals. Adv. Theory Simul. 2020, 3, 2000093 [Giannini and Blumberger(2022)Giannini, and Blumberger]giannini2022charge Giannini, S.; Blumberger, J. Charge transport in organic semiconductors: the perspective from nonadiabatic molecular dynamics. Acc. Chem. Res. 2022, 55, 819–830 [Tully(1990)]tully1990molecular Tully, J. C. Molecular dynamics with electronic transitions. J. Chem. Phys. 1990, 93, 1061–1071 [Apertet et al.(2016)Apertet, Ouerdane, Goupil, and Lecoeur]apertet2016note Apertet, Y.; Ouerdane, H.; Goupil, C.; Lecoeur, P. A note on the electrochemical nature of the thermoelectric power. Eur. Phys. J. Plus 2016, 131, 1–8 [Spencer et al.(2016)Spencer, Gajdos, and Blumberger]spencer2016fob Spencer, J.; Gajdos, F.; Blumberger, J. FOB-SH: Fragment orbital-based surface hopping for charge carrier transport in organic and biological molecules and materials. J. Chem. Phys. 2016, 145, 064102 [Carof et al.(2019)Carof, Giannini, and Blumberger]carof2019calculate Carof, A.; Giannini, S.; Blumberger, J. How to calculate charge mobility in molecular materials from surface hopping non-adiabatic molecular dynamics–beyond the hopping/band paradigm. Phys. Chem. Chem. Phys. 2019, 21, 26368–26386 [Giannini et al.(2021)Giannini, Carof, Ellis, Ziogos, and Blumberger]giannini2021atomic Giannini, S.; Carof, A.; Ellis, M.; Ziogos, O. G.; Blumberger, J. Multiscale Dynamics Simulations; 2021; pp 172–202 [Gajdos et al.(2014)Gajdos, Valner, Hoffmann, Spencer, Breuer, Kubas, Dupuis, and Blumberger]gajdos2014ultrafast Gajdos, F.; Valner, S.; Hoffmann, F.; Spencer, J.; Breuer, M.; Kubas, A.; Dupuis, M.; Blumberger, J. Ultrafast estimation of electronic couplings for electron transfer between π-conjugated organic molecules. J. Chem. Theor. Comput. 2014, 10, 4653–4660 [Ziogos and Blumberger(2021)Ziogos, and Blumberger]ziogos2021ultrafast Ziogos, O. G.; Blumberger, J. Ultrafast estimation of electronic couplings for electron transfer between pi-conjugated organic molecules. II. J. Chem. Phys. 2021, 155, 244110 [Klimeš et al.(2009)Klimeš, Bowler, and Michaelides]klimevs2009chemical Klimeš, J.; Bowler, D. R.; Michaelides, A. Chemical accuracy for the van der Waals density functional. J. Phys. Condens. Matter. 2009, 22, 022201 [Futera and Blumberger(2017)Futera, and Blumberger]futera2017electronic Futera, Z.; Blumberger, J. Electronic couplings for charge transfer across molecule/metal and molecule/semiconductor interfaces: Performance of the projector operator-based diabatization approach. J. Phys. Chem. C 2017, 121, 19677–19689 [Elsner et al.(2021)Elsner, Giannini, and Blumberger]elsner2021mechanoelectric Elsner, J.; Giannini, S.; Blumberger, J. Mechanoelectric response of single-crystal rubrene from Ab initio molecular dynamics. J. Phys. Chem. Lett. 2021, 12, 5857–5863 [Podzorov et al.(2004)Podzorov, Menard, Borissov, Kiryukhin, Rogers, and Gershenson]podzorov2004intrinsic Podzorov, V.; Menard, E.; Borissov, A.; Kiryukhin, V.; Rogers, J. A.; Gershenson, M. Intrinsic charge transport on the surface of organic semiconductors. Phys. Rev. Lett. 2004, 93, 086602 [Hulea et al.(2006)Hulea, Fratini, Xie, Mulder, Iossad, Rastelli, Ciuchi, and Morpurgo]hulea2006tunable Hulea, I. N.; Fratini, S.; Xie, H.; Mulder, C. L.; Iossad, N. 
N.; Rastelli, G.; Ciuchi, S.; Morpurgo, A. F. Tunable Fröhlich polarons in organic single-crystal transistors. Nat. Mater. 2006, 5, 982–986 [Xie et al.(2013)Xie, McGarry, Liu, Wu, Ruden, Douglas, and Frisbie]xie2013high Xie, W.; McGarry, K. A.; Liu, F.; Wu, Y.; Ruden, P. P.; Douglas, C. J.; Frisbie, C. D. High-mobility transistors based on single crystals of isotopically substituted rubrene-d 28. J. Phys. Chem. C. 2013, 117, 11522–11529 [Li et al.(2019)Li, Xiong, Sievers, Hu, Fan, Wei, Bao, Chen, Donadio, and Ala-Nissila]li2019influence Li, Z.; Xiong, S.; Sievers, C.; Hu, Y.; Fan, Z.; Wei, N.; Bao, H.; Chen, S.; Donadio, D.; Ala-Nissila, T. Influence of thermostatting on nonequilibrium molecular dynamics simulations of heat conduction in solids. J. Chem. Phys. 2019, 151, 234105 [Ioffe et al.(1959)Ioffe, Stil'Bans, Iordanishvili, Stavitskaya, Gelbtuch, and Vineyard]ioffe1959semiconductor Ioffe, A. F.; Stil'Bans, L.; Iordanishvili, E.; Stavitskaya, T.; Gelbtuch, A.; Vineyard, G. Semiconductor thermoelements and thermoelectric cooling. Phys. Today 1959, 12, 42–42 [Baer(2002)]baer2002non Baer, R. Non-adiabatic couplings by time-dependent density functional theory. Chem. Phys. Lett. 2002, 364, 75–79 [Domcke et al.(2004)Domcke, Yarkony, and Köppel]domcke2004conical Domcke, W.; Yarkony, D.; Köppel, H. Conical intersections: electronic structure, dynamics & spectroscopy; World Scientific, 2004; Vol. 15 [Callen(1948)]callen1948application Callen, H. B. The application of Onsager's reciprocal relations to thermoelectric, thermomagnetic, and galvanomagnetic effects. Phys. Rev. 1948, 73, 1349 [Mahan(2000)]mahan2000density Mahan, G. Density variations in thermoelectrics. J. Appl. Phys. 2000, 87, 7326–7332 [Pernstich et al.(2008)Pernstich, Rössner, and Batlogg]pernstich2008field Pernstich, K.; Rössner, B.; Batlogg, B. Field-effect-modulated Seebeck coefficient in organic semiconductors. Nat. Mater. 2008, 7, 321–325 [Hafizi et al.(2023)Hafizi, Elsner, and Blumberger]hafizi2023ultrafast Hafizi, R.; Elsner, J.; Blumberger, J. Ultrafast Electronic Coupling Estimators: Neural Networks versus Physics-Based Approaches. J. Chem. Theor. Comput. 2023, [Carof et al.(2017)Carof, Giannini, and Blumberger]carof2017detailed Carof, A.; Giannini, S.; Blumberger, J. Detailed balance, internal consistency, and energy conservation in fragment orbital-based surface hopping. J. Chem. Phys. 2017, 147, 214113 [Jurchescu et al.(2006)Jurchescu, Meetsma, and Palstra]jurchescu2006low Jurchescu, O. D.; Meetsma, A.; Palstra, T. T. Low-temperature structure of rubrene single crystals grown by vapor transport. Acta Crystallogr. B: Struct. Sci. Cryst. Eng. Mater. 2006, 62, 330–334 [Kühne et al.(2020)Kühne, Iannuzzi, Del Ben, Rybkin, Seewald, Stein, Laino, Khaliullin, Schütt, and Schiffmann]kuhne2020cp2k Kühne, T. D.; Iannuzzi, M.; Del Ben, M.; Rybkin, V. V.; Seewald, P.; Stein, F.; Laino, T.; Khaliullin, R. Z.; Schütt, O.; Schiffmann, F. CP2K: An electronic structure and molecular dynamics software package-Quickstep: Efficient and accurate electronic structure calculations. et al. J. Chem. Phys. 2020, 152, 194103 [Statz et al.(2020)Statz, Schneider, Berger, Lai, Wood, Abdi-Jalebi, Leingang, Himmel, Zaumseil, and Sirringhaus]staz2020seebeck Statz, M.; Schneider, S.; Berger, F. J.; Lai, L.; Wood, W. A.; Abdi-Jalebi, M.; Leingang, S.; Himmel, H.-J.; Zaumseil, J.; Sirringhaus, H. Charge and Thermoelectric Transport in Polymer-Sorted Semiconducting Single-Walled Carbon Nanotube Networks. 
ACS Nano 2020, 14, 15552–15565 [Wang et al.(2004)Wang, Wolf, Caldwell, Kollman, and Case]wang2004development Wang, J.; Wolf, R. M.; Caldwell, J. W.; Kollman, P. A.; Case, D. A. Development and testing of a general amber force field. J. Comput. Chem. 2004, 25, 1157–1174 [Bulgarovskaya et al.(1983)Bulgarovskaya, Vozzhennikov, Aleksandrov, and Belsky]bulgarovskaya1983 Bulgarovskaya, I.; Vozzhennikov, V.; Aleksandrov, S.; Belsky, V. Latv. PSR Zinat. Akad. Vestis, Khim. Ser. 1983, 53 [Ziogos et al.(2021)Ziogos, Kubas, Futera, Xie, Elstner, and Blumberger]ziogos2021hab79 Ziogos, O. G.; Kubas, A.; Futera, Z.; Xie, W.; Elstner, M.; Blumberger, J. HAB79: A new molecular dataset for benchmarking DFT and DFTB electronic couplings against high-level ab initio calculations. J. Chem. Phys. 2021, 155, 234115 [Nematiaram and Troisi(2020)Nematiaram, and Troisi]nematiaram2020modeling Nematiaram, T.; Troisi, A. Modeling charge transport in high-mobility molecular semiconductors: Balancing electronic structure and quantum dynamics methods with the help of experiments. J. Chem. Phys. 2020, 152, 190902 [Ciuchi et al.(2011)Ciuchi, Fratini, and Mayou]ciuchi2011transient Ciuchi, S.; Fratini, S.; Mayou, D. Transient localization in crystalline organic semiconductors. Phys. Rev. B 2011, 83, 081202 [Nematiaram et al.(2019)Nematiaram, Ciuchi, Xie, Fratini, and Troisi]nematiaram2019practical Nematiaram, T.; Ciuchi, S.; Xie, X.; Fratini, S.; Troisi, A. Practical computation of the charge mobility in molecular semiconductors using transient localization theory. J. Phys. Chem. C 2019, 123, 6989–6997 [Chui et al.(1992)Chui, Swanson, Adriaans, Nissen, and Lipa]chui1992temperature Chui, T.; Swanson, D.; Adriaans, M.; Nissen, J.; Lipa, J. Temperature fluctuations in the canonical ensemble. Phys. Rev. Lett. 1992, 69, 3005. § FORCE FIELD PARAMETERIZATION FOB-SH simulations employ force fields for the calculation of diagonal Hamiltonian matrix elements (i.e. site energies). The matrix element H_kk corresponds to the energy of the system with molecule k charged and all other molecules neutral. The force field for the neutral system is based on the general AMBER force field (GAFF) <cit.> and parameters for the charged state were obtained by displacing the equilibrium bond lengths of the molecule with respect to the neutral state in order to reproduce the DFT reorganization energy, λ = 0.152 eV (see refs<cit.> for details). The 300 K distributions of electronic couplings obtained using the force field for rubrene employed in previous work<cit.> deviate slightly from those obtained using ab initio molecular dynamics with the optPBE-vdW density functional<cit.> (Prev. FF in Fig. 1 of the main text). This discrepancy stems from the use of an incorrect dihedral angle term linking phenyl side chains to the tetracene backbone such that parameters were set to match the dihedral terms describing in-plane interactions. This resulted in a slight misalignment of the phenyl side chains of optimized structures, visually apparent in Figure <ref>, which shows dimers extracted from optimized unit cells using the previously employed FF and optPBE-vdW DFT, as well as an experimental structure (CCDC database identifier QQQCIG01) <cit.>. 
While this oversight has little effect on the conclusions drawn in previous studies, we have now updated the relevant dihedral terms so that the dihedral angles of optimized structures and the 300 K distributions of electronic couplings obtained from force field molecular dynamics better reproduce those obtained from ab-initio molecular dynamics. The orientation of the phenyl side chains relative to the tetracene backbone can be quantified through the relevant dihedral angles, as indicated in Figure <ref>(d). For each phenyl side chain, there are 4 dihedral angles to consider: ϕ_1 = ϕ(ca1-cp1-cp2-ca3), ϕ_2 = ϕ(ca1-cp1-cp2-ca4), ϕ_3 = ϕ(ca2-cp1-cp2-ca3) and ϕ_4 = ϕ(ca2-cp1-cp2-ca4). The dihedral energy term has the form E_ϕ(ϕ) = K_ϕ (1 + cos(2ϕ - ϕ_0)), where the parameters used previously are K_ϕ = 3.625 kcal/mol/rad^2 and ϕ_0 = 180 deg. This has an energy minimum at ϕ = 0 deg, resulting in incorrect geometries for this system and an underestimation of the a direction electronic coupling, J_a. Table <ref> lists dihedral angles ϕ_1, ϕ_2, ϕ_3 and ϕ_4, of the experimental structure and of 0 K optimised structures using DFT and force fields with different values of K_ϕ. Optimisation using the previously employed force field (Prev. FF in Table <ref>) yields dihedral angles that are either too large or too small. Setting K_ϕ = 0 (i.e. turning off entirely) for dihedral terms linking the atoms depicted in Figure <ref>(d) results in much better agreement with optPBE-vdW and the experimental structure (FF_0 in Table <ref>). Further improvement can be obtained by using a small value of K_ϕ = 0.6000 kcal/mol/rad^2. We refer to this as FF_opt in Table <ref>, which is the force field we have used throughout the present study. We note that the parameters for all other dihedral angles (i.e. those which do not connect the tetracene backbone to the phenyl side chains) and all other interactions are the same as in previous work<cit.>. Electronic couplings of 0K optimized structures, as well as mean and root-mean-square fluctuations of the 300 K distributions, using DFT and the various force fields are listed in Table <ref>. The distributions of electronic couplings were obtained by sampling dimers every 50 fs from molecular dynamics trajectories of length 15 ps (timestep 1 fs) at 300 K. The force field used in previous work (Prev. FF) underestimates J_a for the optimized structure, as well as the mean value of the 300 K distribution, ⟨ J_a ⟩. This is remedied by reducing the force constant of the relevant dihedral terms. Best agreement with respect to ab initio molecular dynamics is obtained using K_ϕ = 0.6000 kcal/mol/rad^2 (FF_opt). The timescales of electronic coupling fluctuations, τ_a and τ_b in Table <ref>, are also much improved using FF_opt; see also Figure 1 of the main text. Electronic couplings for structures obtained from optPBE-vdW MD were calculated using the projector operator-based diabatization (POD) method <cit.> in combination with the PBE density functional and uniform scaling by 1.325 (sPOD) <cit.>, as outlined in Ref. . For structures obtained from force field MD, the analytic overlap method (AOM) <cit.> was used, as employed in FOB-SH simulations. 
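The consequence of the re-parameterised dihedral term can be seen directly from the functional form above. The short sketch below evaluates E_ϕ(ϕ) = K_ϕ(1 + cos(2ϕ - ϕ_0)) for the three force constants discussed in this note (Prev. FF, FF_0 and FF_opt); it uses only the parameters quoted in the text and is intended as a simple plotting aid.

```python
import numpy as np

def dihedral_energy(phi_deg, k_phi, phi0_deg=180.0):
    """Torsional term E_phi = K_phi * (1 + cos(2*phi - phi0)); energies in kcal/mol."""
    phi, phi0 = np.radians(phi_deg), np.radians(phi0_deg)
    return k_phi * (1.0 + np.cos(2.0 * phi - phi0))

phi = np.linspace(0.0, 180.0, 181)
variants = {"Prev. FF": 3.625, "FF_0": 0.0, "FF_opt": 0.600}   # K_phi in kcal/mol/rad^2
for name, k in variants.items():
    e = dihedral_energy(phi, k)
    print(f"{name:8s}: minimum at {phi[np.argmin(e)]:5.1f} deg, barrier height {e.max() - e.min():.2f} kcal/mol")
```

Reducing K_ϕ weakens the bias towards ϕ = 0°, which is why FF_0 and FF_opt recover dihedral angles much closer to the DFT and experimental values listed in the table above.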
§ DETAILED DESCRIPTION OF RESULTS IN FIGURE 1 MAIN TEXT Figure 1(a) in the main text shows the distributions of electronic couplings J_a and J_b over trajectories at 300 K obtained with three different methods: ab-initio MD with the optPBE-vdW functional, classical MD using the force field employed in previous studies<cit.> and classical MD using the optimized force field employed in this work, FFopt (SI section 1). In the first case, we utilise the DFT-based sPOD method for calculation of electronic couplings<cit.>, whereas in the latter two cases we utilise the analytic overlap method (AOM), as employed in FOB-SH <cit.>. We note significantly better agreement in the distributions of electronic couplings when using the optimized force field FFopt compared to the force field used previously (see SI section 1). Figure 1(b) shows the normalised density of states (DOS) obtained using the different methods. The DFT/PBE density of states of the optimized structure is shown in the black dashed line. We note that the bandwidth has been scaled by a factor of 1.325 (hence the label sDFT) due to a tendency for the PBE functional to underestimate electronic couplings <cit.>. The energy of the top of the valence band was set to 0. All other lines are obtained using the valence band Hamiltonian (Eq. 4 in the main text) with matrix elements sampled from the calculated distributions of electronic couplings. Dashed lines are for optimized structures with no electronic disorder i.e. all diagonal elements set to zero and the distributions of J_a and J_b are given by delta functions. Solid lines are for averages over 50 Hamiltonians with different realisations of disorder, obtained by sampling the 300 K distributions of electronic couplings. In each case, the peak of the DOS was aligned in energy with the peak of the sDFT/PBE DOS at -0.499 eV relative to the top of the valence band. We find that our valence band Hamiltonians achieve good accuracy compared to the full DFT DOS for the optimized structures, indicating that such valence band Hamiltonian accurately describes the valence band electronic structure <cit.>. Excellent agreement is achieved with the FFopt Hamiltonians compared to the optPBE-vdW Hamiltonians (red vs cyan). The effect of finite temperature is to smear out density of states due to electronic disorder i.e. the spread in the electronic couplings matrix elements, as indicated by the solid compared to dashed lines. Figure 1(c) and (d) show the spectral density functions of the electronic coupling time series for J_a and J_b. The running integral of the spectral density yields the cumulative disorder including all frequencies up to ω, σ_α(ω), allowing us to quantify the relative contribution of each mode, σ_α(ω) = [ 8/βπ∫_0^ω dω' S_α(ω')/ω']^1/2, where S_α(ω) is the spectral density for time series J_α (α = a, b) and β=k_BT. Including all frequencies ω→∞ returns the root-mean-square fluctuation of the time series, which we note is similar using FFopt dynamics compared to optPBE-vdW dynamics (red vs cyan). § TEMPERATURE DEPENDENCE OF LATTICE PARAMETERS All FOB-SH simulations were carried out at fixed cell volume. In the case of the constant temperature simulations over a range of temperatures, thermal expansion of the lattice was taken into account using a linear interpolation of the experimental temperature-dependent lattice parameters <cit.>. The best fit line was extrapolated to yield lattice parameters for the higher temperatures (325 K and 350 K) not measured by experiment. 
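The linear fit and extrapolation of the lattice parameters just described amount to the following few lines; the temperature/lattice-parameter pairs are illustrative placeholders (the experimental values of the study cited above are listed in the Supplementary Table), so only the procedure is meant to carry over.

```python
import numpy as np

# placeholder experimental data (T in K, lattice parameters in Angstrom); the real values
# are those of the temperature-dependent X-ray study cited in the text
t_exp = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
abc_exp = {
    "a": np.array([26.79, 26.82, 26.84, 26.86, 26.89]),
    "b": np.array([7.17, 7.17, 7.18, 7.19, 7.19]),
    "c": np.array([14.21, 14.25, 14.30, 14.36, 14.43]),
}

t_sim = np.array([200.0, 225.0, 250.0, 275.0, 300.0, 325.0, 350.0])   # simulation temperatures

for axis, values in abc_exp.items():
    slope, intercept = np.polyfit(t_exp, values, 1)     # linear thermal expansion
    fitted = slope * t_sim + intercept                  # includes extrapolation to 325 K and 350 K
    print(axis, np.round(fitted, 3))
```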
For simulations under a temperature gradient from 250 K to 350 K centred at 300 K, linear expansion with temperature was assumed and the 300 K lattice parameters were used. Figure <ref> shows the experimental lattice parameters along the a, b and c crystallographic directions (black lines), with linear fits in red dashed lines. The temperature-dependent lattice parameters employed in FOB-SH simulations are marked by red circles. Numerical values are listed in Table <ref>. § CONVERGENCE OF HOLE MOBILITY WITH RESPECT TO SYSTEM SIZE Due to the increase in diffusivity with decreasing temperature, simulations at the lowest temperature, T = 200 K, are the hardest to converge with FOB-SH active region size. FOB-SH runs at different supercell sizes are required to ensure that mobility is converged with system size. If the cell is too small in a particular direction, the charge carrier wavefunction will be restricted in that direction once it reaches the vicinity of the boundary, leading to an underestimation in the slope of mean-squared-displacement with time. We can quantify the boundary effect by counting the fraction of trajectories where the wavefunction `hits' the far boundary. A `hit' refers to the centre-of-charge of the wavefunction coming within some defined distance d_hit of the boundary. This is illustrated in Figure <ref>. We use two definitions of d_hit. In the first case, we define d^n_av, x to be n× the standard deviation of the wavefunction projected along the x direction, averaged over all time steps: d^n_av, x = n σ̅_x = n ⟨√(⟨ x^2 ⟩ - ⟨ x ⟩^2)⟩_t = n ⟨√(∑_k|u_k|^2x_k^2 - ( ∑_k|u_k|^2 x_k )^2 )⟩_t, where u_k are the wavefunction coefficients in the diabatic (site) basis, x_k denotes the position of site k along direction x and the average is taken over all time steps. An alternative choice is to define d^n_inst, x(t) to be n× the instantaneous standard deviation of the wavefunction at time t, projected along the x direction. d^n_inst, x(t) = n σ_x(t) = n √(⟨ x(t)^2 ⟩ - ⟨ x(t) ⟩^2) = n √(∑_k|u_k(t)|^2x_k^2(t) - ( ∑_k|u_k(t)|^2 x_k(t) )^2 ). Equation <ref> provides a more stringent criterion than equation <ref> in the scenario where the wavefunction is located close to the boundary and undergoing a transient delocalisation event, σ_x(t) ≫σ̅_x. In this case, a hit may be counted by equation <ref> but not by equation <ref>. The rubrene unit cell comprises four molecules (280 atoms) and spans two distinct high-mobility a-b planes perpendicular to the out-of-plane c direction. The overall 3D periodic MD cell used in FOB-SH simulations includes both a-b layers to ensure structural integrity, however the FOB-SH active region (where the FOB-SH electronic Hamiltonian is defined) includes only a single high-mobility layer, representing half the unit cell along the c direction. This is justified by the fact that electronic couplings along the c direction are orders of magnitude smaller than electronic couplings within the a-b plane. In the following we denote supercell sizes by N_a × N_b and omit the dimension along the c-axis which is always 1/2 except where indicated otherwise. For simulations at 200 K, we considered four supercell sizes with FOB-SH active regions of 32×18, 41×14, 50×13 and 54×13 unit cells. We note that while the overall MD cell is 3D periodic, the FOB-SH active region which defines the FOB-SH Hamiltonian is not. For simulations with the 32×18 active cell, a nuclear time step of 0.1 fs was used, whereas a smaller time step of 0.05 fs was used for all other cells. 
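The two hit criteria defined above translate into a short analysis routine: the time-averaged width nσ̄_x and the instantaneous width nσ_x(t) are computed from the stored site populations, and a trajectory counts as a hit if the COC approaches either box edge more closely than the corresponding threshold. Populations, box size and the choice of n below are placeholder assumptions.

```python
import numpy as np

def wavefunction_widths(pops, x_sites):
    """Return COC(t) and instantaneous standard deviation sigma_x(t) from populations |u_k|^2."""
    mean_x = pops @ x_sites
    mean_x2 = pops @ x_sites**2
    return mean_x, np.sqrt(np.maximum(mean_x2 - mean_x**2, 0.0))

def hit_fraction(pop_list, x_sites, box_length, n=2):
    """Fraction of trajectories whose COC comes within d_hit of either box edge along x,
    using the time-averaged (d_av) and instantaneous (d_inst) criteria of the text."""
    hits_av = hits_inst = 0
    for pops in pop_list:
        x_t, sig_t = wavefunction_widths(pops, x_sites)
        edge_dist = np.minimum(x_t, box_length - x_t)
        if np.any(edge_dist < n * sig_t.mean()):
            hits_av += 1
        if np.any(edge_dist < n * sig_t):
            hits_inst += 1
    return hits_av / len(pop_list), hits_inst / len(pop_list)

# --- synthetic demonstration: 100 trajectories on a 36 nm box (placeholder data) ---
rng = np.random.default_rng(4)
M, n_t, box = 50, 400, 36.0
x_sites = np.linspace(0.0, box, M)
trajs = [rng.dirichlet(np.ones(M) * 0.3, size=n_t) for _ in range(100)]
f_av, f_inst = hit_fraction(trajs, x_sites, box, n=2)
print(f"hit fraction: {100*f_av:.0f}% (d_av criterion), {100*f_inst:.0f}% (d_inst criterion)")
```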
Such small nuclear time steps are necessary to avoid trivial crossings <cit.>. The percentage of hits within 900 fs along the a and b directions are listed in Table <ref> and Table <ref> using the definitions in equations <ref> and <ref>, respectively, for different choices of n. Simulations with the 32×18 cell result in many more hits along the a direction compared to the b direction, indicating that the anisotropy of the cell is not optimal and that the a direction length should be increased, while the b direction length may safely be reduced. This is ameliorated to some extent with the 41×14 cell, however there are still significantly more hits along a compared to along b. The 50×13 cell yields roughly the same number of hits in each direction, and this remains the case for the 54×13 cell, which further reduces the percentage of trajectories which hit the boundary. Such analysis is helpful for a sense of the optimal cell anisotropy, however convergence of the mean-squared-displacement with cell size must be checked explicitly. Figure <ref> shows the mean-squared-displacement along the a direction against time for the different cell sizes. Error bars are calculated by dividing the trajectories into 5 blocks, and taking the standard deviation over the block averages. It is apparent from Figure <ref>(a) that cells which are smaller in the high mobility (a) direction result in smaller slopes for mean-squared-displacement with time, and hence smaller mobilities. The 50×13 cell is converged along the a direction with respect to the larger 54×13 cell. The 54×13 cell was used for temperatures between 200 K and 250 K. For higher temperatures, the 50 × 13 cell was used. § TEMPERATURE DEPENDENCE OF MSD AND HOLE MOBILITY Figure <ref> shows converged plots of MSD against time for all constant temperature simulations. Error bars were calculated by partitioning the total number of trajectories into 5 blocks, and taking the standard deviation of the block averages. The first 200 fs were neglected in the fit, since the initial part of each trajectory corresponds to quantum relaxation from the initial diabatic state. The active region size, total number of trajectories and resulting values for mobility, are listed in Table <ref>. § TEMPERATURE DEPENDENCE OF DOS, VALENCE BAND ENERGY, IPR The effect of increasing electronic disorder with temperature (see Table 1 of the main text) is to cause localisation of the eigenstates (i.e. adiabatic states) of the electronic Hamiltonian. Figure <ref> shows the energy-resolved IPR of the valence band (also termed adiabatic) states, averaged over snapshots from 20 trajectories for each temperature. The heat-map axis indicates the number of states counted at a particular energy and IPR value. The energy of the valence band maximum was set to zero for each snapshot. The Boltzmann averages of the valence band energy and IPR are given by equations <ref> and <ref>, respectively. ⟨ E ⟩^B = ∑_k E_k exp(E_k / k_B T)/∑_k exp(E_k / k_B T), ⟨IPR⟩^B = ∑_k IPR_k exp(E_k / k_B T)/∑_k exp(E_k / k_B T), where k sums over the adiabatic states, k_B is the Boltzmann constant and T is temperature. These quantities are plotted as a function of temperature in panels <ref>(h) and <ref>(i). In all cases, states are localised at the band edges and become increasingly delocalised towards the middle of the band. 
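The two Boltzmann averages above can be evaluated per snapshot as in the sketch below, which diagonalises a (randomly generated, placeholder) tight-binding Hamiltonian, shifts the valence band maximum to zero and applies the hole-carrier weights exp(E_k/k_BT); the temperature dependence of the disorder in the toy Hamiltonian is a crude assumption and not the rubrene parameterisation.

```python
import numpy as np

KB = 8.617333e-5   # eV/K

def boltzmann_band_averages(H, temperature):
    """Return (<E>^B, <IPR>^B) for one snapshot Hamiltonian, using hole-carrier weights
    exp(E_k / k_B T) with the valence band maximum shifted to zero, as in the text."""
    E, U = np.linalg.eigh(H)
    E = E - E.max()                                # set the valence band maximum to zero
    ipr_k = 1.0 / np.sum(np.abs(U)**4, axis=0)     # IPR of each adiabatic (valence band) state
    w = np.exp(E / (KB * temperature))             # positive exponent: deeper hole states are penalised
    w /= w.sum()
    return float(np.sum(w * E)), float(np.sum(w * ipr_k))

# --- snapshot averages over a handful of random tight-binding Hamiltonians (placeholders) ---
rng = np.random.default_rng(5)
for T in (200.0, 300.0, 350.0):
    vals = []
    for _ in range(20):
        eps = rng.normal(0.0, 0.02, 200)
        j = rng.normal(0.10, 0.02 + 5e-5 * T, 199)  # crude mimic of disorder growing with T
        H = np.diag(eps) + np.diag(j, 1) + np.diag(j, -1)
        vals.append(boltzmann_band_averages(H, T))
    e_b, ipr_b = np.mean(vals, axis=0)
    print(f"T = {T:5.0f} K: <E>^B = {e_b*1e3:6.1f} meV, <IPR>^B = {ipr_b:5.1f}")
```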
As temperature increases, the IPR of the adiabatic states at a given energy becomes smaller (most clearly illustrated in Figure 2(d) of the main text), indicating increasing localisation due to dynamic disorder. This can also be seen by considering the Boltzmann average of the IPR with temperature, shown in Figure <ref>(i). An additional effect of increased temperature is an increase in the Boltzmann averaged energy with temperature, shown in Figure <ref>(h). This is due to increased thermal energy with temperature, allowing the charge carrier to explore states deeper within the valence band where states are more delocalised. However, from panel <ref>(i), clearly the effect of increasingly localised states with temperature is more prominent and overall the states occupied by a charge carrier will be more localised at higher temperatures<cit.>. § TRANSIENT DELOCALISATION MECHANISM FOR CONSTANT TEMPERATURE SIMULATIONS Figure <ref>(a), (b) and (c) show inverse participation ratio (IPR) of the charge carrier wavefunction as a function of time along a single FOB-SH trajectory at 300 K, 250 K and 200 K, respectively. The IPR exhibits significant fluctuations about the mean (indicated by the dashed line in magenta), which increases with decreasing temperature. Panels (d), (e) and (f) depict, respectively, the charge carrier wavefunction before, during and after an event of transient delocalization (TD). The charge carrier wavefunction is represented by superposing red ellipses on each molecular site (corresponding to fragment molecular orbital basis functions), with opaqueness proportional to the wavefunction site population |u_k|^2. The TD mechanism, illustrated in Figure <ref>(d)–(f), allows for significant displacement of the charge carrier over distances up to the extent of the transiently delocalized state<cit.>. In the example shown, at t = 418 fs the charge carrier is delocalized over 5–6 molecules (IPR = 5.6) and nuclear dynamics is propagated on the ground state (i.e. top of the valence band, E_a = 0). Subsequently, a series of surface hops to excited valence band states take place such that the active adiabatic state at t = 458 fs is the 43rd valence band state, E_a = - 86.5 meV. This is concomitant with TD of the wavefunction, which expands to cover approximately 39 molecules (IPR = 38.9) at t = 458 fs, a factor of 3 larger than the average value. Finally, following a series of surface hops back to the ground state, the wavefunction contracts and at t = 470 fs is momentarily delocalized over just 1–2 molecular sites (IPR = 1.8). Over the course of the TD event, the centre-of-charge of the wavefunction is displaced by approximately 3.5 nm. In general, displacements following a TD event may be larger or smaller and are limited only by the extent of the transiently delocalized state. Such TD events are driven by nonadiabatic transitions (i.e. surface hops) to excited valence band states, which become more delocalized towards the centre of the band (see Figure 2(b) of the main text). The extent of delocalization of the hole wavefunction in the TD state reflects the extent of delocalization of the active adiabatic state on which the nuclear dynamics is propagated. § TRANSIENT LOCALIZATION THEORY CALCULATIONS Transient localization theory (TLT) was used to calculate mobility for the different temperatures considered (Figure 2 of main text, magenta diamonds). 
The transient localization mobility is given by<cit.> μ_x(y) = e/k_BTL̅^̅2̅_x(y)(τ)/2τ where e is the elementary charge, k_B is the Boltzmann constant, T is temperature, τ is the timescale of lattice vibrations driving transient localization events, L is the so-called transient localization length and L̅^̅2̅ denotes the average of L^2 over multiple realisations of disorder. The squared transient localization length is given by L^2_x(y)(τ) = 1/Z∑_n,me^β E_n |⟨ n|ĵ_x(y)|m⟩|^2 2/(ħ/τ)^2 + (E_m - E_n)^2 where Z is the partition function, ĵ is the current operator and (|n⟩, E_n) refer to the eigenstates and eigenvalues of a Hamiltonian corresponding to a particular realisation of disorder. Note, a positive sign is used in the Boltzmann factor since we consider hole transport. In practice, disordered Hamiltonians are constructed by sampling from specified distributions of electronic couplings for a given supercell size. The average squared localization length, L̅^̅2̅ is obtained by averaging over the transient localization lengths corresponding to all Hamiltonians considered. Finally, mobility is calculated using Eq. <ref>. TLT mobility calculations for each temperature were carried out using freely available code <cit.> (https://github.com/CiuK1469/TransLoc, code version 0.4, downloaded 14th December 2020). We used a 39×26 supercell under periodic boundary conditions, which gave well-converged results in all cases, and we averaged over the transient localization lengths obtained from 50 distinct disordered Hamiltonians with matrix elements sampled from the temperature dependent distributions of electronic couplings listed in Table 1 of the main text. The timescale τ was calculated by averaging over the ab initio power spectrum for J_a fluctuations, S_a (Fig. 1(c), cyan), τ= [∫dω ω (S_a(ω)/ω) / ∫dω (S_a(ω)/ω)]^-1 = 0.43 ps. We note that the dependence of mobility on τ in equation <ref> is relatively weak due to the dependence of L^2_x(y) on τ, compensating the denominator <cit.>. Numerical values for TLT mobility at different temperatures are listed in Table <ref>. § PREPARATION OF SIMULATION CELL WITH TEMPERATURE GRADIENT Figure <ref> shows the simulation box and temperature profile, averaged over 200 ps of MD. The supercell contains 120×7×1 unit cells (4 molecules per unit cell) and is periodic in all dimensions. The temperature gradient is along x which is parallel to the a crystallographic direction. Since the overall cell is 3D periodic, the temperature at either side of the box along x must be the same, therefore a saw-tooth temperature profile is required. The FOB-SH active region, which is not periodic, is defined over one of the linear portions of the temperature profile (50×7×1/2 unit cells), as indicated. The temperature profile is achieved by defining thermal bath regions at 250 K and 350 K (10×7×1 unit cells, 280 molecules, 19600 atoms), as indicated in Figure <ref>, which are pinned to their respective temperatures through a velocity rescaling procedure. Velocity rescaling to the target temperature occurs whenever the difference between the instantaneous kinetic temperature, T_inst, and the target temperature, T_target, exceeds some defined tolerance, |T_inst - T_target| > T_tol. T_tol was set to 1 K in the bath regions. Figure <ref>(a) shows the temperature profile, averaged over 1 ps, after 50 ps of MD. The temperature profile is highly non-linear, indicating that a long time scale is needed for relaxation to the steady state using this approach. 
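The tolerance-based velocity rescaling used for the bath regions can be sketched as follows; masses, velocities and the unit system are placeholders, and only the criterion |T_inst - T_target| > T_tol and the rescaling factor √(T_target/T_inst) reflect the procedure described above.

```python
import numpy as np

KB = 8.617333e-5   # eV/K

def kinetic_temperature(vel, mass):
    """Instantaneous kinetic temperature of a group of atoms: T = 2*KE / (3*N*k_B).
    Any consistent unit system works as long as KE comes out in eV."""
    ke = 0.5 * np.sum(mass[:, None] * vel**2)
    return 2.0 * ke / (3.0 * len(mass) * KB)

def rescale_if_outside_tolerance(vel, mass, t_target, t_tol=1.0):
    """Tolerance-based velocity rescaling for a thermal bath region: velocities are rescaled
    to T_target only when |T_inst - T_target| > T_tol (1 K in the bath regions of the text)."""
    t_inst = kinetic_temperature(vel, mass)
    if abs(t_inst - t_target) > t_tol:
        vel = vel * np.sqrt(t_target / t_inst)
    return vel

# --- toy bath region: 19600 atoms with placeholder masses, initialised slightly too hot ---
rng = np.random.default_rng(7)
n_atoms, t_target = 19600, 250.0
mass = np.full(n_atoms, 1.0e3)                              # placeholder mass (consistent units)
sigma_v = np.sqrt(KB * 260.0 / mass[0])                     # Maxwell-Boltzmann width at 260 K
vel = rng.normal(0.0, sigma_v, (n_atoms, 3))
print("before:", f"{kinetic_temperature(vel, mass):.1f} K")
vel = rescale_if_outside_tolerance(vel, mass, t_target)
print("after :", f"{kinetic_temperature(vel, mass):.1f} K")
```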
To speed up convergence of the temperature profile, we initially apply velocity rescaling over the full simulation cell. To do so we defined in the active region consecutive slabs with dimensions 1×7×1 unit cells (28 molecules, 1960 atoms) and set the target temperature in each slab to the temperature expected from a linear temperature gradient. The temperature tolerance in the slabs was set to T_tol = {1, 5, 20} K for sequential molecular dynamics runs of length 2 ps whilst a tolerance of T_tol = 1 K was applied to the thermal bath regions throughout the relaxation procedure. Hence, thermostatting of the active region was sequentially made weaker. Following this initial relaxation, thermostatting in the active region was turned off entirely keeping only the thermal bath regions thermostatted. The temperature profile achieved in this manner is linear and stable over long time scales (≫ 200 ps, the time scale averaged over in Fig. <ref>), see Figure <ref>(b) where the profile was averaged over 1 ps, after 50 ps of MD. The FOB-SH simulations with temperature gradient were initialized from snapshots taken from this trajectory keeping only the thermal bath regions (and not the active region) thermostatted by velocity rescaling. The root-mean-square fluctuations in temperature within a given slab (i.e. error bars in Fig. <ref>) are of the order 5 K, which determines the local temperature resolution in our simulations. The magnitude of temperature fluctuations shows a slight linear increase with local temperature, see Figure <ref>. This is consistent with the expected temperature dependence of temperature fluctuations in the NVT ensemble from statistical mechanics, √(⟨Δ T^2 ⟩) = √(k_B/C)T ∼T/√(N), where C is the heat capacity <cit.>. However, direct comparisons with properties of equilibrium ensembles should be approached with caution given the nonequilibrium nature of the system under consideration. § ANALYSIS OF DRIFT VELOCITY, ⟨ V_X⟩ The instantaneous drift velocity at each time step and over each trajectory is calculated from the difference in centre-of-charge between time increments of 2 fs i.e. v^t = (⟨ x ⟩^t+δ t - ⟨ x ⟩^t) / δ t, with δ t = 2 fs. We use bins of length 4 unit cells (2.88 nm) along x (parallel to the a-crystallographic direction) and allocate each velocity to the bin which includes the initial centre-of-charge position, ⟨ x ⟩^t. The temperature gradient is 2.78 K/nm, therefore the temperature difference between the centre of adjacent bins is 8 K. The Seebeck coefficient is determined from the average velocity in the central bin (i.e. the bin centred at the position where T=300 K). Due to fluctuations in local temperature (≈ 5 K, Figure <ref>), a temperature difference between two points in space is only well-resolved for distances larger than ≈ 1 nm. Therefore when velocities are very small (i.e. the change in centre-of-charge after a time increment is significantly smaller than 1 nm), there should be little contribution to the Seebeck coefficient since locally the temperature is indistinguishable on this length scale. For large velocities, the temperature profile is clearly resolved and there is a significant contribution to the Seebeck coefficient. Large velocities generally occur after a successful hop to a new active state at some distance from the old active state. The charge carrier wavefunction spatially follows the active state on average, due to the decoherence correction which damps the non-active-surface adiabatic coefficients (see main text Methods). 
As explained in the main text, the temperature gradient induces a symmetry breaking which causes the probability of surface hopping to states on the cold side to be larger than the probability of surface hopping to states on the hot side. Figure <ref>(a) shows a histogram of the centre-of-charge velocities, only counting velocities where the centre-of-charge lies within the central 4 unit cells along x for constant temperature simulations at T=300 K and simulations under a temperature gradient, including all time steps and all trajectories. Visually, the distributions of drift velocities for simulations at constant temperature and under a temperature gradient appear very similar. However, the mean velocity for the constant temperature case is 0.10 ± 0.24 nm/ps, while the mean velocity for simulations under a temperature gradient is -0.89 ± 0.25 nm/ps. Figure <ref>(b) shows the cumulative drift velocity obtained by averaging over signed instantaneous drift velocities with absolute value smaller than some threshold. In the case of constant temperature simulations (green), this remains close to 0, within the margin of statistical error. For simulations under a temperature gradient, the drift velocity becomes more negative as the threshold is increased, corresponding to the net motion from hot to cold, and it is well converged when all instantaneous velocities with absolute value smaller than ≈ 1.5 nm/fs are included. Velocities larger than this are extremely rare and therefore do not affect the average significantly. Notice that the average drift velocity (i.e. the cumulative drift velocity after all velocities are included) is 2-3 orders of magnitude smaller than typical instantaneous drift velocities as positive and negative instantaneous drift velocities cancel to a large extent resulting in only relatively small net drift velocity, ⟨ v_x⟩. § ANALYSIS OF THERMOELECTRIC MOTION Figures <ref> and <ref> show distance resolved properties of the valence band states ψ_k for simulations with a temperature gradient and at constant T = 300 K, respectively. The Boltzmann average, indicated by ⟨ ... ⟩^B, is defined in section Methods of the main text, Eq. 16. N^conf refers to the total number of configurations (i.e. Hamiltonians) selected for the analysis, according to the criteria that a successful hop occurs with active state located in the central 10 unit cells of the simulation box along x. Panels <ref>(b) and <ref>(d) show the same information as panels 4(b) and 4(d) of the main text, while panels <ref>(b) and <ref>(d) show the same information as panels 4(c) and 4(e) of the main text. It is evident that in the case of constant temperature simulations (Figure <ref>), there is no difference between states with ΔCOC_ka < 0 compared to states with ΔCOC_ka > 0 (blue vs red lines). When a temperature gradient is applied (Figure <ref>), an asymmetry arises causing a clear splitting between quantities corresponding to states on the cold side compared to the hot side. ⟨ d_ka⟩^B is larger for states towards the cold side (ΔCOC_ka < 0), which results in a larger Boltzmann averaged surface hopping probability, ⟨ p_ka⟩^B, for states towards the cold side. Additionally, there is a greater number of thermally accessible states towards the cold side. These two effects result in a larger overall probability of surface hopping to any state at distance ΔCOC_ka∈ x, determined by the product N^acc/N^conf×⟨ p_ka⟩^B. 
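The threshold-averaged drift velocity shown in panel (b) of the figure discussed above amounts to one masked mean per threshold; a hedged sketch with a hypothetical velocity array (velocities and thresholds in the same units):

```python
import numpy as np

def cumulative_drift_velocity(v_central, thresholds):
    """Mean signed drift velocity including only instantaneous velocities with
    |v| below each threshold; it converges once essentially all velocities are included."""
    return np.array([v_central[np.abs(v_central) < th].mean() for th in thresholds])
```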
To assess the factors contributing to the higher average NACE for states on the cold side compared to those on the hot side, we present 2D histograms in Figure <ref> correlating |d^ad_ka| with either the inverse energy difference between states, 1/|Δ E_ka| (panels a-d), or the product of the IPR of the active state a and state k, IPR_a IPR_k (panels e-h). The NACE varies by orders of magnitude; therefore, the natural logarithm of each quantity is taken. All states located in a given distance bin (towards hot and cold) relative to the position of the active state are included in the same plot, using the same binning procedure described in Figure 5 of the main text. The non-adiabatic coupling element between adiabatic states k and l, d^ad_kl = ⟨ψ_k|ψ̇_l⟩, is related to the non-adiabatic coupling vector, 𝐃^ad_kl, through the chain rule, d^ad_kl = 𝐃^ad_kl·∂𝐑/∂t, where<cit.> 𝐃^ad_kl = ⟨ψ_k|∂_𝐑|ψ_l⟩ = ⟨ψ_k|∂_𝐑H|ψ_l⟩/Δ E_lk. Here, ∂_𝐑 denotes the derivative with respect to nuclear coordinates, H is the electronic Hamiltonian and Δ E_lk = E_l - E_k is the energy difference between states l and k. Equation <ref> shows that the non-adiabatic coupling becomes large when states become close in energy, evident in Figure <ref>(a-d), where the scatter is due to variation in the numerator. § CONVERGENCE OF THE KINETIC CONTRIBUTION TO SEEBECK COEFFICIENT, α_v Simulations were carried out with a temperature gradient and no external field (denoted T-gradient), at constant temperature T = 300 K (i.e. the control simulations, denoted T = constant) and with a temperature gradient and an external field (denoted T-gradient + E-field). In each case, we ran a total of 2000 FOB-SH trajectories. For the T-gradient and T = constant simulations, trajectories were of length 5 ps (i.e. a total of 10 ns of dynamics), whereas for the T-gradient + E-field simulations, trajectories were run to 3 ps (i.e. a total of 6 ns of dynamics). In each case, the wavefunction was initialised from 5 different positions spread uniformly along the x-direction of the active region in FOB-SH simulations, i.e., 400 trajectories per initial condition. The kinetic contribution to the Seebeck coefficient, α_v, is calculated from the average drift velocity of the charge carrier in the central position bin, α_v = -⟨v_x⟩/(μ ∂_x T) (first term on the RHS of Eq. 2 in the main text). Figure <ref> shows the convergence of ⟨v_x⟩ for the different simulations with (a) number of trajectories (including their full length) and (b) trajectory length using all 2000 trajectories. Error bars represent the standard error of the velocity distribution in the central bin, σ_v/√(N), where σ_v is the standard deviation of velocities observed in the central bin and N is the number of data points. For the T-gradient simulations (magenta), using just 50 trajectories per initial condition (i.e. 250 trajectories overall) already yields a mean value close to the final result; however, the relatively small amount of data leads to a large error in the mean. Using trajectories of length shorter than 3 ps leads to an overly negative value for ⟨v_x⟩ (i.e. an overestimation of α_v). This is because trajectories which start from the hot side reach the central bin (where drift velocities are used in the average) faster than trajectories starting from the cold end, leading to a small amount of bias. α_v is well converged if trajectories have length 3 ps or longer.
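With the central-bin statistics in hand, α_v follows from a simple unit conversion; a hedged sketch, with hypothetical inputs (velocities in nm/ps, mobility in cm^2 V^-1 s^-1, gradient in K/nm):

```python
import numpy as np

def kinetic_seebeck(v_central_nm_per_ps, mobility_cm2_per_Vs, grad_T_K_per_nm):
    """alpha_v = -<v_x> / (mu * dT/dx), returned in micro-volts per kelvin."""
    v_mean = np.mean(v_central_nm_per_ps)                          # nm/ps
    sem = np.std(v_central_nm_per_ps) / np.sqrt(len(v_central_nm_per_ps))
    mu = mobility_cm2_per_Vs * 1e2                                 # cm^2/(V s) -> nm^2/(V ps)
    alpha_V_per_K = -v_mean / (mu * grad_T_K_per_nm)
    return alpha_V_per_K * 1e6, sem                                # (alpha_v in uV/K, SEM of <v_x>)
```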
For the T = constant simulations (green) and T-gradient + E-field simulations (blue), the average velocity in the central bin is approximately converged using around 100 trajectories per initial condition (i.e. 500 trajectories overall). As described in the main text, the electric field is chosen to compensate for the net drift velocity observed in the T-gradient simulations such that open-circuit conditions are achieved. This is clearly illustrated in Figure <ref>; when the electric field is included, the drift velocity in the central bin becomes close to zero (approximately matching the situation observed for T = constant simulations). § CHEMICAL POTENTIAL CONTRIBUTION TO SEEBECK COEFFICIENT, α_c As described in the main text Methods, the chemical potential contribution to the Seebeck coefficient (second term on the RHS of equation 2 of the main text) is given by α_c = -(1/q) (∂_x μ_c)/(∂_x T) = -(1/q) ∂μ_c/∂T, where we have used the chain rule in the second equality of equation <ref>. The chemical potential at temperature T and reference carrier concentration n^ref is given by the change in free energy upon insertion of a charge carrier into the band (Eq. 18 of the main text):
μ_c^ref(T, n^ref) = F_hole(T, n^ref) - F_neutral(T, n^ref)   (S9)
= -k_B T [ln(Z_hole) - ln(Z_neutral)]   (S10)
= -k_B T ln[ ∫d𝐑 ∑_i^vb e^+β E_i(𝐑) / ∫d𝐑 e^-β E_neutral(𝐑) ]   (S11)
= -k_B T ln[ ∫d𝐑 ∑_i^vb e^+β[E_i(𝐑) + E_neutral(𝐑)] e^-β E_neutral(𝐑) / ∫d𝐑 e^-β E_neutral(𝐑) ]   (S12)
= -k_B T ln⟨ ∑_i^vb e^+β[E_i(𝐑) + E_neutral(𝐑)] ⟩_E_neutral(𝐑)^n^ref,   (S13)
where F denotes free energy, Z is the partition function, 𝐑 denotes nuclear coordinates, E_i(𝐑) is the i^th eigenstate of the FOB-SH Hamiltonian describing an excess hole at nuclear geometry 𝐑 and E_neutral is the energy of the neutral system at nuclear geometry 𝐑. Note the + sign in the Boltzmann weight over valence band (vb) states, indicating that increasing excitation corresponds to decreased energy. The brackets denote taking the thermal average, which can be obtained by sampling nuclear configurations from a molecular dynamics trajectory of the system in the charge-neutral state at carrier density n^ref. The chemical potential at temperature T and general carrier density n is given by equation 17 of the main text, reproduced here for clarity: μ_c(T, n) = μ_c^ref(T, n^ref) + k_B T ln(n/n^ref).   (S14) Taking the derivative with respect to temperature: ∂μ_c(T, n)/∂T = ∂μ_c^ref(T, n^ref)/∂T + k_B ln(n/n^ref),   (S15) from which α_c may be calculated using equation <ref>: α_c(T, n) = -(1/q) ∂μ_c/∂T = -(1/q)[ ∂μ_c^ref/∂T + k_B ln(n/n^ref) ].   (S16) To obtain the first term on the RHS of equation <ref>, α_c^ref = -(1/q) ∂μ_c^ref(T, n^ref)/∂T at 300 K, the chemical potential μ_c^ref was calculated for a given reference concentration n^ref (see below) at temperatures T = 275, 300, 325 K according to equation <ref> using MD simulation on the neutral rubrene crystal with a simulation box of 50 × 7 × 1 unit cells (the same size used for FOB-SH simulations with a temperature gradient). The temperature derivative was then obtained from the best slope fit. In order to verify that the chemical potential (Eq. <ref>) and Seebeck coefficient (Eq. <ref>) are independent of the chosen value for n^ref, α_c^ref was calculated at 6 different values of n^ref = 1/A, where A is the area of the active region parallel to the a-b crystallographic plane. The active region is the part of the supercell for which the valence band electronic Hamiltonian (Eq. 4 main text) is constructed.
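A minimal sketch of how Eq. (S13) and the subsequent temperature-derivative fit can be evaluated from sampled energies is given below; the input array (per-configuration valence-band energies E_i + E_neutral, in eV) is a hypothetical stand-in for the actual MD output.

```python
import numpy as np

KB = 8.617333262e-5  # Boltzmann constant in eV/K

def mu_c_ref(E_vb_plus_neutral, T):
    """Eq. (S13): mu_c^ref = -kB*T*ln< sum_i exp(+beta*(E_i + E_neutral)) >,
    where E_vb_plus_neutral has shape (n_configs, n_states) and already
    contains E_i(R) + E_neutral(R) for configurations sampled in the neutral state."""
    beta = 1.0 / (KB * T)
    shift = E_vb_plus_neutral.max()                              # numerical stabilisation
    inner = np.exp(beta * (E_vb_plus_neutral - shift)).sum(axis=1)
    return -KB * T * (np.log(inner.mean()) + beta * shift)

def alpha_c_ref(mu_values_eV, temperatures_K):
    """alpha_c^ref = -(1/q) d(mu_c^ref)/dT from a linear fit over e.g. 275/300/325 K;
    with mu in eV and q = e, the fitted slope is numerically the result in V/K."""
    slope, _ = np.polyfit(temperatures_K, mu_values_eV, 1)
    return -slope * 1e6                                          # micro-volts per kelvin
```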
The values of n^ref used, along with the corresponding active region sizes and results for μ_c^ref and α_c^ref, are listed in Table <ref>. The uncertainty in α_c^ref represents one standard deviation, calculated from the square root of the relevant diagonal value of the covariance matrix for the fit parameters. Figure <ref> plots the calculated values of α_c(T=300 K, n), equation <ref>, for the different definitions of n^ref listed in Table <ref>, at the concentration n^ref, i.e., α_c(T=300 K, n=n^ref) = -(1/q)[ ∂μ_c^ref/∂T + k_B ln(n^ref/n^ref) ] = -(1/q)[ ∂μ_c^ref/∂T ] = α_c^ref (black crosses). The best linear fit of α_c^ref to ln(n) is indicated with the dotted black line. The magenta dashed line shows the carrier concentration dependence of α_c from equation <ref> when a value of n^ref = 2.74× 10^15 m^-2 is chosen (i.e. a simulation box of 50×7×1 unit cells). The explicitly calculated data points (black crosses) coincide very well with the analytic expression of equation <ref> (within a single error bar). The slope of the best fit line (dotted black) is ∂α_c/∂ln(n) = -82.9 μV/K, in very close agreement with the expected slope from equation <ref>, -k_B/q = -86.2 μV/K, implying that the thermodynamic theory and simulation provide a fully consistent description of the concentration dependence of α_c which does not depend on the chosen reference density n^ref. § EXPERIMENTAL DETERMINATION OF MOBILITY AND SEEBECK COEFFICIENT A schematic of the electrode pattern used, and an image of the actual device, are given in Figure <ref>. The mobility of the rubrene single crystals is determined via a 4-point probe method in FET geometry. The crystals show an increasing mobility with decreasing temperature, indicative of band-like transport, as well as a turn-on voltage close to 0 V suggesting a very low deep trap density. As we cool down the temperature from 300 K to 270 K, the threshold voltage shifts from near 0 V to -1 V, indicating the existence of some trap states. In order to calculate the Seebeck coefficient of a material, the generated voltage across a known thermal gradient is measured. As described in previous works <cit.>, in this study the thermal gradient is created via resistive heating by on-device heater electrodes, with applied heater power W, α = ΔV/ΔT = (ΔV/ΔW)/(ΔT/ΔW). To find the absolute temperature difference between the two ends, the resistance of the source and drain electrodes is measured by a 4-point probe method. We apply a current across one pair of electrodes (e.g. 1 and 3) and measure the voltage across the other pair (2 and 4), which negates any contact resistance from the probes or resistance from cabling. Because the source and drain are gold – which has a linear resistivity vs. temperature curve – the applied temperature at the hot and cold ends can be found by measuring the resistance (R) as a function of applied heater power. Finally, each thermometer needs calibrating by measuring its resistance at varying cryostat base temperatures T. In this way: ΔT/ΔW = (dT/dW)_hot - (dT/dW)_cold, and (dT/dW)_hot,cold = (dR/dW)_hot,cold / (dR/dT)_hot,cold.
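The experimental reduction above boils down to ratios of fitted slopes versus heater power; a hedged sketch with hypothetical measurement arrays (heater powers W, thermovoltages V, thermometer resistances, and their temperature calibrations):

```python
import numpy as np

def slope(x, y):
    """Best-fit linear gradient dy/dx."""
    return np.polyfit(x, y, 1)[0]

def seebeck_from_heater_sweeps(W, V, R_hot, R_cold, T_cal, R_hot_cal, R_cold_cal):
    """alpha = (dV/dW) / (dT/dW), with dT/dW = (dT/dW)_hot - (dT/dW)_cold and
    dT/dW = (dR/dW) / (dR/dT) for each gold thermometer calibrated against T_cal."""
    dV_dW = slope(W, V)
    dT_dW_hot = slope(W, R_hot) / slope(T_cal, R_hot_cal)
    dT_dW_cold = slope(W, R_cold) / slope(T_cal, R_cold_cal)
    return dV_dW / (dT_dW_hot - dT_dW_cold)
```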
http://arxiv.org/abs/2406.19124v1
20240627121337
FiberPol-6D: Spectropolarimetric Integral Field mode for the SAAO 1.9 m Telescope using fibers
[ "Siddharth Maharana", "Sabyasachi Chattopadhyay", "Matthew Bershady" ]
astro-ph.IM
[ "astro-ph.IM", "physics.ins-det" ]
§ ABSTRACT Most optical spectropolarimeters built to date operate as long-slit or point-source instruments; they are inefficient for observations of extended objects such as galaxies and nebulae. 2D spectropolarimetry technique development is a major challenge in astronomical instrumentation. At the South African Astronomical Observatory’s (SAAO) FiberLab, we are developing a spectropolarimetry-capable Integral Field front-end called FiberPol(-6D) for the existing SpUpNIC spectrograph on the SAAO’s 1.9 m telescope. SpUpNIC is a general-purpose 2 arc-minute long-slit spectrograph with a grating suite covering the wavelength range from 350 to 1000 nm and at spectral resolutions between 500 and 6000. FiberPol generates 6D observational data: x-y spatial dimensions, wavelength, and the three linear Stokes parameters I, q and u. Using a rotating half-wave plate and a Wollaston prism, FiberPol executes two-channel polarimetry, and each channel is fed to an array of 14 fibers, corresponding to a field of view of 10×20  arcseconds^2 sampled with 2.9 arcsec diameter fiber cores. These fiber arrays are then rerouted to form a pseudo-slit input to SpUpNIC. FiberPol aims to achieve a polarimetric accuracy of 0.1% per spectral resolution bin. Further, it can also function as a non-polarimetric integral-field unit of size 20×20  arcseconds^2. The instrument design has been completed and it is currently being assembled and characterized in the lab. It is scheduled for on-sky commissioning in the second half of 2024. In this paper, we present the scientific and technical goals of FiberPol, its overall design and initial results from the lab assembly and testing. FiberPol is a low-cost technology demonstrator (< 10,000 USD), and the entire system predominantly employs small-size (one inch or less), commercial off-the-shelf optics and optomechanical components. It can be modified and replicated for use on any existing spectrograph, especially on bigger telescopes like the 10 m Southern African Large Telescope (SALT) and the upcoming 30 m class telescopes. § INTRODUCTION Polarimetry refers to the measurement of the polarization state of light coming from a source. It entails ascertaining the amount and direction/sense of any preferred polarization state in the light beam. Polarimetry is a powerful tool that has been used by astronomers to understand the physics of diverse classes of astrophysical objects, such as active galactic nuclei (AGN), supernovae, protoplanetary systems, and dust clouds in the interstellar medium (ISM) <cit.>. In particular, it is useful in the study of objects that have an inherent asymmetry in their light emission, absorption or propagation mechanism. Often, polarimetry is the only method to determine the geometry of an astrophysical system, information that cannot be obtained by other methods such as imaging and spectroscopy.
Astronomers have been building polarimeters with ever-increasing precision and accuracy, with most present-day instruments reaching accuracies of the order of 0.1% or better <cit.> at optical wavelengths. Most modern optical polarimeters in astronomy come in two broad kinds: imaging polarimeters and spectropolarimeters. As the name suggests, imaging polarimeters create a polarization (Stokes parameters) map of a sky field in different broadband filters, while spectropolarimeters enable polarization measurement as a function of wavelength. In general, spectropolarimeters are a more powerful tool to probe the physics of astronomical sources, just as the spectrum of a source encodes more information than the intensity alone obtained through images. Spectropolarimetry has enabled astronomers in the past to diagnose very difficult astrophysical problems such as the physics of flaring and jetted compact objects. A seminal example is the work that led to the unification of the Type I and II AGNs into one class<cit.>. Due to limitations of available instrumentation technology, spectropolarimetry has to date been available only in long-slit mode, effectively allowing for observations of only point sources. Thus, development of polarimetry-capable Integral Field Units (IFUs) will enable measurements of extended objects, hitherto unfeasible with current-day instruments. Such a system, generically referred to as an Integral Field Spectropolarimeter (IFSP), will generate 6D data as output: two spatial dimensions x,  y, wavelength λ, intensity I, and Stokes parameters q and  u (or p and θ)[If the circular Stokes parameter v is included, it will be a 7D data system.]. While polarimetry (including spectropolarimetry) and IFUs <cit.> have become commonly available observational tools for most astronomers, IFSP has rarely been attempted. We refer interested readers to an excellent introductory paper on IFSP concepts and possible instrument design ideas by Rodenhuis et al.<cit.>. To tackle the challenge of building a simple and easily adaptable IFSP system for astronomy, we are developing FiberPol(-6D), a fiber-fed IFSP system. FiberPol will be deployed as a front-end system on the existing SpUpNIC spectrograph<cit.> on the South African Astronomical Observatory's (SAAO) 1.9 m Radcliffe telescope at Sutherland. SpUpNIC is a general-purpose 2 arc-minute long-slit spectrograph with a grating suite covering the wavelength range from 350 to 1000 nm and at spectral resolutions between 500 and 6000. The primary science objective with FiberPol is to study the dust properties in various astrophysical environments, including the ISM of nearby galaxies and objects within the Milky Way such as nebulae, molecular clouds, supernova remnants, asteroids and comets. Even for point objects such as supernovae and AGN, the 2D field will be efficient in measuring the ISM-introduced foreground polarization by simultaneous observation of field stars, thus allowing astronomers to accurately estimate the inherent polarization of the source. Spectropolarimetry is an invaluable tool to study interstellar dust<cit.>. It is used most often to measure the wavelength-dependent polarization of starlight due to the dichroic extinction by interstellar dust aligned with the ambient magnetic field. The shape of the polarization distribution over wavelength is very well constrained by the “Serkowski function”<cit.>, and is ubiquitously found for all sight-lines in the Milky Way.
The Serkowski function parameters place direct constraints on the size, nature and alignment of dust grains, and are one of the key ingredients of modern dust models <cit.>. While Serkowski curves for sight-lines in the Milky Way have been examined in great detail, similar studies have not been done for other galaxies. External galaxies, being extended objects, would need to be observed through an IFSP mode. Only with such observations will we be able to test and extend our understanding of current dust models and grain alignment theories to the local Universe. In addition to developing a general-purpose fiber-based IFSP technology, a coequal motivation for building FiberPol is to demonstrate high-accuracy optical polarimetry using fibers coupled to a general spectrograph. This demonstration opens a path to relatively inexpensive and retrofittable polarimetric instruments accessible to the astronomy community on telescopes of all sizes. Currently, polarimetry is done through specialized and dedicated instruments (polarimeters), which are very expensive and difficult to maintain. As a result, polarimetry is often forgone on many small, and almost all large, telescopes due to the requirements of large, expensive and fragile optics, and the challenging optical designs to accommodate these. FiberPol opens up the possibility of using relatively small, inexpensive fiber-fed polarimetric units to pre-process the polarization signal with very little light loss. In the past, there have been successful efforts in using fibers with polarimetric instrumentation, especially to relay polarimetrically analyzed light beams to bench spectrographs for high-resolution spectropolarimetry measurements<cit.>. We capitalize on this fiber-based pre-processing concept and extend it to integral field spectroscopy. In this paper, we present the overall design of FiberPol and its current development status. Section <ref> introduces the top-level concept and technical goals for FiberPol. The optical and optomechanical design of the instrument is described in Section <ref>. The current status of the instrument, the next steps in the development and the schedule for commissioning are presented in Section <ref>. § FIBERPOL CONCEPT: TECHNICAL GOALS AND CONSTRAINTS The schematic of the top-level concept of FiberPol is shown in Figure <ref>. Using a rotating half-wave plate and a Wollaston prism (WP) as the polarization analyzer system, FiberPol executes two-channel linear polarimetry, and each channel is fed to an array of 14 (13 object and 1 sky) fibers, corresponding to a field of view (FoV) of 10×20  arcseconds^2. These fiber arrays are then rerouted to form a pseudo-slit input to SpUpNIC, such that the input to the spectrograph remains a 1D array/object. As output from SpUpNIC, we obtain 28 spectra, one for each fiber. For each of the 14 spatial positions on sky, the normalized difference between the corresponding spectra from each channel (referred to as the e and o spectra) yields the Stokes parameter values across wavelength. In addition to a polarimetric mode, we have made provision for a non-polarimetric, conventional IFU mode with FiberPol. This IFU system will have twice the FoV area compared to the polarimetric mode; details are given in Section <ref>. We design FiberPol to function as an add-on mode that can be easily deployed, on-demand, before the night begins. Thus, FiberPol will introduce IFSP capability to SpUpNIC without making any changes to the telescope or instrument.
The specifications of the telescope, SpUpNIC and the site conditions are listed in Table <ref>; FiberPol has been designed accordingly. §.§ Why a 2-channel design The technical goals of the FiberPol instrument are captured in Table <ref>. Polarization signals at optical wavelengths, especially those emanating from dichroic absorption and scattering by Galactic dust, are very small, often at levels of 0.5 % or less, requiring polarimeters to achieve accuracies of 0.1 % or better<cit.>. Obtaining such high accuracies makes calibration of polarimeters a challenging instrumentation problem. With FiberPol, the critical design goal is to achieve a polarimetric accuracy of 0.1% per spectral and spatial resolution bin. Two-channel or four-channel polarimetry has been demonstrated as the most accurate way of measuring polarization. Astronomers very often use a WP as the main analyzer element instead of a simple linear polarizer for the following reasons: (1) While the sky is not birefringent at optical wavelengths, atmospheric conditions such as transparency can change during the course of observations. This can lead to erroneous polarization measurements unless the two intensity measurements for the q and u parameters are made simultaneously. This is mitigated by using WP-like beamsplitters, which separate the two orthogonal polarization states, called the ordinary (o) and extraordinary (e) beams, and allow their intensities to be measured together. (2) By using a simple polarizer, for low-polarization sources encountered in astronomy, half the amount of light received from the source (the component whose polarization is orthogonal to the polarizer axis) is lost. The standard practice is to place a WP at a pupil plane inside the instrument, where the incident pupil is split symmetrically along the optical axis into o and e beams. At the detector plane, each source/star appears as two images, called the o and e images. In FiberPol, the light from the objects is fed to fibers only after the polarimetric analysis by the WP has been executed. This ensures that the polarization scrambling behaviour of fibers is mitigated. From thereon, the only effect that fibers and downstream optics can introduce is preferential transmission of one beam (polarization) over the other; this is a commonly encountered effect in all polarimeters and will need to be corrected by careful polarimetric calibration using on-sky standard sources. While polarimetric calibration methods for narrow and wide FoV polarimeters exist<cit.>, as IFSP has not been attempted before, we will develop methods to carry out accurate calibration for FiberPol. §.§ Beam speed considerations The telescope nominally feeds a very slow f/18 beam as input to the long slit of SpUpNIC. Fibers induce significant focal ratio degradation (FRD) at such slow speeds. To minimize light loss due to FRD, we aimed to feed the fibers at an f/# between 3.5 and 4.5. While FRD will still produce a faster fiber output, the change is modest. The best fit to laboratory data on fiber FRD yields the following equation: f_o = 2.6998× ln(f_i) - 0.08, where f_o is the output f/# corresponding to 98 % encircled energy for a given input f/# = f_i (see Chattopadhyay et al. <cit.> for a detailed exposition on this subject). We use this equation to design a reimaging system that captures up to 98% of the light from the telescope.
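As a quick consistency check, the FRD relation above reproduces the quoted f/3.45 output for an f/3.7 fiber feed; this is only a sketch of the published best-fit formula, not instrument software.

```python
import math

def frd_output_fnumber(f_in):
    """Best-fit FRD relation quoted above: output f/# at 98% encircled energy."""
    return 2.6998 * math.log(f_in) - 0.08

print(round(frd_output_fnumber(3.7), 2))   # ~3.45, the value adopted for the pseudo-slit feed
```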
To account for fiber FRD, in addition to separating the orthogonal polarizations using a WP, the e and o beams need to be sped up before being fed to the fibers. Both of these tasks are done by the subsystem called the "Reducer" (i.e., a focal reducer), which is described later. The input f/# to the fibers is 3.7 and, based on Equation <ref>, the output is expected to be 3.45. The other main subsystem of the instrument, called the "Expander" (i.e., a focal expander), takes as input the fibers arranged in the form of a 2' pseudo-slit at f/3.45, and expands the beam back to f/18 to feed SpUpNIC. A contributing factor to enhanced FRD is non-telecentricity in the input beam to the fibers, as discussed by Chattopadhyay et al., who detail the importance of feeding telecentric beams. We aim to restrict the non-telecentricity to <0.1^∘ for all wavelengths and field points. §.§ Available field The slit length of 2' is a physical limit of the SpUpNIC spectrograph; this places a constraint on the total number of fibers in each of the e and o arrays. In the first phase of the project, we are designing IFUs with 100 μm core fibers, as their mounts can be accurately fabricated by the SAAO workshop with existing wire-EDM tooling. These cores sample 2.93 arcsec in diameter on sky. In the subsequent phase of the project, we plan to make IFUs using 50 μm core (85 μm buffer) fibers, which will allow more spatial points on sky with finer spatial sampling, at the expense of a smaller FoV and lower fill factor. In our case, the total number of allowable 100 μm core fibers with a 140 μm outer diameter (cladding and buffer) comes out to be 28 at the f/3.45 focal plane. §.§ Cost considerations As one of the main goals of FiberPol is to demonstrate an affordable IFSP technology using fibers and small-size, low-cost optics, we decided to design FiberPol employing commercially available, off-the-shelf optical and opto-mechanical components. Thus, except for the fiber mounts, all optics and opto-mechanical components have been chosen from existing vendor catalogs. The optimized wavelength range for the FiberPol system is 400 to 700 nm for two congruent reasons: (a) anti-reflection coatings optimized for this wavelength window are available for most off-the-shelf optical components, and (b) the peak of the Serkowski curve is usually found to be centered around 550 nm. FiberPol is designed as a front-end system for the existing spectrograph, SpUpNIC; as such, FiberPol must be compact and fit in a small volume inside the acquisition box sitting above the spectrograph. This allowable volume spans roughly 18 cm in total along the optical axis, with more room in the other two dimensions. While conforming to the above constraints of volume, coatings and lens prescriptions, FiberPol needs to deliver high-quality (seeing-limited) imaging and throughput performance (∼80%, excluding telescope and SpUpNIC losses). § FIBERPOL DESIGN §.§ Choice of Polarizer System After a careful study of multiple potential optical designs, we completed the instrument's optical and subsequent optomechanical system design. The instrument's polarization analyzer subsystem was at the core of the optical design. We considered the following three options. * Savart Plates. Owing to the large birefringence of calcite, a calcite plate, when placed near the focal plane, can split the beam into orthogonal polarization components with a linear displacement between them.
While this is an attractive feature, calcite is unsuitable for use in broad wavelength applications due to the strong birefringence dependence on wavelength, causing the beams coming out of the plates to be dispersed. This effect has been explained in detail in the optical design paper of the Wide-Area Linear Optical Polarimeter (WALOP)-South instrument that employed calcite Wollaston prisms<cit.>. While other birefringent materials such as quartz and magnesium fluoride (MgF_2) have negligible birefringence dependence on wavelength, their birefringence value is an order of magnitude smaller than that of calcite and would require extremely thick (> 10 cm) and custom-made plates. * Polarization Beam-Splitter (PBS): a wire-grid or dielectric-coated PBS splits an incident beam into orthogonal polarization states at 90^∘. Wire-grid PBSs have found application in two-channel polarimeters such as MOPTOP<cit.>, where, using a rotating half-wave plate (HWP) as the polarization modulator, a moderate on-sky accuracy of 0.25 % was achieved. A key advantage for the PBS is the near-constant beam-splitting performance over large angles of incidence and broad wavelength ranges. A critical drawback, especially in the case of instruments that aim for high-accuracy polarimetry, is that although the transmitted beam has a high extinction ratio of > 10^4, the corresponding value for the reflected beam is less than 100. * WP: A WP with a modulating HWP system has been used in astronomy as the default analyzer system whenever aiming for accuracies of 0.1% or better. WPs separate the orthogonal polarization states (called e and o beams) with high extinction ratios (> 10^4). When the beam splitting by the WP is done at the pupil plane, each source/star appears as two (e and o) images at the detector plane, at almost equal distance from the nominal position of the star/source. Usually, WPs (and other polarimetric beam-splitters) split 0^∘ and 90^∘ polarizations, from which q can be measured. In order to measure u, instead of rotating the whole instrument along the optical axis in the Instrument Coordinate System (ICS), the HWP is used to rotate the plane of polarization of the incoming beam with respect to the ICS. An HWP with its fast axis at an angle α to the ICS rotates the polarization plane by an angle 2α. A typical polarization measurement cycle for a two-channel polarimeter consists of measurements at HWP angles of 0^∘, 22.5^∘, 45^∘ and 67.5^∘: while the first two correspond to q and u measurements, the latter two correspond to nominal measurements of -q and -u (which enables quantification and correction of instrument-induced polarization, called instrumental polarization). Due to the ability for high-accuracy polarimetry and the symmetrical splitting of the beams, we chose a WP+HWP as the polarization analyzer system for FiberPol. The following subsections describe the optical design of FiberPol. §.§ Reducer Optical Design Figure <ref> shows the optical design layout of the FiberPol Reducer system. A half-wave plate (HWP), mounted on a motorized rotary stage, is the first element, followed immediately by a half-inch achromatic field-reducer lens that creates an intermediate f/6.8 focus. Just before this focus, there is a provision for a fold mirror to be inserted in the path when the non-polarimetric (spectroscopic) mode is used. After the intermediate focus, for the polarimetry mode, a set of three lenses of half-inch and one-inch sizes creates a collimated beam about 9 mm in diameter. A MgF_2 WP is placed at the pupil.
The WP splits the e and o beams symmetrically at 1.2^∘ at 500 nm. The collimated e and o beams pass through a fold mirror, after which two lenses create a flat and telecentric (≤ 0.1^∘ across the FoV) f/3.7 focal plane for the fiber feed. The fold mirror is placed after the WP to make the design compact. The HWP is 1 inch in aperture and is made of quartz and MgF_2 plates designed for achromatic performance in the wavelength range of 400 to 800 nm. Table <ref> lists the details and source of the polarizer optics used in the FiberPol design. The design employs two telecentricity-correcting concave field lenses, one after the f/6.8 focus and one before the final f/3.7 focal plane. In addition to allowing for a gradual step-down of the f/# of the beam from f/18 to f/3.7, the f/6.8 intermediate focus provides a plane where a physical mask (field stop) is placed to prevent e-polarized light from outside the FoV from entering the o-beam fiber array footprint at the final f/3.7 focal plane, and vice versa. For the spectroscopic arm, the design consists of a telecentricity-correcting concave field lens followed by a combination of collimating and focusing half-inch achromatic lenses. The major challenges in creating the optical design were to meet the goals of obtaining seeing-limited PSFs at the f/3.7 focal plane while achieving better than 0.1^∘ non-telecentricity in the constrained instrument volume. We minimized the number of optical elements in the design to incur minimal reflection losses. All the lenses used in the design are available as pre-mounted lenses that can then be easily integrated into a cage system. This makes the assembly and alignment process easier, as we do not require high-precision optical alignment, as described later. Figure <ref> shows the spot diagram for the FoV of the polarimetric arm for one of the two beams. The spot diagrams for the o and e beams are nearly identical as they follow similar optical paths. For reference, the root-mean-square (RMS) spot size corresponding to the median seeing of 1.5" is 19.2 μm. Thus, with the presented optical design, the spot-diagram RMS radii of ∼7 μm are around 3 times smaller than the seeing RMS, which will enable seeing-limited PSFs if the alignment of the optics is maintained in the assembly. To estimate the effect of assembly misalignment on the optical performance, we carried out a tolerance analysis of the system. The summary of the analysis is the following: if the alignments can be maintained within ±50 μm for air-gap separations between lenses, ±100 μm for centering, and ±2' for tilt of individual lenses, the degraded RMS radii will be under 8 μm for the full FoV. The nominal separation between the e and o images at the f/3.7 focal plane is 0.536 mm. This separation allows sufficient space for three rows of fibers to be placed for each channel with a 150 μm separation between them (and in turn drives the geometry of the fiber arrays), as shown in Figure <ref>(a). §.§ Expander Optical Design As noted in the previous sections, the "Expander" subsystem takes as input the fiber pseudo-slit at f/3.45 and creates an f/18 beam which matches the telescope beam at the slit input. Figure <ref> shows the optical layout of the "Expander" system. In addition to these optics, there is a field lens that matches the field lens lying immediately after the SpUpNIC slit. This is required because, in order to mount FiberPol on SpUpNIC, the existing slit mechanism is replaced whenever FiberPol is in use.
The slit assembly of SpUpNIC consists of the pneumatic slit as well as the field lens. As in the "Reducer", multiple fold mirrors are included to package the optical system and accurately mount the system on the SpUpNIC focal plane assembly. §.§ Optomechanical Design Figure <ref> shows the complete FiberPol model with all the optics mounted inside a cage plate system. The decision to design a cage-system-type optomechanical assembly was motivated by the following considerations: (a) we expect to meet the required alignment accuracies for the system using a cage system, (b) cage systems are easy to assemble, align and characterize in the lab, and (c) a large and diverse range of optomechanical mounts and stages (e.g., rotary stages) compatible with a cage system are available in vendor catalogs. As noted earlier, all mounts and components in the model are commercially available off-the-shelf items. Preliminary results from the testing of the optical system presented in Section <ref> indicate that the optical system has been aligned to the required accuracy using the cage-system assembly. § DATA REDUCTION & CALIBRATION * Spectroscopic calibration. * Polarimetric calibration. § CURRENT STATUS Figure <ref> shows the assembled "Reducer" system of FiberPol being fed by an f/18 beam to emulate the beam coming from the telescope. The only difference to the final "Reducer" assembly is the addition of an image relay system consisting of an identical pair of 75 mm focal-length achromatic lenses to allow a sufficient gap between the last lens surface and the detector camera. The first test done on the system was to ascertain the separation of the e and o images at the focal plane. The nominal separation obtained from the optical design is 0.536 mm, while the measured separation value is 0.536±0.008 mm, confirming that the FiberPol assembly in the lab is behaving as per design expectations. All the optical and optomechanical components for the instrument have been procured. The instrument assembly and testing have commenced, with on-sky commissioning scheduled for the latter part of 2024. During the commissioning, we will demonstrate the technical capabilities of FiberPol and also carry out scientific observations, including the study of the ISM of nearby galaxies. The development of an affordable and general-purpose FiberPol-like IFSP system will be a major advancement in the field of optical polarimetry instrumentation. It will have two lasting benefits for the astronomy community: * As a novel observational capability, it will open up a new phase-space of observational measurements, i.e., spectropolarimetry of extended objects. Among the many scientific studies that this will enable, we will study the ISM of nearby galaxies through spectropolarimetry. The objective of the study is to obtain the Serkowski curves (p vs λ) and test the models of grain alignment as a function of galaxy type and galactic environment. Comets, asteroids and nebulae are also well suited to be studied through FiberPol. * Enabling (affordable) polarimetry on large (ELT) telescopes: FiberPol can be modified for use on any existing spectrograph, especially on bigger telescopes like the 10 m SALT and the upcoming 30 m class telescopes. For example, applications to other existing spectrographs such as NIRWALS, HRS and RSS on the SALT could be achieved with similarly small-sized and low-cost optics.
Polarimetric instrumentation is most often discarded on large telescopes due to the requirements of large, expensive and fragile optics, and instrument development challenges. As an illustration, none of the current first-generation instruments for the three ELTs have polarimetric capabilities, even though the scientific merit for polarimetry with these instruments is generally agreed by the community to be compelling. At the same time, FiberPol will make spectropolarimetry and IFSP technology accessible to existing small telescopes and observatories, as they often cannot afford a dedicated polarimeter. Here we discuss briefly how to apply FiberPol on two potential telescopes: the 10 m SALT and the 25 m Giant Magellan Telescope (GMT)<cit.>, which have direct Cassegrain-type ports available for instrument mounting and do not have Nasmyth-type tertiary mirrors. SALT has a 10 m primary mirror and an f/4.2 prime-focus plane<cit.> that corresponds to a plate scale of ∼4.7"/mm. Using an optical system similar to FiberPol's, we can employ a 25 mm lens immediately after the prime-focus plane[the first lens can be within 5-10 mm to minimize vignetting due to the expanding beam cone after the focus. If the first lens is placed farther, it will lead to a smaller FoV with the same optics size, or it will need larger optics. Current catalogs of off-the-shelf lenses have maximum apertures of 50 mm.] which will allow access to a field of view of 2 arcminutes. Unlike the current version of FiberPol, we will not need a field-reducer lens to create an intermediate focus, and the collimator will be the first optical system. Further simplifying the implementation, there will be no need for an "Expander" optical system: the fiber can be fed at a slightly slower f/# (using Equation <ref>) to compensate for FRD and match the telescope's f/# at the exit. On larger telescopes like SALT, we may need to increase the size of the pupil, and thus the aperture of the lenses feeding the WPs, in order to allow for sufficient separation between the e and o images at the fiber-injection plane. The 25.4 m GMT, with its Gregorian-type design, produces an f/8.2 beam (1"/mm plate scale) at the focal plane after the wide-field corrector. A similar optical system to that described for SALT can be used, in addition to an Expander to match the telescope's f/# at the fiber exit plane. A 25 mm aperture will allow for a field of view of 20 arcseconds. For telescopes with Nasmyth focal planes, implementing polarimetry requires additional optical and optomechanical elements (e.g., a derotator system) to measure and compensate for the polarization and other systematics introduced by the tertiary mirror, which change in real time during an observation. The authors acknowledge the support of the National Research Foundation of South Africa for this project through the Salt Research Chair grant SARChI-114555.
http://arxiv.org/abs/2406.17654v1
20240625154639
MDHA: Multi-Scale Deformable Transformer with Hybrid Anchors for Multi-View 3D Object Detection
[ "Michelle Adeline", "Junn Yong Loo", "Vishnu Monn Baskaran" ]
cs.RO
[ "cs.RO", "cs.AI" ]
§ ABSTRACT Multi-view 3D object detection is a crucial component of autonomous driving systems. Contemporary query-based methods either depend on dataset-specific initialization of 3D anchors, introducing bias, or utilize dense attention mechanisms, which are computationally inefficient and unscalable. To overcome these issues, we present MDHA, a novel sparse query-based framework, which constructs adaptive 3D output proposals using hybrid anchors from multi-view, multi-scale input. Fixed 2D anchors are combined with depth predictions to form 2.5D anchors, which are projected to obtain 3D proposals. To ensure high efficiency, our proposed Anchor Encoder performs sparse refinement and selects the top-k anchors and features. Moreover, while existing multi-view attention mechanisms rely on projecting reference points to multiple images, our novel Circular Deformable Attention mechanism only projects to a single image but allows reference points to seamlessly attend to adjacent images, improving efficiency without compromising on performance. On the nuScenes val set, it achieves 46.4% mAP and 55.0% NDS with a ResNet101 backbone. MDHA significantly outperforms the baseline, where anchor proposals are modelled as learnable embeddings. Code will be available at <https://github.com/NaomiEX/MDHA>. § INTRODUCTION Multi-view 3D object detection plays a pivotal role in mapping and understanding a vehicle's surroundings for reliable autonomous driving. Among existing methods, camera-only approaches have gained immense traction as of late due to the accessibility and low deployment cost of cameras as opposed to conventional LiDAR sensors. Camera-only methods can be split into BEV-based methods <cit.>, where multi-view features are fused into a unified Bird's-Eye-View (BEV) representation, and query-based methods <cit.>, where 3D objects are directly modelled as queries and progressively refined based on image features. The construction of BEV maps from input features involves a non-trivial view transformation, rendering it computationally expensive. Moreover, BEV representations suffer from having a fixed perception range, constraining their adaptability to diverse driving scenarios. For instance, in scarcely populated rural areas with minimal visual elements of interest, constructing these dense and rich BEV maps is a waste of computational resources. In contrast, query-based methods bypass the computationally intensive BEV construction and have recently achieved comparable performance to BEV-based methods while retaining flexibility. These methods also lend themselves well to further sparsification to achieve even greater efficiency. Two predominant series of models have emerged within query-based approaches: the PETR-series <cit.>, which generates 3D position-aware features from 2D image features via positional embeddings, and the Sparse4D-series <cit.>, which refines 3D anchors using sparse feature sampling of spatio-temporal input. However, since the PETR models employ a dense cross-attention mechanism, they cannot be considered truly sparse methods. Moreover, they do not utilize multi-scale input features, limiting their scalability to detect objects of varying sizes.
Although these are rectified by the Sparse4D models, they instead rely on anchors initialized from k-means clustering on the nuScenes <cit.> train set, thus potentially compromising their ability to generalize to real-world driving scenarios. To address these issues, we propose a novel framework for query-based 3D object detection centered on hybrid anchors. Motivated by the effectiveness of 2D priors in 3D object detection <cit.>, we propose the formation of 2.5D anchors by pairing each token's 2D center coordinates with corresponding depth predictions. These anchors can be projected, with known camera transformation matrices, to generate reasonable 3D output proposals. Our usage of multi-scale, multi-view input leads to a large number of tokens, and thus, of 3D proposals, posing significant computational burden for refinement. To alleviate this, our proposed Anchor Encoder serves the dual-purpose of refining and selecting top-k image features and 3D proposals for subsequent iterative refinement within the spatio-temporal MDHA Decoder. On top of this, to improve efficiency, we propose a novel Circular Deformable Attention mechanism, which treats multi-view input as a contiguous 360^∘ panoramic image. This allows reference points to seamlessly attend to locations in adjacent images. Thus, our proposed method eliminates the reliance on good 3D anchor initialization, leverages multi-scale input features for improved detection at varying scales, and improves efficiency through our novel sparse attention mechanism, which offers greater flexibility than existing multi-view attention mechanisms, where attention is confined to the image in which the reference point is projected. Figure <ref> illustrates the comparison between our method and the PETR and Sparse4D models. To summarize, our main contributions are as follows: * We propose a novel framework for sparse query-based multi-view 3D object detection, MDHA, which constructs adaptive and diverse 3D output proposals from 2D→2.5D→3D anchors. Top-k proposals are sparsely selected in the Anchor Encoder and refined within the MDHA decoder, thereby reducing reliance on 3D anchor initialization. * An elegant multi-view-spanning sparse attention mechanism, Circular Deformable Attention, which improves efficiency without compromising performance. * On the nuScenes val set, MDHA significantly outperforms the learned anchors baseline, where proposals are implemented as learnable embeddings, and surpasses most SOTA query-based methods. § RELATED WORKS §.§ Multi-View 3D Object Detection BEV-based methods perform 3D object detection by leveraging a Bird's-Eye-View feature representation acquired by transforming image features. In most works <cit.>, this transformation follows the Lift-Splat paradigm <cit.>, which involves "lifting" image features into 3D space using depth predictions, and "splatting" them onto the BEV plane by fusing features that fall into the pre-defined grids. For instance, BEVDet <cit.> constructs a BEV map through this view transformation, refines it with a BEV-Encoder and generates predictions with a detection head. BEVDet4D <cit.> extends this framework by incorporating temporal features, which are projected onto the current frame, while BEVDepth <cit.> introduces explicit depth supervision with a camera-aware depth estimation module. Meanwhile, BEVFormer <cit.> adopts a parallel approach where BEV pillars are modelled as dense queries, and deformable attention is employed to aggregate spatio-temporal information for BEV refinement. 
In contrast, query-based methods circumvent the complex construction of BEV maps by modelling objects implicitly as queries. DETR3D <cit.> spearheaded this class of methods by learning a 3D-to-2D projection of the predicted 2D object centers, yielding reference points which, in conjunction with the image features sampled via bilinear interpolation, are employed within the decoder for query refinement. Despite being a representative sparse query-based method, it suffers from poor performance as it neglects temporal information. Sparse4D <cit.> rectifies this by projecting 4D keypoints onto multi-view frames across multiple timestamps, and sparsely sampling corresponding features, which are then hierarchically fused. Sparse4Dv2 <cit.> improves both its performance and efficiency by adopting a recurrent temporal feature fusion module. PETR <cit.> diverges from DETR3D by encoding 3D spatial information into input features via positional embedding, eliminating the need for 3D-to-2D projection. PETRv2 <cit.> extends this framework to other 3D perception tasks, namely BEV segmentation and lane detection, while Focal-PETR <cit.> adds a focal sampling module which selects discriminative foreground features and converts them to 3D-aware features via spatial alignment. Finally, StreamPETR <cit.> demonstrates impressive performance by adopting an object-centric temporal fusion method which propagates top-k queries and reference points from prior frames into a small memory queue for improved temporal modelling. §.§ Depth and 2D Auxiliary Tasks for 3D Object Detection In an emerging trend, more frameworks are integrating depth or 2D modules, with some combining both, to provide auxiliary supervision <cit.>, or to directly contribute to the final 3D detection <cit.>. In particular, Sparse4Dv2 and Sparse4Dv3 <cit.> both implement an auxiliary dense depth supervision to improve the stability of the model during training, while FocalPETR <cit.> and StreamPETR <cit.> both utilize auxiliary 2D supervision for the same reason. In contrast, certain methods have integrated the outputs of these auxiliary modules into the final 3D prediction. For instance, BEVFormer v2 <cit.> proposes a two-stage detector, featuring a perspective head that suggests proposals used within the BEV decoder. SimMOD <cit.> generates proposals for each token through four convolutional branches, predicting the object class, centerness, offset, and depth, where the first two are used for proposal selection and the latter two predict the 2.5D center. MV2D <cit.> employs a 2D detector to suggest regions of interest (RoI), from which aligned features and queries are extracted for its decoder. Far3D <cit.> focuses on long-range object detection by constructing 3D adaptive queries from 2D bounding box and depth predictions. Our approach differs from existing works by streamlining the auxiliary module, requiring only depth predictions. By pairing depth values with the 2D center coordinates of each feature token, we bypass the need to predict 2D anchors, reducing the learning burden of the model. Furthermore, proposal selection and feature aggregation are both executed by a transformer encoder which also performs sparse refinement. § MDHA ARCHITECTURE §.§ Overview Figure <ref> shows the overall architecture of MDHA. Given N-view input RGB images, the backbone and FPN neck extracts multi-scale feature maps, {F_l}^L_l=1, where F_l∈ℝ^N× C× H_l× W_l, C denotes the feature dimension, and (H_l, W_l) refers to the feature resolution at level l. 
For each feature token, the DepthNet constructs 2.5D anchors, which are projected to obtain 3D output proposals (Section <ref>). The single-layer MDHA Anchor Encoder refines and selects the top-k features and proposals to pass onto the decoder (Section <ref>). The D-layer MDHA Decoder conducts iterative refinement of selected anchors using spatio-temporal information (Section <ref>). Throughout the model, Circular Deformable Attention (CDA) is employed as an efficient multi-view-spanning sparse attention mechanism (Section <ref>). The model is trained end-to-end with detection and classification losses, with explicit depth supervision (Section <ref>). §.§ DepthNet Constructing reasonable proposals for 3D object detection is not an easy feat. While a naive approach involves parameterizing initial anchors, making them learnable, the large search space makes this a non-trivial task, and sub-optimal anchors run the risk of destabilizing training and providing inadequate coverage of the perception range (Section <ref>). Although Sparse4D avoids this issue by initializing 900 anchors from k-means clustering on the nuScenes train set, this introduces bias towards the data distribution present in nuScenes. Thus, with the aim of generating robust proposals for generalized driving scenarios, we opt to use the 2D center coordinates of each feature token to construct a set of 2.5D anchors. For the i-th token, we define the 2.5D anchor as: A^2.5D_i=[x^2D_i, y^2D_i, z_i]^T where (x^2D_i, y^2D_i)=(x_i + 0.5/W_l· W_inp, y_i + 0.5/H_l· H_inp) are the 2D coordinates of the token's center, z_i is the predicted depth of the object center, and (W_inp, H_inp) is the input image resolution. By fixing the (x,y) coordinates, the model only focuses on learning the depth value, reducing the search space from ℝ^3 to ℝ^1. Furthermore, since each feature token covers a separate part of the image, the resulting 3D anchors are naturally well-distributed around the ego vehicle. These 2.5D anchors can then be projected into 3D output proposals using E_i and I_i, the camera extrinsic and intrinsic matrices, respectively, for the view in which the token belongs to: A^3D_i = E_i^-1 I_i^-1 [x^2D_i ∗ z_i, y^2D_i ∗ z_i, z_i, 1]^T The DepthNet assumes that an object is centered within a token and predicts its depth relative to the ego vehicle. Below, we examine two approaches in obtaining depth suggestions: Fixed Depth. From Figure <ref>, we observe a clear pattern in the depth distribution within each camera: depth increases as we travel up the image. Motivated by this observation, we sample z_i from this distribution, eliminating the need for the model to predict the depth entirely. We argue that this is generalizable since, in most driving scenarios, all objects are situated on a level plane. Hence, for an object center to appear higher up on the image, it must be further away from the ego vehicle. Learnable Depth. We also explore a more adaptive approach for obtaining depth maps, {z_l}_l=1^L, where z_l∈ℝ^N× C× H_l× W_l is predicted as follows: z_l = σ(Ψ(F_l)) ∗ (D^max-D^min) + D^min where Ψ is a single layer convolutional head and D^max, D^min are the maximum and minimum depth values. §.§ MDHA Anchor Encoder Within the encoder, each image token acts as a query and undergoes refinement via self-attention using our sparse CDA mechanism, where (x^2D_i, y^2D_i) serve as 2D reference points for each query. 
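For illustration, the 2D→2.5D→3D anchor construction described above can be written out in a few lines; this is a hedged NumPy sketch that assumes 4×4 homogeneous intrinsic and extrinsic matrices and is not the authors' released implementation.

```python
import numpy as np

def build_25d_anchors(H_l, W_l, H_inp, W_inp, depth):
    """2.5D anchors for one feature level: fixed 2D token centres (in input-image
    pixels) paired with a predicted depth map 'depth' of shape (H_l, W_l)."""
    ys, xs = np.meshgrid(np.arange(H_l), np.arange(W_l), indexing="ij")
    x2d = (xs + 0.5) / W_l * W_inp
    y2d = (ys + 0.5) / H_l * H_inp
    return np.stack([x2d, y2d, depth], axis=-1).reshape(-1, 3)

def project_to_3d(anchors_25d, E, I):
    """A^3D = E^-1 I^-1 [x*z, y*z, z, 1]^T for the view the tokens belong to."""
    x, y, z = anchors_25d.T
    homo = np.stack([x * z, y * z, z, np.ones_like(z)], axis=0)    # (4, N)
    cam = np.linalg.inv(E) @ np.linalg.inv(I) @ homo
    return cam[:3].T                                               # (N, 3) output proposals
```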
The addition of 3D position embedding from PETR was found to stabilize training by providing 3D positional context for the 2D image tokens. Additionally, we employ (x^2D_i, y^2D_i, z_i) sinusoidal position embeddings to encode the token's learned 2.5D anchor. For each token, two MLP heads predict the object classification score and offsets to refine the token's corresponding 3D proposal. The resulting proposals with the top-k classification scores, A^3D_enc, are selected for further refinement within the decoder. We follow <cit.> by initializing the decoder queries as the corresponding top-k queries from the encoder, Q_enc. §.§ MDHA Decoder The decoder utilizes temporal information by leveraging a short memory queue consisting of sparse features and anchors from the last m frames. To account for the movement of objects between frames, historic 3D anchors, A^3D_t-τ=(x^3D, y^3D, z)_t-τ, are aligned to the current frame as follows: A^3D_t-τ→ t=EGO^-1_t·EGO_t-τ [A_t-τ^3D, 1]^T where EGO_t is the lidar-to-global transformation matrix at time t. Relevant historic features are then selected via the Temporal Self-Attention, which is implemented as vanilla multi-head attention <cit.> where the features from the memory queue serve as the key and value, while the query consists of (Q_enc, Q_t-1), with Q_t-1 being the query propagated from the previous frame. Due to the relatively small and fixed-length input, this operation's computational complexity does not scale with increasing feature map size and is dominated by the encoder's complexity, therefore, sparse attention is not required here. In the Circular Deformable Cross-Attention module, these selected queries efficiently attend to refined image features from the encoder via our CDA mechanism. Its reference points for decoder layer d, are obtained by projecting 3D anchors, A^d-1, from the previous layer, into 2D. For the first layer, these anchors are defined as A^0=[A^3D_enc, A^3D_t-1 → t], and the 3D-to-2D projection for view n is given by: Proj(A^3D, n) = I_n · E_n [A^3D, 1]^T Since a 3D point can be projected to multiple camera views, we either contend with a one-to-many relationship between queries and reference points or choose a single projected point via a heuristic. For this work, we opt for the latter as it is more efficient, and with our novel CDA mechanism, performance is not compromised. Our chosen heuristic involves selecting the point that is within image boundaries and is closest to the center of the view it is projected. Thus, our reference points are given by: (r^d_x, r^d_y)=(r_x, r_y)∈ℛmin‖[r_x, r_y]^T - [W_inp/2, H_inp/2]^T‖_2 where ℛ={Proj(A^d-1, n)}^N_n=1∖ℛ_⊘ is the set of projected 2D points, excluding points outside image boundaries, ℛ_⊘. For each query, two MLP heads predict classification scores and offsets, Δ A^d, which are used to obtain the refined anchors A^d = A^d-1 + Δ A^d for each decoder layer. Anchors and queries with the top-q scores are propagated into the memory queue. For improved efficiency, intermediate classification predictions are omitted during inference time. §.§ Circular Deformable Attention Self-attention in the encoder and cross-attention in the decoder utilize multi-scale, multi-view image features as attention targets. Due to the large number of feature tokens, executing these operations using vanilla multi-head attention is computationally expensive. Instead, we employ a modification of the deformable attention mechanism <cit.>. 
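Before detailing that mechanism, the 3D-to-2D reference-point selection used by the cross-attention above can be sketched as follows. The 4x4 homogeneous camera matrices, the perspective divide, and the positive-depth check are conventions assumed for this illustration, since the text leaves them implicit.

    import numpy as np

    def project_to_views(a3d, E, I):
        """Proj(A^3D, n) = I_n E_n [A^3D, 1]^T for all N views, followed by an
        assumed perspective divide to obtain pixel coordinates."""
        hom = np.append(a3d, 1.0)                  # (4,)
        cam = np.einsum("nij,j->ni", E, hom)       # ego -> each camera frame
        img = np.einsum("nij,nj->ni", I, cam)      # camera frame -> image plane
        depth = img[:, 2]
        return img[:, :2] / np.clip(depth[:, None], 1e-5, None), depth

    def pick_reference_point(a3d, E, I, W_inp, H_inp):
        """Keep in-bounds (and in-front) projections, then choose the one closest
        to the center of the view it lands in."""
        pts, depth = project_to_views(a3d, E, I)
        ok = ((depth > 0)
              & (pts[:, 0] >= 0) & (pts[:, 0] < W_inp)
              & (pts[:, 1] >= 0) & (pts[:, 1] < H_inp))
        if not ok.any():
            return None, None                      # anchor not visible in any view
        dist = np.linalg.norm(pts - np.array([W_inp / 2.0, H_inp / 2.0]), axis=1)
        dist[~ok] = np.inf
        n = int(np.argmin(dist))
        return n, pts[n]

Each query therefore carries a single in-view reference point; the CDA mechanism described next allows its sampling offsets to reach beyond that view when needed.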
A straightforward implementation of deformable attention in a multi-view setting is to treat each of the N views as separate images within the batch, then project each 3D anchor into one or more 2D reference points spread across multiple views as is done in most existing works <cit.>. However, this approach limits reference points to only attend to locations within their projected image. We overcome this limitation by concatenating the N views into a single contiguous 360^∘ image. Thus, our feature maps are concatenated horizontally as ℳ={M_l}^L_l=1, where M_l=[F_l1, F_l2, …, F_lN] and F_ln represents the input features at level l for view n. Given 2D reference points, r_x ∈ [0,W_inp], r_y ∈ [0,H_inp] local to view n, we can obtain normalized global 2D reference points, r̂^cda=(r̂^cda_x, r̂^cda_y)∈ [0,1]^2, for ℳ as follows: r̂^cda_x = r_x + (n-1)W_inp/N× W_inp, r̂^cda_y = r_y/H_inp Therefore, by letting q_i be the i-th query, Circular Deformable Attention is formulated as: CDA(q_i, r̂^cda_i, ℳ) = ∑^N_h_h=1 W_h [∑^L_l=1∑^S_s=1 A_hlis· W'_h x_l(p_hlis)] where h indexes the attention head with a total of N_h heads, and s indexes the sampling location with a total of S locations. W_h∈ℝ^C×(C/N_h), W'_h∈ℝ^(C/N_h) × C are learnable weights, and A_hlis∈[0,1] is the predicted attention weight. Here, x_l(p_hlis) refers to the input feature sampled via bilinear interpolation from sampling location p_hlis=ϕ(r̂^cda_i + Δ p_hlis), where ϕ scales the value to the feature map's resolution, and Δ p_hlis is the sampling offset obtained as follows: Δ p_hlis=Φ(q_i) / (W^ℳ_l, H^ℳ_l) with Φ denoting the linear projection. Since (W^ℳ_l, H^ℳ_l)=(N * W_l, H_l), Δ p_hlis is not bound by the dimensions of the view they are projected onto, inherently allowing the model to learn sampling locations beyond image boundaries. Considering that the Φ(q_i) is unbounded, the offset might exceed the size of the feature map at that level. Therefore, we wrap sampling points around as if the input were circular: p_hlis=p_hlis mod 1.0 In effect, this treats the first and last view as if they are adjacent. Overall, CDA works best if visual continuity is maintained between neighbouring images. Thus, we reorder inputs to follow a circular camera order, i.e., Front → Front-Right → Back-Right → Back → Back-Left → Front-Left. This arrangement places related features next to one another. Given that CDA only involves reshaping input features and inexpensive pre-processing of reference points, it adds negligible overhead on top of vanilla deformable attention. Figure <ref> illustrates the CDA mechanism. §.§ Training Loss We define a 3D detection as follows: [ x^3D , y^3D , z , w , l , h , θ , v^x , v^y ] consisting of the object's 3D center, width, length, height, yaw, and its x and y velocity. Also, let {b̃_j}^N_gt_j=1 and {z̃_j}^N_gt_j=1 be the set of ground-truth 3D detections and depths, while {b̂_i}^N_q_i=1 and {ẑ_k}^N_tok_k=1 represent the set of 3D detection and depth predictions, where N_tok = ∑^L_l=1 N× H_l× W_l. Under this setting, MDHA is trained end-to-end to minimize: ℒ=λ_1 ℒ_cls + λ_2 ℒ_det + λ_3 ℒ_depth where the classification loss, ℒ_cls is implemented using focal loss <cit.>, and the detection loss is defined as ℒ_det=1/N_gt∑^N_q_i=11_{b^targ_i≠∅}|b^targ_i - b̂_i|, with the chosen ground-truth target b^targ_i of prediction b̂_i obtained via bipartite matching <cit.>. Auxiliary classification and detection losses are employed for both the encoder and the intermediate decoder layers. 
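As a rough illustration of the first two terms (the depth term is described next), the classification and detection losses amount to a focal loss over per-query class scores and a masked L1 over bipartite-matched boxes. The shapes, the particular focal-loss variant, and the normalization in this sketch are assumptions, not the released implementation.

    import numpy as np

    def focal_loss(prob, target, alpha=0.25, gamma=2.0, eps=1e-8):
        """A standard binary focal-loss formulation for L_cls."""
        p_t = np.where(target == 1, prob, 1.0 - prob)
        a_t = np.where(target == 1, alpha, 1.0 - alpha)
        loss = -np.sum(a_t * (1.0 - p_t) ** gamma * np.log(p_t + eps))
        return loss / max(int(target.sum()), 1)

    def detection_loss(pred_boxes, target_boxes, matched, num_gt):
        """L_det: L1 over matched query-target pairs, normalized by the number of
        ground-truth boxes; unmatched queries contribute nothing."""
        diff = np.abs(pred_boxes - target_boxes)   # (N_q, 9) box parameters
        return np.sum(diff * matched[:, None]) / max(num_gt, 1)

    # hypothetical shapes: 900 queries, 10 classes, 9 box parameters, 25 ground truths
    scores, labels = np.random.rand(900, 10), np.zeros((900, 10))
    boxes, targets, matched = np.random.randn(900, 9), np.zeros((900, 9)), np.zeros(900)
    # lambda_1 = 2.0 and lambda_2 = 0.25 as in the implementation details
    loss = 2.0 * focal_loss(scores, labels) + 0.25 * detection_loss(boxes, targets, matched, 25)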
To calculate the depth loss ℒ_depth, we project b̃_j to all N views using (<ref>) to obtain {(x̃^2D_n, ỹ^2D_n)}^N^proj_j_n=1, where N^proj_j=N-N_⊘ and N_⊘ refers to the number of projected 2D points outside image boundaries. The target of depth prediction ẑ_k is denoted as z̃_m̂, with index m̂ obtained via: m̂=j∈ [1, N_gt]min D_kj where D_kj= nmin‖[x^2D_k, y^2D_k]^T - [x̃^2D_n, ỹ^2D_n]^T‖_1. Finally, we define the depth loss in (<ref>) as ℒ_depth=1/∑^N_gt_j=1 N^proj_j × L∑^N_tok_k=1W_k|z̃_m̂ - ẑ_k| where W_k=e^-D_km̂/ε represents the weight of prediction k, calculated using exponential decay, which decreases the further away the token's 2D coordinates are from the ground-truth object. Additionally, we impose a strict distance cutoff: W_k= W_k, if W_k>ρ 0, otherwise to ensure that only predictions in close proximity to ground truth objects propagate gradients, striking a balance between incredibly sparse one-to-one matching and exhaustive gradient propagation, for stable training. § EXPERIMENTS §.§ Implementation Details For fair comparison with existing works, MDHA is tested with two image backbones: ResNet50 <cit.> pre-trained on ImageNet <cit.> and ResNet101 pre-trained on nuImages <cit.>. We set D^max=61.2 and D^min=1.0; a total of N_q=900 queries are used in the decoder, with k=644 queries from the encoder and q=256 values propagated from the previous frame. The memory queue retains sparse features and anchors from the last m=4 frames. The encoder is kept as a single layer to obtain manageable training times while the decoder has D=6 layers. Training loss weights are set to be λ_1=2.0, λ_2=0.25, λ_3=0.01 with auxiliary losses employing the same weights. For depth loss, we set ε=10/l and ρ=0.01. During training, denoising is applied for auxiliary supervision within the decoder, using 10 denoising groups per ground-truth. Following <cit.>, the model is trained for 100 epochs for Table <ref> and 25 epochs for Section <ref>, both using the AdamW optimizer <cit.> with 0.01 weight decay. Batch size of 16 is used with initial learning rate of 4e-4, decayed following the cosine annealing schedule <cit.>. Input augmentation follows PETR <cit.>. No CBGS <cit.> or test time augmentation was used in any of the experiments. §.§ Dataset We assess MDHA's performance on a large-scale autonomous driving nuScenes <cit.> dataset using its official performance metrics. It captures driving scenes with 6 surround-view cameras, 1 lidar, and 5 radar sensors as 20-second video clips at 2 frames per second (FPS), with a total of 1000 scenes split up into 700/150/150 for training/validation/testing. The dataset is fully annotated with 3D bounding boxes for 10 object classes. §.§ Main Results Table <ref> compares MDHA-conv and MDHA-fixed, which uses the Learnable and Fixed Depth approaches, respectively, against state-of-the-art sparse query-based methods. With the ResNet50 backbone, MDHA-conv outperforms most existing models, except for StreamPETR and Sparse4Dv2, which achieve slightly higher mAP and NDS. However, we would like to point out that StreamPETR benefits from dense attention and is trained with a sliding window, using 8 frames and a memory queue, for a single prediction. In contrast, our method employs efficient and scalable sparse attention with a window of size 1 using the same memory queue. As a result, MDHA trains 1.9× faster than StreamPETR with the same number of epochs. 
Additionally, Sparse4Dv2 initializes anchors from k-means clustering on the nuScenes train set, introducing bias towards the dataset, whereas our method utilizes adaptive proposals without anchor initialization, enhancing its ability to generalize to real-world driving scenarios. Furthermore, we observe that despite a much lower input resolution, MDHA-conv achieves 5.7% higher mAP and 6.6% higher NDS compared to SimMOD, another proposal-based framework which uses complex 2D priors. With a ResNet101 backbone, MDHA-conv again outperforms most existing models except for StreamPETR and Sparse4Dv2, though it does reduce mASE by 0.1% and 0.7%, respectively. Even with lower input resolution, MDHA-conv once again outperforms SimMOD by 9.8% mAP and 9.5% NDS. For both backbones, MDHA-fixed slightly underperforms compared to MDHA-conv due to its fixed depth distribution, but offers an inference speed of 15.1 FPS on an RTX 4090, which is 0.7 higher than MDHA-conv. §.§ Analysis §.§.§ Effectiveness of hybrid anchors To verify the effectiveness of our hybrid anchor scheme, we perform a quantitative and qualitative comparison with a learnable anchors baseline with no encoder, where proposals are implemented as parameterized embeddings with weights W^emb. Initial 3D anchor centers are obtained via σ^-1(W^emb) with zero-initialized queries. This setting mimics StreamPETR <cit.>. Figure <ref> illustrates the comparison between proposals generated by these two approaches. In the learnable anchors setting, the same set of learned anchors are proposed regardless of the input scene, resulting in significant deviations from the actual objects. Thus, this approach relies heavily on the decoder to perform broad adjustments. Furthermore, it is evident from the BEV, that the proposals only cover a limited area around the vehicle, rendering them unsuitable for long-range detection. In contrast, our proposed method adapts to the input scene in two ways: first, the DepthNet predicts object depth based on image features; second, the Anchor Encoder selects only the top-k proposals most likely to contain an object based on the feature map. Visually, we can see that our proposals not only manage to discriminate the relevant objects within the scene, they also encompass these objects quite well, detecting even the partially obstructed barriers in the back-right camera. Furthermore, as our proposals are unbounded, the BEV shows many proposals far away from the ego vehicle, making it more effective for detecting objects further away. All of these alleviate the decoder's load, allowing it to focus on fine-tuned adjustments, thus enhancing overall efficiency and performance. These observations are consistent with the quantitative comparison in Table <ref>, where our hybrid scheme outperforms the learnable anchors baseline for all metrics. Notably, it achieves a 7.1% improvement in mAP and a 5.8% improvement in NDS. Due to the limited range of the learnable anchors, it suffers from a large translation error, which has been reduced by 12.4% in the hybrid anchors scheme. §.§.§ Ablation study on Circular Deformable Attention Table <ref> shows that without view-spanning, multiple projected reference points (multi-projection) outperforms a single projected reference point (single-projection, Section <ref>) by 1.4% and 1.2% NDS and mAP. With view-spanning, although performance improves for both settings, multi-projection models obtain worse mAP and NDS than their single-projection counterparts. 
The performance jump between models 4 and 5 validates the efficacy of our multi-view-spanning mechanism, and we hypothesize that the one-to-many relationship between queries and reference points in multi-projection models slows down convergence compared to a single well-chosen point, causing it to lag behind in performance. Moreover, multi-projection models are slower by 0.5 FPS due to the additional feature sampling. Wrapping sampling points around (wraparound) also yields a small performance increase. Both view-spanning and wraparound have no impact on the FPS. Thus, our CDA adopts the single-projection approach for improved efficiency, and view-spanning with wraparound for maximal performance. §.§.§ Number of sampling locations per reference point Increasing the number of sampling locations per reference point in both the encoder (S_enc) and decoder (S_dec) enables queries to attend to more features. Table <ref> shows that as S_enc increases from 4 to 12, NDS improves by 0.4%, and as S_dec increases from 12 to 24, both NDS and mAP improve by 0.9% and 0.1%, respectively. Comparing the results of (S_enc, S_dec)=(24, 4) with that of (S_enc, S_dec)=(4, 24), the latter achieves 2.6% and 1.3% higher NDS and mAP, while being faster by 1.3 FPS. Thus, increasing S_dec yields a larger performance gain compared to increasing S_enc by the same amount. This could be attributed to the difference in the number of queries, which is much higher in the encoder than the decoder. Hence, even with a low S_enc, the encoder's queries provide adequate input coverage. This also explains why increasing S_enc results in more FPS reduction than increasing S_dec. § CONCLUSIONS In this paper, we introduce MDHA, a novel framework which generates adaptive 3D output proposals using hybrid anchors. These proposals are sparsely refined and selected within our Anchor Encoder, followed by iterative refinement in the MDHA decoder. Effective and sparse attention is enabled via our multi-view-spanning CDA mechanism. MDHA significantly outperforms the learnable anchors baseline and achieves competitive results on the nuScenes val set. There are many promising avenues for improving MDHA. For instance, feature token sparsification <cit.> could enhance the efficiency of the encoder. Additionally, the use of full 3D anchors <cit.> as opposed to only 3D centers could be explored to improve performance. We hope that MDHA serves as a baseline for future advancements in adaptive sparse query-based multi-camera 3D object detection. § ACKNOWLEDGMENT This work was supported by the Advanced Computing Platform (ACP), Monash Malaysia. The work of Junn Yong Loo is supported by Monash University under the SIT Collaborative Research Seed Grants 2024 I-M010-SED-000242. IEEEtran
http://arxiv.org/abs/2406.18269v2
20240626114349
Refining Potential Energy Surface through Dynamical Properties via Differentiable Molecular Simulation
[ "Bin Han", "Kuang Yu" ]
physics.chem-ph
[ "physics.chem-ph", "physics.comp-ph" ]
Institute of Materials Research, Tsinghua Shenzhen International Graduate School (TSIGS), Shenzhen, 518055, People’s Republic of China [Email: ]yu.kuang@sz.tsinghua.edu.cn Institute of Materials Research, Tsinghua Shenzhen International Graduate School (TSIGS), Shenzhen, 518055, People’s Republic of China § ABSTRACT Recently, machine learning potentials (MLP) largely enhances the reliability of molecular dynamics, but its accuracy is limited by the underlying ab initio methods. A viable approach to overcome this limitation is to refine the potential by learning from experimental data, which now can be done efficiently using modern automatic differentiation technique. However, potential refinement is mostly performed using thermodynamic properties, leaving the most accessible and informative dynamical data (like spectroscopy) unexploited. In this work, through a comprehensive application of adjoint and gradient truncation methods, we show that both memory and gradient explosion issues can be circumvented in many situations, so the dynamical property differentiation is well-behaved. Consequently, both transport coefficients and spectroscopic data can be used to improve the density functional theory based MLP towards higher accuracy. Essentially, this work contributes to the solution of the inverse problem of spectroscopy by extracting microscopic interactions from vibrational spectroscopic data. Refining Potential Energy Surface through Dynamical Properties via Differentiable Molecular Simulation Kuang Yu Received: date / Accepted: date ====================================================================================================== § INTRODUCTION In chemistry and materials science, learning from experimental data is crucial for expediting the design of new materials. Experimental observables, such as density, diffusion coefficient, and dielectric constant, provide valuable insights into atomic interactions within materials. Among these properties, dynamical data plays a pivotal role: transport properties (e.g., diffusion coefficients, electrical conductivity, and thermal conductivity etc.) are important prediction targets in many application scenarios. While vibrational spectroscopic data contains comprehensive microscopic information, making it an ideal probe to unravel the structures and the dynamics of the system at atomic level. However, the analysis and interpretation of dynamical data, especially spectra, has been quite subjective and prone to human error. Therefore, establishing a direct and rigorous connection between spectra and microscopic dynamics has always been a great challenge. For long time, molecular dynamics (MD) simulation has been a powerful tool to predict, analyze, and interpret dynamical properties. For example, both equilibrium and nonequilibrium MD techniques can be applied to predict transport properties such as diffusion coefficients<cit.>. MD simulations can reach excellent agreement with experimental measurements across a wide range of temperatures and pressures, given that a correct potential model is used<cit.>. Moreover, since the 1990s<cit.>, MD simulation has served as a standard technique to study infrared (IR) spectrum, unveiling valuable physical insights, such as the connection between the hydrogen-bond stretch peak at 200 cm^-1 with the intermolecular charge transfer effects in liquid water<cit.>. However, similar issue exists here that the dynamical simulation results are sensitive to the underlying potential energy surface (PES). 
Consequently, different PES give significantly different predictions, making the reliability of the corresponding interpretations under debate. Conventionally, classical potentials used in MD simulations rely on physically motivated functional forms to describe inter-atomic interactions. The accuracy of classical potentials thus are largely limited by the predetermined functional form. In contrast, in recent years, machine learning (ML) methods are quickly gaining popularity due to their flexibility and consequently, excellent fitting capability in high-dimensional space. Models such as BPNN<cit.>, DeepPMD<cit.>, EANN<cit.>, NequIP<cit.> etc have been developed and used in various circumstances. Besides fitting PES, ML methods also demonstrate extraordinary capability in predicting tensorial properties such as dipole moment and polarizability<cit.>, which are used to generate highly accurate IR and Raman spectra. However, to date, most ML potentials are fitted to ab initio energies and forces, the accuracy of which is inevitably limited by the underlying ab initio method. Only a few works are based on the chemically accurate "golden standard" coupled cluster theory<cit.>, while most ML potentials are still fitted to much cheaper density functional theory (DFT) calculations due to the limitation of training cost. Worsening the situation, most condensed phase potentials rely on condensed phase training data, for which massive accurate correlated wavefunction calculations are prohibited. The cheap pure functionals typically used in extended system calculations are usually not accurate enough to describe molecular interactions. In this work, instead of predicting dynamical properties using bottom-up ML potential, we focus on the inverse problem: how to refine the underlying potentials using dynamical data, especially spectroscopic data. Compared to the prediction task, the inverse problem is actually more in alignment with the fundamental question of spectroscopy: that is, how to infer detailed microscopic information of the system from its spectroscopy? In fact, such "top-down" strategy<cit.> is quite common in classical force field (FF) development, but mostly using thermodynamic data such as densities and evaporation enthalpies and leaving the most accessible and informative spectroscopic data unexploited. As stated above, the difficulty lies in the fact that the quantitative mapping between PES and spectroscopy is hard to establish. In most existing works, people have to guess the form of PES, and manually adjust parameters to match the spectroscopy, which is highly non-scalable, especially in the era of ML potentials. However, recently, the emergence of automatic differentiation (AD) technique enables extremely efficient and automatic parameter optimization, making this task possible. Utilizing the AD technique, there are several successful examples of top-down fitting implementations. Unlike a direct end-to-end matching between structure and property<cit.>, fitting via differentiable MD simulation can yield physically meaningful potential, which is transferable to the prediction of other properties. Using experimental data to correct bottom-up fitting results has been proven effective in many works<cit.>, supporting a new PES development strategy: pre-train the model using a cheap ab initio method, then fine-tune it with a small amount of experimental data. Such strategy can significantly lower the training cost and eases the construction of ML potentials. 
To date, several infrastructures for differentiable MD simulation have been implemented, including JAX-MD<cit.>, TorchMD<cit.>, SPONGE<cit.> and DMFF<cit.>. Building upon these foundations, many targets such as molecule structures<cit.>, radial distribution function (RDF)<cit.>, and free energy<cit.> have already been used in top-down training. However, the utilization of dynamical targets such as transport coefficients and spectroscopic data have not been investigated yet. The essential reason why dynamical data has not been used in top-down fitting roots in the fundamental obstacles of trajectory differentiation. Previously, it was believed that trajectory differentiation suffers from severe memory overflow issue as the memory cost of backpropagation scales linearly with respect computational depth (i.e., the number of MD steps in our case). Furthermore, direct trajectory differentiation leads to exploding gradients due to the chaotic character of the long time molecular dynamics<cit.>. When dealing with thermodynamic properties (i.e., density, RDF, free energy etc.), we can circumvent these issues by employing the reweighting scheme<cit.>, as thermodynamic properties are only related to the stationary distributions instead of the sampling trajectories. But for dynamical properties (i.e., diffusion coefficient, electrical conductivity, IR spectrum etc.), a deep differentiation chain through the entire trajectory seems to be inevitable. There are some attempts to the alleviate these two problems. The adjoint method inspired by the NeurODE work<cit.> can reduce the memory cost to constant. And some methods, such as damped backpropagation<cit.>, truncated backpropagation<cit.>, partial backpropagation<cit.>, or even the black box gradient<cit.>, have been applied in simple cases in other areas to address the exploding gradient issue. However, the effectiveness of these approaches in the differentiation of dynamical properties in high-dimensional bulk MD have not been examined. In this work, we propose to fill this gap by combining both differentiable multistate Bennett acceptance ratio estimator (MBAR)<cit.> and trajectory differentiation techniques. We show that the adjoint technique<cit.> is highly effective in the reversible NVE simulations typically employed in dynamical property calculation. And we show that the gradient explosion issue can be avoided by proper damping or truncation schemes, due to the reason that although single trajectory is chaotic, the evaluation of ensemble-averaged dynamical properties must be stable. Therefore, the differentiation of dynamical properties may not be as ill-conditioned as it was once believed. Eventually, using water as a simple example, we show that by combining both thermodynamic and spectroscopic data, we can boost the DFT-based ML potentials towards a significantly higher accuracy, eventually leading to more robust predictions to other properties such as RDF, diffusion coefficient, and dielectric constant. Through these test cases, we show how the details of microscopic intermolecular interactions can be learned from comprehensive experimental observations. § RESULTS §.§ Differentiation of Dynamical Properties Important dynamical properties in chemistry and materials science, including both transport coefficients and spectroscopy, are always related to ensemble averaged time correlation functions (TCFs)<cit.>. 
For example, under the Green-Kubo formula<cit.>, transport coefficients ⟨ O⟩ can be expressed as: ⟨ O⟩∝∫_0^∞ C_AB(t) dt = ∫_0^∞⟨ A(0)· B(t)⟩ dt where C_AB(t) = ⟨ A(0)· B(t)⟩ is the TCF of observables A and B. For example, for diffusion coefficient, both A and B are particle velocities, and for electrical conductivity, both A and B are ionic current. Meanwhile, vibrational spectroscopy is also related to TCFs via Fourier transform: I(ω) ∝∫_-∞^∞ C_AB(t) e^-iω t dt Different to thermodynamic properties, C(t) is not only a function of the ensemble distribution but also related to the dynamics of trajectories. In the classical limit, the typical workflow to compute C(t) is to first draw an ensemble of samples from an equilibrated NVT or NPT simulation (denoted as: S_1,S_2,⋯,S_N). Then, starting from these initial samples, the time evolution of the system state (i.e., the position and momentum of each particle) can be propagated using classical NVE integrator, generating the full trajectory information: z_n(t)=(p_n(t),q_n(t)) (here, p denotes momenta, q denotes positions, and n=1,2,…,N is the initial sample index). The ensemble of trajectories is then used to compute C(t), the dynamical properties, and the corresponding loss function L to be optimized (see Fig. <ref>a). In this computing process, the potential parameter θ enters in both the sampling of initial states and the propagation of each trajectory, leading to two gradient terms. Taking the computation of transport coefficients as an example, when using squared deviations as loss function, it can be written as: L=(⟨ O⟩-O^*)^2 with: <O> = ∑_n=1^N w_n(θ) O_n O_n = ∫_0^∞ A(𝐳_n(0))B(𝐳_n(t, θ)) dt Then its gradient with respect to θ is: ∂ L/∂θ=2(⟨ O⟩-O^*)(∑_n=1^N∂ w_n/∂θO_n+∑_n=1^N w_n∂ O_n/∂θ) In here, w_n is the statistical weight associated with each sample S_n, which should be 1/N in a normal MD sampling. However, in the context of reweighting scheme<cit.>, w_n can be viewed as a function of θ, thus giving the first term in Eq. <ref>. In this work, we call this term "thermodynamic gradient", which can be evaluated easily using the differentiable MBAR technique we developed in our previous work<cit.> (Fig. <ref>b). The second term, which we call "dynamical gradient", is related to the differentiation of trajectory (i.e., .∂𝐳_n(t)/∂θ|_𝐳_n(0)) and more difficult to compute. As mentioned above, the memory cost of the naive backpropagation algorithm scales linearly with respect to computation depth (i.e. number of time steps K). Thus for an accurate dynamics with small time step dt, naive backpropagation differentiation can easily consume all the memory. In this case, the adjoint technique can be introduced as a perfect solution, reducing the memory cost of the trajectory differentiation to constant (Fig. <ref>c). Consider a typical NVE simulation setup, we are effectively propagating a deterministic ordinary differential equation (ODE) using the following dynamics: z(t_i)=f(z(t_i-1),θ) where f is a NVE integrator such as Velocity-Verlet or leap-frog, both of which is rigorously time reversible: (-p(t_i-1),q(t_i-1))=f((-p(t_i),q(t_i)),θ). 
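As a minimal numerical illustration of this reversibility, consider a one-dimensional harmonic oscillator integrated with velocity Verlet (a toy sketch, not the DMFF leap-frog implementation): stepping forward, flipping the momentum, stepping forward again with the same map, and flipping back recovers the initial state to round-off accuracy. This is precisely the property the adjoint pass below exploits to reconstruct z(t_i-1) from z(t_i) without storing the trajectory.

    import numpy as np

    def vv_step(q, p, dt, force, mass=1.0):
        """One velocity-Verlet step: a deterministic, time-reversible map f(z, theta)."""
        p_half = p + 0.5 * dt * force(q)
        q_new = q + dt * p_half / mass
        p_new = p_half + 0.5 * dt * force(q_new)
        return q_new, p_new

    force = lambda q, k=2.5: -k * q        # toy harmonic force; theta is the spring constant k

    q, p = np.array([1.0]), np.array([0.0])
    for _ in range(1000):                  # forward pass: only the end point is kept
        q, p = vv_step(q, p, 1e-3, force)

    qb, pb = q, -p                         # flip the momentum
    for _ in range(1000):                  # integrate forward again with the same map
        qb, pb = vv_step(qb, pb, 1e-3, force)
    print(qb, -pb)                         # ~ [1.] [0.], the initial state up to round-off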
The adjoint variable 𝐚(t_i) is then defined as the differentiation of the loss function L with respect to 𝐳(t_i), while keeping the trajectory before t_i unchanged but allowing the trajectory after t_i to vary accordingly: 𝐚(t_i) = .∂ L/∂𝐳(t_i)|_𝐳(t_j<t_i), θ When L is an explicit function of the entire trajectory: L=L(𝐳(t_0), 𝐳(t_1), …, 𝐳(t_K)), the last adjoint 𝐚(t_K) can be trivially computed as a direct derivation d L/d 𝐳(t_K). Starting from this point, we can then solve the reverse dynamics of 𝐚 using chain rule: a(t_i-1)=a(t_i)∂ f(z(t_i-1, θ))/∂z(t_i-1)+d L/d z(t_i-1) =g(𝐚(t_i), 𝐳(t_i), θ) Thanks to the rigorous time reversibility of f, the variable ∂ f(𝐳(t_i-1,θ))/∂𝐳(t_i-1) can be computed using only 𝐳(t_i), without any saved information of 𝐳(t_i-1). Therefore, following the dynamics given by Eq. <ref>, the whole adjoint trajectory can be obtained via a single backward pass of the existing trajectory. And according to previous work<cit.>, the total parameter gradient can be then computed along the backward pass as: dL/dθ=∑_i=K^1a(t_i)∂ f(z(t_i-1),θ)/∂θ This computation process is highly efficient: the computational cost of the entire adjoint backward pass is similar to another normal MD run, with a constant memory cost that does not depend on number of steps. We note that this adjoint technique has been well established to study a variety of dynamical systems<cit.>. However, in the field of high-dimensional MD, researchers have primarily focused on thermodynamic properties. For thermodynamic calculations, long sampling MD trajectory with heat bath is usually required. The frequently used stochastic heat baths (such as Langevin thermostat) break the time reversibility of the MD step, thus cumbering the adjoint propagation. Therefore, most previous studies<cit.> used the reversible extended Lagrangian NVT thermostat. However, the differentiation of long sampling trajectory is sub-optimal, as the reweighting scheme is much more efficient for thermodynamics. Meanwhile, in the case of dynamical property (i.e., TCF) calculations, differentiating trajectory becomes inevitable, which is a perfect application scenario for the adjoint technique. Generally, the optimization of any time reversible dynamics, such as quantum ring-polymer MD (RPMD)<cit.> or LSC-IVR (linearized semiclassical initial value prepresentation)<cit.> simulations, can also be conducted in a similar way. However, in this work, we only focus on the unperturbed classical dynamics, leaving the reversible quantum dynamics to our future work. While the adjoint technique is highly efficient, the differentiation of a single trajectory is ill-conditioned in the long-time limit. The chaotic nature of the MD system leads to an exponentially increasing statistical noise with respect to time in the TCF gradient evaluations (vide infra). To alleviate this problem, starting from the initial samples, we propagate the system trajectories in both forward and backward directions. Through this trick, we effectively double the trajectory lengths while keeping the statistical noise of gradient staying in the same magnitude. Furthermore, from the physical perspective, the time scale of chaotic explosion (e.g., the Lyapunov time) of a MD system is inherently related to how fast the system loss the information of its initial condition, which should be related to the correlation time scale that characterizes the decay rate of TCF. 
This understanding implies that while the statistical noise of long-time gradient inevitably explodes, the actual importance of its averaged value probably decays in a comparable time scale. Therefore, the ill-conditioned long-time differentiation can be avoided by simple truncation or damping schemes. Although a rigorous analysis is still needed to compare these two time scales, through numerical tests, we show that the truncation/damping scheme is indeed applicable in many important situations. In this work, all calculations, including the evaluations of both thermodynamic and dynamical gradients were conducted using the DMFF package<cit.>. The adjoint method was implemented in DMFF based on the leap-frog integrator, using JAX<cit.> framework. §.§ Transport Coefficients Before digging into spectroscopy, we first test our method in the fitting of transport coefficients, which is an important category of target properties in materials design. In this work, we use the self-diffusion coefficient of Lennard-Jones (LJ) particles as a toy system to showcase our strategy. The potential function of a single-component LJ liquid is defined as: V_LJ=∑_a<b4ϵ[(σ/r_ab)^12-(σ/r_ab)^6] where σ and ϵ are the parameters to be optimized, and r_ab is the distance between two particles a and b. The diffusion coefficient is calculated using the Green-Kubo formula: D=1/3∫_0^∞⟨ v(0)· v(t)⟩ dt where ⟨ v(0)· v(t)⟩is the TCF of velocity. In this test case, the diffusion coefficient computed using parameters σ^*=0.34 nm and ϵ^*=0.993795 kJ/mol<cit.> is used as the fitting target. The optimization starts from an initial point of σ=0.338 nm and ϵ=0.893795 kJ/mol. The goal is to examine whether or not the gradient of diffusion coefficient can guide the optimizer to find the original point (σ^*, ϵ^*) in the 2D parameter space. Since the initial sampling process does not need to be differentiable, we use OpenMM<cit.> to run the NVT simulation and randomly select 100 frames as the initial structures for the next-stage NVE runs. The integral of TCF converges within 1.5 ps with a relative error less than 2% (Fig. <ref>a). Therefore, to ensure convergence, we propagate each trajectory for 2 ps. To demonstrate the stability of the gradient, we investigate the effects of the sample size (Fig. <ref>b) and the time length (Fig. <ref>c). First, we compare the thermodynamic and dynamical gradients by presenting their respective means and standard deviations in Fig. <ref>b, as a function of sample sizes. As the sample size increases from 20 to 100, dynamical gradient changes from -0.77±0.30, -0.71±0.12, to -0.70±0.11; while thermodynamic gradient changes from -0.12±0.92, -0.04±0.59, to -0.05±0.46 in 10^-7 m^3/s^2. We also plot the distribution of the thermodynamic gradient using the results of 100 independent trials, as shown in the violin diagram in Fig. <ref>b. It is evident that while the dynamical gradient is much larger and much more important in optimization, thermodynamic gradient is the major contributor to the statistical noise. This result indicates that for dynamical properties, θ primarily affects the final outcome by influencing the dynamics of trajectories, rather than the thermodynamic distribution of initial structures. And the fact that dynamical contribution dominates the total gradient indicates that we can safely neglect the thermodynamic gradient for a better numerical performance, which we will discuss in below. 
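For reference, once the ensemble of NVE velocity trajectories is available, the Green-Kubo estimator used in this test reduces to a few array operations. The sketch below uses placeholder data and omits the moving-average and forward-backward refinements, so it illustrates only the estimator itself, not the differentiable DMFF pipeline.

    import numpy as np

    def diffusion_green_kubo(vel, dt):
        """D = (1/3) * integral of <v(0).v(t)> dt, averaged over samples and atoms.
        vel: (n_traj, n_steps, n_atoms, 3); dt sets the time unit of the result."""
        v0 = vel[:, :1]                                          # velocities at t = 0
        vacf = np.mean(np.sum(v0 * vel, axis=-1), axis=(0, 2))   # C(t), shape (n_steps,)
        return np.trapz(vacf, dx=dt) / 3.0

    # hypothetical arrays matching this test: 100 samples, 2 ps at 1 fs steps, 346 atoms
    vel = np.random.randn(100, 2000, 346, 3)                     # placeholder, not MD output
    D = diffusion_green_kubo(vel, dt=1e-3)                       # dt in ps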
Next, we compare the convergence of loss L and its gradient d L/d σ as functions of TCF integration time. L converges within 1.5 ps, while the uncertainty of d L/d σ gradually diverges after 3 ps, due to gradient explosion. Apparently, for long time dynamics, the noisy gradient is completely useless in optimization. However, as we point out above, TCF decays to zero in a faster rate, thus we can simply truncate the integration at 2 ps, so a numerically stable optimization can be conducted. Using the well-behaved gradient, we perform the optimization for (σ,ϵ) using the ADAM<cit.> optimizer. We use a learning rates of 0.0002 for σ and 0.01 for ϵ, since their gradients are in different orders of magnitude. We illustrate the contour of the loss function in the parameter space and the optimization path in Fig. <ref>d. The location of the target parameter values are marked using the red star symbol, representing the minimum of the loss surface. Two optimization paths are shown, using either total gradient or dynamical gradient only, respectively. Throughout the entire optimization, the directions of both gradients are aligned, and both strategies successfully locate the optimal parameters within 50 loops. In fact, the dynamical gradient strategy has a better performance, as the optimizer reaches the minimum slightly faster, due to the removal of noise from the thermodynamic gradient. We find that this advantage becomes even more significant in more complicated optimizations as we will show in the next section. Nevertheless, through this simple test, we show the validity of our algorithm in differentiating dynamical properties, which then can be used to solve the inverse problem of spectroscopy. §.§ IR Spectrum Comparing to fitting diffusion coefficient of LJ liquid, predicting the IR spectrum of liquid water is a much more challenging task and has been extensively studied in existing literatures<cit.>. Comparing to transport coefficients, IR spectrum is not only easier to obtain experimentally, but also contains much richer information about the microscopic PES. In this case, we calculate the IR spectrum using harmonic quantum correction factor (QCF): α(ω)n(ω)=2π/3ck_BTV∫_-∞^∞dt e^-iω t⟨Ṁ(0)·Ṁ(t)⟩ where ⟨Ṁ(0)·Ṁ(t)⟩ is the TCF of the time derivative of total dipole, c is the speed of light, k_B is Boltzmann constant, T is the temperature, V is the volume, α(ω) is the absorption coefficient, and n(ω) is the frequency-dependent refractive index. To ensure convergence, we always sample 320 independent trajectories in the following calculations. As the first step, we perform a simple test by fitting the IR spectrum computed using the q-SPC/Fw <cit.> force field. Similar to the LJ diffusion coefficient case, in here, we perturb the values of the force field parameters (i.e., the hydrogen charge q, the oxygen σ, the OH bond force constant k_b, and the HOH angle force constant k_a), and observe if IR fittings can restore their original values or not. As shown in Supplementary Fig. 2, the dipole derivative TCF features long echos that lasting beyond 2 ps. This long-living echo is primarily due to the intramolecular vibrations that is essentially decoupled from other motions, especially in the case of classical harmonic force fields. At first glance, such long-living mode, in combination with gradient explosion, creates tremendous difficulty in IR fitting. Indeed, in Supplementary Fig. 
3, using numerical gradient as reference, we show that our scheme can only generate reliable gradient within 0.3 ps, which is certainly not enough to fully converge the IR calculation in the case of q-SPC/Fw. However, for spectroscopy fitting, a simple low-pass filter can be applied to solve this problem. We first Fourier transform the target spectrum into time domain, truncate it at 0.2 ps, then inverse Fourier transform it to obtain the real fitting target. Then we validate our fitting results using the fully-converged spectrum computed using the untruncated TCF. Employing this strategy, the optimization paths using both full gradient and dynamical gradient are shown in supplementary Fig. 4 (purterbing only q and σ). In this case, the total gradient strategy fails due to the large noise from thermodynamic gradient, while the dynamical gradient strategy performs much more robust. This result once again supports the use of dynamical gradient , instead of the total gradient although it is theoretically more rigorous. When perturbing all four parameters (q, σ, k_a, k_b), the final fitting results are shown in supplementary Fig. 5 and supplementary Table 1. As shown, even though we use only the low-pass filtered spectrum as our fitting target, the fitted potential restores the parameter values and the unfiltered IR spectrum quite well. Physically, the q-SPC/Fw IR fitting case represents a general situation that the trajectory differentiation may fail. That is, if there exists multiple modes in one system with separating time scales, then the chaotic nature of short-living modes may hinder the differentiation of other long-living modes. The low-pass filter approach we adopt in this work is effectively using the short-time dynamics to imply the long-time behavior. This approach is shown to perform reasonably well in our case, giving that both the short- and the long-time dynamics are controlled by the same PES, and can both benefit from its improvement. Giving the success in q-SPC/Fw fitting, we next switch to train a ML potential using experimental IR data<cit.>. To directly compare with experiment, we also need an accurate dipole surface besides a PES. In this work, we use the Deep Dipole<cit.> model trained at PBE0 level of theory to provide the dipole surface. For simplicity, we keep the dipole model fixed in the following study, assuming its accuracy. We note that technically, the dipole model can be optimized together with the PES model, but a separate fitting with designated dipole labels is preferred. We start our test by pretraining an embedded atom neural network (EANN) potential<cit.> using the dataset generated in previous work<cit.>. This dataset contains the energies and forces data of 64 water molecules in periodic boundary conditions, computed at dispersion corrected PBE0-TS (Tkatchenko-Schefler) level of theory. We then refine this bottom-up machine learning model using either density data alone, or a combined target of density and IR data. The combined density and IR loss function is defined as: L=w_ρ(ρ-ρ^*)^2+∑_i w_I(ω_i)[I(ω_i)-I^*(ω_i)]^2 where ρ is the density, I(ω_i)=α(ω_i)n(ω_i) is the spectrum, and values marked with asterisk (*) are experimental references. The weights w_ρ and w_I(ω_i) are used to balance the contributions of density and spectrum respectively. w_ρ is set to be 10^3 cm^6/g^2, and w_I is set to be frequency-dependent. 
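In practice, the low-pass filtering of the target spectrum described above is only a few FFT operations. The sketch below assumes a uniformly sampled wavenumber grid, illustrative normalization, and a placeholder band shape; it is not the exact preprocessing used in this work.

    import numpy as np

    C_CM_PER_PS = 2.99792458e-2             # speed of light in cm per ps

    def lowpass_spectrum(nu_cm, intensity, t_cut_ps=0.2):
        """Transform a target spectrum (uniform wavenumber grid, cm^-1) to the time
        domain, zero everything beyond t_cut, and transform back."""
        d_nu = nu_cm[1] - nu_cm[0]
        m = 2 * (len(intensity) - 1)
        signal_t = np.fft.irfft(intensity, n=m)                  # time-domain response
        t_ps = np.abs(np.fft.fftfreq(m, d=d_nu)) / C_CM_PER_PS   # conjugate axis in ps
        signal_t[t_ps > t_cut_ps] = 0.0                          # drop long-time echoes
        return np.fft.rfft(signal_t).real                        # filtered fitting target

    nu = np.arange(0.0, 4001.0)                                  # 0-4000 cm^-1, 1 cm^-1 spacing
    target = np.exp(-(nu - 3400.0) ** 2 / (2 * 80.0 ** 2))       # placeholder band shape
    filtered = lowpass_spectrum(nu, target)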
It is noted that the IR spectrum in the high frequency region (i.e., >1000 cm^-1) shows significant nuclear quantum effects (NQEs), which is not supposed to be accurately described by classical dynamics. Furthermore, the strong peak at 3400 cm^-1 is likely to dominate the loss function if a uniform weight scheme is utilized. Therefore, we put particular emphasize on the low-frequency region (<1000 cm^-1), and increase w_I(ω) of the low-frequency region by a factor of 10, setting it to be 10^-8 cm^2, in comparison with 10^-9 cm^2 in the high-frequency region. For optimization, we utilize the ADAM optimizer with a learning rate of 0.00005 for all the parameters. The EANN potential contains thousands of parameters, with a much higher risk of overfitting compared to classical potential. So we keep monitoring the change of self-diffusion coefficient in the entire optimization process, using it as the validation data to determine the ending point of the optimization cycle. The optimization process using combined density and IR target is depicted in Fig. <ref>b, showing gradually decreasing losses in both IR spectrum and density. The initial PBE0-TS level EANN model features large error in density prediction. Therefore, the density loss dominates the optimization and reaches convergence within the first 10 optimization loops. After that, the IR loss becomes the main driving force for optimization, where some competitions between density and IR losses can be observed. As the independent validation, the error of diffusion coefficient keeps a general decreasing trend up to the 50'th loop, before starting to oscillate, indicating the risk of overfitting. Therefore, as a general practice, we always pick the parameters giving the smallest error of self-diffusion coefficient within the first 100 loops as our final model. For example, in the case shown in Fig. <ref>, the parameter from loop 58 is selected for further tests. In supplementary Fig. 6, we present the evolution of the IR spectrum versus the experimental reference every 5 loops, showing the progress of the optimization. To evaluate the stability of the procedure, we conduct 5 independent trial optimizations, the final performances of which are shown in Fig. <ref>cd. It is shown that better performances are achieved in both density and IR spectrum when combined density and IR target is used, in comparison with the pure density fitting. This demonstrates the effectiveness of dynamical information in the top-down fine-tuning. Even more significant effects can be observed when we transfer the refined potential to predict other properties, as we will show in below. §.§ Transferability to Other Properties To demonstrate that the optimization is not only a numerical fitting but also learns real physics from the spectrum, we use the optimized PES to compute other properties that are not included in the fitting target. The RDF, self-diffusion coefficient, and dielectric constant are used, giving a comprehensive evaluation to the model's performance on both the thermodynamic and the dynamical behaviors of bulk liquid water. As shown in Fig. <ref>ab, the potentials refined with density and IR spectrum perfectly reproduce the experimental O-O and O-H RDFs<cit.>, performing significantly better than the potential fitted to density only. 
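Before turning to those comparisons, we note that the validation observables below are obtained with standard estimators; for example, the static dielectric constant follows from the fluctuation of the total dipole moment. The sketch assumes SI units and conducting (tin-foil) boundary conditions, conventions the text does not spell out.

    import numpy as np

    KB = 1.380649e-23            # Boltzmann constant, J / K
    EPS0 = 8.8541878128e-12      # vacuum permittivity, F / m

    def dielectric_constant(M, volume_m3, T=298.0):
        """eps_r = 1 + (<M.M> - <M>.<M>) / (3 eps0 V kB T), with the total-dipole
        trajectory M given in C*m and shape (n_frames, 3)."""
        fluct = np.mean(np.sum(M * M, axis=1)) - np.sum(np.mean(M, axis=0) ** 2)
        return 1.0 + fluct / (3.0 * EPS0 * volume_m3 * KB * T)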
Combined density and IR fitting yields a self-diffusion coefficient of 1.99±0.07×10^-9m^2/s, which is comparable to the experimental value of 2.299×10^-9m^2/s at 298 K<cit.>, and is much more accurate than the pure density fitting results (Fig. <ref>c). Moreover, the potentials refined by combined density and IR targets give a dielectric constant prediction of 96.1±3.2, which, although being higher than the experimental value of 78.3<cit.>, still represents a significant improvement over the initial bottom-up potential (119.4) and the potential refined using only density (110.9±3.2). The residual error in dielectric constant may be attributed to the stopping criterion we used in the optimization process. Since we are using diffusion coefficient error to pick the final model, the resulting potential is perhaps biased towards a better description to diffusion. In general, more types of fitting targets should lead to more robust optimization results. However, the optimal set of targets for an ideal top-down PES refinement process is subject to our future research. Nevertheless, the results in Fig. <ref> clearly shows the essential impacts of spectroscopic data, which, further considering its experimental accessibility, should be a critical component in the future fine-tuning workflow. To further verify and understand the physical effects of the top-down fine-tuning process from the microscopic perspective, we compare the refined potential with the CCSD(T)/complete basis set (CBS) calculation results (i.e., the "golden standard" in computational chemistry). The test set is taken from our previous work<cit.>, containing the energies of 500 octamer and 4000 pentamer water clusters. The total energy is decomposed into inter- and intra-molecular contributions, which are compared separately in Fig. <ref>ab. The corresponding root mean squared deviations (RMSD) are also given in Table <ref>. The RMSD of the total energy does not show any improvement after the refinement using IR. However, a decomposition of energy shows that the combined density and IR refinement significantly decreases the RMSD of inter-molecular energy, automatically correcting a large systematic error presented in the bottom-up model (Fig. <ref>a). Meanwhile, the performance of the refined potential in intra-molecular energy is worse, counterbalancing the improvement in the inter-molecular term. In Fig. <ref>d, we show how the energy RMSDs change during the optimization process, plotted along the side of the change of both density and IR losses. One can clearly see the deterioration of intra-molecular energy and the improvement of inter-molecular energy, both primarily driven by the optimization of IR spectrum instead of density. This discrepancy can be attributed to NQE. The low frequency band, which is related to molecule translations and librations, is primarily controlled by intermolecular interactions, while the high frequency bands in 1500 cm^-1 and 3400 cm^-1 are dominated by intramolecular vibrations. Therefore, NQE, which mainly affects the high frequency motions in room temperature, has a larger impact to the refinement of intramolecular potentials. Conceptually, it can be viewed that the IR refinement process is actually learning the potential of mean force (PMF) of adiabatic centroid MD (CMD), instead of the real physical potential. Luckily, in principle, this PMF is exactly what is needed when predicting experimental properties using classical MD. In supplementary Fig. 
7, we also show the same results using only density fitting target, which is significantly inferior to the results of combined density and IR fitting. Once again, this observation highlights the importance of spectroscopic data, which brings in new physical information in the fine-tuning process. § DISCUSSION In this work, we address the inverse problem on how to infer microscopic molecular interactions from experimental dynamical data, especially the spectroscopic data. We tackle this problem using the automatic differentiable technique, which is conventionally believed to be impractical due to the memory cost and the gradient explosion issues. Through the application of adjoint method and appropriate gradient truncation strategies, we show that both the memory and gradient explosion issues can be alleviated. It is further shown that using only dynamical gradient, instead of total gradient, we can significantly reduce the statistical noise in the differentiation of dynamical properties, thus stabilizing the optimization process. Combining all these techniques, both transport coefficients and spectroscopic data can be differentiated efficiently, thus enabling a highly automatic and scalable PES optimization workflow based on dynamical properties. Using liquid water as an example, we show that a DFT-based ML potential can be promoted towards the CCSD(T)/CBS accuracy by learning from experimental spectroscopic data. And the top-down refined potential generates much more accurate predictions to other thermodynamic and dynamical properties, compared to its bottom-up precursor fitted at DFT level of theory. This work thus presents a general approach to improve the accuracy of DFT-based ML potential. More importantly, it provides an extremely efficient tool to extract and exploit the detailed microscopic information from spectroscopy, which was inaccessible via human efforts. Through our work, we can now use a much wider range of properties as our fitting targets to perform top-down PES fine-tuning. However, some important questions remain unresolved, such as: what is the optimal combination of fitting targets? And what is the most economic and effective optimization workflow for ML PES development? We will explore the answers to these questions in our future study. Moreover, differentiating long time dynamics is still problematic, especially when some strongly chaotic modes interfere with other modes with long time memory. To address these issues, one should probably be more focused on a few most important degrees of freedom, rather than performing a full dimensional trajectory differentiation. Along this line, some efforts exist focusing on the stochastic Langevin dynamics<cit.> or the Fokker-Planck equations in the reduced-dimensional space. But how to use these techniques in the inverse problem of top-down parameter fine-tuning is still an open question. § METHODS Time Correlation Function (TCF) In classical MD, TCF (⟨ A(0)· A(t)⟩) is usually calculated as: ⟨ A(0)· A(t)⟩=1/N∑_n=1^N[1/N_τ∑_τ^N_τ A_n(τ)· A_n(t+τ)] where N is the sample size and 1/N∑_n=1^N represents ensemble average, and 1/N_τ∑_τ^N_τ represents moving average along time. Here, A is a physical quantity derived from the system state. For example, A is velocity in diffusion coefficient calculations, instantaneous current in electrical conductivity calculations, and total dipole moment in IR spectrum calculations. 
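A direct (non-FFT) sketch of this estimator is given below. It assumes a fixed window of time origins, so that every lag is averaged over the same number of pairs, which is one common convention for the moving average; the array shapes are hypothetical.

    import numpy as np

    def tcf_moving_average(A, n_corr):
        """<A(0).A(t)> for t = 0..n_corr-1, averaged over samples n and origins tau.
        A: (N, T, 3) per-sample time series, e.g. velocities or the dipole derivative."""
        N, T, _ = A.shape
        n_tau = T - n_corr + 1                   # number of usable time origins
        tcf = np.zeros(n_corr)
        for t in range(n_corr):
            prod = np.sum(A[:, :n_tau] * A[:, t:t + n_tau], axis=-1)   # A_n(tau).A_n(tau+t)
            tcf[t] = prod.mean()                 # average over n and tau
        return tcf

    # hypothetical usage: 320 samples, 0.4 ps trajectories at 0.5 fs steps
    A = np.random.randn(320, 800, 3)             # placeholder data
    C = tcf_moving_average(A, n_corr=400)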
In the moving average scheme, any two time points t_i and t_j are coupled, thus the evaluation of a single-point gradient dL/dz(t_i) involves the A(t_j) information of all t_j. Therefore, one has to save the entire A(t) trajectory if the moving average scheme is employed. In the case of diffusion coefficient calculations, that means storing the full dimensional velocity trajectory in GPU memory, which is difficult. Therefore, no moving average is used in the diffusion coefficient example. Meanwhile, in the IR case, the dimensionality of the total dipole moment derivative (𝐌̇(t)) is much lower, so the storage of a full 𝐌̇ trajectory is feasible. In this case, moving average scheme is employed to enhance statistics. Reweighting technique In the theoretical framework of reweighting, we are using samples S_1,S_2,⋯,S_N drawn from one ensemble (defined by parameters θ_0) to evaluate the averages in another ensemble (defined by parameters θ). Therefore, each sample carries a thermodynamic weighting factor w_n(θ), the formula of which is well known from previous studies<cit.>: w_n(θ)=exp[-β(U_n(θ)-U_n(θ_0)]/⟨exp[-β(U(θ)-U(θ_0)]⟩_θ_0 where β=1/k_BT, k_B is Boltzmann constant, and T is the temperature, U_n(θ) is the effective energy of S_n computed by θ, ⟨⟩_θ_0 is the average in the sampling ensemble. Then the ensemble average of observable ⟨ O⟩ in θ is simply, ⟨ O⟩=∑_i^N w_i(θ)O_i(θ) which can be easily differentiated using AD technique. MD simulations For LJ particles, the self-diffusion coefficient was computed in a cubic box containing 346 argon atoms at 94.4 K with a density of 1.374 g/cm^3. In the NVT simulation, the LangevinMiddleIntegrator in OpenMM<cit.> was employed, using a time step of 1 fs and a friction coefficient of 0.1 ps^-1. The cut-off radius was set to 10 Å. Initially, the system was equilibrated for 100 ps, followed by the sampling of 100 states at 2 ps intervals during the subsequent 200 ps simulation. These states served as initial configurations for the next stage of NVE simulation. For the NVE simulations, a time step of 1 fs was utilized to run 2 ps trajectories. The trajectories were then differentiated using the DiffTraj adjoint propagation module (with the leap-frog integrator) implemented in DMFF<cit.>. For liquid water, the density, self-diffusion coefficient, and dielectric constant were all computed in a cubic box containing 216 water molecules. The IR spectrum was obtained using a box with 64 water molecules, and RDFs were computed using a box with 1018 water molecules. A timestep of 0.5 fs was used for all water simulations, with the initial density consistently set at 1.0 g/cm^3. All NPT simulations were performed using i-PI<cit.>, interfacing with the force engines implemented in DMFF<cit.>. NPT simulations utilized the Bussi-Zykova-Parrinello barostat with a timescale of 100 fs and the Langevin thermostat with a timescale of 10 fs to control pressures and temperatures, respectively. All NVT simulations employed a self-implemented Langevin thermostat<cit.> with a friction coefficient of 1.0 ps^-1, and all NVE simulations were conducted using the leap-frog integrator implemented in the DiffTraj module of DMFF. In the optimization stage, for the density, the simulation time length was set to 100 ps, with 100 samples taken from the last 50 ps of the trajectory to obtain the averaged density. For the IR spectrum, 320 samples were obtained from a 64 ps NVT simulation. 
Each NVE trajectory was propagated in both forward and backward directions, resulting in a total trajectory length of 0.4 ps. The initial structures for the density and IR spectrum calculations were obtained from the last frame of the trajectory from the previous optimization loop. In the validation stage, for the density, a 1 ns NPT simulation was run, using the last 250 ps to determine the density. For the IR spectrum, a 320 ps NVT simulation was conducted, with 320 samples taken from the last 160 ps. Subsequently, 5 ps NVE simulations were performed to obtain the TCF of 2 ps, which was integrated to get the IR spectrum. For the self-diffusion coefficient, a 200 ps NVT simulation was executed, drawing 100 samples from the last 100 ps. Following this, 20 ps NVE simulations were conducted to obtain the TCF of 2 ps, which was integrated to determine the diffusion coefficient. For the dielectric constant, a 10 ns NVT simulation was run, using the last 9 ns to calculate the dielectric constant. For the RDF calculations, a 100 ps NVT simulation was carried out, with the last 40 ps used to compute the RDF. ML potential energy surface and dipole surface The potential energy surface was trained using the EANN model<cit.> on a dataset containing 6812 configurations of 64 water molecules at the PBE0 level from Ref. <cit.>. We divided the dataset into training and testing sets with a ratio of 0.9:0.1. The EANN model utilized a neural network with two hidden layers, each containing 64 neurons. The initial learning rate was set to 0.001 and decayed by a factor of 0.5. Twelve radial functions and L up to 2 were used to construct EAD features with a cut-off radius set to 6 Å. The weight ratio of the loss for energy and force was set to 2:1. After training, the RMSD of the energy and force from the EANN model was 0.0668 kJ/mol/atom and 7.3548 kJ/mol/Å, respectively, on the test dataset, as shown in Supplementary Fig. 1. The dipole surface was trained using the Deep Dipole model<cit.> on a dataset containing 7299 configurations of 64 water molecules at the PBE0 level from Ref. <cit.>. The se_a descriptor was used to train the Deep Dipole model. The neurons item set the sizes of the descriptor and fitting networks to [25, 50, 100] and [100, 100, 100], respectively. The components of the local environment were smoothed to go to zero between 5.8 and 6 Å. In the loss function, pref was set to 0.0 and pref_atomic was set to 1.0. The starting learning rate, decay steps, and decay rate were set to 0.01, 5000, and 0.95, respectively. The final training error was below 2.0×10^-3 Å. § REFERENCES § ACKNOWLEDGEMENTS The authors thank Dr. Lei Wang from the Institute of Biophysics, Chinese Academy of Sciences, Mr. Weizhi Shao from Tsinghua University, and Dr. Xinzijian Liu from DP Technology for helpful discussions. § AUTHOR CONTRIBUTIONS STATEMENT K.Y. designed and conceptualized the study. B.H. implemented the method and performed all calculations. B.H. and K.Y. analyzed and interpreted the results, and wrote the manuscript. § DATA AVAILABILITY The result data from all the simulations in this study are provided within the paper or in the Supplementary Information file. The trained models (EANN and Deep Dipole) are available at <https://github.com/ice-hanbin/dynamical-fitting>, and the training data are available from the authors upon reasonable request. § CODE AVAILABILITY The code for the DiffTraj module has been implemented in DMFF <cit.>.
The two optimization cases (transport coefficients and IR spectrum) are available at <https://github.com/ice-hanbin/dynamical-fitting>.
http://arxiv.org/abs/2406.19261v1
20240627153231
Commodification of Compute
[ "Jesper Kristensen", "David Wender", "Carl Anthony" ]
cs.CE
[ "cs.CE", "cs.AI", "cs.CY", "cs.ET", "econ.GN", "q-fin.EC" ]
§ ABSTRACT The rapid advancements in artificial intelligence, big data analytics, and cloud computing have precipitated an unprecedented demand for computational resources. However, the current landscape of computational resource allocation is characterized by significant inefficiencies, including underutilization and price volatility. This paper addresses these challenges by introducing a novel global platform for the commodification of compute hours, termed the Global Compute Exchange (GCX™) (Patent Pending). The GCX leverages blockchain technology and smart contracts to create a secure, transparent, and efficient marketplace for buying and selling computational power. The GCX is built in a layered fashion, comprising Market, App, Clearing, Risk Management, Exchange (Offchain), and Blockchain (Onchain) layers, each ensuring robust and efficient operation. This platform aims to revolutionize the computational resource market by fostering a decentralized, efficient, and transparent ecosystem that ensures equitable access to computing power, stimulates innovation, and supports diverse user needs on a global scale. By transforming compute hours into a tradable commodity, the GCX seeks to optimize resource utilization, stabilize pricing, and democratize access to computational resources. This paper explores the technological infrastructure, market potential, and societal impact of the GCX, positioning it as a pioneering solution poised to drive the next wave of innovation in commodities and compute. § INTRODUCTION In today's rapidly evolving digital landscape, the demand for computational resources has escalated exponentially, driven by advancements in fields such as artificial intelligence, big data analytics, and complex scientific simulations. This surge has highlighted a significant challenge: the need for accessible and scalable compute power. In response, we are building the Global Compute Exchange (GCX™) (Patent Pending), see Fig. (<ref>), which addresses this critical issue by creating a decentralized marketplace that not only democratizes access to these resources but also ensures efficient, scalable, and fair distribution of compute power. The demand for computational resources, particularly GPUs, has surged dramatically due to advancements in AI and machine learning. This increased demand has resulted in a significant supply crunch, making computational power both expensive and difficult to access for many startups and independent developers. The concept that "compute is the new oil" encapsulates this paradigm shift, highlighting the critical role that computational resources play in driving technological innovation and economic growth <cit.>. The AI sector has experienced unprecedented growth, with substantial investments highlighting the significant economic potential and demand for computational resources. As AI adoption continues to accelerate, the need for scalable and efficient compute resource management becomes ever more critical. Moreover, research indicates that the computational requirements for leading-edge AI systems have been doubling approximately every three to four months <cit.>.
Such growth rates substantially outpace Moore's Law <cit.>, emphasizing the urgent need for innovative solutions in resource provisioning and management <cit.>. Current projections indicate that the demand for compute power will continue to grow exponentially, driven by the increasing complexity of AI models and the widespread adoption of data-intensive applications <cit.>. According to Shoal Research <cit.>, the demand for computational resources, particularly GPUs, has surged, causing a supply crunch and making it expensive and difficult for startups and independent developers to access necessary resources <cit.>. This surge underscores the critical need for innovative solutions to efficiently manage and distribute these resources, ensuring that technological innovation is not hindered by resource constraints. We introduce a marketplace that leverages blockchain technology to create a transparent, secure, and efficient platform for trading compute resources <cit.>. By using a token-based economy, we incentivize the proper allocation and utilization of resources, with mechanisms for penalties and rewards that ensure compliance and performance <cit.>. As noted by Ren et al. <cit.>, such approaches are crucial for managing billion- and trillion-scale model training sessions that require substantial computational efforts <cit.>. The Blockchain Layer also enables the implementation of innovative Decentralized Finance (DeFi) trading technologies for compute resources, such as perpetual futures (perps) <cit.> and options. Advanced monitoring and predictive analytics are integrated to ensure proper risk management and that compute delivery is both reliable and efficient <cit.>. The platform's capability to provide real-time analytics helps preempt potential failures, ensuring uninterrupted service delivery <cit.>. This reliability is essential not only for maintaining service quality but also for building trust within the marketplace. The GCX provides a versatile platform that caters to various stakeholders in the compute market, enabling them to hedge against price volatility, secure future prices, and generate additional yield. As illustrated in Fig. (<ref>), the GCX allows different actors to leverage its functionalities to their advantage. Alice, an AI startup founder, uses futures contracts to lock in the price of compute, safeguarding against future price increases <cit.>. Bob, anticipating a market correction, buys put options to secure a price floor, thereby minimizing potential losses <cit.>. Carol, a data center operator, sells both call and put options to collect premiums, generating additional yield. When these options expire worthless, she profits from the premiums, smoothing her revenue stream (producer hedge) <cit.>. By utilizing the GCX, these actors can effectively manage their risks and optimize their financial strategies in the dynamic compute market. In all cases, notice that the GCX enables various expressions of market sentiment, allowing participants to profit and protect against the evolution of compute prices, whether they anticipate an increase, decrease, or sideways movement. This flexibility ensures that the GCX can benefit a diverse range of market participants, each with their unique perspectives and strategies. What should the price of compute be? This is not known. The price of compute can vary widely based on factors such as demand, supply, technological advancements, and market conditions. 
Unlike traditional commodities, the compute market is influenced by a unique set of variables, making it challenging to pinpoint a stable price. The GCX aims to bring transparency and stability to this nascent market, enabling participants to better understand and manage the cost of compute resources. We're at the forefront of addressing the critical challenge of compute power scarcity and uneven distribution. By creating a marketplace that makes compute resources accessible and efficiently managed, we pave the way for a future where computational power is no longer a barrier to innovation but a widely available asset that drives global technological advancement <cit.>. The on-demand accessibility to cutting-edge infrastructure, such as GPU clusters, FPGAs <cit.>, and quantum computing <cit.> from partners like Cambridge Compute Co. <cit.> into the GCX platform, combined with increased budget flexibility from cost savings, empowers organizations to accelerate ideas that were previously unattainable. Data teams can conduct experiments more rapidly and iteratively across diverse hardware types and geographic locations, leveraging cloud interoperability rather than being confined to local limitations. Engineers can easily prototype by utilizing global capacity on demand, prioritizing innovation velocity over perfection. Startups, in particular, benefit immensely from this model, as it allows them to share excess resources inexpensively, avoiding significant capital investments while still validating product-market fit <cit.>. In essence, by addressing the dual constraints of capacity and cost that hinder cloud consumption at scale, Compute as a Commodity (CaaC) introduces new levels of freedom that enhance creativity, productivity, and outcomes. By aggregating fragmented infrastructure across isolated domains, CaaC achieves collective efficiency and democratizes access, thereby removing innovation bottlenecks. In the following chapters, we will cover into the mechanics of standardizing compute units, grading compute resources, creating compute pools, ensuring delivery, managing risks, and building the trading platform, the GCX, that underpins this new marketplace. § EVOLUTION OF COMPUTE: FROM MAINFRAMES TO COMPUTE AS A COMMODITY The shift from capital-intensive on-premises infrastructure to operational expenditure models enabled by cloud computing has revolutionized access to compute resources <cit.>. Cloud computing, pioneered by companies like Amazon Web Services, has allowed businesses to rent capacity and scale rapidly without significant upfront investments <cit.>. This trend is paving the way for CaaC, which aims to make computing power as accessible and affordable as utilities like electricity and water <cit.>. However, compute power has not truly been commoditized yet—this is where we step in to pioneer the next phase of this evolution. By further building on the abundance of underutilized capacity across clouds and data centers, Compute as a Service (CaaS) takes the cloud consumption model to the next level for delivering even better economics and flexibility. The coming CaaC revolution promises to further the cloud's democratization mission and make dynamically scalable computing an inexpensive commodity for fueling innovation of all scales. The concept of cloud computing has been instrumental in enabling the CaaC model to emerge. 
By providing on-demand access to shared compute resources over the internet, cloud computing realizes the vision of utility computing—it allows organizations to consume fundamental computing services without massive infrastructure investments. The origins of cloud computing can be traced back to the 1950s when large-scale mainframes were starting to gain adoption for critical enterprise workloads. Given their exorbitant costs, efficient utilization through sharing of compute time was necessary, leading to the development of optimization techniques like workload consolidation, virtualization, shared storage, and auto-scaling, which came to define the core tenets of cloud computing decades later. By the 1990s, when data centers had proliferated for housing business applications, capacity utilization remained poor due to fragmented systems and intermittent traffic spikes. The internet boom in the 2000s also caused unprecedented growth in web-scale infrastructures built around horizontal scaling. Scholars began exploring models for “computing utilities" much like electricity and water utilities where users could access computing functionality without regard for the underlying delivery. The launch of Amazon Web Services’ (AWS) S3 storage and EC2 compute offerings in 2006 is considered by many as the symbolic start of the modern cloud computing era, which delivered on this promise of utility services <cit.>. It allowed small teams and startups to bypass investing in servers and data centers by renting Amazon’s spare capacity, which had been built out for the retail giant’s peak usage. By exposing their large-scale systems through web services and pioneering the prepaid cloud pricing model, AWS led a paradigm shift that adopted economies of scale <cit.>. Over the next decade, many enterprises migrated select workloads into the AWS public cloud, attracted by flexibility, resilience, and usage-based billing instead of high fixed costs. The ability to get started with no upfront commitment removed barriers to test new ideas and go-to-market quicker. Renting compute removed longer-term risks as well. Google, Microsoft, and others followed AWS’s footsteps by launching their own IaaS and PaaS products to woo customers. A multi-cloud world began taking shape. Today, over 90% of enterprises have a multi-cloud strategy combining on-premises and off-premise environments. Cloud has transitioned from experimentation to a core pillar of enterprise computing. While significant advancements have been made, compute power has not yet been commodified. With the GCX, we're at the forefront of this transformation, leveraging the abundance of underutilized capacity across clouds and data centers to take the cloud consumption model to the next level. The coming CaaC revolution promises to make dynamically scalable computing an inexpensive commodity, fueling innovation of all scales. We are pioneering this next step, aiming to further the cloud’s democratization mission and make high-performance computing accessible for everyone. § COMMODIFICATION OF COMPUTE Compute commodification refers to the process of transforming computational power into a standardized, tradable asset <cit.>. Commoditization, on the other hand, refers to the process where goods become undifferentiated and subject to increased competition <cit.>. In this way, we set out to commodify compute to enable its commoditization. 
Compute commodification involves defining discrete units of compute resources that can be bought, sold, and traded on a marketplace, similar to traditional commodities like electricity, oil, or metals <cit.>. The commodification of compute resources ensures efficient resource allocation and fosters competitive pricing. By creating a standardized measure of compute power, it becomes possible to commodify and trade these resources efficiently and transparently. Transparent and efficient marketplaces for computational resources can significantly enhance resource utilization and market dynamics <cit.>. The core value proposition of CaaC boils down to ready availability of affordable compute capacity for organizations on tap, much like any utility service for modern life such as electricity or piped gas. Computing transitions from an owned asset requiring upfront capital allocation and skilled labor, to an instantly usable operational expenditure that auto-scales to workload needs dynamically <cit.>. The benefits fundamentally stem from separating demand for computing innovation versus supply of computing infrastructure. Enterprises are hence able to convert CapEx investments to more strategic OpEx spends <cit.> that help fuel differentiation for their digital capabilities instead of competing merely at host hardware or data center levels with shrinking marginal utility <cit.>. The commodification of compute resources offers several significant advantages. Firstly, it enhances market liquidity. By standardizing compute units, the market becomes more liquid, with a greater number of buyers and sellers able to participate. This increased liquidity makes it easier to trade compute resources <cit.>. Furthermore, blockchain technology and smart contracts enhance transparency and trust in transactions, providing a secure and reliable platform for trading compute resources <cit.>. Secondly, the commodification of compute resources lowers transaction costs and enables market participants to express a more comprehensive outlook on the price of compute. This predictability and reduction in transaction costs benefit consumers and attract more providers to the market. By offering tools that cater to a diverse set of risk profiles, the GCX ensures that the market is accessible to a wide range of participants, from large-scale data centers to smaller, niche providers. Our goal is not to disrupt existing markets but to expand the overall pie, creating opportunities for all participants. The increased predictability and efficiency of the GCX platform support investment and growth in the compute market, making it an attractive space for innovation and expansion. Thirdly, it provides flexibility and efficiency. Users can purchase compute power as needed, allowing for flexible scaling of resources based on demand. This dynamic allocation helps optimize resource usage and reduce wastage <cit.>. Moreover, access to a marketplace for compute resources means that businesses and individuals can acquire the exact amount of compute power they need, when they need it, without long-term commitments <cit.>. Lastly, commodification promotes accessibility and inclusivity. Compute commodification democratizes access to high-performance computing, allowing smaller businesses and individuals to leverage powerful compute resources that were previously accessible only to large corporations <cit.>. 
Easier access to compute power can spur innovation and development, enabling new applications and technologies that rely on intensive computational resources <cit.>. The substantial investments in AI alone, with over $28 billion raised by private AI companies since 2020, underscore the critical need for efficient compute resource management systems. This economic potential drives the demand for commodifying compute resources to support AI growth. The world is going to go to 80, 90% of all cycles, are going to be AI cycles running on AI processors programmed by data <cit.>. The GCX is a marketplace where compute resources are as easily tradable as traditional commodities. We are building a platform that standardizes compute units, ensures reliable delivery through standardized contracts, and fosters a vibrant ecosystem of buyers and sellers. And standardization of compute will allow GCX to create a robust derivatives market which will allow for better price transparency and risk management for all market participants. By commodifying compute power, we aim to unlock new opportunities for innovation, efficiency, and growth across various industries. § COMPUTE AS THE NEW OIL The demand for computational resources, especially GPUs, has surged dramatically due to advancements in AI and machine learning. What is more, the compute resource market faces significant challenges, including resource allocation inefficiencies, data privacy and security issues, and competitive barriers for small and medium-sized enterprises. The requirements for AI models have grown exponentially, with compute needs increasing by 70 times from GPT-3 to GPT-4. This highlights the urgent need for scalable and efficient compute resource management solutions to meet the growing demands of AI technologies. This increased demand has resulted in a significant supply crunch, making computational power both expensive and difficult to access for many startups and independent developers. The concept that “compute is the new oil" encapsulates this paradigm shift, highlighting the critical role that computational resources play in driving technological innovation and economic growth <cit.>. Decentralized Physical Infrastructure Networks (DePINs) have emerged as a solution to the challenges of accessing computational resources <cit.>. By creating decentralized marketplaces, DePINs enable individuals and organizations worldwide to offer their idle compute power. This approach not only democratizes access to computational resources but also drives down costs through increased competition and resource utilization efficiency <cit.>. Despite their potential, compute DePINs face several economic and technical challenges. Competing with and working alongside centralized providers requires overcoming issues related to scalability, reliability, and cost-effectiveness. However, the integration of blockchain technology and smart contracts within these networks can help mitigate these challenges by providing secure, transparent, and efficient platforms for trading compute resources <cit.>. The imbalance between the supply and demand for compute resources presents a significant market opportunity. By leveraging decentralized solutions, platforms like the GCX can bridge this gap, making computational power more accessible and fostering a more inclusive environment for AI development. 
This democratization of compute power is crucial for enabling innovation across various sectors, from startups to educational institutions and research organizations <cit.>. Building the GCX is vital for several reasons. Firstly, it addresses the growing demand for computational resources by creating a decentralized marketplace that can efficiently match supply with demand. This is essential for ensuring that the burgeoning field of AI and machine learning continues to advance without being hampered by resource constraints. Secondly, the GCX provides a platform that democratizes access to high-performance computing, allowing a broader range of participants, including startups, researchers, and institutions from economically disadvantaged regions, to access the computational power they need to innovate and grow <cit.>. Furthermore, the GCX can enhance the availability and affordability of computational resources by fostering competition among providers and improving resource utilization through its decentralized model. This increased accessibility is crucial for making advanced computational capabilities available to a wider audience. Additionally, by leveraging blockchain technology and smart contracts, the GCX ensures secure, transparent, and efficient transactions, which build trust among participants and promote the stability of the marketplace <cit.>. The establishment of the GCX is not just about meeting current computational demands but also about future-proofing the industry. As AI and machine learning technologies continue to evolve, the need for scalable and flexible computational resources will only increase. The GCX is positioned to meet these needs, ensuring that computational power becomes a readily available and manageable commodity, much like electricity or data storage. This will ultimately drive innovation, economic growth, and technological advancement on a global scale <cit.>. §.§ Is Compute the New Oil? §.§.§ Harnessing Oil Byproducts for the Global Compute Market An innovative approach in the compute resource lifecycle is the transformation of oil and its byproducts into “digital oil.” Excess gas from oil extraction sites, often flared due to insufficient transportation infrastructure, can be repurposed to power compute datacenters. By converting this otherwise wasted energy into electricity, these datacenters can generate substantial computational power, contributing significantly to the global compute pool. Additionally, oil itself can directly fuel these sites, providing a primary energy source for computational tasks. This dual utilization not only offers an additional revenue stream for oil companies but also promotes more sustainable use of natural resources. Integrating traditional energy sources with digital infrastructure enhances the efficiency and profitability of both sectors, showcasing a forward-thinking approach to energy and technology. §.§.§ Harnessing Renewable Energy for the Global Compute Market Beyond traditional and byproduct energy sources, renewable energy offers a sustainable and efficient means to power the global compute market. Solar, wind, hydroelectric, and geothermal energy can all be harnessed to run compute datacenters, reducing the carbon footprint and promoting environmental sustainability. Renewable energy provides a reliable and scalable solution to meet the increasing demand for computational power. 
For instance, solar farms can generate substantial electricity during peak sunlight hours, which can be stored and utilized by compute datacenters. Wind turbines, strategically placed in high-wind areas, can provide consistent energy to power these facilities. Hydroelectric plants, leveraging natural water flow, and geothermal plants, utilizing the Earth’s internal heat, offer continuous energy supplies, ideal for the 24/7 operation of datacenters. Integrating renewable energy sources into the compute infrastructure reduces dependency on fossil fuels and minimizes greenhouse gas emissions. This not only aligns with global sustainability goals but also enhances the resilience and stability of the compute market by diversifying energy sources. Moreover, renewable energy-powered compute datacenters can attract environmentally conscious businesses and investors, fostering innovation and growth in the green technology sector. The adoption of renewable energy in the compute market underscores a commitment to creating a more sustainable and eco-friendly digital economy, ensuring that the expansion of computational power does not come at the expense of the environment. §.§ Compute Resource Lifecycle The lifecycle of compute resources can be analogized to the "upstream, midstream, and downstream" model used in the oil industry. This analogy helps in understanding the stages from creation to end-use of compute resources. §.§.§ Upstream: Resource Creation and Provisioning In the compute resource lifecycle, the upstream stage involves the generation and provision of raw compute resources, akin to oil extraction. Key players in this stage include data centers, cloud service providers, and decentralized compute providers. Data centers are facilities that house the physical servers and infrastructure necessary for generating compute power. Cloud service providers, such as AWS <cit.>, Google Cloud <cit.>, Azure <cit.>, Nexgen Cloud <cit.>, and Oracle <cit.>, offer large-scale compute resources. Additionally, decentralized compute providers are individuals or smaller entities that contribute spare computing power via decentralized networks. §.§.§ Midstream: Resource Management and Distribution The midstream stage involves the aggregation, optimization, and distribution of compute resources, ensuring they are available where and when needed. This stage is analogous to the transportation and storage of oil. Key components include resource scheduling and allocation platforms, task queues and orchestration services, and resource brokerage. Resource scheduling and allocation platforms manage the distribution of compute tasks to available resources, optimizing for efficiency and cost. Task queues and orchestration services handle the queuing and orchestration of compute tasks, ensuring efficient utilization of resources. Resource brokerage involves marketplaces where compute resources are traded, including futures contracts and spot markets. §.§.§ Downstream: Resource Utilization and End-Use The downstream stage, similar to refining and retail in the oil industry, involves the use of compute resources for various applications. End-user applications are the final consumers of compute power, such as AI model training, data processing, and running complex simulations. Enterprise solutions are companies that use compute resources for their business operations, including big data analytics, machine learning, and SaaS products. 
Compute-intensive services rely heavily on compute power and include areas such as video rendering, gaming, and scientific research. §.§.§ Refinery: Compute Optimization and Enhancement The refinery stage in the compute resource lifecycle involves optimization services, similar to oil refineries that convert crude oil into usable products. These services enhance raw compute power into more efficient and valuable resources. AI model optimization techniques improve the performance and efficiency of AI models. Performance tuning involves adjusting and optimizing software and hardware settings to maximize compute efficiency. Resource virtualization allows for more efficient use of physical hardware by creating virtual resources. § LEARNING FROM OIL TO DELIVER COMPUTE RESOURCES In considering how to effectively deliver compute resources, we draw an analogy to the oil industry, where pipeline hubs or ports serve as key delivery points. Similarly, in the realm of compute resources, establishing an abstraction layer that connects consumers with various providers can act as a delivery hub, simplifying access and distribution, see Fig. <ref>. To create a seamless experience, an abstraction layer, akin to a delivery hub, can facilitate the connection between compute consumers and providers. This layer can be viewed as a “delivery API" or “faucet" that manages the flow of compute resources from producers to consumers. This concept is already being explored by companies such as Covalent <cit.>, Aethir <cit.>, Exabits <cit.>, Nosana <cit.>, and io.net <cit.>, which could be potential partners for future collaboration, or alternatively, we could develop this infrastructure ourselves. We are working on the assumption that interoperability will be a lot more widespread sooner rather than later. For instance, Microsoft <cit.> is introducing a transformer into their stack that abstracts against CUDA <cit.>. This is exemplified by Triton, a project developed by OpenAI <cit.>, which aims to provide a more efficient, flexible, and powerful way to implement deep learning models. Triton is designed to achieve high performance without requiring users to write complex CUDA code, thus promoting greater interoperability across various hardware platforms <cit.>. §.§ Industry Examples Several companies exemplify this abstraction layer concept: * Covalent offers a unified API that provides fast and scalable access to historical blockchain data across multiple chains. It simplifies the process of fetching data by maintaining a consistent request and response format, making it easier for developers to build on different blockchains without needing to learn new tools or data structures. This unified approach can be applied to delivering compute resources, ensuring a consistent and efficient user experience. * Aethir specializes in decentralized cloud rendering and compute services, enabling the distribution of high-performance computing power through its decentralized network. This approach reduces reliance on traditional cloud providers and offers an innovative model for delivering compute resources efficiently. * Exabits focuses on providing decentralized cloud services that are robust and scalable. By leveraging a network of distributed nodes, Exabits ensures reliable access to compute power, supporting diverse workloads from AI to large-scale data processing. * Nosana facilitates the distribution of compute resources for CI/CD pipelines <cit.> in a decentralized manner. 
This model helps in optimizing the use of idle compute power, making it accessible to developers globally and enhancing the efficiency of continuous integration and deployment processes. * io.net provides a decentralized computing network that allows machine learning engineers to access scalable distributed clusters at a fraction of the cost of traditional centralized services. By offering instant and permissionless access to global GPU resources, io.net ensures flexibility, speed, and affordability. This network supports a wide range of GPUs and CPUs <cit.>, including advanced Apple silicon chips <cit.>, making it a versatile solution for diverse computing needs. Implementing an abstraction layer for delivering compute resources offers several notable advantages, each contributing to the overall effectiveness and efficiency of the system. Flexibility is one of the primary benefits of an abstraction layer. Users can access compute resources from various providers without being locked into a specific vendor. This flexibility allows organizations to choose the most suitable resources for their specific needs, whether it's due to cost considerations, performance requirements, or specific technical features offered by different providers. By decoupling the compute resource from the provider, the system can seamlessly integrate with new technologies and vendors as they become available, ensuring that users always have access to the best possible resources. Scalability is another significant advantage. An abstraction layer enables the system to dynamically scale based on demand. This means that as the need for compute resources increases or decreases, the system can adjust accordingly, provisioning additional resources during peak times and deallocating them when they are no longer needed. This dynamic scaling ensures that compute resources are always available when required, preventing bottlenecks and ensuring smooth operation even during high-demand periods. Moreover, this scalability can lead to cost savings, as resources are only used when necessary, avoiding the expense of maintaining idle capacity. Efficiency in the allocation and utilization of compute power is also greatly enhanced by an abstraction layer. By optimizing how resources are allocated, the system can improve overall efficiency, reducing both costs and environmental impact. Efficient resource allocation means that fewer resources are wasted, and the compute power is used more effectively, leading to better performance and lower operational costs. Additionally, this optimization can help in reducing the carbon footprint of computing operations, as more efficient use of resources typically translates to lower energy consumption. Incorporating an abstraction layer for delivering compute resources aligns with the principles of decentralization and efficiency. By drawing inspiration from existing models in the blockchain and decentralized compute industries, we can create a robust and scalable infrastructure that meets the growing demands for high-performance computing. This approach not only addresses current challenges but also positions us to leverage future advancements in decentralized technologies, ensuring that our solution remains relevant and competitive. § LEARNING FROM POWER TO ENHANCE COMPUTE RESOURCE TRADING This section expands on the comparative analysis of the power and computational resource markets by specifically focusing on the metrics used in each market. 
This detailed comparison aims to highlight how these metrics influence market operations and the potential for adopting analogous measures across markets to improve performance and stability. Both the compute resource market and the power market share several similarities, such as the need for real-time balancing, price volatility, and the importance of reliable delivery. However, they also have key differences. Unlike power, compute resources can be queued and scheduled, offering opportunities for innovative efficiency optimizations not possible in the power market <cit.>. By adopting and adapting methodologies from the power market, we can enhance the operational efficiency, reliability, and sustainability of the compute resource market <cit.>. In turn, this supports better clarity for power infrastructure builders and their planning, as the optimized scheduling and queuing of compute resources can lead to more predictable and manageable power demands. §.§ Power Market Metrics In the power market, the following metrics play a key role in maintaining efficiency, reliability, and sustainability: * Watt-hours (Wh): Measures the quantity of power used, serving as the fundamental unit for billing and trading. * Frequency and Voltage: Critical for ensuring the stability and quality of the electrical supply. * System Average Interruption Duration Index (SAIDI) and System Average Interruption Frequency Index (SAIFI): Assess the reliability and availability of the power supply. * Power Factor: Indicates the efficiency of power use, affecting how much charge users incur. * Emissions Intensity: Measures the environmental impact per unit of power generated, increasingly relevant in carbon trading schemes. §.§ Compute Resource Market Metrics Similarly, we propose the following metrics for the compute market: * Compute Hours: Analogous to Wh, measuring the quantity of computational work done. * Latency and Throughput: Key performance metrics that determine the efficiency and speed of data processing. * Mean Time Between Failures (MTBF) and Mean Time to Repair (MTTR): Indicators of system reliability and service continuity, similar to SAIDI/SAIFI <cit.>. * Utilization Efficiency: Measures how effectively compute resources are being used, akin to the power factor in electricity. * Energy Efficiency: Reflects the power consumption per unit of compute work, parallel to emissions intensity in highlighting sustainability. Table <ref> provides a side-by-side comparison of key characteristics between power and compute markets. Both markets utilize metrics that measure quantity (Wh and Compute Hours), efficiency (Power Factor and Utilization Efficiency), reliability (SAIDI/SAIFI and MTBF/MTTR), and environmental impact (Emissions Intensity and Energy Efficiency). These parallels suggest that methodologies for optimizing these metrics in one market could be adapted for the other. While both markets share similarities in metric goals, the physical nature of power demands real-time balance and immediate consumption, posing unique challenges not faced in the compute market. Conversely, the compute market’s flexibility in queuing and scheduling tasks offers opportunities for innovative efficiency optimizations not possible in power markets. This flexibility not only enhances compute market operations but also supports power infrastructure planning by providing clearer and more predictable power requirements. 
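To make the proposed compute-market metrics concrete, the sketch below computes MTBF, MTTR, availability, and utilization efficiency from a hypothetical provider outage log; the event-log format and field names are assumptions for illustration, not part of any GCX specification.

from dataclasses import dataclass

@dataclass
class Outage:
    start_h: float  # hours since the start of the reporting period
    end_h: float

def reliability_metrics(outages, period_h, busy_compute_hours, delivered_compute_hours):
    """Illustrative compute-market analogues of the power-market metrics above.

    outages: list of Outage events for one provider over the reporting period.
    period_h: length of the reporting period in wall-clock hours.
    busy_compute_hours: CH actually consumed by scheduled work.
    delivered_compute_hours: CH the provider made available in the period.
    """
    downtime = sum(o.end_h - o.start_h for o in outages)
    uptime = period_h - downtime
    failures = len(outages)
    mtbf = uptime / failures if failures else float("inf")   # mean time between failures
    mttr = downtime / failures if failures else 0.0          # mean time to repair
    utilization = busy_compute_hours / delivered_compute_hours  # analogue of the power factor
    return {"MTBF_h": mtbf, "MTTR_h": mttr, "availability": uptime / period_h,
            "utilization_efficiency": utilization}

# example: two outages in a 720 h month, 40,000 of 50,000 delivered CH consumed
print(reliability_metrics([Outage(100, 102), Outage(500, 501)], 720, 40_000, 50_000))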
The detailed examination of market metrics reveals significant opportunities for mutual learning between the power and compute resource markets. By adopting and adapting methodologies and metrics from the power market, the compute resource market can enhance its operational efficiency, reliability, and environmental sustainability, fostering a more robust and equitable marketplace <cit.>. § STANDARDIZING COMPUTE UNITS To standardize compute units, we propose the Compute Hour (CH), which abstracts the complexities of different hardware architectures and provides a consistent measure of computational work; this defines the compute commodity. Towards this, defining a reference system is a foundational step in this process. The reference system should have a balanced performance profile, encompassing CPU, GPU, memory, and storage capabilities. Benchmarking this system involves running a suite of standardized tests to measure performance across various dimensions, ensuring fair and accurate trading of compute resources <cit.>. Also, it is difficult to use a compute cycle itself as a fundamental unit. A compute cycle represents a single instruction period of a computer's CPU or GPU. One cycle is the time it takes for a CPU or GPU to fetch an instruction, decode it, execute it, and store the result. On the surface, measuring compute resources by counting the number of compute cycles seems straightforward—similar to counting revolutions on a power meter for kWh. This approach suggests a direct measurement of “work done" by the computing systems. However, different CPUs and GPUs have different architectures (e.g., x86 vs. ARM for CPUs, and NVIDIA vs. AMD for GPUs), and their cycles cannot be directly compared. What one CPU or GPU can achieve in a single cycle might take another multiple cycles, or vice versa. Different CPUs and GPUs operate at different clock speeds, meaning the number of cycles per second can vary widely. Thus, compute cycles alone do not provide a uniform standard of measurement across different hardware. Modern CPUs and GPUs can execute multiple instructions per cycle through techniques like pipelining and parallel execution. Therefore, the efficiency of cycle usage can vary, complicating a direct comparison based solely on cycle count. Different CPUs and GPUs support different sets of instructions (Instruction Set Architecture), and some can achieve more per cycle than others depending on the instruction mix. Operating systems and software optimizations play a significant role in how efficiently these cycles are used. The CH abstracts away these complexities and provides a consistent measure of computational work. This ensures fair and accurate trading of compute resources in the marketplace, regardless of the underlying hardware differences: Compute Hours (CH) = Performance (FLOPS)/Reference Performance (FLOPS)×Operational Hours Where Performance (FLOPS) is the actual performance output of the compute system measured in FLOPS. Reference Performance (FLOPS) is the benchmark performance of the reference system, also measured in FLOPS, and Operational Hours, or physical/wall-clock hours, is the duration in hours for which the system is utilized for computing tasks. FLOPS is floating-point operations per second, a measure of computational performance, especially in tasks involving real-number calculations common in AI and machine learning. This formula allows for the normalization of computing performance across different systems and configurations. 
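A minimal sketch of the Compute Hour formula above, assuming benchmark results are expressed in FLOPS for both the measured and the reference system:

def compute_hours(measured_flops, reference_flops, operational_hours):
    """Compute Hours (CH) as defined above: performance is normalized to the
    reference system, then scaled by wall-clock hours of operation."""
    return (measured_flops / reference_flops) * operational_hours

# a system benchmarked at twice the reference delivers 2 CH per wall-clock hour
assert compute_hours(2.0e12, 1.0e12, 1.0) == 2.0
# a system at half the reference performance delivers 0.5 CH per wall-clock hour
assert compute_hours(0.5e12, 1.0e12, 1.0) == 0.5

The two assertions mirror the normalization discussed later in this section: a system twice as fast as the reference earns two Compute Hours per physical hour, while one at half the reference performance earns 0.5.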
By using a reference system as a baseline, the Compute Hours for any system can be calculated by scaling its performance relative to this standard benchmark. This scaling ensures that a Compute Hour represents the same amount of computational work, irrespective of the underlying hardware differences. §.§ Estimating Compute Hours for AI Models Mapping the size of an AI training model to the amount of compute time needed can be approached by understanding the relationship between the model’s architecture, the dataset, and the computational resources used. To learn more about AI models and their architecture, please consult, for example Refs. <cit.>. What follows is a comprehensive way to estimate the compute time based on model size. We assume the model architecture is known in detail, but this does not affect the generality of the result. Consider a Convolutional Neural Network (CNN). The CNN processes an input image through several layers, each performing specific operations. Fig. (<ref>) illustrates a typical CNN architecture used for image classification. * Input: The raw input image fed into the network. * Conv: Convolutional layers that apply filters to extract features from the input image. * Pool: Pooling layers that reduce the spatial dimensions of the feature maps, retaining important information while reducing computational load. * FC: Fully connected layers that combine the features learned by convolutional and pooling layers to make final predictions. * Softmax: The output layer that provides the probability distribution over classes. In what follows, we will consider a general use-case, but the idea is that this can be applied to specific AI models like the one in Fig. (<ref>). §.§.§ Understand Model Architecture * Layers and Parameters: The number of layers, types of layers (e.g., convolutional, fully connected), and the number of parameters in each layer. * Operations: Types of operations involved, such as matrix multiplications, convolutions, and activations. §.§.§ Calculate Floating Point Operations (FLOPs[FLOPS stands for floating-point operations per second, indicating a rate of computation. In contrast, FLOPs is the plural form of floating-point operation, referring to the number of operations. 1 TFLOP is a single teraFLOP which is 10^12 FLOPs.]) * Per Layer FLOPs: Calculate the number of FLOPs for each layer. For example, a convolutional layer’s FLOPs can be estimated using: FLOPs per layer = 2 ×input height×input width×input channels×output channels ×kernel height×kernel width×output height×output width * Total FLOPs: Sum the FLOPs for all layers to get the total FLOPs required for one forward pass. §.§.§ Training Iterations * Epochs: Determine the number of epochs (complete passes through the training dataset). * Batch Size: Number of samples processed at once. * Iterations: Calculate the total number of iterations as total samples / batch size. §.§.§ Example Calculation For a given model, suppose: * Model FLOP: 1 trillion FLOPs (1 TFLOP) per forward pass * Dataset Size: 1 million samples * Batch Size: 256 * Epochs: 10 Total FLOPs Needed = 10^12 Model FLOPs×(10^6 samples/256 batch size) × 10 epochs = 3.9 × 10^16 FLOPs = 39 PFLOPs By specifying further over which duration this needs to be completed we can obtain a rate: This is “Performance (FLOPS)" in Eq. (<ref>); the amount of CH follows from that. The user can now buy this from the GCX when also specifying when the computation needs to take place. To be sure, the concept of Compute Hour can be generalized beyond AI. 
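The worked example above can be scripted end to end. The sketch below follows the text's accounting (forward passes only, model FLOPs taken per batch pass) and then converts the training budget into a sustained FLOPS requirement and a Compute Hour quantity to purchase; the 24-hour deadline and the 1 TFLOPS reference rate are hypothetical inputs chosen for illustration.

def training_flops(model_flops_per_pass, n_samples, batch_size, epochs):
    """Total training FLOPs following the worked example above (forward passes only;
    backward-pass overhead is ignored, as in the text)."""
    iterations = (n_samples / batch_size) * epochs
    return model_flops_per_pass * iterations

def required_rate_and_ch(total_flops, deadline_hours, reference_flops_rate):
    """Sustained FLOPS needed to finish by the deadline, and the corresponding
    Compute Hours relative to the reference system."""
    required_flops_per_s = total_flops / (deadline_hours * 3600.0)
    ch = (required_flops_per_s / reference_flops_rate) * deadline_hours
    return required_flops_per_s, ch

total = training_flops(1e12, 1_000_000, 256, 10)   # ~3.9e16 FLOPs, as in the example
rate, ch = required_rate_and_ch(total, deadline_hours=24.0,
                                reference_flops_rate=1e12)  # hypothetical 1 TFLOPS reference
print(f"{total:.2e} FLOPs, sustained {rate:.2e} FLOPS, {ch:.1f} CH to buy on the GCX")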
Finally, let us say the AI model details are not known. We can still estimate the total number of FLOPs by using a reference GPU. This is something that can be provided on the GCX as well. This method allows us to guide users to the appropriate market for trading and provides valuable recommendations based on the computational requirements. §.§ Benchmarking and Normalization As part of defining the Compute Hours we introduced a reference system. This is a foundational step in standardizing compute units and facilitating the commodification of compute resources. By leveraging established benchmarking practices from the supercomputing industry <cit.>, we can create a robust and credible framework for measuring and comparing computational power. This framework ensures that all compute resources are evaluated on a consistent and fair basis, promoting transparency and efficiency in the compute marketplace. Benchmarking involves running a suite of standardized tests to measure a system's performance across various dimensions, such as processing speed, memory bandwidth, and input/output operations. The results of these tests are then compared to the reference configuration to determine the equivalent number of Compute Hours. For example, if a high-performance server is twice as fast as the reference configuration in executing the benchmark suite, it would be equivalent to two Compute Hours per physical hour of operation. Conversely, a less powerful system that achieves half the benchmark performance would be rated at 0.5 Compute Hours per physical hour. §.§.§ Key Areas for Universal Benchmark Consensus §.§.§ Performance Metrics To standardize computational performance, we propose using established benchmarks such as HPL (High Performance Linpack), SPEC CPU, and real-world application benchmarks. For instance, implementing the HPL benchmark can effectively measure the floating-point performance of a supercomputer cluster. Key contributions in this area come from several researchers. The authors in <cit.> have significantly contributed through the TOP500 and HPL benchmarks, been influential in establishing MPI <cit.> standards and benchmarks, worked on performance analysis in exascale computing providing valuable insights, and advancing the field on multithreaded algorithm performance. §.§.§ Energy Efficiency Energy efficiency is critical for optimizing computational resource utilization. Measuring power consumption under various loads using tools like PowerAPI <cit.> and integrating energy-saving techniques such as dynamic voltage and frequency scaling (DVFS) are essential steps. For example, DVFS can optimize energy consumption during periods of low usage. Here, energy-efficient algorithms exist <cit.>, while the work of <cit.> in energy-efficient supercomputing is well-regarded. Research on energy efficiency with DVS also provides significant insights <cit.>. §.§.§ Scalability Scalability is crucial for accommodating the growing demand for computational power across heterogeneous systems. Developing benchmarks that test scalability by combining CPUs, GPUs, and accelerators is essential. A mix of LINPACK and custom benchmarks can evaluate performance across different architectures and configurations. Work on scalable supercomputing architectures from <cit.>, contributions to scalability in exascale architectures from <cit.>, and research on scalable communication patterns from <cit.> are pivotal in this domain. 
§.§.§ Portability Ensuring that benchmarks can run on various platforms (e.g., x86 <cit.>, ARM <cit.>, GPUs <cit.>) without modification is fundamental for a universal compute benchmark. Testing benchmarks on different cloud providers and on-premise hardware ensures consistent performance metrics. Here, the work of <cit.> on the portability of LAPACK and BLAS libraries, contributions to portability in TSUBAME <cit.> systems, and research on portability in MPI applications <cit.> are key references in this area. §.§.§ Fault Tolerance Integrating fault injection and recovery tests into the benchmarking suite is essential to measure system resilience. Tools like Chaos Monkey <cit.> can simulate failures and measure a system's ability to recover and maintain performance. §.§.§ Cost-effectiveness Developing economic models that compare performance-per-dollar across different configurations is vital for identifying cost-effective solutions. Using real-world cost data from cloud providers and hardware vendors, we can calculate the total cost of ownership and return on investment for various compute setups. Here, <cit.> has research on cost models for graph computing and <cit.> has done work on cost-effective HPC/AI solutions which are instrumental in developing these economic models <cit.>. §.§ Defining the Reference System The establishment of a reference system is pivotal in standardizing compute units and ensuring fair benchmarking across different compute resources. A reference system serves as a benchmark against which the performance of other systems is measured. This process is akin to how supercomputers are benchmarked to assess their performance. §.§.§ Criteria for the Reference System The reference system must be a well-documented and widely recognized configuration to ensure credibility and consistency. The following criteria are considered when defining the reference system: * Performance: The reference system should have a balanced and representative performance profile, encompassing CPU, GPU, memory, and storage capabilities. * Availability: It should be based on commercially available hardware to ensure that the benchmarks are reproducible. * Documentation: Comprehensive documentation of the hardware and software configurations, including operating systems and drivers, must be available. * Scalability: The system should be scalable to allow for adjustments in benchmarking as technology advances. In the supercomputing world, benchmarks like the HPL are used to evaluate and rank supercomputers <cit.>. Additional benchmarks like HPCG (High Performance Conjugate Gradients) <cit.>, STREAM (memory bandwidth) <cit.>, and SPEC CPU (single-thread performance) <cit.> can also be used for specific aspects of performance. The HPL benchmark measures a system's floating-point computing power and is the basis for the TOP500 list of the world's most powerful supercomputers <cit.>. Similarly, we propose using a standardized benchmark suite to evaluate and define the reference system. §.§.§ Benchmarking Process Benchmarking GPUs to define a compute hour involves measuring the performance of the GPU in terms of computational tasks over a specific period. The first step is to define the benchmarking metrics. 
These include FLOPS (Floating Point Operations Per Second), which measures the raw computational power; throughput, which measures the number of tasks or operations the GPU can handle per unit of time; latency, which measures the time taken to complete a single operation or task; power consumption, which measures the energy efficiency of the GPU; and memory bandwidth, which measures the speed at which data can be read from or written to the GPU’s memory. Next, select benchmarking tools and software. Established tools such as SPECviewperf <cit.>, Geekbench <cit.>, 3DMark <cit.>, GPGPU <cit.> Benchmarks like Rodinia <cit.> or Parboil <cit.>, and Deep Learning <cit.> Benchmarks such as MLPerf <cit.> are commonly used. Set up the testing environment, ensuring the GPU is installed in a standard configuration that matches common usage scenarios and that the necessary drivers, benchmarking tools, and specific software libraries are installed. Run synthetic benchmarks to simulate various computational tasks, which could include matrix multiplications, deep learning model training, rendering tasks, or other GPU-intensive operations. Tools like CUDA-Z <cit.>, OpenCL Benchmark <cit.>, Blender <cit.> for rendering tests, and TensorFlow/PyTorch <cit.> for deep learning tasks are useful for this purpose. In addition to synthetic benchmarks, run real-world benchmarks using applications relevant to the target use case, such as Blender for rendering, Adobe Premiere Pro <cit.> for video editing, and TensorFlow/PyTorch for machine learning. Collect and analyze the performance data, measuring metrics such as FLOPS, task completion time, throughput, and power consumption during the benchmarks. Monitoring tools like NVIDIA-SMI <cit.> can log performance data. Normalize the results to standardize the compute hour definition. For example, if a GPU completes a certain benchmark task in 30 minutes, it would have utilized 0.5 compute hours if the task is defined as a 1 compute hour benchmark. Aggregate the performance data to define a compute hour for the GPU, based on the average performance across different benchmarks or a weighted score that emphasizes specific tasks. Examples of existing processes for benchmarking include MLPerf, which provides a suite of benchmarks for evaluating machine learning performance, including training and inference benchmarks for various neural network models. SPECviewperf benchmarks graphics performance by running a series of tests using real-world applications, measuring the ability of GPUs to handle 3D graphics workloads. Geekbench <cit.> offers a cross-platform benchmark that measures CPU and GPU performance, including tests for various compute tasks. The software 3DMark <cit.> is widely used for benchmarking gaming performance, running a series of graphics and computational tests to evaluate the capabilities of GPUs in gaming scenarios. * Setup: Install an NVIDIA RTX 3080 <cit.> in a test machine with the latest drivers. Set up TensorFlow with CUDA and cuDNN <cit.>. * Synthetic Benchmark: Run CUDA-Z to measure FLOPS. Execute TensorFlow benchmarks with a ResNet-50 model <cit.> to measure training time and throughput. * Real-World Benchmark: Use Blender to render a high-resolution 3D scene. Run inference benchmarks with TensorFlow using a pre-trained BERT model <cit.>. * Data Collection: Record the time taken for each task. Measure power consumption using NVIDIA-SMI. Log memory bandwidth utilization during the tasks. * Analysis: Calculate the average performance metrics. 
Normalize the results to define the compute hour based on task completion times and energy efficiency. * Report: Publish the normalized results, defining the compute hour for the RTX 3080 based on the aggregated performance data. By following these steps and leveraging existing benchmarking tools and methods, the compute hour for a GPU can be accurately defined and standardized, allowing for fair and transparent trading in platforms like the GCX. §.§.§ Normalization and Conversion Once the reference system's performance metrics are established, other compute resources can be normalized to this standard. The normalization process involves comparing a system's benchmark results to those of the reference system. The formula for calculating Compute Hours, as previously discussed, is applied to perform this conversion. For example, if the reference system achieves 1 TFLOP in the benchmark test, and another system achieves 0.5 TFLOPs, the latter would be assigned 0.5 Compute Hours for every physical hour of operation. Conversely, a system achieving 2 TFLOPs would be assigned 2 Compute Hours per physical hour. §.§.§ Updating the Reference System As technology evolves, it is crucial to periodically update the reference system to reflect advancements in computational hardware and software. Regular updates ensure that the benchmarking process remains relevant and accurate. The criteria for updates include: * Technological Advancements: Incorporating new hardware and software innovations. * Market Trends: Reflecting shifts in the computational resource market. * Feedback from Stakeholders: Considering input from compute providers and users. By maintaining an up-to-date reference system, we ensure that the commodification of compute resources remains fair, transparent, and aligned with current technological standards. §.§ Grading Compute Resources and ESG Compute resources can vary widely in terms of performance, reliability, and suitability for different tasks. To facilitate transparent and efficient trading, we will grade compute resources based on their benchmark performance and additional criteria such as uptime, reliability, and energy efficiency. Performance grades categorize compute resources into tiers based on their benchmark results. For instance: * Grade A: High-performance systems with benchmark results significantly above the reference configuration. * Grade B: Systems with performance moderately above the reference configuration. * Grade C: Systems with performance close to the reference configuration. * Grade D: Systems with performance below the reference configuration. Reliability is crucial for many computational tasks. Compute resources will be graded on their historical uptime and failure rates. For instance: * Grade 1: Systems with uptime exceeding 99.9%. * Grade 2: Systems with uptime between 99.0% and 99.9%. * Grade 3: Systems with uptime between 95.0% and 99.0%. * Grade 4: Systems with uptime below 95.0%. Energy efficiency is increasingly important in modern computing <cit.>. Compute resources will be graded based on their energy consumption relative to their performance. For instance: * Grade X: Highly energy-efficient systems with low power consumption per Compute Hour. * Grade Y: Moderately energy-efficient systems. * Grade Z: Systems with higher energy consumption. 
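To make the normalization and grading rules above concrete, the following minimal sketch (in Python) converts a measured benchmark result into Compute Hours against the 1 TFLOPS reference used in the example and maps systems onto the grade tiers. The function names and the numeric cut-offs for Grades A through D are illustrative assumptions; only the uptime thresholds and the reference example are taken from the text.

# Minimal sketch: normalize benchmark results to Compute Hours and assign
# illustrative grades. Cut-offs beyond those stated in the text are assumptions.

REFERENCE_TFLOPS = 1.0  # reference system performance from the example above


def compute_hours(measured_tflops: float, wall_clock_hours: float) -> float:
    """Compute Hours accrued relative to the reference system.

    A system sustaining 0.5 TFLOPs earns 0.5 Compute Hours per physical hour;
    a 2 TFLOPs system earns 2 Compute Hours per physical hour.
    """
    return (measured_tflops / REFERENCE_TFLOPS) * wall_clock_hours


def performance_grade(measured_tflops: float) -> str:
    """Map performance relative to the reference onto the A-D tiers (assumed cut-offs)."""
    ratio = measured_tflops / REFERENCE_TFLOPS
    if ratio >= 2.0:
        return "A"   # significantly above reference
    if ratio >= 1.2:
        return "B"   # moderately above reference
    if ratio >= 0.8:
        return "C"   # close to reference
    return "D"       # below reference


def reliability_grade(uptime_pct: float) -> str:
    """Uptime tiers as stated above."""
    if uptime_pct > 99.9:
        return "1"
    if uptime_pct >= 99.0:
        return "2"
    if uptime_pct >= 95.0:
        return "3"
    return "4"


if __name__ == "__main__":
    # Example: a GPU node that sustains 1.6 TFLOPs for a 10-hour job with 99.5% uptime.
    print(compute_hours(1.6, 10.0))   # -> 16.0 Compute Hours
    print(performance_grade(1.6))     # -> "B"
    print(reliability_grade(99.5))    # -> "2"

In practice the same calculation would be driven by the aggregated, weighted benchmark score described above rather than a single TFLOPS figure, but the conversion and tiering logic remain the same.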
Implementing a grading system for compute resources based on performance, reliability, and energy efficiency can significantly promote the use of ESG (Environmental, Social, and Governance)-friendly sources of compute. Clear and transparent benchmarks allow users to make informed decisions about the sustainability and efficiency of their computational resources. For instance, high-performance grades (e.g., Grade A) combined with high energy efficiency grades (e.g., Grade X) will enable users to select top-tier compute resources that not only meet their performance needs but also align with their environmental sustainability goals. The resulting price differences will encourage data centers and compute providers to adopt greener technologies and practices to achieve higher grades, thereby reducing the carbon footprint of computational activities. Ultimately, this fosters a market where environmental responsibility and efficiency are rewarded, driving innovation and sustainability in the computing industry. § BLOCKCHAIN-BASED CLEARING AND GUARANTEE POOLS IN THE GCX At the heart of the GCX lies the innovative concept of clearing and guarantee pools, where participants and guarantors play a crucial role in ensuring the integrity and efficiency of the compute market. Critical to a well-functioning commodities futures market are the following elements: * Convergence of Futures Price to Spot Price: At expiration, the futures price must converge to the actual spot price of the underlying commodity. This is typically accomplished by ensuring that holders of long positions in futures contracts can take delivery of the underlying asset, while short position holders are obligated to deliver the asset as per the contract terms. * Adaptive Risk/Margin System: A well-defined risk/margin system that can adapt to changing market conditions is essential. Mechanisms must be in place to ensure that sufficient margin is held by market participants. * Margin Maintenance and Liquidation Mechanisms: Ensuring that market participants maintain sufficient margin and providing mechanisms to liquidate positions that fail to meet margin requirements are crucial for market stability. * Backstop Liquidity Pool: A liquidity pool that can cover losses in case liquidated accounts do not have sufficient funds to meet their obligations is crucial. This pool acts as a safety net to ensure market stability. In traditional financial markets, points 1 and 2 have long been established by major exchanges. However, the industry is transitioning from older portfolio risk models (e.g., CME SPAN) to more modern Value at Risk (VaR) methods. For the GCX, we have the opportunity to build a margin system based on the latest advances in risk management techniques, unencumbered by legacy systems. Regarding points 3 and 4, these functions were traditionally provided by clearing members at major exchanges. Non-clearing market participants must have their trades guaranteed by a clearing member, who is ultimately responsible for ensuring the financial obligations of its customers. Clearing members must also keep customer funds segregated from their own. In the context of the GCX, Guarantee Pools can assume the role of traditional clearing firms by ensuring adequate margin is kept by their customers, that such margin is segregated from the pool’s funds, and that liquidations can be enforced. Blockchain technology can facilitate this process.
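To make the margin and liquidation mechanics concrete, the sketch below (in Python, as illustrative pseudologic rather than a production risk engine) checks a participant's collateral against a simple volatility-scaled margin requirement and flags accounts for a margin call or liquidation. The parametric-VaR style formula, the 99% quantile, the 75% maintenance level, and the function and field names are assumptions for illustration; an actual GCX risk engine would use a full VaR methodology as described above.

import math

# Illustrative assumption: a simple parametric-VaR style margin requirement.
Z_99 = 2.33  # ~99% one-tailed normal quantile


def initial_margin(notional: float, daily_vol: float, horizon_days: float = 1.0) -> float:
    """Margin requirement as a volatility-scaled fraction of position notional."""
    return notional * daily_vol * Z_99 * math.sqrt(horizon_days)


def check_account(collateral: float, positions: list[dict]) -> str:
    """Return the account state a guarantee pool might act on."""
    required = sum(
        initial_margin(p["contracts"] * p["price"] * p["size"], p["daily_vol"])
        for p in positions
    )
    if collateral >= required:
        return "margin ok"
    if collateral >= 0.75 * required:   # assumed maintenance level at 75% of initial
        return "margin call"
    return "liquidate"


if __name__ == "__main__":
    # One long futures position: 10 contracts x 100 Compute Hours x $2.50/CH, 4% daily vol.
    positions = [{"contracts": 10, "size": 100, "price": 2.50, "daily_vol": 0.04}]
    print(check_account(collateral=200.0, positions=positions))  # -> "margin call"

The same check, run continuously against marked-to-market prices, is what allows a pool to enforce liquidations before its own collateral or the Insurance Fund described next is ever touched.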
Additionally, an Insurance Fund can serve as a final backstop against any financial losses resulting from a pool being unable to meet its obligations. All pools would be required to stake tokens to the Insurance Fund in proportion to their overall risk. The exchange itself (GCX) would also stake tokens into the Insurance Fund to align all parties’ interests in maintaining proper risk controls. Furthermore, outside participants can contribute tokens to the Insurance Fund in return for yield, enhancing the pool’s robustness and providing additional incentives for risk management. §.§ Layered Architecture of the GCX The GCX is structured into several distinct layers, each with specific roles and responsibilities. This multi-layered architecture ensures a robust and efficient operation of the compute market, leveraging blockchain technology for enhanced transparency and security. The layers and their interactions are illustrated in Fig. (<ref>). §.§.§ Market Layer The Market Layer comprises the participants who interact with the GCX to trade compute resources. These participants include: * Hedgers: Entities seeking to hedge against price volatility in compute resources. * Traders: Participants who engage in buying and selling compute resources to profit from price movements. * Short Sellers: Entities that sell compute resources they do not currently own, betting on future price declines. * Market Makers: Entities that provide liquidity to the market by continuously quoting buy and sell prices. These market makers, leveraging their high trading volumes, may opt to purchase GCX tokens to benefit from reduced transaction fees. §.§.§ App Layer The App Layer consists of applications developed by various companies, including the GCX itself. These apps provide user interfaces and frontends for interacting with the GCX platform. Key components include: * GCX's App: The primary application developed by the GCX for trading compute resources. * Company 2's App: An example of a third-party application integrated with the GCX. * Company 3's App: Another example of a third-party application integrated with the GCX. * User Interface & Frontend: Interfaces that allow users to interact with the apps and manage their trading activities. * API & SDK (Application Programming Interface and Software Development Kit): A set of protocols, tools, and libraries provided by GCX that allows developers to create and integrate applications seamlessly with the GCX platform. This enables faster development and integration of third-party apps. §.§.§ Clearing Layer The Clearing Layer facilitates smooth and secure trade settlements. It includes guarantors who provide trade guarantees and manage the delivery of compute resources. Guarantors also serve as gatekeepers to the GCX, ensuring market participants maintain sufficient collateral to cover their risk both pre- and post-trade. Key components include: * Guarantor 1, 2, 3: Entities that guarantee the fulfillment of trades. Guarantors may be subject to losses of their own collateral if a customer's collateral is insufficient to cover losses in case of default. This incentivizes Guarantors to maintain proper risk controls over their customers. * Proof of Compute/Capacity (e.g., via DePIN + Truebit): Mechanisms to verify the availability and delivery of compute resources. * Delivery of Compute: The process of delivering compute resources as per the contract terms. 
§.§.§ Risk Management Layer The Risk Management Layer monitors and manages the risks associated with trading on the GCX. It includes: * APIs & SDKs: Tools for developers to integrate their applications with the GCX and to embed risk management features in those applications. * Risk Engine: Monitoring and Analytics: Systems that analyze and monitor risk factors to ensure market stability. The risk engine also scores participants and safeguards the overall health of the platform. §.§.§ Exchange Layer (Offchain) The Exchange Layer manages the off-chain operations of the GCX, including matching buy and sell orders and executing trades. It is also responsible for determining which products and derivatives are listed and establishing risk calculation methods. Key components include: * Global Compute Exchange (GCX): The core exchange platform for trading compute resources. * Marketplaces (Spot, Derivatives: Futures, Options): Various marketplaces within the GCX for trading different types of compute contracts. §.§.§ Blockchain Layer (Onchain) The Blockchain Layer manages the onchain operations, leveraging blockchain technology for transparency and security. It includes: * Smart Contract Ecosystem: A system of smart contracts that govern the operations of the GCX. * Token Interface (GCX Token): The primary currency/utility token used within the GCX ecosystem. * Token Collateral: Tokens used as collateral for trades, including collateral tokens (USDC, USDT) and proof of compute tokens. * Customer Collateral (Segregated): Collateral provided by customers, kept separate from the pool’s funds. * Guarantor Pool Collateral, GCX tokens, Proof of Compute: Collateral provided by guarantors to back their guarantees. * Insurance Pool: A pool of funds to cover losses in case of defaults. * Staking and Slashing: Mechanisms to ensure compliance and penalize non-performance. * Yield Aggregation/Distribution: Systems to distribute earnings to participants. * Liquidation Engine: Systems to liquidate positions that fail to meet margin requirements. * Connectors to broader DeFi Ecosystem: Integrations with the broader decentralized finance ecosystem for additional liquidity and services. We discuss the utility token in more detail in Sec. (<ref>). §.§ Process Flow: Clearing and Guarantee Pools The process flow for clearing and guarantee pools within the GCX involves multiple steps designed to ensure that the platform operates smoothly and securely. §.§.§ Contract Execution When a market participant initiates a trade through the app layer, their guarantor acts as a gatekeeper. Before submitting the trade to the GCX matching engine, the guarantor ensures that the participant has sufficient collateral to cover the associated risk. §.§.§ Compute Delivery Verification The participant selected to provide compute attempts to deliver the contracted resources. Successful delivery results in the compute being provided to the buyer, while failure to deliver results in the slashing of the participant’s staked tokens. §.§.§ Exercising an Option and Delivery Exercising an option would typically result in the delivery of its underlying futures contract and therefore would not trigger immediate delivery of compute resources. OTC (over-the-counter) and spot trades are settled directly by the counterparties involved. The primary scenario for delivery to occur is when a futures contract reaches expiration.
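A minimal sketch of the gatekeeping and delivery-verification steps just described is given below, written in Python as off-chain pseudologic rather than on-chain contract code. The class and method names, the collateral rule, and the 50% slashing fraction are illustrative assumptions, not parameters defined by the GCX.

class Guarantor:
    """Illustrative gatekeeper: admits a trade only if the participant's
    segregated collateral covers the assumed margin on the new position,
    and slashes staked tokens if contracted compute is not delivered."""

    SLASH_FRACTION = 0.5  # assumed penalty on failed delivery

    def __init__(self):
        self.collateral = {}   # participant -> segregated collateral
        self.stake = {}        # participant -> staked tokens backing delivery

    def admit_trade(self, participant: str, required_margin: float) -> bool:
        """Pre-trade check before the order is sent to the matching engine."""
        return self.collateral.get(participant, 0.0) >= required_margin

    def settle_delivery(self, participant: str, delivered_hours: float,
                        contracted_hours: float) -> float:
        """Return tokens slashed from the participant's stake on short delivery."""
        if delivered_hours >= contracted_hours:
            return 0.0
        slashed = self.SLASH_FRACTION * self.stake.get(participant, 0.0)
        self.stake[participant] = self.stake.get(participant, 0.0) - slashed
        return slashed


if __name__ == "__main__":
    g = Guarantor()
    g.collateral["alice"] = 500.0
    g.stake["bob"] = 1000.0
    print(g.admit_trade("alice", required_margin=400.0))                        # True: order forwarded
    print(g.settle_delivery("bob", delivered_hours=80, contracted_hours=100))   # 500.0 tokens slashed

On chain, the same checks would be encoded in smart contracts, with slashed tokens routed to the affected buyer or the Insurance Fund rather than simply deducted.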
§.§.§ Futures Contract Expiration and Delivery We envision the delivery of compute hours primarily occurring as a result of a GCX futures contract coming to expiration; see Fig. (<ref>). We take inspiration from existing commodities markets' delivery procedures. For example, in the CME WTI Crude Oil[These are futures contracts for West Texas Intermediate (WTI) crude oil that are traded on the CME Group’s commodity exchange, which includes the New York Mercantile Exchange (NYMEX) <cit.>. These contracts are used by investors and companies to hedge against price fluctuations in the oil market or to speculate on future price movements.] contract <cit.>, buyers can choose from several delivery methods including interfacility transfer, in-facility transfer, or transfer of title <cit.>. Similarly, for compute hours, buyers may be offered choices under contract terms specifying different forms of delivery. Since each guarantor is responsible for its open, deliverable positions at the time of expiration, it must guarantee the delivery of the compute hours specified in any short futures positions still open at expiration. As futures approach expiration, a guarantor may require the holder of short futures positions to verify that they can deliver the contracted compute hours. If they cannot, such short positions can be liquidated to ensure market stability and integrity. §.§.§ Verification of Compute and Capacity Verification of compute delivery can be automatically ensured using verification protocols similar to those implemented by Truebit <cit.>. Truebit's technology works by having a verifier challenge the computations performed by a solver. If a solver’s computation is challenged and found incorrect, the solver is penalized, ensuring the integrity and correctness of the computations. This mechanism not only ensures that the computation was performed but also verifies its accuracy without the need for all nodes in the network to redo the work. This approach significantly increases the efficiency and scalability of blockchain-based compute platforms. In addition to verifying that compute tasks were correctly performed, it is also crucial to verify the capacity and availability of compute resources. This means ensuring that the source of compute is indeed available and capable of performing the required tasks. Verification of capacity can be achieved through regular audits and performance checks integrated into the smart contracts. These checks ensure that providers have the necessary resources available and are not oversubscribing their compute capacity. This dual verification process of both compute and capacity enhances the reliability and trustworthiness of the GCX platform. §.§ The GCX Utility Token Over time, one can imagine the GCX evolving into a sovereign chain or rollup, using GCX tokens as the underlying/core token. This would further enhance the platform's efficiency, scalability, and security, providing a robust foundation for global compute trading. The GCX utility token serves as the backbone of the Global Compute Exchange (GCX) at the Blockchain Layer, offering a versatile and robust utility for our users. Designed as a utility token, GCX provides access to various functionalities and benefits within our ecosystem, ensuring efficient and effective economic incentives. The utility token is crucial for accessing the exchange's preferred trading rates, posting collateral for trading activities, and participating in the future app chain.
§.§ Membership and Trading Benefits GCX tokens can be purchased by users to acquire member seats on the GCX exchange at the Exchange Layer, similar to how seats are bought on traditional exchanges like the NYSE. These seats grant users preferred trading rates and other exclusive privileges, making GCX an essential asset for active traders. §.§ Collateral and Staking In addition to membership benefits, GCX tokens are used as collateral for short selling and margin trading. This feature allows traders to leverage their positions while maintaining economic stability within the exchange. Tokens used as collateral are staked, ensuring that operators deliver the required compute resources efficiently. If an operator fails to deliver as promised, their staked tokens can be slashed, providing a built-in mechanism for maintaining accountability and performance <cit.>. This aligns with our commitment to creating a transparent and reliable marketplace for computational resources. §.§ Transition to an App Chain As the platform evolves, the GCX token will become the core token of our own app chain. This transition will enhance the token's utility, making it integral to the operation and governance of the new blockchain layer. Users will continue to benefit from the token's utility, with expanded functionalities and opportunities within the app chain ecosystem. §.§ Burn Mechanism and Token Circulation GCX tokens have a built-in burn mechanism, ensuring a controlled supply and increasing token value over time. Tokens can be slashed and burned if operators fail to meet performance standards, effectively removing them from circulation. Additionally, we will introduce new rules for burning and issuance to further stabilize and enhance the GCX ecosystem: * Transaction Burn: A small percentage of tokens will be burned from every transaction on the GCX exchange, reducing the total supply gradually. * Performance-Based Burn: Operators with poor performance metrics may face additional token burns, incentivizing optimal service delivery. * Issuance Cap: New GCX tokens will only be issued based on network growth and demand, maintaining a balanced and sustainable token economy. * Operator Rewards: Operators within the GCX ecosystem may also be rewarded with GCX tokens based on their holdings and performance. This reward system encourages operators to maintain high standards and contribute positively to the network's growth and efficiency. The GCX utility token is a crucial element of our ecosystem, driving membership, trading benefits, collateral management, and operator accountability. With its transition to an app chain and innovative burn mechanisms, our token is poised to become a cornerstone of our growing platform, ensuring long-term value and stability for all users. This approach not only aligns with current trends in digital transformation but also positions GCX at the forefront of the computational resource market’s evolution. §.§ Conclusion The GCX provides a versatile platform that caters to various stakeholders in the compute market, enabling them to hedge against price volatility, secure future prices, and generate additional yield. As illustrated in Fig. (<ref>), the GCX allows different actors to leverage its functionalities to their advantage. Alice, an AI startup founder, uses futures contracts to lock in the price of compute, safeguarding against future price increases <cit.>.
Bob, anticipating a market correction, buys put options to secure a price floor, thereby minimizing potential losses <cit.>. Carol, a data center operator, sells both call and put options to collect premiums, generating additional yield. By utilizing the GCX, these actors can effectively manage their risks and optimize their financial strategies in the dynamic compute market <cit.>. In all cases, the GCX enables various expressions of market sentiment, allowing participants to profit from and protect against the evolution of compute prices, whether they anticipate an increase, decrease, or sideways movement. This flexibility ensures that the GCX can benefit a diverse range of market participants, each with their unique perspectives and strategies. § CONTRACT FOR TRADING COMPUTE HOURS Contracts between buyers and sellers in a marketplace for computational resources need to be detailed and comprehensive to ensure fairness and stability. Quality specifications should include parameters such as CPU/GPU speed, RAM, and storage capacity. Pricing terms must reflect current market conditions and be adjustable based on supply and demand dynamics. Dispute resolution mechanisms, such as arbitration, should be clearly outlined to handle any conflicts that arise. Leveraging blockchain technology can further enhance contract management by ensuring transparency and immutability <cit.>. As AI technologies evolve, regulatory frameworks such as the EU AI Act have been developed with the aim of ensuring ethical and responsible use <cit.>. These regulations necessitate robust systems for managing and verifying compute resources to ensure compliance. We will structure the contracts by drawing on established practices in power and other commodity markets[We plan to finalize the contract in consultation with legal counsel, ensuring full compliance with all applicable laws and regulations.] and leverage blockchain technology to standardize and track these contracts while maintaining privacy and compliance. Blockchain technology and smart contracts are already transforming various sectors, proving their utility beyond mere conceptual applications. For instance, in real estate, companies like Propy <cit.> are facilitating property transactions where the entire buying process, including title transfers, is executed through smart contracts, reducing transaction time and eliminating traditional intermediaries. In the art and music industry, Glair utilizes blockchain to manage copyrights <cit.>. In the realm of supply chain management, Everledger leverages blockchain to provide immutable histories of high-value items like diamonds, ensuring the authenticity and legality of the supply chain, which aids in compliance with trade regulations <cit.>. Additionally, legal document management systems are beginning to adopt blockchain to secure digital signatures and notarizations that stand up to legal scrutiny and reduce fraudulent activities <cit.>. The implementation of smart contracts on the GCX platform to manage trading contracts for computational resources is a natural extension of these proven applications. By automating contract execution, ensuring transaction integrity, and providing transparent compliance tracking, smart contracts can offer a robust and efficient framework for trading computational power.
This approach not only mitigates risks and lowers operational costs but also aligns with current trends in digital transformation, positioning GCX at the forefront of the computational resource market’s evolution. Smart contracts on a blockchain can be designed using standardized templates that encapsulate common terms and conditions needed for compute resource trading. By using blockchain, all parties access the same contract versions, eliminating discrepancies and ensuring that transactions are executed based on mutually agreed terms. Smart contracts can automatically execute transactions based on coded conditions, such as verifying completion and Quality of Service (QoS) adherence before releasing payments. Automation reduces administrative burdens and accelerates processes by eliminating manual steps in contract management and payments. Every transaction and amendment is recorded on the blockchain, providing a transparent and unchangeable history. Enhanced transparency ensures easy auditability and trust, as all actions are traceable and verifiable. Smart contracts can include provisions for updates or upgrades as market conditions, regulations, or technology evolves. Updates can be managed through decentralized voting among stakeholders to ensure that changes are democratic and broadly agreed upon. Blockchain's cryptographic nature makes contracts secure from unauthorized alterations. The immutable and transparent nature of blockchain simplifies the resolution of disputes. Incorporating blockchain and smart contracts in managing compute resources can significantly enhance efficiency, trust, and adaptability, creating a robust digital trading platform. § THE COMPUTE MARKETPLACE The GCX trading platform revolutionizes the way compute hour contracts are traded, providing a seamless, secure, and efficient marketplace for both buyers and sellers. By streamlining the process from request to delivery, we aim to ensure that computational resources are utilized effectively, supporting a wide range of industries and applications. Additionally, the GCX's modular approach allows for flexibility and customization, enabling various applications to be built on top of the platform. This means that marketplaces can choose to integrate only specific aspects of the GCX, such as options, futures, perpetual contracts, or any combination thereof. This flexibility ensures that the GCX can meet the diverse needs of different market participants and use cases. In later sections, we illustrate the GCX trading process through detailed examples that encompass the experiences of both buyers and sellers. Please consult, e.g., <cit.> for the underlying derivatives trading theory. The GCX is designed to accommodate a wide array of compute needs and resource offerings. It integrates advanced matchmaking algorithms with a user-friendly interface to facilitate transactions between buyers and sellers efficiently. The platform supports a variety of contract types, including spot and futures contracts for compute hours, each with customizable terms to suit the specific requirements of users. Prices for compute hour contracts are determined based on supply and demand dynamics, ensuring fair pricing that reflects current market conditions. All transactions are secured with blockchain technology, providing transparency and integrity to every contract. The platform's algorithms automatically match sellers' offerings with buyers' needs, optimizing resource allocation and utilization.
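To illustrate the matching step in the simplest possible terms, the sketch below implements a bare-bones matcher for Compute Hour orders in Python. It is an off-chain toy, not the GCX matching engine: the order fields, the price-time priority rule, and the example prices are assumptions for illustration.

import heapq
from itertools import count

_seq = count()  # time-priority tie-breaker


def add_ask(book, price, hours):
    """Resting sell order: lowest price (then earliest arrival) at the top of the heap."""
    heapq.heappush(book, (price, next(_seq), {"price": price, "hours": hours}))


def match_buy(book, bid_price, hours):
    """Match an incoming buy order against the ask book; return (fills, unfilled hours)."""
    fills = []
    while book and hours > 0 and book[0][0] <= bid_price:
        _, _, ask = book[0]
        qty = min(hours, ask["hours"])
        fills.append((ask["price"], qty))
        hours -= qty
        ask["hours"] -= qty
        if ask["hours"] == 0:
            heapq.heappop(book)
    return fills, hours


if __name__ == "__main__":
    asks = []
    add_ask(asks, price=2.40, hours=50)    # $/Compute Hour, offered quantity
    add_ask(asks, price=2.50, hours=100)
    fills, unfilled = match_buy(asks, bid_price=2.45, hours=80)
    print(fills, unfilled)  # -> [(2.4, 50)] 30  (remaining 30 hours rest unfilled)

A production engine would add order types, partial-fill reporting, and market-data dissemination, but the price-time priority core shown here is the conventional starting point for such systems.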
§.§ Spot Trading of Compute Spot trading refers to the purchase and sale of commodities for immediate delivery and payment. In the context of compute resources, spot trading involves buying and selling compute hours that can be utilized immediately upon transaction completion. This type of trading is essential for industries and applications that require on-demand computational power without the need for long-term contracts or future deliveries. While the GCX exchange itself is not technically necessary for spot trading, since no settlement is required, the GCX app serves as a critical marketplace where buyers and sellers can meet, negotiate, and finalize deals; over time, other companies and projects can build their own apps on top of our stack via API and SDK access. Thus, more generally speaking, the App Layer handles spot trading; see Fig. (<ref>) for the various Layers and how they interact. The primary objective of the GCX app is to standardize the way buyers and sellers find each other and make transactions, thus facilitating a seamless and efficient marketplace. The GCX app enables spot trading of compute resources by: * Standardizing Contracts: By standardizing contract terms as much as possible, the GCX app simplifies the negotiation process for counterparties, making it easier for them to agree on deals. This standardization reduces complexity and enhances the efficiency of the marketplace. * Facilitating Connections: The app provides a platform where buyers and sellers can easily find each other and negotiate terms. It acts as a matchmaking service, ensuring that computational needs are met with available resources. * Incentivizing Participation: To encourage trading activity, the GCX app rewards participants with tokens for their trades. These incentives promote engagement and contribute to the success of the platform. By focusing on pure over-the-counter (OTC) spot trading in the beginning stages of the GCX app, we eliminate the need for clearing pools or settlement processes on day one. Instead, the GCX app provides a streamlined environment for buyers and sellers to find each other and negotiate deals. This approach allows the platform to learn from real-world trading activity and adapt to the needs of its users. §.§ Derivatives Trading of Compute Derivatives trading involves financial contracts whose value is derived from an underlying asset, in this case compute hours. Common derivatives include futures, options, and perpetual contracts (perps). These instruments allow market participants to hedge against price fluctuations, speculate on future price movements, and manage risk more effectively. * Futures Contracts: Agreements to buy or sell compute hours at a predetermined price on a specified future date. These contracts help participants lock in prices and secure compute resources for future use. * Options Contracts: Provide the right, but not the obligation, to buy or sell compute hours at a predetermined price before a specified expiration date. Options offer flexibility and risk management strategies for market participants. * Perpetual Contracts (Perps): Similar to futures but without an expiry date. They enable continuous trading of compute hours and are settled periodically based on market conditions. This can be enabled via the Blockchain Layer; see Fig. (<ref>). In contrast to spot trading, which involves immediate delivery, derivatives trading allows participants to plan and strategize for future computational needs and costs.
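For intuition on how these instruments behave at settlement, the short Python sketch below computes per-Compute-Hour payoffs for a long futures position, a long call, and a long put under hypothetical prices. The entry price, strike, and premium values are made-up inputs rather than market data, and real contracts would also account for margining and fees.

def long_futures_payoff(settlement: float, entry: float) -> float:
    """Gain or loss per Compute Hour for a futures buyer who locked in `entry`."""
    return settlement - entry


def long_call_payoff(settlement: float, strike: float, premium: float) -> float:
    """Right (not obligation) to buy at `strike`; loss is capped at the premium paid."""
    return max(settlement - strike, 0.0) - premium


def long_put_payoff(settlement: float, strike: float, premium: float) -> float:
    """Right (not obligation) to sell at `strike`; hedges falling compute prices."""
    return max(strike - settlement, 0.0) - premium


if __name__ == "__main__":
    # Hypothetical inputs: futures bought at $2.00/CH; options struck at $2.00 for a $0.10 premium.
    for settlement in (1.50, 2.00, 2.50):
        print(settlement,
              round(long_futures_payoff(settlement, entry=2.00), 2),
              round(long_call_payoff(settlement, strike=2.00, premium=0.10), 2),
              round(long_put_payoff(settlement, strike=2.00, premium=0.10), 2))
    # At $1.50 the put hedge pays off (+0.40/CH); at $2.50 the futures and call positions gain.

These payoff profiles are exactly what the buy-side and sell-side hedging examples later in this section exploit: buyers use futures and calls against rising prices, while producers use puts against falling prices.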
Spot trading is essential for derivatives because it provides the actual compute hours that underlie the derivatives contracts, ensuring that the market has the necessary liquidity and resource availability. The GCX exchange is crucial for derivatives trading as it handles the settlement of these contracts. By evolving from the insights gained through the GCX app, the GCX exchange can develop into a robust platform capable of managing the complexities of derivatives trading. The GCX platform supports derivatives trading by offering: * Comprehensive Contract Types: A variety of derivatives contracts, including futures, options, and perps, each customizable to meet the specific needs of buyers and sellers. * Advanced Trading Tools: Features such as real-time market data, analytical tools, and risk management options to help participants make informed trading decisions. * Onchain Trading: The integration of blockchain technology allows for secure, transparent, and efficient onchain trading of derivatives, ensuring the integrity of every contract. By enabling both spot and derivatives trading, the GCX platform overall provides a complete ecosystem for trading compute resources. Spot trading ensures immediate access to compute hours, while derivatives trading offers tools for future planning and risk management. Together, these capabilities support a wide range of market participants, from those needing immediate computational power to those seeking to hedge against future price fluctuations or invest in the compute market. In the following sections, we will explore several examples of both spot and derivatives trading, illustrating how the GCX platform’s commodification of compute can revolutionize access to computational resources, resulting in significant impacts across various industries: §.§ Example 1: Hedging Strategies for Compute Resource Volatility Let's consider a specific hedging example where Alice wants to mitigate compute resource costs for her startup <cit.>: §.§.§ Alice's Scenario: Buy-side Hedge Alice is the founder of an AI-focused startup seeking venture funding. She anticipates needing 500 TFLOP Hours per month starting in six months, with an ongoing need of 1,000 TFLOP Hours per month after six months of training. To mitigate the risk of higher than expected compute costs, Alice can use the GCX platform to hedge against price volatility <cit.>. §.§.§ Option 1: Futures Contracts Alice can purchase futures on the GCX marketplace to lock in her expected compute needs. This allows her to: * Purchase more or sell excess compute power if her needs change. * Roll her existing futures forward or backward to adjust the timing of delivery. * Take delivery of the compute hours at the contract price or sell the futures to lock in gains if prices rise. §.§.§ Option 2: Options Strategies Alice can use various options strategies to protect against changes in her compute needs: * Buy Call options to protect against increased prices for additional compute hours. * Buy Put options to protect against needing less compute and falling prices. * Use a combination of Call and Put options (Straddle or Strangle) to protect against both upside and downside changes in compute needs. §.§.§ Advantages * Alice can reduce her exposure to changing compute prices and requirements. * She can focus on her business without worrying about price volatility. * Venture capitalists are more comfortable investing due to reduced risk. 
§.§.§ Disadvantages * Trading on an exchange may subject Alice to maintenance margin requirements. * Her needs may not exactly match standard futures contracts. §.§.§ Sell-side Hedge: Bob's Scenario Bob is building new datacenters for his cloud-based compute business. He needs to lock in prices for future compute sales to secure financing and mitigate the risk of falling prices <cit.>. §.§.§ Option 1: Sell Futures Contracts Bob can sell long-term futures on the GCX marketplace to lock in prices. As contracts near expiration, they can be rolled into shorter-term contracts. §.§.§ Option 2: Buy Put Options Bob can buy put options on his futures contracts to protect against downside price risk while allowing for upside gains. §.§.§ Advantages * Bob can confidently build new datacenters knowing he has locked in profitable pricing. * Lenders are more likely to finance his venture due to reduced risk. §.§.§ Disadvantages * Selling futures or call options may lead to large margin calls if compute prices rise rapidly. * Standardized contracts may not match Bob’s specific needs. As mentioned, another advantage for both Alice’s and Bob’s businesses is that their investors experience reduced price volatility and more predictable financial outcomes, making these ventures more attractive and secure. §.§ Example 2: Optimizing Yield for Datacenter Operators Carol owns a large datacenter and is profitable at current compute prices. A significant cost of running her datacenter is power, so she can choose to turn off her datacenter if power costs outweigh the compute revenues. Carol would like to take advantage of the optionality embedded in being able to power her datacenter up and down. She is profitable selling at current compute prices and would like to earn additional yield to generate steady income and smooth out her long-term revenues <cit.>. §.§.§ Option 1: Selling Call Options Carol can sell strips of call options, which will generate a cash premium when sold. This strategy is most advantageous when volatility in compute prices is high, since option prices are highly correlated with expected volatility in the underlying asset’s price. Selling options at times of high volatility both takes advantage of the high premium to generate additional yield and reduces the variability in Carol’s future cash flows. Carol can sell call options with strikes set to the current futures price of compute, or she can opt to sell options with strikes that are higher than the current price of compute. The choice of strike price depends on the option volatility smile (skew). If the implied volatility of higher-strike call options is greater than that of at-the-money strikes, Carol can take advantage of this condition to gain additional premium by selling these options. This situation can develop when there is high uncertainty in the markets and fear of prices spiking higher. If the implied volatility of higher-strike call options is lower than that of at-the-money strikes, Carol may wish to sell call options closer to at-the-money. This is the typical situation in many markets due to the natural tendency of producers to hedge. §.§.§ Option 2: Selling Both Puts and Calls To provide additional yield, Carol can take on a riskier position by selling puts in addition to calls.
This strategy would be more advantageous in a market where put option prices are trading at a significant volatility premium relative to call prices, often the case in a producer-hedger dominated market in times of lower volatility. By selling puts, Carol may be obligated to buy compute at the strike price at a time when the market price, though perhaps only somewhat lower than today’s prices, is significantly below that strike, thus resulting in significant losses. However, Carol has the option to turn off her datacenter at these times, so while she will end up losing money on the options if prices drop, she can shut down her own production, putting her in a position as if she were simply unhedged on downside price moves from the start. This strategy carries risks but can be very advantageous when there is a strong put skew in the market. Carol can offset some of this risk by buying an additional put option at a strike below the put option she sold. This will reduce her yield but can return her to a fully hedged position, allowing her to take advantage of yield and smooth out her returns. Advantages: * Selling options generates a premium that provides additional cash flow up front, offering more working capital. * Selling call options helps smooth out revenue streams, beneficial for generating more consistent earnings over time. * Selling put options allows Carol to generate additional yield from the optionality embedded in the ability to turn off her production at any time. Disadvantages: * Selling calls may result in missed opportunities to sell compute at very high prices during extreme demand spikes. * Selling puts may result in large losses if the market price of compute drops significantly, effectively losing the optionality of turning off compute and leaving the downside unhedged. In summary, the GCX marketplace represents a significant leap forward in the trading of computational resources. By leveraging blockchain technology and smart contracts, we ensure that all transactions are secure, transparent, and efficient. This innovative approach not only facilitates dynamic and fair pricing but also enhances the trust and reliability of the platform. As demonstrated through our detailed examples, the GCX platform is poised to support a wide array of compute needs across diverse industries, driving the commodification of compute resources and unlocking new possibilities for growth and innovation. §.§ Example 3: Enabling Financial Trading on the Global Compute Exchange Amazon Web Services (AWS) <cit.> is a leading supplier of cloud computing resources, while we imagine that Citadel <cit.>, a major hedge fund, engages in financial trading of compute resources on the GCX. Citadel makes strategic bets on the future direction of compute prices, requiring large quantities of compute resources to support its trading activities. To facilitate these trades, Citadel purchases substantial compute resources from AWS <cit.>. AWS provides a vast array of compute resources on the GCX platform. Citadel, operating as a financial trader, needs to acquire large quantities of these resources to execute its trading strategies. By leveraging the GCX, Citadel can efficiently buy and sell compute resources, similar to how it would trade commodities like oil. §.§.§ Option 1: Long-term Contracts with AWS Citadel can enter into long-term contracts with AWS on the GCX platform. This allows Citadel to: * Secure compute resources at a fixed price, protecting against future price increases and enabling predictable financial planning.
* Ensure a consistent supply of compute resources, essential for maintaining trading volumes and liquidity in the compute market. §.§.§ Option 2: Spot Market Purchases Citadel can also utilize the spot market on the GCX platform to buy compute resources from AWS as needed. This strategy offers: * Flexibility to purchase compute power based on real-time demand, potentially benefiting from lower prices during periods of low demand. * The ability to dynamically adjust resource levels to align with market conditions and trading activity. §.§.§ Advantages * GCX provides a transparent and efficient marketplace for trading compute resources, ensuring fair pricing and reliable transactions. * Long-term contracts with AWS offer Citadel price stability and secure supply, crucial for executing large trades. * The spot market allows Citadel to optimize costs by purchasing compute resources at favorable prices during “compute off-peak” times. §.§.§ Disadvantages * Long-term contracts may limit Citadel's ability to take advantage of potential future price drops in the spot market. * Relying solely on the spot market can expose Citadel to price volatility and potential shortages during high demand periods. Through the GCX, Citadel can efficiently manage its compute resource portfolio by purchasing in bulk from AWS and leveraging these resources for financial trades. This enables Citadel to place large bets on the future direction of compute prices, providing liquidity and depth to the market. The use of smart contracts ensures that transactions are executed automatically and transparently, with clearly defined terms and conditions. Both AWS and Citadel benefit from the security and compliance features embedded in the GCX platform. Smart contracts ensure that all transactions are secure, transparent, and auditable, meeting regulatory requirements and protecting sensitive data. By leveraging the Global Compute Exchange, AWS can supply compute resources to Citadel, enabling high-volume financial trading and liquidity in the compute market. GCX provides a robust and flexible platform for managing compute resource transactions, offering stability and security in the dynamic compute market. §.§ Example 4: Enhancing Financial Analytics for Fintech Startups The fintech AI modeling startup Nearest helps wealth managers better track portfolios, identify trends, and mitigate risks using sophisticated analytics on historical timeseries data. However, its small data science team struggled with long lead times for accessing GPU servers for model training experiments on Azure, costing tens of thousands of dollars despite modest needs. By leveraging the auto-scaling pools of the GCX compute application (built on top of the GCX exchange[The “GCX compute application” is in the GCX App Layer, whereas the “GCX Exchange” is in the Exchange Layer.]) together with cloud credits, the team could run iterations much faster without the hassle of managing the underlying infrastructure. Nearest, an AI-focused fintech startup, requires significant computational resources for running complex financial models. Traditional cloud services like Azure were proving too costly and time-consuming, impeding their ability to innovate and deliver timely insights to clients. §.§.§ Advantages * Nearest can speed up their model training processes, delivering insights to clients faster. * Reduced infrastructure management burden allows the team to focus on enhancing their analytics capabilities.
* Cost savings from cloud credits and dynamic scaling improve financial efficiency and flexibility. §.§.§ Disadvantages * Relying on cloud credits and dynamic scaling may lead to unpredictable costs if not managed carefully. * The performance of auto-scaling pools can vary based on market demand and availability. In summary, the GCX marketplace enables fintech startups like Nearest to overcome the high costs and inefficiencies associated with traditional cloud services. By leveraging auto-scaling pools and cloud credits, Nearest can enhance their financial analytics capabilities, providing better services to their clients while maintaining cost-effectiveness and operational efficiency. §.§ Example 5: On-Demand Rendering for Media Production Animation studio Pixar needed on-demand rendering capacity for a new animation film without major upfront capital costs before release. By leveraging the GCX platform, which links low-priority capacity across render farms, Pixar saved 30% in costs with a faster turnaround for the CGI-heavy project by paying only for the exact hours consumed. The auto-scaled capacity also reduced risks of output delays or resource saturation issues. §.§.§ Pixar’s Scenario: Utilizing the GCX Application Pixar, a leader in animation production, requires extensive computational resources for rendering high-quality animation films. Traditional methods involving significant upfront capital investments in rendering farms were not feasible for the tight production timelines and budget constraints of their new project. §.§.§ Leveraging Low-Priority Capacity Pixar can use the GCX marketplace to access low-priority capacity across render farms. This allows them to: * Utilize idle compute resources for rendering tasks, significantly reducing costs. * Scale resources dynamically to meet the demands of the project, ensuring timely completion. * Pay only for the exact hours consumed, optimizing their budget and avoiding unnecessary expenditures. §.§.§ Advantages * Pixar achieves significant cost savings by utilizing low-priority capacity and paying only for consumed resources. * The auto-scaling feature ensures timely project completion, maintaining high-quality output. * Reduced infrastructure management allows Pixar to allocate more resources to creative development. §.§.§ Disadvantages * Depending on low-priority capacity may lead to variability in resource availability during peak times. In summary, the GCX marketplace provides a robust solution for media production companies like Pixar, enabling them to access on-demand rendering capacity without the burden of significant upfront capital costs. By leveraging low-priority capacity, Pixar can deliver high-quality animation projects efficiently and cost-effectively. § CONCLUSION We have introduced the Global Compute Exchange (GCX) platform, a groundbreaking solution for the commodification and efficient trading of computational resources. By standardizing compute units through Compute Hours (CH) we have created a robust framework that ensures fair, transparent, and efficient trading. This approach allows for a seamless comparison and utilization of diverse computing resources, much like the established practices in electricity markets. The GCX helps to address critical inefficiencies in the current compute landscape, such as underutilization and price volatility, ensuring equitable access to computational power, stimulating innovation, and supporting diverse user needs on a global scale. 
The GCX platform’s architecture is divided into several layers, each with specific functions to ensure robust and efficient operation: * Market Layer: Comprises participants such as hedgers, traders, short sellers, and market makers who interact with the GCX to trade compute resources. * App Layer: Consists of applications developed by various companies, including the GCX app, which provides user interfaces and frontends for interacting with the GCX platform. This is where spot trading of compute takes place natively. Derivatives trading also happens here, but in this case it settles on the Exchange Layer below. * Clearing Layer: Facilitates smooth and secure trade settlements, with guarantors ensuring the delivery of compute resources and managing collateral. * Risk Management Layer: Monitors and manages risks associated with trading on the GCX, including APIs, SDKs, and a risk engine for monitoring and analytics. * Exchange Layer (Offchain): Manages the off-chain operations of the GCX, including matching buy and sell orders and executing trades. * Blockchain Layer (Onchain): Manages onchain operations, leveraging blockchain technology for transparency and security, including smart contracts, token collateral, and staking mechanisms. The GCX App Layer facilitates spot trading by standardizing contract terms, enabling buyers and sellers to efficiently negotiate deals and execute trades. This approach not only enhances market efficiency but also provides valuable insights into market dynamics, helping to inform the development of future products and services. Incentivizing participants with tokens further promotes engagement and contributes to the platform’s success. Derivatives trading, initiated in the GCX App Layer and settled in the Exchange Layer, introduces sophisticated financial instruments such as futures, options, and perpetual contracts. These tools allow participants to hedge against price fluctuations, speculate on future price movements, and manage risk effectively. While the trading of derivatives takes place on the GCX app, the settlement of these contracts is managed by the GCX exchange. The integration of blockchain technology ensures secure, transparent, and efficient onchain trading, maintaining the integrity of every contract. Through spot and derivatives trading, GCX provides a complete ecosystem for trading compute resources, supporting a wide range of market participants. This dual capability ensures immediate access to compute hours for urgent needs and offers strategic tools for future planning and risk management. The GCX platform’s potential to revolutionize access to computational resources is profound. By turning compute hours into a tradable commodity, GCX optimizes resource utilization, stabilizes pricing, and democratizes access to computational power. This democratization is crucial for enabling innovation across various sectors, from startups to large enterprises, fostering a more inclusive and dynamic technological landscape. § GLOSSARY * CPU (Central Processing Unit): The primary component of a computer, responsible for most of the processing it performs. It executes instructions from programs, performing basic arithmetic, logic, control, and input/output (I/O) operations. CPUs are characterized by their versatility and are designed to handle a wide variety of tasks in computing systems. * GPU (Graphics Processing Unit): A specialized processor designed to accelerate graphics rendering.
GPUs can process many parallel tasks more efficiently than CPUs, making them ideal for handling complex computations in graphics rendering, machine learning, and scientific simulations. They are particularly known for their ability to perform floating-point calculations required in rendering 3D graphics and AI. * GPGPU (General-Purpose computing on Graphics Processing Units): GPGPU is the utilization of a GPU, which is typically used for rendering graphics, to perform computation traditionally handled by the CPU. This technique leverages the parallel processing capabilities of GPUs to accelerate a wide range of applications beyond graphics, including scientific simulations, machine learning, data analysis, and more. * AI (Artificial Intelligence): The simulation of human intelligence processes by machines, especially computer systems. AI involves the development of algorithms and software to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. Key areas of AI include machine learning, where systems learn and improve from experience, and deep learning, which involves neural networks with many layers. * Compute Hour (CH): A unit of measure representing the computational effort equivalent to one hour of processing on a reference system with a specified performance, typically measured in teraFLOPS (trillions of floating-point operations per second). Example: If a system performs 2 teraFLOPS, it will generate 2 Compute Hours per hour of operation. * Epochs: An epoch refers to one complete pass through the entire training dataset. During each epoch, the model is trained on all the training data once. Training typically involves multiple epochs, and the number of epochs is a crucial hyperparameter that impacts the model’s learning and generalization abilities. Too few epochs can lead to underfitting, where the model does not learn adequately from the training data, while too many epochs can cause overfitting, where the model learns noise and performs poorly on new data. * Batch Size: The batch size is the number of training samples processed before the model’s internal parameters are updated. Smaller batch sizes can make training noisier but help models generalize better, whereas larger batch sizes can make the training process faster but may lead to overfitting. The choice of batch size affects the stability and speed of the training process and is another critical hyperparameter that needs careful tuning. * Iterations: An iteration refers to one update of the model’s parameters and is completed once a single batch of data has been processed. The number of iterations per epoch is determined by the size of the dataset divided by the batch size. For instance, if the dataset contains 1,000 samples and the batch size is 100, it will take 10 iterations to complete one epoch. * Smart Contract: Self-executing contracts with the terms of the agreement directly written into code. These contracts automatically enforce and execute the terms when predefined conditions are met, running on a blockchain to ensure transparency and immutability. Example: A smart contract on the GCX platform could automatically transfer payment to a compute provider once the specified compute hours are delivered and verified. * Blockchain: A decentralized digital ledger that records transactions across many computers in such a way that the registered transactions cannot be altered retroactively. 
This technology ensures transparency, security, and trust in the data recorded. Example: Blockchain technology underpins the GCX platform, ensuring that all transactions are secure and transparent. * Token Staking: The process of locking tokens in a blockchain-based system to support the network's operations, such as transaction validation, and in return, earning rewards. In the context of GCX, operators stake tokens as a commitment to provide computational resources. Example: Operators on GCX stake tokens to join a compute pool; these tokens are slashed if they fail to deliver the promised compute hours. * Decentralized Governance: A governance structure where decision-making is distributed among all stakeholders rather than being centralized in a single entity. This approach often involves voting mechanisms to make decisions on protocol updates, policies, and other governance matters. Example: GCX may use decentralized governance to allow token holders to vote on changes to smart contracts or platform policies. * GCX (Global Compute Exchange): A platform facilitating a marketplace designed to standardize and commoditize computational resources. GCX facilitates the buying and selling of compute resources by providing a transparent and efficient trading system. It leverages standardized benchmarks to measure and trade compute units, ensuring fair and consistent evaluation of computational power across various providers. * Dynamic Pricing: A pricing strategy where the price of a commodity fluctuates based on real-time supply and demand conditions. In the GCX marketplace, dynamic pricing ensures fair market value for compute hours. Example: The price of compute hours on GCX might increase during peak demand periods and decrease when demand is low. * Predictive Analytics: The use of statistical algorithms and machine learning techniques to analyze historical data and make predictions about future events. In GCX, predictive analytics help anticipate compute resource needs and preempt potential failures. Example: GCX uses predictive analytics to forecast demand for compute hours and ensure resources are allocated efficiently. * FLOP: A FLOP, or Floating Point Operation (we take FLOPs to be the plural form), is a single floating-point calculation and the basic unit of computational work, especially crucial in tasks involving real-number calculations common in AI and machine learning algorithms. The corresponding rate, FLOPS, quantifies the number of such calculations a system can perform per second, with higher units like teraFLOPS (TFLOPS) representing one trillion and petaFLOPS (PFLOPS) representing one quadrillion floating-point operations per second. Supercomputers performing hundreds of petaFLOPS can handle complex simulations and large-scale data analyses. In AI, models requiring extensive computational power, such as deep learning networks, often measure their demands in teraFLOPs or even petaFLOPs, reflecting the immense resources needed for processing and training. * FLOP-Hours: A measure of computational work that represents the number of floating-point operations a system performs over an hour of operation. It is used to quantify and compare the computational work delivered by different systems. Example: A system that sustains 1 trillion floating-point operations per second (1 TFLOPS) for one hour delivers 1 TFLOP-Hour of work. * PetaFLOP (PFLOP): A PetaFLOP is a measure of computational speed, representing one quadrillion (10^15) floating-point operations per second (FLOPS).
It is used to quantify the performance of supercomputers and other high-performance computing systems. A supercomputer with a peak performance of 1 PetaFLOP can perform 10^15 floating-point calculations every second. * FLOPS (Floating Point Operations Per Second): A measure of computer performance, especially in fields of scientific calculations that require floating-point calculations. It indicates how many floating-point arithmetic operations a computer can perform in one second. Higher FLOPS values indicate greater computational power. * Call Option: A financial contract that gives the buyer the right, but not the obligation, to purchase a specified amount of compute resources at a predetermined price within a specified time period. This is used to hedge against or speculate on price increases. * Put Option: A financial contract that gives the buyer the right, but not the obligation, to sell a specified amount of compute resources at a predetermined price within a specified time period. This is used to hedge against or speculate on price decreases. * Futures Contract: A standardized legal agreement to buy or sell compute resources at a predetermined price at a specified time in the future. This allows companies to hedge against price volatility and secure stable compute costs. * Spot Market: A public financial market in which compute resources are traded for immediate delivery. Prices in the spot market are determined by current supply and demand. * Spot versus Derivatives: Spot trading involves immediate delivery of assets, while derivatives trading involves contracts based on future asset delivery and price speculation, such as futures and options. * OTC (Over-the-Counter): OTC refers to the process of trading financial instruments directly between two parties without the supervision of an exchange. These trades occur in a decentralized market where participants trade directly with one another, typically through a network of dealers and brokers. * Volatility Smile (Skew): A pattern in which options with different strike prices exhibit varying levels of implied volatility. Higher strike prices may have different implied volatilities than at-the-money strikes, affecting the pricing and attractiveness of these options. * Straddle Option: An options strategy involving the purchase of both a call and put option with the same strike price and expiration date. This is used to profit from large price movements in either direction. * Strangle Option: An options strategy involving the purchase of a call and put option with different strike prices but the same expiration date. This is used to profit from large price movements in either direction, with a lower cost than a straddle. * Dynamic Pricing System: A mechanism that adjusts the price of compute resources in real-time based on current market supply and demand conditions, ensuring fair and transparent transactions. * Smart Contract: A self-executing contract with the terms of the agreement directly written into code. Smart contracts automatically execute and enforce agreements when predetermined conditions are met, ensuring transparency and reducing the need for intermediaries. * Decentralized Task Queue: A system that allows users to submit compute tasks to a distributed network, where tasks are dynamically allocated to available resources based on a consensus mechanism. 
* Resource Matching Queue: A decentralized system that matches and allocates compute resources to tasks based on real-time demand and resource availability, optimizing resource utilization. * Hedging: The practice of making financial investments to reduce the risk of adverse price movements in an asset. In the context of compute resources, hedging involves using financial instruments like futures and options to manage price volatility. * Yield Generation: The process of earning returns on an investment. In the context of compute resources, this involves strategies like selling options to generate additional income and stabilize revenue streams. * Embedded Optionality: The inherent flexibility in an asset or operation that allows the owner to make strategic decisions based on changing conditions. For example, the ability to power up or down a datacenter in response to power costs and compute revenues. * Carbon Footprint: The total amount of greenhouse gases produced directly or indirectly by human activities. In the context of compute resources, managing and reducing the carbon footprint is crucial for sustainability. * Compute as a Commodity (CaaC): A paradigm that treats computing power as a commodity, similar to utilities like electricity and water. CaaC allows organizations to access vast compute resources on-demand and pay only for what they use, without the need for significant upfront investments. This model leverages the scalability and flexibility of cloud computing, enabling businesses to innovate more rapidly and efficiently by aggregating underutilized capacity across multiple providers and geographies. * Compute as a Service (CaaS): A cloud computing model that provides computing resources as a service over the internet. CaaS allows businesses to rent processing power, storage, and networking infrastructure on a pay-as-you-go basis. This model simplifies the deployment and management of IT resources, offering scalability, cost efficiency, and flexibility. Organizations can quickly scale their infrastructure to meet changing demands without the need for substantial capital investments in physical hardware. * Infrastructure as a Service (IaaS): A cloud computing model that provides virtualized computing resources over the internet. IaaS offers fundamental IT resources such as virtual machines, storage, and networking on a pay-as-you-go basis. It allows businesses to rent and manage these resources without having to purchase and maintain physical hardware. IaaS is highly scalable and flexible, enabling organizations to dynamically adjust their infrastructure to meet changing demands. Examples of IaaS providers include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). * Platform as a Service (PaaS): A cloud computing model that provides a platform allowing customers to develop, run, and manage applications without dealing with the underlying infrastructure. PaaS offers a complete development and deployment environment, including tools, libraries, services, and infrastructure, to support the entire application lifecycle. This model abstracts much of the complexity of building and maintaining the underlying hardware and software layers, allowing developers to focus on coding and innovation. Examples of PaaS providers include Google App Engine, Microsoft Azure App Services, and Heroku. * Perps (Perpetual Futures): A type of derivative contract in financial markets that has no expiration date. 
Traders can hold positions indefinitely, allowing for continuous speculation on the price of an underlying asset, often with leverage. * Perpetual Options: Financial derivatives that grant the holder the right, but not the obligation, to buy or sell an underlying asset at any time without an expiration date, providing flexibility and continuous trading opportunities. * DeFi (Decentralized Finance): A financial ecosystem built on blockchain technology that eliminates intermediaries, offering a wide range of financial services such as lending, borrowing, and trading through decentralized protocols and smart contracts. This technology also allows for a seamless integration of Perps and Perpetual Options to be implemented and traded on the GCX. * Market Layer: This layer consists of various market participants including hedgers, proprietary traders, short sellers, and market makers who own GCX tokens. These participants engage in trading activities and strategies within the compute resource market. * App Layer: This layer comprises the user interfaces and frontend applications, including GCX's app and other companies' apps. It also includes the APIs and SDKs necessary for developing and integrating applications with the Global Compute Exchange. * Clearing Layer: The layer responsible for ensuring the verification and delivery of compute resources. It includes guarantors who provide proof of compute capacity and manage the delivery of compute resources, ensuring the integrity and reliability of the transactions. * Risk Management Layer: This layer focuses on identifying, assessing, and mitigating risks associated with trading compute resources. It involves risk identification, data collection and monitoring, risk assessment, and the implementation of risk models and mitigation strategies. * Exchange Layer (Offchain): The offchain component of the Global Compute Exchange that handles the trading infrastructure, including the Exchange API and SDK. It supports various marketplaces for spot, derivatives, futures, and options trading. * Blockchain Layer (Onchain): The foundational layer utilizing blockchain technology to ensure secure, transparent, and immutable transactions. It includes a smart contract ecosystem, token interfaces, and collateral mechanisms such as staking, slashing, and insurance pools. This layer integrates with the broader DeFi ecosystem and manages customer and guarantor collateral. * Offchain: Refers to processes and transactions that occur outside the blockchain network. In the context of the Global Compute Exchange, offchain activities include the trading infrastructure, APIs, and SDKs that facilitate the exchange of compute resources without directly recording each transaction on the blockchain. This allows (for now) for faster and more scalable operations, with periodic synchronization to the blockchain for settlement and verification. * Onchain: Refers to processes and transactions that are executed and recorded directly on the blockchain network. In the context of the Global Compute Exchange, onchain activities include the use of smart contracts, token interfaces, and collateral management. These onchain mechanisms ensure transparency, security, and immutability, as every transaction is permanently recorded on the blockchain, enabling decentralized and trustless interactions. * Blockchain Network: A decentralized digital ledger that records transactions across many computers in such a way that the registered transactions cannot be altered retroactively. 
Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data. Examples include: * Ethereum: A blockchain network that supports smart contracts and decentralized applications (dApps) <cit.>. It features its native cryptocurrency, Ether (ETH), and enables developers to build and deploy their own applications on the network. * Bitcoin: The first and most well-known blockchain network, primarily used for peer-to-peer transactions with its native cryptocurrency, Bitcoin (BTC) <cit.>. Bitcoin's blockchain focuses on providing a secure and decentralized way to transfer value and store data. * Message Passing Interface (MPI): MPI is a standardized and portable message-passing system widely used in parallel computing. It enables processes to communicate with each other by sending and receiving messages, making it suitable for various parallel computing architectures. MPI is designed to be scalable, high-performing, and flexible, supporting both point-to-point and collective communication. It also provides mechanisms for creating process groups and communicators, facilitating structured communication in large-scale applications. * DVS (Dynamic Voltage Scaling): A power management technique that dynamically adjusts the voltage supplied to a processor based on performance needs. This helps in reducing power consumption and heat generation, thus enhancing energy efficiency, especially in battery-powered devices. * DVFS (Dynamic Voltage and Frequency Scaling): An advanced power management technique that adjusts both the voltage and frequency of a processor in response to workload demands. By scaling down voltage and frequency during low workloads, DVFS optimizes energy consumption and improves overall efficiency in computing systems.
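The arithmetic behind the Compute Hour and epoch/iteration definitions above can be made concrete with a short illustrative sketch (Python; the 1-TFLOPS reference throughput and the sample numbers are hypothetical placeholders, not GCX parameters):

```python
# Illustrative arithmetic for the Compute Hour and epoch/iteration definitions above.
# The 1-TFLOPS reference throughput is a hypothetical placeholder, not a GCX parameter.
REFERENCE_TFLOPS = 1.0

def compute_hours(system_tflops: float, wall_clock_hours: float) -> float:
    """Compute Hours generated: throughput relative to the reference, times hours of operation."""
    return (system_tflops / REFERENCE_TFLOPS) * wall_clock_hours

def iterations_per_epoch(dataset_size: int, batch_size: int) -> int:
    """Parameter updates needed to pass over the full training set once (ceiling division)."""
    return -(-dataset_size // batch_size)

print(compute_hours(2.0, 1.0))          # a 2-TFLOPS system run for 1 hour -> 2.0 Compute Hours
print(iterations_per_epoch(1000, 100))  # 1,000 samples with batch size 100 -> 10 iterations
```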
http://arxiv.org/abs/2406.18755v1
20240626205259
Analysis and Applications of a Heralded Electron Source
[ "Stewart A. Koppell", "John W. Simonaitis", "Maurice A. R. Krielaart", "William P. Putnam", "Karl K. Berggren", "Phillip D. Keathley" ]
physics.app-ph
[ "physics.app-ph" ]
Koppell/Electron Heralding skoppel2@jh.edu Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA Department of Electrical and Computer Engineering. University of California, Davis, Davis, California 95616, USA Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA § ABSTRACT We analytically describe the noise properties of a heralded electron source made from a standard electron gun, a weak photonic coupler, a single photon counter, and an electron energy filter. We argue the traditional heralding figure of merit, the Klyshko efficiency, is an insufficient statistic for characterizing performance in dose-control and dose-limited applications. Instead, we describe the sub-Poissonian statistics of the source using the fractional reduction in variance and the fractional increase in Fisher Information. Using these figures of merit, we discuss the engineering requirements for efficient heralding and evaluate potential applications using simple models of electron lithography, bright-field scanning transmission electron microscopy (BFSTEM), and scanning electron microscopy (SEM). We find that the advantage in each of these applications is situational, but potentially significant: dynamic control of the trade-off between write speed and shot noise in electron lithography; an order of magnitude dose reduction in BFSTEM for thin samples (e.g. 2D materials); and a doubling of dose efficiency for wall-steepness estimation in SEM. Analysis and Applications of a Heralded Electron Source Phillip D. Keathley July 1, 2024 ======================================================= Introduction - Heralded photon sources are an elementary resource in quantum optics. Since their first demonstration in 1977 <cit.>, they have found wide-ranging application: from their use in quantum key distribution <cit.>, simulation <cit.>, and sensing <cit.>, to defining the candela <cit.>. Recently there has been a surge of interest in applying principles of quantum optics to free electrons. This was sparked in part due to various experimental demonstrations of strong coherent coupling between photons and free electrons in electron microscopes <cit.>, as well as new theoretical predictions which propose how such interactions could be useful for producing unique optical and free-electron quantum states <cit.>. Most recently, photon-mediated electron heralding in a transmission electron microscope (TEM) was demonstrated for the first time <cit.>. It has also been suggested that heralding could be done using coulomb-correlated electron pairs, which have recently been produced for the first time in a TEM<cit.> and a scanning electron microscope (SEM)<cit.>. In spite of this progress toward heralded electron sources, there has been little discussion of their potential impact. In optical microscopy, shot noise is often an important limiting effect, giving heralded photon sources clear utility <cit.>. 
However in high-energy electron microscopy, where detectors can be fast and efficient enough to register nearly every electron, fluctuations in beam current may not contribute significantly to the measurement noise. In addition, the proposed heralding schemes involve filters which would substantially reduce beam current. This raises the question of when exactly electron heralding would improve performance. Here, we analyze an electron heralding system modeled after the one demonstrated by Feist et al. <cit.>. First, we parameterize this system and describe its sub-Poissonian beam statistics in terms of the Klyshko efficiencies. We find that the extra information provided by a heralded source makes it possible to improve the speed, yield, and minimum feature size in electron lithography. With sufficient heralding efficiency, we find an order-of-magnitude increase in write speeds is possible. We then use the Fisher information formalism to quantify the increase in information available for various modalities of quantitative electron microscopy, and we identify two limits (high dark counts and low contrast) where signal-to-noise ratio (SNR) enhancement is possible for dose-limited imaging. We argue there is situational improvement to SNR for high energy electron microscope applications like scanning transmission electron microscopy (STEM), but not for electron-energy-loss spectroscopy (EELS), or energy-dispersive X-ray spectroscopy (EDS). A heralded source could also be a solution for dose-calibrated in-situ electron microscopy and low background cathodoluminescence (CL) imaging. In addition, we find heralding can improve quantitative SEM when the dose is limited by damage or charging. We calculate the error in surface tilt estimation of a steep-walled feature, finding that heralding can reduce the dose required to reach a prescribed level of measurement error by a factor of more than 2. A schematic for a photon-heralded electron source is shown in Fig. <ref> (the supplementary material (<ref>) contains a table with a more complete set of symbols). Free electrons are generated from a standard electron gun, then focused and steered to pass within the evanescent field of a waveguide, generating photons as it passes. These photons are sent to a single photon detector, and the resulting electrons to an electron counting spectrometer. The interaction creates an entangled electron-photon state. If the waveguide input is the ground state, then the output will have Poissonian photon-number statistics <cit.>. Experimental demonstrations to-date have mean photon number (per electron) g≪ 1; however g≳ 1 is possible in principle. In this analysis, we assume that g≪ 1, consistent with current experimental capabilities. In photon heralding, a typical setup involves a nonlinear medium which converts a higher energy pump photon to two lower energy ones: the signal and the idler. A photon detected in the idler channel `heralds' the presence of a photon in the signal channel <cit.>. The traditional figure of merit for heralded single-photon sources is the Klyshko efficiency, which is the conditional probability of measuring a photon in the signal channel given the detection of a photon in the idler channel. The Klyshko efficiency for electron heralding can similarly be defined as the conditional probability of measuring an electron given the detection of a photon. However, this figure of merit does not penalize `false negative' events, where an electron arrives without a coincident photon. 
A low false negative rate is critical for applications like dose control. Instead, we will describe the heralding efficiency in terms of the fractional reduction of variance (FRV) in the estimate of the number of electrons that pass the energy filter, given the detection of photons in a given time interval [t,t+Δ t]. Assuming the source can be described with Poissonian statistics, which is generally valid for typical operating conditions <cit.>, =κ_eκ_γ where κ_e is the Klyshko efficiency for heralding electrons using photons and κ_γ is the Klyshko efficiency for heralding photons using electrons κ_e=gη_γ f_1/gη_γ+/, κ_γ=gη_γ f_1/gf_1+f_0 , where η_γ is the photon detection efficiency, f_1 is the probability that an electron belonging to the chosen sideband passes through the filter, f_0 is the probability that an electron outside of the chosen sideband passes through the filter, =I/q is the average rate of electron production from the gun with current I (q is the electron charge magnitude), and is the photon detector background count rate. A derivation of both equations is available in the section <ref> in the supplement. To achieve a high FRV, both κ_e and κ_γ must be close to 1. This requires a highly efficient photon detector. While a strong electron-photon coupling is not necessarily required, it must be possible for the energy filter to efficiently isolate the chosen sideband, which has intensity proportional to g and may be superposed with the tail of the zero-loss distribution (i.e. we need f_1∼ 1 and gf_1 ≳ f_0). In practice, this will require the sideband spacing (equivalently, the photon energy) to be much larger than the energy spread of the electron gun. In the supplement we tabulate the FRV for various types of sources and detectors (Table <ref>) and calculate the maximum energy filter efficiency (section <ref>). Based on that analysis, we will assume η_γ=0.9, f_1=0.9, and f_0=0. With g=0.01, I<1, and <1, the effect of background counts is negligible. Then κ_e≈ f_1 and κ_γ≈η_γ, simplifications we assume through the rest of this work. Dose Estimation The task of dose calibration — measuring the mean beam current beyond the beam forming apertures — is difficult to do accurately in most electron-optical systems. Resolving the moment-to-moment Poissonian fluctuations, which ultimately requires non-destructive measurements of the beam, is currently impossible. In microscopy, dose estimation is important for in-situ electron microscopy and for controlling sample damage. In lithography, dose noise limits device yield and minimum feature size. Without heralding, some noise can be suppressed by sampling , the number of primary electrons registered in one time bin (perhaps indirectly, e.g. through secondary electrons bursts in SEM). If the probability of registering a primary electron is D, the FRV in the estimation of the dose is =D/(1+Γ_e^(dc)/Γ_e^(g)fD) where f ≡ gf_1+(1-g)f_0 is the total probability that an electron from the gun passes the filter and is the electron detector dark count count rate. When D≈ 1 and the dark count rate is negligible, = 1. In other words, with a perfect electron detector and no loss in the sample, there is no shot noise and so no benefit to heralding. As derived in the supplement, the FRV for dose estimation obtained by combining data from both detectors is ,= κ_γ+(1-κ_γ)D/1+/-k_γ(1-D)1-κ_e/1-κ_eD where = (f-gη_γ f_1)D is the rate of additional background counts due to unheralded electrons. 
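For illustration, these figures of merit can be evaluated numerically. The following minimal sketch (Python; the photon-background-to-beam-rate ratio is an assumed illustrative value, not taken from this work) reproduces the approximations κ_e ≈ f_1 and κ_γ ≈ η_γ quoted above:

```python
# Minimal numeric sketch of the heralding figures of merit under the values quoted above.
# The photon-background-to-beam-rate ratio r_bg is an assumed illustrative number.
g, eta_gamma, f1, f0 = 0.01, 0.9, 0.9, 0.0   # coupling strength, photon detection, filter
r_bg = 1e-6                                   # assumed background-to-beam rate ratio

f = g * f1 + (1 - g) * f0                     # total probability an electron passes the filter
kappa_e = g * eta_gamma * f1 / (g * eta_gamma + r_bg)   # Klyshko efficiency for electrons
kappa_gamma = g * eta_gamma * f1 / f                    # Klyshko efficiency for photons
frv = kappa_e * kappa_gamma                             # fractional reduction in variance

print(f"kappa_e = {kappa_e:.3f}, kappa_gamma = {kappa_gamma:.3f}, FRV = {frv:.3f}")
# With negligible background this reproduces kappa_e ~ f1 = 0.9 and kappa_gamma ~ eta_gamma = 0.9,
# giving FRV ~ 0.81.
```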
Using the photon detector, the FRV can be close to 1 even when the electron detection efficiency is low, which is the case for lithographic systems where the primary electron is used to expose a resist and secondary electrons (SEs) are not counted efficiently. When D=0, the FRV for dose estimation is identical to the FRV for heralding (FRV=κ_eκ_γ). In electron lithography, Poissonian fluctuations in the beam current can cause errors in the written pattern. For a resist with critical areal dose density _c (charge per area required for full exposure), the expected relative dose error for a pixel of area A is 1/√(_c A/q). The error can be reduced by using larger pixels, which increases the minimum feature size, or using a less-sensitive resist <cit.>. For a beam-current density J and clock speed C, the maximum write speed is possible for _c≤ J/C. For _c>J/C, the writing speed becomes current-limited. As an example, suppose J=100 and C=100. Then the maximum write speed is possible for resists with _c<1 or 6 electrons per (10nm)^2 pixel. To get less than 10% dose error in each pixel, the resist sensitivity would need to be decreased by a factor of 16, slowing write speed proportionally. To achieve less than 10% error using (4nm)^2 pixels, the write speed would need to be 100 times slower than the maximum clock speed. Often, it is not possible to choose a resist with the optimal critical dose due to issues of availability and process compatibility. Alternatively, the trade-off between dose error and speed can be controlled with heralding. To do so, the exposure is divided into m stages and the dose applied at each stage is chosen based on the number of remaining stages and the estimated dose applied so far. This multi-pass, heralded electron lithography requires up to m times as many clock cycles, slowing clock-limited write times. The source current is also reduced by a factor of gf, which slows current-limited write times. Despite this, in some circumstances heralding can significantly increase writing speed. In Fig. <ref>, we use the heuristic strategy of selecting the dose at each stage k of the m exposures according to _k=F(_c-∑_k'<k_k') where _k' is the estimate of the dose applied at stage k', and F is a number between 0 and 1. The optimal choice for F depends on _c and m. We calculate the (root mean square) dose error for F={0.1,0.2,...0.9} and save the best result. For a resist with critical dose of 0.6electrons, the dose error for a 20 linewidth (4 pixels) is 30% (see the point labeled A). Reducing the error below 10% would require increasing the linewidth to more than 60 or finding a resist with one-tenth the sensitivity (see points B and C, respectively). Alternatively, the dose could be controlled with a heralded source using a multi-pass exposure with m=7. If a small fraction of the pattern requires a linewidth of 20 while 60 is suitable for the rest, then electron lithography with a heralded source and sensitive resist is about (6/2)^2=9 times faster while maintaining relative error below 10%. Electron Heralding for Quantitative Microscopy - Electron heralding has the potential to enhance electron microscopy when shot noise or dark counts are critical limitations. For example, when measuring sample transmissivity, the ability to distinguish loss events from fluctuations in the illumination can dramatically improve the signal to noise <cit.>. This is possible with a heralded electron source, but some advantage is lost due to the reduced beam current. 
For a heralded source to be useful for faster acquisition, the signal to noise ratio (SNR) must increase sufficiently to compensate for the lost signal. Often, the SNR is limited by effects other than acquisition time and beam current, like detector well depth or sample charging and damage. If there is a critical dose above which the sample is unacceptably degraded, then an appropriate figure of merit is the information gain per electron. We will describe scenarios in SEM and bright field STEM where a heralded electron source can significantly improve the information gain per electron (or equivalently the SNR at constant dose). A standard theoretical tool for optimizing measurements is the Fisher Information (FI), ℐ(x), which is used in conjunction with the Cramer-Rao bound σ_x^2≥ (ℐ(x_0))^-1 to determine the minimum measurement error σ_x when estimating an unknown sample parameter x (e.g. thickness, scattering mean free path, secondary electron (SE) yield, etc.) near a particular value x=x_0<cit.>. The quantity is a sample from the random variable . The advantage of this formalism is that we can determine the information value of a measurement without constructing an optimal method for estimating x based on . See section <ref> of the supplement for a brief review of this topic. We can re-write the Cramer-Rao bound in terms of the SNR = x/σ_x≤ x√(ℐ(x_0)). The fractional increase in Fisher information achieved by heralding is ℐ(x_0|,)/ℐ(x_0|)=1+κ_γ(1+//1-D(x_0)κ_e-1) , By examining equation <ref>, we can see two potential limiting cases where the fractional increase in information is large: when dark count rate is large (/≫1) or when the contrast is low (D(x_0)∼ 1). In the remainder of this letter, we examine specific scenarios where heralding may be beneficial for electron microscopy. High Energy Electron Microscopy - In bright field (BF) STEM, an aperture at the backfocal plane of the objective lens removes electrons which scatter to large angles as they pass through the sample. One possible motivation for such a measurement is to estimate the thickness t of a sample with a known, material-dependent scattering mean-free-path λ. If the electron detector has quantum efficiency η_e, then the detection probability is D(t)=η_e (1-e^-λ/t). In Fig. <ref>, we show the fractional increase in SNR (the square root of Eq. <ref>) relative to the unheralded case as a function of t for η_e=0.95. With an SNSPD, heralding doubles the SNR ratio at constant dose for samples with λ=100(200) and t<20(42). The information added by the heralding system in this case is equivalent to the information collected by a high-efficiency annular dark field detector covering all scattering angles excluded by the bright field detector. For other high-energy electron-microscopy modalities, it is more difficult to use a heralded source advantageously. Transmission electron microscopy and EELS use pixelated detectors with readout speeds far too low to preserve correlations with the photon detector signal, but heralding could improve dose estimation as discussed in the previous section for in-situ experiments. Heralding could also help reduce noise backgrounds for EDS and CL to the extent that the noise is uncorrelated with the primary electrons. Quantitative SEM - The interpretation of SEM images can change dramatically based on the beam energy and sample composition and morphology. 
For example, more secondary electrons escape from the interaction volume near the edges of nanostructures, causing a bright halo and a reduction in contrast. This so-called edge effect <cit.> can be partially ameliorated using a heralded electron source. To show this, we model the tilt-dependent SE yield as δ(θ)=δ_0δ_1/[cos(θ)(δ_1-δ_0)+δ_0] where θ is the surface tilt, δ_0 is the SE yield for a horizontal surface (θ=0), and δ_1 is the SE yield at the edge of a tall vertical step <cit.>. We assume the detector clicks at most once for each primary electron, which is true when a single SE saturates the detector for longer than the spread in SE arrival times, and so counting secondary electrons is not possible <cit.>. We also assume that the SE yield is Poissonian (with mean δ). Then we write the probability of detection as D(δ)=1-e^-η_e δ. When the background count rate is small, the increase in information from heralding is proportional to e^η_e δ. Heralding is especially advantageous when the SE yield is much larger than 1 (i.e. where θ∼π/2), so contrast is low due to detector saturation. In such cases, the most information-rich events are primary electrons which fail to produce an SE. Without heralding, these events cannot be recognized. Figure <ref> shows a simulation of parameter estimation in an SEM. It neglects some aspects of image formation in SEM (e.g. shadowing and point spread function), but it provides an approximate visual comparison of quantitative SEM with and without heralding. At each pixel, the number of incident primary electrons is drawn from a Poisson distribution with mean 10. Then numbers are drawn from a multinomial distribution to determine the number of coincident and non-coincident events at the electron and photon detectors. Without heralding, the estimates of the surface tilt and SE yield are θ̂ and δ̂. With heralding, the estimates are θ̂_h and δ̂_h. See section <ref> of the appendix for more details. Prior probability distributions for θ and δ must be specified. We assume a uniform distribution for θ (which induces a non-uniform distribution for δ). As the true surface tilt is not well-represented by a uniform distribution, these estimators are biased (e.g. |⟨δ̂-δ⟩|>0 at finite dose). As seen in the radial plots on the right of the figure, heralding reduces this bias (an effect not captured by the Fisher information analysis). Heralding reduces the error for the estimation of θ and δ by factors of 1.6 and 1.3, respectively. Equivalently, the dose required to reach a prescribed estimate error for δ and θ is reduced by factors of 2.6 and 1.8, respectively. Compositional analysis could also be improved by heralding, but the advantage would only exist when SE yields are high enough to saturate the detector. Conclusion and outlook - We have described a realistic heralded electron source, calculated its statistics, and estimated its effectiveness in specific applications of STEM, SEM, and electron lithography. A general figure of merit for describing the effectiveness of a heralded source is the FRV, which is proportional to the Klyshko efficiencies for both electron and photon heralding. To achieve a high FRV in practice will not necessarily require additional advances in sources, detectors, energy filters, or electron-photon coupling structures, but it will require sophisticated engineering to integrate state-of-the-art implementations of each of these systems.
The goal of this analysis was to identify realistic circumstances where electron beam technologies may be improved by a heralded source. We chose to examine a series of minimal models which may motivate more detailed investigations of each potential application in the future. Multi-pass heralded electron lithography would have reduced beam current and may require more clock cycles. However, it enables dynamic control of the trade-off between speed and noise. Using a single layer of resist, heralded electron lithography could quickly expose regions of low detail, then apply a low-noise multi-stage exposure to areas where noise could limit device yield. Not all forms of microscopy benefit from heralding. For example, in phase contrast electron microscopy, information is extracted from the relative brightness of electron detector pixels. However, for amplitude (i.e. bright field) microscopy, heralding makes it possible to detect events where primary electrons fail to arrive at the electron detector. These events are particularly informative when they are rare (i.e. low-contrast imaging). For BF STEM of thin samples, a heralded electron source can more-than double the SNR at constant dose. Similarly, heralding is beneficial in SEM for low-contrast imaging conditions. An electron microscope equipped with a heralding system would likely be operated in a high-current, unheralded mode for alignment, focusing, and feature-finding, then switched to a low-current, heralded mode for dose calibration and enhanced performance. This work clarifies the specific conditions in which improvement from heralding can be expected. It also motivates more detailed analysis and design of heralded electron sources, which have the potential to become indispensable to next-generation electron-optical systems. We are grateful to Dr. Felix Ritzkowsky, Owen Medeiros, and Camron Blackburn for helpful discussion and input on the manuscript. This material is based upon work supported by the National Science Foundation under Grant No. 2110535 (MIT) and No. 2110556 (UC Davis). 
§ TABLE OF SYMBOLS symbol meaning Δ t size of one time bin random variable (RV) describing the number of electrons produced by the gun in one time bin (samples from random variables are written in lower case) RV describing the number of electrons which pass the filter in one time bin RV describing the number of primary electrons which are registered by the electron detector (perhaps by generating one or more secondary electrons) in one time bin RV describing the number of photons which are registered by the single photon detector in one time bin rate at which electrons are produced from gun (after apertures) electron detector dark count rate electron detector background count rate from unheralded electrons photon detector background count rate g expected number of photons produced per electron f_0 probability that an electron outside of the chosen sideband passes through the filter f_1 probability that an electron belonging to the chosen sideband passes through the filter f total probability that an electron passes through the filter D probably the electron detector (secondary or primary) registers a count given that a primary electron passed the filter κ_e Klyshko efficiency for heralding electrons using photons κ_γ Klyshko efficiency for heralding photons using electrons ℐ Fisher information FRV fractional reduction in variance § STATISTICS OF A HERALDED ELECTRON SOURCE In this section, we calculate the fractional reduction in variance achievable with the heralded source described in the main text. In general, the superscripts (g), (p), and (d) will distinguish between quantities relating to the electron gun, the exit of the energy filter, and the electron/photon detectors, respectively. To begin, we assume the photon detector is capable of resolving arrival times into bins of size Δ t with Δ t=ϵ≪ 1, where =I/q is the average rate of electron production, I is the beam current, and q is the electron charge. With this assumption, each of the events in the system can be described as Bernoulli (binary) random variables. Let be a random variable describing the number of electrons produced by the gun in the time interval [t,t+Δ t]. While space charge and Fermionic particle statistics can in principle result in an anti-bunched electron beam, this effect is typically very small and has only recently been observed <cit.>. Therefore we will assume electron arrival times are uncorrelated so is Poissonian with mean and variance =Δ t. Let be a random variable describing the number of electrons which pass through the energy filter of the heralding system. Without access to data from the single photon detector is Poissonian. Let be a random variable describing the number of detected photons (it too is Poissonian with mean and variance gΔ t). Then p(=1) ≡ p(e_g)=Δ t+𝒪(ϵ^2) p(=0) ≡ p( e_g)=1-Δ t and p(=1) ≡ p(γ_d)=gη_γΔ t+Δ t -(/)𝒪(ϵ^2) p(=0) ≡ p(γ_d)=1-p(γ_d) where g is the average number of photons produced per electron (we will assume g≪ 1), η_γ is the photon system detection efficiency, and is the photon detector background count rate. The term proportional to ϵ^2 comes from simultaneous signal and background events and is small as long as ≪. We will assume this is true. Based on the definitons of g and η_γ, we can also write the conditional probability p(γ_d|e_g)=gη_γ. After interacting with the coupler, the electron beam is energy-filtered. We will assume the filter uses an energy-selecting slit which passes electrons with energy E_0<E<E_1. 
If 𝒮(E) is the energy distribution of the electron beam at the source and E_γ is the energy of a signal photon (we will assume the energy distribution of the photons is very narrow compared to 𝒮(E)), then let f_0 =∫_E_0^E_1 dE𝒮(E) f_1 =∫_E_0^E_1 dE𝒮(E-E_γ) We can interpret f_0 as the probability that an electron passes the filter given that it didn't produce a photon and f_1 as the probability that an electron passes the filter given that it did lose energy E_γ to a photon. Then f≡ p(e_p|e_g)=gf_1+(1-g)f_0 is the total probability that an electron which emerges from the gun passes the filter. In any given time bin, there are four possible outcomes =1, =1 true positive =0, =1 false positive =1, =0 false negative =0, =0 true negative with probabilities p(e_p|γ_d) true positive p( e_p|γ_d) false positive p(e_p|γ_d) false negative p( e_p|γ_d) true negative We can also condense these outputs into a single figure of merit in the form of the conditional variance | ≡∑_n p(=n) |=n = p(γ_d)|γ_d +p(γ_d)|γ_d with |γ_d =p( e_p|γ_d)(1-p(e_p|γ_d)) |γ_d =p( e_p|γ_d)(1-p( e_p|γ_d)) . In order to calculate these conditional probabilities, we use the above relations to obtain p( e_p|γ_d) =p(γ_d,e_p)/p(γ_d)=Δ tgη_γ f_1/gη_γΔ t+Δ t p( e_p|γ_d) =p(γ_d,e_p)/p(γ_d)=Δ t (f-gη_γ f_1)/1-gη_γΔ t-Δ t giving |/Δ t f =1-(gf_1η_γ)^2/f(gη_γ+)+𝒪(ϵ^2) . The intrinsic variance of the source (the variance without information from the photon detector) is =Δ tΓ_g^(e)f+𝒪(ϵ^2) . Incorporating the information from the photon detector, the fractional reduction in variance (FRV) is, to first order in ϵ, ≡-|/ =(gf_1η_γ)^2/(f)(gη_γ+) =Γ^(eγ)/Γ^(e)Γ^(eγ)/Γ^(γ) =κ_γκ_e where Γ^(eγ), Γ^(e), and Γ^(γ) are the rates of coincident events, electron detector events, and photon detector events, respectively. Without heralding, the FRV is 0. With perfect heralding, the FRV is 1. In terms of the system parameters described above, κ_e=gf_1η_γ/gη_γ+ , κ_γ=gη_γ f_1/f . where and η_γ are the photon detector background count rate and detection efficiency, respectively. §.§ Optimal filter parameters Here we justify f_1=1 for a field emission gun and f_1=0.9 for a Schottky emitter. First we assume the tip current is low enough such that the energy distribution of the emitted electrons is not distorted by space charge effects. Adapting this from Riemer <cit.> we have N(E) d E=E/(k T_c)^2exp(-E / k T_c) d E By setting the tip energy spread to 0.7 eV (for a Schottky emitter, which corresponds to a 3320 tip temperature), we find the overlap of a single 1.1 photon sideband with the zero-loss distribution is 0.1, as shown in Fig. <ref> below. An otherwise ideal energy filter could then have f_1 = 0.9. For the field emitter, the energy spread is so small that this integral is practically zero, and so f_1=1. § DOSE ESTIMATION Here we calculate the reduction in the variance of achievable using both a photon detector and an electron detector. We will assume the electron detectors are fast enough to count primary electrons. This assumption may not be appropriate for the pixelated detectors used, for example, in transmission electron microscopy, but it is possible for STEM<cit.> and SEM<cit.> detectors with careful calibration. 
First, to find the reduction in variance using just the electron detector, we first need |= ∑_p()| = p(e_d)[p(e_p|e_d)(1-p(e_p|e_d))] +p( e_d)[p(e_p| e_d)(1-p(e_p| e_d))] = p(e_d)[p(e_p|e_d)(1-p(e_p|e_d))] +p(e_p| e_d)+𝒪(Δ t^2) And since p(e_p|e_d) =[p(e_p)/p(e_d)p(e_d|e_p)] = f/ fD+D+𝒪(Δ t) =1/1+r_e where r_e=/ fD and p(e_p| e_d) =[p(e_p)/p( e_d)p( e_d|e_p)] =Δ tΓ_g^(e)f(1-D)+𝒪(Δ t^2) therefore |=Δ tΓ_g^(e)f(1-D/1+r_e)+𝒪(Δ t^2) At this point, we can find the fractional reduction of using : FRV(|)=1-|/=D/1+r_e Now we will calculate the additional variance reduction achievable with a heralding system. First, |, =∑_,ŋp(,ŋ)|,ŋ =p(e_d,γ_d)(p(e_p|e_d,γ_d)(1-p(e_p|e_d,γ_d))) +p(e_d,γ_d)(p(e_p|e_d,γ_d)(1-p(e_p|e_d,γ_d))) +p( e_d,γ_d)(p(e_p| e_d,γ_d)(1-p(e_p| e_d,γ_d))) +p( e_d,γ_d)(p(e_p| e_d,γ_d)(1-p(e_p| e_d,γ_d))) Since p(e_p|e_d,γ_d)=1+𝒪(Δ t^2) and 𝒪(1-p( e_d,γ_d))=𝒪(p(e_p| e_d,γ_d))=𝒪(Δ t) we can further simplify |, =p(e_d,γ_d)(p(e_p|e_d,γ_d)(1-p(e_p|e_d,γ_d))) +p( e_d,γ_d)(p(e_p| e_d,γ_d)(1-p(e_p| e_d,γ_d))) +p(e_p| e_d,γ_d)+𝒪(Δ t^2) so we'll need to find five probabilities: p(e_d,γ_d), p( e_d,γ_d), and three permutations of p(e_p|()e_d,()γ_d). Let's start with p(e_d,γ_d). The expression should contain one term representing a dark count at the electron detector and another where a real electron strikes the detector but either didn't create a photon or the photon was lost. p(e_d,γ_d)=Δ t+Δ t(f-gf_1η_γ)D+𝒪(Δ t^2) Using similar reasoning, p( e_d, γ_d)=Δ t+Δ t gη_γ(1-f_1D)+𝒪(Δ t^2) Now the conditional probabilities. First, we have p(e_p| e_d,γ_d)=Δ t(f-gf_1η_γ)(1-D)+𝒪(Δ t^2) Then, p(e_p| e_d,γ_d) =p(e_p, e_d,γ_d)/p( e_d,γ_d) = gη_γ f_1(1-D)/Γ_dc^(γ)+Γ_g^(e)gη_γ(1-f_1D) Finally, p(e_p|e_d,γ_d) =p(e_p,e_d,γ_d)/p(e_d,γ_d) =Γ_g^(e)(f-gf_1η_γ)D/Γ_dc^(e)+Γ_g^(e)(f-gf_1η_γ)D =f-gf_1η_γ/fr_e+f-gf_1η_γ Putting everything together, we have |,/Δ t f =(1-κ_γ)(1-D) +(1-κ_γ)Dr_e/1+r_e-κ_γ +k_γ(1-D)1-κ_e/1-κ_eD §.§ Dose Estimation for Electron Lithography As described in the main text, a multi-pass procedure can be used to implement dose control. In our, strategy the dose applied at stage k is _k=F(_c-∑_k'<k_k') where d_c is the target dose and d̂_k is the estimated dose at stage k. Without any information from the photon detector, we use d̂_k=n_γ,k/κ_γ where n_γ is the number of photons detected at stage k. For the purpose of simulation, we draw n_γ from a binomial distribution ℬ(n_e;κ_e) where n_e is the number of electrons actually delivered to the sample, which itself is a random number drawn from a Poisson distribution with mean d_k. The optimal value of constant F depends on the total number of dose stages m and on d_c. § REVIEW OF RELEVANT CONCEPTS RELATED TO FISHER INFORMATION The goal of a quantitative measurement is to estimate an unknown sample parameter, traditionally labeled θ. In STEM, θ could represent sample thickness or scattering cross-section. In SEM, θ could represent the secondary electron yield or surface tilt. A function θ̂(X) which uses measurement data X to produce an estimate of θ is called an estimator. We consider X to be a random variable and the variance θ̂=(θ̂-θ)^2 is the square of the measurement error. It is possible to place an upper bound on the measurement error without formulating θ̂ using the Cramer-Rao bound θ̂≤(Nℐ(θ_0))^-1 where ℐ(θ_0)=𝔼{(∂/∂θlog X(θ))^2|θ_0} is the Fisher information (FI). The Cramer-Rao bound applies only to unbiased estimator (with zero expected error). 
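As a quick numerical sanity check of the two expressions above, the per-sample (N = 1) Fisher information can be estimated as the variance of the score function. A minimal sketch (Python with numpy; the value of θ and the sample size are arbitrary illustrative choices):

```python
# Monte Carlo sanity check of the Fisher information formulas above (per-sample, N = 1),
# estimating I(theta) as the variance of the score d/d(theta) log p(k; theta).
import numpy as np

rng = np.random.default_rng(0)
theta, n_samples = 0.3, 200_000              # illustrative values

# Bernoulli: score = k/theta - (1 - k)/(1 - theta); analytic I = 1/(theta*(1 - theta))
k = rng.binomial(1, theta, n_samples)
score = k / theta - (1 - k) / (1 - theta)
print("Bernoulli:", score.var(), "vs", 1 / (theta * (1 - theta)))

# Poisson: score = k/theta - 1; analytic I = 1/theta
k = rng.poisson(theta, n_samples)
score = k / theta - 1
print("Poisson:  ", score.var(), "vs", 1 / theta)
```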
As an example, a Bernoulli random variable with probability mass function p(k;θ)=θ^k(1-θ)^1-k has FI ℐ_B=N/θ(1-θ) This means that after N samples of the distribution, the error of the best unbiased estimator is √(Nθ(1-θ)). Similarly, for a random variable with a Poisson distribution p(k;θ)=e^-θθ^k/k!, ℐ_P=N/θ so after N samples of the distribution, the error of the best unbiased estimator is √(Nθ). Notice that when estimating the parameter of a Bernoulli distribution, the FI diverges for large and small values of θ. But when estimating the parameter of a Poisson distribution, the FI diverges only when θ=0. § FISHER INFORMATION FOR ELECTRON MICROSCOPY We now to estimate an unknown parameter x, which determines D(x), the probability that an electron produced by the heralded source triggers the electron detector. The Fisher Information (FI), ℐ(x), added per time bin is ℐ(x_0|,)= ∑_,ŋ=0^11/p(,ŋ)(∂_xp(,ŋ)|_x=x_0)^2 As above, we simplify the calculation of ℐ by assuming that the electron detector operates in a counting regime (it has a binary response to each primary electron) with D(x) the probability that an electron which is produced by the heralded source is counted by the electron detector. Without heralding, the FI for values of x near x_0 gained from sampling only (i.e. without heralding) is ℐ(x_0|)=Δ t(∂_x D(x)|_x_0)^2/D(x)(1+/) where is the rate at which unheralded electrons arrive at the detector. Using data from the photon detector, we have ℐ(x_0|,)=Δ t(∂_x D(x)|_x_0)^2(I_γ+ I_e), where I_γ = κ_γ/D(x_0)(1-D(x_0)κ_e) and I_e = 1-κ_γ/D(x_0)(1+/). The first term in Eq. <ref> represents the information associated with photon detections (both with and without a coincident electron detection) and is similar in form to the FI associated with a binomial random variable with success probability D(x). The second term represents the information associated with electron detections which were not heralded and is similar in form to the FI associated with a Poisson random variable. Combining these two expressions above, the fractional gain in information is ℐ(x_0|,)/ℐ(x_0|)=1+κ_γ(1+//1-D(x_0)κ_e-1) Notice this expression diverges for κ_e=1 and D(x_0)→1. This is the same divergence we get by taking a ratio of the Bernoulli and Poisson FI: ℐ_B/ℐ_P=1/1-θ (see previous section). §.§ Estimating Sample Parameters in SEM Using FI, we can estimate the SNR improvement for quantitative measurement in an SEM. To show this, we will model the tilt-dependent SE yield as δ(θ)=δ_0δ_1/cos(θ)(δ_1-δ_0)+δ_0, where θ is the surface tilt, δ_0 is the SE yeild for a horizontal surface (θ=0), and δ_1 is the SE yield at the edge of a tall vertical step <cit.>. In order to estimate the secondary electron yield δ and surface tilt θ, we need to construct estimators δ̂ and θ̂ which are functions of the measurement data. The measurement data consists of tallies of the number of coincident events, N_eγ, electron-only events, N_e, and photon-only events, N_γ. For applied dose d, we use the estimators δ̂_h(N_γ e,N_e,N_γ;d) =𝔼_δ{p(δ|N_γ e,N_e,N_γ;d)} θ̂_h(N_γ e,N_e,N_γ;d) =𝔼_θ{p(θ|N_γ e,N_e,N_γ;d)} where 𝔼_X{p(X)}=∑_x xp(X=x) is the expectation value of X. 
We give these estimators the h subscript to differentiate them from the estimators used without heralding, δ̂(N_γ e+N_e;d) =𝔼_δ{p(δ|N_γ e+N_e,;d)} θ̂(N_γ e+N_e;d) =𝔼_θ{p(θ|N_γ e+N_e;d)} We can use Bayes' theorem to write p(δ|N_γ e,N_e,N_γ;d)=p(δ)/p(N_γ e,N_e,N_γ;d)p(N_γ e,N_e,N_γ|δ;d) where p(N_γ e,N_e,N_γ;d)=∫_δ_0^δ_1dδ p(δ)p(N_γ e,N_e,N_γ|δ;d) and p(δ) is a probability distribution describing our expectations or prior knowledge of δ. We use analogous expressions for θ. For simplicity, we will assume p(θ) is uniform: p(θ)=2/π 0<θ<π/2 0 else The distribution p(δ) is induced by p(θ) and their relation in Eq. <ref>. Note, however, that uniform distributions are not always the best representation of maximum ignorance. The distribution p(N_γ e,N_e,N_γ|θ;d) is a convolution of a Poisson distribution which determines the number of electrons produced by the gun and a multinomial distribution which selects the fate of each electron: p(N_γ e=n_γ e,N_e=n_e,N_γ=n_γ|θ;d)= ∑_n_e^(g)𝒫(n_e^(g);d)ℳ(n_γ e,n_e,n_γ;n_e^(g),p_γ e(θ),p_e(θ),p_γ(θ)) where 𝒫(n_e^(g);d)=e^-dd^n_e^(g)/n_e^(g)! and ℳ(n_γ e,n_e,n_γ;n_e^(g)(θ),p_γ e(θ),p_e(θ),p_γ(θ))= n_e^(g)!/n_γ e!n_e!n_γ!n_n!p_γ e(θ)^n_eγ p_e(θ)^n_e p_γ(θ)^n_γ p_n(θ)^n_n Where p_n=1-p_γ e-p_e-p_γ and n_n=n_e^(g)-n_γ e-n_e-n_γ, with the subscript n indicating the scenario where an electron produced by the gun does not trigger the electron or photon detectors. The convolution can be simplified into a product of three Poisson distributions: p(N_γ e=n_γ e,N_e=n_e,N_γ=n_γ|θ;d)= 𝒫(n_eγ; p_eγ(θ)d)𝒫(n_e; p_e(θ)d)𝒫(n_γ;p_γ(θ)d) Finally, according to the model described above, we have p_eγ(θ) =gη_γ f_1D(θ) p_e(θ) =(f-gη_γ f_1)D(θ) p_γ(θ) =gη_γ(1-f_1D(θ)) with D(θ)=1-e^-η_eδ(θ).
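For completeness, the per-pixel event statistics entering the SEM simulation can be sketched directly from the expressions above. The following is a minimal illustration (Python with numpy), not the authors' code; the parameter values, and the choice to quote the mean dose in gun-referred electrons, are illustrative assumptions:

```python
# Minimal sketch of the per-pixel event statistics defined above (not the authors' code).
# Parameter values are illustrative assumptions; the mean dose is quoted in gun-referred
# electrons, so with f ~ 0.9% roughly ten primaries per pixel reach the sample.
import numpy as np

g, eta_gamma, f1, f0, eta_e = 0.01, 0.9, 0.9, 0.0, 0.95
delta0, delta1 = 1.0, 4.0                    # assumed SE yields at theta = 0 and at an edge
f = g * f1 + (1 - g) * f0                    # total filter transmission probability

def event_probs(theta):
    delta = delta0 * delta1 / (np.cos(theta) * (delta1 - delta0) + delta0)
    D = 1.0 - np.exp(-eta_e * delta)         # probability the SE detector clicks
    p_eg = g * eta_gamma * f1 * D            # coincident photon and electron detection
    p_e = (f - g * eta_gamma * f1) * D       # electron detected without a heralding photon
    p_g = g * eta_gamma * (1 - f1 * D)       # heralding photon without an electron detection
    return [p_eg, p_e, p_g, 1.0 - p_eg - p_e - p_g]

rng = np.random.default_rng(1)
n_gun = rng.poisson(1000)                    # gun electrons during this pixel's dwell time
counts = rng.multinomial(n_gun, event_probs(np.deg2rad(60.0)))
print("coincident, e-only, gamma-only, undetected:", counts)
```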
http://arxiv.org/abs/2406.18337v1
20240626132714
The Geometry of Generalised Spin$^r$ Spinors on Projective Spaces
[ "Diego Artacho", "Jordan Hofmann" ]
math.DG
[ "math.DG", "math.RT", "53C27, 15A66, 57R15" ]
[ [ Received 7 March 2024 / Accepted 23 May 2024 ================================================ § ABSTRACT In this paper, we adapt the characterisation of the spin representation via exterior forms to the generalised spin^r context. We find new invariant spin^r spinors on the projective spaces ℂℙ^n, ℍℙ^n, and the Cayley plane 𝕆ℙ^2 for all their homogeneous realisations. Specifically, for each of these realisations, we provide a complete description of the space of invariant spin^r for the minimum value of r for which this space is non-zero. Additionally, we demonstrate some geometric implications of the existence of special spin^r spinors on these spaces. § INTRODUCTION A topic of major interest in differential geometry is the existence or non-existence of special G-structures on a given smooth manifold M; classical examples include Riemannian, complex, symplectic, and spin structures. Spin geometry, in particular, gives a way of accessing global geometric information about Riemannian spin manifolds via sections of a certain naturally defined vector bundle called the spinor bundle. Indeed, for a Riemannian spin manifold M, there are a number of results of the form: M carries a spinor satisfying equation ℰM has geometric property 𝒫. For example, it is well-known that a manifold carrying a non-zero parallel spinor is Ricci-flat, and, more generally, that the existence of a non-zero real (resp. purely imaginary) Killing spinor implies that the metric is Einstein with positive (resp. negative) scalar curvature <cit.>. Other notable examples include the bijection between generalised Killing spinors in dimension 5 (resp. 6, resp. 7) and hypo (2)-structures (resp. half-flat (3)-structures, resp. co-calibrated G_2-structures) <cit.>, and the spinorial description of isometric immersions into Riemannian space forms <cit.>. However, not every manifold can be endowed with a spin structure; the question then naturally arises as to whether one can apply the tools of spin geometry to non-spin manifolds. The answer is affirmative, and there are several possible approaches. The unifying idea is to consider suitable enlargements of the spin groups, i.e., Lie groups L_n equipped with homomorphisms (n) ι_n⟶ L_n p_n⟶(n) such that p_n ∘ι_n is the usual two-sheeted covering (n) →(n). Hence, an oriented Riemannian n-manifold admitting a lift of the structure group to L_n is a weaker condition than being spin. Following ideas by Friedrich and Trautman <cit.>, the so-called spinorial Lipschitz structures have garnered much attention in recent years <cit.>. These naturally arise by following the inverse approach: starting with a suitable generalisation of the concept of spinor bundle, one investigates the enlargement L_n of (n) to which this bundle corresponds. These L_n are called Lipschitz groups. Another choice of L_n was introduced by Espinosa and Herrera <cit.>, who had the idea of spinorially twisting the spin group. In our setting, this corresponds to taking, for r ∈, the groups L_n^r = ^r(n) ( (n) ×(r) )/⟨ (-1 , -1) ⟩ with the obvious homomorphisms. We say that an oriented Riemannian n-manifold is spin^r if it admits a lift of the structure group to ^r(n). The case r=1 is the classical spin case, and the cases r=2, 3 give rise to spin^ and spin^ℍ geometry respectively, which have been a fruitful field of study over the past decades – see <cit.> for spin^ and <cit.> and references therein for spin^ℍ. 
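For concreteness, the r = 2 and r = 3 cases can be spelled out using the classical identifications Spin(2) ≅ U(1) and Spin(3) ≅ Sp(1); this is standard material, recalled here only for orientation:

```latex
% The low-r cases of L_n^r = Spin^r(n) = (Spin(n) x Spin(r)) / <(-1,-1)>,
% using the standard isomorphisms Spin(2) ~ U(1) and Spin(3) ~ Sp(1):
\[
  \mathrm{Spin}^2(n) = \bigl(\mathrm{Spin}(n)\times\mathrm{Spin}(2)\bigr)/\mathbb{Z}_2
                 \cong \bigl(\mathrm{Spin}(n)\times \mathrm{U}(1)\bigr)/\mathbb{Z}_2
                 = \mathrm{Spin}^{\mathbb{C}}(n),
\]
\[
  \mathrm{Spin}^3(n) = \bigl(\mathrm{Spin}(n)\times\mathrm{Spin}(3)\bigr)/\mathbb{Z}_2
                 \cong \bigl(\mathrm{Spin}(n)\times \mathrm{Sp}(1)\bigr)/\mathbb{Z}_2
                 = \mathrm{Spin}^{\mathbb{H}}(n),
\]
% where \mathbb{Z}_2 is generated by (-1,-1) in each product.
```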
These structures have been characterised topologically by Albanese and Milivojević in <cit.>, where they show that a manifold is spin^r if, and only if, it can be embedded into a spin manifold with codimension r. In <cit.>, Lawn and the first author focused on the study of spin^r structures on homogeneous spaces, establishing a bijection between G-invariant spin^r structures on G/H and certain representation-theoretical data – see Theorem <ref>. Analogously to the usual spin case, from a given spin^r structure one can construct, for each odd m ∈, the so-called m-twisted spin^r spinor bundle. Its sections are called m-twisted spin^r spinors or simply spin^r spinors, and, as in the classical case, they encode geometric information: for a spin^r manifold M, there are results of the form: M carries a spin^r spinor satisfying equation ℰM has property 𝒫. Some of these results can be found in <cit.>, for example (see Theorem <ref> for more explicit results): * The existence of a generalised Killing spin^r spinor ensures a certain decomposition of the Ricci tensor <cit.>. * The existence of a parallel pure spin^2 (resp. spin^3) spinor implies that the manifold is Kähler (resp. quaternionic Kähler) <cit.>. In this paper, we illustrate how invariant twisted spin^r spinors on the projective spaces ℂℙ^n, ℍℙ^n and the Cayley plane 𝕆ℙ^2 encode different geometric properties of these manifolds. To this end, we proceed as follows: * Consider a homogeneous realisation M=G/H of the corresponding space, equipped with a generic G-invariant metric. * Find the minimum value of r∈ such that M has a G-invariant spin^r structure carrying non-trivial invariant m-twisted spin^r spinors, for some odd m ∈, and describe the space of such spinors. * Study the geometric properties of M encoded by those invariant spin^r spinors which satisfy additional algebraic properties. The realm of projective spaces provides a fruitful ground for study. In particular, we prove that Friedrich's construction of generalised Killing spinors on ℂℙ^3 <cit.> cannot be generalised to higher complex dimensions, showing that this is the only dimension for which ℂℙ^2k+1 has an (k+1)-invariant metric carrying non-trivial invariant generalised Killing spinors. We also find the spin^ℍ spinor on ℍℙ^n inducing the standard quaternionic Kähler structure (see <cit.>) with new representation-theoretical methods, and we show that this is, up to scaling, the only (n+1)-invariant spin^ℍ spinor on this space. Finally, we find that the minimum values of r and m such that 𝕆ℙ^2 carries non-trivial F_4-invariant m-twisted spin^r spinors are r=9 and m=3, and the space of such spinors is four-dimensional. The computations carried out in this paper illustrate an extension of the differential forms approach to the spin representation (see e.g. <cit.>) to the context of spin^r structures. These techniques allow us to express and manipulate complicated twisted spinors in an easy and readable way, finding new examples of special spin^r spinors. The main contribution of this paper is, then, the fusion of the differential forms approach with the power of spin^r geometry to encode geometric properties of manifolds which are not necessarily spin. Our results are summarised as follows: Let G be a compact, simple and simply connected Lie group acting transitively on the projective space M = ℂℙ^n, ℍℙ^n or 𝕆ℙ^2. 
Then, the minimum values of r,m ∈ (with m odd) such that M admits a G-invariant spin^r structure carrying a non-zero invariant m-twisted spin^r spinor are shown in Table <ref>, together with the geometric information such spinors encode. § PRELIMINARIES We begin by introducing the necessary background definitions and results concerning spin and spin^r geometry within the context of homogeneous spaces. For an introduction to spin^r manifolds, we refer the reader to <cit.>, and for background about homogeneous spaces see e.g. <cit.>. §.§ Invariant metrics on reductive homogeneous spaces It is well-known that invariant Riemannian metrics on a reductive homogeneous space G/H are in one-to-one correspondence with _H-invariant inner products on a fixed reductive complement of the Lie algebra of H. Under some mild conditions, one can explicitly describe the set of such inner products: Let G/H be a Riemannian homogeneous space with reductive decomposition 𝔤 = 𝔥⊕𝔪, where 𝔤 is the Lie algebra of G, 𝔥 is the Lie algebra of H and 𝔪 is an _H-invariant complement of 𝔥 in 𝔤. Suppose that the adjoint representation of H on 𝔪 decomposes as a direct sum of pairwise non-isomorphic irreducible components 𝔪 = 𝔪_1 ⊕⋯⊕𝔪_k, and let B be an _H-invariant inner product on 𝔪. Then, the _H-invariant inner products on 𝔪 are precisely those of the form ∑_i=1^k a_i B 𝔪_i ×𝔪_i , a_i > 0 . §.§ Invariant spin^r structures Denote by (n) the special orthogonal group, and let λ_n (n) →(n) be the standard two-sheeted covering. This map induces an isomorphism at the level of Lie algebras, and its inverse ρ𝔰𝔬(n) ≅Λ^2 ^n →𝔰𝔭𝔦𝔫(n) ⊆ l(n) is given by 2 e_i ∧ e_j ↦ e_i · e_j. If f is any map with codomain 𝔰𝔬(n), we will refer to fρ∘ f as the spin lift of f. For r ∈ℕ, we define the group ^r(n) ((n) ×(r) ) / ℤ_2 , where ℤ_2 = ⟨(-1,-1)⟩⊆(n) ×(r). Note that ^1(n) = (n), ^2(n) = ^ℂ(n) and ^3(n) = ^ℍ(n), and that there are natural homomorphisms λ^r_n ^r(n) →(n) , ξ^r_n ^r(n) →(r) . [μ,ν] ↦λ_n(μ) [μ,ν] ↦λ_r(ν) We recall a topological result that we will use multiple times throughout the text: The map φ^r,n^r(n) →(n) ×(r) defined by λ^r_n ×ξ^r_n is a two-sheeted covering. Moreover, * φ^2,2_♯(π_1( ^2(2)) ) = ⟨(1,± 1)⟩⊆ℤ×ℤ≅π_1 ( (2) ×(2)); * for n ≥ 3, φ^2,n_♯( π_1 (^2(n))) = ⟨(1,1)⟩⊆ℤ_2 ×ℤ≅π_1 ( (n) ×(2)); * for r,n ≥ 3, φ^r,n_♯(π_1 (^r(n) ) ) = ⟨ (1,1) ⟩⊆ℤ_2 ×ℤ_2 ≅π_1 ( (n) ×(r)), where we always take the identifications π_1 ((n) ×(r)) ≅π_1((n)) ×π_1((r)). These enlargements or twistings of the spin group give rise to the main players in this paper: Let M be an oriented Riemannian n-manifold with principal (n)-bundle of positively oriented orthonormal frames FM. A spin^r structure on M is a reduction of the structure group of FM along the homomorphism λ^r_n. In other words, it is a pair (P,Φ) consisting of * a principal ^r(n)-bundle P over M, and * a ^r(n)-equivariant bundle homomorphism Φ P → FM, where ^r(n) acts on FM via λ^r_n. If there is no risk of confusion and Φ is clear from the context, we shall simply denote such a structure by P. The principal (r)-bundle associated to P along ξ^r_n is called the auxiliary bundle of the spin^r structure, and it is denoted by P̂. If (P_1,Φ_1) and (P_2,Φ_2) are spin^r structures on M, an equivalence of spin^r structures from (P_1,Φ_1) to (P_2,Φ_2) is a ^r(n)-equivariant diffeomorphism f P_1 → P_2 such that Φ_1 = Φ_2 ∘ f. 
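As a basic example (a standard observation, independent of any homogeneity assumption), every oriented Riemannian n-manifold M admits a spin^n structure: the map g̃ ↦ [g̃, g̃] descends to a homomorphism SO(n) → Spin^n(n), since the two lifts of an element of SO(n) to Spin(n) differ by -1 in both factors simultaneously, and the associated bundle P := FM ×_SO(n) Spin^n(n), together with the natural map P → FM, [f,x] ↦ f ·λ^n_n(x), is a spin^n structure on M whose auxiliary SO(n)-bundle is the frame bundle itself. In particular, being spin^n imposes no restriction on M, which is why the invariants introduced below single out the minimal admissible r. 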
If, moreover, M = G/H is a Riemannian homogeneous space, we say that a spin^r structure (P,Φ) on M is G-invariant if G acts smoothly on P by ^r(n)-bundle homomorphisms and Φ is G-equivariant. G-invariant spin^r structures on G/H are in one-to-one correspondence with representation-theoretical data: [<cit.>] Let G/H be an n-dimensional oriented Riemannian homogeneous space with H connected and isotropy representation σ H →(n). Then, there is a bijective correspondence between * G-invariant spin^r structures on G/H modulo G-equivariant equivalence of spin^r structures, * Lie group homomorphisms φ H →(r) such that σ×φ H →(n) ×(r) lifts to a homomorphism ϕ H →^r(n) along λ_n^r modulo conjugation by an element of (r). Explicitly, to such a φ corresponds the spin^r structure (P,Φ) with P = G ×_ϕ^r(n) and Φ P → FM ≅ G ×_σ(n) given by [g,x] ↦ [g,λ_n^r(x)]. For an oriented Riemannian homogeneous space M=G/H, its G-invariant spin type Σ(M,G) is defined by Σ(M,G):= min{ r∈ℕ : M admits a G-invariant spin^r structure} . §.§ Differential forms approach to the spin representation It is well-known – see e.g. <cit.> – that, for n ∈, the complexification ℂl(n) of the real Clifford algebra (n) satisfies ℂl(n) ≅_2^k (ℂ) if n=2k , _2^k (ℂ) ⊕_2^k (ℂ) if n=2k+1 . For n = 2k or 2k+1, define Σ_n = (ℂ^2)^⊗ k , and let s_k _2^k (ℂ) →_(Σ_n) be the standard representation of _2^k (ℂ). The spin representation Δ_n ℂl(n) →_ℂ(Σ_n) is defined by Δ_n s_k if n=2k , s_k ∘_j if n=2k+1 , where _j is the projection onto the j-th factor. Note that, for odd n, there are two non-isomorphic irreducible representations of l (n), and we are choosing one of them (we shall specify which one below). We will also denote by Δ_n its restriction to (n) when there is no risk of confusion. This restriction is independent of the choice of representation for odd n. It is useful to have an explicit description of the spin representation which does not use the isomorphism (<ref>). We will describe one here, which we refer to as the differential forms approach to the spin representation. More details can be found e.g. in <cit.>. Suppose n = 2k+1, and let (e_0,…,e_2k) be the standard basis of ℝ^n. Its complexification decomposes as ℂ_0 ⊕ L ⊕ L' , where ℂ_0 = _ℂ{u_0 i e_0} and L _ℂ{ x_j 1/√(2)( e_2j-1 - i e_2j) }_j=1^k , L' _ℂ{ y_j 1/√(2)( e_2j-1 + i e_2j) }_j=1^k . Note that _ℂ(Λ^∙L') = 2^k, and ℂl(n) acts on Λ^∙L' by extending x_j ·η i√(2) x_j ⌟η , y_j ·η i√(2) y_j ∧η , u_0 ·η -η_even + η_odd , where η_even and η_odd are, respectively, the even and odd parts of η∈Λ^∙ L'. Hence, e_2j-1·η i ( x_j ⌟η + y_j ∧η) , e_2j·η y_j ∧η - x_j ⌟η , e_0 ·η i η_even -i η_odd . This representation is isomorphic to Δ_n for n=2k+1 (the other possible choice of irreducible representation of l(n) corresponds to letting e_0 act by the negative of what is established in (<ref>)). To obtain it for n=2k, repeat all the above ignoring everything with a zero subscript. These representations have an invariant Hermitian product, which we shall denote by ⟨· , ·⟩, and an associated norm ·, for which the basis { y_j_1, …, j_k y_j_1∧⋯∧ y_j_k 1 ≤ k ≤ n , 1 ≤ j_1 < ⋯ < j_k ≤ n } is orthonormal. §.§ Invariant spin^r spinors A classical spin structure allows us to build a spinor bundle. Similarly, a spin^r structure naturally induces a family of complex vector bundles as follows: Let M be an n-dimensional oriented Riemannian manifold admitting a spin^r structure (P,Φ). 
For m ∈ odd, its m-twisted spin^r spinor bundle is defined by Σ_n,r^m M P ×_Δ_n,r^mΣ_n,r^m , with the natural projection to M induced by that of P, where Δ_n,r^mΔ_n ⊗Δ_r^⊗ m , Σ_n,r^mΣ_n ⊗Σ_r^⊗ m . If, moreover, M = G/H is a homogeneous space and (P,Φ) is G-invariant, then there exists a homomorphism φ H →(r) such that σ×φ lifts to a map ϕ H →^r(n) (see Theorem <ref>), and this bundle takes the form Σ_n,r^m M = G ×_Δ_n,r^m∘ϕΣ_n,r^m . Sections of Σ_n,r^m M are called m-twisted spin^r spinors – if there is no risk of confusion, we will just refer to them as spin^r spinors or simply spinors. They are identified with H-equivariant maps ψ G →Σ_n,r^m, and G acts on the space of spinors by (g ·ψ) (g') ψ( g^-1 g' ) . G-invariant spinors correspond, then, to constant H-equivariant maps G →Σ_n,r^m, which in turn correspond to elements of Σ_n,r^m which are stabilised by H. If H is connected, these are just elements of Σ_n,r^m which are annihilated by the differential action of the Lie algebra of H. We denote the space of invariant m-twisted spin^r spinors by ( Σ_n,r^m)_inv. In a similar spirit to Definition <ref>, we make the following definition: For an oriented Riemannian homogeneous space M=G/H, the G-invariant spinor type of M is defined by σ(M,G) min{ r ∈ℕ | M admits a G-invariant spin^r structure with (Σ_n,r^m)_inv≠ 0 for some odd m } . The G-invariant spinor type σ(M^n,G) is well-defined, and it satisfies 1 ≤σ(M,G) ≤ n . This is because the G-invariant ^n structure on M determined by taking φ H →(n) to be equal to the isotropy representation always carries a non-zero invariant 1-twisted spin^n spinor – see <cit.>. The requirement that r be minimal in the definition of the G-invariant spinor type is motivated by the next proposition, which shows that passing from a spin^r structure to any spin^r' structure (r'>r) induced by it via the obvious inclusion ^r(n) ↪^r'(n) leads to redundancies. Before stating the proposition, we introduce some terminology which will be useful in describing the relationship between the structures: Let M^n=G/H be an oriented Riemannian homogeneous space. We say that two spin^r structures P_r, P_r' (r≤ r') on M are in the same lineage if P_r'≅ P_r ×_ι^r'(n), where ι: ^r(n) ↪^r'(n) is the natural inclusion map induced by the inclusion (r) ↪(r') as the lower right-hand r× r block. Let M^n=G/H be an oriented Riemannian homogeneous space with connected isotropy group H, equipped with a G-invariant spin^r structure P_r. Furthermore, for any r'≥ r consider the invariant spin^r' structure P_r' in the lineage of P_r. If ψ∈ (Σ^m_n,r)_ is an invariant m-twisted spin^r spinor, then it induces an invariant m'-twisted spin^r' spinor for any m'≥ m (m,m' odd), i.e. there is an inclusion (Σ^m_n,r)_inv↪(Σ^m'_n,r')_inv for all r'≥ r, m'≥ m (m,m' odd). It suffices to prove the result for r'∈{r, r+1}. Suppose first that r'=r+1, and let φ : H→(r) be an auxiliary homomorphism corresponding to P_r in the sense of Theorem <ref>. Denoting by σ:H→(n) the isotropy representation, we begin by observing that the invariant spin^r+1 structure in the lineage of P_r is induced by the lift of the homomorphism σ×φ' : H →(n)×(r+1) given by the composition of σ×φ : H →(n)×(r) with the inclusion (n)×(r)↪(n)×(r+1). In particular, 𝔥 acts on Σ_n,r+1^m'= Σ_n ⊗Σ_r+1^⊗ m' by the (tensor product action associated to the) lift of σ_* on the Σ_n factor and the lift of φ_* on the Σ_r+1 factors. We now split into two cases based on the parity of r. 
Supposing first that r is even, we have Σ_r+1|_𝔰𝔭𝔦𝔫(r)^≃Σ_r as 𝔰𝔭𝔦𝔫(r)^ representations, and therefore Σ^m'_n,r+1|_𝔥^≃Σ^m'_n,r as 𝔥^-representations. Since m,m' are both odd we have m'-m = 2k for some k≥ 0, and therefore Σ_r^⊗ m'|_𝔰𝔭𝔦𝔫(r)^≃Σ_r^⊗ m⊗Σ_r ⊗…⊗Σ_r _2k copies. But Σ_r is a self-dual representation of 𝔰𝔭𝔦𝔫(r)^, hence also a self-dual representation of 𝔥^, so Σ_r^⊗ 2k contains a copy of the trivial 𝔥-representation. The corresponding H-representation thus also contains a trivial subrepresentation since H is connected. In particular, there is a copy of Σ^m_n,r|_H inside Σ^m'_n,r|_H and the result in this case follows. Suppose now that r is odd, and denote by Σ_r+1 = Σ^+_r+1⊕Σ^-_r+1 the splitting into positive and negative half-spinor spaces. Then we have Σ_r+1^+|_𝔰𝔭𝔦𝔫(r)^≃Σ_r, hence Σ^m'_n,r+1|_𝔥^ contains a copy of Σ_n,r^m', and the result in this case then follows by the same argument as in the even case. We have shown the result holds for r'=r+1 (hence for all r'>r), and all that remains is to consider the case r'=r. The result in this case follows by arguing exactly as above, using the fact that Σ_r is a self-dual 𝔰𝔭𝔦𝔫(r)^ representation to find a copy of the trivial representation in Σ_r^⊗ (m'-m). §.§ Special spin^r spinors In the classical spin setting, spinors satisfying some additional properties carry geometric information about the manifold. Special spin^r spinors also encode geometric properties. Let us first define the special properties of spinors we shall be concerned about: [<cit.>] Let ψ∈Σ_n,r^m, X,Y ∈ℝ^n and 1 ≤ k<l ≤ r, and let (ê_1, …, ê_r) be the standard basis of ℝ^r. The real 2-form η^ψ_kl and the skew-symmetric endomorphism η̂_kl^ψ associated to ψ are defined by η_kl^ψ (X,Y) ⟨(X ∧ Y ) · (ê_k ·ê_l) ·ψ , ψ⟩ , η̂_kl^ψ (X) ( η_kl^ψ (X, ·) )^♯ . We say that ψ is pure if ( η̂_kl^ψ)^2 = -_ℝ^n and ( η_kl^ψ + 2 ê_k ·ê_l ) ·ψ = 0 (only for r ≥ 3) , for all 1 ≤ k < l ≤ r. An m-twisted spin^r spinor on a manifold is pure if it is pure at every point. It is clear that an invariant spin^r spinor on a homogeneous space is pure if, and only if, it is pure at one point. We are also interested in various differential equations that a spinor might satisfy. Recall that the Levi-Civita connection on a spin manifold naturally induces a connection on the spinor bundle. Similarly, the Levi-Civita connection of a spin^r manifold together with a connection θ on the auxiliary bundle defines a connection ∇^θ on each twisted spin^r spinor bundle. There are obvious analogues of the usual special spinorial field equations to the spin^r setting: Let ψ be a twisted spin^r spinor on M and θ a connection on the auxiliary bundle of the spin^r structure. * ψ is θ-parallel if ∇^θψ = 0; * ψ is θ-Killing if for all vector fields X one has that ∇_X^θψ = λ X ·ψ, for some constant λ∈ℝ; * ψ is θ-generalised Killing if there exists a symmetric endomorphism field A∈(TM) such that, for all vector fields X, one has ∇_X^θψ = A(X) ·ψ. We collect here a number of results that relate the existence of special spin^r spinors to geometric properties of the manifold: [<cit.>] Let M be an n-dimensional spin^r manifold, and let θ be a connection on its auxiliary bundle. * If M carries a θ-parallel spinor ψ, then the Ricci tensor decomposes as = 1/ψ^2∑_k < lΘ̂_kl∘η̂_kl^ψ , where Θ̂_kl is the skew-symmetric endomorphism associated to the 2-form on TM given by Θ_kl (X,Y) ⟨Ω(X,Y) ( ê_k ) , ê_l ⟩ , where Ω is the curvature 2-form of the connection θ on the auxiliary bundle. 
* If θ is flat and M carries a θ-Killing spinor, then M is Einstein. * If M carries a θ-parallel m-twisted pure spinor for some m ∈ℕ, r ≥ 3, r ≠ 4, n ≠ 8, n + 4r - 16 ≠ 0 and n + 8r - 16 ≠ 0, then M is Einstein. * If r=2 and M carries a θ-parallel pure spinor, then M is Kähler. * If r=3 and M admits a θ-parallel pure spinor, then M is quaternionic Kähler. If M=G/H is a Riemannian homogeneous space, invariant connections on homogeneous bundles over M are described by algebraic data (<cit.>, see e.g. <cit.> for a modern treatment): Let G/H be a homogeneous space, and let ϕ H → K be a Lie group homomorphism. There is a one-to-one correspondence between G-invariant connections on G×_ϕ K and linear maps : 𝔤→𝔨 satisfying[Note that _H in condition (2) refers to the restriction of the adjoint representation of G to H⊆ G, whereas _K refers to the adjoint representation of K.]: * (X) = ϕ_* (X) , X ∈𝔥; * ∘_H(h) = _K(ϕ(h) ) ∘, h ∈ H. The map corresponding to a connection is called the Nomizu map of said connection. For the connections of interest in this article, the Nomizu maps are particularly easy to describe: (<cit.>) Let (G/H,g) be an n-dimensional oriented Riemannian homogeneous space, where the metric g corresponds to an invariant inner product B on a reductive complement 𝔪 of 𝔥. The Nomizu map 𝔤→𝔰𝔬(𝔪) of the Levi-Civita connection of g is given by (X)(Y) = 1/2[ X,Y ]_𝔪 + U(X,Y) , X ∈𝔤 , Y ∈𝔪 , where U is defined by B ( U(X,Y) , W ) = 1/2( B ( [W,X]_𝔪 , Y ) + B ( X , [W,Y]_𝔪) ) . The following proposition describes how the correspondence in Proposition <ref> works in the particular situation we are interested in: Let (G/H,g) be an n-dimensional Riemannian homogeneous space equipped with a G-invariant spin^r structure P. Let 𝔤→𝔰𝔬(n) be the Nomizu map of the Levi-Civita connection of g, and let ' 𝔤→𝔰𝔬(r) be the Nomizu map of an invariant connection θ on the associated bundle P̂. Let be the lift of to 𝔰𝔭𝔦𝔫(n) and let ' be the lift of ' to 𝔰𝔭𝔦𝔫(r). Then, ⊗(')^⊗ m is the Nomizu map of the invariant connection ∇^θ on the m-twisted spin^r spinor bundle. Moreover, if ψ∈(Σ_n,r^m)_inv and X̂ is the fundamental vector field on G/H defined by X ∈𝔪, then ( ∇^θ_X̂ψ)_eH = ⊗(')^⊗ m (X)·ψ . In particular, an invariant m-twisted spin^r spinor ψ is θ-parallel if, and only if, it satisfies the equation ∀ X ∈𝔪 ⊗(')^⊗ m (X)·ψ = 0 . As we shall see later in the setting of spin^ structures, the second condition in Proposition <ref> is quite restrictive. Indeed, the auxiliary bundles of invariant spin^ structures are principal bundles of the abelian group (2). Hence, the second condition becomes ∘_H(h) = for all h∈ H. This will force the kernel of |_𝔪 to be quite large in most of our cases. The following is a useful criterion: Let G/H be a homogeneous space with a reductive decomposition 𝔤 = 𝔥⊕𝔪, and let ϕ H → K be a Lie group homomorphism. Let H_0⊆ H be the kernel of _K ∘ϕ: H →(𝔨). If X ∈span_[ 𝔥_0 , 𝔪], then (X)=0 for the Nomizu map : 𝔤→𝔨 associated to any invariant connection on G×_ϕ K. By linearity of , it suffices to consider X=[v,Y] for some v ∈𝔥_0 and Y ∈𝔪. Let γ: → H_0 be a curve with γ(0)=e_G and γ'(0) = v. By Proposition <ref>, the Nomizu map of any invariant connection on G×_ϕ K satisfies ∘_H(γ(t)) =, and hence 0= d/dt|_t=0(Y) = d/dt|_t=0(_H(γ(t)) Y) = ([v,Y]) = (X). Finally, we examine the differential equations satisfied by invariant spin^r spinors on symmetric spaces. 
The following proposition is analogous to the familiar fact in the spin setting that invariant spinors on symmetric spaces are ∇^g-parallel, since the Levi-Civita and the Ambrose-Singer connections coincide: Let (M=G/H,g) be a Riemannian symmetric space with a G-invariant spin^r structure P. Furthermore, suppose that the natural vector bundle E associated to the auxiliary bundle P̂ is G-equivariantly isomorphic to a subbundle of the tensor bundle T^p,qM := TM^⊗ p⊗ T^*M^⊗ q for some p,q∈ℕ. Let ∇^E denote the natural connection induced by the Levi-Civita connection ∇^g of g under the identification E⊆ T^p,qM, and ∇:= ∇^g ⊗ (∇^E)^⊗ m the corresponding twisted connection on Σ_n,r^m M. If ψ∈Σ_n,r^m is G-invariant, then it is ∇-parallel, i.e. ∇ψ =0. Since M=G/H is symmetric we may fix a reductive decomposition 𝔤=𝔥⊕𝔪 such that [𝔪,𝔪]⊆𝔥. With respect to this decomposition, the Nomizu map associated to the Levi-Civita connection vanishes identically on the reductive complement 𝔪, i.e. ^g|_𝔪≡ 0. The extension of ^g|_𝔪 to tensors is therefore also identically zero, so the inclusion E⊆ T^p,qM implies that the Nomizu map ^E associated to ∇^E also vanishes identically on 𝔪. The twisted Nomizu map associated to ∇ then vanishes identically on 𝔪, and the result follows. This result will be useful for several of the cases in our classification, where the limited number of low-dimensional representations of the isotropy groups will force the auxiliary bundles to be isomorphic to familiar tensor (sub)bundles. §.§ Some notation Throughout the computations carried out in this paper, we will repeatedly use some matrix notation and identities, taken from <cit.>. We will denote by E^(n)_i,j (resp. F^(n)_i,j) the elementary n × n skew-symmetric (resp. symmetric) matrix whose only non-zero entries are, in the case of E^(n)_i,j, a -1 in the (i,j) position and a 1 in the (j,i) position, and, in the case of F^(n)_i,j, a 1 in each of the (i,j) and (j,i) positions. By convention, the matrix F^(n)_i,i has all the entries equal to zero except for the (i,i) entry, which is 1. We will denote by B_0 the bilinear form on the space of matrices of appropriate size given by B_0(X,Y) := - Re( tr(XY) ) , where Re(z) denotes the real part of z and tr(A) is the trace of the matrix A. Finally, if {e_i}_i is an orthonormal basis for some vector space V with respect to an inner product B, we shall denote by e_i,j:= e_i∧ e_j the standard basis elements for 𝔰𝔬(V,B) ≅Λ^2 V, sending e_i ↦ e_j and e_j ↦ -e_i. § PROJECTIVE SPACES Oniščik <cit.> classified the compact, simple and simply connected Lie groups which act transitively on the projective spaces ℂℙ^n, ℍℙ^n and 𝕆ℙ^2 – see also <cit.>. We exhibit them in Table <ref>. §.§ Hermitian complex projective space §.§.§ Invariant spin^r structures We are now ready to determine the (n+1)-invariant spin type of ℂℙ^n. By <cit.>, it is clear that ℂℙ^n admits an (n+1)-invariant spin structure if, and only if, n is odd. Moreover, one has: The (n+1)-invariant spin^ℂ structures on ℂℙ^n are given by (n+1) ×_ϕ_s^ℂ(2n) , s ∈ℤ with n ≢s mod 2 , where ϕ_s is the unique lift of σ×φ_s to ^ℂ(2n), σ H →(2n) is the isotropy representation and φ_s H →(2) ≅(1) is given by [ z 0; 0 B ]↦ det(B)^s. In particular, the (n+1)-invariant spin type of ℂℙ^n is Σ(ℂℙ^n,(n+1)) = 1 if n is odd, and 2 if n is even. Note that H ≅(n), and that every Lie group homomorphism (n) →(1) is of the form B ↦ det(B)^s, for some s ∈ℤ. 
If α is a generator of π_1 ( H ) ≅, then ( σ×φ_s )_♯ (α) = ( n-1 , s ) ∈π_1 ( (2n) ) ×π_1 ( (2) ) . Hence, by Proposition <ref>, σ×φ_s H →(2n) ×(1) lifts to ^ℂ(2n) if, and only if, n ≢s mod 2. Finally, as (1) is abelian, the representations φ_s are pairwise non-equivalent. The result now follows from Theorem <ref>. §.§.§ Invariant spin^r spinors The classical spin case r=1 does not yield any non-trivial invariant spinors, as we show in the following theorem: For n odd, there are no non-trivial (n+1)-invariant spinors on ℂℙ^n. We need the explicit expression of the action of ξ∈𝔥 on 𝔪. Letting σ H →(2n) be the isotropy representation and σ its lift to (2n), and e.g. using the commutation relations in <cit.>, one can readily see that, for each p = 1, …, n, (ξ)𝔪 (e_2p) = [ξ,e_2p]_𝔪 = (n+1) e_2p-1 , (ξ)𝔪 (e_2p-1) = [ξ,e_2p-1]_𝔪 = - (n+1) e_2p . Hence, σ_* (ξ) = (ξ)𝔪 = (n+1) ∑_p=1^n e_2p∧ e_2p-1∈𝔰𝔬(2n) , and the spin lift is given by σ_*(ξ) = n+1/2∑_p=1^n e_2p· e_2p-1∈𝔰𝔭𝔦𝔫(2n) ⊆ℂl(2n) . A direct computation using (<ref>) shows that, for each 1 ≤ k ≤ n and 1 ≤ j_1 < … < j_k ≤ n, σ_*(ξ) ·( y_j_1∧⋯∧ y_j_k) = i(n+1)/2(2k-n) y_j_1∧⋯∧ y_j_k . From this we observe that, if n is odd, there are no non-trivial invariant spinors. The fact that no non-trivial invariant spinors exist motivates the investigation of spin^ spinors. For n,s ∈ℕ with n ≢s mod 2, the space of (n+1)-invariant 1-twisted spin^ℂ spinors on ℂℙ^n associated to the spin^ℂ structure (n+1) ×_ϕ_s^ℂ(2n) is given by (Σ_2n,2^1)_inv = _ℂ{ 1 ⊗1̂ , (y_1 ∧⋯∧ y_n) ⊗ŷ_1 } s = n+1 , _ℂ{(y_1 ∧⋯∧ y_n) ⊗1̂ , 1 ⊗ŷ_1 } s = -(n+1) , 0 otherwise . In particular, the (n+1)-invariant spinor type of ℂℙ^n is σ(ℂℙ^n, (n+1)) = 2 . Recall that 𝔥 = ℝξ⊕𝔥' as Lie algebras, and note that, for ψ∈Σ_2n,2^1, ( ∀ X ∈𝔥' ( ϕ_s )_* (X) ·ψ = 0 ) ψ∈_ℂ{1,y_1 ∧⋯∧ y_n }⊗Σ_2 , by <cit.> and the definition of φ_s. Moreover, (ϕ_s)_* (ξ) = ( n+1/2∑_p=1^n e_2p· e_2p-1 , sn/2ê_1 ·ê_2 ) ∈𝔰𝔭𝔦𝔫(2n) ⊕𝔰𝔭𝔦𝔫(2) ≅𝔰𝔭𝔦𝔫^(2n) . Finally, an easy calculation shows that, for 1 ≤ k ≤ n and 1 ≤ j_1 < … < j_k ≤ n, (ϕ_s)_* (ξ) ·( ( y_j_1∧⋯∧ y_j_k) ⊗1̂) = i/2( (n+1)(2k-n) + sn ) ( ( y_j_1∧⋯∧ y_j_k) ⊗1̂) , (ϕ_s)_* (ξ) ·( ( y_j_1∧⋯∧ y_j_k) ⊗ŷ_1 ) = i/2( (n+1)(2k-n) - sn ) ( ( y_j_1∧⋯∧ y_j_k) ⊗ŷ_1 ) . From this it is straightforward to conclude the result. §.§.§ Special spin^r spinors The aim now is to show that the (n+1)-invariant spin^ℂ spinors on ℂℙ^n found in Theorem <ref> are pure and parallel with respect to a suitable connection on the auxiliary bundle. For s = n+1 (resp. s = -(n+1)), the (n+1)-invariant spin^ spinors 1 ⊗1̂ and (y_1 ∧⋯∧ y_n) ⊗ŷ_1 (resp. (y_1 ∧⋯∧ y_n) ⊗1̂ and 1 ⊗ŷ_1) on ℂℙ^n are pure. Moreover, they are parallel with respect to the invariant connection on the corresponding auxiliary bundle determined by the Nomizu map |_𝔪 = 0. We will only show the calculations for the spinor ψ = (y_1 ∧⋯∧ y_n) ⊗1̂, as the other three are analogous. We need to show that (η̂^ψ_12)^2 = -. Indeed, a straightforward calculation shows that η^ψ_12 (e_2p, e_2q) = ⟨ e_2p· e_2q·ê_1 ·ê_2 ·ψ , ψ⟩ + δ_p,q⟨ê_1 ·ê_2 ·ψ , ψ⟩ = ⟨ i e_2p· e_2q·ψ , ψ⟩ + δ_p,q⟨ -i ψ , ψ⟩ = 0, η^ψ_12 (e_2p-1, e_2q-1) = 0, η^ψ_12 (e_2p, e_2q-1) = ⟨ e_2p· e_2q-1·ê_1 ·ê_2 ·ψ , ψ⟩ = ⟨ i e_2p· e_2q-1·ψ , ψ⟩ = - δ_p,q. Hence, η̂^ψ_12 (e_2p) = - e_2p-1 and η̂^ψ_12 (e_2p-1) = e_2p. The last assertion of the proposition follows by noting that, as ℙ^n ≅(n+1)/((1)×(n)) is a symmetric space, the Levi-Civita connection coincides with the Ambrose-Singer connection, whose Nomizu map satisfies ^g|_𝔪≡ 0. 
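In low dimensions the purity computation can also be verified by machine. The following minimal sketch (illustrative only, written in Python with NumPy; it fixes n = 2, uses the contraction convention x_j ⌟ y_l = δ_jl implicit in the exterior-forms model, and lets ê_1 ·ê_2 act on the auxiliary factor by a factor of i, whose sign does not affect the outcome) builds the Clifford action on Λ^∙ L', checks the Clifford relations, and confirms that the 2-form associated to ψ = (y_1 ∧ y_2) ⊗1̂ squares to minus the identity:

import numpy as np
from itertools import combinations

n = 2                                   # complex dimension of CP^n; the spinor module has dimension 2^n
subsets = [s for k in range(n + 1) for s in combinations(range(1, n + 1), k)]
idx = {s: a for a, s in enumerate(subsets)}
N = len(subsets)

def wedge(j, s):                        # y_j wedge a basis element s, with its sign
    if j in s:
        return None
    return (-1) ** sum(1 for x in s if x < j), tuple(sorted(s + (j,)))

def contract(j, s):                     # x_j contraction, using x_j ⌟ y_l = delta_jl
    if j not in s:
        return None
    return (-1) ** s.index(j), tuple(x for x in s if x != j)

E = []                                  # Clifford matrices e_1, ..., e_2n acting on Lambda^* L'
for j in range(1, n + 1):
    A = np.zeros((N, N), complex)       # e_{2j-1}: eta -> i (x_j ⌟ eta + y_j ∧ eta)
    B = np.zeros((N, N), complex)       # e_{2j}:   eta -> y_j ∧ eta - x_j ⌟ eta
    for s in subsets:
        for res, cA, cB in [(contract(j, s), 1j, -1.0), (wedge(j, s), 1j, 1.0)]:
            if res is not None:
                sgn, t = res
                A[idx[t], idx[s]] += cA * sgn
                B[idx[t], idx[s]] += cB * sgn
    E += [A, B]

for a in range(2 * n):                  # sanity check: e_a e_b + e_b e_a = -2 delta_ab
    for b in range(2 * n):
        assert np.allclose(E[a] @ E[b] + E[b] @ E[a], -2.0 * (a == b) * np.eye(N))

psi = np.zeros(N, complex)              # psi = y_1 ∧ ... ∧ y_n; the auxiliary factor contributes the i below
psi[idx[tuple(range(1, n + 1))]] = 1.0

eta = np.zeros((2 * n, 2 * n))
for a in range(2 * n):
    for b in range(2 * n):
        if a != b:
            eta[a, b] = np.real(np.vdot(psi, 1j * E[a] @ E[b] @ psi))

assert np.allclose(eta @ eta, -np.eye(2 * n))   # the associated endomorphism squares to -id: psi is pure

Changing n in the first assignment runs the same check in higher dimensions, at the cost of a 2^n-dimensional spinor module. 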
In light of the Ricci decomposition in <cit.>, the existence of parallel pure spin^ spinors encodes a special geometric structure: The (n+1)-invariant metrics g_a on ℂℙ^n are Kähler-Einstein. Take s=-(n+1). Consider the spin^ structure defined by φ_s, and endow its auxiliary bundle with the connection described in Proposition <ref>. We have seen that this spin^ structure carries a non-zero parallel pure spin^ spinor ψ = (y_1 ∧⋯∧ y_n) ⊗1̂. This implies that the metric is Kähler <cit.> with respect to the invariant complex structure defined by η̂^ψ_12. Now, by Theorem <ref>(1), the Ricci tensor decomposes as = 1/ψ^2Θ̂_1 2∘η̂_1 2^ψ , where Θ̂_1 2 is the endomorphism associated to the 2-form on 𝔪 Θ_1 2 (X,Y) ⟨Ω(X,Y) ( ê_1 ) , ê_2 ⟩ , X,Y ∈𝔪 , where Ω is the curvature 2-form of the connection on the auxiliary bundle. Recall <cit.> that, if is the Nomizu map of the connection on the auxiliary bundle, then ∀ X,Y ∈𝔪Ω(X,Y) = [ (X), (Y) ]_𝔰𝔬(2) - ( [X,Y] ) = - ( [X,Y] ) . It is now easy to see that, for all 1 ≤ p, q ≤ n, Ω( e_2p-1, e_2q) = δ_p,qs/aê_1 ∧ê_2 , Ω( e_2p-1, e_2q-1) = Ω( e_2p, e_2q) = 0 . Hence, Θ̂_1 2 = s/a∑_p=1^n e_2p-1∧ e_2p , and finally, using the expression of η̂^ψ_12 obtained in the proof of Proposition <ref>, we obtain = 1/ψ^2Θ̂_1 2∘η̂_1 2^ψ = n+1/a . Recall that the Fubini-Study metric g_FS on ℂℙ^n is (n+1)-invariant, and that its Ricci constant is 2(n+1). From equation (<ref>), we can deduce that g_FS=g_1/2. §.§ Symplectic complex projective space Consider, for n ≥ 1, the homogeneous realisation of odd-dimensional complex projective space ℂℙ^2n+1≅(n+1)(1) ×(n) , where H (1) ×(n) is realised as a subgroup of (n+1) by the upper left-hand 1 × 1 block for (1) and the lower right-hand n × n block for (n). Denote by 𝔥 the Lie algebra of H and 𝔥' 𝔰𝔭(n) ⊆𝔰𝔭(n+1). Then, 𝔥 = ℝξ_1 ⊕𝔥' (as Lie algebras), where ξ_1 i F_1,1^(n+1) and 𝔥' = 𝔰𝔭(n) = _ℝ{ i F_p,p^(n+1), j F_p,p^(n+1), k F_p,p^(n+1), i F_r,s^(n+1), j F_r,s^(n+1), k F_r,s^(n+1), E_r,s^(n+1)}_l 2 ≤ r < s ≤ n+1 p = 2 , …, n+1 . The Lie subalgebra 𝔥⊆𝔰𝔭(n+1) has a reductive complement 𝔪( 𝔥)^⊥_B_0 = 𝒱⊕ℋ, where 𝒱_ℝ{ξ_2 -k F_1,1^(n+1), ξ_3 j F_1,1^(n+1)} , ℋ_ℝ{e_4p j F_1,p+1^(n+1), e_4p+1 k F_1,p+1^(n+1), e_4p+2 i F_1,p+1^(n+1), e_4p+3 E_1,p+1^(n+1)}_p = 1, …, n , and this is the decomposition of 𝔪 into irreducible subrepresentations of the adjoint representation of 𝔥. We have, therefore, by Theorem <ref>, a two-parameter family of invariant metrics g_a,t a B_0ℋ×ℋ + 2at B_0𝒱×𝒱 , a,t>0 , and a g_a,t-orthonormal basis of 𝔪 is given by {ξ_2^a,t1/√(2ta)ξ_2 , ξ_3^a,t1/√(2ta)ξ_3 , e_4p + ε^a,t1/√(2a) e_4p + ε}_lε = 0,1,2,3 p = 1 , …, n . We take the orientation defined by the ordering (ξ^a,t_2, ξ^a,t_3, e^a,t_4, …, e^a,t_4n+3). §.§.§ Invariant spin^r structures Consider, for n ≥ 1, the homogeneous realisation of odd-dimensional complex projective space ℂℙ^2n+1≅(n+1)(1) ×(n) , where H (1) ×(n) is realised as a subgroup of (n+1) by the upper left-hand 1 × 1 block for (1) and the lower right-hand n × n block for (n). Denote by 𝔥 the Lie algebra of H and 𝔥' 𝔰𝔭(n) ⊆𝔰𝔭(n+1). Then, 𝔥 = ℝξ_1 ⊕𝔥' (as Lie algebras), where ξ_1 i F_1,1^(n+1) and 𝔥' = 𝔰𝔭(n) = _ℝ{ i F_p,p^(n+1), j F_p,p^(n+1), k F_p,p^(n+1), i F_r,s^(n+1), j F_r,s^(n+1), k F_r,s^(n+1), E_r,s^(n+1)}_l 2 ≤ r < s ≤ n+1 p = 2 , …, n+1 . 
The Lie subalgebra 𝔥⊆𝔰𝔭(n+1) has a reductive complement 𝔪( 𝔥)^⊥_B_0 = 𝒱⊕ℋ, where 𝒱_ℝ{ξ_2 -k F_1,1^(n+1), ξ_3 j F_1,1^(n+1)} , ℋ_ℝ{e_4p j F_1,p+1^(n+1), e_4p+1 k F_1,p+1^(n+1), e_4p+2 i F_1,p+1^(n+1), e_4p+3 E_1,p+1^(n+1)}_p = 1, …, n , and this is the decomposition of 𝔪 into irreducible subrepresentations of the adjoint representation of 𝔥. We have, therefore, by Theorem <ref>, a two-parameter family of invariant metrics g_a,t a B_0ℋ×ℋ + 2at B_0𝒱×𝒱 , a,t>0 , and a g_a,t-orthonormal basis of 𝔪 is given by {ξ_2^a,t1/√(2ta)ξ_2 , ξ_3^a,t1/√(2ta)ξ_3 , e_4p + ε^a,t1/√(2a) e_4p + ε}_lε = 0,1,2,3 p = 1 , …, n . We take the orientation defined by the ordering (ξ^a,t_2, ξ^a,t_3, e^a,t_4, …, e^a,t_4n+3). §.§.§ Invariant spin^r spinors First, we classify the (n+1)-invariant spinors for the unique spin structure of ℂℙ^2n+1: The space Σ_inv of (n+1)-invariant spinors on ℂℙ^2n+1 is given by Σ_inv = _ℂ{ψ_+ω^(n+1)/2 , ψ_- y_1 ∧ω^(n-1)/2} n odd, 0 n even, where ω∑_p=1^n y_2p∧ y_2p+1. By <cit.>, the space of invariant spinors is quite restricted: Σ_inv⊆_ℂ{ω^k , y_1 ∧ω^k }_k=0, … , n . We only need to determine which of these are annihilated by ξ_1. A computation analogous to the one in the proof of Theorem <ref> shows that, if σ is the lift to (4n+2) of the isotropy representation σ H →(4n+2), σ_*(ξ_1) = ξ^a,t_2 ·ξ^a,t_3 + 1/2∑_p=1^n( e^a,t_4p· e^a,t_4p+1 + e^a,t_4p+2· e^a,t_4p+3) . In particular, σ_*(ξ_1) ·ω^k = i (n+1-2k) ω^k , σ_* (ξ_1) ·( y_1 ∧ω^k ) = i (n-1-2k)( y_1 ∧ω^k ) , and the result follows. We now turn to the study of spin^ℂ spinors. Using the same argument as in the Hermitian case (Theorem <ref>), one obtains: For n ∈ and s=2s' ∈ 2, the space (Σ_4n+2,2^1)_inv of (n+1)-invariant 1-twisted spin^ℂ spinors on ℂℙ^2n+1 associated to the spin^ℂ structure (n+1) ×_ϕ_s^ℂ(4n+2) is given by (Σ_4n+2,2^1)_inv = _ℂ{ω^(n+1+s')/2 , y_1 ∧ω^(n-1+s')/2}⊗1̂⊕ ⊕_ℂ{ω^(n+1-s')/2 , y_1 ∧ω^(n-1-s')/2}⊗ŷ_̂1̂ n ≢s' 2 , 0 otherwise. In particular, the (n+1)-invariant spinor type of ℂℙ^2n+1 satisfies σ( ℂℙ^2n+1 , (n+1) ) = 1 n odd , 2 n even. The spin^ structure corresponding to s=0 is the one induced by the usual spin structure. Indeed, taking s=0 in Theorem <ref>, one recovers the spinors in Theorem <ref> tensored with Σ_2. §.§.§ Special spin^r spinors In order to differentiate these spinors, one can see, using the formulas for the Nomizu map from <cit.>, that the spin lift ^a,t of the Nomizu map of the Levi-Civita connection of g_a,t is given by ^a,t( ξ_2^a,t) = 1-t/2√(2at)∑_p=1^n ( e_4p^a,t· e_4p+2^a,t - e_4p+1^a,t· e_4p+3^a,t) , ^a,t( ξ_3^a,t) = 1-t/2√(2at)∑_p=1^n ( e_4p^a,t· e_4p+3^a,t + e_4p+1^a,t· e_4p+2^a,t) , ^a,t( e_4p^a,t) = 1/2√(t/2a)( - ξ_2^a,t· e_4p+2^a,t - ξ_3^a,t· e_4p+3^a,t) , ^a,t( e_4p+1^a,t) = 1/2√(t/2a)( ξ_2^a,t· e_4p+3^a,t - ξ_3^a,t· e_4p+2^a,t) , ^a,t( e_4p+2^a,t) = 1/2√(t/2a)( ξ_2^a,t· e_4p^a,t + ξ_3^a,t· e_4p+1^a,t) , ^a,t( e_4p+3^a,t) = 1/2√(t/2a)( - ξ_2^a,t· e_4p+1^a,t + ξ_3^a,t· e_4p^a,t) . Baum et al. proved in <cit.> that ℂℙ^3 admits non-trivial (2)-invariant generalised Killing spinors, given by ψ_+± i ψ_-. For the Fubini-Study metric, the two eigenvalues of these generalised Killing spinors coincide, yielding real Killing spinors which are related to the nearly Kähler geometry of ℂℙ^3. We now show that this does not occur in higher dimensions: For n odd, ℂℙ^2n+1 admits non-trivial (n+1)-invariant generalised Killing spinors if, and only if, n=1. Let n ≥ 3, so that ω^(n ± 3)/2≠ 0. 
Using the above formulas (<ref>) for the Nomizu map, we get, for α, β∈, ^a,t( e_4p^a,t) ·( αψ_+ + βψ_- ) = -√(t/2a){αn+1/2 y_1 ∧ y_2p + β y_2p+1}∧ω^(n-1)/2 . Writing a general element X ∈𝔪 as a (real) linear combination X = μ_2 ξ_2^a,t + μ_3 ξ_3^a,t + ∑_p=1^n ( μ_4p e_4p^a,t + μ_4p+1 e_4p+1^a,t + μ_4p+2 e_4p+2^a,t + μ_4p+3 e_4p+3^a,t) of the basis vectors, we find that the Clifford product with an arbitrary invariant spinor is given by X ·( αψ_+ + βψ_- ) = α{ (i μ_2 + μ_3) y_1 + ∑_p=1^n[ (i μ_4p + μ_4p+1) y_2p + (i μ_4p+2+μ_4p+3) y_2p+1]}∧ω^(n+1)/2 + + αn+1/2{∑_p=1^n[(i μ_4p - μ_4p+1) y_2p+1 + (-i μ_4p+2+μ_4p+3) y_2p] }∧ω^(n-1)/2 + + β{ i μ_2 - μ_3 }ω^(n-1)/2 + + β y_1 ∧{∑_p=1^n[(-i μ_4p - μ_4p+1) y_2p + (-i μ_4p+2-μ_4p+3) y_2p+1] }∧ω^(n-1)/2 + + βn-1/2{∑_p=1^n[(-i μ_4p + μ_4p+1) + (i μ_4p+2-μ_4p+3) ] } y_1 ∧ω^(n-3)/2 . Hence, by equating coefficients in ^a,t( e_4p^a,t) ·( αψ_+ + βψ_- ) = X ·( αψ_+ + βψ_- ) , one easily concludes (using crucially that n ≥ 3) that the only possibility is α = β = 0, and the result follows. We now turn to the study of the (n+1)-invariant spin^ℂ spinors on ℂℙ^2n+1 found in Theorem <ref>. The aim is to show that, when s' = s/2 = ± n ± 1 and t=1, there is a pure spinor which is parallel with respect to a suitable connection on the auxiliary bundle. This encodes the fact that, for t=1, the metric g_a,t is Kähler. Let k ∈ℕ and ψ∈{ω^k ⊗1̂ , (y_1 ∧ω^k) ⊗1̂ , ω^k ⊗ŷ_1 , (y_1 ∧ω^k) ⊗ŷ_1 }. Then, a scalar multiple of ψ is pure if, and only if, k=0 or k=n. We will only prove it for ψ = ω^k ⊗1̂, as the other cases are analogous. For all 1 ≤ p , q ≤ n and ε∈{0,1,2,3}, one calculates η^ψ_12 (ξ^a,t_2, ξ^a,t_3) = ⟨ξ^a,t_2·ξ^a,t_3·ê_1 ·ê_2 ·ψ , ψ⟩ = ⟨ i ξ^a,t_2·ξ^a,t_3·ψ , ψ⟩ = - ⟨ω^k ⊗1̂ , ω^k ⊗1̂⟩ = -(k!)^2 nk , η^ψ_12 (e^a,t_4p, e^a,t_4q+1) = ⟨ e^a,t_4p· e^a,t_4q+1·ê_1 ·ê_2 ·ψ , ψ⟩ = ⟨ i · e^a,t_4p· e^a,t_4q+1·ω^k , ω^k ⟩ = - δ_p,q⟨ω^k - 2 k y_2p∧ y_2p+1∧ω^k-1, ω^k ⟩ = -δ_p,q(k!)^2 [ nk - 2 n-1k-1] , η^ψ_12 (e^a,t_4p+2, e^a,t_4q+3) = -δ_p,q(k!)^2 [ nk - 2 n-1k-1] , η^ψ_12 (ξ^a,t_2, e^a,t_4p + ε) = η^ψ_12 (ξ^a,t_3, e^a,t_4p + ε) = η^ψ_12 (e^a,t_4p, e^a,t_4q+2) = η^ψ_12 (e^a,t_4p, e^a,t_4q+3) = η^ψ_12 (e^a,t_4p+1, e^a,t_4q+3) = 0 , where n-1k-1 is understood to be 0 if k = 0. Altogether, we have η̂^ψ_12 = -(k!)^2 [ nkξ^a,t_2 ∧ξ^a,t_3 + [ nk - 2 n-1k-1] ∑_p=1^n( e^a,t_4p∧ e^a,t_4p+1 + e^a,t_4p+2∧ e^a,t_4p+3) ] , which is easily seen to square to a multiple of - if, and only if, k=0 or k=n. The preceding lemma, together with Theorem <ref> describing the invariant spin^ spinors, implies that the (n+1)-invariant spin^ℂ structure corresponding to s = 2s' ∈ 2 admits invariant pure spin^ℂ spinors if, and only if, s' ∈{n+1,-n-1,n-1,-n+1}, which are given in each case by {( y_1 ∧ω^n ) ⊗1̂, 1 ⊗ŷ_1 } s' = n+1, { 1 ⊗1̂, (y_1 ∧ω^n) ⊗ŷ_1 } s' = -n-1, {ω^n ⊗1̂, y_1 ⊗ŷ_1 } s' = n-1, { y_1 ⊗1̂, ω^n ⊗ŷ_1 } s' = -n+1. In order to differentiate these spinors, one needs to fix a connection on the auxiliary bundle. Applying the criterion in Lemma <ref>, one sees that the only (n+1)-invariant connection on the auxiliary bundle is the one with Nomizu map |_𝔪 = 0. This connection, together with the Levi-Civita connection of the metric g_a,t (with Nomizu map ^a,t), induces a connection ∇^a,t on the corresponding spin^ℂ spinor bundle. The following lemma is a straightforward calculation using the expression of the spin lift of the Nomizu map (<ref>): The invariant pure spin^ℂ spinor 1 ⊗1̂ is ∇^a,t-parallel if, and only if, t=1. 
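To see where the condition t = 1 comes from, one can evaluate this directly (a short check, using the identification of the ordered basis (ξ_2^a,t, ξ_3^a,t, e_4^a,t, …, e_4n+3^a,t) with consecutive pairs in the exterior-forms model, so that (ξ_2^a,t, ξ_3^a,t) produces y_1 and (e_4p^a,t, e_4p+1^a,t), (e_4p+2^a,t, e_4p+3^a,t) produce y_2p, y_2p+1). Since the Nomizu map of the auxiliary connection vanishes on 𝔪, the spinor 1 ⊗1̂ is ∇^a,t-parallel if, and only if, ^a,t(X) · 1 = 0 for every X ∈𝔪. For the horizontal directions one finds ^a,t( e_4p^a,t) · 1 = 1/2√(t/2a) ( - ξ_2^a,t· e_4p+2^a,t - ξ_3^a,t· e_4p+3^a,t) · 1 = 1/2√(t/2a) ( y_1 ∧ y_2p+1 - y_1 ∧ y_2p+1 ) = 0 , and similarly for e_4p+1^a,t, e_4p+2^a,t and e_4p+3^a,t, independently of t. For the vertical directions, however, ^a,t( ξ_2^a,t) · 1 = - (1-t)/√(2at) ω and ^a,t( ξ_3^a,t) · 1 = i (1-t)/√(2at) ω , which vanish precisely when t = 1, in agreement with the lemma. 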
These spin^ spinors encode some well-known geometric information of ℂℙ^2n+1: The metric g_a,1 on ℂℙ^2n+1 is Kähler-Einstein, for all a > 0. Let s = -2(n+1), and consider the spin^ structure on ℂℙ^2n+1 determined by φ_s. By Lemmata <ref> and <ref>, this structure carries a ∇^a,1 parallel pure spin^ℂ spinor ψ=1 ⊗1̂. Hence, by <cit.>, the metric g_a,1 is Kähler, for all a > 0. Let us now see that these metrics are also Einstein. By Theorem <ref>(1), the Ricci tensor decomposes as = 1/ψ^2Θ̂_1 2∘η̂_1 2^ψ . By calculations similar to those in the proof of Theorem <ref>, one finds that, for 1 ≤ p,q ≤ n, 0 ≤ε≤ 3 and 2 ≤ l ≤ 3, Ω(ξ^a,1_2,ξ^a,1_3 ) = - ( [ ξ^a,1_2 , ξ^a,1_3 ] ) = -1/a( ξ_1 ) = -1/a( φ_-2(n+1))_* ( ξ_1 ) = 2(n+1)/aê_1 ∧ê_2 , Ω(e^a,1_4p,e^a,1_4q+1) = Ω(e^a,1_4p+2,e^a,1_4q+3) = δ_p,q2(n+1)/aê_1 ∧ê_2 , Ω(e^a,1_4p,e^a,1_4q+2) = Ω(e^a,1_4p,e^a,1_4q+3) = Ω(e^a,1_4p+1,e^a,1_4q+2) = Ω(e^a,1_4p+1,e^a,1_4q+3) = Ω(ξ^a,1_l,e^a,1_4p + ε) = 0 . Hence, using the definition of Θ̂_12 in terms of Ω and taking k=0 in the proof of Lemma <ref>, Θ̂_1 2 = 2(n+1)/a( ξ^a,1_2∧ξ^a,1_3 + ∑_p=1^n ( e^a,1_4p∧ e^a,1_4p+1 + e^a,1_4p+2∧ e^a,1_4p+3) ) , η̂_1 2^ψ = - ( ξ^a,1_2∧ξ^a,1_3 + ∑_p=1^n ( e^a,1_4p∧ e^a,1_4p+1 + e^a,1_4p+2∧ e^a,1_4p+3) ) . Finally, substituting everything into equation (<ref>), we get = 2(n+1)/a , which completes the proof. §.§ Quaternionic projective space Consider the homogeneous realisation of quaternionic projective space given by ℍℙ^n≅(n+1)(1) ×(n) , where H (1) ×(n) is realised as a subgroup of (n+1) by the upper left-hand 1 × 1 block for (1) and the lower right-hand n × n block for (n). Denote by 𝔥 the Lie algebra of H and 𝔥' 𝔰𝔭(n) ⊆𝔥. Then, 𝔥 = 𝔰𝔭(1) ⊕𝔥' (as Lie algebras), and explicit bases are given by 𝔰𝔭(1) = _ℝ{ξ_1 i F_1,1^(n+1), ξ_2 -k F_1,1^(n+1), ξ_3 j F_1,1^(n+1)} , 𝔥' = 𝔰𝔭(n) = _ℝ{ i F_p,p^(n+1), j F_p,p^(n+1), k F_p,p^(n+1), i F_r,s^(n+1), j F_r,s^(n+1), k F_r,s^(n+1), E_r,s^(n+1)}_l 2 ≤ r < s ≤ n+1 p = 2 , …, n+1 . The isotropy subalgebra 𝔥⊆𝔰𝔭(n+1) has a reductive complement 𝔪_ℝ{e_4p j F_1,p+1^(n+1), e_4p+1 k F_1,p+1^(n+1), e_4p+2 i F_1,p+1^(n+1), e_4p+3 E_1,p+1^(n+1)}_p = 1, …, n , and the adjoint representation of 𝔥 on 𝔪 is irreducible. Therefore, by Theorem <ref>, the invariant metrics come in a one-parameter family g_a a B_0 |_𝔪×𝔪 , a>0 , and one easily verifies that the above basis of 𝔪 rescaled by 1/√(2a) is g_a-orthonormal. Without virtually any loss of generality, in order to simplify the notation we will only consider g:= g_1/2. We take the orientation defined by the ordering (e_4,e_5,…,e_4n+3). §.§.§ Invariant spin^r structures As ℍℙ^1 is just the sphere S^4, we will suppose throughout this section that n > 1. By Theorem <ref>, in order to understand (n+1)-invariant spin^r structures on ℍℙ^n, we need to find all Lie group homomorphisms φ H →(r) such that σ×φ lifts to ^r(4n). Since H is simply-connected, any such homomorphism lifts. Note also that, for r=2, using simplicity of (1) and (n), the only Lie group homomorphism (1) ×(n) →(2) is the trivial one. The corresponding (n+1)-invariant spin^ℂ structure on ℍℙ^n is naturally induced by its unique spin structure. The first interesting case is r=3, which corresponds to spin^ℍ structures. In order to classify them, we need to describe all homomorphisms (1) →(3): Up to conjugation by elements of (3) there are exactly two Lie group homomorphisms (3) →(3), namely the trivial homomorphism and the double covering λ_3. Let φ(1) →(3) be a non-trivial homomorphism, and recall that (1) ≅(3). 
As the Lie algebra 𝔰𝔬(3) is simple, the only non-trivial normal subgroups of (1) are discrete. Hence, φ has discrete kernel. As (1) is compact, the image of φ is a closed subgroup of (3). By the first isomorphism theorem for Lie groups, the image of φ is a 3-dimensional Lie subgroup of (3), and hence it is open in (3). As (3) is connected, φ must be surjective. Suppose now that T ∈(3,ℝ) is such that, for all A ∈(1), T^-1φ(A) T ∈(3). We claim that there exists T̂∈(3) such that T̂^-1φT̂ = T^-1φ T. Indeed, let B T^-1φ(A) T ∈(3). Then, T T^t = φ(A)^-1 T B T^t = φ(A)^-1 T B B^t T^t ( φ(A)^-1)^t = φ(A)^-1 T T^t φ(A) . Hence, as φ is surjective, TT^t commutes with all elements of (3), and hence it is a scalar multiple of the identity. Now, take T̂ = (T)^-1/3 T ∈(3). Finally, as (1) is simple, any 3-dimensional representation of (1) is either trivial or irreducible, and there is only one real irreducible 3-dimensional representation of (1) up to isomorphism <cit.>. This allows us to classify invariant spin^ℍ structures on quaternionic projective spaces: For n > 1, the (n+1)-invariant spin^ℍ structures on ℍℙ^n are given by (n+1) ×_ϕ_i^ℍ(4n) , i = 0,1 , where σ H →(4n) is the isotropy representation, φ_0 is the trivial homomorphism (1) ×(n) →(3), φ_1 (x,y) = λ_3 (x) and ϕ_i is the unique lift of σ×φ_i to ^ℍ(4n). The invariant spin^ℍ structure corresponding to φ_0 is simply the one induced by the unique spin structure, so for the rest of our discussion of ℍℙ^n we fix the spin^ℍ structure corresponding to φ_1. Observe that the auxiliary vector bundle of the spin^ℍ structure corresponding to φ_1 is (n+1)-equivariantly isomorphic to the rank-3 vector subbundle of (T ℍℙ^n) induced by the standard quaternionic Kähler structure on ℍℙ^n. §.§.§ Invariant spin^r spinors To begin, it is easy to see that this homogeneous realisation carries no invariant spinors: as the homogeneous realisation of ℍℙ^n that we are considering is that of a symmetric space, invariant spinors are parallel, and we know that ℍℙ^n cannot have any non-trivial parallel spinor, since it is not Ricci-flat. However, we shall see in the next proposition that there are always non-trivial invariant spin^ℍ spinors, for sufficient twistings of the spinor bundle, when n is odd. If n ≥ 1 is odd, the (n+1)-invariant spinor type of ℍℙ^n is σ(ℍℙ^n,(n+1))=3 , and the number of twistings m ≥ 0 of the spinor bundle which realises this is m=n. The preceding discussion shows that there are no invariant spinors. As noted above, the only invariant spin^ structure is the one coming from the spin structure, and it is clear that there are also no invariant spin^ spinors in this case (since H acts trivially on Σ_r and hence each Σ_4n,r^m is equivalent as H-modules to a direct sum of copies of Σ_4n). This shows that σ(ℍℙ^n, (n+1))≥ 3. Denote by V_t:=V(tω_1) the irreducible representation of 𝔰𝔩(2,) with highest weight tω_1 (and dimension t+1). Arguing as in <cit.>, we consider the structure of 𝒮:=(Σ_4n|_𝔥^)^𝔰𝔭(2n,) = span_{ω^k}_k=0^n (ω:=∑_j=1^n y_2j∧ y_2j+1) as a module for 𝔰𝔭(2,)⊂𝔥^. The action of 𝔰𝔭(2,) on 𝒮 follows from <cit.> and is given explicitly by (ξ_1)𝔪·ω^k = i(n-2k) ω^k , (ξ_2)𝔪·ω^k = k (n-k+1) ω^k-1 - ω^k+1 , (ξ_3)𝔪·ω^k = i ( ω^k+1 + k (n-k+1) ω^k-1) . The standard basis element for the Cartan subalgebra of 𝔰𝔭(2,)≅𝔰𝔩(2,) is -i ξ_1∼diag[1,-1], and by (<ref>) we see that the action of this element on 𝒮 has highest eigenvalue n. In particular, 𝒮 contains a copy of V_n, and by reason of dimension we have 𝒮≃ V_n as 𝔰𝔭(2,)-modules. 
Note that this representation is self-dual (see e.g. <cit.>). Thus, it suffices to show that the smallest odd tensor power of Σ_3 which contains a copy of V_n is Σ_3^⊗ n. Recalling the well-known decomposition V_s⊗ V_t ≃ V_s+t⊕ V_s+t-2⊕…⊕ V_s-t (s≥ t) of 𝔰𝔩(2,)-representations (see e.g. <cit.>), the result follows by repeatedly using (<ref>) to decompose tensor powers of Σ_3 ≃ V_1. The difference in behaviour between the even and odd cases in the preceding proposition occurs as something of a technicality rather than a manifestation of any significant geometric difference; indeed, the argument presented in the odd case also produces representation-theoretic invariants in the even case if we allow even twistings of the spinor bundle. The reason to exclude even twistings is that the twisted spinor module would then fail to be well-defined as a representation of ^r(n), since [-1,-1] wouldn't act by the identity map. In order to obtain a notion of spinor bundles with even numbers of twistings, one needs to consider instead the alternative structure groups (n) ×(r) described in <cit.>. In the next proposition we describe explicitly the invariant n-twisted spin^ℍ spinors which realise the equality σ(ℍℙ^n,(n+1))=3 for n odd. Recall that, as noted in Remark <ref>, the auxiliary vector bundle E of the spin^ℍ structure corresponding to φ_1 can be seen as a subbundle of the endomorphism bundle (T ℍℙ^n). In particular, E inherits a natural connection ∇^E from the Levi-Civita connection ∇^g on T ℍℙ^n, and the former induces a connection ∇^g,E:=∇^g⊗(∇^E)^⊗ m on the spinor bundle Σ_4n,3^m ℍℙ^n for any odd m ≥ 1. Recall that {ξ_1,ξ_2,ξ_3}is a basis of 𝔰𝔭(1) ⊂𝔰𝔭(1) ⊕𝔰𝔭(n) = 𝔥, and denote by ( Φ_1 (ξ_1) 𝔪 , Φ_2 (ξ_2) 𝔪 , Φ_3 (ξ_3) 𝔪) the standard basis of the invariant rank-3 subspace of (𝔪) corresponding to E. The action of ξ_i on this subspace is given by Φ_i ↦ 0, Φ_j↦ 2Φ_k, Φ_k↦ -2Φ_j , where (i,j,k) is an even permutation of (1,2,3), and the spin lift of this representation acts on Σ_3≅^2 by the standard basis matrices for 𝔰𝔲(2): ρ(ξ_1):= [ i 0; 0 -i ] , ρ(ξ_2):= [ 0 1; -1 0 ], ρ(ξ_3):= [ 0 i; i 0 ] (these matrices are taken relative to the standard basis 1̂:=(1,0), ŷ_1:= (0,1) for Σ_3≅^2). In order to relate these to the usual presentation of the Lie algebra 𝔰𝔩(2,)≅𝔰𝔭(2,) we introduce H:=-iξ_1, X:=1/2(ξ_2-i ξ_3), Y:= - 1/2(ξ_2+iξ_3), so that ρ(H)=[ 1 0; 0 -1 ], ρ(X)= [ 0 1; 0 0 ] , ρ(Y)= [ 0 0; 1 0 ] act in the representation Σ_3≅^2 by the usual operators. Using this setup, we find: If n>1 is odd, the space of (n+1)-invariant n-twisted spin^ℍ spinors is spanned over by ψ:= ∑_j=0^n (-1)^jω^j ⊗ (ρ(Y)^n-j.1) , where 1:=1̂⊗n…⊗1̂. As in the proof of Proposition <ref>, we note that 𝒮:=(Σ_4n|_𝔥^)^𝔰𝔭(2n,)≃ V_n as modules for 𝔰𝔩(2,)≅𝔰𝔭(2,)⊂𝔥^; explicitly we have 𝒮 = _{ω^ℓ}_ℓ=0^n, with the action of 𝔰𝔭(2,)=_{ξ_1,ξ_2,ξ_3} by the formulas (<ref>)-(<ref>). It is clear from (<ref>) that 1∈Σ_3^⊗ n is a highest weight vector for 𝔰𝔭(2,), and that the 𝔰𝔭(2,)-submodule U(𝔰𝔭(2,)).1 that it generates[Here, U(𝔰𝔭(2,)) refers to the universal enveloping algebra of 𝔰𝔭(2,)] is isomorphic to V_n (since ρ(H) = diag[1,-1] acts on 1 by multiplication by n). Therefore, we have U(𝔰𝔭(2,)).1= _{ρ(Y)^k.1}_k=0^n. On the other hand, we see from (<ref>)-(<ref>) that 1=ω^0∈𝒮 is a highest weight vector and Y.ω^k= ω^k+1 for all k=0,…,n. In particular, the isomorphism 𝒯:𝒮→ U(𝔰𝔭(2,)).1 is given (up to rescaling) by 𝒯(ω^k) = ρ(Y)^k.1. 
We have 𝒯∈_𝔰𝔭(2,)(𝒮, Σ_3^⊗ n) ≃𝒮^*⊗Σ_3^⊗ n, and the invariant spin^ℍ spinor we are seeking is the corresponding element of 𝒮⊗Σ_3^⊗ n obtained via the musical isomorphism ♯: 𝒮^*≃𝒮 associated to the 𝔰𝔭(2,)-invariant symplectic form Ω(ω^j,ω^k) = (-1)^j j+k=n , 0 j+k≠ n on 𝒮. Defining ω^j∈𝒮^* by ω^j(ω^k)=δ_j,k, one sees that (ω^j)^♯= (-1)^j+1ω^n-j, and the result then follows by noting that 𝒯 = ∑_j=0^nω^j⊗ (ρ(Y)^j.1). The spinor in the statement of Theorem <ref> corresponds to the one in <cit.>, which the authors show to be pure. Finally, we give the differential equation satisfied by ψ. Recalling that ℍℙ^n=(n+1)/(1)×(n) is a symmetric space, and that the auxiliary bundle of the spin^ℍ structure under consideration is isomorphic to the rank-3 bundle spanned by the (locally-defined) endomorphisms Φ_1,Φ_2,Φ_3, the following is an immediate consequence of Proposition <ref>: For n ≥ 1 odd, the invariant n-twisted spin^ℍ spinor ψ in Proposition <ref> is parallel with respect to the invariant connection ∇^g,E:=∇^g ⊗(∇^E)^⊗ n. This spin^ℍ spinor encodes a well-known geometric fact – see <cit.>: The metric g_a on ℍℙ^n is quaternionic Kähler. §.§ Octonionic projective plane Consider now the octonionic projective plane, realised as a homogeneous space via 𝕆ℙ^2≅F_4(9) . A description of the isometric action of F_4 can be found e.g. in <cit.>, and, importantly, the isotropy representation is just the real spin representation of (9) on ℝ^16: (9) ^0_9,0≅_8,0≅_16( ℝ) . As in <cit.>, at the level of Lie algebras this inclusion is given by 𝔰𝔭𝔦𝔫(9) ^0_9,0 ψ_1≅_8,0ψ_2≅_0,6⊗_2,0ψ_3≅_4,0⊗_0,2⊗_2,0 ψ_4≅_0,2⊗_2,0⊗_0,2⊗_2,0ψ_5≅_2 (ℝ) ⊗ℍ⊗_2 (ℝ) ⊗ℍ ψ_6≅_2 (ℝ) ⊗_2 (ℝ) ⊗ℍ⊗ℍψ_7≅_4( ℝ) ⊗_4( ℝ) ψ_8≅_16( ℝ) . The algebra isomorphisms ψ_1, ψ_2, ψ_3, ψ_4, ψ_5 are given explicitly in <cit.>; ψ_6 is the obvious permutation of the second and third factors; ψ_7 is the tensor product of the Kronecker product of the first two factors and the isomorphism ℍ⊗_ℝℍ →_4( ℝ) q_1 ⊗ q_2 ↦( x ↦ q_1 · x ·q_2) , where x = ( x_1 , x_2, x_3, x_4 ) ∈ℝ^4 is thought of as the quaternion x_1 + i x_2 + j x_3 + k x_4; and ψ_8 is the Kronecker product. Letting {e_0, e_1 , … , e_8 } be the canonical basis of ℝ^9, a basis of 𝔰𝔭𝔦𝔫(9) is given by 𝔰𝔭𝔦𝔫(9)= _{ e_i · e_j }_0 ≤ i < j ≤ 8 . With a slight abuse of notation, we will denote by e_1, …, e_n the elements of the canonical basis of ℝ^n, for n=2,4,6,8, 16. Now we give the images in _16( ℝ) of each of the elements of our basis of 𝔰𝔭𝔦𝔫(9). For the first basis vector e_0· e_1, one computes, following the chain of maps in (<ref>): e_0 · e_1 ↦ e_0 · e_1 ↦ e_1 ↦ e_1 ⊗( e_1 · e_2 ) ↦ e_1 ⊗( e_1 · e_2 ) ⊗( e_1 · e_2 ) ↦ e_1 ⊗( e_1 · e_2 ) ⊗( e_1 · e_2 ) ⊗( e_1 · e_2 ) ↦[ 1 0; 0 -1 ]⊗ k ⊗[ 0 1; -1 0 ]⊗ k ↦[ 1 0; 0 -1 ]⊗[ 0 1; -1 0 ]⊗ k ⊗ k ↦[ 0 1 0 0; -1 0 0 0; 0 0 0 -1; 0 0 1 0 ]⊗[ 1 0 0 0; 0 -1 0 0; 0 0 -1 0; 0 0 0 1 ] ↦ -E^(16)_1,5 + E^(16)_2,6 + E^(16)_3,7 - E^(16)_4,8 + E^(16)_9,13 - E^(16)_10,14 - E^(16)_11,15 + E^(16)_12,16 . 
The others are computed similarly, giving: e_0 · e_2 ↦ -E^(16)_1,13 + E^(16)_2,14 + E^(16)_3,15 - E^(16)_4,16 + E^(16)_5,9 - E^(16)_6,10 - E^(16)_7,11 + E^(16)_8,12 ; e_0 · e_3 ↦ -E^(16)_1,7 - E^(16)_2,8 - E^(16)_3,5 - E^(16)_4,6 - E^(16)_9,15 - E^(16)_10,16 - E^(16)_11,13 - E^(16)_12,14 ; e_0 · e_4 ↦ E^(16)_1,6 + E^(16)_2,5 - E^(16)_3,8 - E^(16)_4,7 + E^(16)_9,14 + E^(16)_10,13 - E^(16)_11,16 - E^(16)_12,15 ; e_0 · e_5 ↦ -E^(16)_1,4 + E^(16)_2,3 + E^(16)_5,8 - E^(16)_6,7 - E^(16)_9,12 + E^(16)_10,11 + E^(16)_13,16 - E^(16)_14,15 ; e_0 · e_6 ↦ -E^(16)_1,8 + E^(16)_2,7 - E^(16)_3,6 + E^(16)_4,5 - E^(16)_9,16 + E^(16)_10,15 - E^(16)_11,14 + E^(16)_12,13 ; e_0 · e_7 ↦ -E^(16)_1,2 + E^(16)_3,4 - E^(16)_5,6 + E^(16)_7,8 - E^(16)_9,10 + E^(16)_11,12 - E^(16)_13,14 + E^(16)_15,16 ; e_0 · e_8 ↦ -E^(16)_1,3 - E^(16)_2,4 - E^(16)_5,7 - E^(16)_6,8 - E^(16)_9,11 - E^(16)_10,12 - E^(16)_13,15 - E^(16)_14,16 . The images of the other basis vectors e_i· e_j (1≤ i≤ 8) for 𝔰𝔭𝔦𝔫(9) are then determined by taking products of the above generators inside ^0_9,0 using the Clifford algebra identities e_i· e_j = (e_0· e_i)· (e_0· e_j). §.§.§ Invariant spin^r structures By Theorem <ref>, in order to understand F_4-invariant spin^r structures on 𝕆ℙ^2, we need to find all Lie group homomorphisms φ(9) →(r) such that σ×φ lifts to ^r(16) (of course, as the group (9) is simply connected, the lifting condition is automatically satisfied). As the Lie algebra 𝔰𝔭𝔦𝔫(9) ≅𝔰𝔬(9) is simple, for 1 ≤ r ≤ 8 the only homomorphism (9)→(r) is the trivial one. The corresponding F_4-invariant spin^r structures are just the ones in the lineage of the invariant spin structure. The first non-trivial case is r=9, where we have the covering homomorphism λ_9 (9) →(9). By essentially the same argument as in Proposition <ref>, there are only two Lie group homomorphisms (9) →(9) up to conjugation by elements of (9), namely the trivial one φ_0 and the double covering φ_1 λ_9, leading to two possible invariant spin^9 structures up to equivalence: The F_4-invariant spin^9 structures on 𝕆ℙ^2 are given by F_4 ×_ϕ_i^9(16) , i = 0,1 , where ϕ_i is the unique lift of σ×φ_i to ^9(16) and σ(9) →(16) is the isotropy representation. §.§.§ Invariant spin^r spinors As in the case of ℍℙ^n, the octonionic projective plane 𝕆ℙ^2 does not admit any invariant spinors (cf. the discussion before Prop. <ref>), so from this point forward we consider the non-trivial invariant spin^9 structure (i.e. the one corresponding to i=1 in the preceding theorem). In order to describe its twisted spin^9 spinors we first need a small lemma describing the decomposition of the spin lift of the isotropy representation as a direct sum of highest weight modules. This result may be found, written in a slightly different form and without proof, in <cit.>; we include a sketch of the proof here as the notation and formulas will be useful for subsequent discussion: (<cit.>) As modules for (9)^, the spin lift σ of the isotropy representation decomposes as Σ_16≃V(ω_1+ω_4)_Σ_16^-⊕V(ω_3) ⊕ V(2ω_1)_Σ_16^+, where V(μ) denotes the irreducible representation of highest weight μ and ω_i, i=1,2,3,4 denote the fundamental weights of 𝔰𝔭𝔦𝔫(9)^≅𝔰𝔬(9,). In order to take advantage of the explicit operators calculated in (<ref>)-(<ref>), we view 𝔰𝔭𝔦𝔫(9)^≅𝔰𝔬(9,) as the set of 9× 9 skew-symmetric matrices in 𝔤𝔩(9,). 
We take the (real form of the) Cartan subalgebra spanned by h_j:= -iE^(9)_2j-1,2j (j=1,2,3,4), together with the (positive) re-scaling of the Killing form such that the h_j's are orthogonal and unit length. Letting v_j:=h_j^♭, we have the simple roots α_1=v_1-v_2, α_2=v_2-v_3, α_3=v_3-v_4, α_4=v_4, and the corresponding fundamental weights ω_j:=2α_j/||α_j||^2 are given by ω_1=v_1, ω_2=v_1+v_2, ω_3=v_1+v_2+v_3, ω_4 = 1/2(v_1+v_2+v_3+v_4). The root vectors X_i:= X_α_i associated to the simple roots α_i are X_1 = E^(9)_1, 3 + E^(9)_2, 4 + i(-E^(9)_2, 3 + E^(9)_1, 4), X_2 = E^(9)_3, 5 + E^(9)_4, 6 + i(-E^(9)_4, 5 + E^(9)_3, 6), X_3 = E^(9)_5, 7 + E^(9)_6, 8 + i(-E^(9)_6, 7 + E^(9)_5, 8), X_4 = E^(9)_7, 9 - iE^(9)_8, 9, and the root vectors associated to the roots -α_i, i=1,2,3,4 are given by Y_i:=Y_α_i:=X_i. We note that this setup is slightly unusual[One usually chooses a different realization of the Lie algebra 𝔰𝔬(9,) in order to make the elements of the Cartan subalgebra diagonal matrices, but that realization is less convenient for our purposes here.] and can be found e.g. in <cit.>. Using the explicit formulas for σ from (<ref>)-(<ref>), we find that the Cartan subalgebra generators h_i and simple root vectors X_i act in the complexified isotropy representation 𝔪^≃Σ_9 by the operators σ(h_1) = -i/2( -e_1, 5 + e_2, 6 + e_3, 7 - e_4, 8 + e_9, 13 - e_10, 14 - e_11, 15 + e_12, 16) , σ(h_2) = -i/2( e_1, 11 - e_2, 12 - e_3, 9 + e_4, 10 + e_5, 15 - e_6, 16 - e_7, 13 + e_8, 14) , σ(h_3) =-i/2( e_1, 7 - e_2, 8 - e_3, 5 + e_4, 6 + e_9, 15 - e_10, 16 - e_11, 13 + e_12, 14 ), σ(h_4) =-i/2( -e_1, 7 - e_2, 8 + e_3, 5 + e_4, 6 - e_9, 15 - e_10, 16 + e_11, 13 + e_12, 14), and 2σ(X_1) = e_1, 3 - i e_1, 7 - i e_1, 9 - e_1, 13 - e_2, 4 - i e_2, 8 - i e_2, 10 + e_2, 14 - i e_3, 5 - i e_3, 11 + e_3, 15 - i e_4, 6 - i e_4, 12 - e_4, 16 + e_5, 7 + e_5, 9 - i e_5, 13 - e_6, 8 - e_6, 10 - i e_6, 14 - e_7, 11 - i e_7, 15 + e_8, 12 - i e_8, 16 - e_9, 11 - i e_9, 15 + e_10, 12 - i e_10, 16 - i e_11, 13 - i e_12, 14 - e_13, 15 + e_14, 16, 2σ(X_2) = -i e_1, 4 + e_1, 6 - e_1, 10 + i e_1, 16 - i e_2, 3 - e_2, 5 + e_2, 9 + i e_2, 15 + e_3, 8 - e_3, 12 - i e_3, 14 - e_4, 7 + e_4, 11 - i e_4, 13 - i e_5, 8 + i e_5, 12 - e_5, 14 - i e_6, 7 + i e_6, 11 + e_6, 13 - i e_7, 10 - e_7, 16 - i e_8, 9 + e_8, 15 - i e_9, 12 + e_9, 14 - i e_10, 11 - e_10, 13 + e_11, 16 - e_12, 15 - i e_13, 16 - i e_14, 15, 2σ(X_3) = -2 e_1, 3 - 2 i e_1, 5 - 2 i e_3, 7 + 2 e_5, 7 - 2 e_9, 11 - 2 i e_9, 13 - 2 i e_11, 15 + 2 e_13, 15 , 2σ(X_4) = i e_1, 4 + e_1, 6 - i e_2, 3 - e_2, 5 - e_3, 8 + e_4, 7 + i e_5, 8 - i e_6, 7 + i e_9, 12 + e_9, 14 - i e_10, 11 - e_10, 13 - e_11, 16 + e_12, 15 + i e_13, 16 - i e_14, 15. Considering the action of the lifts σ(h_i), σ(X_i) ∈𝔰𝔭𝔦𝔫(16)^ in the spin representation, a straightforward calculation using computer algebra software yields three linearly independent joint eigenvectors for the σ(h_i) which are simultaneously annihilated by the action of each σ(X_i) (i.e. highest weight vectors). The corresponding weights are 1/2(3v_1 +v_2+v_3+v_4) = ω_1 + ω_4, v_1+v_2+v_3 = ω_3 , 2v_1 = 2ω_1 , and the assertion that Σ_16^- ≃ V(ω_1+ω_4) and Σ_16^+ ≃ V(ω_3)⊕ V(2ω_1) may be deduced from <cit.>. Note that the preceding lemma immediately recovers the fact that the invariant spin structure carries no invariant spinors. 
It also allows us to readily describe the smallest twisting for which 𝕆ℙ^2 admits invariant twisted spin^9 spinors: The F_4-invariant spinor type of 𝕆ℙ^2 is σ(𝕆ℙ^2,F_4)=9, and the twisting of the spinor bundle which realises this is m=3. Furthermore, the space of invariant 3-twisted spin^9 spinors has complex dimension 4. First, we recall that every representation of 𝔰𝔬(9,) is self-dual (see e.g. <cit.>). Therefore, using a similar argument as in the proof of Proposition <ref>, and in light of the preceding lemma, it suffices to show that Σ_9^⊗ 3 is the smallest odd tensor power of Σ_9≃ V(ω_4) which contains a copy of V(ω_1+ω_4), V(ω_3), or V(2ω_1). It is easily verified using e.g. the LiE software package <cit.> that Σ_9^⊗ 3 ≃ 5 V(ω_4) ⊕ V(3ω_4) ⊕ 2V(ω_3+ω_4)⊕ 3V(ω_2+ω_4) ⊕ 4V(ω_1+ω_4 ) . Finally, using self-duality, it follows from (<ref>) and (<ref>) that _ (Σ_16,9^3)_ = _ (Σ_16⊗Σ_9^⊗ 3)^(9) = _Hom_(9)(Σ_16 , Σ_9^⊗ 3) = 4. Now we examine more closely the invariant 3-twisted spin^9 spinors from the preceding theorem. This 4-dimensional space corresponds to the pairings of the 4 copies of V(ω_1+ω_4) in (<ref>) with the single copy in (<ref>), so in order to obtain formulas for the spinors we first need to clarify the algebraic structure of this representation. With all notation as above, one finds using computer algebra software an explicit highest weight vector w_0 (unique up to scaling) generating Σ_16^- ≃ V(ω_1+ω_4)⊆Σ_16, and one may verify furthermore that any other weight vector can be obtained from w_0 by applying at most 18 lowering operators Y_i, i=1,2,3,4. Writing Y_ℐ := Y_i_1.Y_i_2… Y_i_k for a multi-index ℐ={i_1,…,i_k}, one possible minimal choice of multi-indices ∪_k=0^18{ℐ_k,ℓ}_ℓ=1^μ_k generating V(ω_1+ω_4) is given in Table <ref>, where μ_k denotes the number of k-multi-indices in the generating set. In what follows we describe explicitly the invariant spinors, using a more sophisticated version of the technique from the proof of Proposition <ref>: A basis for the space of _4-invariant 3-twisted spin^9 spinors on ^2 is given by ψ_p:= ∑_k=0^18∑_ℓ=1^μ_k (Y_ℐ_k,ℓ.w_0)^♯⊗ (Y_ℐ_k,ℓ.w_p ) , p=1,2,3,4, where ♯Σ_16^* →Σ_16 is the musical isomorphism, w_p (p=1,2,3,4) denote highest weight vectors for the four copies of Σ_16^- inside Σ_9^⊗ 3, the indices ℐ_k,ℓ are as in Table <ref>, and for any (Y_ℐ_k,ℓ.w_0 )∈Σ_16^- ⊆Σ_16 we denote by Y_ℐ_k,ℓ.w_0 ∈Σ_16^* the corresponding dual map sending Y_ℐ_k',ℓ'.w_0↦δ_k,k'δ_ℓ,ℓ' and Σ_16^+↦ 0. From the preceding discussion and Table <ref> we have the highest weight vector w_0 for Σ_16^- ⊆Σ_16, together with explicit sequences of lowering operators Y_i generating this subrepresentation. Altogether this gives four 𝔰𝔭𝔦𝔫(9)^-module isomorphisms T_p: Σ_16^- →Σ_9^⊗ 3, p=1,2,3,4 defined by T_p : Y_ℐ_k,ℓ.w_0 ↦ Y_ℐ_k,ℓ.w_p, (k=0,…, 18, ℓ = 1,…, μ_k), where the ℐ_k,ℓ are as in Table <ref> and we use the convention Y_∅=. By abuse of notation, we also denote by T_p the extensions to all of Σ_16 by Σ_16^+↦ 0. The spinors ψ_p are the images of the T_p under the 𝔰𝔭𝔦𝔫(9)^-module isomorphism (Σ_16^-)^* ⊗Σ_9^⊗ 3≃Σ_16^- ⊗Σ_9^⊗ 3, which are precisely given by (<ref>). Finally, we give the differential equation satisfied by the spinors from Theorem <ref>. To begin, we need to first specify a connection on the vector bundle 𝒜 associated to the auxiliary (9)-bundle of the spin^9 structure. Note that 𝒜 is associated to the principal (9) bundle F_4→ F_4/(9) by the composition of the covering map λ_9: (9)→(9) with the standard representation ρ_: (9) →(^9). 
Indeed, there is a natural choice of invariant connection defined on 𝒜 as follows. The structure of 𝔪≃Σ_9^ as a Clifford module for _9 gives 9 linearly independent endomorphisms, corresponding to Clifford multiplication by an orthonormal set of basis vectors for ^9. By slightly modifying the Clifford multiplication (see <cit.>), one obtains endomorphisms T_i: 𝔪→𝔪, i=1,…, 9 satisfying the modified Clifford relations T_i ∘ T_j + T_j∘ T_i = 2δ_i,j , T_i^* = T_i, T_i =0 for i=1,…,9. In this description, the isotropy image σ ((9)) ⊆(𝔪) ⊆End(𝔪) coincides with the normaliser of the 9-dimensional subspace 𝒯:= _{T_1,…, T_9}⊆End(𝔪) <cit.>: σ((9)) = { g∈(𝔪): g𝒯g^-1 = 𝒯}. The 9-dimensional (9)-representation 𝒯 (action via conjugation) is isomorphic to ρ_∘λ_9 (since there is only one irreducible real 9-dimensional representation of (9) up to isomorphism), hence we have 𝒜≅ F_4 ×_(9)𝒯⊆ F_4 ×_(9)(𝔪) ≅( T(𝕆ℙ^2)). In particular, 𝒜 naturally inherits a connection ∇^ from the extension of the Levi-Civita connection to the endomorphism bundle. In light of Proposition <ref>, we finally see that the invariant twisted spinors found above are parallel: The 4-dimensional space of F_4-invariant 3-twisted spin^9 spinors on 𝕆ℙ^2 is spanned by parallel spinors for the connection ∇^g,:= ∇^g ⊗ (∇^)^⊗ 3. toc § ACKNOWLEDGEMENTS The authors are grateful to Travis Schedler for his contributions to the representation-theoretical aspect of the paper, and to Marie-Amélie Lawn for her comments and fruitful discussions. D. Artacho is funded by the UK Engineering and Physical Sciences Research Council (EPSRC), grant EP/W5238721. J. Hofmann was supported by the Engineering and Physical Sciences Research Council [EP/L015234/1, EP/W522429/1]; the EPSRC Centre for Doctoral Training in Geometry and Number Theory (The London School of Geometry and Number Theory: University College London, King's College London, and Imperial College London); and a DAAD Short Term Research Grant for a research stay at Philipps-Universität Marburg. alphaurl
http://arxiv.org/abs/2406.19355v1
20240627173312
A class of ghost-free theories in symmetric teleparallel geometry
[ "Antonio G. Bello-Morales", "Jose Beltrán Jiménez", "Alejandro Jiménez Cano", "Antonio L. Maroto", "Tomi S. Koivisto" ]
gr-qc
[ "gr-qc", "astro-ph.CO" ]
http://arxiv.org/abs/2406.18201v1
20240626093351
EFCNet: Every Feature Counts for Small Medical Object Segmentation
[ "Lingjie Kong", "Qiaoling Wei", "Chengming Xu", "Han Chen", "Yanwei Fu" ]
eess.IV
[ "eess.IV", "cs.CV" ]
§ ABSTRACT This paper explores the segmentation of very small medical objects with significant clinical value. While Convolutional Neural Networks (CNNs), particularly UNet-like models, and recent Transformers have shown substantial progress in image segmentation, our empirical findings reveal their poor performance in segmenting the small medical objects and lesions concerned in this paper. This limitation may be attributed to information loss during their encoding and decoding process. In response to this challenge, we propose a novel model named EFCNet for small object segmentation in medical images. Our model incorporates two modules: the Cross-Stage Axial Attention Module (CSAA) and the Multi-Precision Supervision Module (MPS). These modules address information loss during encoding and decoding procedures, respectively. Specifically, CSAA integrates features from all stages of the encoder to adaptively learn suitable information needed in different decoding stages, thereby reducing information loss in the encoder. On the other hand, MPS introduces a novel multi-precision supervision mechanism to the decoder. This mechanism prioritizes attention to low-resolution features in the initial stages of the decoder, mitigating information loss caused by subsequent convolution and sampling processes and enhancing the model's global perception. We evaluate our model on two benchmark medical image datasets. The results demonstrate that EFCNet significantly outperforms previous segmentation methods designed for both medical and normal images. § INTRODUCTION Small medical objects, like HyperReflective Dots (HRDs) observed on Optical Coherence Tomography (OCT), are frequently encountered in disease research. Various studies <cit.> have confirmed the significant relevance of these small lesions to medical diagnosis and treatment. However, the manual labeling of these small objects in medical images is a time-consuming and labor-intensive task, representing a substantial drain on medical resources. Consequently, there is a pressing need to automate the segmentation of small medical objects using computer vision algorithms. Given the type of task, one naturally recalls the many established image segmentation models such as U-Net <cit.>, ResUNet <cit.>, DenseUNet <cit.>, ResUNet++ <cit.>, TransFuse <cit.> and Swin-Unet <cit.>, as well as the recent SAM <cit.>. Given their generally strong performance, a natural question is: can these methods solve small medical object segmentation? Unfortunately, our experiments show that these well-known methods perform poorly on this specific task. Moreover, research specifically dedicated to the segmentation of small medical objects is lacking.
Identified issues with previous segmentation methods reveal significant information loss, as highlighted in two points: 1) Earlier methods <cit.> often use features from the preceding stage in the decoder and the corresponding stage in the encoder, limiting the direct use of features for each decoding stage. This contradicts previous studies <cit.> indicating improved segmentation accuracy with both shallow and deep features and results in untapped information from various encoder stages. 2) Many prior approaches <cit.> employ only one segmentation head for supervision at the last decoder stage. In contrast, studies <cit.> highlight the strong global perception of low-resolution features in early decoding stages, valuable for small object localization. However, information in these early decoder stages is somewhat lost during convolution and upsampling processes. Additionally, small medical objects, compared to standard-sized objects, carry less information, intensifying the impact of information loss on segmentation accuracy. Some samples of small medical objects are illustrated in Fig. <ref>. Notably, the SAM model <cit.> performs inadequately in addressing this segmentation task. To address the challenge of information loss during segmentation, we introduce a novel solution based on the encoder-decoder structure. Our model meticulously attends to all features in each stage of both the encoder and decoder, enhancing the accuracy of small medical object segmentation. Figure <ref> illustrates the distinctions between our approach and prior methods. Specifically, for the encoder, we present the Cross-Stage Axial Attention Module (CSAA), leveraging the attention mechanism to integrate features from all stages. This adaptation enables the model to dynamically learn information necessary for each decoding stage. CSAA facilitates direct reference to all valuable information in the encoder during the decoding process of each stage, mitigating information loss in the encoder. Simultaneously, we introduce the Multi-Precision Supervision Module (MPS) for the decoder. This module adds segmentation heads with varying precision for supervision after each stage in the decoder. Low-precision segmentation heads focus on low-resolution features, temporarily overlooking local details to leverage their robust global perception. With MPS, the model effectively exploits information from each stage in the decoder, reducing information loss in this part of the model. To validate the efficacy of the proposed method, we carry out comprehensive experiments on two datasets S-HRD and S-polyp, as illustrated in Fig. <ref>. These datasets comprise a fundus Optical Coherence Tomography (OCT) image dataset created by our team and a subset from CVC-ClinicDB <cit.>. Experimental results on these two datasets demonstrate that our method outperforms previous state-of-the-art models in terms of dice similarity coefficient (DSC) and intersection over union (IoU). Our contributions are outlined as follows: 1. Innovative Segmentation Approach: We introduce a novel concept to address the challenge of small medical object segmentation. Emphasizing the significance of every feature in medical images, our model meticulously attends to all features at each stage. This approach enables the extraction of diverse information, thereby mitigating information loss associated with small medical objects. 2. 
Proposed Modules for Enhanced Accuracy: We devise two key modules, namely the Cross-Stage Axial Attention Module (CSAA) and the Multi-Precision Supervision Module (MPS). These modules effectively tackle information loss in the encoder and decoder, respectively, resulting in an improved segmentation accuracy for the model. 3. Benchmark Construction and Model Validation: We establish a new benchmark for evaluating small medical object segmentation. Through experiments on two datasets focusing on small medical objects, our model significantly outperforms previous state-of-the-art models. This demonstrates the robustness and superiority of our proposed approach in this challenging domain. § RELATED WORKS Medical Image Segmentation. Recently, researchers have introduced several innovative methods <cit.> for semantic segmentation in medical images. Zhang  <cit.> proposed a unique hybrid structure that concurrently integrates CNN and Transformer, leading to a reduction in the loss of low-level details. Chen  <cit.> incorporated a Transformer module into the U-Net encoder, enhancing the model's ability for long-range modeling. Wang  <cit.> addressed overfitting by employing a Transformer encoder and introduced the progressive locality decoder to improve local information processing in medical images. While these methods have significantly contributed to medical image segmentation, they often fall short in accounting for the impact of object size on segmentation results. Particularly, these models tend to underperform when confronted with the segmentation of small objects. Lou  <cit.> recognized the significance of considering the size of medical objects in the segmentation process, and introduced a Context Axial Reverse Attention module (CaraNet) to assist the model in detecting local information related to small medical objects. However, the use of bilinear interpolation in the decode stage of CaraNet leads to substantial information loss, significantly affecting the segmentation of small medical objects. In contrast to the aforementioned methods, our model addresses the issue of information loss in small medical objects through CSAA and MPS, effectively improving segmentation accuracy. General Segmentation Model. Recently, Kirillov  <cit.> introduced the Segment Anything Model (SAM), a versatile segmentation model that has made significant strides in the realm of natural image segmentation. Despite its success in general applications, SAM proves unsuitable for many medical image segmentation tasks due to the intricate structures and complex boundaries present, particularly in cases involving small medical objects, without manual guidance <cit.>. In response to these limitations, Ma  <cit.> devised MedSAM as an adaptation of SAM tailored specifically for medical image segmentation. MedSAM exhibits notable advancements in handling medical image segmentation tasks compared to SAM <cit.>. Nevertheless, even with these improvements, MedSAM <cit.> still struggles when tasked with the segmentation of small medical objects concerned in this paper. Attention Mechanism. Numerous attention-based methods <cit.> have emerged in recent years, applied to diverse tasks in computer vision and natural language processing. Vaswani  <cit.> broke away from the conventional convolutional structure and introduced the Transformer, a novel architecture utilizing attention mechanisms. Woo  <cit.> enhanced convolutional neural networks by incorporating both Channel Attention and Spatial Attention. 
Zhang  <cit.> proposed the Pyramid Squeeze Attention Module (PSA) to enable the model to capture spatial information across different channels. Building upon the insights gained from the aforementioned attention-based methods, we introduce the Cross-Stage Axial Attention Module (CSAA). This module facilitates feature fusion and minimizes information loss within the model. § METHOD We introduce small medical object segmentation and related notations. In Sec.<ref>, we outline the overall structure of EFCNet. We detail the CSAA in Sec. <ref> and the MPS in Sec. <ref>. Lastly, we discuss our loss function in Sec.<ref>. Problem Setup and Notations. In our segmentation task, we denote the medical picture dataset as X = {x_1, ..., x_m | x_i∈ℝ^C× H× W, i=1,2,..,m }. Doctors meticulously and manually annotate lesions in each image x_i, forming the ground truth set Y = {y_1, ..., y_m | y_i∈{0,1}^1× H× W, i=1,2,..,m }. The complete dataset is denoted as D={X, Y}, which is splitted into a training set D_train={X_train, Y_train} and a testing set D_test = {X_test, Y_test}. Our objective is to develop an algorithm that empowers our model to effectively segment small medical objects from D_train and demonstrate robust performance on D_test. §.§ Overall Architecture We propose a novel method called EFCNet to address the challenge of segmenting small medical objects. Specifically, we have designed two modules, the Cross-Stage Axial Attention Module (CSAA) and the Multi-Precision Supervision Module (MPS), to ensure that the model focuses on small medical object infomration in both the encoder and decoder. In Fig. <ref>(a), an image containing small medical objects serves as input to our model. Initially, the image undergoes encoding through k stages, producing feature maps encompassing diverse information about small medical objects. Subsequently, the CSAA Module processes features from all encoder stages, compelling the model to adaptively learn pertinent information for the decoding phase. The decoder then sequentially processes these feature maps under the guidance of the CSAA Module. Lastly, the feature maps from all decoder stages enter the MPS Module to yield multi-precision prediction results, receiving separate supervisions. During testing, we utilize the segmentation output from the last decoder stage as the final result of our model. §.§ CSAA Module Information about small medical objects is dispersed across different encoder stages, each containing varied types of data. However, much of this information is not directly usable by the decoder, and some is lost during the convolution and downsampling processes. To reduce information loss in the encoder and fully leverage insights about small medical objects, we present a novel Cross-Stage Axial Attention Module (CSAA). This module adaptively learns from features in all encoder stages and subsequently guides the decoding process. As depicted in Fig. <ref>(b), CSAA have four steps: resizing, W-dimensional axial attention, H-dimensional axial attention and resizing back. Resizing. To enhance the fusion of features from all encoder stages, we resize all feature maps in each encoder stage to (C^*,H^*,W^*), adjusting both spatial and channel dimensions through convolution operations: f_i^* = σ (BN(conv_i^e(f_i^e))), i=1,2,..,k, where f_i^e represents the feature map in stage i of the encoder; σ and BN denote ReLU and Batch Normalization respectively; and k is the number of stages in the encoder and decoder. 
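As a concrete illustration of this resizing step, the PyTorch sketch below projects each encoder feature map to a common shape (C^*, H^*, W^*). It is ours, not the authors' implementation: the choice of bilinear interpolation followed by a 1×1 convolution, the module name, and the toy stage shapes are assumptions, since the text only states that convolutions adjust both spatial and channel dimensions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CSAAResize(nn.Module):
    """Project the k encoder feature maps to a common shape (C*, H*, W*).

    Spatial resizing is done with bilinear interpolation followed by a
    1x1 conv + BN + ReLU, matching the conv/BN/ReLU pattern of the equation
    above; the interpolation step is one plausible realization.
    """
    def __init__(self, in_channels, c_star=64, hw_star=(44, 44)):
        super().__init__()
        self.hw_star = hw_star
        self.proj = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c, c_star, kernel_size=1),
                          nn.BatchNorm2d(c_star),
                          nn.ReLU(inplace=True))
            for c in in_channels])

    def forward(self, feats):       # feats: list of k tensors (B, C_i, H_i, W_i)
        return [p(F.interpolate(f, size=self.hw_star, mode="bilinear",
                                align_corners=False))
                for p, f in zip(self.proj, feats)]

# toy usage: four encoder stages of a 352x352 input (illustrative shapes)
feats = [torch.randn(2, c, s, s) for c, s in [(64, 176), (128, 88), (256, 44), (512, 22)]]
resized = CSAAResize([64, 128, 256, 512])(feats)
assert all(t.shape == (2, 64, 44, 44) for t in resized)
```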
We consider the k-stage resized feature maps {f_i^*}_i=1^k as the input for the subsequent W-dimensional axial attention step. W-Dimensional CSAA. We firstly generate Query {Q_i,w}_i=1^k, Key {K_i,w}_i=1^k, Value {V_i,w}_i=1^k based on the feature maps {f_i^*}_i=1^k in the width (W) dimension: Q_i,w = W_i^Q(f_i,w^*),i=1,2,..,k, K_i,w = W_i^K(f_1,w^*,f_2,w^*,...,f_k,w^*),i=1,2,..,k, V_i,w = W_i^V(f_1,w^*,f_2,w^*,...,f_k,w^*),i=1,2,..,k, where {W_i^Q}_i=1^k, {W_i^K}_i=1^k, {W_i^V}_i=1^k represent the weight matrix used to generate {Q_i,w}_i=1^k, {K_i,w}_i=1^k, {V_i,w}_i=1^k respectively; k is the number of stages in the encoder and decoder; and f_i,w^* denotes the feature f_i^* in width dimension. Equation <ref> shows that K_i,w and V_i,w merge the information from all stages of the encoder in width dimension. Next, we get the output {f_i^w}_i=1^k of W-Dimensional Axial-Attention by f_i^w=Softmax(Q_i,wK_i,w^T/√(C^*H^*))V_i,w ,i=1,2,..,k. H-Dimensional CSAA. Similarly above, we firstly generate Query {Q_i,h}_i=1^k, Key {K_i,h}_i=1^k, Value {V_i,h}_i=1^k based on the feature maps {f_i^w}_i=1^k in the height (H) dimension: Q_i,h = W_i^Q(f_i,h^w),i=1,2,..,k, K_i,h = W_i^K(f_1,h^w,f_2,h^w,...,f_k,h^w),i=1,2,..,k, V_i,h = W_i^V(f_1,h^w,f_2,h^w,...,f_k,h^w),i=1,2,..,k, where {W_i^Q}_i=1^k, {W_i^K}_i=1^k, {W_i^V}_i=1^k represent the weight matrix used to generate {Q_i,h}_i=1^k, {K_i,h}_i=1^k, {V_i,h}_i=1^k respectively; k is the number of stages in the encoder and decoder; and f_i,h^w denotes the feature f_i^w in height dimension. Through Eq. <ref> and Eq. <ref>,we merge the information from all stages of the encoder in both width dimension and height dimension. Next, we obtain the output {f_i^h}_i=1^k of H-Dimensional Axial-Attention through the attention operation: f_i^h=Softmax(Q_i,hK_i,h^T/√(C^*W^*))V_i,h ,i=1,2,..,k. Resizing back. To aid the guidance of the decoding process with the information acquired through axial attention, we resize the output feature f_i^h of the two-step axial attention to match the dimensions of the feature map in the corresponding decoding stage i, adjusting both spatial and channel dimensions through convolution operations, denoted as {f_i^attn∈ℝ^C_i× H_i× W_i}_i=1^k, f_i^attn = σ (BN(conv_i^h(f_i^h))), i=1,..,k, where σ and BN denote ReLU and Batch Normalization respectively; and k is the number of stages in the encoder and decoder. Finally, we concatenate f_i^attn to the feature map of the corresponding stage in the decoder along the channel dimension. It is noteworthy that traditional two-dimensional attention mechanism demands substantial computing resources <cit.>. To address this, we employ two-stage one-dimensional attention modules in CSAA, conducting attention processing sequentially on the feature maps in the width and height dimensions. The CSAA significantly aids the model in extracting information about small medical objects from the encoder and appropriately allocates it to the corresponding stage of the decoder. Unlike prior models, our decoding process for each stage in the decoder is influenced by segmentation-related information gleaned from all stages of the encoder, facilitated by CSAA. Through CSAA, our model achieves feature fusion in the encoder, and reinforces the linkage between the encoder and decoder. §.§ MPS Module Low-resolution features in the decoder possess robust global perception, enhancing the model's performance in small medical object segmentation. 
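Before turning to the decoder, the two axial-attention steps described above can be sketched as follows. This is our own illustrative sketch: how the projections W^Q, W^K, W^V are realised and how the k stages are concatenated when forming the cross-stage keys and values are assumptions made only to obtain runnable code.

```python
import torch
import torch.nn as nn

class AxialCrossStageAttention(nn.Module):
    """W-dimensional (axis='W') or H-dimensional (axis='H') cross-stage attention.

    Each resized stage feature f_i^* of shape (B, C*, H*, W*) is flattened so that
    the chosen axis indexes the tokens; queries come from stage i alone, while keys
    and values are built from the concatenation of all k stages. Plain nn.Linear
    maps are used for W^Q, W^K, W^V (an assumption).
    """
    def __init__(self, c_star, h_star, w_star, k_stages, axis="W"):
        super().__init__()
        self.axis = axis
        d = c_star * (h_star if axis == "W" else w_star)   # feature size per token
        self.q = nn.ModuleList([nn.Linear(d, d) for _ in range(k_stages)])
        self.k = nn.ModuleList([nn.Linear(d, d) for _ in range(k_stages)])
        self.v = nn.ModuleList([nn.Linear(d, d) for _ in range(k_stages)])
        self.scale = d ** -0.5                              # matches 1/sqrt(C*H*) or 1/sqrt(C*W*)

    def _tokens(self, f):                                   # (B, C, H, W) -> (B, tokens, d)
        if self.axis == "W":
            return f.permute(0, 3, 1, 2).flatten(2)         # tokens indexed by W
        return f.permute(0, 2, 1, 3).flatten(2)             # tokens indexed by H

    def _untokens(self, t, shape):
        B, C, H, W = shape
        if self.axis == "W":
            return t.view(B, W, C, H).permute(0, 2, 3, 1)
        return t.view(B, H, C, W).permute(0, 2, 1, 3)

    def forward(self, feats):                               # list of k tensors (B, C*, H*, W*)
        toks = [self._tokens(f) for f in feats]
        all_toks = torch.cat(toks, dim=1)                   # cross-stage keys/values
        out = []
        for i, f in enumerate(feats):
            Q = self.q[i](toks[i])
            K = self.k[i](all_toks)
            V = self.v[i](all_toks)
            A = torch.softmax(Q @ K.transpose(1, 2) * self.scale, dim=-1)
            out.append(self._untokens(A @ V, f.shape))
        return out

# toy usage with small dimensions to keep the example light
attn_w = AxialCrossStageAttention(c_star=8, h_star=16, w_star=16, k_stages=4, axis="W")
attn_h = AxialCrossStageAttention(c_star=8, h_star=16, w_star=16, k_stages=4, axis="H")
fused = attn_h(attn_w([torch.randn(2, 8, 16, 16) for _ in range(4)]))
assert fused[0].shape == (2, 8, 16, 16)
```

Returning to the low-resolution decoder features just mentioned: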
However, in prior models such as <cit.>, the globally perceptual information is not fully harnessed; and a significant amount of useful information is lost in the subsequent convolution and upsampling processes. To tackle this issue, we introduce the Multi-Precision Supervision Module (MPS) to extract information from low-resolution features in the decoder and diminish its loss in the ensuing decoding process. Specifically, MPS consists of two steps: segmentation and upsampling. Segmentation. To thoroughly extract information about small medical objects from each stage of the decoder, we individually feed feature maps from each decoder stage into corresponding segmentation heads. This process yields segmentation results with distinct resolutions, denoted as {P_i∈ℝ^C_i× H_i× W_i}_i=1^k, P_i = S(σ (BN(conv_i^d(f_i^d)))), i=1,..,k, where f_i^d represents the feature map in stage i of the decoder; σ and BN denote ReLU and Batch Normalization respectively; and S(.) represents the sigmoid function. Upsampling. We employ neighbor interpolation method to upsample the segmentation results obtained in the previous step to match the size same of the ground truth image. This process enables us to achieve multi-precision segmentation {M_i}_i=1^k for small medical objects, as illustrated in Fig. <ref>(a). M_i = Upsample(P_i)∈ℝ^C × H × W, i=1,2,..,k. We oversee the segmentation results of different precision with the ground truth label, ensuring that each stage of the decoder encompasses sufficient information to facilitate the segmentation of small medical objects. In MPS, we formulate a supervision strategy with varying precision for features of different resolutions. Recognizing that low-resolution features possess robust global perception but lack local details, we employ low-precision supervision for them. This decision is made to temporarily forego the emphasis on local details while capitalizing on the strengths of low-resolution features with powerful global perception. This multi-precision supervision approach preserves the advantages of the conventional single-segmentation head while preserving additional global perception from low-resolution features. Consequently, it enhances the model's performance in small medical object segmentation. §.§ Loss function Considering that the positive and negative pixels are extremely unbalanced in small medical object segmentation tasks, we adopt a combination of DiceLoss <cit.> and Binary Cross Entropy(BCE) Loss during the training process. The loss for the segmentation maps produced by each stage of the decoder is set as follows. ℒ_i = λ_1·ℒ_Dice (M_i, Y) + λ_2·ℒ_BCE (M_i, Y), where i=1,2,..,k denotes stage indexes; M_i represents the result predicted by the model at stage i; and Y represents the ground truth with the hyperparameters λ_1 and λ_2 to balance DiceLoss and BCELoss. Taking into account all segmentation results output by each stage of the decoder, the total loss of the model is ℒ_total = ∑_i=1^kα_i·ℒ_i, where hyperparameter {α _i}_i=1^k leverage the losses of segmentation results with different precision. § EXPERIMENT S-HRD Dataset. We have gathered a dataset comprising 313 optical coherence tomography (OCT) images from patients with macular edema, with the objective of segmenting small HyperReflective Dots (small HRDs) within them. We refer to this dataset as S-HRD, where 'S' indicates 'Small'. In S-HRD, the area of each lesion is less than 1 percent of the entire image size. 
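Before continuing with the datasets, the training objective defined above can be sketched in a few lines. This is our own sketch: the pairing of the α_i weights with decoder stages is an assumption, and the default hyperparameter values are taken from the experimental setup reported below.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for probabilities in [0, 1] and binary targets of shape (B, 1, H, W)."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (union + eps)).mean()

def multi_precision_loss(stage_probs, target, lam1=0.7, lam2=0.3,
                         alphas=(1.0, 0.9, 0.8, 0.7)):
    """Total loss over the k decoder stages: sum_i alpha_i * (lam1*Dice + lam2*BCE).

    stage_probs[i] is the upsampled sigmoid output M_i of one decoder stage;
    which alpha goes with which stage is an assumption here.
    """
    total = 0.0
    for a, p in zip(alphas, stage_probs):
        total = total + a * (lam1 * dice_loss(p, target)
                             + lam2 * F.binary_cross_entropy(p, target))
    return total

# toy usage
gt = (torch.rand(2, 1, 352, 352) > 0.99).float()         # sparse, small foreground
preds = [torch.rand(2, 1, 352, 352) for _ in range(4)]   # M_1..M_4 after upsampling
loss = multi_precision_loss(preds, gt)
```

Back to the S-HRD dataset: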
All the ground truths have been manually labeled by experienced eye doctors with over ten years of expertise. We have ensured that appropriate consent has been obtained for the utilization and presentation of images in our research. For further insights into the data collection process, privacy considerations and relevant medical knowledge, please refer to the Supplementary. S-Polyp Dataset. We build a small-polyp segmentation dataset by excluding images with sizable medical lesions in CVC-ClinicDB <cit.>. From this selection, we retain 229 images where all lesions are small medical objects. We label this dataset as S-Polyp. In S-Polyp, the area of each lesion is less than 5 percent of the entire image size. Examples of both S-HRD and S-Polyp are illustrated in Fig. <ref>. To mitigate limitations and address the specificity of the two datasets, we employ a five-fold cross-validation approach to assess the performance of our model. Definition of Small Medical Objects. Given the absence of a consistent definition for small medical objects in previous works, we establish our own criteria. In our framework, an object is considered a small medical object if the ratio of its pixel count n to the total pixel count N in the entire image is less than 5%. For objects with a ratio below 1%, we classify them as extremely small medical objects. In the S-Polyp dataset, all objects fall under the category of small medical objects, while in S-HRD, all objects are classified as extremely small medical objects. Evaluation Metrics. We use two common metrics to compare our model with previous state-of-the-art models. Dice Similariy Coefficient (DSC) is defined as: DSC =2 × |P∩ G|/|P|+|G|, where P represents the area of the predicted label and G represents the area of the ground truth. Intersection over Union (IoU) is defined as: IoU =S_i/S_u, where S_i represents the area where the predicted label and ground truth overlap; and S_u for the total area of the two. Implementation Details. We conduct our experiments using one NVIDIA RTX A6000 GPU equipped with 48GB of memory. The SGD optimizer is employed with an initial learning rate of 0.01. Our training spans 200 epochs, employing a batch size of 4. All input images are resized uniformly to 352×352. The model configuration includes 4 stages in both the encoder and decoder. We set λ_1 and λ_2 to 0.7 and 0.3 respectively to balance DiceLoss and BCELoss. And α_1, α_2, α_3, α_4 are set to 1.0, 0.9, 0.8,0.7 respectively to balance losses of muti-precision segmentation results. Competitors. Given the limited number of works dedicated specifically to small medical object segmentation, we reference recent models from the broader field of medical image segmentation, including state-of-the-art models: (1) CNN based methods: U-Net <cit.>, Attention-UNet <cit.>, MSU-Net <cit.>, CaraNet <cit.>; (2) Transformer based methods: TransFuse <cit.>, TransUNet <cit.>, SSFormer <cit.>, Swin-UNet <cit.>; (3) Segment Anything Model (SAM) <cit.> and the related works: SAM without any prompt, SAM with point, SAM with box and MedSAM <cit.>. (4) Additionally, we assess the performance of an enlarged version of U-Net (U-Net-Large), where both the encoder and decoder are scaled up from 4 layers to 12 layers. This exploration aims to understand the impact of model size on segmentation accuracy. §.§ Quantitative Results As shown in Tab. <ref>, our model consistently outperforms previous state-of-the-art models across all folds for both S-HRD and S-Polyp, measured by DSC and IoU. 
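For completeness, both metrics can be computed from binary masks in a few lines (a NumPy sketch of ours; the small epsilon only guards against empty masks):

```python
import numpy as np

def dsc(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-9)

def iou(pred, gt):
    """Intersection over union between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + 1e-9)

pred = np.zeros((352, 352), bool); pred[100:110, 100:112] = True
gt   = np.zeros((352, 352), bool); gt[102:112, 101:113] = True
print(f"DSC={dsc(pred, gt):.3f}, IoU={iou(pred, gt):.3f}")
```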
On S-HRD, our model demonstrates a noteworthy improvement of 4.88% in DSC and 3.77% in IoU compared to earlier methods. Similarly, on S-Polyp, our model exhibits a performance boost of 3.49% in DSC and 3.25% in IoU. Table <ref> shows that methods tend to perform poorly on S-HRD. That is because the objects in S-HRD are smaller in size compared with S-Polyp, which means that there is less information available in images. Nevertheless, our model still performs best among all methods. Furthermore, the results on S-HRD and S-Polyp indicate that the smaller the medical objects in the datasets, the more significant the improvement of our model compared to previous SOTA methods. This underscores the superiority of our model in small medical object segmentation. Additionally, simply increasing the size of U-Net (U-Net-Large) yields only marginal improvements in segmentation performance compared to the standard-sized U-Net. In contrast, our EFCNet demonstrates substantial improvement. This indicates that the superior performance of EFCNet in segmentation is primarily attributed to our model design rather than the larger model size. While the addition of CSAA and MPS increases the model's cost, we believe the improvement justifies the associated costs in the realm of small medical object segmentation. We provide detailed model costs comparison in the Supplementary. §.§ Visualization Some visual results of different methods on S-HRD and S-Polyp are shown in Fig. <ref>. We compare our EFCNet with the previous SOTA methods. According to our experimental results in Tab. <ref>, the previous SOTA method on S-HRD is Attn-UNet <cit.>, and the previous SOTA method on S-Polyp is SSFormer <cit.>. As illustrated in Fig. <ref>, the strengths of our method are predominantly evident in three aspects: (1) Our method excels at capturing extremely small medical objects, as highlighted in the green circle areas. (2) Our method demonstrates higher accuracy in terms of segmenting the boundaries of small medical objects, as evidenced by the yellow circle areas. (3) Our method is significantly less prone to erroneously segmenting the background into small medical objects, as indicated by the red circle areas. On one hand, CSAA facilitates the application of valuable local information from low-level features in the encoder to the segmentation process, ensuring the model's capability to capture fine details of small medical objects. On the other hand, MPS enables the model to leverage global perception inherent in low-resolution features in the initial stages of the decoder, enhancing its ability to locate small medical objects. §.§ Ablation Studies We perform several sets of ablation experiments on S-HRD and S-Polyp, confirming the positive effect of CSAA and MPS on segmentation ability for small medical objects respectively. Effectiveness of CSAA and MPS. We incorporate CSAA and MPS into our U-Net backbone individually, and the experimental results are presented in Tab. <ref>. It is evident that each module contributes to the improvement of our model's performance. Furthermore, with the addition of both CSAA and MPS, the segmentation ability of our model experiences further enhancement. Number of Stages in CSAA. We change the number of stages aggregated in CSAA module: AA-All aggregates features in all stages of the encoder, which is exactly the CSAA applied in our final model. AA-One only performs axial attention on features in one stage of the encoder. 
Concat-One only concatenates features in each stage of the encoder to the corresponding decoder without any other processing. The performance of these three methods on S-HRD and S-Polyp is shown in Tab. <ref>. It can be seen that among the three models, Concat-One performs the worst. Compared with Concat-One, AA-One improves the segmentation ability of the model. CSAA aggregates features from all stages of the encoder and performs best among these three methods. Number of Supervisions in MPS. We vary the number of supervisions in the MPS: MPS-4 connects segmentation heads to all stages of the decoder, representing the MPS configuration in our final model. MPS-3, MPS-2, and MPS-1 connect three, two and one segmentation heads to the decoder, respectively. The performance of these four models on S-HRD and S-Polyp is shown in Tab. <ref>. It is evident that among these four models, increased supervision correlates with improved model performance. § CONCLUSION We introduce a novel model called EFCNet to address the challenging task of small object segmentation in medical images. EFCNet pays sufficient attention to all features of each stage in the model, effectively reducing the information loss of small medical objects and improving the segmentation accuracy. Specifically, we propose the Cross-Stage Axial Attention Module (CSAA) and the Multi-Precision Supervision Module (MPS), which alleviate the loss of information in the encoder and decoder respectively, leading to a substantial enhancement in model performance. Moreover, we establish a new benchmark for small medical object segmentation research. Our experiments on two datasets demonstrate that CSAA and MPS contribute to improved segmentation accuracy, with our model significantly outperforming previous state-of-the-art models. § OVERVIEW In the supplementary material, we first provide additional analysis of model cost in Sec. <ref>. Then, we provide additional details about the architecture of our EFCNet in Sec. <ref> and experiment settings in Sec. <ref>. In the end, we introduce ethical considerations in Sec. <ref>. § ANALYSIS OF MODEL COST We provide a model cost comparison of our EFCNet and other UNet-based methods in Tab. <ref> and the performance improvement compared to U-Net <cit.> in Tab. <ref>. We conclude that, in the field of small medical object segmentation, simply increasing the model size as in U-Net-Large does not bring a significant improvement in segmentation performance over the standard-sized U-Net. In comparison, our EFCNet achieves far better segmentation performance than other UNet-based methods with a model cost less than that of U-Net-Large. Indeed, our model remains relatively large in order to effectively address the intricate challenge of the segmentation task. § ADDITIONAL DETAILS ABOUT MODEL ARCHITECTURE We provide additional details about the backbone network, the CSAA Module and the MPS Module in our EFCNet as shown in Tab. <ref>, Tab. <ref> and Tab.
<ref> respectively. § ADDITIONAL DETAILS ABOUT EXPERIMENT SETTINGS Experimental environment. The environment of our experiment is as follows. GPU: NVIDIA RTX A6000; CUDA Version: 11.7; Python Version: 3.10.4; Torch Version: 1.13.1. Split of Datasets. We perform five-fold cross-validation in our experiments. In each split, datasets are separated into training set, validation set, testing set by a ratio of 7:1:2. § ETHICAL CONSIDERATIONS The collection and utilization of human data in our research project adheres to the highest ethical standards. Our study has received the full approval of the Institutional Review Board and the Ethics Committee of a hospital. This approval process is conducted in strict accordance with the principles outlined in the Declaration of Helsinki <cit.>, which provides ethical guidelines for medical research involving human subjects. All recruited patients have signed the informed consent to publish this paper. Here, we provide the information on data collection and annotation, appropriate consent and privacy considerations. Data collection and annotation. In this investigation, retinal OCT scans are collected from eyes of 313 patients who seek treatment for macular edema associated with diabetic retinopathy or retinal vein occlusion at a hospital within the past six months. The investigation has been approved by the Institutional Review Board and the Ethics Committee of a hospital, in accordance with the principles of the Declaration of Helsinki <cit.>. Each OCT scan is centered on the fovea, either vertically or horizontally. Macular edema is defined as a central retinal thickness (CRT) greater than 300 mm. Small hyperreflective dots (small HRDs) are defined as discrete tiny dots with diameter between 20 micron and 40 micron, characterized by reflectivity similar to that of the nerve fiber layer and the absence of back shadowing <cit.>. The ground truths of small HRDs have been manually labeled by experienced eye doctors with over ten years of expertise. Appropriate consent and privacy considerations. We confirm that appropriate consent has been obtained for the use and display of images in our research. To uphold the privacy and confidentiality of the individuals involved, we take rigorous measures to ensure that all identifying information, including names, genders, and birth dates of patients, has been thoroughly removed from the images prior to any processing or analysis. These steps are taken to safeguard patient privacy and adhere to ethical standards. We understand the critical importance of addressing privacy concerns when dealing with medical images, and we are committed to upholding the highest ethical standards in our research practices.
http://arxiv.org/abs/2406.19183v1
20240627140138
Fronthaul Quantization-Aware MU-MIMO Precoding for Sum Rate Maximization
[ "Yasaman Khorsandmanesh", "Emil Björnson", "Joakim Jaldén" ]
eess.SP
[ "eess.SP" ]
Fronthaul Quantization-Aware MU-MIMO Precoding for Sum Rate Maximization This work was supported by the Knut and Alice Wallenberg Foundation. Yasaman Khorsandmanesh, Emil Björnson, and Joakim Jaldén KTH Royal Institute of Technology, Stockholm, Sweden Email: {yasamank, emilbjo, jalden}@kth.se ============================================================================================================================================================= § ABSTRACT This paper considers a multi-user multiple-input multiple-output (MU-MIMO) system where the precoding matrix is selected in a baseband unit (BBU) and then sent over a digital fronthaul to the transmitting antenna array. The fronthaul has a limited bit resolution with a known quantization behavior. We formulate a new sum rate maximization problem where the precoding matrix elements must comply with the quantizer. We solve this non-convex mixed-integer problem to local optimality by a novel iterative algorithm inspired by the classical weighted minimum mean square error (WMMSE) approach. The precoding optimization subproblem becomes an integer least-squares problem, which we solve with a new algorithm using a sphere decoding (SD) approach. We show numerically that the proposed precoding technique vastly outperforms the baseline of optimizing an infinite-resolution precoder and then quantizing it. We also develop a heuristic quantization-aware precoding that outperforms the baseline while having comparable complexity. Sum rate maximization, weighted minimum mean square error, quantization-aware precoding. § INTRODUCTION Multi-user multiple-input multiple-output (MU-MIMO) systems enable high data rates through spatial multiplexing of multiple user equipments (UEs) on the same time-frequency resource <cit.>. A base station (BS) equipped with multiple antennas and channel state information (CSI) can transmit simultaneously to several UEs using different beamforming directivity to increase the sum rate, which is controlled by the precoding method. In precoding matrix design, it is most common to maximize the sum rate, or even the weighted sum rate, under a constraint on the total transmit power <cit.>; however, sum rate maximization is known to be NP-hard <cit.>. One popular approach for sum-rate maximization is the iterative weighted minimum mean square error (WMMSE) algorithm, which finds locally optimal solutions with affordable computational complexity <cit.>. This paper utilizes a novel iterative algorithm inspired by WMMSE. A 5G BS typically consists of two main components: an advanced antenna system (AAS) and a baseband unit (BBU). The AAS is a box containing the antenna elements and their respective radio units (RUs). The BBU performs the digital processing related to the received uplink data and transmitted downlink data. The AAS and BBU are connected through a digital fronthaul. The integration of antennas and radios into a single box has made massive MU-MIMO practically feasible <cit.> and enabled the BBU to be virtualized in an edge cloud through migration to the centralized radio access network architecture <cit.>. The new implementation bottleneck is the limited fronthaul capacity and the quantization errors it creates. Both the uplink/downlink data and combining/precoding coefficients are sent over this digital fronthaul and must be quantized to a finite resolution. This paper proposes a novel linear block-level quantization-aware precoding technique that maximizes the sum rate. 
§.§ Prior Work Downlink MU-MIMO systems have been widely studied in previous literature regarding impairments in analog hardware <cit.> and the effect of low-resolution digital-to-analog converters <cit.>. These prior works are characterized by distortion created either in the RU, analog domain, or converters. Therefore, the transmitted signal is distorted after the precoding. The effect of limited fronthaul capacity is studied in <cit.>, but the precoding design was not quantized. In <cit.>, the authors proposed a fronthaul quantization-aware precoding design that minimizes the sum MSE, which will generally not maximize the sum rate. Many previous studies suggest designing a precoding matrix by maximizing the sum rate, often using the WMMSE approach; see <cit.> and references therein. Nevertheless, they consider ideal hardware or other types of distortion than precoding quantization. §.§ Contributions This paper proposes a transmit precoding design that finds a local optimum to the sum-rate maximization problem subject to a transmit power constraint over a limited-capacity fronthaul connection that only accepts precoding matrix elements from a discrete quantization codebook. The main contributions are: * We propose maximizing the sum rate by solving a quantization-aware precoding problem. As this mixed-integer problem is non-convex, we rewrite it following the iterative WMMSE algorithm to find a local optimum. Each iteration of the proposed iterative algorithm contains a new integer least-square problem that minimizes the weighted MSE at each UE. The solution is obtained in a new way inspired by sphere decoding (SD). We consider a reduced-complexity variation on the proposed algorithm where only the last iteration is quantization-aware. * We define quantization-unaware precoding as a baseline and then recommend a low-complexity heuristic algorithm to sequentially refine the quantization-unaware precoding columns to improve the sum rate. The complexity is comparable to quantization-unaware precoding; thus, massive MIMO scenarios can be effectively handled. * We provide numerical results to compare different quantization-aware algorithms with quantization-unaware precoding baseline in terms of sum rate with the correlated Rician fading and a uniform planar array (UPA). § SYSTEM MODEL We consider a single-cell MU-MIMO downlink system, where the BS contains an AAS with M antenna-integrated radios and serves K single-antenna UEs. The AAS is connected to a BBU through a limited-capacity fronthaul link, which is modeled as a finite-resolution quantizer. The precoding matrix P is computed, and the data symbol vector s is encoded at the BBU and then sent over the fronthaul to the AAS. As data symbols are bit sequences from a channel code, we can transmit them over the fronthaul without quantization errors as they are already quantized. We can then map the data symbols obtained from the BBU to modulation symbols at the AAS. However, the precoding matrix computed at the BBU based on CSI is quantized due to the digital fronthaul. The quantized precoding matrix is then multiplied with the UEs' data symbols at the AAS, and finally, the product is transmitted wirelessly. Before analyzing the proposed quantization-aware precoding in Section <ref>, we introduce the problem formulation and quantization scheme in the following subsections. §.§ Problem Formulation The BBU uses its available CSI to select the downlink precoding matrix P. 
As the main focus of this work is on managing quantization errors in a precoding matrix that maximizes the sum rate, we consider perfect CSI. However, the same algorithms can be used if the BBU has imperfect CSI but treats it as perfect. We postpone the channel estimation part for future work. The transmitted data symbol to the UE k is denoted by s_k, which has zero mean and normalized unit power, and the corresponding channel vector h^T_k ∈ℂ^1 × M represents a narrowband channel and might be one subcarrier of a multi-carrier system. The algorithm developed in this paper can be applied individually to each subcarrier. The received signal at the UE k is y_k = h^T_k p_k s_k + ∑_i=1,i k^K h^T_k p_i s_i + n_k, where p_k∈𝒫^M is the quantized linear precoding vector for UE k and n_k ∼𝒞𝒩(0,N_0) represents the independent additive complex Gaussian receiver noise with power N_0. For later use, we define the total received signal as y = [y_1,…,y_K]^T, the data symbols vector as s = [s_1, …, s_K]^T∈𝒪^K (𝒪 is the finite set of constellation points such as a QAM alphabet), the channel matrix as H = [h_1,…,h_K]^T∈ℂ^K× M, and the precoding matrix as P = [p_1,…,p_K] ∈𝒫^M× K. The fronthaul quantization alphabet set 𝒫 is defined as 𝒫 = { l_R + jl_I : l_R,l_I∈ℒ}. We assume the same quantization alphabet is used for the real and imaginary parts. Here ℒ= { l_0, … ,l_L-1} contains the set of real-valued quantization labels, L=|ℒ| denotes the number of quantization levels, and L=log_2 (L) is the number of quantization bits per real dimension. Note that 𝒫 becomes the complex-number set ℂ in the case of infinite resolution. The quantized precoding matrix P and the data symbols vector s are sent separately over the fronthaul. The precoded signal vector x is calculated at the AAS as x = αPs, where the scaling factor α = √(q/ tr(PP^H)) is computed at the AAS, q denotes the maximum transmit power, and tr( · ) denotes the matrix trace. This scaling factor ensures that the 𝔼[‖x_2^2] = q, where · is the Euclidean norm so that the maximum power is always utilized, despite the finite-resolution quantization. The achievable rate is log_2 (1 + SINR_k (P)), where the signal-to-interference-plus-noise-ratio (SINR) depends on the precoding matrix P as SINR_k(P) = | h^T_k p_k |^2/∑_i=1,i k^K | h^T_k p_i |^2 + N_0. We want to maximize the sum rate of this downlink channel under the mentioned maximum transmit power constraint, and we define this problem as \beginmaxi![2] P∈𝒫^M ×K ∑_k=1^K log_2 (1 + SINR_k (P))ℙ_1: tr(PP^H) ≤q, \endmaxi! where the optimization variable is P with elements p_m,k∈𝒫 for k=1, … ,K and m=1, … ,M. Problem ℙ_1 is not convex since the utility is non-concave and the search space is discrete, so it is hard to find the optimal solution <cit.>. §.§ Quantization Scheme We are modeling the digital fronthaul as a quantizer. In practice, uniform quantization is often used, so we assume our quantizer function 𝒬(·) : ℂ→𝒫 is a symmetric uniform quantization with step size Δ. Each entry of the quantization labels ℒ is defined as l_z = Δ( z- L-1/2), z=0, …, L-1. Furthermore, we let 𝒯 = {τ_0, …, τ_L }, where -∞ = τ_0 < τ_1 < … < τ_(L-1) < τ_L =∞, specify the set of the L + 1 quantization thresholds. For uniform quantizers, the thresholds are τ_z = Δ( z- L/2), z=1, …, L-1. The quantizer function 𝒬(·) can be uniquely described by the set of quantization labels ℒ = { l_z : z=0, …, L-1 } and the set of quantization thresholds 𝒯. 
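In code, the labels and thresholds above amount to rounding the real and imaginary parts of each input to the nearest label, with clipping realising the two unbounded outer cells. The following NumPy sketch is ours, with illustrative values of Δ and L:

```python
import numpy as np

def quantize(r, delta, L):
    """Map complex entries to the nearest label l_z = delta*(z - (L-1)/2), z = 0..L-1,
    separately in the real and imaginary parts."""
    def q(x):
        z = np.clip(np.round(x / delta + (L - 1) / 2), 0, L - 1)
        return delta * (z - (L - 1) / 2)
    return q(np.real(r)) + 1j * q(np.imag(r))

delta, L = 0.25, 8
labels = delta * (np.arange(L) - (L - 1) / 2)
assert np.allclose(quantize(labels.astype(complex), delta, L).real, labels)  # labels are fixed points
print(quantize(np.array([0.3 - 0.9j]), delta, L))   # -> [0.375 - 0.875j]
```

Equivalently, in terms of the thresholds: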
The quantizer maps an input r ∈ℂ to the quantized output 𝒬(r) = l_o + jl_l ∈𝒫, where the set is defined in (<ref>), if ℜ{𝒬(r) }∈ [τ_o,τ_o+1) and ℑ{𝒬(r) }∈ [τ_l,τ_l+1). The step size Δ of the quantizer should be chosen to minimize the distortion between the quantized output and the unquantized input. The optimal step size Δ depends on the dynamic range of the input, which in our case depends on the precoding scheme and channel model. We select the step size to minimize the distortion under the maximum-entropy assumption that each input element to the quantizer is distributed 𝒞𝒩(0,q/KM), where the variance is selected so that the sum power of the elements matches with the power constraint (<ref>). The corresponding optimal step size for the normal distribution was numerically found in <cit.>. § PROPOSED WMMSE ALGORITHM As the optimization problem ℙ_1 is non-convex, the global optimal solution is challenging to find. We instead target finding a local optimum. Inspired by <cit.>, we will rewrite ℙ_1 as an equivalent iterative WMMSE problem for which a local optimum can be found through alternating optimization. In the following, we decompose this equivalent optimization problem into a sequence of convex subproblems. Let ŝ_k = β_ky_k denote the estimate at UE k of the transmitted data symbol s_k. It is obtained from the received signal y_k using the receiver gain β_k ∈ℂ (also known as the precoding factor <cit.>). For a given receiver gain, the MSE in the data detection as UE k becomes e_k(P, β_k) = 𝔼[ | s_k - ŝ_k |^2 ] = |β_k|^2(|h^T_k p_k|^2+∑_i=1,i k^K|h^T_k p_i|^2+N_0) -2(β_kh^T_k p_k )+1 . The MSE in (<ref>) is a convex function of β_k. We can select the value of β_k that minimizes the MSE for given P as β̅_k (P)= (h^T_k p_k )^*/|h^T_k p_k|^2+∑_i=1,i k^K|h^T_k p_i|^2 + N_0. By plugging optimal β̅_k into (<ref>), we can see that e_k(P, β̅_k) is equal to 1/(1+SINR_k(P) ). Now by defining the auxiliary weight d_k ≥ 0, we formulate a weighted sum MMSE problem subject to the same total transmit power constraint as in (<ref>): \beginmini![2] P∈𝒫^M ×K , β,d ∑_k=1^K ( d_k e_k(P, β_k) - log_2 (d_k)) ℙ_2: tr(PP^H) ≤q, \endmini! where β = [β_1,…,β_K]^T is a vector containing all receiver gains and d = [d_1,…,d_K]^T is a vector containing all the UE weights in the weighted MSE. The problem ℙ_2 is equivalent to ℙ_1 in the sense that the optimal P is the same for both problems. This equivalence comes from the fact that the optimal weight for the UE k is d̅_k = 1/e_k(P, β̅_k (P)) = 1+SINR_k(P), so (<ref>) then becomes K-∑_k=1^Klog_2(1+SINR_k(P)). The cost function in (<ref>) is convex in each individual optimization variable, which is the key reason for considering this equivalent problem formulation. For fixed β_k and d_k (e.g., calculated as in (<ref>) and (<ref>)), we have the WMMSE problem \beginmini![2] P∈𝒫^M ×K∑_k=1^K d_k e_k(P, β_k) ℙ_3: tr(PP^H) ≤q, \endmini! which is a mixed-integer convex problem. It can be solved using general-purpose methods, such as CVX <cit.>. By iterating between updating β_k using (<ref>), d_k using (<ref>), and P by solving ℙ_3, we obtain a block coordinate descent algorithm that will converge to a stationary point (for the same reasons as in <cit.>). This algorithm is summarized in Figure <ref>. We can initialize the algorithm using any precoding matrix, including those obtained using classical infinite-resolution precoding schemes. 
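This block coordinate descent can be summarised in a few lines. In the sketch below (ours), only the closed-form β and d updates are spelled out; the P-update is left as a black box standing for the exact or SD-based solution of ℙ_3 described in the following subsection.

```python
import numpy as np

def wmmse_iteration(H, N0, solve_P, P0, iters=10):
    """Block coordinate descent for the quantization-aware WMMSE algorithm.

    solve_P(beta, d) represents the P-update (the weighted-MMSE subproblem with
    the power constraint) and is treated as a black box; beta and d follow the
    closed forms quoted above.
    """
    P = P0
    for _ in range(iters):
        G = H @ P                                   # G[k, i] = h_k^T p_i
        rx = np.sum(np.abs(G) ** 2, axis=1) + N0    # received signal + interference + noise power
        beta = np.conj(np.diag(G)) / rx             # optimal receiver gains
        d = rx / (rx - np.abs(np.diag(G)) ** 2)     # optimal weights d_k = 1 + SINR_k
        P = solve_P(beta, d)
    return P, np.sum(np.log2(d))                    # final precoder and its sum rate
```

As for the choice of the initial precoding matrix: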
The Wiener filtering (WF) precoding scheme (also known as regularized zero-forcing) is the most desirable one in this context since it can be derived by minimizing the sum MSE <cit.>. Hence, we suggest setting the initial precoding matrix as P_initial = H^H (HH^H+KN_0/qI_K )^-1. §.§ Efficient Implementation of Quantization-Aware Precoding The main complexity in the proposed algorithm originates from solving ℙ_3. Instead of using a general-purpose solver, we will propose an efficient dedicated algorithm. We can rewrite the objective function (<ref>) as ∑_k=1^K d_ke_k(P,β_k) = 𝔼[ ‖√(diag( d )) ( s - diag( β) y) _2^2 ], where diag(d) is a diagonal matrix with elements from the vector d with UE weights on the main diagonal. The expression in (<ref>) can be expanded as 𝔼[ ‖√(diag(d))s - √(diag(d))diag(β) y_2^2 ] = tr(diag(d) - √(diag(d))DHP- √(diag(d))P^HH^HD^H + DHPP^HH^HD^H + N_0 DD^H), where we introduce the notation D= √(diag(d))diag(β). We first notice that ℙ_3 is a so-called integer least-squares problem due to uniform quantizer that we chose. The search space is a scaled finite subset of the infinite integer lattice. A technique that has previously been proposed as an efficient algorithm to solve closest lattice point problems in the Euclidean sense is called sphere decoding (SD). SD has significantly lower average computational complexity than a naive exhaustive search <cit.>. The basic principle of SD is to reduce the number of search points in a skewed lattice that lies within a hypersphere of radius d, which can speed up the process of finding the solution without loss of optimality. In this paper, we consider the Schnorr-Euchner SD (SESD) algorithm <cit.>, where the enumeration sorts candidate symbols in a zig-zag manner. SESD improves the basic SD algorithm by first checking the smallest child node of the parent node in each layer because the first found the feasible solution is often quite suitable and quickly enables a reduction of the search radius. Thus, many branches can be pruned, and the calculation complexity can be further lowered. We propose using the SESD algorithm to approximately solve ℙ_3 for fixed values of β_k and d_k. However, the classical SD framework does not contain any power constraint. To adapt our problem ℙ_3 to match with the form required by SD algorithms, we need to proceed as follows. First, we rewrite the objective function (<ref>) using the Lagrange multiplier λ as 𝔏(P, β, λ) = tr(diag(d) - √(diag(d))DHP - √(diag(d))P^HH^HD^H + DHPP^HH^HD^H + N_0 DD^H) + λ( tr(PP^H)-q ). When minimizing (<ref>) with respect to P and λ, we can drop the constant term tr (diag(d) ) and have problem ℙ_4, given in (<ref>) at the top of the next page. Although strong duality does not hold for the integer least-squares problem ℙ_3, ℙ_4 provides a good approximation. For solving ℙ_4, we can utilize the SD algorithm for a fixed value of λ. We then make a bisection search over λ to find the value that gives a solution that satisfies the power constraint in (<ref>) near equality. For a fixed value of λ, and by vectorizing ℙ_4 and using f = vec( (√(diag(d))DH)^T), we obtain ℙ_5, which finds a suboptimal precoding matrix and has K separable objective functions that each only depends on one of the optimization variables. This feature enables parallel optimization of p_i for i=1,…,K. Thus, in addition to the more efficient search strategy, the reformulation of problem ℙ_5 also significantly reduces the dimension of each subproblem <cit.>. 
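The outer search over λ can be organised as a plain bisection. The sketch below is ours and only fixes an interface: solve_P_given_lambda stands for the SESD-based solution of the fixed-λ problem, and the bracketing values are illustrative.

```python
import numpy as np

def precoder_via_bisection(solve_P_given_lambda, q, lam_lo=0.0, lam_hi=1e3, iters=25):
    """Bisection over the Lagrange multiplier so that tr(P P^H) <= q near equality.

    Larger lam penalizes transmit power more; lam_hi is assumed large enough that
    its solution is feasible.
    """
    P = solve_P_given_lambda(lam_hi)
    for _ in range(iters):
        lam = 0.5 * (lam_lo + lam_hi)
        P_mid = solve_P_given_lambda(lam)
        if np.trace(P_mid @ P_mid.conj().T).real > q:
            lam_lo = lam                  # too much power: need a stronger penalty
        else:
            lam_hi, P = lam, P_mid        # feasible: keep it and try a smaller lam
    return P
```

Returning to the per-user subproblems obtained above: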
By defining V̂ = H^HD^HDH+λI_M, we can obtain the equivalent formulation of each term of the objective function in (<ref>) as p_i^HV̂p_i -f_i^Tp_i - ( f_i^Tp_i )^H = ‖c_i - Gp_i ‖_2^2 - c_i^Hc_i, where G∈ℂ^M × M is obtained from the Cholesky decomposition V̂ = G^HG and c_i = (f_i^TG^-1)^H. We can now minimize (<ref>) by the classical SESD algorithm. We call this approach the Proposed SD-based WMMSE algorithm. §.§ Quantization-Unaware Precoding The conventional WMMSE-based algorithms in <cit.> (among others) compute a precoding matrix from ℂ^M× K that maximizes the sum rate with infinite resolution and the available CSI. Hence, the naive baseline approach would be to compute such a precoding matrix P_unquantized=W∈ℂ^M× K and then quantize each entry using 𝒬(·): ℂ^M × K→𝒫^M × K so that the result can be sent over the fronthaul. In this case, the BBU follows Fig. <ref> but solves ℙ_3 in the domain ℂ instead of 𝒫, and ℙ_3 becomes a continuous convex optimization problem that is solvable using any general-purpose convex solver. We will refer to this quantization-unaware precoding approach as the Unaware WMMSE. After the WMMSE algorithm has converged, the final precoding matrix is obtained by quantizing each entry as P = 𝒬(W). §.§ Combined Quantization-Aware and -Unaware Precoding Although the proposed SD approach is tailored for the problem, there is a limit to how large setups (M and K) it can handle before the run time of the Proposed SD-based WMMSE approach becomes an issue. As we run the SD algorithm in each iteration, the algorithm's average complexity over N iterations becomes O(NKL^2γ M) for some 0 ≤γ≤ 1 <cit.>. An alternative would be to search for a precoding matrix in ℂ^M × K for N-1 iterations and then use the SD-based algorithm only for the final iteration. In this case, we will first identify UE gains β_k and weights d_k that are suitable for sum-rate maximization with infinite-resolution precoding and then compute the corresponding optimized quantization-aware precoding. The order of complexity would be O(KL^2γ M) for some 0 ≤γ≤ 1. We call this method Half-aware WMMSE. §.§ Heuristic Quantization-Aware Precoding Although the Half-aware WMMSE scheme has lower complexity than the Proposed SD-based WMMSE, the complexity grows exponentially with the number of antennas M, which makes scenarios with many antennas intractable. Hence, we believe it should primarily be seen as a benchmark for designing quantization-aware precoding schemes with low complexity. In this subsection, we propose such a heuristic precoding scheme. The proposed scheme is an add-on to the Unaware WMMSE precoding. After we quantize the precoding matrix W∈ℂ^M × K to obtain P∈𝒫^M × K, we refine the elements sequentially. We consider the second closest quantization levels in both the real and imaginary dimensions according to Euclidean distance <cit.>. This search gives us three alternative ways of quantizing each element in the precoding matrix W. We call this method the Heuristic quantization-aware. As the matrix elements are refined sequentially, we must order the elements properly. We propose to start by updating the column of the quantized precoding matrix corresponding to the UE k with the highest generated interference GI_k = ∑_i=1 , i k^K |[HP̂]_i,k|^2, where P̂ = αP = α𝒬 (W) since this might improve the performance the most.[We have noticed experimentally that this leads to the largest improvement in sum rate at high SNR.] 
Then for that specific UE k, for each transmit antenna m ∈{ 1, …, M }, we identify the four nearest points in 𝒫 to the element w_k,m from the original unquantized precoding matrix W. We evaluate the sum rate ∑_k=1^K log_2 (1+|[HP̂]_k,k|^2/∑_i=1, i k^K|[HP̂ ]_k,i|^2 +N_0), for the four different P options obtained with p_k,m∈{four nearest points to w_k,min𝒫} while all other elements are fixed. We then replace the corresponding element in P with the option that achieves the largest sum rate. The rest of the UEs are ordered based on decreasing generated interference, and the precoding elements are updated similarly. § NUMERICAL RESULTS This section compares the sum rates achieved by the aforementioned precoding approaches as a function of the SNR. The sum rate is calculated using Monte Carlo simulations for the case of Gaussian signaling. §.§ Channel Model We consider spatially correlated Rician fading channels composed of a line-of-sight (LoS) path component and a non-line-of-sight (NLoS) path component as H = √(κ/κ+1)H^LoS+√(1/κ +1)H^NLoS, where κ is the Rician factor, while H^LoS∈ℂ^K× M and H^NLoS∈ℂ^K× M are the LoS and NLoS components, respectively. We assume that the AAS is a UPA; thus, the LoS channel matrix can be modeled as in <cit.>. We model the NLoS channel as h_k^NLoS∼𝒞𝒩(0_M,R_k), which is spatially correlated Rayleigh fading. The spatial correlation matrix R_k∈ℂ^M × M is generated following the local scattering model from <cit.>. The locations of the UEs are randomly generated with the same elevation angle θ =0 but uniformly distributed azimuth angles ϕ, seen from the UPA. §.§ Results and Discussion We assume the UPA consists of M = 16 entries in the form of a 4 × 4 array. The number of quantization levels is L=8, the number of UEs is K = 4, and the Rician factor is κ = 5. The UEs have the same SNR, which is defined as SNR=q γ/N_0, where γ is the common channel variance. Fig. <ref> presents the convergence behavior of the proposed WMMSE algorithm in Fig. <ref> for the cases when ℙ_3 is solved exactly using CVX (denoted by CVX-based WMMSE) and approximately using the proposed SD-based method. We consider SNR=20 dB and generate one user drop. The proposed algorithm reaches a stationary point after N=5 iterations. We notice that, despite the approximations made to lower the complexity in the SD-based method, the difference in the sum rate is negligible after convergence. The difference between the sum rate at the starting point and the stationary point approximately shows the improvement of the proposed sum-rate maximization algorithm compared to our previous results in <cit.>, which considered sum MSE minimization. Fig. <ref> depicts the average sum rate as a function of the SNR for the different precoding schemes. The top curve, WMMSE (infinite res), considers the ideal case without quantization and outperforms all the quantized precoding schemes since the rate increases linearly (in dB scale) at high SNR. In all quantized precoding schemes, the sum rate converges to specific limits at high SNR since the interference cannot be canceled entirely due to the limited precoding resolution; i.e., the system is interference-limited at high SNR. The gap between WMMSE (infinite res) and our novel Proposed SD-based WMMSE precoding is remarkably smaller than the gap between Unaware WMMSE and WMMSE (infinite res), where we quantized the precoding matrix used for WMMSE (infinite res). Our algorithm provides twice the rate at high SNR. 
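The following sketch implements the element-wise refinement just described: UEs are visited in order of decreasing generated interference GI_k, and each precoder entry is re-quantized to whichever of the four grid points nearest the unquantized value maximizes the sum rate. The power-normalization factor α is omitted (absorbed into P), and the grid and variable names are illustrative assumptions.

```python
import numpy as np

def sum_rate(H, P, N0):
    """Sum rate with interference treated as noise, as in the expression above."""
    S = H @ P
    sig = np.abs(np.diag(S)) ** 2
    interf = np.sum(np.abs(S) ** 2, axis=1) - sig
    return np.sum(np.log2(1 + sig / (interf + N0)))

def heuristic_refine(H, W, P, levels, N0):
    """Re-quantize entries of P toward the four grid points nearest the unquantized W,
    visiting UEs in order of decreasing generated interference GI_k."""
    M, K = P.shape
    S = H @ P
    GI = np.sum(np.abs(S) ** 2, axis=0) - np.abs(np.diag(S)) ** 2  # interference UE k causes
    for k in np.argsort(-GI):                                      # strongest offender first
        for m in range(M):
            re_lv = levels[np.argsort(np.abs(levels - W[m, k].real))[:2]]
            im_lv = levels[np.argsort(np.abs(levels - W[m, k].imag))[:2]]
            best, best_rate = P[m, k], sum_rate(H, P, N0)
            for cand in (r + 1j * i for r in re_lv for i in im_lv):
                P[m, k] = cand
                rate = sum_rate(H, P, N0)
                if rate > best_rate:
                    best, best_rate = cand, rate
            P[m, k] = best
    return P
```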
The lower-complexity Half-aware WMMSE approach also outperforms the Unaware WMMSE algorithm, but there is a substantial gap to the Proposed SD-based WMMSE. This gap demonstrates the importance of computing quantization-aware weights in the WMMSE algorithm instead of relying on those obtained with infinite resolution. The proposed Heuristic quantization-aware precoding reaches nearly the same performance as Half-aware WMMSE and, thus, performs vastly better than Unaware WMMSE precoding. Note that the complexity of the Heuristic quantization-aware scheme is polynomial in M and K, just as for Unaware WMMSE, and it is therefore implementable in the same scenarios (e.g., massive MIMO). § CONCLUSIONS A 5G site often consists of an AAS connected to a BBU via a digital fronthaul with limited capacity. Hence, the precoding matrix that is computed at the BBU must be quantized to finite precision. We have proposed a novel WMMSE-based framework for quantization-aware precoding that finds a local optimum of the sum-rate maximization problem. Moreover, a reduced-complexity SD algorithm was proposed to find an approximate but tight solution. We demonstrated that the sum rate can be doubled at high SNR by being quantization-aware both when selecting the weights in the WMMSE formulation and when selecting the precoding matrix. In addition, a heuristic quantization-aware precoding scheme was proposed that outperforms the quantization-unaware baseline while having comparable complexity. Finally, we note that the framework can easily be modified to also solve weighted sum-rate problems. 
http://arxiv.org/abs/2406.19132v1
20240627123646
Origin of extended Main Sequence Turn Off in open cluster NGC 2355
[ "Jayanand Maurya", "M. R. Samal", "Louis Amard", "Yu Zhang", "Hubiao Niu", "Sang Chul Kim", "Y. C. Joshi", "B. Kumar" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.GA" ]
§ ABSTRACT The presence of extended Main Sequence Turn-Off (eMSTO) in the open clusters has been attributed to various factors, such as spread in rotation rates, binary stars, and dust-like extinction from stellar excretion discs. We present a comprehensive analysis of the eMSTO in the open cluster NGC 2355. Using spectra from the Gaia-ESO archives, we find that the stars in the red part of the eMSTO have a higher mean v sin i value of 135.3±4.6 km s^-1 compared to the stars in the blue part that have an average v sin i equal to 81.3±5.6 km s^-1. This suggests that the eMSTO in NGC 2355 is possibly caused by the spread in rotation rates of stars. We do not find any substantial evidence of the dust-like extinction from the eMSTO stars using ultraviolet data from the Swift survey. The estimated synchronization time for low mass ratio close binaries in the blue part of the eMSTO suggests that they would be mostly slow-rotating if present. However, the stars in the blue part of the eMSTO are preferentially located in the outer region of the cluster indicating that they may lack low mass ratio close binaries. The spread in rotation rates of eMSTO stars in NGC 2355 is most likely caused by the star-disc interaction mechanism. The stars in the lower main sequence beyond the eMSTO region of NGC 2355 are slow-rotating (mean v sin i = 26.5±1.3 km s^-1) possibly due to the magnetic braking of their rotations. (Galaxy:) open clusters and associations – individual: NGC 2355 – stars: rotation – (stars:) Hertzsprung-Russell and colour-magnitude diagrams – techniques: spectroscopic. § INTRODUCTION The unusual spread near the turn-off of the main sequence (MS) in the colour-magnitude diagram of the Galactic open clusters has drawn the attention of astronomers recently <cit.>. This spread in the upper MS is called extended main sequence turn-off i.e. eMSTO. Though recent in detection, the eMSTOs are now commonly found in the Galactic open clusters <cit.>. It has been found that the inferred apparent age spread from the eMSTO's width is correlated to the cluster age <cit.>. This indicates that the presence of the eMSTOs is related to stellar evolution rather than star formation. <cit.> investigate the origin of the eMSTO in the clusters and predict that the stellar rotation can cause broadening of the upper MS of the clusters which was found to be true for observed young and intermediate-age clusters <cit.>. The fast rotational velocity causes a decrease in the self-gravity of the star which manifests in the reduced stellar surface temperature, a phenomenon known as gravity darkening. Therefore, the fast-rotating stars appear redder on the colour-magnitude diagram (CMD) due to the gravity-darkening effect <cit.>. However, gravity darkening affects the stellar equators more significantly than the poles of the stars so the observed colour of fast-rotating stars also depends on the inclination angle. Additionally, the rotational mixing in the fast-rotating stars can enhance their core size and lifetime on MS. These stars will appear younger than their non-rotating counterparts <cit.>. Although the spread in rotation rates of stars is the widely accepted reason behind the origin of eMSTO in the open clusters, there are studies suggesting other possible mechanisms that could contribute to the presence of the eMSTOs in the open clusters <cit.>. 
The differential reddening was the cause behind the presence of eMSTO in cluster Stock 2 instead of the rotational velocity <cit.>. However, the open clusters hosting eMSTOs explored by <cit.> do not show any significant differential reddening. Similarly, using simple stellar populations models <cit.> notify the contribution of the binary stars in the eMSTO morphologies of the open clusters. Recently, <cit.> propounded the idea that dust-like extinction from the circumstellar excretion disc of the fast-rotating stars may lead to the presence of the eMSTO in the star clusters. Thus, the exact reason behind the origin of the presence of eMSTO in the Galactic open clusters is still a debated topic. The possible mechanism causing the spread in rotation rates of the stars with similar masses belonging to the eMSTO is still debated. <cit.> proposed that all the stars were initially fast-rotating stars before tidal torque in binary stars, causing them to become slow rotators. The finding that a few star clusters have comparable binary fractions in the fast and slow-rotating stars samples seems not to support this hypothesis <cit.>. A different explanation for the spread in the rotation rate of MS stars having masses above ∼1.5 M_⊙ links this to the bimodal stellar rotation distribution in the pre-main sequence (PMS) low-mass (2 M_⊙) stars <cit.>. This bimodal stellar rotation in the PMS stars depends on the interaction of stars with their circumstellar discs during PMS lifetimes. Another interesting mechanism happening in very young clusters suggests that the fast rotation is caused by disc accretion and slow rotation is due to the binary merger in the stars <cit.>. So, theories providing possible physical mechanisms leading to the spread in rotation rates of the eMSTO stars are still emerging. Therefore, studying the origin of the eMSTOs in open star clusters needs keen attention to develop a better understanding of this physical phenomenon. In this work, we carefully investigate the origin of the eMSTO in open cluster NGC 2355. The open cluster NGC 2355 is an intermediate-age open cluster having an age of 1 Gyr <cit.>. This cluster is located at a distance of 1941 pc with Galactic longitude and latitude of 203.370 and 11.813 degrees, respectively <cit.>. The cluster is situated in the anticenter direction in the Gemini constellation at the Galactocentric distance of 10.1 kpc near the region where the Milky Way metallicity gradient flattens <cit.>. The mean metallicity of the cluster is estimated to be [Fe/H] = -0.09 by <cit.> using medium resolution Gaia-ESO UVES (R∼47000) and GIRAFFE HR15N (R∼19000) spectra. This [Fe/H] metallicity translates to Z = 0.0163 in the mass fraction form of metallicity using the relation provided by <cit.>. The broadening near the MS turn-off of NGC 2355 was noticed by <cit.> using Swift near-ultraviolet (1700-3000 Å) data without further spectroscopic study. The open cluster NGC 2355 is also studied for stellar variability by <cit.> using optical V band data. A total of 15 members are variable stars in the study. The present study is organized as follows. In Section 2, we present the source and specifications of the data used in the current analysis. We describe member identification, cluster general properties, and demography and physical properties of the eMSTO stars in Section 3. In Section 4, we explore the different reasons that could explain the origin of the eMSTO. Finally, we conclude in Section 5. 
§ DATA We utilized photometric and astrometric data provided by Gaia Data Release 3 (DR3) <cit.> for the present study. The Gaia DR3 archive provides data with photometric uncertainty of 0.0003, 0.001, and 0.006 mag for the G band data up to G<13, G=17, and G=20 mag, respectively. The photometric uncertainties for the G_ BP band are ∼0.0009, 0.012, and 0.108 mag up to G<13, G=17, and G=20 mag, respectively. The G_ RP band data have 0.0006, 0.006, and 0.052 mag photometric uncertainty for G<13, G=17, and G=20 mag, respectively. The proper motions provided by Gaia DR3 have median uncertainty of ∼0.03, 0.07, and 0.5 mas/yr for G<15, G=17, and G=20 mag, respectively. The median uncertainties in parallax are ∼0.03 mas for G<15, 0.07 mas for G=17, and 0.5 mas for G=20 mag. We used Gaia DR3 astrometric data to identify member stars and calculate the distance of the cluster NGC 2355. The ultraviolet (UV) data for NGC 2355 in the uvw2 (1928 Å), uvm2 (2246 Å), and uvw1 (2600 Å) bands of the Swift survey provided by <cit.> is used in the present study. This data set is part of the Swift/Ultraviolet-Optical Telescope Stars Survey and includes near-ultraviolet photometric data of 103 Galactic open clusters. We used this data to create the ultraviolet CMD for NGC 2355. We used medium and high-resolution spectra available at the European Southern Observatory (ESO) archive [http://archive.eso.org/scienceportal/home]. The medium-resolution and high-resolution spectra were observed using the multi-object GIRAFFE and UVES spectrographs installed on the ESO Very Large Telescope (VLT). The archived spectra were observed under programme 197.B-1074 (PI: GILMORE, GERARD). The resolution for medium-resolution GIRAFFE spectra was R = λ/Δλ ∼ 19,200 for the wavelength range ∼644 to ∼680 nm. The high-resolution spectra from the UVES spectrograph have a resolution of R ∼ 51,000 in the wavelength range ∼582-683 nm. These Gaia-ESO spectra were used to estimate the projected rotational velocity of the stars in this paper. § MEMBERSHIP AND PHYSICAL PROPERTIES §.§ Identification of the member stars The identification of member stars is necessary to derive conclusions about the origin of the eMSTO stars. For this purpose, we used very precise astrometric data from Gaia DR3 archives. We downloaded all the sources from Gaia DR3 data within a radius of 0.261 around the cluster center with right ascension (RA) = 07:16:59.3 and declination (Dec.) = +13:46:19.2 of epoch=J2000 given by <cit.>. This radius is equal to the tidal radius of the cluster estimated by <cit.>. We present a vector-point diagram (VPD) of the NGC 2355 region showing the proper motion plane in Figure <ref>. We noticed a conspicuous over-density in the VPD of NGC 2355 at (μ_α * = -3.802 mas yr^-1; μ_δ = -1.086 mas yr^-1) as shown Figure <ref>. The stars lying within a circle of radius 0.6 mas yr^-1 centered at this point were chosen as potential member stars. This radius is estimated through the radial density profile of stars in the proper motion plane as described in our previous studies <cit.>. The radius is the radial distance from the center where the number density of potential members starts merging into the number density of the field stars in the proper motion plane. To quantify the membership of stars, we assigned membership probability to the stars calculated through a statistical method utilizing proper motions as described in <cit.>. 
The stars with membership probabilities above 60% and parallaxes, ϖ, within 3σ standard deviation of the mean ϖ of the potential member stars were chosen as cluster members. Through this process, we obtained 411 member stars in cluster NGC 2355. §.§ Extended Main Sequence Turn-Off in the cluster We notice an unusual broadening of the upper MS, i.e. the eMSTO, in the CMD of NGC 2355, as visible in Figure <ref>. We defined the extended main sequence turn-off region in the colour-magnitude diagram of the cluster NGC 2355 as the rectangular region with G_ BP - G_ RP colour values between 0.35 to 0.70 mag and G band magnitude less than 14.16 mag. We analyzed the 3D kinematics of the stars belonging to the eMSTO region which is illustrated in Figure <ref>. The radial velocity (RV) of 39 out of 54 eMSTO stars were available in <cit.> derived from the spectroscopic data from the Gaia-ESO survey. We found all these 39 eMSTO stars except one share approximately similar RV with a mean RV value of 35.4±0.4 km s^-1 and a standard deviation of 2.1 km s^-1. The remaining eMSTO star with ID 38 is reported to have an RV value of 50.2 ± 0.3 km s^-1, which may be a binary star. The proper motions, parallaxes, and RV of 53 out of 54 eMSTO stars suggest that these stars are profound members of the cluster NGC 2355. The stars in the eMSTO region have magnitudes G≤14.16, G_ BP ≤ 14.34, and G_ RP ≤ 13.80, respectively. The uncertainties for these magnitude ranges are estimated to be 0.0002, 0.0012, and 0.0010 mag, respectively, using the tool provided by Gaia DPAC <cit.>. The uncertainty in the (G_ BP - G_ RP) colour would be 0.002 mag compared to the observed spread of 0.222 mag in the (G_ BP - G_ RP) colour for the eMSTO stars. For further investigation, we divided the population of eMSTO stars into two groups. The eMSTO stars below the fiducial line were grouped as bMS stars, whereas those above the fiducial line were called rMS stars. We divided the MS into magnitude bins of 0.5 mag to derive the fiducial line. The fiducial line is the interpolation of the median of magnitude and colour of these magnitude bins. The bMS and rMS stars within the eMSTO regions are shown in Figure <ref>. There were 23 bMS stars and 31 rMS stars in the eMSTO region. We grouped MS stars beyond the eMSTO region as lower main sequence (lMS) stars. The red giant branch stars of the CMD are grouped as RGB stars. We divided the lMS stars into two groups: the lMS stars with colour bluer than the fiducial line as b-lMS stars and stars redder than the fiducial line as r-lMS stars. We have used these sub-samples of bMS, rMS, lMS, b-lMS, and r-lMS in the subsequent analysis in the paper. §.§ Physical properties §.§.§ Cluster properties through photometric analysis We used extinction, A_V, values for the cluster members given by <cit.> to calculate the reddening, E(B-V), values using relation E(B-V)=A_V/R_V where R_V is the total-to-selective extinction, taken to be 3.1, for the diffused interstellar medium <cit.>. We calculated the mean extinction, A_V, value to be 0.346±0.123 mag which corresponds to E(B-V) = 0.112±0.040 for NGC 2355. <cit.> used the broadband photometric data from Gaia EDR3, Pan-STARRS1, SkyMapper, AllWISE, and 2MASS to calculate the A_V values of stars with a typical precision of 0.13 mag up to G=14 mag. The calculated mean A_V value agrees with the mean A_V value of 0.323±0.018 estimated by <cit.> using Gaia DR2 optical G_ BP and G_ BP bands data. 
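As an illustration of how the fiducial line and the bMS/rMS split described above can be computed, the sketch below bins the member stars in 0.5-mag steps of G, takes the median colour in each bin, and interpolates; the minimum bin occupancy is an assumed detail. The input arrays g_mag and bp_rp stand for the Gaia magnitudes and colours of the cluster members.

```python
import numpy as np

def fiducial_line(g_mag, bp_rp, bin_width=0.5):
    """Median MS colour in 0.5-mag G bins, interpolated to every star's magnitude."""
    edges = np.arange(g_mag.min(), g_mag.max() + bin_width, bin_width)
    centers, medians = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (g_mag >= lo) & (g_mag < hi)
        if sel.sum() >= 3:                       # require a few stars per bin (assumed)
            centers.append(np.median(g_mag[sel]))
            medians.append(np.median(bp_rp[sel]))
    return np.interp(g_mag, centers, medians)

def split_emsto(g_mag, bp_rp):
    """Flag eMSTO stars (G < 14.16, 0.35 < BP-RP < 0.70) and split them about the fiducial line."""
    fid = fiducial_line(g_mag, bp_rp)
    emsto = (g_mag < 14.16) & (bp_rp > 0.35) & (bp_rp < 0.70)
    bMS = emsto & (bp_rp <= fid)                 # bluer than the fiducial line
    rMS = emsto & (bp_rp > fid)
    return bMS, rMS
```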
We also calculated the mean reddening values for bMS and rMS stars to be 0.099±0.027 and 0.098±0.043 mag, respectively. These values are in excellent agreement which indicates the absence of differential reddening for the eMSTO stars in NGC 2355. The distance of cluster NGC 2355 was determined through the parallax inversion method. We calculated the mean ϖ from the member stars to be 0.522 ± 0.046 mas, translating into a distance of 1853 ± 84 pc for NGC 2355. This distance was calculated after compensating the mean ϖ of NGC 2355 for a systematic global parallax offset of -0.017 mas for Gaia DR3 parallaxes estimated by <cit.>. The distance modulus calculated from the distance of this cluster was found to be 11.34 ± 0.10 mag. The age of this cluster was estimated by fitting extinction corrected <cit.> isochrones[http://stev.oapd.inaf.it/cgi-bin/cmd] of the metallicity Z = 0.0163 on the CMD for the obtained distance modulus and extinction. The age of the best-fit isochrone was found to be 1 Gyr. We determined the mass of the individual member stars through the isochrone fitting on the CMD of the cluster. The CMD of NGC 2355 with the best-fit isochrone is shown in Figure <ref> and the masses of the individual member stars are given in Table <ref>. We used the multiple-part power law form of mass function provided by <cit.> to calculate the total mass of NGC 2355. We estimated the total mass of NGC 2355 as described by <cit.>. In this method, the total mass, M_tot is calculated from the relation M_tot = A×∫_m_1^m_2 M^1-α dM. The normalization constant, A, in this equation was estimated from the relation N = A×∫_m_1^m_2 M^-α dM for m_1 = 1.0 M_⊙, m_2 = 2.0 M_⊙, and N=218. A brief description of this method can be found in our previous study <cit.>. Using this method, we obtained the total mass of the cluster to be 1.3±0.5 × 10^3 M_⊙ including members having masses above 0.08 M_⊙. §.§.§ Properties of individual stars through spectroscopic analysis We used the iSpec software solution package to estimate effective temperature (T_ eff) and projected rotational velocity (v sin i) of the stars <cit.>. The iSpec package provides various options for synthetic spectrum generation codes, line lists, and model atmospheres. We used the radiative transfer code SPECTRUM <cit.> and MARCS <cit.> atmosphere models available in the iSpec to generate the synthetic spectra. Solar abundances for the synthetic spectra were taken from <cit.> models. We used original line lists provided in the SPECTRUM package covering 300 to 1100 nm. We fixed microturbulent velocity and the limb darkening to 2 km s^-1 and 0.6 for all the spectra, respectively. We obtained the best-fit synthetic spectra to the observed spectra through the global χ^2 minimization of the parameters T_ eff, surface gravity log (g), and v sin i simultaneously. The value of a parameter corresponding to the best-fit synthetic spectra was considered as the estimated value of the parameter for the star. We have presented the illustrative plots of best-fit synthetic spectra on the observed spectra for slow and fast-rotating stars in Figure <ref>. The derived T_ eff and v sin i values, together with membership probabilities and other physical parameters of eMSTO stars, are listed in Table <ref>. The v sin i values we estimated for the eMSTO stars mostly agree with those reported by <cit.>. The high-resolution UVES spectra were available for RGB stars only. The v sin i and T_ eff values for RGB stars were determined using these high-resolution spectra. 
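The quoted distance and distance modulus can be reproduced in a few lines (uncertainty propagation is omitted here); note that the −0.017 mas zero-point offset is subtracted from the measured mean parallax, i.e., the corrected parallax is 0.522 + 0.017 = 0.539 mas.

```python
import numpy as np

plx_mas = 0.522                      # mean member parallax (mas)
offset_mas = -0.017                  # Gaia DR3 global parallax zero-point
plx_corr = plx_mas - offset_mas      # 0.539 mas after removing the offset

d_pc = 1000.0 / plx_corr             # ~1855 pc, consistent with 1853 +/- 84 pc
mu = 5 * np.log10(d_pc) - 5          # distance modulus ~ 11.34 mag
print(f"d = {d_pc:.0f} pc, (m - M) = {mu:.2f} mag")
```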
The estimated v sin i values for RGB stars range from 1.23±0.45 to 7.98±0.21 km s^-1. We estimated v sin i and T_ eff of eMSTO stars and lower MS stars from the medium-resolution GIRAFFE spectra. We found v sin i values for the eMSTO stars in the range 15.59±8.71 to 225.70±35.72 km s^-1. These spectroscopically estimated values of v sin i and T_ eff are used in the following analysis of the present study. §.§ Unresolved binaries The presence of the binary stars can also influence the morphology of the CMD of the clusters as an unresolved binary would appear redder and brighter than the single star of a mass similar to the mass of the primary component on the MS. This redder and brighter shift of binaries may resemble the eMSTO in the upper MS of the clusters. The magnitude of the binary system can be expressed as follows: m_ binary = m_ p - 2.5 log (1 + F_ sF_ p) where m_binary and m_p are the magnitudes of the whole binary system and the primary star, respectively. F_p and F_s denote the flux of the primary and the secondary stars of the binary system, respectively. The shift will be largest for the equal mass binaries as these binaries will be -2.5×log(2) ∼ 0.752 mag brighter than the single star on the MS. Therefore, we investigated the CMD of the cluster NGC 2355 by over-plotting <cit.> isochrone of the metallicity Z = 0.0163 corresponding to the equal mass binaries as shown in Figure <ref>. The stars lying below G = 16.0 mag on the MS, especially between the G = 16.0 and G = 17.5 mag, form a narrow MS separated from the binary sequence. The gap between the single stars sequence and equal mass binaries sequence becomes narrower towards the upper MS, especially near the turn-off region. The two sequences mostly contain the distribution of the stars on MS of the CMD, however, the majority of the rMS stars in the upper eMSTO region are intriguingly beyond the equal mass binary sequence. This indicates that the apparent colour shift due to unresolved companion will not be able to produce most of the spread in the red part of the eMSTO in the CMD of NGC 2355. However, the low mass ratio binaries can still be responsible for the spin-down of bMS stars of the eMSTO <cit.>. The bMS stars in the CMD are well contained within the single star sequence and binary sequence corresponding to mass ratio q=0.8 as shown in Figure <ref>. The magnitude shift for the binary sequence of mass ratio q = 0.8 was calculated to be 0.350 mag by using the luminosity-mass relation provided by <cit.> in the above equation for the magnitude of the binary system. We have discussed the spin-down scenario of the fast-rotating stars due to tidal-locking in the low mass ratio binaries present in the bMS population in Section <ref>. § ORIGIN OF THE EMSTO IN THE CLUSTER §.§ Spread in the rotation rates The spread in the rotation rates of stars has also been suggested to be a possible reason behind the origin of eMSTO in the Galactic open clusters <cit.>. We have shown the v sin i distribution of the stars on the CMD of the cluster NGC 2355 in Figure <ref>. We found no clear-cut distinction between the fast and slow-rotating stars on the lower MS of NGC 2355 as these stars are mostly slow rotators. However, there is a conspicuous preferential distribution of fast-rotating stars on the red part and slow-rotating stars on the blue part of the CMD in the eMSTO region of the cluster. We found the mean v sin i values for the bMS and rMS stars of the eMSTO to be 81.3±5.6 and 135.3±4.6 km s^-1, respectively. 
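The magnitude offsets quoted for unresolved binaries follow directly from the flux-ratio expression above, Δm = 2.5 log10(1 + F_s/F_p). The sketch below reproduces the 0.752 mag equal-mass shift and the ≈0.35 mag shift for q = 0.8, using an assumed mass-luminosity slope L ∝ M^4.3 as a stand-in for the relation cited in the text.

```python
import numpy as np

def binary_mag_offset(q, ml_exponent=4.3):
    """Brightening (mag) of an unresolved binary relative to its primary:
    Delta m = 2.5 log10(1 + F_s/F_p), with F_s/F_p = q**ml_exponent (assumed L ~ M^4.3)."""
    return 2.5 * np.log10(1.0 + q**ml_exponent)

print(binary_mag_offset(1.0))   # equal-mass pair: 2.5 log10(2) ~ 0.75 mag
print(binary_mag_offset(0.8))   # q = 0.8: ~0.35 mag with the assumed mass-luminosity slope
```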
We have shown the histogram of the v sin i values for the bMS and rMS stars in Figure <ref>. It can be noticed from the histogram that the bMS stars have projected rotational velocities up to ∼176 km s^-1 whereas the rMS stars possess v sin i values up to ∼226 km s^-1. Thus, the rMS stars exhibit a much wider range of rotational velocities, as reported in the previous study on the other clusters hosting eMSTOs <cit.>. However, the mean v sin i value for the stars belonging to the lower MS stars was found to be 26.5±1.3 km s^-1. The effects of stellar rotation, such as gravity darkening and dust-like extinction from the circumstellar excretion disc in the fast-rotating stars, may lead to a spread in the colour of the eMSTO stars on the CMD. The gravity-darkening effect can cause a decrease in the T_ eff values for the fast-rotating stars. We studied the distribution of effective temperature, T_ eff, as a function of the G_ BP - G_ RP colour for eMSTO stars in NGC 2355. We found average T_ eff values of 7421.3±34.8 and 7190.1±32.0 K for the bMS and rMS stars, respectively. We also investigated the relation between stellar rotation and the effective temperature of the eMSTO stars in the cluster. We have shown the projected rotational velocity in a T_ eff-colour diagram in Figure <ref>. We found that the fast-rotating rMS stars generally tend to have lower T_ eff values than their slow-rotating counterparts. The gravity-darkening also includes the inclination effect which causes the fast-rotating stars to appear bluer and brighter for the pole-on view compared to the equator-on view <cit.>. Thus, the combined effect of the viewing angle and gravity darkening might contribute to the emergence of the eMSTO in cluster NGC 2355. §.§ Dust-like extinction from the circumstellar excretion disc The material ejected from the fast-rotating stars forms the circumstellar excretion disc. The dust-like extinction from the excretion disc of the stars can cause fast-rotating stars to appear redder <cit.>. The absorption from the disc combined with the viewing angle may also produce a broadening of the upper MS, resulting in the eMSTO. This dust-like extinction is expected to be wavelength-dependent and can be detected in the ultraviolet CMD <cit.>. We used the uvw2 (1928 Å), uvm2 (2246 Å), and uvw1 (2600 Å) bands ultraviolet data to investigate the dust-like extinction properties of the eMSTO stars in the cluster NGC 2355. The ultraviolet CMDs for NGC 2355 are shown in Figure <ref>. We have shown the stars in the blue and the red parts of the eMSTO and the lower main sequence, lMS, by different groups in the figure to compare the UV extinction from them. The broadened upper MS is conspicuous in the UV CMDs except for uvm2 versus (uvw2-uvm2) CMD. We noticed an interesting case where stars belonging to the red part of the MS, including both the rMS and the r-lMS stars, of the optical CMD are bluer than their counterparts in the uvm2 versus (uvw2-uvm2) UV CMD. We did not find substantial evidence of any excess UV extinction from rMS stars in the eMSTO sample compared to the r-lMS stars in lower MS in NGC 2355 to support the hypothesis of dust-like extinction from the fast-rotating stars suggested by <cit.>. §.§ Possible mechanisms for spread in rotation rates §.§.§ Tidally-locked binaries The low mass ratio unresolved binaries may be present in the bMS population of the eMSTO which may have slowed down due to tidal locking. 
The tidal locking synchronizes the rotation rates of the primary and secondary stars in a binary system with their orbital motion <cit.>. The synchronization may slow down the stellar rotation by effectively transferring the angular momentum from the rotational motion to the orbital motion of the system. The effect of the synchronization of the rotation of the stars can be investigated by calculating the required synchronization time. We estimated the synchronization time, τ_sync, by following the formulae of <cit.> for MS binary stars with radiative envelopes with a primary star of mass ≥ 1.25 M_⊙. The formula can be expressed as follows: 1/τ_sync = 5× 2^5 / 3 (G M_ p/R_ p^3)^1 / 2 (M_ p R_ p^2/I_ p) q^2(1+q)^5 / 6 E_2 (R_ p/a)^17 / 2 where M_p, R_p, and I_p are the mass, radius, and moment of inertia of the primary star, respectively. The G in the equation denotes the gravitational constant. q represents the mass ratio of binary systems, while a denotes the separation between binary components. The physical quantities in the above equation are in the CGS units system. E_2 is the second-order tidal coefficient. The value of E_2 can be calculated using the following relation provided by <cit.> derived through fitting the values of E_2 and mass of stars given in the original study by <cit.>: E_ 2 = 1.592 × 10^-9 M_ p^2.84 where M_p is in the solar unit M_⊙. We estimated the radius of the primary star by using the empirical relations R_p ≈ 1.06 M^0.945 for M_p < 1.66 M_⊙ and R_p ≈ 1.33 M^0.555 in case of M_p > 1.66 M_⊙ provided by <cit.>. We have shown the impact of the mass ratio and separation between binary components on the synchronization time of the binary systems in Figure <ref>. From the above relations, we can deduce that the binary system with a higher primary star mass would have a shorter synchronization time than the binary system with a relatively lower primary star mass. Since the eMSTO stars in our sample have masses from ∼1.6 to 2.0 M_⊙, we have shown plots of the synchronization time distribution only for the two bounding masses, i.e., 1.6 M_⊙ and 2.0 M_⊙ in the figure. We noticed that the τ_sync sharply increases with the separation between the binary components for any particular mass ratio. The binary system with a larger separation between the components would have weaker tidal torque and hence, longer synchronization time. We also found that the τ_sync values for low mass ratio binaries were greater than the τ_sync for the relatively high mass ratio binaries of the same mass and separation. The synchronization time for the close binary systems (a < 7 R_⊙) with q ≥ 0.2 is less than the age of the cluster NGC 2355. For binaries with q < 0.2 also, the τ_sync is generally shorter than the age of the cluster up to a separation of ∼6 R_⊙. We expect that most of the close binary systems in the bMS population of the eMSTO stars may have synchronized rotational and orbital motions due to tidal locking and thus become slow-rotating as suggested by <cit.>. The radial distribution of the eMSTO stars in NGC 2355 is crucial for assessing the possibility of populating blue eMSTO stars from tidally braked slow-rotating stars. The tidally braked slow-rotating low mass ratio binaries of the bMS population are expected to be preferentially concentrated in the central region due to the mass segregation of the binary systems. To investigate the radial distribution of the bMS and rMS stars, we divided the stars of NGC 2355 into two groups: inner region and outer region stars. 
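For readers who want to reproduce estimates of τ_sync from the formula above, the sketch below evaluates it in CGS units using the quoted E_2 fit and main-sequence radius relations. The gyration radius entering the moment of inertia (I_p = k² M_p R_p²) is not specified in the text, so the value k² = 0.08 used here is an assumption.

```python
import numpy as np

G, M_sun, R_sun, YEAR = 6.674e-8, 1.989e33, 6.957e10, 3.156e7   # CGS constants

def tau_sync_yr(M_p, q, a, k2=0.08):
    """Synchronization time (yr) from the radiative-envelope formula quoted above.
    M_p in M_sun, a in R_sun; I_p = k2 * M_p * R_p^2 with an assumed gyration radius k2."""
    R_p = 1.06 * M_p**0.945 if M_p < 1.66 else 1.33 * M_p**0.555   # empirical MS radius (R_sun)
    E2 = 1.592e-9 * M_p**2.84                                      # second-order tidal coefficient
    M, R, A = M_p * M_sun, R_p * R_sun, a * R_sun
    I = k2 * M * R**2
    inv_tau = (5 * 2**(5/3) * np.sqrt(G * M / R**3) * (M * R**2 / I)
               * q**2 * (1 + q)**(5/6) * E2 * (R / A)**(17/2))
    return 1.0 / inv_tau / YEAR

# e.g. a 1.8 M_sun primary with q = 0.2 at a = 6 R_sun, to compare with the ~1 Gyr cluster age
print(f"{tau_sync_yr(1.8, 0.2, 6.0):.3g} yr")
```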
The circular region around the cluster center with a radius equal to the half mass radius of 0.104 provided by <cit.> was taken as the inner region. The remaining annular region was considered the outer region of the cluster. The inner and outer regions had 224 and 187 stars, respectively. We have shown the locations of these two different populations in the CMD of NGC 2355 in Figure <ref>. We found that the bMS stars of the eMSTO were preferentially located in the outer region of the cluster. This contradicts our expectation that bMS stars should be preferentially segregated in the inner region if they were tidally locked binary systems. Therefore, bMS stars seem less likely to be the tidally braked low mass ratio binaries. Similarly, a few previous studies also found that the stars belonging to the red part of the eMSTO were preferentially inner region stars <cit.>. As discussed in the previous section, only the low mass ratio binaries with massive primary stars can undergo a rapid tidal locking process. Based on findings from simulated populations, <cit.> suggested that low mass ratio binaries will have primary stars heavier than 5 M_⊙ for a population of 100 Myr age. In the case of clusters older than 100 Myr, the primary star in the low mass ratio binary would have already evolved off the MS <cit.>. Thus, the cluster NGC 2355 (age ∼ 1000 Myr) may not possess a tidally locked low mass ratio binary in its bMS population. §.§.§ Star-disc interaction <cit.> suggested that the spread in rotation may be caused by differences in the star-disc interaction during the PMS phase. The coupling between the star and the protoplanetary disc is thought to be responsible for extracting a large amount of angular momentum from the star <cit.>. However, as soon as the disc is dissipated or decoupled from the star, the latter is free to spin up as it contracts to the MS. Therefore, an early spin-up leads to a faster rotation on the MS since the star can spin up for a longer time. Many factors can affect the duration of the star-disc coupling. The disc can be dispersed by its accretion onto the star, dynamical encounter, photo-evaporation from the star itself, or surrounding massive stars <cit.>. The distribution of the rotation rates of these stars thus strongly depends on the environment during the formation stage. If the stars in a cluster face a higher rate of disc destruction, the cluster will host a larger number of fast-rotating stars leading to a change in its main sequence turn-off morphology. The stars in the central part of the cluster may face higher destruction of their disc due to dynamical interactions caused by higher stellar density and the photoionization from the larger fraction of massive stars towards the center <cit.>. The greater fraction of the destruction of the protoplanetary disc of stars may result in a larger fraction of the fast-rotating stars in the inner region of the cluster <cit.>. It can also be inferred from Figure <ref> that the rMS stars of the eMSTO, which comprise mostly fast-rotating stars, are preferentially located in the inner region of the cluster NGC 2355. Thus, our findings of the radial distribution of eMSTO stars in NGC 2355 indicate that the star-disc interaction is most likely responsible for the observed spread in rotation rates of the eMSTO stars. 
§.§ The lower mass limit for fast rotation We found that fast-rotating stars were brighter than G∼14.16 mag in NGC 2355 which corresponds to a mass of ∼1.56 M_⊙ for the metallicity Z = 0.0163 (see Figure <ref>). This mass limit coincides with the mass range of ∼1.5-1.6 M_⊙ where magnetic braking starts slowing down the stars <cit.>. The stars with masses above this mass limit on the MS are expected not to be efficiently spun down as they possess very thin convective envelopes. This mass limit increases with the increase in the metallicity of clusters <cit.>. The stars in the lower main sequence (below ∼1.56 M_⊙) of NGC 2355 were found to be rotating slowly with a mean projected rotational velocity of the 26.5±1.3 km s^-1. The lower MS is marked by a complete absence of fast-rotating stars indicating the magnetic braking of stellar rotation for the lMS stars in NGC 2355. The magnetic braking in low-mass stars is related to the presence of a thick convective envelope, which allows for the generation of a large-scale magnetic field through a dynamo process. The coupling of this large-scale field with the mass loss causes the stars to efficiently lose angular momentum and their spin rate to brake as they evolve. This likely explains both the absence of fast-rotating stars in the lower part of the CMD and the lesser spread of the MS as the structural effects of rotation become negligible at low rotation rates. The dynamo becomes inefficient above ∼1.5M_⊙ and the stars keep the fast rotation throughout their MS lifetime. The transition between the two regimes is known as the Kraft break <cit.>. § SUMMARY We study the origin of the extended main sequence turn-off present in the Galactic open cluster NGC 2355. It is a low mass (1.3±0.5 × 10^3 M_⊙) cluster with age of ∼1 Gyr. The MS stars, except rMS stars in NGC 2355, are contained between the single star and the equal-mass binary sequences. The majority of the rMS stars in the eMSTO region are distributed well beyond the equal mass binary sequence, which discards the possibility that the unresolved binaries could resemble eMSTO in NGC 2355 (see Figure <ref>). We further investigate the projected rotational velocity distribution of the stars in the cluster, which reveals that fast-rotating stars preferentially populate the red part of the eMSTO. The v sin i values for bMS, rMS, and lMS stars are found to be 81.3±5.6, 135.3±4.6, and 26.5±1.3 km s^-1, respectively. The spread in rotation rates of stars may lead to the origin of eMSTO due to various effects related to stellar rotation, such as dust-like extinction from excretion disc and gravity-darkening. We examine the dust-like extinction scenario through ultraviolet CMD created from the Swift near-ultraviolet data. We do not find any substantial evidence of the excess ultraviolet absorption in the fast-rotating rMS population of NGC 2355 to support the hypothesis that dust-like extinction from the circumstellar disc makes them appear redder on the MS compared to their slow-rotating counterparts. However, a careful inspection of the effective temperature of the stars hints toward the contribution of gravity darkening in the colour spread of the upper MS of NGC 2355. The spread in rotation rates of the eMSTO stars can be explained by mechanisms such as tidal interaction in the binary stars and star-disc interaction in the PMS phase of stars. 
The synchronization time for the likely low mass ratio close binaries belonging to bMS stars of NGC 2355 is mostly shorter enough for them to become slow-rotating stars through the tidal-locking process. To further inspect the tidal locking scenario, we analyze the radial distribution of the stars in NGC 2355. Against the general expectation that bMS stars should be preferentially located in the inner region if they were close binaries, we find that the bMS stars are mostly located in the outer region of the cluster. The possible cause for this discrepancy could be the absence of the low mass ratio close binaries in the bMS population. So, the tidal locking in the close binaries appears to be the less likely reason for the spread in rotation rates of the eMSTO stars in the NGC 2355 open cluster. The different star-disc interaction time in the PMS phase of the stars may also lead to the spread in rotation rates of the eMSTO stars. The longer the star-disc interaction time, the slower the rotation rate of the star would be. The early destruction of the protoplanetary disc of stars in the central region would be higher due to dynamical interactions and photoionization, which may lead to a higher concentration of the fast-rotating stars towards the center of a cluster. We also find that the rMS stars, mostly fast-rotating, are preferentially concentrated in the inner region of the cluster NGC 2355. Therefore, the star-disc interaction during the PMS phase seems to be the most likely mechanism for the spread in the rotation rates of the upper MS stars and, thus, the origin of the eMSTO in the open cluster NGC 2355. We also notice an absence of fast-rotating stars in the lower main sequence beyond ∼1.56 M_⊙ possibly because of the magnetic braking that effectively spins down their rotations. Further radial velocity analysis of the eMSTO stars involving high-resolution multi-epoch data will help us better understand the role of binary stars in the origin of the eMSTO in NGC 2355 and open clusters in general. § ACKNOWLEDGEMENTS This research was supported by the Chinese Academy of Sciences (CAS) “Light of West China” Program, No. 2022-XBQNXZ-013, the Tianchi Talent project, the Natural Science Foundation of Xinjiang Uygur Autonomous Region, No. 2022D01E86, and National Natural Science Foundation of China under grant U2031204. This research was supported by the Korea Astronomy and Space Science Institute under the R&D program (Project No. 2024-1-860-02) supervised by the Ministry of Science and ICT. This work acknowledges funding from the Centre National d'Étude Spatial (CNES) via an AIM/PLATO grant. The corresponding author, Jayanand Maurya, acknowledges the support of the Physical Research Laboratory at Ahmedabad, India, where the initial work of this paper was carried out. This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular, the institutions participating in the Gaia Multilateral Agreement. This research has made use of the tool provided by Gaia DPAC (https://www.cosmos.esa.int/web/gaia/dr3-software-tools) to reproduce (E)DR3 Gaia photometric uncertainties described in the GAIA-C5-TN-UB-JMC-031 technical note using data in <cit.>. 
§ DATA AVAILABILITY The derived data generated in this study will be shared upon reasonable request to the corresponding author. All other data used for the present study are publicly available.
http://arxiv.org/abs/2406.17856v1
20240625180038
In-in formalism for the entropy of quantum fields in curved spacetimes
[ "Thomas Colas", "Julien Grain", "Greg Kaplanek", "Vincent Vennin" ]
hep-th
[ "hep-th", "astro-ph.CO", "gr-qc" ]
http://arxiv.org/abs/2406.18970v1
20240627080115
Galois groups of reciprocal polynomials and the van der Waerden-Bhargava theorem
[ "Theresa C. Anderson", "Adam Bertelli", "Evan M. O'Dorney" ]
math.NT
[ "math.NT", "11R32, 11R45, 11C08, 11N35, 20E22" ]
Galois groups of reciprocal polynomials and the van der Waerden–Bhargava theorem Theresa C. Anderson, Adam Bertelli, and Evan M. O'Dorney July 1, 2024 ================================================================================ § ABSTRACT We study the Galois groups G_f of degree 2n reciprocal (a.k.a. palindromic) polynomials f of height at most H, finding that G_f falls short of the maximal possible group S_2 ≀ S_n for a proportion of all f bounded above and below by constant multiples of H^-1log H, whether or not f is required to be monic. This answers a 1998 question of Davis–Duke–Sun and extends Bhargava's 2023 resolution of van der Waerden's 1936 conjecture on the corresponding question for general polynomials. Unlike in that setting, the dominant contribution comes not from reducible polynomials but from those f for which (-1)^n f(1) f(-1) is a square, causing G_f to lie in an index-2 subgroup. § INTRODUCTION For a positive integer n, let E_n(H) denote the number of degree n monic separable polynomials f(x) = x^n + a_n-1 x^n-1 + ⋯ + a_1 x + a_0 with integer coefficients a_i ∈ [-H, H] whose Galois group is not S_n. Hilbert's Irreducibility Theorem implies that Galois groups not equal to S_n occur 0% of the time, in other words, E_n(H) = o(H^n). In 1936, van der Waerden <cit.> gave a quantitative upper bound and conjectured that the true order of growth is E_n(H) ≍ H^n-1, the lower bound coming from counting multiples of x (or any fixed monic linear polynomial). For the next several decades, progress continued, with many authors making improvements on the bound, including Knobloch (1955) <cit.>, Gallagher (1972) <cit.>, Chow and Dietmann (2020) <cit.> who proved it for n ≤ 4, and a group including the first author (2023) <cit.>. The conjecture (<ref>) was finally resolved by Bhargava (2023) (<cit.>; an abridged version of the paper has appeared in print <cit.>). Bhargava's method harnesses sophisticated known results (classification of subgroups of S_n, distribution of discriminants of number fields) in combination with innovative recent methods, including Fourier equidistribution and the use of the double discriminant_a_n_x f. In this paper, we study the analogous question for the subspace of reciprocal polynomials, which was previously posed by Davis–Duke–Sun <cit.>. A polynomial f is called reciprocal if f(1/x) = 1/x^deg f f(x), in other words, if the coefficient list of f is palindromic. We focus on the case where f is of even degree 2n (in the odd-degree case, f must be reducible, equal to x+1 times an even-degree reciprocal polynomial). The roots of f come in reciprocal pairs {α, 1/α}; the Galois group must preserve the partition into pairs and thus is a subgroup of the wreath product S_2 ≀ S_n, the subgroup of S_2n of order 2^n n! preserving this partition. In this paper, we provide a precise estimate for how often the group is strictly smaller: Let _n^(H) be the number of separable monic reciprocal polynomials f of degree 2n with coefficients in [-H,H] whose Galois group is not S_2 ≀ S_n. Then for each n ≥ 2, _n^(H) ≍ H^n - 1log H. 
Here and throughout the paper, if f(H), g(H) are real-valued functions of a sufficiently large real number H, then the usual notations f(H) ≪ g(H), g(H) ≫ f(H), f(H) = O(g(H)) mean that f(H) < c ·g(H) for sufficiently large H and some constant c, while the notations f(H) ≍ g(H), f(H) = Θ(g(H)) mean that f(H) ≪ g(H) and f(H) ≫ g(H). Finally, f(H) = o(g(H)) means that lim_H →∞ f(H)/g(H) = 0. The implied constants may depend on n but not on H. Theorem <ref> is significant for several reasons: * This is a sharp improvement on the work of Davis–Duke–Sun <cit.>, who show _n^(H) ≪ H^n-1/2log H. * The correct order of growth is notH^n-1, as one might expect by counting the reducible polynomials by analogy with van der Waerden's conjecture. Instead, the dominant contribution comes from reciprocal polynomials f such that (-1)^n f(1) f(-1) is a square. This makes f a square and causes the Galois group to lie in an index-2 subgroup, called G_1 in the classification below. * Since the characteristic polynomial of a symplectic matrix is reciprocal, a potential application of this work is to understand the characteristic polynomials of random elements of Sp_2n(), extending the work of Rivin <cit.> for SL_n() and that of the first author and Lemke Oliver <cit.>. Additionally, there are interesting connections to results in quantum chaos (see below). As with Bhargava <cit.> and many other papers on van der Waerden's conjecture, the methods are largely indifferent to whether we look at monic polynomials or general non-monic polynomials f with all coefficients ranging through a box [-H,H]^n+1. The analogue of Theorem <ref> for the non-monic setting is: Let _n(H) be the number of separable reciprocal polynomials f of degree 2n with coefficients in [-H,H] whose Galois group is not S_2 ≀ S_n. Then for all n ≥ 1, _n(H) ≍ H^nlog H. Here the range of applicability is n ≥ 1. In Theorem <ref>, we must exclude n = 1, because a monic reciprocal quadratic polynomial f(x) = x^2 + a_1 x + 1 has full Galois group S_2 for all a_1 ≠± 2. Note that a degree 2n polynomial f is reciprocal if and only if the Cayley-transformed polynomial f(x) = (1+x)^2n f1-x/1+x is even, that is, f(x) = g(x^2) for a polynomial g. (This is a straightforward computation and well known, see for instance <cit.>.) For an even polynomial, the roots again come in pairs {α, -α}. So we also have the following corollary: The number of degree 2n even polynomials f with coefficients in [-H,H] whose Galois group is not S_2 ≀ S_n is Θ(H^n log H). Note that in this setting, the condition for the Galois group to lie in G_1 is that the product (-1)^n a_2n a_0 of the first and last coefficients of f be a square. If we impose a_2n = 1, the likelihood of this rises to O(H^-1/2), so the naïve analogue of Corollary <ref> in the monic setting is false. As alluded to earlier, there are potential applications of our work to the area of quantum chaos. In particular, studying the mass of eigenfunctions in quantum chaos is an area of great mathematical interest. This area shares exciting connections with reciprocal polynomials via quantum cat maps, a toy model of study in the area, that are given by symplectic matrices A. In particular, an important object to study the mass of eigenfunctions is the semiclassical measure μ associated to A. It turns out that μ has nice support properties if the characteristic polynomial of A^m for all m ∈ℕ (including A itself) is irreducible over the integers. 
In an appendix to a forthcoming paper of Elena Kim, the first author and Lemke Oliver show that this irreducibility happens 100% of the time and moreover that the generic Galois group of such polynomials is the wreath product S_2 ≀ S_n (see <cit.> for more precise information and connections). Based on this recent result, we conjecture that if M is a random n × n matrix, then the characteristic polynomials not only of M but of its powers M^i, i ≥ 1, are almost surely S_n, where “almost surely” entails error bounds of van der Waerden type. This can be interpreted as saying that, if f is the characteristic polynomial, then not only its root α but each of its powers α^i, i ≥ 1, generates an S_n-extension of . The present work can be regarded as an extension of this to the fractional power √(α). It is natural to hope that the higher-order roots √(α) of polynomials g(x^m) can be handled in a similar way, the generic Galois group now being the semidirect product (μ_m ≀ S_n) ⋊ (/m)^. Our methods are parallel to Bhargava's in <cit.>, with judicious modifications when necessary. In Section <ref>, we lay out preliminary facts about reciprocal polynomials. In Section <ref>, we classify maximal subgroups of S_2 ≀ S_n, and in particular, we show that we can narrow our focus to three groups, which we denote G_1, G_2, and G_3. In Sections <ref>, <ref>, and <ref>, we count the number _n(G_i; H) of polynomials having each of those Galois groups (or a subgroup thereof). The G_1-polynomials we can count by direct parametrization. To count the G_3-polynomials f, we take advantage of the fact that f is reducible over a quadratic extension of . The heart of the argument is to handle G_2, which plays a challenging role like that of the alternating group A_n in the study of general polynomials. Following Bhargava, we divide up the polynomials into three cases based on the size of the discriminant and its prime divisors. While attacking each case in turn, we are led to apply Fourier analysis for equidistribution and to construct a suitably modified double discriminant. The Fourier analysis is more involved than Bhargava's and involves breaking up the desired count into terms supported on different sublattices of the lattice V() of reciprocal polynomials. This type of decomposition is used in harmonic analysis to split up a function into pieces with different Fourier properties, but our use is perhaps novel in this setting. Because Theorems <ref> and <ref> are so similar, we focus on Theorem <ref>, the non-monic setting, which is the technically simpler of the two. At the end of each of Sections <ref>, <ref>, and <ref>, we explain how the proof must be adapted to the monic case to prove Theorem <ref>. §.§ Acknowledgements TCA was partially supported by the NSF (grants 2231990, 2237937). We thank Hongyi (Brian) Hu and the rest of the participants in the Research Topics in Mathematical Sciences semester at CMU in spring 2024 for their contributions to the discussion. We thank Igor Shparlinski for insightful comments. We thank Robert Lemke Oliver for a careful read of an earlier draft. § RECIPROCAL POLYNOMIALS We define the height of an integer polynomial P(x) = c_n x^n + c_n-1 x^n-1 + ⋯ + c_1 x + c_0 ∈[x] to be the maximum of the coefficients, P = max{c_n, c_n-1, …, c_0}∈_≥ 0. Let f(x) = a_0 x^2n + a_1 x^2n-1 + ⋯ + a_n-1 x^n+1 + a_n x^n + a_n-1 x^n-1 + ⋯ + a_1 x + a_0 be a reciprocal polynomial of degree 2n. 
Note that there is a unique degree n integer-coefficient polynomial g(u) = b_n u^n + b_n-1 u^n-1 + ⋯ + b_1 u + b_0 such that f(x) = x^n g(x + 1/x). The passage between f and g is bijective and linear, and it increases or decreases heights by at most a bounded factor, depending only on n. Hence it is immaterial whether we count f or g of height at most H. For most purposes, it is more convenient to count g. We denote the roots of f by α_1, 1/α_1, α_2, 1/α_2, …, α_n, 1/α_n. Then the roots of g are β_1,…, β_n, where β_i = α_i + 1/α_i. We will sometimes write α = α_1 and β = β_1 when the choice of root is irrelevant. In view of the main theorem we would like to prove, we can assume any statement that occurs for all but O(H^n) of the O(H^n + 1) polynomials g of height at most H. For example, we may assume that g is irreducible and that g(2) and g(-2) are nonzero. We have the tower of number fields K_f = ℚ(α) ⊇ K_g = ℚ(β) ⊃ ℚ, where K_g is an S_n-extension of degree n, while K_f/K_g is of degree at most 2, given by K_f = K_g(√(β^2 - 4)). Let K_g and K_f, respectively, be the splitting fields of g and f, and let G_g and G_f be their respective Galois groups, which are subgroups of S_n and S_2 ≀ S_n, with G_f ↠ G_g under the natural projection S_2 ≀ S_n ↠ S_n. By the main result of Bhargava <cit.>, we can assume that G_g is the whole S_n. Our aim in this paper is to understand when G_f is not the whole S_2 ≀ S_n. By the usual formula for the discriminant of a number field tower, we have disc( K_f) = disc( K_g)^2 · N_K_g/ℚ(disc_K_g K_f) as ideals in ℤ. The following is a closely related result on the discriminants of the associated polynomials. disc f = (-1)^n g(2) g(-2) (disc g)^2. We have disc f = a_0^4n - 2∏_i=1^n (α_i-α_i^-1)^2 ·∏_i<j(α_i-α_j)^2(α_i-α_j^-1)^2(α_i^-1-α_j)^2(α_i^-1-α_j^-1)^2 = b_n^4n - 2∏_i=1^n (β_i^2-4)·∏_i<jα_i^-4α_j^-4(α_i-α_j)^4(α_iα_j-1)^4 = (-1)^n · b_n ∏_i=1^n (2-β_i) · b_n ∏_i=1^n (-2-β_i) · b_n^4n - 4∏_i<j(β_i-β_j)^4 = (-1)^n · g(2)· g(-2)·(disc g)^2. 
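A quick SymPy check of the f ↔ g correspondence described above, for an arbitrarily chosen example g: the constructed f(x) = x^n g(x + 1/x) has a palindromic coefficient list, and each root α of f satisfies g(α + 1/α) = 0.

```python
from sympy import symbols, expand, Poly

x, u = symbols('x u')
n = 3
g = Poly(2*u**3 - u**2 + 3*u + 1, u)                 # an arbitrary degree-n example

# f(x) = x^n g(x + 1/x) is reciprocal: its coefficient list is palindromic
f = Poly(expand(x**n * g.as_expr().subs(u, x + 1/x)), x)
print(f.all_coeffs() == f.all_coeffs()[::-1])        # True

# every root alpha of f yields a root beta = alpha + 1/alpha of g
for a in f.nroots():
    print(abs(complex(g.as_expr().subs(u, a + 1/a))) < 1e-9)
```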
By Bhargava's result <cit.>, the extension K_g has full Galois group S_n for all but O(H^n) polynomials g (or O(H^n-1) in the monic case), so Theorems <ref> and <ref> are already known for subgroups of type <ref>. To classify the subgroups of type <ref>, we must find the maximal S_n-invariant subspaces of X = _2^n, a vector space equipped with an S_n-action permuting the n basis vectors freely. For convenience we let = (0, …, 0), = (1, …, 1) ∈ X. We use a superscript ⊥ to denote the orthogonal complement of a space under the S_n-invariant inner product (x_1,…, x_n) ∙ (y_1,…, y_n) = ∑_i = 1^n x_i y_i. Let X = _2^n, with the permutation action of S_n. The S_n-invariant subspaces of X are as follows: * X * ^⊥ = {𝐯 = (v_1,…, v_n) : ∑_i v_i = 0} * = {, } * {0}. Let W ⊆ X be an invariant subspace. If W contains a vector 𝐯 = (v_1,…,v_n) besides and , then upon applying some element of S_n we can assume v_1 = 1, v_2 = 0. Then W ∋𝐯 + (12)𝐯 = 1, 1, 0, …, 0. Applying further permutations from S_n, we get that W contains every vector with exactly two nonzero coordinates. Then W contains their span, which is ^⊥. Hence W = ^⊥ or W = X. Among these, the maximal subgroups are * ^⊥, and * , for n odd (since for n even, ⊆^⊥). Note that for n odd, we have a direct sum decomposition X = ⊕^⊥. We now classify the groups G using group cohomology as follows: Let X ⋊ S be a semidirect product of groups, with X abelian, and let K ⊆ X be a subgroup fixed by S. The subgroups of G ⊆ X ⋊ S such that G X = K are parametrized by 1-cocycles ϵ∈ Z^1(S, X/K), in other words, maps ϵ : S → X/K satisfying the cocycle condition ϵ(στ) = ϵ(σ) + σ(ϵ(τ)). The map sends each ϵ to the group G = {(x, σ) : x ≡ϵ(σ) K}. Moreover, two such subgroups G, G' are conjugate if and only if the corresponding ϵ, ϵ' are in the same cohomology class in H^1(S, X/K); in other words, if there is a y ∈ X such that for all σ∈ S, ϵ'(σ) = ϵ(σ) + σ(y) - y. This is a fairly standard use of group cohomology. Indeed, it is easy to check that G must have the form (<ref>), that closure under multiplication enforces (<ref>), and that conjugation by y ∈ X induces (<ref>). (Since G surjects onto S, conjugation by X is sufficient to produce all the conjugates of G in X ⋊ S.) The maximal subgroups of S_2 ≀ S_n whose projection onto S_n is the whole group are, up to conjugation, as follows: G_1 = {(𝐯, σ) ∈_2^n ⋊ S_n : ∑_i v_i = 0} = ^⊥⋊ S_n, n ≥ 1 G_2 = {(𝐯, σ) ∈_2^n ⋊ S_n : ∑_i v_i = σ}, n ≥ 2 G_3 = S_n, n ≥ 3 odd By the preceding lemmas, we are left with computing H^1(S_n, X/Y) for each of the maximal S_n-invariant subspaces Y in Lemma <ref>. If Y = ^⊥, then H^1(S_n, X/^⊥) = H^1(S_n, C_2) = (S_n, C_2) = C_2, the two maps ϵ being the zero map and the sign map, giving the subgroups G_1 and G_2 claimed. If Y = for n odd, we must compute H^1(S_n, X/). Since n is odd, we have that X = ⊕^⊥ is a direct sum and X/^⊥. First consider H^1(S_n, X). Note that with this action, X = _S_n-1^S_n C_2 is an induced module, so by Shapiro's lemma, H^1(S_n, X) = H^1(S_n-1, C_2) = (S_n-1, C_2) = C_2. Hence H^1(S_n, X/) = H^1(S_n, X)/H^1(S_n, C_2) = 0. Therefore there is only the trivial extension G_3. The restrictions on n are provided due to the fact that, for some n, the G_i are nonmaximal or coincident: * For n = 1, G_2 = G_1 and G_3 = S_2 ≀ S_n; * For n even, G_3 ⊆ G_1. A proof of Theorem <ref> without group cohomology is possible, but it involves some tedious case analysis with many computations of products in S_n and S_2 ≀ S_n. 
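The following brute-force check (plain Python; not part of the paper) verifies for n = 3 that G_1, G_2, and G_3 as defined above are closed under the wreath-product multiplication and have the expected orders 24, 24, and 6 inside the full group of order 2^3 · 3! = 48. The multiplication rule (v, σ)(w, τ) = (v + σ·w, στ) used here is one standard convention for the semidirect product; the membership tests do not depend on this choice.

```python
# Brute-force sketch in plain Python; not part of the paper.
import itertools

def parity(p):
    """Parity of a permutation p (tuple with p[i] = sigma(i)): 0 = even, 1 = odd."""
    seen, par = [False] * len(p), 0
    for i in range(len(p)):
        if not seen[i]:
            j, clen = i, 0
            while not seen[j]:
                seen[j], j, clen = True, p[j], clen + 1
            par ^= (clen - 1) & 1
    return par

def act(s, v):
    """Permutation action on F_2^n: (s.v)_{s(i)} = v_i."""
    w = [0] * len(v)
    for i, vi in enumerate(v):
        w[s[i]] = vi
    return tuple(w)

def mult(a, b):
    """Semidirect-product law (v, s)(w, t) = (v + s.w, s o t)."""
    (v, s), (w, t) = a, b
    return (tuple((p + q) % 2 for p, q in zip(v, act(s, w))),
            tuple(s[t[i]] for i in range(len(s))))

n = 3
group = [(v, s) for v in itertools.product((0, 1), repeat=n)
                for s in itertools.permutations(range(n))]

tests = {'G1': lambda v, s: sum(v) % 2 == 0,
         'G2': lambda v, s: sum(v) % 2 == parity(s),
         'G3': lambda v, s: not any(v)}

for name, member in tests.items():
    sub = [g for g in group if member(*g)]
    closed = all(member(*mult(a, b)) for a in sub for b in sub)
    print(name, len(sub), closed)   # expect 24, 24, 6; all closed (|S_2 wr S_3| = 48)
```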
With a few more cohomological computations, we can classify all the subgroups of S_2 ≀ S_n that surject onto S_n. They are the whole S_2 ≀ S_n, the groups G_1, G_2, and G_3 (without parity restrictions on n), and the following additional groups: * {0} S_n S_n * {((σ), σ) : σ∈ S_n}, a twisted copy of S_n * and, for n = 4 only, the group _2 _3, which acts on the 8-element set _3^2 ∖{0}, preserving the partition into opposite pairs of vectors. In terms of its map to S_4, this is the double cover denoted 2· S_4^+ in <cit.>. For instance, this is the generic Galois group of the even octic minimal polynomial of the y-coordinate of a 3-torsion point on an elliptic curve over . It is a proper subgroup of the maximal subgroup G_2. To prove Theorems <ref> and <ref>, we must bound the number of polynomials f, equivalently g, for which G_f is (conjugate to) a subgroup of G_1, G_2, or G_3 for the values of n listed in Theorem <ref>. For G ⊆ S_2 ≀ S_n and H ≥ 2, let _n(G; H) be the number of separable reciprocal polynomials f of degree 2n with coefficients in [-H, H] such that G_g = S_n and G_f ⊆ G. Let _n^(G; H) count the subset of these that are monic. The following lemma reduces computing _n(G_1; H) and _n(G_2; H) to number-theoretic conditions on g that must hold in order to fulfill these conditions. (For G_3, we will use a different technique: see Section <ref>.) Assume that G_g is the whole of S_n. Then: * G_f ⊆ G_1 if and only if (-1)^n g(2) g(-2) is a square. * G_f ⊆ G_2 if and only if (-1)^n g(2) g(-2) g is a square. For <ref>, note that G_1 is the preimage of A_2n under the usual inclusion S_2 ≀ S_n ↪ S_2n. Hence G_f ⊆ G_1 if and only if f = (-1)^n g(2) g(-2) ( g)^2 is a square, which happens exactly when (-1)^n g(2) g(-2) is a square. For <ref>, consider the embedding of S_2 ≀ S_n into S_3n given by its action on the disjoint union of the roots of f and of g. Note that G_2 is the preimage of A_3n under this embedding. Hence G_f ⊆ G_2 if and only if f · g is a square, which happens exactly when (-1)^n g(2) g(-2) g is a square. § COUNTING G1-POLYNOMIALS We first deal with the case G_1, which yields the main term of Theorems <ref> and <ref>. For n ≥ 1, _n(G_1; H) ≍ H^nlog H and for n ≥ 2,_n^(G_1; H) ≍ H^n - 1log H. By Lemma <ref><ref>, it suffices to count the number of g such that (-1)^n g(2) g(-2) is a square z^2. The number of solutions to the equation x y = z^2, 1 ≤ x, y, z ≤ H (H ≥ 2) is Θ(H log H). A parametrization of the solutions is given by x = k u^2, y = k v^2, z = k u v where k, u, v are positive integers and (u,v) = 1. For each k, 1 ≤ k ≤ H, the pair (u,v) is chosen from the box 1 ≤ u, v ≤√(H/k), and the number of coprime pairs in this box is Θ(H/k) (the lower bound comes from citing the limiting proportion 6/π^2 > 0 of coprime pairs when H/k is large, and noting that there is always at least one solution u = v = 1). So the total number N of solutions satisfies N ≍∑_k = 1^H H/k ≍ H log H, as desired. For simplicity, we prove the non-monic case (<ref>). Since g(2), g(-2) ≪ H and each pair (x, y) = g(2), g(-2) appears O(H^n-1) times, we get that G_f ⊆ G_1 at most O(H^n log H) times. Conversely, if we take |x|, |y| ≤ c H for an appropriate constant c, and with x ≡ y 4 (which can be arranged, for instance by taking 4 | k), we find that there are Θ(H^n-1) polynomials g with g(2) = x and g(2) = y, and thus Θ(H^n-1log H) polynomials overall with Galois group G_f ⊆ G_1. §.§ Remarks on the monic case For the monic case, the argument is identical, replacing n by n - 1. 
It is only necessary to have at least two free coefficients so that g(2) and g(-2) can be adjusted independently, requiring n ≥ 2. § COUNTING G2-POLYNOMIALS For G_2, we prove the following bounds, which are stronger than those for G_1 by a factor of log H: For n ≥ 2, _n(G_2; H) ≪ H^n _n^(G_2; H) ≪ H^n - 1. By Lemma <ref><ref>, it suffices to count the number of g such that (-1)^n g(2) g(-2) g is a square. We use a sieve method adapted from Bhargava <cit.>. We begin with some analytic preliminaries. §.§ Twisted Poisson summation Let Φ : ^n → be a Schwartz function. We normalize the Fourier transform by Φ(y) = ∫_^n e^-2π√(-1) x ∙ yΦ(x) dx. The usual Poisson summation formula ∑_x ∈^nΦ(x) = ∑_y ∈^nΦ(y) can be extended in various ways. If L ⊆^n is a lattice (a subgroup of finite index), then ∑_x ∈ LΦ(x) = 1/[^n : L]∑_y ∈ L^*Φ(y), where L^* ⊇^n is the dual lattice. More generally: Let L ⊆^n be a lattice, and let Ψ : (/M)^n → be any function, where the modulus M is coprime to [^n : L], and let Ψ : (/M)^n → be its Fourier transform Ψ(y) = 1/M^n∑_x ∈ (/M)^n e^-2π√(-1) x ∙ y/M. For any Schwartz function Φ, ∑_x ∈ LΨ(x) Φ(x) = 1/[^n : L]∑_y ∈ L^*Ψ(y) Φy/M. Observe that Ψ is well defined on L^* since M is coprime to [^n : L]. (In fact, one can do without this hypothesis, but then Ψ becomes a function on L^*/ML^* instead of being independent of L.) To prove this proposition, one may assume by linearity that Ψ is supported on a single point, and the result reduces to Poisson summation. §.§ The index of a polynomial over a field Following Bhargava <cit.>, we make the following definition. If P ∈[x,y] is a nonzero homogeneous polynomial over a field, we factor P = ∏_i P_i^e_i into powers of distinct irreducibles and define the index of P to be (P) = ∑_i (e_i - 1) f_i. The index is significant for bounding the power of a prime p dividing the discriminant of the polynomial and the extension field it defines. The following lemma is used implicitly in <cit.>; for completeness, we offer a statement and proof. Let g ∈ R[x,y] be a separable homogeneous binary form of degree n over a Dedekind domain R. Let F = R, and let E = F[β]/g(β, 1) be the étale algebra (product of separable finite field extensions) defined by g. If is a prime ideal of R such that E is tamely ramified at (e.g. (R/) > n) and g is not identically zero modulo , then v_( E) ≤(g p) ≤ v_( g). We have deliberately stated the lemma in fuller generality than needed to allow for making some reductions. First, we may replace R by its completion at . Let be the residue field of R. Now, if g ≡ g_1 g_2 mod with g_i ∈[x,y] homogeneous and coprime, then by Hensel's lemma, the factorization lifts to R and induces a splitting E = E_1 E_2. The inequality (<ref>) can then be deduced by summing the corresponding inequalities for g_1 and g_2. Thus we may assume that g ≡ c · g_1^e , where c is a constant and g_1 ∈[x,y] is irreducible of some degree f, with e f = n. We may change coordinates so that g_1 ≠ y is monic in x. Now E = ∏_j = 1^r E_j is a product of fields with the same inertia index f and possibly different ramification indices e_j. We compute, using the usual formula for the discriminant of a tamely ramified extension: * First, v_( E) = ∑_j = 1^r v_( E_i) = ∑_j = 1^r (e_j - 1) f = n - rf. * (g p) = (e - 1) f = n - f. * Finally, we need to understand the ring S = R[β]/g(β, 1) ⊂ E. Note that S is contained in S' = {(x_1, …, x_r) ∈_E : x_1 ≡⋯≡ x_r }, a subring of _E of index (r - 1)f. 
Thus v_( g) = v_( S) ≥ v_( S') = v_( E) + 2 v_([_E : S']) = n - rf + 2(r - 1) f = n - f + (r - 1) f. The desired inequality follows immediately. Equality for both parts holds exactly when r = 1, or in the original setup, when ∤ [_E : S]. Following Bhargava <cit.>, we define D = K_g C = ∏_p| D D and divide the counting into three cases based on the sizes of C and D relative to H. Unlike in <cit.>, we do not have that D is squarefull; but by Lemma <ref><ref> we have that (-1)^n D g(2) g(-2) is a square, which limits the cases. Let δ be a small constant (such as 1/4n). For R a ring, denote by V^(R) the (n+1)-dimensional space of binary n-ic forms P(x,y) over R, and denote by V(R) the space of polynomials g(u) of degree at most n over R. The two R-modules are isomorphic, but we will need to identify them in multiple ways. §.§ Case I: C < H, D > H² In this case, we need to estimate the number of g for which D = (K_g), g(2), and g(-2) have certain factors. We begin with a short argument that yields the result up to ϵ. Let p be a prime, and let k be an integer. The number of binary forms g ∈ V^(_p) such that * g(1,0) = 0, that is, y | g, and * (g) ≥ k is O(p^n-k). We can immediately dispose of the case p ≤ n, for here both the number of g and the desired bound are O(1). In <cit.>, it is shown that the number of degree-n binary forms g such that (g) ≥ k is O(p^n+1-k). To impose the condition g(1,0) = 0, we can consider each g as lying in the family of p translates g(x,y + a x), a ∈_p. The translates all have the same index, and if p > n, the translates are all distinct. Moreover, since g has at most n roots over _p, at most n = O(1) of the translates satisfy the added condition g(1,0) = 0. Hence the total number of such g is O(p^n-k), as desired. Let p > n be a prime dividing C, and suppose p^k ∥ D. By Lemma <ref> (left part), we have (g) ≥ k, and this occurs for a proportion p^-k of g. If k is odd, then we additionally have p | g(2) or p | g(-2), and altogether there is a proportion p^-k-1 of g satisfying these conditions. Multiplying over p, the proportion of G_2-polynomials with K_g = D is bounded by ∏_p| D Op^-2k/2 = Oc^ω(D_1)/D_1^2, where D_1^2 = ∏_p| D p^2k/2 is the least square divisible by D. Observe that D_1 ≫ H and that each D_1 occurs for at most 2^ω(D_1) values of D. Moreover, these g are cut out by congruence conditions mod C. If C ≤ H, then we can estimate the number of lattice points very precisely because our modulus is lower than the size of the box. We get that the number of polynomials g is ≪ H^n + 1∑_D_1 ≥ Hc^ω(D_1)/D_1^2≪_ϵ H^n + ϵ. Using Fourier analysis we can remove the ϵ and also extend the validity of this case from C ≤ H to C ≤ H^1 + δ. Recall some definitions from <cit.>. If a binary n-ic form f (over , or over _p) factors modulo p as ∏_i=1^r P_i^e_i, with P_i irreducible and (P_i)=f_i, then the splitting type(f,p) of f is defined as (f_1^e_1⋯ f_r^e_r), and the index(f) of f modulo p (or the index of the splitting type (f,p) of f) is defined to be ∑_i=1^r (e_i-1)f_i. If p | f, we put (f) = ∞. More abstractly, a splitting type is an expression σ of the form (f_1^e_1⋯ f_r^e_r), where the f_i and e_i are positive integers. The degree(σ) is ∑_i=1^r e_i f_i, and the index(σ) is ∑_i=1^r (e_i-1)f_i. Finally, #(σ) is defined to be ∏_i f_i times the number of permutations of the factors f_i^e_i that preserve σ. In this work, we will need to deal with splitting types with a distinguished factor of degree 1. 
If σ has f_1 = 1, we let #'(σ) be ∏_i f_i times the number of permutations of the factors f_i^e_i, i ≥ 2, that preserve σ. We first recall the following lemma of Bhargava: Let σ=(f_1^e_1⋯ f_r^e_r) be a splitting type with (σ)=d and (σ)=k. Let w_p,σ:V^(_p)→ be defined by w_p,σ(f) the number of r-tuples (P_1,…,P_r), up to the action of the group of permutations of {1,…,r} preserving σ, such that the P_i are distinct irreducible binary forms where, for each i, we have P_i(x,y) is y or is monic as a polynomial in x, P_i=f_i, and P_1^e_1⋯ P_r^e_r| f. Then w_p,σ(g)= p^-k/#(σ) + O(p^-(k+1)) if g=0; O(p^-(k+1)) if g ≠ 0. The significance of this function w_p,σ is twofold: First, w_p,σ(f) is nonnegative, and is equal to 1 when f has splitting type σ; second, the definition is arranged so that w_p,σ is a sum of characteristic functions of subspaces, which makes the Fourier transform nonnegative, small, and easily computable. Bhargava uses this lemma to bound the number of integer polynomials having high index at a set of primes. In our setting, we also need to bound the number of integer polynomials having high index and which vanish at a given point mod p; hence we modify the lemma as follows: Let σ=(f_1^e_1⋯ f_r^e_r) be a splitting type with f_1 = 1. Let (σ)=d and (σ)=k. Let w'_p,σ:V^(_p)→ be defined by w'_p,σ(f) the number of r-tuples (P_2,…,P_r), up to the action of the group of permutations of {2,…,r} preserving σ, such that the P_i are distinct irreducible binary forms where, for each i, we have P_i(x,y) is monic as a polynomial in x, P_i=f_i, and y^e_1 P_2^e_2⋯ P_r^e_r| f. Let V^_e_1,_p denote the subspace of V^(_p) comprising polynomials divisible by y^e_1, and let (V^_e_1, _p)^⊥⊆ V^(_p)^* be its dual. Then w'_p,σ(g)= p^-(k+1)/#'(σ) + O(p^-(k+2)) if g ∈(V^_e_1, _p)^⊥; O(p^-(k+2)) if g ∉(V^_e_1, _p)^⊥. Rather than starting from scratch, we derive this lemma from the preceding one. Indeed, let σ' be the splitting type obtained by deleting the first factor f_1^e_1 = 1^e_1 from σ. Then (σ') '(σ), and (σ') = k - e_1 + 1. We have that w'_p,σ(f) vanishes unless f ∈ V^_e_1,_p (implying in particular that w'_p,σ is constant on cosets of (V^_e_1,_p)^⊥), and w'_p,σ(f) = w_p,σ'(f/y^e_1) is almostw_p,σ(f/y^e_1). We say “almost” because we need to exclude the case that one of the P_i is y; so we define w_p,σ to be just like w_p,σ, except that the P_i(x,y) in the definition are not allowed to equal y. Since w_p,σ' is a sum of characteristic functions of subspaces corresponding to the various choices of the P_i and w_p,σ is obtained by deleting some of the terms from this sum, we have, by Lemma <ref>, w_p,σ'(f) ≤ w_p,σ'(f) w_p,σ'(g) ≤w_p,σ'(g) = p^-(k-e_1+1)/#'(σ) + O(p^-(k-e_1+2)) if g ∈ V_e_1, _p^⊥; O(p^-(k-e_1+2)) if g ∉ V_e_1, _p^⊥. Re-embedding V^_e_1,_p into V^(_p), the Fourier transform drops by a factor of p^e_1, yielding upper bounds of the claimed order of magnitude. To obtain that the main term is undiminished by the w ↦w replacement, we can argue that w_p,σ = w_p,σ - ∑_i : f_i = 1w_p,σ_i where σ_i is obtained by deleting f_i^e_i from σ. Thus w_p,σ(0) = w_p,σ(0) - ∑_i : f_i = 1w_p,σ_i(0) = p^-(k-e_1+1)/#'(σ) + O(p^-(k-e_1+2)) + ∑_i : f_i = 1 O(p^-(k-e_1-e_i+1)) = p^-(k-e_1+1)/#'(σ) + O(p^-(k-e_1+2)), as desired. We can now estimate the number of polynomials in Case I. Let D be a positive integer. Let C = ∏_p| D p be its radical, and let D'^2 = ∏_p| D p^2v_p(D)/2 be its smallest square multiple. Assume that C < H^1 + δ. 
The number of reciprocal integer polynomials f of height ≤ H for which G_f ⊆ G_2 and D | K_g is ≪O(1)^ω(C) H^n+1/D'^2. Note that in contrast to the notation in the rest of the paper, we do not set D = K_g, but rather assume only that D | K_g. For Case I, we set D = K_g, but we will reuse this lemma in Case III, and there we will pick a general divisor D. Note that Lemma <ref> implies the bound of Theorem <ref> on the number of g in Case I because each D' determines D up to O(1)^ω(C) possibilities, and the total number of g is thus ≪ O(1)^ω(C) H^n+1∑_D' ≥ H^1 + δ1/D'^2≪ O(1)^ω(C) H^n-δ≪ H^n - δ + ϵ≪ H^n. First of all, we divide out all primes p ≤ n from D. If there is at least one K_g with D | K_g, this change only affects D by a bounded factor, since v_p( K_g) is uniformly bounded. Thus we can assume that every prime p | C is at least n. For each prime p | C, let k_p = v_p(D') = v_p(D). By Lemma <ref> (right part), any f as in the statement of the theorem has its corresponding g of a splitting type σ_p = (f_1^e_1⋯ f_1^e_r) at p (or 0 mod p: see below), where either * (σ_p) ≥ 2 k_p, or * (σ_p) ≥ 2 k_p - 1 and f_1 = 1 is at u = 2, or * (σ_p) ≥ 2 k_p - 1 and f_1 = 1 is at u = -2. We call an annotated splitting type a σ = σ_p with a choice of case (a), (b), or (c). For convenience, we let j = j_p be the number of marked roots (0 in case <ref>, 1 in cases <ref> and <ref>), and mark various items with j to show that the appropriate case is to be followed: w^(j)_p,σ = w_p,σ w'_p,σ, ^(j)(σ) = (σ) '(σ), etc. For σ_p an annotated splitting type, let Ψ_p = w^(j_p)_p,σ_p that picks out g having splitting type σ_p, with a root at the indicated place in case (b) or (c). Let Ψ : V_/C→ be the product of the Ψ_p. Observe that there are only O(1)^ω(C) choices of the annotated splitting types for all p, and for each g as in the statement of the lemma, there is a choice for which Ψ(g) ≥ 1. (In the degenerate case that p | g, we pick σ_p arbitrarily, because ω_p,σ(0) ≥ 1 for all σ.) Thus it suffices to prove, for a fixed choice of σ_p and case (a)–(c) for each p | C, that ∑_ g ≤ HΨ(g) ≪O(1)^ω(C) H^n+1/D'^2. We claim that there is a decomposition Ψ_p = Λ_p + Δ_p with the following properties: * Λ_p = a_p _L_p is a rescaled characteristic function of a sublattice L_p ⊆ V(), with a_p ≤ 1, and Λ_p = â_p _L_p^⊥ where â_p ≤ p^-2 k_p; * Δ_p has uniformly small Fourier transform: Δ_p(h) ≪ p^-2 k_p - 1. Indeed, we take L_p = V() case <ref> {g ∈ V() : g(2) ≡ g'(2) ≡⋯≡ g^(e_1,p - 1)(2) ≡ 0 p} case <ref> {g ∈ V() : g(-2) ≡ g'(-2) ≡⋯≡ g^(e_1,p - 1)(-2) ≡ 0 p} case <ref>. Note that in the last two cases, L_p is the image of V^_e_1,_p under the two relevant identifications of V^ with V, the former sending x^i y^n-i to (u-2)^n-i, the latter sending it to (u+2)^n-i. Choose the constant a_p = [V() : L_p] · p^-σ - j/#^(j)(σ) = p^-∑_i > j (e_i - 1)f_i/#^(j)(σ)≤ 1 so that Λ_p is the dominant term â_p _L_p^⊥, â_p = p^-σ - j/#^(j)(σ)≤ p^-2 k_p of Ψ_p as computed in Lemmas <ref> and <ref>, which show the smallness of Δ_p. Now Ψ = ∏_p Λ_p + Δ_p = ∑_q | CΛ_q Δ_C/q, where we set Λ_q = ∏_p | qΛ_p = a_q _L_q, a_q = ∏_p | q a_p, L_q = _p | q L_p, Δ_q = ∏_p | qΔ_p. We now use Fourier analysis, as in Case I of <cit.>, to rewrite the the desired count of g. Let ϕ : → be a Schwartz function on with the following properties: * ϕ(u) is nonnegative, and ϕ(u) ≥ 1 for u≤ 1; * ϕ is compactly supported; * The Fourier transform ϕ is real and nonnegative. 
Such a ϕ can be constructed, for instance, by taking a usual even “bump function” and convolving with itself, which squares the Fourier transform, ensuring nonnegativity. Then let Φ : V() → be defined by Φ(g) = ϕ(b_0)ϕ(b_1) ⋯ϕ(b_n). We use this as a smoothing factor: X #{g ∈ V() : G_f ⊆ G_2, D | K_g, g ≤ H} ≤∑_ g ≤ HΨ(g) ≤∑_g ∈ V()Ψ(g) Φg/H = ∑_q | C∑_g ∈ V()Λ_q(g) Δ_C/q(g) Φg/H. Since C has O(1)^ω(C) divisors q, it suffices to prove that for all q | C, X_q ∑_g ∈ V()Λ_q(g) Δ_C/q(g) Φg/H≪O(1)^ω(C) H^n+1/D'^2. We apply twisted Poisson summation <ref> with modulus M = C/q, which is coprime to [V() : L_q] | q^n, to get X_q = a_q ∑_g ∈ L_qΔ_C/q(g) Φg/H = a_q H^n+1/[V() : L_q]∑_h ∈ L_q^*Δ_C/q (h) Φq H h/C ≪a_q H^n+1/[V() : L_q]∏_p |C/q Op^-2k_p - 1∑_h ∈ L_q^*Φq H h/C. We apply Poisson summation again, now untwisted (<ref>), to get X_q ≪a_qC^n+1/q^n+1∏_p |C/q Op^-2k_p - 1∑_g ∈ L_qΦC g/q H ≤a_qC^n+1/q^n+1∏_p |C/q Op^-2k_p - 1∑_g ∈ V()ΦC g/q H ≪ a_q ∏_p |C/q Op^-2k_p - 1C/qmax{qH/C, 1}^n+1 ≪ O(1)^ω(C)∏_p | q p^-2k_p∏_p |C/q Op^-2k_p - 1·max{H, C/q}^n+1 = O(1)^ω(C) q/CD'^2max{H, C/q}^n+1. If the first argument to the maximum dominates, we get a bound X_q ≪O(1)^ω(C)q/C D'^2 H^n+1≤O(1)^ω(C) H^n+1/D'^2, as desired. If instead the second argument dominates, we get a bound X_q ≪O(1)^ω(C)q/C D'^2C/q^n+1 = O(1)^ω(C) C^n/q^n D'^2≤O(1)^ω(C) H^n + nδ/D'^2, as desired, since n δ≤ 1. §.§ Case II: D < H² Here we simply invoke Case II of Bhargava's treatment <cit.>, which shows that the number of irreducible g of height < H defining a number field K_g of primitive Galois group G_g and discriminant D ≤ H^2+2δ is O(H^n) (or O(H^n-1) in the monic case). In our setting we are only concerned with the case G_g = S_n. We do not need to use the added knowledge that G_f ⊆ G_2. §.§ Case III: C > H Here we adapt the method of Bhargava using the double discriminant. Let p > 2 be a prime. If h(x_1, … , x_n) is an integer polynomial, such that h(c_1, … , c_n) is a multiple of p^2 for mod p reasons, that is, h(c_1 + p d_1, … , c_n + p d_n) is a multiple of p^2 for all (d_1, … , d_n) ∈^n, then ∂/∂ x_n h(c_1, … , c_n) is a multiple of p. Let h = g(2) g(-2) g, considered as a polynomial in the coefficients b_0, …, b_n of g. Let p | C, p > n. We claim that p^2 | h for mod p reasons. We have that p | D, and p^2 divides the square (-1)^n D g(2) g(-2). If p^2 | D, then by the left part of Lemma <ref>, the index of g modulo p is at least 2. So by the right part of that same lemma, we have p^2 | g for mod p reasons. Otherwise, we have p | D, so p | g, and either p | g(2) or p | g(-2). Thus in all cases the product h is divisible by p^2 for mod p reasons. By Lemma <ref>, this implies that the derivative ∂/∂ b_0 h with respect to the constant term is divisible by p. Hence their resultant R(b_1, …, b_n) = _b_0h, ∂/∂ b_0 h = ± b_n _b_0 h is a multiple of p. The polynomial R is the analogue in our setting of the double discriminantDD of <cit.>. We have R(b_1, …, b_n) = ± b_n _b_0 (g(2) g(-2) _u g) = ± b_n _b_0_u g ·_b_0(g(2), g(-2))^2 _b_0(g(2), _u g)^2 _b_0(g(-2), _u g)^2 = ± b_n _b_0_u g ·(g(2) - g(-2))^2 (_u (g(u) - g(2)))^2 (_u (g(u) - g(-2)))^2, where in the last step, we take advantage of the fact that g(± 2) are linear in b_0 to use the standard formula _x(x - a, P(x)) = P(a). In particular, R is a product of which one factor is the double discriminant DD(g) = _b_0_u g. One easily sees from the factorization (<ref>) that R is not identically zero as a function of b_1,…, b_n. We now proceed as in <cit.>. 
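For concreteness, here is a small sympy sketch (library assumed; not part of the paper, and the test point is an arbitrary choice) of the objects from Case III with n = 3: it prints the double discriminant DD(g) = disc_{b_0}(disc_u g) and confirms that R = Res_{b_0}(h, ∂h/∂b_0) is not identically zero by evaluating it at one integer point.

```python
# Sketch assuming sympy; not part of the paper.
import sympy as sp

u, b0, b1, b2, b3 = sp.symbols('u b0 b1 b2 b3')

# n = 3: g(u) = b3*u^3 + b2*u^2 + b1*u + b0 and h = g(2)*g(-2)*disc_u(g)
g = b3*u**3 + b2*u**2 + b1*u + b0
h = g.subs(u, 2) * g.subs(u, -2) * sp.discriminant(g, u)

# double discriminant DD(g) = disc_{b0}(disc_u(g)): a nonzero polynomial in b1, b2, b3
DD = sp.expand(sp.discriminant(sp.discriminant(g, u), b0))
print(DD)

# R = Res_{b0}(h, dh/db0) is not identically zero: it is nonzero at (b1, b2, b3) = (3, -1, 2)
h_num = h.subs({b1: 3, b2: -1, b3: 2})
print(sp.resultant(h_num, sp.diff(h_num, b0), b0) != 0)
```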
The number of b_1, …, b_n∈ [-H, H]^n such that R(b_1, …, b_n) = 0 is O(H^n-1) (by, e.g., <cit.>), and so the number of g with such b_1, …, b_n is O(H^n). We now fix b_1, …, b_n such that R(b_1, …, b_n) ≠ 0. Then R(b_1, …, b_n) has at most O_ϵ(H^ϵ) factors C > H. Once C is determined by b_1, …, b_n (up to O_ϵ(H^ϵ) possibilities), then the number of solutions for b_0 (mod C) to h ≡ 0 C is (_b_0 h)^ω(C) = O_ϵ(H^ϵ), as the number of possibilities for b_0 modulo p such that h ≡ 0 p for each p | C is at most _b_0(h). Since C > H, the number of possibilities for b_0 ∈ [-H, H] is also at most O_ϵ(H^ϵ), and so the total number of g in this case is O_ϵ(H^n+ϵ). To eliminate the ϵ, we divide into two subcases, the first of which is reduced to Case I, just as in <cit.>: §.§.§ Subcase (i): A=∏_ p| C p>H^δ/2 p ≤ H. In this subcase, C has a factor C_ between H^1+δ/2 and H^1+δ, with A| C_| C. Pick C_ to be the largest such factor, and let D_ = ∏_p| C_ p^v_p(D). We now appeal to Lemma <ref> from Case I, with C_ in place of C, and D_ in place of D. (Note that D determines C, C_, and D_.) We get that the number of g with a given D is at most ≪O(1)^ω(C_) H^n+1/D_'^2, where D_' = ∏_p| C_ p^v_p(D)/2≥ C_≥ H^1 + δ/2. For each D_', there are at most 2^ω(D) values of D_, so the total number of reciprocal polynomials in this subcase is ∑_D_' > H^1 + δ/2O(1)^ω(C_) H^n+1/D_'^2 = O_ϵ(H^n - δ/2+ϵ) =O(H^n). §.§.§ Subcase (ii): A=∏_ p| C p>H^δ/2 p > H. In this subcase, we carry out the original argument of Case III, with C replaced by A. We have A| R(b_1,…,b_n). Fix one of the O(H^n) choices of b_1,…,b_n such that R(b_1,…,b_n)≠ 0. Being bounded above by a fixed power of H, we see that R(b_1,…,b_n) can have at most a bounded number of possibilities for the factor A (since all prime factors of A are bounded below by a fixed positive power of H). Once A is determined by b_1,…,b_n, then the number of solutions for b_0 (mod A) to (f)≡0 (mod A) is O(n^ω(A))=O(1). Since A>H, the total number of f in this subcase is also O(H^n), completing the proof of Theorem <ref> in the non-monic case. §.§ Remarks on the monic case It would be impractical to replicate the full proof of Theorem <ref> in the monic case, most of which consists of changing n to n-1 and correcting calculations appropriately. The following changes are worthy of note: * Unlike in the non-monic case, the space V(R) of monic polynomials over a ring does not have a natural origin. We must fix one, such as g(u) = u^n, in order to carry out the Fourier analysis. * In Case I, we replace Lemma <ref> with the following lemma from Bhargava: Let σ=(f_1^e_1⋯ f_r^e_r) be a splitting type with (σ)=d and (σ)=k. Let w_p,σ : V(_p) → be defined by w_p,σ(f) the number of r-tuples (P_1,…,P_r), up to the action of the group of permutations of {1,…,r} preserving σ, such that the P_i are distinct irreducible monic polynomials with P_i=f_i for each i and P_1^e_1⋯ P_r^e_r| f. Then w_p,σ(g)= p^-k/#(σ) + O(p^-(k+1)) if g=0; O(p^-(k+1)) if g≠ 0 and d<n; O(p^-(k+1/2)) if g≠ 0 and d=n. * We then replace Lemma <ref> as follows: Let σ=(f_1^e_1⋯ f_r^e_r) be a splitting type with f_1 = 1. Let (σ)=d and (σ)=k. Let w'_p,σ : V(_p) → be defined by w_p,σ(f) the number of r-tuples (P_1,…,P_r), up to the action of the group of permutations of {2,…,r} preserving σ, such that the P_i are distinct irreducible monic polynomials with P_i=f_i for each i, f_1 = x, and P_1^e_1⋯ P_r^e_r| f. Let V_e_1,_p denote the subspace of V(_p) comprising polynomials divisible by x^e_1, and let V_e_1, _p^⊥⊆ V(_p)^* be its dual. 
Then w_p,σ(g)= p^-(k+1)/#'(σ) + O(p^-(k+2)) if g ∈ V_e_1, _p^⊥ O(p^-(k+2)) if g ∉ V_e_1, _p^⊥ and d<n; O(p^-(k+3/2)) if g ∉ V_e_1, _p^⊥ and d=n. * Our Ψ_p is then supported on L_p = V() case <ref> {g ∈ V() : g(2) ≡ g'(2) ≡⋯≡ g^(e_1,p - 1)(2) ≡ 0 p} case <ref> {g ∈ V() : g(-2) ≡ g'(-2) ≡⋯≡ g^(e_1,p - 1)(-2) ≡ 0 p} case <ref>, which is no longer a lattice but a coset g_p + L_p for some fixed monic polynomial g_p and some lattice L_p of index dividing p^n. The Fourier transform Ψ is small away from L_p^⊥. The intersection L_q = _p | qL_p = g_q + L_q is likewise a coset of a lattice (if nonempty). When we carry out the twisted Poisson summation, the translation g_q contributes a twist factor e^-2π√(-1)g_q ∙ h to the values of Λ_q. Because we then immediately bound each term by its absolute value, this twist factor drops out. * The final step of Case I, bounding X_q, proceeds as follows: X_q ≪∏_p | q Op^-2k_p∏_p |C/q Op^-2k_p - 1/2·max{H, C/q}^n = O(1)^ω(C)√(q)/√(C) D'^2max{H, C/q}^n = O(1)^ω(C)/D'^2max{H^n√(q)/√(C), C/q^n-1/2} ≤O(1)^ω(C)/D'^2max{ H^n, H^n - 1/2 + nδ} = O(1)^ω(C) H^n/D'^2. § COUNTING G3-POLYNOMIALS Finally, we count the polynomials f having G_f ⊆ G_3 for each odd n ≥ 3 (the even case is subsumed by G_1, as we noted in stating Theorem <ref>). For n ≥ 3 odd, _n(G_3; H) ≪ H^2 log^2 H n = 3 H^n+1/2 n ≥ 5 _n^(G_3; H) ≪ H^2 n = 3 H^2 log H loglog H n = 5 H^n-1/2log H n ≥ 7. §.§ Heights We need a notion of height more general than the naïve height on integer polynomials used in Section <ref>. If P = [x_0:⋯: x_N] ∈^N(K) is a point in projective space over a number field K, we define its (exponential, projective, Weil) height (denoted H(P) in Silverman <cit.>) as follows: P = ∏_vmax{x_0_v, …, x_N_v }^[K_v : _v] / [K : ] where the product is taken over the places v of K, and the norm x_v extends the usual v-adic norm on . This normalization ensures that the height is unchanged if K is embedded into a larger field. There are two natural ways to define a height on a polynomial f(x) = a_n x^n + a_n-1 x^n-1 + ⋯ + a_0 over a number field, and we distinguish the projective and affine heights f = [a_n : ⋯ : a_0] f = [a_n : ⋯ : a_0 : 1]. For instance, if f ∈[x] is nonzero, then f = max{a_0, …, a_n} is the naïve height already introduced in Section <ref>, while f = f/ f is smaller by a factor of the content(f) = (a_0, …, a_n). In particular, we define heights of algebraic numbers α by α = [α : 1]. The following properties should be noted: * If α is an algebraic integer with conjugates α = α_1, …, α_n, then α = ∏_imax{1, α_i}^1/n is none other than the Mahler measure of its minimal polynomial (suitably normalized). * If f has roots α_1, …, α_n, then 2^-n∏_iα_i ≤ f ≤ 2^n-1∏_i α_i <cit.>. In particular, for any factorization f = g h, we have f ≍_ f g · h. §.§ Proof of Theorem <ref> Under the conditions, there are only two possibilities for G_f: either G_f G_3 ≅ S_2 S_n, or G_f ≅ S_n. In the latter case, f is reducible and factors as a product f(x) = c · h(x) · x^n h(1/x), c ∈, h ∈[x]. By taking |c| = f, we may assume that h = 1. We have f = f/|c| ≤ H/|c|, so by (<ref>), h = h ≪√(H/|c|). As h has n+1 free coefficients, the number of polynomials f we get is ≪∑_c = 1^H H/c^n+1/2≪ H^n+1/2. We now assume that G_f = G_3. In this case there is a quadratic field K_2 = (√(k)), where k is a squarefree integer, such that K_f = K_g · K_2, so that over K_2, the polynomial f factorizes: f(x) = c' · h(x) · x^n h(1/x), c' ∈ K_2, h ∈ K_2[x]. 
We cannot assume that this factorization holds in _K_2[x] because _K_2 need not be a UFD. However, we can rescale h so that h(1) ∈. Then c' = f(1) / h(1)^2 ∈, and the contents c = (f) and å = (h) satisfy c _K_2 = c' ·å^2. Therefore å^2 = (c/c') is principal, generated by a rational number; in particular, å = å is self-conjugate. The self-conjugate ideals in a quadratic field are simply the rational multiples of products of ramified primes. Thus, after rescaling by a rational scalar, we have that there is a positive divisor d | k such that å = √((d)) = d, √(k), k ≡ 2, 3 4 d, k + √(k)/2, k ≡ 1 4, and c' = ± c/d. Note that h(x) and x^n h(1/x) have the same roots, and comparing values at x = 1, they must be equal. Thus the coefficients θ_i ∈ K_2 of h(x) = ∑_i = 0^n θ_i x^i satisfy θ_i = θ_n-i. We now bound the height of the θ_i. This requires us to control the affine/projective height ratio h/ h = ∏_vmax{θ_0_v, …, θ_n_v}/max{θ_0_v, …, θ_n_v, 1}^[(K_2)_v : _v] / [K_2 : ]. We consider the contributions of the different places v in turn: * If v = p | d is finite, then the numerator is å_p = 1/√(p), while the denominator is 1. As [(K_2)_v : _v] = [K_2 : ] = 2, we get a contribution 1/√(p) in this case. * For all other finite v, the numerator and denominator are both 1. * If v is infinite, the parenthesized fraction is 1 because, by (<ref>), θ_0_v ·θ_n_v = θ_0_v ·θ_0_v = d ·f(0)/c≥ 1. Hence h = 1/√(d) h ≍√( f/d) = √( f/c d)≤√(H/c d). Write θ_i = du_i + v_i √(k)/2, u_i, v_i ∈ and observe that not all the u_i nor all the v_i can be zero, or f would be reducible. Now some coefficient du_i of h + h is nonzero and thus at least d in absolute value. So d ≤(h + h) ≪(h) ≪√(H/cd), which gives us the bound d ≪H/c. Analogously, analyzing the v_i by considering h - h gives us the bound d' k/d≪H/c. Since d u_i and v_i √(k) are bounded by h, the numbers of possibilities for each u_i and v_i are ≪√(H/c d) and ≪√(H/c d') respectively, and so the total number of polynomials is E_n(G_3; H) ≪∑_c = 1^H ∑_d ≪H/c∑_d' ≪H/c√(H/c d)√(H/c d')^n+1/2 = H^n+1/2∑_c = 1^H c^-n+1/2(∑_d ≪H/c d^-n+1/4)^2. If n ≥ 5, the two remaining sums are both O(1), and we get a bound of O(H^n+1/2). If n = 3, we get E_n(G_3; H) ≪ H^n+1/2∑_c = 1^H c^-n+1/2log^2 H = H^n+1/2log^2 H = H^2 log^2 H. The “≪” in Theorem <ref> can be sharpened to “≍” in the non-monic case by checking that a positive proportion of choices of the independent variables c, d, d', u_i, v_i satisfy the needed conditions: * d and d' are squarefree and coprime; * u_i and v_i satisfy the conditions mod 2 so that θ_i ∈å; * and the θ_i do not all lie in an ideal strictly smaller than å. All these can be managed using an appropriate form of the squarefree sieve. We omit the details. §.§ Remarks on the monic case The monic case is proved similarly, but the possibilities are more restricted because we can factor f(x) = ± h_1(x) · x^n h_1(1/x) over _K_2, where h_1(x) is monic. This scaling h_1 of h may or may not be compatible with the one used above, but since (h_1) = (1), we derive that å = (θ) is principal. Since å is known to be self-conjugate, we have a relation θ = ϵθ where ϵ∈_K_2^ is a unit (necessarily of norm 1). The relation (<ref>) determines θ up to scaling by ^; namely θ = w(1 + ϵ) and/or θ = w√(k)(1 - ϵ), w ∈^, the label “and/or” being used because these formulas become invalid if ϵ = -1, respectively ϵ = 1. In particular, ϵ determines d. Moreover, since θ can be rescaled by a unit, it follows that rescaling ϵ by a square of a unit does not affect d. 
Since _K_2^ / (_K_2^)^2 is uniformly bounded (at most 4), there are O(1) possible values of d for each k. Each of the coefficients θ_1,…,θ_(n-1)/2 has O(H/√(k)) values, as above. As to θ_0, we have that η = θ_0^2 / d is a unit and determines θ_0 up to sign. We have θ_0 ≪√(H/d) so η≪ H. Up to the finite group of roots of unity, η is a power η_1^m of the fundamental unit η_1 of K_2. We have η_1≫√(k), so the number of possibilities for η is ≪log H/log k, for a grand total of _n^(G_3; H) ≪∑_2 ≤ k ≪ H^2H/√(k)^n-1/2·log H/log k. Estimating this sum yields the claimed bounds. In contrast to the non-monic case, it is unclear whether our bounds are sharp, specifically in the n = 3 and n = 5 cases. A more accurate count of monic G_3-polynomials demands a delicate understanding of the distribution of sizes of fundamental units in real quadratic fields.
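As a small illustration of the role played by the fundamental unit (plain-Python sketch; not from the paper, with ad hoc choices of k and H), the following finds the fundamental unit η_1 = (x + y√k)/2 of the real quadratic field of discriminant built from squarefree k via the least solution of x^2 − k y^2 = ±4, checks a crude form of the lower bound η_1 ≫ √k used above, and reports how many powers η_1^m lie below a given H.

```python
# Plain-Python sketch; not from the paper.
from math import isqrt, sqrt, log

def fundamental_unit(k, y_max=10**6):
    """Least unit > 1 of Q(sqrt(k)), k > 1 squarefree, from the least solution of x^2 - k*y^2 = +-4."""
    for y in range(1, y_max):
        cands = []
        for s in (-4, 4):
            t = k * y * y + s
            if t > 0 and isqrt(t) ** 2 == t:
                cands.append((isqrt(t) + y * sqrt(k)) / 2)
        if cands:
            return min(cands)
    raise RuntimeError("search bound exceeded")

H = 10**6
for k in (2, 3, 5, 13, 29, 46, 53):
    eta1 = fundamental_unit(k)
    assert eta1 > sqrt(k) / 2                   # a crude form of eta_1 >> sqrt(k)
    powers = int(log(H) / log(eta1)) + 1        # units eta_1^m <= H: O(log H / log k) of them
    print(k, round(eta1, 3), powers)
```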
http://arxiv.org/abs/2406.19207v1
20240627143108
Photon number states via iterated photon addition in a loop
[ "Barna Mendei", "Gábor Homa", "Péter Ádám", "Mátyás Koniorczyk" ]
quant-ph
[ "quant-ph" ]
§ ABSTRACT We consider the probabilistic generation of time-bin photon number states from a train of single-photon pulses. We propose a simple interferometric feedback loop setup consisting of a beam splitter and a possibly non-ideal detector. This Hong-Ou-Mandel type scheme implements iterated photon additions. Our detailed study shows that up to 4 photons this simple setup can provide reasonable success probabilities and fidelities. § INTRODUCTION Hong-Ou-Mandel interference <cit.> is one of the most prominent examples of quantum interference phenomena, and it is at the heart of many photonic quantum state engineering and quantum information processing schemes. When two single photons arrive at the two input ports of a symmetric beam splitter at the same time, they leave the beam splitter as a photon pair, i.e. as a two-photon state in one or the other output port. The Hong-Ou-Mandel interference is discussed in full detail in a recent tutorial by Brańczyk <cit.>. By placing a photodetector at one output port of the beam splitter it is possible, in principle, to generate a two-photon Fock state in the other output. If the detector does not detect any photons, that is, it realizes a quantum measurement projecting onto the vacuum state, the other output port is left in the desired two-photon state: a highly nonclassical and non-Gaussian state. From the quantum state engineering point of view, this can be understood as adding a photon to a single-photon state. Photon addition in general, as a photonic quantum engineering method, has broad coverage in the literature; see e.g. the recent review by Biagi et al. <cit.>. Realizing photon addition with a beam splitter has the shortcoming that in the case of non-ideal detectors the imperfect measurement results in a non-ideal, mixed output state. In heralding schemes, e.g. photon subtraction, this is a less significant issue, as they rely on the actual detection of a photon. Hence, photon addition is more often realized using nonlinear optics. In spite of that, owing to the significant development of detectors, the simplicity of using a beam splitter, and the fundamental relevance of the Hong-Ou-Mandel interference, the idea of adding photons to a few-photon state with beam splitters is not to be ignored. The generation of photon number states is a topic that has been discussed broadly in the literature, both theoretically (see e.g. <cit.>) and experimentally, with various experimental approaches including e.g. cavity QED <cit.>, micromasers <cit.>, interferometric setups <cit.>, superconducting quantum circuits <cit.>, and quantum dots <cit.>. Here we consider a less perfect but rather simple optical interferometric approach. Recently there have been proposals for periodic single-photon sources <cit.>. These would produce a train of photon pulses with a single photon in each of them. In the present work we study a loop setup with a single beam splitter, using a periodic single-photon source as an input and realizing iterated conditional photon additions on a Hong-Ou-Mandel basis.
Certainly, the scheme is theoretically equivalent to the use of many beam splitters with single-photon inputs. Also, photon addition with nonlinear optics could be considered in a similar setting. However, the use of a single beam splitter and a single detector is appealing for its simplicity. We calculate the probability of obtaining n-photon states in an ideal version of the setup. Though this decreases exponentially with n, the probability for a few-photon state is not negligible. We also analyze the impact of a non-ideal detector on the scheme. § METHODS In the present work we consider time-bin modes of the electromagnetic field. The pulse shapes are not explicitly taken into account, to maintain the simplicity of the treatment; hence we have a single annihilation operator for each mode, so that the [â_i, â_j^†]=δ_i,j commutation relations hold. (If one adds frequency dependence and pulse shapes, the interference visibility is described more accurately. When the timing of the pulses is appropriate, and hence the visibility is maximal, the same results are obtained as from the present simplified analysis.) The considered scheme also contains beam splitters <cit.>. We consider beam splitters described by real unitary matrices, whose input and output creation operators are related by [ â_1^†â_2^† ] = [ √(τ) - √(1-τ)√(1-τ) √(τ) ][ b̂_1^†b̂_2^† ], where τ∈ [0,1] is the transmittance of the beam splitter. We do not consider phase parameters for the transmitted and reflected beams, as they do not introduce different physics in our considerations. The particular time-multiplexed scheme we consider is depicted in Fig. <ref>. A train of periodically generated single-photon pulses enters the interferometer in mode 1. When the first pulse arrives, it interferes with the vacuum in mode 2. As for the subsequent pulses, the delay loop formed by the controlled switch (completely reflecting the beam) and the two mirrors is set in such a way that each pulse overlaps as exactly as possible with the previous input pulse. Hence, in each turn, the state in mode 2 at the beam splitter input will be equal to the state of mode 1 in the previous period. The other output (mode 2) of the beam splitter BS is sent to a threshold detector with single-photon sensitivity. The detector is assumed to be possibly lossy, with efficiency η. The detector loss is modeled in the standard way with an additional beam splitter of transmittance η in front of the detector, whose other input mode, mode 3, is the vacuum and whose other output mode is ignored (i.e. traced out). Dark counts are ignored, as their rate can be made low in the case of threshold detectors. The system is operated by letting a sequence of n pulses interfere and coupling the resulting state out of the system after the n pulses. The result is accepted if the detector did not click during the operation, i.e. if vacuum was detected in mode 2 in each period. Under such circumstances, the output state should ideally be an n-photon Fock state, albeit with a probability decreasing quickly with n. The operation can be interpreted as a photon addition repeated n times. In what follows we answer the following questions. What is the exact probability of generating an n-photon Fock state with such a simple scheme? How does this depend on the beam splitter transmittance τ and the detector efficiency η? What is the actual fidelity of generating an n-photon Fock state, and what kind of noise is introduced by the detector loss?
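Before turning to the analytic results, here is a self-contained numerical sketch of the protocol just described (Python with numpy and scipy assumed; it is not part of the paper, and the function names are ad hoc). The lossy threshold detector is represented by the standard no-click POVM element Σ_k (1−η)^k |k⟩⟨k|, which is equivalent to the loss-beam-splitter picture above, and one common beam-splitter phase convention is used; the phase convention does not affect the photon-number statistics computed here. The closed-form expressions derived in the next section can be checked against its output.

```python
# Self-contained numerical sketch (numpy + scipy assumed); not part of the paper.
import numpy as np
from scipy.linalg import expm

def destroy(N):
    """Annihilation operator truncated to Fock states |0>, ..., |N-1>."""
    return np.diag(np.sqrt(np.arange(1, N)), 1)

def loop_scheme(n_target, tau, eta):
    """Run n_target rounds; return (net no-click probability, population of |n_target> in the loop)."""
    N = n_target + 2                      # Fock cutoff; exact here, since the beam splitter conserves photon number
    a, I = destroy(N), np.eye(N)
    A, B = np.kron(a, I), np.kron(I, a)   # loop mode, and the mode carrying the fresh photon / detector port
    theta = np.arccos(np.sqrt(tau))
    U = expm(theta * (A.T @ B - A @ B.T))  # beam splitter of transmittance tau (one phase convention)

    # no-click POVM element of a threshold detector with efficiency eta:
    # P(no click | k photons) = (1 - eta)^k, equivalent to the loss-beam-splitter model above
    no_click = np.diag((1.0 - eta) ** np.arange(N))

    fresh = np.zeros((N, N)); fresh[1, 1] = 1.0        # |1><1| entering each round
    rho_loop = np.zeros((N, N)); rho_loop[0, 0] = 1.0  # loop starts in vacuum
    p_net = 1.0
    for _ in range(n_target):
        rho = U @ np.kron(rho_loop, fresh) @ U.T
        rho = rho @ np.kron(I, no_click)               # apply the no-click POVM on the detector port
        unnorm = np.einsum('ikjk->ij', rho.reshape(N, N, N, N))   # trace out the detector port
        p_step = np.real(np.trace(unnorm))
        rho_loop = unnorm / p_step
        p_net *= p_step
    return p_net, np.real(rho_loop[n_target, n_target])

# ideal detector: the per-round formula p(n, tau=0.5, eta=1) = (n+1) 2^{-(n+1)} gives a net ~0.094 for n = 3
print(loop_scheme(3, 0.5, 1.0))
print(loop_scheme(3, 0.5, 0.8))   # lossy detector: larger net probability, reduced fidelity
```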
Finally, under what circumstances will the output state still have negative parts of its Wigner function? § RESULTS This section is organized as follows. First we analyze photon addition with a beam splitter and a detector: the case when a single and an n-photon state interfere on a beam splitter, and one of the modes is projected to the vacuum state afterwards. This will be a single step in our iteration. §.§ Photon addition with a beam splitter and a detector Our loop scheme can be replaced by a periodic system consisting of beam splitters and ideal detectors. This can be seen in Figure <ref>. In one period there is the main beam splitter with τ transmittance and the real detector modelled as described previously (with another beam splitter and an ideal detector). At each step input mode 1 is the same as output mode except for the first round when it is exactly the vacuum state. Input mode 2 is where the periodic one-photon states are coming in. Output mode 2 is lead to the η transmittance beam splitter (i.e. the real detector with η efficiency), so it is an input mode of it. Its other input mode is mode 3 where we assume that only vacuum is present, so the ouput mode 2 is interfering with the vacuum state at the η transmittance beam splitter. From it the output mode 2 is going towards the ideal detector and output mode 3 is neglected (i.e. traced out). Let us look at a general period of this system, when n photons are coming onto input mode 1 (|n⟩_1), one photon coming to mode 2 (|1⟩_2), and there is vacuum at mode 3 (|0⟩_3), so the input photon number state is |ψ_in⟩_123=|n10⟩_123=1√(n!)(a_1^†)^na_2^†|000⟩_123, where in the last step we expressed it with the bosonic creation operators of each input mode acting on the vacuum state. First, examine the effect of the first (main) beam splitter with τ transmittance. It can be described similarly to what we have already presented in (<ref>), but now we have a third input mode, which is not affected by this beam splitter, so its effect on the creation operators is the following [ â_1^†â_2^†â_3^† ]=[ √(τ) -√(1-τ) 0 √(1-τ) √(τ) 0 0 0 1 ][ b̂_1^†b̂_2^†b̂_3^† ]. Using this the output state of the τ transmittance beam splitter is |ψ_out⟩_123=1√(n!)(√(τ)·b̂_1^†-√(1-τ)·b̂_2^†)^n(√(1-τ)·b̂_1^†+√(τ)·b̂_2^†)|000⟩_123= =(-1)^n√(τ(1-τ)^nn!)(b̂_2^†)^n+1|000⟩_123+ +∑_k=1^n(-1)^n-k√(τ^k-1(1-τ)^n-kn!)[nkτ-nk-1(1-τ)](b̂_1^†)^k(b̂_2^†)^n-k+1|000⟩_123+ +√(τ^n(1-τ)n!)(b̂_1^†)^n+1|000⟩_123. Now we have to apply the effect of the other beam splitter (standing in front of an ideal detector with η transmittance) on this state. It can be expressed similarly: [ b̂_1^†b̂_2^†b̂_3^† ]=[ 1 0 0 0 √(η) √(1-η) 0 -√(1-η) √(η) ][ ĉ_1^†ĉ_2^†ĉ_3^† ], where we introduced ĉ_i^† as the creation operator of the ith output mode of this beam splitter. With the proper substitutions we get the following output state: |ψ_out'⟩_123=∑_k=0^n+1(-1)^n√(τ(1-τ)^nn!)n+1k√(η^n-k+1(1-η)^k)(b̂_2^†)^n-k+1(b̂_3^†)^k|000⟩_123+ +∑_k=1^n∑_l=0^n-k+1(-1)^n-k√(τ^k-1(1-τ)^n-kn!)√(η^n-k-l+1(1-η)^l)× ×[nkτ-nk-1(1-τ)]n-k+1l(b̂_1^†)^k(b̂_2^†)^n-k-l+1(b̂_3^†)^l|000⟩_123+ +√(τ^n(1-τ)n!)(b̂_1^†)^n+1|000⟩_123= =∑_k=0^n+1(-1)^n√(τ(1-τ)^nη^n-k+1(1-η)^k)√(n+1k(n+1))|0; n-k+1; k⟩_123+ +∑_k=1^n∑_l=0^n-k+1(-1)^n-k√(τ^k-1(1-τ)^n-kη^n-k-l+1(1-η)^l)√(n-k+1l(n-k+1)!k!n!)× ×[nkτ-nk-1(1-τ)]|k; n-k-l+1; l⟩_123+ +√(τ^n(1-τ))√(n+1)|n+1; 0; 0⟩_123 We are interested in the cases where the detector does not detect any photons. This state can be obtained by a projection. 
The operator we can use for this is ^(1)⊗P̂_0^(2)⊗^(3)=∑_i=0^∞|i⟩_1⟨i|_1⊗|0⟩_2⟨0|_2⊗∑_j=0^∞|j⟩_3⟨j|_3, where ^(i) is the identity of the ith mode and P_k^(i)=|k⟩_i⟨k|_i is the projector of the ith mode projecting to the k-photon state. Applying this operator on |ψ_out'⟩_123 we get ^(1)⊗P̂_0^(2)⊗^(3)|ψ_out'⟩_123= =(-1)^n√(τ(1-τ)^n(1-η)^n+1)√(n+1)|0;0; n+1⟩_123+ +∑_k=1^n(-1)^n-k√(τ^k-1(1-τ)^n-k(1-η)^n-k+1)√((n-k+1)!k!n!)× ×[nkτ-nk-1(1-τ)]|k; 0; n-k+1⟩_123+√(τ^n(1-τ))√(n+1)|n+1; 0; 0⟩_123. Its absolute value squared is the probability of not detecting any photon with the detector. It is p=(η^2τ(1-τ)(n+1)+1-η)(ητ+1-η)^n-1. Consider the basic Hong–Ou–Mandel setup with τ=0.5 as an example and an ideal detector (η=1). In this case p decreases exponentially[We note, that usually, apart from some special cases, p decreases exponentially.]: p(n, τ=0.5, η=1)=(n+1)2^-n-1. Back to the general case, after the projection the state we got is not normalised, that is why, we have to multiply it with 1/√(p) to get a properly normalised photon state: |ψ_out, norm⟩_13=1√(p){(-1)^n√(τ(1-τ)^n(1-η)^n+1)√(n+1)|0; n+1⟩_13.+ +∑_k=1^n(-1)^n-k√(τ^k-1(1-τ)^n-k(1-η)^n-k+1)√((n-k+1)!k!n!)× .×[nkτ-nk-1(1-τ)]|k; n-k+1⟩_13+√(τ^n(1-τ))√(n+1)|n+1; 0⟩_13}. Here we no longer denote the photon number in mode 2, because it is 0 in every component. Now with the two remaining modes 1 and 3 we can obtain the density matrix as follows: ρ̂=|ψ_out, norm⟩_13⟨ψ_out, norm|_13. As we described earlier output mode 3 is ignored, so we have to trace out in it: ρ̂_1=_3ρ̂=1p{τ(1-τ)^n(1-η)^n+1(n+1)|0⟩_1⟨0|_1+. +∑_k=1^nτ^k-1(1-τ)^n-k(1-η)^n-k+1(n-k+1)!k!n![nkτ-nk-1(1-τ)]^2|k⟩_1⟨k|_1+ .+τ^n(1-τ)(n+1)|n+1⟩_1⟨n+1|_1}. From that fidelity can be obtained as the coefficient of the |n+1⟩_1⟨n+1|_1 term F(n, τ, η)=τ^n(1-τ)(n+1)p(n, τ, η). §.§ Iterated photon addition Now let us consider the case when a train of n single-photon pulses arrive at our arrangement, having beam splitter of transmittance τ and a detector of efficiency η. After the n pulses the result is coupled out. As before let us denote the absorption operators of the modes after an iteration, i.e. at the stage before the next input pulse arrives and when the detection takes place, after the beam splitter modeling detector loss, by ĉ_1, ĉ_2, and ĉ_3, respectively. Before the beam splitter modeling loss we have the operators b̂ given by Eq. (<ref>), which are expressed with the absorption operators â at the arrival of the input pulse of this turn according to Eq. (<ref>). As pointed out before, on the condition that the detector never fires, at each iteration the state in mode 1 emerging from the previous iteration, ρ^(1,n) which is now the input state in mode 2 of the next iteration, n+1 is an incoherent mixture of photon number states up to n photons. As pointed out before, ρ^(1,n) is diagonal in the photon number basis. Thus the output at this iteration can be calculated by taking all possible k-photon inputs from 1 to n at mode 2 of the beam splitter, calculating the output states before the detection as |Ψ_3|k⟩= 1/√(k!)â_1^†(â_2^†)^k |000⟩, and mixing them with the respective probablilities, resulting in the density matrix of the three-mode field before the (ideal) detection: ϱ^(n+1, preproj) = ∑_k=0^nρ^(1,n)_k,k|Ψ_3|k⟩⟨Ψ_3|k|. The ideal detection is described then with a projection of this three-mode density operator to the vacuum state in mode 2, leading to the unnormalized density operator ϱ^(n+1, afterproj) = |0⟩_2⟨0|_2 ϱ^(n+1, preproj)|0⟩_2⟨0|_2. 
From this, the probability of the vacuum detection, i.e. that of obtaining an approximate (n+1)-photon state on condition that the previous iteration succeeded, is p_n+1|n = Tr ϱ^(n+1, afterproj), whereas the state of mode 1 after this iteration, which will be the input to the next iteration, or the output if the desired number of iterations took place, reads ρ^(1,n+1) = 1/p_n+1|n Tr_2,3 ϱ^(n+1, afterproj). As pointed out before, this will remain diagonal. The net success probability is the product of the success probabilities of the individual rounds, p_n+1 = ∏_k=0^n p_k+1|k. These iterations can be evaluated numerically. We have considered the cases of generating 3 or 4 photons in this setup. In Figs. <ref> and <ref> we have plotted the net success probability, the fidelity of the output state to the desired number state, and the output purity Tr ϱ^2, for the 3- and 4-photon cases, respectively, as a function of the detector efficiency η and the beam splitter transmittance τ. The figures support the following conclusions. Lower detector efficiencies lead to larger success probabilities; the detector loss results in a more frequent detection of vacuum. Meanwhile, the fidelity and the purity do not decrease too dramatically as the detector efficiency is lowered: e.g. in the 3-photon case a detector with an efficiency of 80% will give a fidelity of 0.67 with a symmetric beam splitter, with a success probability of 0.14. This is to be compared with the case of the ideal detector, where the fidelity is 1 but the success probability is 0.09. While the lower fidelity can be a significant issue in certain applications, we remark that such a state is still highly nonclassical: its Wigner function is very similar in shape to that of the |3⟩ state, with a significant negative part, as illustrated in Fig. <ref>. Another phenomenon to observe in the figures is that for a given detector efficiency, non-symmetric beam splitters may give better performance. This is even more marked in the case of the 4-photon state, although the results also confirm that the success probability becomes very low, as expected. § DISCUSSION AND CONCLUSIONS We have considered an iterated photon addition scheme using a single detector and a beam splitter, which probabilistically converts a train of single-photon pulses into a higher photon number state. With non-ideal detectors the scheme outputs a mixture of photon number states which can still be close to the desired n-photon state and exhibits nonclassical properties such as negativity of the Wigner function. Although the success probability of the scheme decreases exponentially with n, 3- or 4-photon states can still be generated with reasonable probability. This probability increases when a non-ideal detector is used; however, the fidelity of the generated state to the desired Fock state decreases. We have analyzed this interplay between the success probability and the fidelity in detail. We have found that in certain cases a better fidelity can be achieved by using a beam splitter with optimally chosen transmittance. The presented scheme thus offers a relatively simple way of generating highly nonclassical states of time-bin modes. A logical continuation of the present research could be the replacement of the beam splitter with a nonlinear element, and a detailed comparison with a classical description of the scheme. Viewed from a broader context, time-multiplexed photon interference schemes are developing significantly.
For instance, in addition to the time-bin resolved (local) approach studied here has been extended to time-bucket (global) coincidence, showing nonclassical effects effect beyond Hong-Ou-Mandel type interference <cit.>. In this experimental context the present schema could easily realized, provided that periodic single-photon source is available, which latter is also important in photonic quantum information applications. Let us also remark that certain optical quantum processors, such as Borealis also employ delay loops <cit.>, hence, the present results can be instructive also for the detailed understanding of these. §.§ Funding This research was supported by the National Research, Development, and Innovation Office of Hungary under the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004), the "Frontline" Research Excellence Program, (Grant. No. KKP 133827), and Project no. TKP2021-NVA-04. B.M. and M.K. received support from the ÚNKP-23-2 New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund. §.§ Acknowledgments The authors thank Zoltán Zimborás for his questions that have intitated and motivated the present research, and Aurél Gábris for useful discussions. 10 adam2023single Peter Adam and Matyas Mechler. Single-photon sources based on incomplete binary-tree multiplexers with optimal structure. Optics Express, 31(19):30194–30211, 2023. adam2024single Peter Adam and Matyas Mechler. Single-photon sources based on stepwise optimized binary-tree multiplexers. Optics Express, 32(10):17173–17188, 2024. Adam_2014 Peter Adam, Matyas Mechler, Imre Santa, and Mátyás Koniorczyk. Optimization of periodic single-photon sources. Physical Review A, 90(5), 2014. arakawa2020progress Yasuhiko Arakawa and Mark J Holmes. Progress in quantum-dot single photon sources for quantum information technologies: A broad spectrum overview. Applied Physics Reviews, 7(2), 2020. Biagi_2022 Nicola Biagi, Saverio Francesconi, Alessandro Zavatta, and Marco Bellini. Photon-by-photon quantum light state engineering. Progress in Quantum Electronics, 84:100414, 2022. PhysRevA.102.013513 Ferenc Bodog, Matyas Mechler, Matyas Koniorczyk, and Peter Adam. Optimization of multiplexed single-photon sources operated with photon-number-resolving detectors. Phys. Rev. A, 102:013513, 2020. 1711.00080v1 Agata M. Brańczyk. Hong-Ou-Mandel interference. arXiv, 2017. quant-ph:1711.00080v1. Brattke_2003 S. Brattke, G. R. Guthöhrlein, M. Keller, W. Lange, B. Varcoe, and Herbert Walther. Generation of photon number states on demand. Journal of Modern Optics, 50(6–7):1103–1113, 2003. PhysRevLett.86.3534 Simon Brattke, Benjamin T. H. Varcoe, and Herbert Walther. Generation of photon number states on demand via cavity quantum electrodynamics. Phys. Rev. Lett., 86:3534–3537, 2001. campos1989quantum Richard A Campos, Bahaa EA Saleh, and Malvin C Teich. Quantum-mechanical lossless beam splitter: SU (2) symmetry and photon statistics. Physical Review A, 40(3):1371, 1989. PhysRevResearch.2.033489 M. Cosacchi, J. Wiercinski, T. Seidelmann, M. Cygorek, A. Vagov, D. E. Reiter, and V. M. Axt. On-demand generation of higher-order fock states in quantum-dot–cavity systems. Phys. Rev. Res., 2:033489, 2020. PhysRevA.80.013805 I. Dotsenko, M. Mirrahimi, M. Brune, S. Haroche, J.-M. Raimond, and P. Rouchon. Quantum feedback by discrete quantum nondemolition measurements: Towards on-demand generation of photon-number states. Phys. Rev. 
A, 80:013805, 2009. Hofheinz_2008 Max Hofheinz, E. M. Weig, M. Ansmann, Radoslaw C. Bialczak, Erik Lucero, M. Neeley, A. D. O’Connell, H. Wang, John M. Martinis, and A. N. Cleland. Generation of fock states in a superconducting quantum circuit. Nature, 454(7202):310–314, 2008. PhysRevA.39.2493 C. A. Holmes, G. J. Milburn, and D. F. Walls. Photon-number-state preparation in nondegenerate parametric amplification. Phys. Rev. A, 39:2493–2501, 1989. Hong_1987 C. K. Hong, Z. Y. Ou, and L. Mandel. Measurement of subpicosecond time intervals between two photons by interference. Physical Review Letters, 59(18):2044–2046, 1987. li2023quantum Rusong Li, Fengqi Liu, and Quanyong Lu. Quantum light source based on semiconductor quantum dots: A review. Photonics, 10(6):639, jun 2023. Madsen_2022 Lars S. Madsen, Fabian Laudenbach, Mohsen Falamarzi. Askarani, Fabien Rortais, Trevor Vincent, Jacob F. F. Bulmer, Filippo M. Miatto, Leonhard Neuhaus, Lukas G. Helt, Matthew J. Collins, Adriana E. Lita, Thomas Gerrits, Sae Woo Nam, Varun D. Vaidya, Matteo Menotti, Ish Dhand, Zachary Vernon, Nicolás Quesada, and Jonathan Lavoie. Quantum computational advantage with a programmable photonic processor. Nature, 606(7912):75–81, 2022. MeyerScott2020 Evan Meyer-Scott, Christine Silberhorn, and Alan Migdall. Single-photon sources: Approaching the ideal through multiplexing. Review of Scientific Instruments, 91(4), 2020. PhysRevLett.125.213604 Thomas Nitsche, Syamsundar De, Sonja Barkhofen, Evan Meyer-Scott, Johannes Tiedau, Jan Sperling, Aurél Gábris, Igor Jex, and Christine Silberhorn. Local versus global two-photon interference in quantum networks. Phys. Rev. Lett., 125:213604, 2020. Sayrin_2011 Clément Sayrin, Igor Dotsenko, Xingxing Zhou, Bruno Peaudecerf, Théo Rybarczyk, Sébastien Gleyzes, Pierre Rouchon, Mazyar Mirrahimi, Hadis Amini, Michel Brune, Jean-Michel Raimond, and Serge Haroche. Real-time quantum feedback prepares and stabilizes photon number states. Nature, 477(7362):73–77, 2011. Varcoe_2000 B. T. H. Varcoe, S. Brattke, M. Weidinger, and H. Walther. Preparing pure photon number states of the radiation field. Nature, 403(6771):743–746, 2000. Waks_2006 Edo Waks, Eleni Diamanti, and Yoshihisa Yamamoto. Generation of photon number states. New Journal of Physics, 8:4–4, 2006.
http://arxiv.org/abs/2406.19381v1
20240627175536
Spontaneous symmetry breaking in open quantum systems: strong, weak, and strong-to-weak
[ "Ding Gu", "Zijian Wang", "Zhong Wang" ]
quant-ph
[ "quant-ph", "cond-mat.mes-hall", "cond-mat.quant-gas", "cond-mat.str-el" ]
Institute for Advanced Study, Tsinghua University, Beijing 100084, China wangzhongemail@tsinghua.edu.cn § ABSTRACT Depending on the coupling to the environment, symmetries of open quantum systems manifest in two distinct forms, the strong and the weak. We study the spontaneous symmetry breaking among phases with different symmetries. Concrete Liouvillian models with strong and weak symmetry are constructed, and different scenarios of symmetry-breaking transitions are investigated from complementary approaches. It is demonstrated that strong symmetry always spontaneously breaks into the corresponding weak symmetry. For strong U(1) symmetry, we show that strong-to-weak symmetry breaking leads to gapless Goldstone modes dictating diffusion of the symmetry charge in translationally invariant systems. We conjecture that this relation among strong-to-weak symmetry breaking, gapless modes, and symmetry-charge diffusion is general for continuous symmetries. It can be interpreted as an “enhanced Lieb-Schultz-Mattis (LSM) theorem” for open quantum systems, according to which the gapless spectrum does not require non-integer filling. We also investigate the scenario where the strong symmetry breaks completely. In the symmetry-broken phase, we identify an effective Keldysh action with two Goldstone modes, describing fluctuations of the order parameter and diffusive hydrodynamics of the symmetry charge, respectively. For a particular model studied here, we uncover a transition from a symmetric phase with a “Bose surface” to a symmetry-broken phase with long-range order induced by tuning the filling. It is also shown that the long-range order of U(1) symmetry breaking is possible in spatial dimension d≥ 3, in both weak and strong symmetry cases. Our work outlines the typical scenarios of spontaneous symmetry breaking in open quantum systems, and highlights their physical consequences. Spontaneous symmetry breaking in open quantum systems: strong, weak, and strong-to-weak Zhong Wang July 1, 2024 ======================================================================================= § INTRODUCTION Symmetries play a fundamental role in condensed matter physics, where different phases and transitions between them are described by symmetries and their spontaneous breaking. Following Landau's paradigm, critical properties of continuous phase transitions can be obtained by determining how symmetry changes across the transition. Furthermore, symmetry breaking gives rise to Goldstone modes <cit.>, such as spin waves and phonons, which render the system gapless. Nevertheless, the discussion of symmetry and symmetry breaking is usually in the context of ground states or thermal ensembles. For open quantum systems, where coupling to the environment cannot be neglected and where systems do not necessarily evolve towards thermal equilibrium, symmetries and their spontaneous breaking are much less explored. Interestingly, symmetries manifest in two distinct forms in open quantum systems: weak and strong <cit.>. Strong symmetries are similar to symmetries discussed in closed systems, with a conserved symmetry charge during time evolution. In contrast, due to interaction with the environment, weak symmetries do not imply the conservation of a physical symmetry charge.
Recently, there has been a growing interest in exploring novel phases of matter in open quantum systems <cit.>, where both strong and weak symmetries play significant roles in characterizing these phases <cit.>. In this work, we focus on continuous symmetries and their spontaneous breaking in the context of Markovian dynamics governed by Lindblad superoperators, specifically emphasizing the U(1) case. Previous studies of U(1) symmetry breaking in open quantum systems are mainly devoted to driven-dissipative Bose-Einstein condensates <cit.>. Here, by representing strong and weak symmetry operations as superoperators, we are able to systematically analyze how spontaneous symmetry breaking takes place in open quantum systems and its consequences. For weak U(1) symmetric Liouvillians, spontaneous symmetry breaking is accompanied by Liouvillian gap closing in the thermodynamic limit, with conventional long-range order in the steady states. For strong U(1) symmetric Liouvillians, we find that there is U(1)× U(1) symmetry in the double space, where one of the U(1) symmetries is identified as the weak U(1) symmetry. However, due to the structure of operators governing time evolution, the other U(1) symmetry always spontaneously breaks. This strong-to-weak symmetry breaking is universal, occurring in all dimensions and for any parameter. Consequently, if the Liouvillian is also translationally invariant, there will be Goldstone modes, which in the physical space describe the diffusion of the conserved symmetry charge, and render the spectrum gapless. In contrast to the conventional Lieb-Schultz-Mattis (LSM) theorem <cit.> as well as its recent generalization in open quantum systems <cit.>, we show that strong continuous symmetry and translational invariance always lead to a gapless Liouvillian spectrum, at both integer and non-integer fillings. Thus, we call it an “enhanced LSM theorem” of open quantum systems. We note that very recently, the strong-to-weak symmetry breaking was studied in Refs. <cit.> with a rather different focus. Starting from strong-symmetric mixed states, Refs. <cit.> define and characterize strong-to-weak symmetry breaking mainly from an information-theoretic viewpoint. In the present work, we focus on the physical consequences of strong-to-weak symmetry breaking, including the Goldstone modes, Liouvillian spectrum, and the relaxation dynamics. We also provide a field-theoretic understanding of the phenomena. We construct simple spin models with weak and strong U(1) symmetries where different spontaneous symmetry breaking processes are studied, including weak symmetry breaking, strong-to-weak symmetry breaking, and complete breaking of strong symmetry (shown in Fig. (<ref>)). For the strong U(1) symmetric model, we find a novel filling-induced transition from a weak U(1) symmetric phase with a Bose surface <cit.> to a weak U(1) symmetry-broken phase with long-range order. When the strong U(1) symmetry completely breaks, we identify two Goldstone modes in the effective action, describing fluctuations of the order parameter and diffusion of the symmetry charge. We also show that long-range order is possible in d≥ 3. The remainder of the article is organized as follows. In Sec. <ref>, we set up a framework to study weak and strong symmetries, outlining how spontaneous symmetry breaking takes place and the consequences of symmetry breaking. In Sec.
<ref> and <ref>, spin models of weak and strong U(1) symmetries are studied, and the symmetry-breaking transitions are illustrated by mean-field, perturbation theory, and field-theoretical analysis. Specifically, for the strong symmetric model, we explain in detail how the change in filling induces the transition from the weak U(1) symmetric phase to the symmetry-broken phase. We also derive the effective action describing fluctuations of the order parameter and diffusion of the symmetry charge, where the lower critical dimension is identified. Finally, in Sec. <ref>, we further elaborate on the “enhanced” Lieb-Schultz-Mattis (LSM) theorem, explaining how the interplay between strong continuous symmetry and translational invariance renders the spectrum of Liouvillians gapless in each symmetry sector, and how the gapless modes are connected with diffusion modes of the symmetry charge. § SYMMETRIES IN OPEN QUANTUM SYSTEMS In open quantum systems, states are represented by a density matrix ρ, and for Markovian environments, time evolution is generally governed by the Lindblad master equation <cit.>: ρ̇ = ℒ[ρ] = -i[H,ρ]+∑_μγ_μ(2L_μρ L_μ^†-{L_μ^†L_μ,ρ}), where H is the Hamiltonian that describes unitary evolution, L_μ's are dissipators arising from coupling to the environment, and γ_μ≥ 0 describes the strength of dissipation associated with each L_μ. ℒ is called the Liouvillian superoperator. As t→∞, the system described by Eq. (<ref>) approaches a steady state ρ_ss with ℒ[ρ_ss] = 0, which in general is a mixed state. The Liouvillian spectrum satisfies 0 = λ_0≥λ_1≥λ_2≥⋯, where λ_0 = 0 is the eigenvalue of the steady state ρ_ss and the Liouvillian gap is defined as |λ_1|. There are two types of symmetry in open quantum systems: weak and strong (the names will be explained later). For a symmetry group G, with each element g∈ G acting as a unitary transformation U(g), the weak symmetry operation is defined as: 𝒰_w(g)[ρ] ≡ U(g)ρ U^†(g), and the strong symmetry operation is defined as 𝒰_s(g)[ρ]≡ U(g)ρ or 𝒰_s(g)[ρ]≡ρ U^†(g). A Liouvillian ℒ respects weak (strong) symmetry G if [ℒ, 𝒰_w(s)(g)] = 0 for all g∈ G. For example, we consider the global U(1) symmetry operation U(θ) = e^-iNθ generated by symmetry charge N = ∑_i n_i, with local order parameter a_i^†(a_i) satisfying [n_i,a_i^†(a_i)] = a_i^†(-a_i). If the Hamiltonian H commutes with N, and dissipators L_μ's are of the type a_i or a_i^†, the Liouvillian respects weak U(1) symmetry, but N is not conserved: d/dt⟨ N⟩≠ 0 due to the exchange of symmetry charge with the environment. If dissipators are of the type n_i or a_j^†a_i, the Liouvillian respects strong U(1) symmetry where the total charge N is conserved d/dt⟨ N⟩ = 0. The names “strong” and “weak” are related to whether the symmetry charge is conserved or not. Global symmetries can be spontaneously broken. Here we focus on the continuous case U(1). For weak U(1) symmetry, spontaneous symmetry breaking (SSB) can be diagnosed by long-range correlation of the order parameter in the steady state: lim_|i-j|→∞⟨ a_i^†a_j⟩≡lim_|i-j|→∞ Tr(ρ_ssa_i^†a_j) ≠ 0, as the correlation function is invariant under weak symmetry. Another perspective for weak symmetry breaking is to look at the Liouvillian gap in the thermodynamic limit. In the symmetry charge basis, a density matrix can be mapped to a pure state in the double space: ρ = ∑_jkc_jk|j⟩⟨ k|→∑_jkc_jk|j⟩_L⊗|k⟩_R. In this new Hilbert space, weak symmetry implies the conservation of N_L-N_R; however, N_L-N_R does not correspond to any physical observable.
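As a concrete, minimal illustration of the weak/strong distinction defined above (our own sketch, not taken from the paper; the helper names and parameter values below are ours), one can check both defining properties numerically for a two-site hard-core-boson (spin-1/2) system: a local loss dissipator σ_1^- passes the weak-symmetry check but does not conserve ⟨ N⟩, while a hopping-type dissipator σ_2^+σ_1^- passes both checks.

import numpy as np

# Two-site spin-1/2 (hard-core boson) system; |0> = empty, |1> = occupied.
sp = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma^+ (creates a particle)
sm = sp.conj().T                                 # sigma^-
num = sp @ sm                                    # number operator n = |1><1|
I2 = np.eye(2, dtype=complex)

def embed(single, site):
    # place a single-site operator on site 0 or 1 of the two-site chain
    return np.kron(single, I2) if site == 0 else np.kron(I2, single)

s1p, s1m = embed(sp, 0), embed(sm, 0)
s2p, s2m = embed(sp, 1), embed(sm, 1)
N = embed(num, 0) + embed(num, 1)
H = 1.0 * (s1p @ s2m + s1m @ s2p)                # U(1)-symmetric hopping Hamiltonian

def lindblad_rhs(rho, H, c_ops):
    # drho/dt = -i[H,rho] + sum_mu gamma_mu (2 L rho L^dag - {L^dag L, rho}),
    # i.e. the convention used in the text.
    out = -1j * (H @ rho - rho @ H)
    for g, L in c_ops:
        LdL = L.conj().T @ L
        out += g * (2 * L @ rho @ L.conj().T - LdL @ rho - rho @ LdL)
    return out

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho)                             # a generic random density matrix

theta = 0.7
U = np.diag(np.exp(-1j * theta * np.diag(N).real))   # e^{-i N theta}; valid because N is diagonal here

for label, L in [("loss sigma_1^-", s1m), ("hopping sigma_2^+ sigma_1^-", s2p @ s1m)]:
    c_ops = [(0.3, L)]
    drho = lindblad_rhs(rho, H, c_ops)
    weak_violation = np.linalg.norm(
        U @ drho @ U.conj().T - lindblad_rhs(U @ rho @ U.conj().T, H, c_ops))
    charge_rate = abs(np.trace(N @ drho))
    print(f"{label}: weak-symmetry violation ~ {weak_violation:.1e}, |d<N>/dt| = {charge_rate:.2e}")

Both dissipators commute with the weak symmetry superoperator (the first printed number is at machine precision in both cases), but only the hopping dissipator conserves the charge, which is exactly the distinction drawn above.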
For a general weak U(1) symmetric Liouvillian ℒ, there always exists a steady state ρ_ss^0 in the sector N_L-N_R = 0, whether the symmetry is spontaneously broken or not (this is the steady state if we start from a pure state with fixed N). In the thermodynamic limit, if ρ_ss^0 is the only steady state (finite Liouvillian gap in sectors N_L-N_R ≠ 0), we say the Liouvillian ℒ is in the symmetric phase, since: 𝒰_w(θ)[ρ_ss^0]≡ e^iNθρ_ss^0e^-iNθ = ρ_ss^0, when the Liouvillian gap closes in sectors N_L-N_R = n≠ 0, there will be corresponding steady states ρ_ss^n. Then with proper superposition: 𝒰_w(θ)[∑_nρ_ss^n] = ∑_ne^-inθρ_ss^n, the steady states transform nontrivially under weak U(1) symmetry due to steady state degeneracy. Consequently, weak U(1) SSB is accompanied by the Liouvillian gap closing in sectors N_L-N_R≠ 0 in the thermodynamic limit. In the weak U(1) SSB phase, generally the steady states will be oscillatory. For a weak U(1) SSB Liouvillian ℒ: ℒ[ρ_ss^n] = 0, n = 0,± 1,± 2⋯ if we add a term in the Hamiltonian that is proportional to the generator N, ℒ→ℒ' = ℒ-iμ[N,]: ℒ'[ρ_ss^n] = -iμ nρ_ss^n, the steady state eigenvalues now acquire an imaginary part μ n, equally spaced on the imaginary axis. This is equivalent to going to the rotating frame: ρ→ρ' = e^-iμ Ntρ e^iμ N t, where observables like the order parameter will now be time-dependent: ⟨ a⟩→⟨ a⟩ e^-iμ t. This resembles limit cycles and quantum synchronization in open systems <cit.>. Now let us turn to strong symmetry and its possible spontaneous breaking. Strong U(1) symmetry can be viewed as a special weak U(1) symmetry, where not only N_L-N_R, but also N_L,N_R, are conserved in the double space, so that there are two U(1) symmetries generated by N_L-N_R and N_L+N_R. SSB associated with generator N_L-N_R is the same as weak symmetry breaking where we can also use long-range order Eq. (<ref>) to characterize it. However, SSB associated with generator N_L+N_R is quite different as we find this symmetry is always spontaneously broken, for any parameter and in any dimension. It results from the fact that for a strong U(1) symmetric Liouvillian ℒ, there must exist steady states ρ_ss^nn in every double space sector N_L = N_R = n. To understand that, we note in the physical space since the total charge N is conserved, we will arrive at steady state ρ_ss^nn starting from a pure state with fixed N = n. Consequently, for a strong U(1) symmetric Liouvillian ℒ, we can always find steady states of the form ∑_nc_nρ_ss^nn that transforms non-trivially under strong symmetry operation Eq. (<ref>)(but not under weak symmetry operation Eq. (<ref>)), which means the symmetry generated by N_L+N_R is always spontaneously broken. This is known as the strong-to-weak symmetry breaking <cit.>. Although the strong-to-weak symmetry breaking does not give rise to long-range order, it has important physical consequences. In particular, the strong continuous symmetry together with translational invariance will render the Liouvillian ℒ gapless. Physically, the Goldstone modes associated with broken symmetry generated by N_L+N_R correspond to diffusion of the conserved charge. Concrete examples of strong-to-weak symmetry breaking and the associated gapless spectrum will be given later in our discussion of models with strong U(1) symmetry in Sec. <ref>, and Sec. <ref>. § MODEL WITH WEAK U(1) SYMMETRY §.§ The model In this section, we focus on the case of weak U(1) symmetry breaking. 
We consider a simple spin-1/2 model on a bipartite lattice with Liouvillian: H = ∑_⟨ ij⟩J(σ_i^+σ_j^-+σ_i^-σ_j^+)+J_zσ_i^zσ_j^z, ℒ_I[ρ]= -i[H,ρ]+Γ∑_i∈ A𝒟(σ_i^-)[ρ]+Γ∑_i∈ B𝒟(σ_i^+)[ρ], where σ^± = 1/2(σ^x± iσ^y) and σ^x,σ^y,σ^z are the Pauli operators. 𝒟(O)[ρ] is short for 2Oρ O^†-{O^†O,ρ}. The Hamiltonian part is the spin-1/2 XXZ model. The dissipation part describes onsite loss on the A sublattice and onsite gain on the B sublattice, with a uniform rate Γ. U(1) symmetry is generated by N = ∑_i n_i = ∑_i(σ_i^z+1)/2. This is equivalent to viewing the spins as hard-core bosons where a site i is occupied when σ_i^z = 1 and unoccupied when σ_i^z = -1. Immediately we can see that there is no strong U(1) symmetry (N is not conserved) due to the presence of loss and gain. However, the Liouvillian ℒ_I respects a weak U(1) symmetry. Possible SSB is diagnosed by long-range correlation ⟨σ_i^+σ_j^-⟩. The competition between coherent Hamiltonian evolution and dissipation induces a symmetry-breaking transition when J is increased from 0 across a critical value J_c. For simplicity, we set J_z = 0 and put the model on the square lattice in the following discussion. §.§ SSB from gap closing In the trivial case where J = 0, the Liouvillian Eq. (<ref>) is gapped and the unique steady state is: ρ_0 = |↓↓⋯⟩⟨↓↓⋯|_A⊗|↑↑⋯⟩⟨↑↑⋯|_B, where there is no long-range order and thus no symmetry breaking. When J is increased from 0, we can treat the Hamiltonian H as a perturbation. In the double space, the J = 0 steady state is in the sector N_L-N_R = 0. To diagnose symmetry breaking, we focus on the sectors where N_L-N_R≠ 0, for example, N_L-N_R = 1. At J = 0, in the long time, the important states (apart from ρ_0) are those with the largest eigenvalue -Γ. These states are mapped to pure state |i⟩(i is the site index): {[ σ_i^+ρ_0, i∈ A; ρ_0σ_i^+, i∈ B ]. → |i⟩ They are the one magnon states. The Hamiltonian acts as the kinetic term for the magnons to hop between neighboring sites. To first order in perturbation theory, the effective Liouvillian ℒ^eff_I in terms of |i⟩'s is: ℒ^eff_I = -Γ + iJ∑_⟨ ij⟩ |i⟩⟨ j|-|j⟩⟨ i|, (i∈ B, j∈ A). For ℒ_I^eff, the k = 0 magnon (with π/2 phase difference between two sublattices) reduces the dissipative gap from Γ to Γ-2Jd, where d is the spatial dimension. Consequently, when J is large, the magnons tend to condense at k = 0 and the Liouvillian gap closes, signaling weak U(1) SSB as we discuss in Sec. <ref>. §.§ SSB from mean field A perhaps more transparent way to see the symmetry-breaking process is through mean field theory. By mean field decoupling: ρ = ⊗_iρ_i, ⟨σ_iσ_j⟩ = ⟨σ_i⟩⟨σ_j⟩, together with the Heisenberg equation for operators: ∂_tO = -i[O,H]+∑_μγ_μ(2L_μ^†OL_μ-{L_μ^†L_μ,O}), we arrive at the mean field equations: d/dtσ_i∈ A^+ = -Γσ_i∈ A^+-iJ∑_⟨ ij⟩σ^z_i∈ Aσ_j∈ B^+, d/dtσ_i∈ A^z = -2Γ(σ_i∈ A^z+1)+2iJ∑_⟨ ij⟩(σ_i∈ A^-σ_j∈ B^+-σ_i∈ A^+σ_j∈ B^-), d/dtσ_i∈ B^+ = -Γσ_i∈ B^+-iJ∑_⟨ ij⟩σ^z_i∈ Bσ_j∈ A^+, d/dtσ_i∈ B^z = 2Γ(1-σ_i∈ B^z)+2iJ∑_⟨ ij⟩(σ_i∈ B^-σ_j∈ A^+-σ_i∈ B^+σ_j∈ A^-). Here each variable represents the expectation value of the corresponding operator. When 2Jd≤Γ, the mean field equations have a trivial symmetric steady state with σ_A^z = -1, σ_B^z = 1,σ_A^+ = σ_B^+ = 0. 
When 2Jd>Γ, we arrive at symmetry breaking steady state with a nonzero order parameter: σ_A^z = -Γ/2Jd, σ_B^z = Γ/2Jd, σ_A^+ = iσ_B^+, |σ_A^+|^2 = 1/2Γ/2Jd(1-Γ/2Jd), where on each sublattice the order parameter σ_i^+ takes a uniform value and there is π/2 phase difference between order parameters on the two sublattices. The mean field theory result is consistent with previous perturbation analysis where we find that magnon condensation leads to Liouvilian gap closing in non-diagonal sectors N_L-N_R≠ 0 and there is π/2 phase difference between the condensed magnon on two sublattices. §.§ Field-theoretical analysis and lower critical dimension To further study the model and its weak U(1) symmetry-breaking transition, we employ Keldysh field theory to see whether long-range order survives fluctuations beyond mean field. The Keldysh path integral is formally expressed as: ∫𝒟[ψ_+,ψ_-] e^iS[ψ_+,ψ_-], where ψ_+(x,t), ψ_-(x,t) are defined on the forward (backward) branch of a closed time contour. It is often convenient to perform the Keldysh rotation ψ_c = 1/√(2)(ψ_++ψ_-), ψ_q = 1/√(2)(ψ_+-ψ_-), where ψ_c/q are called classical (quantum) fields. At the saddle point δ S/δψ_c = 0, δ S/δψ_q = 0. Typically ⟨ψ_q⟩ = 0, and its fluctuations are gapped. Similar models to Eq. (<ref>) with weak U(1) symmetries have been studied in <cit.> using Keldysh formalism. Close to symmetry breaking transition point J = J_c, mean field theory gives: σ_i∈ A≈ -1, σ_i∈ B≈ 1. Consequently, we choose |↓_A↑_B⟩ as the reference state, and use the large-spin Holstein–Primakoff transformation to Eq. (<ref>) to a bosonic model with Keldysh action S_1[ψ_+,ψ_-]. The details of large-spin expansion is irrelevant. For the long wavelength universal behaviors near the critical point J = J_c, only symmetry matters. The action S_1 respects a global U(1) symmetry: S_1[ψ_+,ψ_-] = S_1[ψ_+e^iϕ,ψ_-e^iϕ], which is the manifestation of weak U(1) symmetry in Eq. (<ref>). (A subtle point is that Eq. (<ref>) is not fully translational invariant as different dissipators are imposed on different sublattices, so a transformation is made to restore the full translational invariance). S_1 is in the same universality class as the driven-dissipative BEC studied in <cit.>, and we review the results here. For d ≥ 3, the Keldysh action S_1 is of the form: S_1 = ∫ dtd^dx {ψ_q^*[i∂_t+(K_c-iK_d)∇^2-r_c+ir_d]ψ_c + c.c. -[(u_c-iu_d)ψ_q^*ψ_c^*ψ_c^2+c.c.]+2iγψ_q^*ψ_q}, where K_c, K_d, r_c, r_d, u_c, u_d,γ are phenomenological parameters arising from different terms in the microscopic model. All terms allowed by weak U(1) symmetry are considered in S_1, with terms irrelevant in the RG sense dropped. Close to the critical point r_d = 0 (r_c is unimportant because we can always go to a rotating frame ψ_c→ψ_ce^iω t and shift r_c to 0), the scaling dimension is: [∇] = 1, [∂_t] = 2, [ψ_c] = (d-2)/2, [ψ_q] = (d+2)/2. Terms containing more than two ψ_q's are irrelevant, and thus S_1 Eq. (<ref>) is quadratic in ψ_q. Since ψ_q is also gapped in S_1, we can integrate it out <cit.> and arrive at the following Langevin equation for the order parameter ψ_c: i∂_tψ_c = [-(K_c-iK_d)∇^2+r_c-ir_d+(u_c-iu_d)|ψ_c|^2]ψ_c+ξ, where ξ is Guassian white noise with ⟨ξ(x,t)⟩ = 0 and ⟨ξ(x,t)ξ^*(x',t')⟩ = 2γδ(t-t')δ(x-x'). The Langevin equation Eq. (<ref>) captures the long-distance and long-time physics. Symmetry breaking transition is controlled by r_d∼Γ-2dJ. When r_d>0(J<J_c), the system is in the symmetric phase with ⟨ψ_c⟩ = 0. 
When r_d<0 (J> J_c), the system is in the symmetry-broken phase with long-range order in ψ_c, and the long-time dynamics is described by the phase fluctuation of ψ_c = √(ρ_0)e^iθ_c: ∂_tθ_c = D∇^2θ_c + λ/2(∇θ_c)^2 + η(x,t), which is the famous KPZ equation <cit.>, where ρ_0 = -r_d/u_d, D = K_cu_c/u_d, λ = -2K_c. η(x,t) is Gaussian white noise satisfying ⟨η(x,t)⟩ = 0, ⟨η(x,t)η(x',t')⟩ = 2Δδ(x-x')(t-t'), where Δ = (γ+2u_dρ_0)(u_c^2+u_d^2)/(2ρ_0u_d^2). In dimension d≥ 3, the nonlinear term (∇θ_c)^2 is irrelevant. Phase fluctuations do not destroy long-range order, with dynamical critical exponent z = 2 and Goldstone modes corresponding to fluctuations of the order parameter (spin waves). For d<3, it can be shown <cit.> that the long-time dynamics is still governed by Eq. (<ref>), and phase fluctuations completely destroy long-range order in the steady state. Notably, due to the nonlinear term (∇θ_c)^2, there is no quasi-long-range order in 2D <cit.>. Generally, the weak U(1) symmetry breaking with long-range ordered steady states can occur in d≥ 3. § MODEL WITH STRONG U(1) SYMMETRY §.§ The model In this section, we consider a model with strong U(1) symmetry. It is described by Liouvillian ℒ_II: H = ∑_⟨ ij⟩J(σ_i^+σ_j^-+σ_i^-σ_j^+)+J_zσ_i^zσ_j^z, ℒ_II[ρ]= -i[H,ρ] +Γ∑_⟨ ij⟩𝒟(σ_j∈ B^+σ_i∈ A^-)[ρ] + Γ_z∑_i𝒟(σ_i^z)[ρ], The strong U(1) symmetric Liouvillian ℒ_II is similar to ℒ_I Eq. (<ref>). It is also defined on a bipartite lattice, with the same XXZ Hamiltonian. However, for the dissipation part, we now consider jump operators that move the hard-core bosons between nearest sites (from A sublattice to B sublattice) with rate Γ, and there are local dephasing terms represented by jump operators σ_i^z with strength Γ_z. ℒ_II respects a strong U(1) symmetry, which means the total particle number N = ∑_i(σ_i^z+1)/2 is conserved: d/dt⟨ N ⟩ = 0. This is the major difference between model I and model II. Later we will find out that the conservation of N has profound consequences. As discussed in Sec. <ref>, for a strong U(1) symmetric Liouvillian, there are actually two U(1) symmetries generated by N_L+N_R and N_L-N_R in double space. The U(1) symmetry generated by N_L-N_R is similar to the weak symmetry, and its symmetry breaking can be characterized by a long-range order lim_|i-j|→∞⟨σ_i^+σ_j^-⟩≠ 0. Furthermore, we find that for ℒ_II complete breaking of the strong U(1) symmetry and long-range order occur for a certain range of fillings. On the other hand, the U(1) symmetry generated by N_L+N_R always spontaneously breaks, which can be understood as strong-to-weak symmetry breaking, which leads to gapless diffusion modes of the conserved charge. We first focus on weak U(1) SSB of model II and leave the discussion of strong-to-weak symmetry breaking to Sec. <ref>. §.§ Mean field analysis We consider weak U(1) symmetry breaking of model II. For simplicity, we consider the model on the square lattice and set J_z = Γ_z = 0 for the time being. By performing mean field approximation as in Sec. <ref>, we arrive at equations of motion: d/dtσ_i∈ A^+ = ∑_⟨ ij⟩-Γ/2σ_i∈ A^+(1-σ_j∈ B^z)-iJσ^z_i∈ Aσ_j∈ B^+, d/dtσ_i∈ A^z = ∑_⟨ ij⟩ Γ (σ_i∈ A^z+1)(σ_j∈ B^z-1) +2iJ(σ_i∈ A^-σ_j∈ B^+-σ_i∈ A^+σ_j∈ B^-), d/dtσ_i∈ B^+ = ∑_⟨ ij⟩-Γ/2σ_i∈ B^+(1+σ_j∈ A^z) -iJσ^z_i∈ Bσ_j∈ A^+, d/dtσ_i∈ B^z = ∑_⟨ ij⟩ Γ (1-σ_i∈ B^z)(1+σ_j∈ A^z) +2iJ(σ_i∈ B^-σ_j∈ A^+-σ_i∈ B^+σ_j∈ A^-); where each variable represents its expectation value. The conservation of N manifests as d/dt∑_iσ_i^z = 0. 
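As a quick numerical illustration (our own sketch, not from the paper: the uniform-sublattice reduction, the coordination number z = 2d, and the parameter values below are our choices), the equations of motion above can be integrated directly at half filling. The conserved combination σ_A^z+σ_B^z stays fixed, and the fields should relax toward the symmetry-broken steady state, which can be compared against the steady-state relations quoted below.

import numpy as np

# Mean-field equations with uniform fields on each sublattice; every nearest-neighbour
# sum contributes z = 2d identical terms on the square lattice (d = 2).
J, Gamma, d = 0.2, 1.0, 2
z = 2 * d

def rhs(y):
    sAp, sAz, sBp, sBz = y
    dAp = z * (-0.5 * Gamma * sAp * (1 - sBz) - 1j * J * sAz * sBp)
    dAz = z * (Gamma * (sAz + 1) * (sBz - 1)
               + 2j * J * (np.conj(sAp) * sBp - sAp * np.conj(sBp)))
    dBp = z * (-0.5 * Gamma * sBp * (1 + sAz) - 1j * J * sBz * sAp)
    dBz = z * (Gamma * (1 - sBz) * (1 + sAz)
               + 2j * J * (np.conj(sBp) * sAp - sBp * np.conj(sAp)))
    return np.array([dAp, dAz, dBp, dBz])

# half filling (sigma_A^z + sigma_B^z = 0) with a small symmetry-breaking seed
y = np.array([0.10 + 0.0j, -0.5 + 0.0j, 0.05j, 0.5 + 0.0j])
dt, steps = 0.002, 200_000           # plain Euler stepping; adequate for a qualitative check
for _ in range(steps):
    y = y + dt * rhs(y)

sAp, sAz, sBp, sBz = y
s_pred = Gamma / (Gamma + 2 * J)     # our rearrangement of the steady-state conditions below
print("charge drift |sigma_A^z + sigma_B^z| =", abs(sAz + sBz))
print("|sigma_A^+|^2 =", abs(sAp) ** 2, " vs predicted", J * Gamma / (Gamma + 2 * J) ** 2)
print("sigma_B^z     =", sBz.real, " vs predicted", s_pred)

With these (arbitrary) parameters the seed should grow and saturate at a nonzero order parameter while the filling remains untouched, illustrating at the mean-field level how strong symmetry (charge conservation) coexists with spontaneous breaking of the weak U(1) symmetry.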
We define the filling factor as n = N/L^d = ∑_i1/2(σ_i^z+1)/L^d(L^d is the number of total sites). By definition, 0≤ n≤ 1. The model respects a particle-hole symmetry: σ_i^+↔σ_i^-,σ_i^z↔ -σ_i^z,A↔ B. Consequently, we consider only 0≤ n≤ 1/2. For 0≤ n≤ 1/4, the mean field equations of motion Eq. (<ref>) give no weak U(1) symmetry-broken steady states. Since the dissipation we consider in ℒ_II takes hard-core bosons from the A sublattice to the B sublattice, when the filling factor is small, the hard-core bosons prefer to stay on the B sublattice. One possible stable steady state satisfies σ_A^+ = σ_B^+ = 0, σ_A^z = -1. The expectation value of order parameter σ^+ is 0, σ_A^z = -1 implies no hard core bosons on A sublattice. The value of σ_i∈ B^z is uniform on the B sublattice and determined by filling factor n. This is by definition a symmetric steady state. When n is tuned above 1/4, it becomes unstable (infinitesimal deviation from this steady-state solution will drive the system toward a symmetry-breaking state). The other possible steady state for 0≤ n≤ 1/4 satisfies: σ_A^z = -1, σ_A^+ = 0; ∑_⟨ ij⟩σ^+_j∈ B = 0, ∀ i∈ A; where there is still no hard-core bosons on the A sublattice. However, on the B sublattice there can be nonzero order parameter σ_i∈ B^+ as long as it satisfies Eq. (<ref>): it is possible only with momentum k on a certain Bose surface<cit.>: σ_r∈ B^+ = ∑_k σ_k^+ e^ik· r, ∏_i = 1^dcos k_i/2 = 0. The Bose surface is illustrated in Fig. <ref>. However, the Bose surface steady states Eq. (<ref>) are unstable against dephasing. When Γ_z > 0, it is no longer a steady state. Nonzero order parameter now vanishes: ∀ i, σ_i^+ = 0. For 1/4≤ n≤ 1/2, mean field equations of motion lead to symmetry breaking steady state, where σ_i^±,σ_i^z takes uniform value on each sublattice and satisfies: Γ^2(1-σ_B^z)(1+σ_A^z) = -4J^2σ_A^zσ_B^z, Γ(1-σ_B^z)σ_A^+ = -2 iJσ_A^zσ_B^+, 2|σ_A^+|^2 = -σ_A^z(1+σ_A^z); and the filling factor is n = (σ_A^z+σ_B^z+2)/4. (The reason that SSB steady state is possible only for 1/4≤ n≤3/4 is that the SSB steady state Eq. (<ref>) is possible only when σ_A^z≤ 0, σ_B^z≥ 0). Exactly at n = 1/2, the steady state satisfies (to first order in J/Γ): σ_A^z = -1 + 2J/Γ, σ_B^z = 1 - 2J/Γ; σ_B^+ = iσ_A^+, |σ_A^+| = √(J/Γ). Similar to the SSB steady state of ℒ_I Eq. (<ref>), there is also a π/2 phase difference between order parameter on two sublattices. When there is no dephasing, infinitesimal J will drive the system into the symmetry-broken phase. Generally, when Γ_z>0, there will be a phase transition: J has to be larger than a critical value J_c for the system to be in the symmetry-broken phase. §.§ Large spin analysis for n = 0 and n = 1/2 One interesting result from mean field theory is that the model ℒ_II goes from symmetric phase to symmetry-broken phase when filling factor n is increased from 0 to 1/2, this is a unique phenomenon for strong symmetric Liouvillian since particle number conservation requires strong symmetry. To better understand the transition, we first focus on the cases n = 0 and n = 1/2. We still consider J_z = Γ_z = 0. At exactly n = 0, the steady state is trivial: ρ_ss = ρ_1 = |↓↓⋯⟩⟨↓↓⋯|. 
When the system is doped a little away from n = 0, we believe the long-time dynamics will only involve states close to ρ_1, which allows us to perform large-S Holstein–Primakoff expansion to the spins (by taking |↓↓⋯⟩ as the reference state): S^z_i = -S + b_i^†b_i, S_i^+ = b_i^†√(2S-b_i^†b_i); to zeroth order in 1/S, we have: σ_i^+ = S_i^+ = b_i^†,σ_i^z = 2S_i^z = 2b_i^†b_i-1. This is equivalent to relaxing the hard-core boson constraint n_i≤ 1. Consequently, near n = 0, ℒ_II is approximately described by the Liouvillian: ℒ^eff_n = 0[ρ] = -i[J∑_⟨ ij⟩(b_i^†b_j+b_j^† b_i),ρ]+Γ∑_⟨ ij ⟩𝒟(b_j∈ B^†b_i∈ A)[ρ], for ℒ^eff_n = 0, there is no weak U(1) SSB phase (mean field equations do not admit stable SSB steady state). However, there are exact dark states |ψ⟩'s that satisfy: ∑_⟨ ij⟩(b_i^†b_j+b_j^† b_i)|ψ⟩ = 0, ∀⟨ ij⟩, b_j∈ B^†b_i∈ A|ψ⟩ = 0, which makes |ψ⟩⟨ψ'| steady states of ℒ^eff_n = 0 Eq. (<ref>): ℒ^eff_n = 0[|ψ⟩⟨ψ'|] = 0. These steady states correspond to the Bose surface steady states in our mean field theory analysis Eq. (<ref>) and (<ref>), for that |ψ⟩'s can be expressed as: |ψ⟩ = b_k_1^†b_k_2^†⋯ b^†_k_n|Ω⟩, b_k^† = 1/√(L^d/2)∑_r∈ B b_r^†e^ikr, n = 0,1,2⋯ where |Ω⟩ is the vacuum for bosons and k's are on the Bose surface Eq. (<ref>). As in the mean field steady state, A sites are empty, and bosons staying on the B sites are dissipationless only with momentum on the Bose surface. Again, the Bose surface excitations are unstable against dephasing. Now we turn to n ≈ 1/2. By examining the mean field steady state Eq. (<ref>), when J/Γ is small, we take the state |↓_A↑_B⟩ as the vacuum for large S expansion: {[ S^z_i = -S + b_i^†b_i, S_i^+ = b_i^†√(2S-b_i^†b_i), i∈ A;; S^z_i = S - b_i^†b_i, S_i^+ = √(2S-b_i^†b_i) b_i, i∈ B; ]. Similarly, to zeroth order in 1/S, the effective Liouvillian is: ℒ^eff_n = 1/2[ρ] = -i[J∑_⟨ ij⟩(b_i^†b_j^†+b_jb_i),ρ]+Γ∑_⟨ ij ⟩𝒟(b_ib_j)[ρ], where b_i^† creates a particle on the A sublattice and a hole on the B sublattice. Strong U(1) symmetry implies the conservation of ∑_i∈ Ab_i^†b_i-∑_i∈ Bb_i^†b_i. Quite different from ℒ^eff_n = 0, in ℒ^eff_n = 1/2 bosons are created, annihilated, and damped in pairs, leading to condensation and SSB <cit.>. Within mean field theory, ℒ^eff_n = 1/2 indeed admits stable SSB steady state with uniform order parameter ⟨ b_i⟩ on the A/B sublattice: |b_A| = √(n_A), Γ n_Bb_A = -iJb_B^†, Γ^2n_An_B = J^2, where each variable denotes its expectation value. Later in Sec. <ref> we will use Keldysh field theory to further study ℒ^eff_n = 1/2, identifying lower critical dimension for weak U(1) SSB and emergent gapless diffusion modes from strong-to-weak symmetry breaking. §.§ Transition from n = 0 to n = 1/2 With a better understanding of n = 0 and n = 1/2, we now focus on the transition from n = 0 to n = 1/2. For ℒ_II Eq. (<ref>), the Hamiltonian H is now treated perturbatively (again we set J_z = Γ_z = 0 for simplicity). At J = 0, the Liouvillian ℒ_II is purely dissipative. There are numerous dark states on which dissipation has no effect: ∀⟨ ij⟩, σ_j∈ B^†σ_i∈ A|ψ_d⟩ = 0, for example, states with hard-core bosons (spin up) only on the B sublattice. The J = 0 steady states are of the form |ψ_d⟩⟨ψ_d'| or their linear superposition. As a result, there is massive degeneracy in each symmetry sector. When the Hamiltonian H is added perturbatively, it mixes the degenerate steady states by connecting different configurations in |ψ⟩_d's, and through second-order perturbation picks certain superposition of |ψ_d⟩⟨ψ_d'| as the new steady state. 
In |ψ_d⟩'s, at a small filling factor, the hard-core bosons tend to occupy B sites. The simplest case is to consider the symmetry sector N_L = 1, N_R = 0. The J ≠ 0 steady state is simple: ρ_ss = 1/√(L^d/2)∑_r∈ Bσ_r^+e^ikr |↓↓⋯⟩⟨↓↓⋯|, where k is on the Bose surface Eq. (<ref>). When there are few bosons, A sites are rarely occupied, and the bosons prefer to form magnons on the B sublattice with momentum k on the Bose surface, as in Eq. (<ref>). To go into the symmetry-broken phase, the hard-core bosons should be able to explore the whole lattice and condense at k = 0. However, in |ψ_d⟩'s, an A site can be occupied only if the neighboring B sites are all occupied, and an A site boson can move through second-order perturbation of H only if it is in a connected cluster of occupied B sites (These are illustrated in Fig. (<ref>)). Consequently, to achieve long-range order, occupied B sites need to form a large enough connected cluster across the lattice in order for the A site bosons to move around and create long-range coherence. This is similar to the percolation problem <cit.>, where a large enough cluster across the lattice occurs only when the filling factor n is larger than a critical value n_c. When n is large enough, in the |ψ_d⟩'s, occupied B sites form a large cluster across the lattice. In particular, near n = 1/2, ρ_0 = |↓_A↑_B⟩⟨↓_A↑_B| is a J = 0 steady state. By doping particles on the A sublattice or holes on the B sublattice, σ_i_1∈ A^+σ_i_2∈ A^+⋯ρ_0 and σ_i_1∈ B^-σ_i_2∈ B^-⋯ρ_0 are still J = 0 steady states. When J is increased from 0, in contrast to the n = 0 case Eq. (<ref>), the perturbative action of H will favor the k = 0 superposition of these doped states. More precisely, near n = 1/2, the model is described by the effective Liouvillian ℒ^eff_n = 1/2 Eq. (<ref>), where b_i^† creates particles on the A sublattice or holes on the B sublattice. Pairs of A particles and B holes are created and annihilated coherently, which leads to SSB and a uniform (k = 0) order parameter ⟨ b_i⟩ on each sublattice. Mean field analysis in Sec. <ref> predicts a transition at n_c = 1/4. However, with the above percolation analogy, the actual transition filling factor may not be 1/4. Generally, it will depend on dimensionality and lattice geometry. §.§ Field-theoretical analysis: lower critical dimension and emergent hydrodynamics For the strong U(1) symmetric ℒ_II Eq. (<ref>), long-range order is possible only for certain fillings. Here we focus on the filling factor n near 1/2, where it is mapped to a bosonic model by the large spin expansion Eq. (<ref>). To zeroth order, the corresponding effective Liouvillian is ℒ^eff_n = 1/2 Eq. (<ref>). For ℒ_II, following the same procedure as in Sec. <ref>, we should arrive at a Keldysh action S_2[ψ_+,ψ_-]. Since the microscopic model Eq. (<ref>) respects strong U(1) symmetry, the action S_2 satisfies: S_2[ψ_+,ψ_-] = S_2[ψ_+e^iϕ_+,ψ_-e^iϕ_-]. There are two global U(1) symmetries corresponding to the U(1) rotation of ψ_+ and ψ_-. In the double space, these two U(1) symmetries are generated by N_L and N_R. For the saddle point Eq. (<ref>) of S_2, ψ_q = 0 is still a solution. However, the term of the form |ψ_q|^2 is not invariant under the symmetry operation Eq. (<ref>), which means ψ_q cannot be integrated out in the same way as in the weak U(1) symmetry case (it is no longer gapped). Thus we return to the (ψ_+,ψ_-) basis to find the right “low energy” degrees of freedom.
We explicitly write down the Keldysh action for the effective Liouvillian ℒ^eff_n = 1/2 : iS_2 = ∫ dt ∑_i ψ_i+∂_t ψ_i+^*+ψ_i-^*∂_t ψ_i- -iJ∑_⟨ ij⟩(ψ_i+^*ψ_j+^*+ψ_i+ψ_j+)+iJ∑_⟨ ij⟩(ψ_i-^*ψ_j-^*+ψ_i-ψ_j-) +Γ∑_⟨ ij⟩(2ψ_i+ψ_j+ψ_i-^*ψ_j-^*-|ψ_i+ψ_j+|^2-|ψ_i-ψ_j-|^2), where ψ_i±(t) are the bosonic fields at site i on the forward (backward) branch of the closed time contour. Due to the transformation Eq. (<ref>), the global U(1) symmetry transformations that leave S_2 invariant are now: ψ_i+→ψ_i+e^iϕ_+, i∈ A; ψ_i+→ψ_i+e^-iϕ_+, i∈ B; ψ_i-→ψ_i-e^iϕ_-, i∈ A; ψ_i-→ψ_i-e^-iϕ_-, i∈ B; At the saddle points δ S_2/δψ_i± = 0, the steady state fields ψ_i± are uniform on each sublattice: ψ_A+ψ_B+ = -iJ/Γ = ψ_A-ψ_B-, |ψ_A+| = |ψ_A-|; where the latter constraint comes from higher order terms in the large spin expansion. Eq. (<ref>) is in the symmetry broken phase, where both U(1) symmetries (U(1) rotation of ψ_- and ψ_+) are broken. As a result, the global phases of ψ_+ and ψ_- are not determined in the mean field Eq. (<ref>). Also, the relative amplitude of the fields |ψ_A/ψ_B| on A/B sublattice is not determined in Eq. (<ref>). These degeneracies in mean field imply phase fluctuations of ψ_± and the classical part of density fluctuation is gapless in S_2. Consider fluctuations beyond mean field: ψ_i± = √(ρ_i±+δρ_i±)e^i(ϕ_A/B±+δϕ_i±), where we take ρ_i∈ A± = ρ_i∈ B± = √(J/Γ), ϕ_A-+ϕ_B- = ϕ_A++ϕ_B+ = -π/2, and define the following new fields: u_i± = (-1)^s_iδρ_i±/ρ_i±, δθ_i± = (-1)^s_iδϕ_i±; u_i,c/q= 1/√(2)(u_i+± u_i-), θ_i,c/q= 1/√(2)(δθ_i+±δθ_i-); where s_i = 0(1) for A(B) sublattice, the u_±,θ_± fields represent density and phase fluctuations around the saddle point. We expand the action to quadratic order in u,θ. And the action takes the form: S_2 = S[u_q,θ_c]+S[u_c,θ_q]. As is evident from the saddle point solution Eq. (<ref>), fluctuations θ_c,θ_q,u_c are gapless, and u_q is gapped with a mass term. We integrate out u_q and arrive at an effective action S_2^eff : S_2^eff = S[θ_c]+S[u_c,θ_q], S[θ_c] = i/2γ∫ dt d^dx (∂_tθ_c-D∇^2θ_c)^2, S[u_c,θ_q] = ∫ dt d^dx (-u_c∂ _tθ_q+D∇ u_c∇θ_q)+iσ(∇θ_q)^2, where the phenomenological coupling constants are determined by parameters in the microscopic model. Detailed derivation is provided in Appendix <ref>. As expected, the effective action S_2^eff respects two U(1) symmetries (U(1) rotation of e^iθ_c and e^iθ_q) and they are encoded in two parts: S[θ_c] describes the spontaneous breaking of weak U(1) symmetry, and S[u_c,θ_q] describes the U(1) strong-to-weak symmetry breaking. S[θ_c] describes the fluctuation of weak U(1) symmetry order parameter e^iθ_c. It is of the same form as the effective action for S_1 in Sec. <ref>, which corresponds to a Langevin equation for θ_c: ∂_tθ_c = D∇^2θ_c + η(x,t), where long-range order is preserved in d≥ 3. In low dimensions d≤ 2, long-range order is destroyed by fluctuations, and similar to Eq. (<ref>), a nonlinear term ∼ (∇θ_c)^2 beyond our quadratic approximation becomes relevant, and there is even no quasi long-range order. For S[u_c,θ_q], we can integrate by part to arrive at an action describing the diffusive hydrodynamics of density fluctuation u_c: S[u_c,θ_q] = ∫ dt d^dx iσ(∇θ_q)^2+θ_q(∂_t u_c-D∇^2 u_c), from which we can calculate the correlation: ⟨ e^iθ_q(x,t)e^-iθ_q(0,0)⟩∼Const, ∀ d. The long-range correlation of e^iθ_q dictates the strong-to-weak symmetry breaking, which occurs in any dimension. 
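As a consistency check (our own Gaussian-level rewriting, not part of the original derivation), S[u_c,θ_q] is precisely the Martin-Siggia-Rose/Keldysh action of the conserved Langevin equation ∂_t u_c = D∇^2 u_c + ∇·ξ with ⟨ξ_a(x,t)ξ_b(x',t')⟩ = 2σδ_abδ(x-x')δ(t-t'): integrating out θ_q gives ⟨ |u_c(k,ω)|^2⟩ = 2σ k^2/(ω^2+D^2k^4), and hence ⟨ u_c(k,t)u_c(-k,0)⟩ = (σ/D)e^-Dk^2|t|, i.e. short-ranged equal-time correlations but purely diffusive dynamics of the conserved density.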
This action has been used as the lowest order phenomenological description of diffusive hydrodynamics in interacting classical and quantum systems <cit.>. Indeed, the density-density correlation obeys the following form: ⟨ u_c(x,t)u_c(0,0)⟩∼1/√(4π Dt)e^-x^2/4Dt. Here we derive the action from a microscopic model Eq. (<ref>). Indeed we see that strong-to-weak U(1) symmetry breaking leads to gapless diffusion modes of the symmetry charge. For S_2, there are two spontaneous symmetry-breaking processes: not only does the strong U(1) symmetry break to the corresponding weak U(1), but the weak U(1) symmetry also spontaneously breaks. Consequently, there are two Goldstone modes, corresponding to spin waves and diffusion of the symmetry charge. § ENHANCED LIEB-SCHULTZ-MATTIS THEOREM FROM STRONG-TO-WEAK SYMMETRY BREAKING In this section, we elaborate more on how the interplay between strong U(1) symmetry and translational invariance leads to a gapless Liouvillian spectrum at any filling. In Sec. <ref>, we have seen that in the effective action S_2^eff of ℒ_II near half filling, there is a part S[u_c,θ_q] associated with strong-to-weak symmetry breaking that leads to gapless diffusion modes of density fluctuations. Here, through a simple model, we establish a connection between gapless Goldstone modes of strong-to-weak symmetry breaking and diffusion of the corresponding symmetry charge at general filling factors (except for the empty or the full filling case). For systems at ground states or in thermal equilibrium, spontaneous symmetry breaking takes place only in a certain parameter regime (or below a critical temperature T_c). Moreover, in low dimensions, it is usually the case that fluctuations are so strong as to destroy SSB. However, strong-to-weak symmetry breaking always takes place for any strongly symmetric Liouvillian in any dimension. This is guaranteed by the structure of strongly symmetric Liouvillians: there must exist steady states in each symmetry sector (as discussed in Sec. <ref>). The model we consider here is a spin-S XXZ chain under dephasing: H_XXZ = ∑_i[J_xy/2(S_i^+S_i+1^-+S_i^-S_i+1^+)+J_zS_i^zS_i+1^z], ℒ_III[ρ]= -i[H_XXZ,ρ] + Γ∑_i𝒟(S_i^z)[ρ], where S_i^±,z are the spin operators. The S = 1/2 case has been studied in <cit.>. In <cit.>, it is argued from a LSM point of view that the S = 1/2 model is gapless while a dissipative gap can open for S = 1. However, here we demonstrate that, for general spin and at general filling, model III is always gapless due to strong-to-weak symmetry breaking, and that the Goldstone modes in the physical space correspond to diffusion of the symmetry charge. ℒ_III respects the strong U(1) symmetry generated by S^z = ∑_i S_i^z or N = ∑_i n_i = ∑_i (S_i^z+S) if we treat the spins as bosons. The steady states of ℒ_III are trivial: they are the maximally mixed states I^S^z in each S^z sector. Thus, it is clear that the weak U(1) symmetry is not spontaneously broken: 𝒰^w(θ)[I^S^z] = e^-iŜ^zθI^S^ze^iŜ^zθ = I^S^z. However, the strong U(1) symmetry spontaneously breaks to weak U(1) symmetry since a superposition of I^S^z's is still a steady state, but it is not invariant under the strong U(1) symmetry operation Eq. (<ref>): 𝒰^s(θ)[∑_S^z I^S^z] = e^-iŜ^zθ∑_S^z I^S^z = ∑_S^z e^-iS^zθI^S^z. We argue here that strong-to-weak symmetry breaking renders the spectrum of ℒ_III gapless with Goldstone modes corresponding to diffusion of the conserved symmetry charge ⟨ S_i^z⟩.
When there is no Hamiltonian term J_xy = J_z = 0, the steady states are classical superpositions of the symmetry charge S_i^z eigenstates: ρ_ss = ∑_{S_i^z}p_{S_i^z}|{S_i^z}⟩⟨{S_i^z}|, {S_i^z} = {S_1^z,S_2^z⋯}, S_i^z = -S,-S+1⋯ S; where p_{S_i^z}≥ 0 and ∑_{S_i^z}p_{S_i^z} = 1. Now we consider adding H_XXZ perturbatively. For the effective Liouvillian ℒ_III^eff, we consider only density matrices in the original steady state subspace. By adopting the following map |{S_i^z}⟩⟨{S_i^z}|→ |{S_i^z}⟩, -ℒ_III^eff takes the form of a spin chain Hamiltonian H^eff, with ground state energy being 0. (ℒ_III^eff is Hermitian because the dissipators are Hermitian, and there is no odd-order contribution of H_XXZ). Specifically, in the case of S = 1/2, to second order in J_xy, H^eff is the ferromagnetic Heisenberg Hamiltonian <cit.>: H^eff = J_xy^2/2Γ∑_i[1/2(S_i^+S_i+1^-+h.c.)+S_i^zS_i+1^z-1/4]. As discussed in Sec. <ref>, the strong U(1) symmetry has two generators in the double space: N_L-N_R and N_L+N_R. For the effective Liouvillian ℒ_III^eff (or H^eff), N_L-N_R generated U(1) symmetry acts trivially as identity and N_L+N_R generated U(1) symmetry is the U(1) rotation around the z axis. Moreover, the U(1) symmetry of H^eff is spontaneously broken. The ground states in different S^z sectors are all degenerate with E_G = 0: H^eff|I^S^z⟩ = 0, where |I^S^z⟩ is the double space map of I^S^z: |I^S^z⟩∼∑_∑_iS_i^z = S^z|{S_i^z}⟩, which corresponds to magnon condensation at k = 0. The above discussion on spontaneous U(1) symmetry breaking of H^eff holds true for general S and to arbitrary order in the perturbation theory. H^eff is always in the U(1) symmetry broken phase, with magnon condensation in the ground states. To illustrate the corresponding gapless Goldstone modes, we write down variational wave functions in each S^z sector orthogonal to |I^S^z⟩ and with vanishing energy in the thermodynamical limit. By mapping the spins to bosons: n_i = b^†_ib_i = S_i^z+S, b_i^†|n-S⟩_i= √(n+1)|n+1-S⟩_i, n_i≤ 2S, b_i^†|S⟩_i= 0; the steady states (or ground states of H^eff) satisfy: I^S^z = 1/N∑_i b_i^†I^S^z-1b_i, |I^S^z⟩ = 1/N∑_i b_i^†|I^S^z-1⟩; where N = ∑ n_i = ∑ S_i^z + S. Then the variational wave function in the form of Goldstone modes are: |k⟩∼∑_r e^ikr b_r^†|I^S^z-1⟩, ρ_k ∼∑_r e^ikr b_r^†I^S^z-1b_r. Since in the ground states |I^S^z⟩, the magnons condense at k = 0, the variational wave function |k⟩ gives one of the magnons a nonzero momentum k. One can check that |k⟩ is orthogonal to |I^S^z⟩ and in the limit k→ 0, |k⟩ has vanishing energy ⟨ k|H^eff|k⟩→0. In the physical Hilbert space, the effective Liouvillian ℒ_III^eff governs the evolution of the density matrix in the subspace defined by Eq. (<ref>), which is nothing but the evolution of the probability distribution p_{S_i^z}. The Goldstone modes in the physical Hilbert space can pair in opposite momentum: ρ^k + ρ^-k∼∑_r cos kr b_r^†I^S^z-1b_r, ρ^k - ρ^-k∼∑_r sin kr b_r^†I^S^z-1b_r; which corresponds to the density fluctuation of the symmetry charge ⟨ S_i^z⟩(or ⟨ n_i^z⟩). The dispersion is proportional to k^2 (this is evident from the one particle sector). Consequently, the Goldstone modes of strong-to-weak symmetry breaking govern the diffusion of the conserved symmetry charge. The connection between gapless goldstone modes and diffusion has also been studied in the context of Brownian Hamiltonian evolution <cit.>. 
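For orientation, here is our own back-of-the-envelope version of the one-particle-sector statement for S = 1/2 (the overall sign and scale are fixed by the quoted prefactor J_xy^2/2Γ and by positivity of -ℒ_III^eff; the prefactors should not be taken as definitive). Acting with the effective ferromagnetic Heisenberg Hamiltonian on the one-magnon state |k⟩ = ∑_r e^ikrS_r^-|↑↑⋯⟩, the two bonds adjacent to the flipped spin contribute a diagonal cost of J_xy^2/2Γ in total, while the exchange term hops the flip to either neighbour with amplitude J_xy^2/4Γ, so that λ(k) = (J_xy^2/2Γ)(1-cos k) ≈ (J_xy^2/4Γ)k^2 in lattice units. The corresponding charge-density mode therefore relaxes at a rate ∝ k^2, i.e. diffusively, with a diffusion constant of order J_xy^2/Γ.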
Now that we have constructed gapless diffusion modes at any filling factor and for general spin, it is clear how strong-to-weak symmetry breaking leads to the gapless spectrum of ℒ_III. Generally, we believe the spectrum of Liouvillians with strong U(1) symmetry and translational invariance is always gapless. § CONCLUDING REMARKS We have studied symmetries and their spontaneous breaking in open quantum systems. Unlike closed systems, open systems display two distinct types of symmetries: strong and weak, depending on whether there is an exchange of symmetry charges with the environment. We outline different scenarios of spontaneous symmetry breaking in open systems and provid concrete examples for each scenario. For strong continuous symmetry, there are two symmetry-breaking processes: the inevitable strong-to-weak symmetry breaking leads to diffusion of symmetry charge, and further weak symmetry breaking leads to long-range order with Goldstone modes describing fluctuations of the order parameter. We also uncover a novel transition from a symmetric phase with a Bose surface to a symmetry-broken phase with long-range order tuned by filling. There are a number of interesting future directions to explore. First, whereas we have shown how strong-to-weak symmetry breaking leads to a gapless Liouvillian spectrum and diffusion modes of the conserved charge in several concrete models, a general proof that strong continuous symmetry and translational invariance always lead to gapless spectrums in Liouvillian systems is desirable. Second, as for symmetries in open quantum systems, we consider here only the simplest U(1) case. It would be interesting to extend the results to other symmetries, for example, non-abelian symmetries and their constraints on the dynamics and steady states <cit.>. Acknowledgments.—We thank Zi Cai for helpful discussions. This work is supported by the NSFC under Grant No. 12125405, National Key R&D Program of China (No. 2023YFA1406702). § DERIVATION OF THE EFFECTIVE ACTION FOR STRONG SYMMETRIC MODEL In this appendix, we derive the effective action Eq. (<ref>) for the strong U(1) symmetric model ℒ_II Eq. (<ref>) near n = 1/2, starting from the bosonic Liouvillian ℒ^eff_n = 1/2 Eq. (<ref>). The Keldysh action for ℒ^eff_n = 1/2: iS_2 = ∫ dt ∑_i ψ_i+∂_t ψ_i+^*+ψ_i-^*∂_t ψ_i- -iJ∑_⟨ ij⟩(ψ_i+^*ψ_j+^*+ψ_i+ψ_j+)+iJ∑_⟨ ij⟩(ψ_i-^*ψ_j-^*+ψ_i-ψ_j-) +Γ∑_⟨ ij⟩(2ψ_i+ψ_j+ψ_i-^*ψ_j-^*-|ψ_i+ψ_j+|^2-|ψ_i-ψ_j-|^2), where there is a U(1)× U(1) symmetry, which are U(1) rotations of ψ_±: ψ_i+→ψ_i+e^iϕ_+, i∈ A; ψ_i+→ψ_i+e^-iϕ_+, i∈ B; ψ_i-→ψ_i-e^iϕ_-, i∈ A; ψ_i-→ψ_i-e^-iϕ_-, i∈ B; We separate the imaginary part and the real part of the action: iS_2 = ∫ dt (im)[∑_i ψ_i+∂_t ψ_i+^*+ψ_i-^*∂_t ψ_i- -iJ∑_⟨ ij⟩(ψ_i+^*ψ_j+^*+ψ_i+ψ_j+)+iJ∑_⟨ ij⟩(ψ_i-^*ψ_j-^*+ψ_i-ψ_j-) +Γ∑_⟨ ij⟩(ψ_i+ψ_j+ψ_i-^*ψ_j-^*-ψ_i+^*ψ_j+^*ψ_i-ψ_j-)]-(re)[Γ∑_⟨ ij⟩|a_m+a_n+-a_m-a_n-|^2], the Euler-Lagrange equation (saddle point for the imaginary part of iS) is: ∂_t ψ_i+ = ∑_nearest j-iJ ψ_j+^*-Γψ_j+^*ψ_j-ψ_i-, ∂_t ψ_i- = ∑_nearest j-iJ ψ_j-^*-Γψ_j-^*ψ_j+ψ_i+; the steady state satisfies: ψ_A+ψ_B+ = -iJ/Γ = ψ_A-ψ_B-, where the fields on A/B sublattice are uniform and represented by ψ_A/B±. In the steady state, the global phase of ψ_± (relative phase between A/B sublattice) is free: the global phase can take any value, which is the consequence of U(1)× U(1) symmetry of the action. The relative density between A/B sublattice: |ψ_A|^2-|ψ_B|^2 is also free, for both ψ_±. 
However, as calculation turns out, if we consider higher order terms in the large spin expansion, in the steady state it is also required: |ψ_A+| = |ψ_A-|, |ψ_B+| = |ψ_B-|; which means that only the classical part of the density fluctuation ρ_Ac-ρ_Bc = (|ψ_A+|^2+|ψ_A-|^2)-(|ψ_B+|^2+|ψ_B-|^2) is now free in the steady state (this is due to the conservation of ∑_i∈ Ab_i^†b_i-∑_i∈ Bb_i^†b_i), the quantum part of density ρ_q∼ρ_+-ρ_- is simply 0 in the steady state. Written in terms of density and phase ψ_i± = √(ρ_i±)e^iθ_±, the action is: iS_2 = ∫ dt i∑_i -ρ_i+∂_tθ_i++ρ_i-∂_tθ_i- -2iJ∑_⟨ ij⟩√(ρ_i+ρ_j+)cos(θ_i++θ_j+)+2iJ∑_⟨ ij⟩√(ρ_i-ρ_j-)cos(θ_i-+θ_j-) +2iΓ∑_⟨ ij⟩√(ρ_i+ρ_j+ρ_i-ρ_j-)sin(θ_i++θ_j+-θ_i--θ_j-)-Γ∑_⟨ ij⟩|√(ρ_i+ρ_j+)e^i(θ_i++θ_j+)-√(ρ_i-ρ_j-)e^i(θ_i-+θ_j-)|^2, Consider fluctuations around saddle point ψ_i± = √(ρ_i±+δρ_i±) e^i(θ_i±+δθ_i±), where we choose θ_A-+θ_B- = θ_A++θ_B+ = -π/2 and ρ_A-ρ_B- = ρ_A+ρ_B+ = J^2/Γ^2. To quadratic order in δρ and δθ: iS_2 = ∫ dt i∑_i -δρ_i+∂_tδθ_i++δρ_i-∂_tδθ_i- -2iJ^2/Γ∑_⟨ ij⟩(δρ_i+/2ρ_i++δρ_j+/2ρ_j+)(δθ_i-+δθ_j-)+2iJ^2/Γ∑_⟨ ij⟩(δρ_i-/2ρ_i-+δρ_j-/2ρ_j-)(δθ_i++δθ_j+) -J^2/Γ∑_⟨ ij⟩(δρ_i+/2ρ_i++δρ_j+/2ρ_j+-δρ_i-/2ρ_i--δρ_j-/2ρ_j-)^2+(δθ_i++δθ_j+-δθ_i--δθ_j-)^2, as discussed above, when higher order terms in large spin expansion are considered, at saddle point we have ρ_i+ = ρ_i-. Here we consider the simplest scenario ρ_A+ = ρ_A- = ρ_B+ = ρ_B- = J/Γ (the case where ρ_A≠ρ_B are similar qualitatively). And we also make variable change: u_i = (-1)^iδρ_i/ρ_i, δθ_i = (-1)^iδθ_i (i = 0/1 for A/B sublattice). Then the action takes a simpler form: iS_2 = ∫ dt iJ/Γ∑_i - u_i+∂_tδθ_i++ u_i-∂_tδθ_i- -iJ^2/Γ∑_⟨ ij⟩( u_i+- u_j+)(δθ_i--δθ_j-)+iJ^2/Γ∑_⟨ ij⟩( u_i-- u_j-)(δθ_i+-δθ_j+) -J^2/Γ∑_⟨ ij⟩1/4(u_i+-u_j+-u_i-+u_j-)^2+(δθ_i+-δθ_j+-δθ_i-+δθ_j-)^2, in terms of classical and quantum variable u_ic/q = 1/√(2)(u_i+± u_i-),θ_ic/q = 1/√(2)(δθ_i+±δθ_i-): iS_2 = ∫ dt -iJ/Γ∑_i u_ic∂_tθ_iq+u_iq∂_tθ_ic +iJ^2/Γ∑_⟨ ij⟩( u_ic- u_jc)(θ_iq-θ_jq)-iJ^2/Γ∑_⟨ ij⟩( u_iq- u_jq)(θ_ic-θ_jc) -J^2/Γ∑_⟨ ij⟩1/2(u_iq-u_jq)^2+(θ_iq-θ_jq)^2, in the above action, there is no mass term, which means all the fluctuations u_c/q,θ_c/q are gapless. However, as discussed above, with higher order terms, the quantum part of density fluctuation u_q should be gapped. The correct action takes the form: iS_2 = ∫ dt -iJ/Γ∑_i u_ic∂_tθ_iq+u_iq∂_tθ_ic +iJ^2/Γ∑_⟨ ij⟩( u_ic- u_jc)(θ_iq-θ_jq)-iJ^2/Γ∑_⟨ ij⟩( u_iq- u_jq)(θ_ic-θ_jc) -J^2/Γ∑_⟨ ij⟩1/2(u_iq-u_jq)^2+(θ_iq-θ_jq)^2-m_q^2/2∑_iu_iq^2, where m_q arises from higher order terms in large spin expansion. Then it is straightforward to go to the continuum limit and integrate out the gapped u_q fluctuation: S_2^eff = S[θ_c]+S[u_c,θ_q], S[θ_c] = i/2γ∫ dt d^dx (∂_tθ_c-D∇^2θ_c)^2, S[u_c,θ_q] = ∫ dt d^dx (-u_c∂ _tθ_q+D∇ u_c∇θ_q)+iσ(∇θ_q)^2, where the parameters γ,D,σ arises from microscopic parameter J,Γ,m_q.
http://arxiv.org/abs/2406.19017v1
20240627090522
On the Hecke Module of $\text{GL}_n(k[[z]])\backslash \text{GL}_n(k((z)))/\text{GL}_n(k((z^2)))$
[ "Yuhui Jin" ]
math.CO
[ "math.CO", "math.RT" ]
On the Hecke Module of GL_n(k[[z]])\GL_n(k((z)))/GL_n(k((z^2))) Jack Lidmar July 1, 2024 =============================================================== § ABSTRACT Every double coset in GL_m(k[[z]])\GL_m(k((z)))/GL_m(k((z^2))) is uniquely represented by a block diagonal matrix with diagonal blocks in {1,z, [ 1 z; 0 z^i; ] (i>1)} if char(k) ≠ 2 and k is a finite field. These cosets form a (spherical) Hecke module ℋ(G,H,K) over the (spherical) Hecke algebra ℋ(G,K) of double cosets in K\ G/H, where K=GL_m(k[[z]]) and H=GL_m(k((z^2))) and G=GL_m(k((z))). Similarly to Hall polynomial h_λ,ν^μ from the Hecke algebra ℋ(G,K), coefficients h_λ,ν^μ arise from the Hecke module. We will provide a closed formula for h_λ,ν^μ, under some restrictions over λ,ν,μ.
http://arxiv.org/abs/2406.18942v1
20240627071318
Limitation of single-repumping schemes for laser cooling of Sr atoms
[ "Naohiro Okamoto", "Takatoshi Aoki", "Yoshio Torii" ]
physics.atom-ph
[ "physics.atom-ph" ]
okamoton@g.ecc.u-tokyo.ac.jp ytorii@phys.c.u-tokyo.ac.jp Institute of Physics, The University of Tokyo, 3-8-1 Komaba, Megro-ku, Tokyo 153-8902, Japan § ABSTRACT We explore the efficacy of two single-repumping schemes, 5s5p ^3P_2 - 5p^2 ^3P_2 (481 nm) and 5s5p ^3P_2 - 5s5d ^3D_2 (497 nm), for a magneto-optical trap (MOT) of Sr atoms. We reveal that the enhancement in the MOT lifetime is limited to 26.9(2) for any single-repumping scheme. Our investigation indicates that the primary decay path from the 5s5p ^1P_1 state to the 5s5p ^3P_0 state proceeds via the 5s4d ^3D_1 state, rather than through the upper states accessed by the single-repumping lasers. We estimate that the branching ratio for the 5s5p ^1P_1 → 5s4d ^3D_1 → 5s5p ^3P_0 decay path is 1:3.9 × 10^6 and the decay rate for the transition from the 5s5p ^1P_1 state to the 5s4d ^3D_1 state is 83(32) s^-1. This outcome underscores the limitation on atom number in the MOT for long loading times (≳ 1 s) when employing a single-repumping scheme. These findings will contribute to the construction of field-deployable optical lattice clocks. Limitation of single-repumping schemes for laser cooling of Sr atoms Yoshio Torii July 1, 2024 ==================================================================== § INTRODUCTION The electronic structure of alkaline-earth-metal (-like) atoms is characterized by the presence of long-lived metastable states and ultra-narrow transitions, offering a diverse array of applications such as precision measurements <cit.>, tests of special relativity <cit.>, detection of gravitational redshift <cit.>, quantum simulation <cit.>, quantum information <cit.> and search for fundamental physics <cit.>, including gravitational wave detection <cit.> and search for dark matter <cit.>. Particularly, optical lattice clocks utilizing Sr have been intensively studied as promising candidates for redefining the second <cit.>. In a standard magneto-optical trap (MOT) of Sr atoms, the 5s^2 ^1S_0 - 5s5p ^1P_1 transition (461 nm) is typically employed. However, this transition is not completely closed; the atoms can decay from the 5s5p ^1P_1 state to the 5s4d ^1D_2 state with a branching ratio of 1:50 000 <cit.>. Subsequently, atoms in the 5s4d ^1D_2 state further decay to the 5s5p ^3P_2 and 5s5p ^3P_1 states at a ratio of 2:1 <cit.>. Those atoms decaying to the 5s5p ^3P_1 state return to the 5s^2 ^1S_0 state at a rate of 4.7×10^4 s^-1 <cit.> and are subsequently recaptured in the MOT. Conversely, atoms decaying to the 5s5p ^3P_2 state leak out of the trap because the lifetime of the 5s5p ^3P_2 state is approximately 10^3 s <cit.>. Therefore, an additional laser repumping the atoms in the ^3P_2 state to the 5s5p ^3P_1 state is required. Earlier experiments utilized the 5s5p ^3P_2 - 5s6s ^3S_1 (707 nm) transition for repumping the atoms in the 5s5p ^3P_2 state. However, atoms excited to the 5s6s ^3S_1 state can decay to the long-lived 5s5p ^3P_0 state <cit.>, necessitating another laser at the 5s5p ^3P_0 - 5s6s ^3S_1 (679 nm) transition <cit.>. Recently, single-repumping schemes have also been demonstrated, specifically using 5s5p ^3P_2 - 5s5d ^3D_2 (497 nm) <cit.>, 5s5p ^3P_2 - 5s6d ^3D_2 (403 nm) <cit.>, 5s5p ^3P_2 - 5p^2 ^3P_2 (481 nm) <cit.>, and 5s5p ^3P_2 - 5s4d ^3D_2 (3012 nm) <cit.>. If the upper state of the repumping transition had a negligible decay rate to the 5s5p ^3P_0 state, the single-repumping scheme would suffice. 
However, all previous studies utilizing a single-repumping scheme have reported that the trap lifetime (< 1 s) was much shorter than the vacuum-limited value (∼ 10 s). For a MOT of Yb atoms using the 6s^2 ^1S_0 - 6s6p ^1P_1 (399 nm) transition, the dominant loss channel is the decay path of 6s6p ^1P_1 → 6s5d ^3D_1 → 6s6p ^3P_0, limiting the trap lifetime to ∼ 1 s <cit.>. However, for a Sr MOT, the decay path of 5s5p ^1P_1 → 5s4d ^3D_1 → 5s5p ^3P_0 has rarely been discussed <cit.>. To our knowledge, there has been no measurement of the branching ratio for the transition from the 5s5p ^1P_1 state (via the 5s4d ^3D_1 state) to the 5s5p ^3P_0 state. In this study, we investigate the performance of a MOT of ^88 Sr atoms for two single-repumping schemes: 5s5p ^3P_2 - 5p^2 ^3P_2 (481 nm) and 5s5p ^3P_2 - 5s5d ^3D_2 (497 nm). Our findings reveal that the enhancement in the MOT lifetime is 26.9(2), irrespective of both the single-repumping transition and the trapped atom density. This outcome indicates that the dominant decay path from the 5s5p ^1P_1 state to the 5s5p ^3P_0 state is via the 5s4d ^3D_1 state (5s5p ^1P_1 → 5s4d ^3D_1 → 5s5p ^3P_0). Additionally, we estimate that the branching ratio for the transition from the 5s5p ^1P_1 state (via the 5s4d ^3D_1 state) to the 5s5p ^3P_0 state is 1:3.9 × 10^6 and the decay rate for the transition from the 5s5p ^1P_1 state to the 5s4d ^3D_1 state is 83(32) s^-1. These findings underscore that, when a long loading time (≳ 1 s) is necessary, the atom number in the MOT is significantly limited for single-repumping schemes. This revelation will aid in the development of field-deployable optical lattice clocks. § EXPERIMENTAL SETUP Figure <ref> shows the energy levels and decay rates of Sr relevant to our investigation. The laser for trapping drives the 5s^2 ^1S_0 - 5s5p ^1P_1 (461 nm) transition. Additional lasers are employed to address specific transitions for repumping: the 5s5p ^3P_2 - 5p^2 ^3P_2 transition at 481 nm, and the 5s5p ^3P_2 - 5s5d ^3D_2 transition at 497 nm to compare the two ^3P_2 repumping schemes, and the 5s5p ^3P_0 - 5s5d ^3D_1 transition at 483 nm for ^3P_0 repumping. All the lasers are homemade external-cavity diode lasers (ECDL). The MOT is composed of three retroreflected beams at 461 nm with a diameter of 18 mm and a combined power of 65 mW yielding a total intensity at the MOT position of 48 mW/cm^2, estimated from MOT lifetime measurements (see Appendix <ref>). The axial magnetic field gradient of the MOT coils is 50 G/cm. The repumping beams, each with a power of a few mW and a diameter of about 5 mm, have sufficient intensities; halving their power does not degrade the MOT performance. The frequencies of these lasers are stabilized using birefringent atomic vapor laser lock (BAVLL) employing a hollow cathode lamp <cit.>. The detuning of the trapping light is adjusted by introducing an offset to the BAVLL signal, while the frequencies of all the repumping lasers are locked at resonance. The details of the vacuum system will be described elsewhere. In summary, a compact oven with capillaries, following a design inspired by Ref. <cit.>, is utilized for loading atoms in the MOT. The atoms are loaded in the MOT directly from a thermal atomic beam derived from the oven as demonstrated for Li <cit.> and Ca <cit.>. The atoms are trapped in a glass cell (25 mm × 25 mm × 100 mm), and the entire vacuum system is evacuated by a single 55-l/s ion pump. The distance between the oven and the MOT region is 37 cm. 
The oven temperature is set to 335 ^∘ C, resulting in a MOT loading rate of 5.0 × 10^5 atoms/s and a vacuum pressure of ∼ 1 × 10^-10 Torr. Under these conditions, with both the 5s5p ^3P_2 and 5s5p ^3P_0 repumping laser beams on, the number of trapped atoms is ∼ 2 × 10^6 as estimated by measuring fluorescence using a photodiode. The observed decay of trapped atoms, after shutting off the atomic beam, exhibits a lifetime of 15 s, which is presumably limited by background gas collisions. § RESULTS AND DISCUSSION To estimate the decay rate for the transition from the 5s5p ^1P_1 state to the 5s5p ^3P_0 state for the two single-repumping schemes, we initially load the MOT using both the ^3P_0 (483 nm) and ^3P_2 (481 nm or 497 nm) repumping laser beams, followed by turning off the ^3P_0 repumping laser beam. Figure <ref>(a) shows the decay of atom number in the MOT after turning off the 483-nm repumping light at various detunings of the trapping light (461 nm) when utilizing the 481-nm light. Figure <ref>(b) shows the dependence of the lifetime on the detuning of the trapping light for both the 481-nm and 497-nm repumping laser beams, indicating that the decay rate for the transition from the 5s5p ^1P_1 state to the 5s5p ^3P_0 state is independent of the choice of the ^3P_2 repumping transition. The MOT lifetime increases with larger detuning because the fraction of atoms in the 5s5p ^1P_1 state diminishes with detuning as outlined in Eqs. (<ref>) and (<ref>) in Appendix <ref>. To assess the improvement in lifetime for the two single-repumping schemes, we also measure the decay rate for the transition from the 5s5p ^1P_1 state to the 5s5p ^3P_2 state by turning off the ^3P_2 (481 nm or 497 nm) repumping light while maintaining the ^3P_0 (483 nm) repumping light (see Appendix <ref>). Figure <ref> shows the enhancement, defined as the ratio of the MOT lifetimes with and without the single-repumping light (481 nm or 497 nm) for various detunings. This enhancement remains unaffected by both the single-repumping transition and the detuning of the trapping light. The weighted average enhancement factor is 26.9(2), consistent with the previously reported value of 32(5) for the 497-nm repumping scheme <cit.>. We note that Fig. <ref> indicates the enhancement factor is independent of the density of the trapped atoms because the density significantly depends on the detuning as described in Ref. <cit.>. The MOT lifetime, depicted in Fig. <ref>(a), using solely the ^3P_2 (481 nm or 497 nm) repumping light, is less than 1 s, considerably shorter compared with when both the ^3P_2 and ^3P_0 repumping laser beams are employed (15 s). Previous studies employing a single-repumping scheme also consistently reported trap lifetimes of less than 1 s <cit.>. The decay from the upper state of the repumping transition to the ^3P_0 state via intermediate states was mentioned <cit.>. If this hypothesis is the case, the enhancement factor should depend on the upper state (see Appendix <ref>). However, as depicted in Fig. <ref>, the enhancement factor remains unaffected by both the two single-repumping schemes, indicating that the decay from the upper state of the repumping transition to the ^3P_0 state is negligible. 
This is supported by the fact that, for the 497 nm repumping scheme, the branching ratio for the dipole-allowed transitions [5s5d ^3D_2 → 5s6p ^3P_2 → 5s6s ^3S_1 (5s4d ^3D_1) → 5s5d ^3P_0] was estimated to be quite small (0.035%) <cit.>, and, for the 481 nm repumping scheme, there is no intermediate states through which the upper state 5p^2 ^3P_2 is connected to the 5s5p ^3P_0 state by dipole-allowed transitions <cit.>. As an alternative reason for the shorter MOT lifetimes for single-repumping schemes, binary collisional loss involving atoms in excited states was also mentioned <cit.>. However, the single exponential decay observed in trapped atoms, as depicted in Fig. <ref>(a), rules out possible decay due to binary collisions. This is further supported by Fig. <ref>, showing that the MOT lifetime enhancement is unaffected by detuning of the trapping light (i.e., MOT density). Our results strongly indicate, as discussed in Appendix <ref>, that the primary decay path from the 5s5p ^1P_1 state to the 5s5p ^3P_0 state proceeds via the 5s4d ^3D_1 state as shown in Fig. <ref> (5s5p ^1P_1 → 5s4d ^3D_1 → 5s5p ^3P_0). The branching ratio from the 5s5p ^1P_1 state (via the 5s4d ^3D_1 state) to the 5s5p ^3P_0 state is 1:3.9 × 10^6, derived from the branching ratio for the transition from the 5s5p ^1P_1 state (via the 5s4d ^1D_2 state) to the 5s5p ^3P_2 state (1:150 000) <cit.> and the observed enhancement factor of 26.9(2). Considering the branching ratio for the transition from the 5s4d ^3D_1 state to the 5s5p ^3P_1 state (60%) <cit.>, the decay rate for the transition from the 5s5p ^1P_1 state to the 5s4d ^3D_1 state is 83(32) s^-1 (see Appendix <ref>); the uncertainty primarily arises from the decay rate for the transition from the 5s5p ^1P_1 state to the 5s4d ^1D_2 state [3.9(1.5) × 10^3 s^-1] <cit.>. Our results indicate that for a single-repumping scheme, if the loading rate of atoms is low enough to require a long loading time (≳ 1 s), the atom number in the MOT is considerably limited. This is experimentally demonstrated in Fig. <ref>. For instance, at an oven temperature of 455 ^∘ C with a loading rate of 4 × 10^7 s^-1, the number of trapped atoms saturates within 1 s and the reduction in the number of trapped atoms for the single-repumping scheme is only 40% compared with the dual-repumping scheme [Fig. <ref>(a)]. Conversely, at an oven temperature of 285 ^∘ C with a loading rate of 2 × 10^4 s^-1, the loading time of the MOT for the dual-repumping scheme exceeds 10 s, and the number of trapped atoms is less than 10^4 for the single-repumping scheme, being only 1/50 of that for the dual-repumping scheme [Fig. <ref>(b)]. Considering the requirements for field-deployable or spaceborne optical lattice clocks <cit.>, a small-sized and energy-saving atomic beam source is essential. Achieving these goals necessitates operating the atom oven at low temperatures and avoiding the use of Zeeman slowers and 2-dimensional MOTs <cit.>. In this scenario, from which high MOT loading rates cannot be expected, a dual-repumping scheme should be adopted. § CONCLUSION We conducted a comparative analysis of a MOT of ^88 Sr atoms for two single-repumping schemes (481 nm and 497 nm). Our investigation revealed that the enhancement in lifetime was consistent at 26.9(2), demonstrating independence not only from the single-repumping transition but also from the density of trapped atoms. 
This outcome indicates that the primary decay path from the 5s5p ^1P_1 state to the 5s5p ^3P_0 state proceeds via the 5s4d ^3D_1 state (5s5p ^1P_1 → 5s4d ^3D_1 → 5s5p ^3P_0), a mechanism previously overlooked. Owing to this decay path, the trapped atom number should be significantly restricted when employing a single-repumping scheme, particularly when a long loading time (≳ 1 s) is required. We estimated the branching ratio for the transition from the 5s5p ^1P_1 state (via the 5s4d ^3D_1 state) to the 5s5p ^3P_0 state to be 1:3.9 × 10^6 and the decay rate for the transition from the 5s5p ^1P_1 state to the 5s4d ^3D_1 state to be 83(32) s^-1. Our findings underscore the need to reevaluate single-repumping schemes for Sr laser cooling. § ACKNOWLEDGMENTS We thank T. Sato and M. Mori for their contributions to the experiments. This work was supported by JSPS KAKENHI Grant Numbers 23K20849 and 22KJ1163. § RATE EQUATIONS We present the rate equations describing the number of trapped atoms for various schemes: * dual-repumping (^3P_2 and ^3P_0) scheme * scheme with only the ^3P_0 repumping laser * single-repumping (^3P_2) scheme * scheme with no repumping laser For scheme <ref>, the rate equation for the number of trapped atoms is given by dN/dt = R - γ_c N - β N^2, where N denotes the number of trapped atoms, R the atom loading rate, γ_c the background collisional loss rate, and β the coefficient for two-body collisional loss. Under our experimental conditions (oven temperature of 335 ^∘ C), γ_c and β N are of the order of 0.1 s^-1, confirmed by observing the decay of the number of trapped atoms after shutting off the atomic beam. For scheme <ref>, in which atoms decaying to the ^3P_2 state are lost, the rate equation is given by dN/dt = R - f a_2 N - γ_c N - β N^2, where a_2 denotes the decay rate for the transition from the ^1P_1 state to the ^3P_2 state and f the excitation fraction of the 5s^2 ^1S_0 - 5s5p ^1P_1 (461 nm) transition, which is given by f = (1/2) s_0/[1 + s_0 + 4(Δ/Γ)^2], where s_0 = I/I_s (I being the 461-nm laser beam intensity and I_s = 40 mW/cm^2 the saturation intensity) denotes the resonant saturation parameter, Δ the detuning, and Γ = 2π × 30 MHz the natural width. According to Ref. <cit.>, a_2 is given by a_2 = A_^1P_1 →^1D_2 B_^1D_2 →^3P_2, where A_^1P_1 →^1D_2 = 3.85(1.47)×10^3 s^-1 <cit.> denotes the decay rate for the transition from the ^1P_1 state to the ^1D_2 state, and B_^1D_2 →^3P_2 = 0.33 <cit.> the branching ratio from the ^1D_2 state to the ^3P_2 state. Because the value of f is typically ∼ 0.1 for a standard MOT, f a_2 is ∼ 100 s^-1, which is much larger than γ_c and β N. Thus, Eq. (<ref>) can be approximated by dN/dt = R - f a_2 N = R - N/τ_2, where τ_2 is the lifetime of the MOT only with ^3P_0 repumping laser, which is expressed as τ_2 = 1/[f A_^1P_1 →^1D_2 B_^1D_2 →^3P_2]. For scheme <ref>, in which atoms decaying to the ^3P_0 state are lost, the rate equation is given by dN/dt = R - f a_0(X) N - γ_c N - β N^2, where a_0(X) denotes the decay rate for the transition from the ^1P_1 state to the ^3P_0 state, and X the upper state of the single-repumping transition (X = 5p^2 ^3P_2 for 481 nm and X = 5s5d ^3D_2 for 497 nm). From Fig. <ref>(a), the observed decay rates of the MOT for the single-repumping scheme are of order 10 s^-1, which is still much larger than γ_c and β N. Therefore, Eq. (<ref>) can be approximated by dN/dt = R - f a_0(X) N = R - N/τ_0(X), where τ_0(X) is the MOT lifetime, which is expressed as τ_0(X) = 1/[f a_0(X)].
For scheme <ref>, in which atoms decaying to the ^3P_0 or ^3P_2 state are lost, the rate equation is given by dN/dt = R - f (a_0(X) + a_2) N = R - N/τ_norep(X), for which terms γ_c N and β N^2 are neglected, as in the scheme above, and τ_norep(X) denotes the MOT lifetime, expressed as τ_norep(X) = 1/[f (a_0(X)+a_2)]. To eliminate the uncertainty in the estimation of f, we measure the ratio r(X) = τ_0(X)/τ_2, which is expressed as r(X) = a_2/a_0(X). The enhancement factor ϵ(X) = τ_0(X)/τ_norep(X) for the single-repumping scheme is then given by ϵ(X) = [a_0(X)+a_2]/a_0(X) = 1+r(X). § ESTIMATION OF THE DECAY RATE FROM THE 5S5P ^1P_1 STATE TO THE 5S4D ^3D_1 STATE The energy levels of the 5s4d ^1D_2 state and the 5s4d ^3D_J state are below the 5s5p ^1P_1 state (see Fig. <ref>). However, there is no electric-dipole allowed (E1) decay path from the 5s5p ^1P_1 state to the 5s5p ^3P_0 state through these states. Our experimental results show that the decay from the upper state of the 5s5p ^3P_2 repumping transition to the ^3P_0 state via intermediate states is negligible. Therefore, we need to identify the decay path from the 5s5p ^1P_1 state to the 5s5p ^3P_0 state via the 5s4d ^1D_2 state or the 5s4d ^3D_J state. Considering that the decay rate for the spin-forbidden transition of 5s4d ^1D_2 - 5s5p ^3P_2 (1.9 μm) is 661.1 s^-1 (see Table <ref> in Appendix <ref>), it is reasonable to expect that the decay rate for the spin-forbidden transition of 5s5p ^1P_1 - 5s4d ^3D_1 (2.8 μm) is of order 10^2 ∼ 10^3 s^-1. Thus, we assume that the primary decay path from the 5s5p ^1P_1 state to the 5s5p ^3P_0 state is 5s5p ^1P_1 → 5s4d ^3D_1 → 5s5p ^3P_0. Under this assumption, a_0 can be expressed as a_0 = A_^1P_1 →^3D_1 B_^3D_1 →^3P_0, where A_^1P_1 →^3D_1 denotes the decay rate for the transition from the ^1P_1 state to the ^3D_1 state, and B_^3D_1 →^3P_0 = 0.595 <cit.> the branching ratio from the ^3D_1 state to the ^3P_0 state. From Eqs. (<ref>) and (<ref>), A_^1P_1 →^3D_1 is expressed as A_^1P_1 →^3D_1 = a_2/[r B_^3D_1 →^3P_0] = A_^1P_1 →^1D_2 B_^1D_2 →^3P_2/[r B_^3D_1 →^3P_0]. All the constants on the right-hand side of Eq. (<ref>) are known from previous studies and our experiments: A_^1P_1 →^1D_2 = 3.85(1.47)×10^3 s^-1 <cit.>, B_^1D_2 →^3P_2 = 0.33 <cit.>, B_^3D_1 →^3P_0 = 0.595 <cit.>, r = ϵ - 1 = 25.9(2) [see Eq. (<ref>) and Fig. <ref>]. Then, one can calculate the decay rate for the 5s5p ^1P_1 - 5s4d ^3D_1 transition as A_^1P_1 →^3D_1 = 83(32) s^-1. § ESTIMATION OF THE EXCITATION FRACTION AND THE SATURATION PARAMETER In general, estimating the total intensity of the MOT beam by measuring the beam power is difficult because of the uncertainties in gauging the shape of the beam and the loss of beam power in various optical components. To circumvent this problem, we adopt a method based on the excitation fraction. From Eq. (<ref>), the excitation fraction of the 461-nm transition is expressed as f = 1/[τ_2 A_^1P_1 →^1D_2 B_^1D_2 →^3P_2], where τ_2 denotes the MOT lifetime obtained only with a ^3P_0 repumping laser; the values of A_^1P_1 →^1D_2 and B_^1D_2 →^3P_2 are found in Refs. <cit.> and <cit.>, respectively (see Appendix <ref>). Using Eq. (<ref>), we can estimate f by measuring τ_2, with a relative uncertainty of 40% stemming mainly from that of A_^1P_1 →^1D_2. From Eq. (<ref>), we can infer the resonant saturation parameter s_0, and hence the total intensity I = s_0 I_s, where I_s = 40 mW/cm^2 is the saturation intensity.
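To make the arithmetic of the two appendices above concrete, the short Python sketch below propagates the quoted uncertainties of A_^1P_1 →^1D_2 and r into A_^1P_1 →^3D_1, and then inverts the expressions for f and s_0. The decay rate, branching ratios, and lifetime ratio are the values quoted in the text; the value of τ_2 and the detuning used at the end are hypothetical example inputs, not numbers taken from the paper.

import numpy as np

# Inputs quoted in the text above (decay rate, branching ratios, lifetime ratio).
A_1P1_1D2, dA_1P1_1D2 = 3.85e3, 1.47e3   # s^-1, 5s5p 1P1 -> 5s4d 1D2 and its uncertainty
B_1D2_3P2 = 0.33                          # branching ratio 5s4d 1D2 -> 5s5p 3P2
B_3D1_3P0 = 0.595                         # branching ratio 5s4d 3D1 -> 5s5p 3P0
r, dr = 25.9, 0.2                         # measured lifetime ratio tau_0 / tau_2

# A(1P1 -> 3D1) = A(1P1 -> 1D2) B(1D2 -> 3P2) / [r B(3D1 -> 3P0)]
A_1P1_3D1 = A_1P1_1D2 * B_1D2_3P2 / (r * B_3D1_3P0)
rel = np.hypot(dA_1P1_1D2 / A_1P1_1D2, dr / r)     # uncorrelated first-order error propagation
print(f"A(1P1->3D1) = {A_1P1_3D1:.0f} +/- {A_1P1_3D1 * rel:.0f} s^-1")   # ~83 +/- 32 s^-1

# Excitation fraction and saturation parameter from a hypothetical measured tau_2.
tau_2 = 10e-3                             # s, hypothetical MOT lifetime with only the 3P0 repumper
f = 1.0 / (tau_2 * A_1P1_1D2 * B_1D2_3P2)
delta_over_gamma = -40.0 / 30.0           # Delta / Gamma, hypothetical trapping-light detuning
s0 = 2 * f * (1 + 4 * delta_over_gamma**2) / (1 - 2 * f)   # invert f = (1/2) s0 / (1 + s0 + 4 (D/G)^2)
print(f"f = {f:.3f}, s_0 = {s0:.2f}, I = {40.0 * s0:.0f} mW/cm^2")   # using I_s = 40 mW/cm^2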
§ LIST OF THE DECAY RATES We list the decay rates for the transitions relevant to our work in Table <ref>.
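The practical consequence discussed in the main text, namely that a single-repumping scheme caps the atom number once the loading time exceeds the leak-limited lifetime, can be illustrated by integrating the approximate rate equations derived above. The loading rate and background loss rate below are motivated by the low-loading-rate case quoted in the text (oven at 285 ^∘ C, 15 s vacuum-limited lifetime); the single-repumping leak rate corresponding to a roughly 0.3 s lifetime is an assumed example value, and the two-body term β N^2 is neglected.

import numpy as np
from scipy.integrate import solve_ivp

R = 2e4                 # atoms/s, low loading rate (oven at ~285 C, from the text)
gamma_c = 1.0 / 15.0    # 1/s, background-collision loss (15 s vacuum-limited lifetime)
leak = 1.0 / 0.3        # 1/s, assumed f*a_0 leak to 5s5p 3P0 with a single 3P2 repumper

def dual(t, N):         # scheme 1: both repumpers, background losses only
    return R - gamma_c * N

def single(t, N):       # scheme 3: single repumper, atoms also leak to 3P0
    return R - (gamma_c + leak) * N

t_eval = np.linspace(0.0, 30.0, 301)
for label, rhs in [("dual-repumping", dual), ("single-repumping", single)]:
    sol = solve_ivp(rhs, (0.0, 30.0), [0.0], t_eval=t_eval)
    print(f"{label:>17s}: N(30 s) ~ {sol.y[0, -1]:.1e} atoms")
# The steady-state ratio R/gamma_c versus R/(gamma_c + leak) is about 50 here, of the
# same size as the roughly fifty-fold reduction quoted for long loading times.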
http://arxiv.org/abs/2406.19348v1
20240627172602
Theory of superconductivity in twisted transition metal dichalcogenide homobilayers
[ "Jihang Zhu", "Yang-Zhi Chou", "Ming Xie", "Sankar Das Sarma" ]
cond-mat.supr-con
[ "cond-mat.supr-con", "cond-mat.mes-hall", "cond-mat.str-el" ]
jizhu223@gmail.com Condensed Matter Theory Center and Joint Quantum Institute, Department of Physics, University of Maryland, College Park, Maryland 20742, USA Condensed Matter Theory Center and Joint Quantum Institute, Department of Physics, University of Maryland, College Park, Maryland 20742, USA Condensed Matter Theory Center and Joint Quantum Institute, Department of Physics, University of Maryland, College Park, Maryland 20742, USA Condensed Matter Theory Center and Joint Quantum Institute, Department of Physics, University of Maryland, College Park, Maryland 20742, USA § ABSTRACT For the first time, robust superconductivity has been independently observed in twisted WSe_2 bilayers by two separate groups [Y. Xia et al., arXiv:2405.14784; Y. Guo et al., arXiv:2406.03418.]. In light of this, we explore the possibility of a universal superconducting pairing mechanism in twisted WSe_2 bilayers. Using a continuum band structure model and a phenomenological boson-mediated effective electron-electron attraction, we find that intervalley intralayer pairing predominates over interlayer pairing. Notably, despite different experimental conditions, both twisted WSe_2 samples exhibit a comparable effective attraction strength. This consistency suggests that the dominant pairing glue is likely independent of the twist angle and layer polarization, pointing to a universal underlying boson-induced pairing mechanism. Theory of superconductivity in twisted transition metal dichalcogenide homobilayers Sankar Das Sarma July 1, 2024 =================================================================================== Introduction— The emergence of moiré superlattice in twisted transition metal dichalcogenide (TMD) homobilayers has led to the observation of a rich set of strongly correlated phases <cit.>, from correlated insulators at integer fillings to fractional Chern insulators in the absence of an external magnetic field. While the phases driven by strong electron-electron interaction manifest in the moiré TMD bilayers, robust observable superconductivity (SC) at dilute doping densities–associated with a few carriers per moiré unit cell–has remained elusive in most moiré TMD systems <cit.>. On the other hand, both moiré and moiréless graphene multilayers have demonstrated reproducible SC with T_c ranging from 20 mK to 3 K <cit.>, indicating that SC is a generic feature in graphene-based multilayers. The source of such a striking difference between moiré TMD and moiré graphene systems is an important unsolved puzzle in condensed matter and material physics. Recently, two groups have independently discovered robust SC in twisted WSe_2 bilayers (tWSe_2). One group reported SC at a filling factor ν_ SC=-1 with a twist angle of θ=3.65^∘ and a small displacement field <cit.>. The critical temperature was estimated to be 0.22 K. The other group observed SC around ν_ SC=-1.1 with a twist angle of θ=5^∘ and a large displacement field <cit.>. The critical temperature was approximately 0.426 K. We will refer to these two experimental samples as Sample A and Sample B, respectively, throughout our paper. In this Letter, we investigate SC in tWSe_2 using a realistic continuum band structure model and a phenomenological boson-mediated BCS model without assuming a specific pairing mechanism. We demonstrate the dominance of intervalley intralayer pairing with an order parameter consistent with a mixture of s and f waves <cit.>. 
Our estimates of the effective coupling constants of SC in two different experiments show very similar values, suggesting a universal dominant pairing mechanism in tWSe_2 (i.e., the same bosonic glue). The possible microscopic mechanisms and the importance of the in-plane magnetic field response are also discussed. Our work represents the first systematic investigation of the SC phenomenology in tWSe_2 systems, paving the way for future exploration of SC in moiré TMD systems. Band Structure Model— Our phenomenological superconducting mean-field theory is based on the low-energy continuum model of tWSe_2 <cit.>, of which the valley-projected Hamiltonian is ℋ = [ h(q_b) + U_b(r) +V_z/2 T(r); T^†(r) h(q_t) + U_t(r) -V_z/2 ], where h(q) = -ħ^2q^2/2m^* is the effective mass approximation of the valence band edge at the Dirac point κ_l of layer l=b,t. q_l = k-κ_l is the momentum measured from the Dirac points, and V_z is the interlayer energy difference due to the displacement field. The layer-dependent moiré potential U_l(r) and the interlayer tunneling T(r) are spatially periodic with moiré periodicity, U_t,b(r) = 2V∑_j=1,3,5cos(G_j ·r±ψ), T(r) = w(1+e^-iG_2 ·r+e^-iG_3 ·r). G_1 = (1,0)b_ M, G_j = ℛ_(j-1)π/3G_1 are the first-shell moiré reciprocal lattice vectors, with b_ M = 4πθ/√(3) a_0 as the length of moiré primitive reciprocal lattice vector. In the calculations throughout this paper, we use the continuum model parameters fitted by large-scale DFT calculations <cit.>: V=9 meV, ψ=128^∘, w=18 meV, lattice constant a_0 = 3.317 Å, and the effective mass m^*=0.43m_e with m_e being the electron mass. Figures <ref> and <ref> show the moiré band structures, density of states (DOS) and layer polarization l_z in momentum and real space for Samples A and B. The layer polarization operator l̂_z is represented by the Pauli matrix acting on the layer subspace. In Sample A, the layer is weakly polarized due to the small displacement field, corresponding to an interlayer energy difference of approximately 2.1 meV <cit.>. In momentum space, as shown in Fig. <ref>(c), the wave function is localized on each layer around its Dirac point. In real space, depicted in Fig. <ref>(d-e), the layer projected density forms two triangular lattices centered on XM or MX local stackings, creating an effective honeycomb lattice with a small onsite energy difference. The DOS plot in Fig. <ref>(b) shows a Van Hove singularity (VHS) around ν=-0.77. The SC observed at ν_ SC=-1 is only ∼ 1.5 meV away from the VHS (inset of Fig. <ref>(b)). In our notation, the filling factor ν represents the number of electrons per moiré unit cell. In Sample B, a large displacement field induces an interlayer energy difference of ∼ 43.75 meV <cit.>, resulting in strong layer polarization (Fig. <ref>(c-e)). The VHS is located at ν=-1.14 (Fig. <ref>(b)), which is less than 0.5 meV away from the chemical potential at the filling factor ν_ SC=-1.1, where SC was observed. This proximity is conducive to pairing instability. Despite the significant differences in band structure, layer polarization, and effective lattice model between Samples A and B, we explore the possibility that the same underlying mechanism is responsible for SC in both cases in the next section. Superconductivity— Closely connected to the two experiments, we study the effective pairing strength in Samples A and B using a phenomenological BCS theory without assuming a specific pairing mechansim. 
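As a concrete illustration of the continuum band-structure model described above, the following is a minimal plane-wave sketch (not the authors' code). It uses the quoted parameters V, ψ, w, a_0, and m^*, but the assignment of κ_b and κ_t to specific moiré-BZ corners, the per-layer sign of ψ, the alternating phase around the first shell, the Sample-A value of V_z, and the basis cutoff are simplifying assumptions of the sketch, so the output should be read as a qualitative moiré band structure rather than a reproduction of the paper's figures. The pairing analysis discussed next operates on the first moiré valence band obtained from such a diagonalization.

import numpy as np

hbar2_over_me = 7.6199                      # hbar^2/m_e in eV*Angstrom^2
V, psi, w = 9e-3, np.deg2rad(128.0), 18e-3  # moire potential, phase, tunneling (eV, rad, eV)
a0, mstar, theta = 3.317, 0.43, np.deg2rad(3.65)   # Angstrom, units of m_e, twist angle
Vz = 2.1e-3                                 # eV, interlayer energy difference (Sample A value)

bM = 4 * np.pi * theta / (np.sqrt(3) * a0)  # moire reciprocal lattice constant (1/Angstrom)
rot = lambda a: np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
G = [rot(j * np.pi / 3) @ np.array([bM, 0.0]) for j in range(6)]    # first-shell G_1..G_6
kappa_b = (G[0] + G[1]) / 3.0               # assumed layer Dirac points at moire-BZ corners
kappa_t = (G[0] + G[5]) / 3.0

N = 4                                        # plane-wave cutoff
Gvecs = [n * G[0] + m * G[1] for n in range(-N, N + 1) for m in range(-N, N + 1)]
Gvecs = [g for g in Gvecs if np.linalg.norm(g) < (N - 0.5) * bM]
nG = len(Gvecs)

def hamiltonian(k):
    H = np.zeros((2 * nG, 2 * nG), dtype=complex)   # block order: (bottom layer, top layer)
    for i, Gi in enumerate(Gvecs):
        H[i, i] = -hbar2_over_me / (2 * mstar) * np.sum((k + Gi - kappa_b) ** 2) + Vz / 2
        H[nG + i, nG + i] = -hbar2_over_me / (2 * mstar) * np.sum((k + Gi - kappa_t) ** 2) - Vz / 2
        for j, Gj in enumerate(Gvecs):
            d = Gj - Gi
            for s, Gs in enumerate(G):               # first-shell moire potential
                if np.allclose(d, Gs, atol=1e-10):
                    phase = psi if s % 2 == 0 else -psi   # assumed alternating +/- psi convention
                    H[i, j] += V * np.exp(-1j * phase)           # bottom layer
                    H[nG + i, nG + j] += V * np.exp(1j * phase)  # top layer (opposite sign assumed)
            # interlayer tunneling T(r) = w (1 + e^{-i G_2 r} + e^{-i G_3 r})
            if any(np.allclose(d, x, atol=1e-10) for x in (np.zeros(2), -G[1], -G[2])):
                H[nG + i, j] += w
    H[:nG, nG:] = H[nG:, :nG].conj().T               # make the tunneling blocks Hermitian partners
    return H

for label, k in [("gamma", np.zeros(2)), ("kappa_b", kappa_b), ("kappa_t", kappa_t)]:
    E = np.linalg.eigvalsh(hamiltonian(k))
    print(label, np.round(1e3 * E[-3:], 1), "meV")   # top three moire valence bands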
We will discuss the possible pairing glues, e.g., phonons and magnons, at the end of this Letter. Given that no evidence of time-reversal symmetry (TRS) breaking was observed near the superconducting phase in either sample, we consider only the intervalley pairing that preserves TRS. In the following part, we focus on intralayer pairing, as detailed in Appendix A, the interlayer pairing is significantly weaker by a factor of five to ten. The dominance of the intralayer pairing is a direct consequence of the layer polarization patterns as shown in Figs. <ref>(c) and <ref>(c). The effective electron-electron attraction mediated by intralayer bosons is given by H_ att = -g/2A∑_l,qn̂_l(q) n̂_l(-q), where A is the system area, and we approximate the intervalley intralayer pairing strength by a tunable static and momentum independent potential g. The density operator, n̂_l(q), is defined as n̂_l(-q) = ∑_τ,k,Gψ^†_τ, l(k+G) ψ_τ, l(k+G-q), where τ=± is the valley index, l=t,b represents the layer degree of freedom and k is in the first moiré Brillouin zone (MBZ). Thus, the effective attraction Eq. (<ref>) is H_ att = -g/A∑_l,k,k' G,G' ψ^†_+,l(k+G) ψ^†_-,l(-k-G) ψ_-,l(-k'-G') ψ_+,l(k'+G'). The factor of 2 in Eq. (<ref>) is canceled by summing over valleys and keeping only the intervalley terms. Projecting to the first moiré valence band of tWSe_2, the effective pairing Hamiltonian becomes H_ p = -1/A∑_k,k' g_k,k' c^†_+(k) c^†_-(-k) c_-(-k') c_+(k'), g_k,k' = g∑_l,G,G' |z_+,l,G(k)|^2 |z_+,l,G'(k')|^2, where c^† (c) is the quasiparticle creation (annihilation) operator in the plane-wave expansion, c^†_τ(k) = ∑_l,G z_τ,l,G(k) ψ^†_τ,l(k+G), with G and G' being the moiré reciprocal lattice vectors. For T≈ T_c, the linearized gap equation in the BCS mean-field approximation is Δ_k = 1/A∑_k' g_k,k'tanh(ε_+(k')-μ/2k_ BT)/2(ε_+(k')-μ)Δ_k', ε_τ(k) is the quasiparticle eigenenergy of the first moiré valence band and μ is the chemical potential. In obtaining Eq. (<ref>) and (<ref>), we have used the time-reversal symmetry properties ε_+(k) = ε_-(-k), z_+,l,G(k) = z^*_-,l,-G(-k). Equation (<ref>) can be rewritten in the matrix form: Δ = gMΔ. At the critical point T=T_c, Eq. (<ref>) has only one stable solution. Given the electron-boson coupling strength g, T_c is found by obtaining the largest eigenvalue of M to be g^-1. Equivalently, for a given temperature T_c, the critical electron-boson coupling strength g^* is the inverse of the maximum eigenvalue of M. In Fig. <ref>(a-b), we show g^* as a function of filling factor ν for Sample A, given the experimental T_c = 0.22 K, and for Sample B, given the experimental T_c = 0.426 K. g^* reaches its minimum at the VHS for both samples, around ν = -0.77 for Sample A (Fig. <ref>(b)) and ν=-1.1 for Sample B (Fig. <ref>(b)). The minimum g^* is approximately 80 (105) meV·nm^2 for Sample A (B). At the filling factor where robust SC was observed, g^*_ SC≈ 120 meV·nm^2 (at ν_ SC=-1) for Sample A, and g^*_ SC≈ 105 meV·nm^2 (at ν_ SC=-1.1) for Sample B. The similar values of g^*_ SC in the two very different samples strongly suggest intralayer pairing with a universal bosonic glue producing SC. We note that the interlayer pairing plays a minor role in this system, as detailed in Appendix B, the corresponding electron-boson coupling strength g^*_ inter is at least five times larger than the intralayer g^* for Sample A and ten times larger for Sample B. In Fig <ref>(c-d), T_c is shown as a function of ν for several representative values of g. 
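The recipe just described, obtaining the critical coupling g^* as the inverse of the largest eigenvalue of M at the experimental T_c, can be illustrated with a deliberately simplified example. The sketch below replaces the moiré band and form factors with a parabolic hole band and a momentum-independent coupling, in which case M is rank one and its largest eigenvalue is the usual BCS pair susceptibility; the bandwidth, chemical potential, and grid are arbitrary choices, so only the order of magnitude of the printed g^* should be compared with the values quoted above.

import numpy as np

kB = 8.617e-5                    # Boltzmann constant in eV/K
Tc = 0.22                        # K, experimental T_c of Sample A
hh = 3.81 / 0.43                 # hbar^2/(2 m*) in eV*Angstrom^2 for m* = 0.43 m_e

kmax, Nk = 0.07, 61              # toy k-grid (1/Angstrom); sets an effective bandwidth of ~40 meV
ks = np.linspace(-kmax, kmax, Nk)
KX, KY = np.meshgrid(ks, ks)
eps = -hh * (KX**2 + KY**2)      # parabolic hole band (eV), standing in for the moire band
mu = -0.010                      # eV, chemical potential placed inside the band (arbitrary choice)

x = (eps - mu).ravel()
kT = kB * Tc
chi = np.where(np.abs(x) > 1e-12, np.tanh(x / (2 * kT)) / (2 * x), 1.0 / (4 * kT))

dk2 = (ks[1] - ks[0]) ** 2
# For a momentum-independent coupling g_{k,k'} = g, M is rank one and its largest
# eigenvalue is the pair susceptibility below; with structured form factors one
# would build and diagonalize the full matrix M instead.
lam_max = np.sum(chi) * dk2 / (2 * np.pi) ** 2   # 1/(eV*Angstrom^2)
g_star = 1.0 / lam_max                           # eV*Angstrom^2
print(f"toy critical coupling g* ~ {10 * g_star:.0f} meV nm^2")  # 1 eV*Angstrom^2 = 10 meV*nm^2

With the moiré band energies and projected form factors in place of these toy inputs, the same eigenvalue recipe yields the g^* and T_c trends discussed in the surrounding text.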
In both samples, T_c reaches its maximum at the VHS and remains observable away from the VHS, similar to the situation found in the graphene multilayers <cit.>. We note that non-adiabatic vertex corrections <cit.>, which we ignore, might become important for doping densities very close to VHS. This is an interesting subject for future work. To further explore the dependence of g^* and T_c on the displacement field, which is a common experimental tuning parameter, we show phase diagrams of g^* and T_c in Appendix B for twist angles θ=3.65^∘ and θ=5^∘. In both cases, the minimum g^* and maximum T_c track the VHS. For T < T_c, the order parameter Δ_k is solved self-consistently, Δ_k = 1/A∑_k' g_k,k'tanh(√(ξ^2_k'+|Δ_k'|^2)/2k_ BT)/2√(ξ^2_k'+|Δ_k'|^2)Δ_k', where ξ_k = ε_+(k) - μ. At T=0, the hyperbolic tangent term simplifies to 1. The k-space distributions of the order parameters for Samples A and B, calculated at T=0, are shown in Fig. <ref>. Since we consider only intralayer pairing, the symmetry of Δ_k closely matches that of the layer polarization shown in Fig. <ref>(c) and Fig. <ref>(c), indicating a mixture of s and f waves <cit.>. In Sample A, Δ̅_k/k_BT_c = 1.84, and Δ̅_k/k_BT_c = 1.66 in Sample B, where Δ̅_k is the k-space average of Δ_k. Both ratios are close to the BCS mean-field value of 1.75. Discussion— We develop a BCS theory for the recently observed SC in tWSe_2. We establish that the SC likely arises from the same bosonic glue in two different experiments with different twist angles, displacement fields, and doping levels. The dominant pairing is intervalley intralayer interaction with an s+f order parameter symmetry, and the maximum T_c is not far from the VHS. Take the acoustic phonon as an example <cit.> for the bosonic glue, the calculated g^* is related to the deformation potential by D = v_s√(g^* ρ_m). Using the mass density ρ_m = 6.2 × 10^-7 g/cm^2 and sound velocity v_s=3.3 × 10^5 cm/s of monolayer WSe_2 <cit.>, g^*_ SC of Samples A and B in Fig. <ref>(a-b) correspond to deformation potentials of 7.1 eV and 6.7 eV, respectively, which are in the same order of magnitude as D=3.2 eV for monolayer WSe_2 estimated from previous density functional theory (DFT) calculations <cit.>. The quantitative extraction of the deformation potential is difficult and DFT estimates are often much smaller than experimental values, a common occurrence in semiconductors. It is also possible that the SC observed in both samples is mediated by magnons, as suggested by the close proximity of the SC to the correlated state, very likely of an antiferromagnetic order, and the T_c that peaks near the phase boundary between these states. Understanding the magnetic order is essential for characterizing the nature of spin fluctuations and the resulting superconducting state. At present, the underlying mechanism of SC remains an unresolved and intriguing question that future experimental and theoretical work should address. Next, we comment on the role of Coulomb interactions. Coulomb interactions may result in correlated states <cit.> that could preempt SC predicted by our phenomenological BCS theory. Moreover, the band renormalization effect, which we ignore, may quantitatively adjust the single-particle band structure used in this Letter. The Coulomb repulsion in the Cooper channel is effectively captured by our theory through the effective coupling constant g. 
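The deformation-potential estimate quoted above is a pure unit-conversion exercise and is easy to check numerically; the few lines below evaluate D = v_s √(g^* ρ_m) with the monolayer WSe_2 mass density and sound velocity given in the text and the g^*_SC values of the two samples. Reproducing the quoted 7.1 eV and 6.7 eV shows that only the conversion, not any additional fit, fixes these numbers.

import numpy as np

eV = 1.602176634e-19                    # J per eV
rho_m = 6.2e-7 * 1e-3 / 1e-4            # monolayer WSe2 mass density: g/cm^2 -> kg/m^2
v_s = 3.3e5 * 1e-2                      # sound velocity: cm/s -> m/s

for label, g_star in [("Sample A", 120.0), ("Sample B", 105.0)]:   # g*_SC in meV*nm^2
    g_SI = g_star * 1e-3 * eV * 1e-18   # meV*nm^2 -> J*m^2
    D = v_s * np.sqrt(g_SI * rho_m) / eV
    print(f"{label}: D ~ {D:.1f} eV")   # ~7.1 eV (A) and ~6.7 eV (B), as quoted above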
Microscopic calculations incorporating the frequency-dependent pairing and Coulomb interactions are necessary for quantitative understanding of the underlying SC mechanism in tWSe_2 <cit.>. Finally, we discuss the response to an in-plane magnetic field, which has been an important tool to discern the properties of SC in graphene-based materials <cit.>. In WSe_2, the Zeeman effect can be ignored due to large Ising spin-orbit coupling. However, the orbital effect might still be nontrivial as the separation between two WSe_2 layers is not small. We find that SC is suppressed by an in-plane magnetic field of a few Teslas because the nesting of intervalley pairing requires ε_+(k)=ε_-(-k), which is easily violated in the presence of a magnetic field. Additionally, an in-plane magnetic field may influence nearby correlated states; for example, in an antiferromagnetic state, spin fluctuation can be reduced by an applied magnetic field, weakening fluctuation-induced pairing. Systematic investigations along these lines are essential for understanding SC in tWSe_2. Acknowledgments— We thank Yuting Tan for useful discussions. Y.-Z. C. and J. Z. also acknowledge useful conversation with Yi Huang and Jay D. Sau. This work is supported by the Laboratory for Physical Sciences. § SUPPLEMENTARY INFORMATION § A. THE INTERLAYER PAIRING Similar to the derivation of intralayer pairing in the main text, the effective electron-electron attraction mediated by interlayer pairing is H_ att^ inter = -1/2 A∑_q V_g(q) ∑_l n̂_l(q) n̂_l̅(-q) = -g/A∑_k,k',lψ^†_+,l(k) ψ^†_-,l̅(-k) ψ_-,l̅(-k') ψ_+,l(k'), where l̅ represents the opposite layer of l. The corresponding pairing Hamiltonian and effective coupling are H_ p^ inter = -1/A∑_k,k' g^ inter_k,k' c^†_+(k) c^†_-(-k) c_-(-k') c_+(k'), g^ inter_k,k' = g∑_l,G,G' z^*_+,lG(k) z_+,l̅G(k) z^*_+,l̅G'(k') z_+,lG'(k'). Similar to Fig. <ref>(a-b), the critical electron-boson coupling strength g^*_ inter for interlayer pairing is shown in Fig. <ref>. For Sample A, g^*_ inter as a function of ν exhibits a similar trend to the g^* obtained from intralayer pairing (Fig. <ref>(a)), but is scaled up by a factor of ∼ 5. This is because the layer-projected wave function is localized in separate regions in k-space, as shown in Fig. <ref>(c), and the layer polarization has a weak dependence on ν due to the small displacement field. For Sample B, however, g^*_ inter is much larger than the intralayer pairing g^* (Fig. <ref>(b)) at small filling |ν| ≲ 0.4, because interlayer pairing is weaker for larger layer polarization. In Sample B, the interlayer pairing strength is at least ten times weaker than that of the intralayer pairing. § B. SUPERCONDUCTIVITY PHASE DIAGRAM To further explore the superconducting properties, we show the phase diagrams of g^* and T_c across a broad range of displacement fields (or equivalently interlayer energy difference V_z) and filling factors for twist angles θ=3.65^∘ and θ=5^∘ in Fig. <ref>, in which the conditions for experimentally realized SC in Samples A and B are marked by stars. In both cases, the minimum g^* and maximum T_c track the VHS. Remarkably, over the broad (V_z, ν) parameter space, g^* values are similar in both samples (Fig. <ref>(a-b)) with different twist angles. For the smaller twist angle (Fig. <ref>(a,c)), T_c is maximized at ν≈ -0.8 for a small displacement field, and at ν≲ -1 for an intermediate displacement field, followed by a decrease in T_c with increasing V_z. For the larger twist angle (Fig. 
<ref>(b,d)), the VHS is pinned near ν=-1, as well as maximum T_c. In Fig. <ref>(d), the maximum T_c remains nearly constant with varying V_z within the parameter range in our calculations. Note that the difference in maximum T_c between Fig. <ref>(c) and (d) results from the different g values used in these figures. 47 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Tang et al.(2020)Tang, Li, Li, Xu, Liu, Barmak, Watanabe, Taniguchi, MacDonald, Shan, and Mak]Tang2020 author author Y. Tang, author L. Li, author T. Li, author Y. Xu, author S. Liu, author K. Barmak, author K. Watanabe, author T. Taniguchi, author A. H. MacDonald, author J. Shan, and author K. F. Mak, 10.1038/s41586-020-2085-3 journal journal Nature volume 579, pages 353 (year 2020)NoStop [Regan et al.(2020)Regan, Wang, Jin, Utama, Gao, Wei, Zhao, Zhao, Zhang, Yumigeta, Blei, Carlström, Watanabe, Taniguchi, Tongay, Crommie, Zettl, and Wang]Regan2020 author author E. C. Regan, author D. Wang, author C. Jin, author M. I. B. Utama, author B. Gao, author X. Wei, author S. Zhao, author W. Zhao, author Z. Zhang, author K. Yumigeta, author M. Blei, author J. D. Carlström, author K. Watanabe, author T. Taniguchi, author S. Tongay, author M. Crommie, author A. Zettl, and author F. Wang, 10.1038/s41586-020-2092-4 journal journal Nature volume 579, pages 359 (year 2020)NoStop [Xu et al.(2020)Xu, Liu, Rhodes, Watanabe, Taniguchi, Hone, Elser, Mak, and Shan]Xu2020 author author Y. Xu, author S. Liu, author D. A. Rhodes, author K. Watanabe, author T. Taniguchi, author J. Hone, author V. Elser, author K. F. Mak, and author J. Shan, 10.1038/s41586-020-2868-6 journal journal Nature volume 587, pages 214 (year 2020)NoStop [Wang et al.(2020a)Wang, Shih, Ghiotto, Xian, Rhodes, Tan, Claassen, Kennes, Bai, Kim, Watanabe, Taniguchi, Zhu, Hone, Rubio, Pasupathy, and Dean]Wang2020 author author L. Wang, author E.-M. Shih, author A. Ghiotto, author L. Xian, author D. A. Rhodes, author C. Tan, author M. Claassen, author D. M. Kennes, author Y. Bai, author B. Kim, author K. Watanabe, author T. Taniguchi, author X. Zhu, author J. Hone, author A. Rubio, author A. N. Pasupathy, and author C. R. Dean, 10.1038/s41563-020-0708-6 journal journal Nature Materials volume 19, pages 861 (year 2020a)NoStop [Li et al.(2021a)Li, Jiang, Li, Zhang, Kang, Zhu, Watanabe, Taniguchi, Chowdhury, Fu, Shan, and Mak]Li2021 author author T. Li, author S. Jiang, author L. Li, author Y. Zhang, author K. Kang, author J. Zhu, author K. Watanabe, author T. Taniguchi, author D. Chowdhury, author L. Fu, author J. Shan, and author K. F. Mak, 10.1038/s41586-021-03853-0 journal journal Nature volume 597, pages 350 (year 2021a)NoStop [Ghiotto et al.(2021)Ghiotto, Shih, Pereira, Rhodes, Kim, Zang, Millis, Watanabe, Taniguchi, Hone, Wang, Dean, and Pasupathy]Ghiotto2021 author author A. Ghiotto, author E.-M. Shih, author G. S. S. G. Pereira, author D. A. Rhodes, author B. Kim, author J. Zang, author A. J. Millis, author K. Watanabe, author T. Taniguchi, author J. C. Hone, author L. Wang, author C. R. Dean, and author A. N. Pasupathy, 10.1038/s41586-021-03815-6 journal journal Nature volume 597, pages 345 (year 2021)NoStop [Li et al.(2021b)Li, Jiang, Shen, Zhang, Li, Tao, Devakul, Watanabe, Taniguchi, Fu, Shan, and Mak]Li2021a author author T. Li, author S. Jiang, author B. Shen, author Y. 
Zhang, author L. Li, author Z. Tao, author T. Devakul, author K. Watanabe, author T. Taniguchi, author L. Fu, author J. Shan, and author K. F. Mak, 10.1038/s41586-021-04171-1 journal journal Nature volume 600, pages 641 (year 2021b)NoStop [Foutty et al.(2024)Foutty, Kometter, Devakul, Reddy, Watanabe, Taniguchi, Fu, and Feldman]Foutty2024 author author B. A. Foutty, author C. R. Kometter, author T. Devakul, author A. P. Reddy, author K. Watanabe, author T. Taniguchi, author L. Fu, and author B. E. Feldman, 10.1126/science.adi4728 journal journal Science volume 384, pages 343 (year 2024)NoStop [Cai et al.(2023)Cai, Anderson, Wang, Zhang, Liu, Holtzmann, Zhang, Fan, Taniguchi, Watanabe, Ran, Cao, Fu, Xiao, Yao, and Xu]CaiJ2023a author author J. Cai, author E. Anderson, author C. Wang, author X. Zhang, author X. Liu, author W. Holtzmann, author Y. Zhang, author F. Fan, author T. Taniguchi, author K. Watanabe, author Y. Ran, author T. Cao, author L. Fu, author D. Xiao, author W. Yao, and author X. Xu, 10.1038/s41586-023-06289-w journal journal Nature volume 622, pages 63 (year 2023)NoStop [Zeng et al.(2023)Zeng, Xia, Kang, Zhu, Knüppel, Vaswani, Watanabe, Taniguchi, Mak, and Shan]ZengY2023a author author Y. Zeng, author Z. Xia, author K. Kang, author J. Zhu, author P. Knüppel, author C. Vaswani, author K. Watanabe, author T. Taniguchi, author K. F. Mak, and author J. Shan, 10.1038/s41586-023-06452-3 journal journal Nature volume 622, pages 69 (year 2023)NoStop [Park et al.(2023)Park, Cai, Anderson, Zhang, Zhu, Liu, Wang, Holtzmann, Hu, Liu, Taniguchi, Watanabe, Chu, Cao, Fu, Yao, Chang, Cobden, Xiao, and Xu]ParkH2023 author author H. Park, author J. Cai, author E. Anderson, author Y. Zhang, author J. Zhu, author X. Liu, author C. Wang, author W. Holtzmann, author C. Hu, author Z. Liu, author T. Taniguchi, author K. Watanabe, author J.-H. Chu, author T. Cao, author L. Fu, author W. Yao, author C.-Z. Chang, author D. Cobden, author D. Xiao, and author X. Xu, 10.1038/s41586-023-06536-0 journal journal Nature volume 622, pages 74 (year 2023)NoStop [Xu et al.(2023)Xu, Sun, Jia, Liu, Xu, Li, Gu, Watanabe, Taniguchi, Tong, Jia, Shi, Jiang, Zhang, Liu, and Li]XuF2023a author author F. Xu, author Z. Sun, author T. Jia, author C. Liu, author C. Xu, author C. Li, author Y. Gu, author K. Watanabe, author T. Taniguchi, author B. Tong, author J. Jia, author Z. Shi, author S. Jiang, author Y. Zhang, author X. Liu, and author T. Li, 10.1103/PhysRevX.13.031037 journal journal Phys. Rev. X volume 13, pages 031037 (year 2023)NoStop [Wang et al.(2020b)Wang, Shih, Ghiotto, Xian, Rhodes, Tan, Claassen, Kennes, Bai, Kim, Watanabe, Taniguchi, Zhu, Hone, Rubio, Pasupathy, and Dean]Wang2020_tWSe2 author author L. Wang, author E.-M. Shih, author A. Ghiotto, author L. Xian, author D. A. Rhodes, author C. Tan, author M. Claassen, author D. M. Kennes, author Y. Bai, author B. Kim, author K. Watanabe, author T. Taniguchi, author X. Zhu, author J. Hone, author A. Rubio, author A. N. Pasupathy, and author C. R. Dean, 10.1038/s41563-020-0708-6 journal journal Nature Materials volume 19, pages 861 (year 2020b)NoStop [Cao et al.(2018)Cao, Fatemi, Fang, Watanabe, Taniguchi, Kaxiras, and Jarillo-Herrero]CaoY2018 author author Y. Cao, author V. Fatemi, author S. Fang, author K. Watanabe, author T. Taniguchi, author E. Kaxiras, and author P. 
Jarillo-Herrero, 10.1038/nature26160 journal journal Nature volume 556, pages 43 (year 2018)NoStop [Yankowitz et al.(2019)Yankowitz, Chen, Polshyn, Zhang, Watanabe, Taniguchi, Graf, Young, and Dean]YankowitzM2019 author author M. Yankowitz, author S. Chen, author H. Polshyn, author Y. Zhang, author K. Watanabe, author T. Taniguchi, author D. Graf, author A. F. Young, and author C. R. Dean, 10.1126/science.aav1910 journal journal Science volume 363, pages 1059 (year 2019)NoStop [Lu et al.(2019)Lu, Stepanov, Yang, Xie, Aamir, Das, Urgell, Watanabe, Taniguchi, Zhang, Bachtold, MacDonald, and Efetov]LuX2019 author author X. Lu, author P. Stepanov, author W. Yang, author M. Xie, author M. A. Aamir, author I. Das, author C. Urgell, author K. Watanabe, author T. Taniguchi, author G. Zhang, author A. Bachtold, author A. H. MacDonald, and author D. K. Efetov, 10.1038/s41586-019-1695-0 journal journal Nature volume 574, pages 653 (year 2019)NoStop [Park et al.(2021)Park, Cao, Watanabe, Taniguchi, and Jarillo-Herrero]ParkJM2021 author author J. M. Park, author Y. Cao, author K. Watanabe, author T. Taniguchi, and author P. Jarillo-Herrero, 10.1038/s41586-021-03192-0 journal journal Nature volume 590, pages 249 (year 2021)NoStop [Hao et al.(2021)Hao, Zimmerman, Ledwith, Khalaf, Najafabadi, Watanabe, Taniguchi, Vishwanath, and Kim]HaoZ2021 author author Z. Hao, author A. M. Zimmerman, author P. Ledwith, author E. Khalaf, author D. H. Najafabadi, author K. Watanabe, author T. Taniguchi, author A. Vishwanath, and author P. Kim, 10.1126/science.abg0399 journal journal Science volume 371, pages 1133 (year 2021)NoStop [Cao et al.(2021)Cao, Park, Watanabe, Taniguchi, and Jarillo-Herrero]CaoY2021 author author Y. Cao, author J. M. Park, author K. Watanabe, author T. Taniguchi, and author P. Jarillo-Herrero, 10.1038/s41586-021-03685-y journal journal Nature volume 595, pages 526 (year 2021)NoStop [Oh et al.(2021)Oh, Nuckolls, Wong, Lee, Liu, Watanabe, Taniguchi, and Yazdani]OhM2021 author author M. Oh, author K. P. Nuckolls, author D. Wong, author R. L. Lee, author X. Liu, author K. Watanabe, author T. Taniguchi, and author A. Yazdani, 10.1038/s41586-021-04121-x journal journal Nature volume 600, pages 240 (year 2021)NoStop [Zhou et al.(2021)Zhou, Xie, Taniguchi, Watanabe, and Young]ZhouH2021 author author H. Zhou, author T. Xie, author T. Taniguchi, author K. Watanabe, and author A. F. Young, 10.1038/s41586-021-03926-0 journal journal Nature volume 598, pages 434 (year 2021)NoStop [Zhou et al.(2022)Zhou, Holleis, Saito, Cohen, Huynh, Patterson, Yang, Taniguchi, Watanabe, and Young]ZhouH2022 author author H. Zhou, author L. Holleis, author Y. Saito, author L. Cohen, author W. Huynh, author C. L. Patterson, author F. Yang, author T. Taniguchi, author K. Watanabe, and author A. F. Young, 10.1126/science.abm8386 journal journal Science volume 375, pages 774 (year 2022)NoStop [Su et al.(2023)Su, Kuiri, Watanabe, Taniguchi, and Folk]SuR2023 author author R. Su, author M. Kuiri, author K. Watanabe, author T. Taniguchi, and author J. Folk, 10.1038/s41563-023-01653-7 journal journal Nat. Mater. volume 22, pages 1332 (year 2023)NoStop [Zhang et al.(2023)Zhang, Polski, Thomson, Lantagne-Hurtubise, Lewandowski, Zhou, Watanabe, Taniguchi, Alicea, and Nadj-Perge]ZhangY2023 author author Y. Zhang, author R. Polski, author A. Thomson, author É. Lantagne-Hurtubise, author C. Lewandowski, author H. Zhou, author K. Watanabe, author T. Taniguchi, author J. Alicea, and author S. 
Nadj-Perge, 10.1038/s41586-022-05446-x journal journal Nature volume 613, pages 268 (year 2023)NoStop [Holleis et al.()Holleis, Patterson, Zhang, Yoo, Zhou, Taniguchi, Watanabe, Nadj-Perge, and Young]HolleisL2023 author author L. Holleis, author C. L. Patterson, author Y. Zhang, author H. M. Yoo, author H. Zhou, author T. Taniguchi, author K. Watanabe, author S. Nadj-Perge, and author A. F. Young, https://arxiv.org/abs/2303.00742 http://arxiv.org/abs/arXiv:2303.00742 arXiv:2303.00742 NoStop [Li et al.(2024)Li, Xu, Li, Li, Li, Watanabe, Taniguchi, Tong, Shen, Lu, Jia, Wu, Liu, and Li]LiC2024a author author C. Li, author F. Xu, author B. Li, author J. Li, author G. Li, author K. Watanabe, author T. Taniguchi, author B. Tong, author J. Shen, author L. Lu, author J. Jia, author F. Wu, author X. Liu, and author T. Li, 10.1038/s41586-024-07584-w journal journal Nature , pages 1 (year 2024)NoStop [Xia et al.()Xia, Han, Watanabe, Taniguchi, Shan, and Mak]KFMak_WSe2_2024 author author Y. Xia, author Z. Han, author K. Watanabe, author T. Taniguchi, author J. Shan, and author K. F. Mak, https://arxiv.org/abs/2405.14784 http://arxiv.org/abs/arXiv:2405.14784 arXiv:2405.14784 NoStop [Guo et al.()Guo, Pack, Swann, Holtzman, Cothrine, Watanabe, Taniguchi, Mandrus, Barmak, Hone, Millis, Pasupathy, and Dean]CRDean_WSe2_2024 author author Y. Guo, author J. Pack, author J. Swann, author L. Holtzman, author M. Cothrine, author K. Watanabe, author T. Taniguchi, author D. Mandrus, author K. Barmak, author J. Hone, author A. J. Millis, author A. N. Pasupathy, and author C. R. Dean, https://arxiv.org/abs/2406.03418 http://arxiv.org/abs/arXiv:2406.03418 arXiv:2406.03418 NoStop [Chou et al.(2022a)Chou, Wu, and Das Sarma]ChouYZ2022 author author Y.-Z. Chou, author F. Wu, and author S. Das Sarma, 10.1103/PhysRevB.106.L180502 journal journal Phys. Rev. B volume 106, pages L180502 (year 2022a)NoStop [Zegrodnik and Biborski(2023)]Zegrodnik_SC_tWSe2_2023 author author M. Zegrodnik and author A. Biborski, 10.1103/PhysRevB.108.064506 journal journal Phys. Rev. B volume 108, pages 064506 (year 2023)NoStop [Akbar et al.()Akbar, Biborski, Rademaker, and Zegrodnik]Akbar_SC_2024 author author W. Akbar, author A. Biborski, author L. Rademaker, and author M. Zegrodnik, https://arxiv.org/abs/2403.05903 http://arxiv.org/abs/arXiv:2403.05903 arXiv:2403.05903 NoStop [Wu et al.(2019a)Wu, Lovorn, Tutuc, Martin, and MacDonald]FCW_TMDhomo_2019 author author F. Wu, author T. Lovorn, author E. Tutuc, author I. Martin, and author A. H. MacDonald, 10.1103/PhysRevLett.122.086402 journal journal Phys. Rev. Lett. volume 122, pages 086402 (year 2019a)NoStop [Devakul et al.(2021)Devakul, Crépel, Zhang, and Fu]TDevakul_DFT_2021 author author T. Devakul, author V. Crépel, author Y. Zhang, and author L. Fu, 10.1038/s41467-021-27042-9 journal journal Nature Communications volume 12, pages 6730 (year 2021)NoStop [Wef()]Wefollow @noop note In Sample A, we follow Ref.<cit.> that the interlayer energy difference is Δε = eEd ϵ_ hBN/ϵ_ TMD, E=8 mV/nm, d=0.7 nm, ϵ_ hBN=3 and ϵ_ TMD=8.Stop [Wea()]Weapproximate @noop note In Sample B, we approximate the interlayer energy difference to be Δε = eDd /ϵ_ TMD, D=500 mV/nm <cit.>, d=0.7 nm and ϵ_ TMD=8.Stop [Löthman and Black-Schaffer(2017)]LothmanT2017 author author T. Löthman and author A. M. Black-Schaffer, 10.1103/PhysRevB.96.064505 journal journal Phys. Rev. B volume 96, pages 064505 (year 2017)NoStop [Chou et al.(2021)Chou, Wu, Sau, and Das Sarma]ChouYZ2021 author author Y.-Z. Chou, author F. Wu, author J. D. 
Sau, and author S. Das Sarma, 10.1103/PhysRevLett.127.187001 journal journal Phys. Rev. Lett. volume 127, pages 187001 (year 2021)NoStop [Chou et al.(2022b)Chou, Wu, Sau, and Das Sarma]ChouYZ2022a author author Y.-Z. Chou, author F. Wu, author J. D. Sau, and author S. Das Sarma, 10.1103/PhysRevB.106.024507 journal journal Phys. Rev. B volume 106, pages 024507 (year 2022b)NoStop [Chou et al.(2022c)Chou, Wu, Sau, and Das Sarma]ChouYZ2022b author author Y.-Z. Chou, author F. Wu, author J. D. Sau, and author S. Das Sarma, 10.1103/PhysRevB.105.L100503 journal journal Phys. Rev. B volume 105, pages L100503 (year 2022c)NoStop [Cappelluti and Pietronero(1996)]Cappelluti_SC_1996 author author E. Cappelluti and author L. Pietronero, 10.1103/PhysRevB.53.932 journal journal Phys. Rev. B volume 53, pages 932 (year 1996)NoStop [Phan and Chubukov(2020)]DPhan_phononSC_2020 author author D. Phan and author A. V. Chubukov, 10.1103/PhysRevB.101.024503 journal journal Phys. Rev. B volume 101, pages 024503 (year 2020)NoStop [Wu et al.(2019b)Wu, Hwang, and Das Sarma]WuF2019 author author F. Wu, author E. Hwang, and author S. Das Sarma, 10.1103/PhysRevB.99.165112 journal journal Phys. Rev. B volume 99, pages 165112 (year 2019b)NoStop [Wu and Das Sarma(2020)]WuF2020 author author F. Wu and author S. Das Sarma, 10.1103/PhysRevB.101.155149 journal journal Phys. Rev. B volume 101, pages 155149 (year 2020)NoStop [Lewandowski et al.(2021)Lewandowski, Chowdhury, and Ruhman]LewandowskiC2021 author author C. Lewandowski, author D. Chowdhury, and author J. Ruhman, 10.1103/PhysRevB.103.235401 journal journal Phys. Rev. B volume 103, pages 235401 (year 2021)NoStop [Boström et al.()Boström, Fischer, Profe, Zhang, Kennes, and Rubio]BostromEV2023 author author E. V. Boström, author A. Fischer, author J. B. Profe, author J. Zhang, author D. M. Kennes, and author A. Rubio, 10.48550/arXiv.2311.02494 10.48550/arXiv.2311.02494NoStop [Jin et al.(2014)Jin, Li, Mullen, and Kim]Jin_TMDphonon_2014 author author Z. Jin, author X. Li, author J. T. Mullen, and author K. W. Kim, 10.1103/PhysRevB.90.045422 journal journal Phys. Rev. B volume 90, pages 045422 (year 2014)NoStop [Kim et al.()Kim, Mendez-Valderrama, Wang, and Chowdhury]KimS2024 author author S. Kim, author J. F. Mendez-Valderrama, author X. Wang, and author D. Chowdhury, https://arxiv.org/abs/2406.03525 http://arxiv.org/abs/arXiv:2406.03525 arXiv:2406.03525 NoStop
http://arxiv.org/abs/2406.18905v1
20240627053524
Bayesian inference: More than Bayes's theorem
[ "Thomas J. Loredo", "Robert L. Wolpert" ]
stat.ME
[ "stat.ME", "astro-ph.IM" ]
Thomas J. Loredo ^1,* and Robert L. Wolpert ^2 ^1Cornell Center for Astrophysics and Planetary Science (CCAPS) and Department of Statistics and Data Science, Cornell University, Ithaca NY, USA ^2Department of Statistical Science, Duke University, Durham NC, USA Thomas Loredo loredo@astro.cornell.edu
http://arxiv.org/abs/2406.18043v1
20240626034148
Multimodal foundation world models for generalist embodied agents
[ "Pietro Mazzaglia", "Tim Verbelen", "Bart Dhoedt", "Aaron Courville", "Sai Rajeswar" ]
cs.AI
[ "cs.AI", "cs.CV", "cs.LG", "cs.RO" ]
Multimodal foundation world models for generalist embodied agents July 1, 2024 ======================================================================= Project website: § ABSTRACT Learning generalist embodied agents, able to solve multitudes of tasks in different domains, is a long-standing problem. Reinforcement learning (RL) is hard to scale up as it requires a complex reward design for each task. In contrast, language can specify tasks in a more natural way. Current foundation vision-language models (VLMs) generally require fine-tuning or other adaptations to be functional, due to the significant domain gap. However, the lack of multimodal data in such domains represents an obstacle toward developing foundation models for embodied applications. In this work, we overcome these problems by presenting multimodal foundation world models, able to connect and align the representation of foundation VLMs with the latent space of generative world models for RL, without any language annotations. The resulting agent learning framework, , allows one to specify tasks through vision and/or language prompts, ground them in the embodied domain’s dynamics, and learn the corresponding behaviors in imagination. As assessed through large-scale multi-task benchmarking, exhibits strong multi-task generalization performance in several locomotion and manipulation domains. Furthermore, by introducing a data-free RL strategy, it lays the groundwork for foundation model-based RL for generalist embodied agents. § INTRODUCTION Foundation models are large pre-trained models endowed with extensive knowledge of the world, which can be readily adapted for a given task <cit.>. These models have demonstrated extraordinary generalization capabilities in a wide range of vision <cit.> and language tasks <cit.>. As we aim to extend this paradigm to embodied applications, where agents physically interact with objects and other agents in their environment, we require generalist agents that are capable of reasoning about these interactions and executing action sequences within these settings <cit.>. Reinforcement learning (RL) allows agents to learn complex behaviors from visual and/or proprioceptive inputs <cit.> by maximizing a specified reward function. Scaling up RL to multiple tasks and embodied environments remains challenging, as designing reward functions is a complicated process that requires expert knowledge and is prone to errors which can lead to undesired behaviors <cit.>. Recent work has proposed the adoption of visual-language models (VLMs) to specify rewards for visual environments using language <cit.>, e.g., using the similarity score computed by CLIP <cit.> between an agent's input images and text prompts. However, these approaches require fine-tuning of the VLM <cit.> or adaptation of the visual domain <cit.> to work reliably. In most RL settings, we lack multimodal data to train or fine-tune domain-specific foundation models, due to the costs of labelling agents’ interactions and/or due to the intrinsic unsuitability of some embodied contexts to be converted into language. For instance, in robotics, it is non-trivial to convert a language description of a task to the agent’s actions, which are hardware-level controls, such as motor currents or joint torques. These difficulties make it hard to scale current techniques to large-scale generalization settings, leaving open the question: How does one effectively leverage foundation models for generalization in embodied domains?
In this work, we present , a novel approach for training generalist agents from visual or language prompts, requiring no language annotations. learns multimodal foundation world models (MFWMs), where the joint embedding space of a foundation video-language model <cit.> is connected and aligned with the representation of a generative world model for RL <cit.>, using unimodal vision-only data. The MFWM allows the specification of tasks by grounding language or visual prompts into the RL domain. Then, we introduce an RL objective for learning to accomplish the specified tasks in imagination <cit.>, by matching the prompts in latent space. Compared to previous work in world models and VLMs for RL, one emergent property of is the possibility to generalize to new tasks in a completely data-free manner. After training the MFWM, it possesses both strong priors over the dynamics of the environment, and large-scale multimodal knowledge. This combination enables the agent to interpret a large variety of task specifications and learn the corresponding behaviors. Thus, analogously to foundation models for vision and language, allows generalization to new tasks without additional data and lays the groundwork for foundation models in embodied RL domains <cit.>. [The code, datasets and trained models will be made publicly available.] § PRELIMINARIES AND BACKGROUND Additional related works can be found in Appendix <ref>. Problem setting. The agent receives from the environment observations x ∈𝒳 and interacts with it through actions a ∈𝒜. The objective of the agent is to accomplish a certain task τ, which can be specified either in the observation space x_τ, e.g. through images or videos, or in language space y_τ, where 𝒴 represents the space of all possible sentences. Crucially, compared to a standard RL setting, we do not assume that a reward signal is available to solve the task. When a reward function exists, it is instead used to evaluate the agent's performance. Generative world models for RL. In model-based RL, the optimization of the agent’s actions is done efficiently, by rolling out and scoring imaginary trajectories using a (learned) model of the environment’s dynamics. In recent years, this paradigm has grown successful thanks to the adoption of generative world models, which learn latent dynamics by self-predicting the agent’s inputs <cit.>. World models have shown impressive performance in vision-based environments <cit.>, improving our ability to solve complex and open-ended tasks <cit.>. Generative world models have been successfully extended to many applications, such as exploration <cit.>, skill learning <cit.>, solving long-term memory tasks <cit.>, and robotics <cit.>. Foundations models for RL. Large language models (LLMs) have been used for specifying behaviors using language <cit.>, but this generally assumes the availability of a textual interface with the environment or that observations and/or actions can be translated to the language domain. The adoption of vision-language models (VLMs) reduces these assumptions, as it allows the evaluation of behaviors in the visual space. However. this approach has yet to show robust performance, as it generally requires fine-tuning of the VLM <cit.>, prompt hacking techniques <cit.> or visual modifications to the environment <cit.>. Vision-language generative modelling. 
Given the large success of image-language generative models <cit.>, recent efforts in the community have focused on replicating and extending such success to the video domain, where the temporal dimension introduces new challenges, such as temporal consistency and increased computational costs <cit.>. Video generative models are similar to world models for RL, with the difference that generation models outputs are typically not conditioned on actions, but rather conditioned on language<cit.> or on nothing at all (i.e. an unconditional model). § §.§ World models for RL learns a task-agnostic world model representation by modelling the sequential dynamics of the environment in a compact discrete latent space S <cit.>. Latent states s ∈ S are sampled from independent categorical distributions. The gradients for training the model are propagated through the sampling process with straight-through estimation <cit.>. The world model is made of the following components: Encoder: q_ϕ(s_t|x_t), Sequence model: h_t = f_ϕ(s_t-1, a_t-1, h_t-1), Decoder: p_ϕ(x_t|s_t), Dynamics predictor: p_ϕ(s_t|h_t), trained with the loss: ℒ_ϕ = ∑_t [q_ϕ(s_t|x_t) ‖ p_ϕ(s_t|s_t-1, a_t-1)]_dyn loss - _q_ϕ(s_t|x_t)[log p_ϕ(x_t|s_t)]_recon loss, where p_ϕ(s_t|s_t-1, a_t-1) is a shorthand for p_ϕ(s_t|f_ϕ(s_t-1, a_t-1, h_t-1)). The sequence model is implemented as a linear GRU cell <cit.>. Differently from recurrent state space models (RSSM; <cit.>), for our framework, encoder and decoder models are not conditioned on the information present in the sequence model. This ensures that the latent states only contain information about a single observation. We can then leverage the encoder as a probabilistic visual tokenizer, that is grounded in the target embodied environment. §.§ Multimodal foundation world models Multimodal VLMs are large pre-trained models that have the following components: Vision embedder: e^(v) = f^(v)_PT(x_t:t+k), Language embedder: e^(l) = f^(l)_PT(y), where x_t:t+k is a sequence of visual observations and y is a text prompt. For video-language models, k is generally a constant number of frames (e.g. k∈{4,8,16} frames). Image-language models are a special case where k=1 as the vision embedder takes a single frame as an input. For our implementation, we adopt the InternVideo2 video-language model <cit.>. To connect the representation of the multimodal foundation VLM with the world model latent space, we instantiate two modules: a latent connector and a representation aligner: Connector: p_ψ(s_t:t+k|e), Aligner: e^(v)= f_ψ(e^(l)), ℒ_𝒸ℴ𝓃𝓃 = ∑_t [p_ψ(s_t|s_t-1, e) ‖sg(q_ϕ(s_t|x_t))], ℒ_𝒶𝓁𝒾ℊ𝓃 = ‖ e^(v) -f_ψ(e^(l)) ‖^2_2, where sg(·) indicates to stop gradients propagating. The connector learns to predict the latent states of the world model from embeddings in the VLM's representation space. The connector's objective consist of minimizing the KL divergence between its predictions and the world model's encoder distribution. While more expressive architectures, such as transformers <cit.> or state-space models <cit.> could be adopted, we opt for a simpler GRU-based architecture for video modelling. This way, we keep the method simple and the architecture of the connector is symmetric with respect to the world model's components. Multimodal VLMs trained with contrastive learning exhibit a multimodality gap <cit.>, where the spherical embeddings of different modalities are not aligned. 
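Before describing how the aligner addresses this gap, the connector and aligner objectives above can be made concrete with a minimal sketch. This is our own illustration rather than the authors' released code: the module names, layer sizes, and the assumed factorization of the discrete latent into 32 categoricals of 32 classes are placeholders, and the losses are shown in teacher-forced form only.

import torch
import torch.nn as nn
import torch.nn.functional as F

GROUPS, CLASSES = 32, 32   # assumed shape of the discrete latent s_t
EMB_DIM = 512              # assumed dimension of the frozen VLM embedding space

class Connector(nn.Module):
    # Predicts world-model latents s_t from a VLM embedding e, one GRU step per frame.
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(GROUPS * CLASSES + EMB_DIM, 512, batch_first=True)
        self.head = nn.Linear(512, GROUPS * CLASSES)

    def forward(self, prev_latents, vlm_emb):
        # prev_latents: (B, T, GROUPS*CLASSES) one-hot latents of the previous step
        # vlm_emb:      (B, EMB_DIM) embedding of a video clip or (aligned) prompt
        e = vlm_emb.unsqueeze(1).expand(-1, prev_latents.shape[1], -1)
        h, _ = self.gru(torch.cat([prev_latents, e], dim=-1))
        return self.head(h).view(*h.shape[:2], GROUPS, CLASSES)  # categorical logits

class Aligner(nn.Module):
    # Projects a language embedding toward the corresponding video embedding.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(EMB_DIM, 1024), nn.ELU(),
                                 nn.Linear(1024, EMB_DIM))

    def forward(self, lang_emb):
        return self.net(lang_emb)

def connector_loss(conn_logits, enc_logits):
    # KL( p_psi(s_t | s_{t-1}, e) || sg(q_phi(s_t | x_t)) ): stop-gradient on the encoder.
    p = torch.distributions.Categorical(logits=conn_logits)
    q = torch.distributions.Categorical(logits=enc_logits.detach())
    return torch.distributions.kl_divergence(p, q).sum(-1).mean()

def aligner_loss(video_emb, lang_emb, aligner):
    # || e^(v) - f_psi(e^(l)) ||^2, with video embeddings as regression targets.
    return F.mse_loss(aligner(lang_emb), video_emb)

In a full implementation the encoder logits would come from the frozen world model and the connector would be rolled out on its own predictions at generation time; this sketch only spells out the two training losses.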
The role of the aligner is to reduce this multimodality gap, by projecting text embeddings into their corresponding visual embeddings. Given a dataset of vision-language data, this projective function can be learned. In embodied domains vision-language data is typically unavailable. Instead, we leverage the idea that language embeddings can be treated as a `corrupted' version of their vision counterparts <cit.>. This allows us to approximate the language embedding with the corresponding vision embedding: e^(l)≈ e^(v) + ϵ. Previous methods inject the noise into the embeddings when training the new module <cit.>, in our case the connector. However, we opt for training a separate aligner network. This allows us to train a noise-free connector, which has two main advantages: (i) it yields higher prediction accuracy for visual embedding inputs while maintaining a similar alignment for language embedding inputs; and (ii) it is more flexible; it's easier to re-train/adapt for different noise levels, as it only requires re-training the aligner module, and its use can be avoided if unnecessary. §.§ Learning specified task behaviors in imagination World models can be used to imagine trajectories in latent space, using the dynamics model. This allows us to train behavior policies in a model-based RL fashion <cit.>. Given a task specified through a visual or language prompt, our MFWM can generate the corresponding latent states by turning the embedder's output, e_task, into sequences of latent states s_t:t+k (examples are shown in Figure <ref>). The objective of the policy model π_θ is then to match the goals specified by the user by performing trajectory matching. The trajectory matching problem can be solved as a divergence minimization problem <cit.>, between the distribution of the states visited by the policy π_θ and the trajectory generated using the aligner-connector networks from the user-specified prompt: θ = min_θ_a_t ∼π_θ(s_t)[ ∑_t γ^t ( p_ϕ(s_t+1|s_t, a_t) ‖ p_ψ(s_t+1|e_task))], with e_task=f_PT(·). The KL divergence is a natural candidate for the distance function <cit.>. However, in practice, we found that using the cosine distance between linear projections of the latent states notably speeds up learning and enhances stability. We can then turn the objective in Eq. <ref> into a reward for RL: r_ = cos( g_ϕ(s^dyn_t+1), g_ϕ(s^task_t+1)), with s^dyn_t+1∼ p_ϕ(s_t+1|s_t, a_t), s^task_t+1∼ p_ψ(s_t+1|e_task), where g_ϕ represents the first linear layer of the world model's decoder. We train an actor-critic model to maximize this reward and achieve the tasks specified by the user <cit.>. Additional implementation details are provided in Appendix <ref>. Temporal alignment. One issue with trajectory matching is that it assumes that the distribution of states visited by the agent starts from the same state as the target distribution. However, the initial state generated by the connector may differ from the initial state where the policy is currently in. For example, consider the Stickman agent on the right side of Figure <ref>. If the agent is lying on the ground and tasked to run, the number of steps to get up and reach running states may surpass the temporal span recognized by the VLM (e.g. typically 4, 8, or 16 frames), causing disalignment in the reward. To address this initial condition alignment issue, we propose a best matching trajectory technique, inspired by best path decoding in speech recognition <cit.>. 
Our technique involves two steps: * We compare the first b states of the target trajectory with b states obtained from the trajectories imagined by the agent by sliding along the time axis. This allows one to find at which timestep t_a the trajectories are best aligned (the comparison provides the highest reward). * We align the temporal sequences in the two possible contexts: (a) if a state from the agent sequence comes before t_a, the reward uses the target sequence's initial state; and (b) if the state comes k steps after t_a, it's compared to the s_t+k state from the target sequence. In all experiments, we fix b=8 (number of frames of the VLM we use <cit.>), which we found to strike a good compromise between comparing only the initial state (b=1) and performing no alignment (b=imagination horizon). An ablation study can be found in Appendix <ref>. § EXPERIMENTS Overall, we employ a set of 4 locomotion environments (Walker, Cheetah, Quadruped, and a newly introduced Stickman environment) <cit.> and one manipulation environment (Kitchen) <cit.>, for a total of 35 tasks where the agent is trained without rewards, only from text prompts. This represents the first large-scale study of multi-task generalization from language in RL. Details about datasets, tasks, and prompts used can be found in the Appendix <ref>. §.§ Offline RL In offline RL, the objective of the agent is to learn to extract a certain task behavior from a given fixed dataset <cit.>. The performance of the agent generally depends on its ability to `retrieve' the correct behaviors in the dataset and interpolate among them. Popular techniques for offline RL include off-policy RL methods, such as TD3 <cit.>, advantage-weighted behavior cloning, such as IQL <cit.>, and behavior-regularized approaches, such as CQL <cit.> or TD3+BC <cit.>. We aim to assess the multi-task capabilities of different approaches for designing rewards using VLMs. We collected large datasets for each of the domains evaluated, containing a mix of structured data (i.e. the replay buffer of agent learning to perform some tasks) and unstructured data (i.e. exploration data collected using <cit.>). We have removed the explicit reward information about the task and replaced it with a short task description, in language form. We compare to two main categories of approaches: * Image-language rewards: following <cit.>, the cosine similarity between the embedding for the language prompt and the embedding for the agent's visual observation is used as a reward. For the VLM, we adopt the SigLIP-B <cit.> model as it's reported to have superior performance than the original CLIP <cit.>. * Video-language rewards: similar to the image-language rewards, with the difference that the vision embedding is computed from a video of the history of the last k frames, as done in <cit.>. For the VLM, we use the InternVideo2 model <cit.>, the same used for . We test both approaches with a variety of offline RL methods, including IQL, TD3+BC, and TD3. All methods are trained for 500k gradient steps, and evaluated on 20 episodes. Other training details are reported in Appendix <ref>. Behavior extraction. We want to verify whether the methods can retrieve the tasks behaviors that are certainly present in the dataset. We present results in Table <ref>, with episodic rewards rescaled so that 0 represents the performance of a random agent, while 1 represents the performance of an expert agent. 
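Before discussing these scores, the reward machinery of the previous section can be summarized in a short sketch. This is again our own illustration with hypothetical names and shapes; for simplicity a single alignment offset is chosen for the whole batch and the target sequence is clamped at its last state once exhausted.

import torch
import torch.nn.functional as F

def latent_reward(dyn_latents, task_latents, proj):
    # r = cos( g(s^dyn), g(s^task) ); proj stands in for the decoder's first linear layer g.
    a = proj(dyn_latents.flatten(-2))   # (B, T, D)
    b = proj(task_latents.flatten(-2))  # (B, T, D)
    return F.cosine_similarity(a, b, dim=-1)

def best_matching_rewards(imagined, target, proj, b=8):
    # Step 1: slide the first b target states along the imagined rollout to find
    # the offset t_a at which the two sequences agree best.
    T = imagined.shape[1]
    best_r, t_a = -float("inf"), 0
    for off in range(T - b + 1):
        r = latent_reward(imagined[:, off:off + b], target[:, :b], proj).mean()
        if r > best_r:
            best_r, t_a = r, off
    # Step 2: before t_a compare against the target's initial state; afterwards
    # shift the target by t_a (clamped to its last available state).
    rewards = torch.zeros(imagined.shape[:2])
    for t in range(T):
        idx = 0 if t < t_a else min(t - t_a, target.shape[1] - 1)
        rewards[:, t] = latent_reward(imagined[:, t:t + 1],
                                      target[:, idx:idx + 1], proj)[:, 0]
    return rewards

A complete implementation would choose t_a per batch element and generate connector targets spanning the full imagination horizon.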
excels in overall performance across all domains, particularly in dynamic tasks like walking and running in the quadruped and cheetah domains. In contrast, in static tasks such as 'stickman stand' and kitchen tasks, other methods occasionally outperform . This can be explained by the fact that the target sequences that GenRL infers from the prompt are often slightly in motion, even in static cases. To address this, we could set the target sequence length to 1 for static prompts, but we opted to maintain the method's simplicity and generality, acknowledging this as a minor limitation. As expected, video-language rewards tend to perform better than image-language rewards for dynamic tasks. For video-based rewards, the less conservative approach, TD3, performs better than all other baselines in most tasks, similarly to what is shown in <cit.>. We instead observe an opposite trend for image-language rewards, where more conservative approaches, such as IQL and TD3+BC, tend to perform better. We believe this is because when the `task target' is static, imitating segments of trajectories from the dataset proves beneficial. Multi-task generalization. To assess multi-task generalization, we defined a set of tasks not included in the training data. Although we don't anticipate agents matching the performance of expert models, higher scores in this benchmark help gauge the generalization abilities of different methods. We averaged the performance across various tasks for each domain and summarized the findings in Figure <ref>, with detailed task results in Appendix <ref>. Overall, we observe a similar trend as for the behavior extraction results. significantly outperforms the other approaches, especially in the quadruped and cheetah domains, where the performance is close to the specialized agents' performance. Both for image-language (-I in the Figure) and video-language (-V in the Figure) more conservative approaches, such as IQL and TD3+BC tend to perform worse. This could be associated with the fact that imitating segments of trajectories is less likely to lead to high-rewarding trajectories, as the tasks are not present in the training data. §.§ Data-free RL In the previous section, we evaluated several approaches for designing reward using foundation VLMs. Clearly, model-free RL approaches require continuous access to a dataset, to train the actor-critic and generalize across new tasks. Model-based RL can learn the actor-critic in imagination. However, in previous work <cit.>, imagining sequences for learning behaviors first requires processing actual data sequences. The data is used to initialize the dynamics model, and obtain latent states that represent the starting states to rollout the policy in imagination. Furthermore, in order to learn new tasks, reward-labelled data is necessary to learn a reward model, which provides rewards to the agent during the task learning process. Foundation models <cit.> are generally trained on enormous datasets in order to generalize to new tasks. The datasets used for the model pretraining are not necessary for the downstream applications, and sometimes these datasets are not even publicly available <cit.>. In this section, we aim to establish a new paradigm for foundation models in RL, which follows the same principle of foundation models for vision and language. We call this paradigm data-free RL and we define it as the ability to generalize to new tasks, after pre-training, using no additional data. 
enables data-free RL thanks to two main reasons: the agent learns a task-agnostic MFWM on a large varied dataset during pre-training, and the MFWM enables the possibility of specifying tasks directly in latent space, without requiring any data. Thus, in order to learn behaviors in imagination, the agent can: sample random latent states in the world model's representation, rollout sequences in imagination, following the policy, and compute rewards, using the targets obtained by processing the given prompts with the connector-aligner networks. Initial states distribution. Uniform sampling from the latent space of the world model often results in meaningless latent states. Additionally, the sequential dynamics model of the MFWM, using a GRU, requires some 'warmup' steps to discern dynamic environmental attributes, such as velocities. To address these issues we perform two operations. First, we combine uniformly sampled states from the discrete latent spaces with states generated by randomly sampling the connector model, as sequences generated by the connector tend to have a more coherent structure than random uniform samples. Second, we perform a rollout of five steps using a mix of actions from the trained policy and random actions. This leads to a varied distribution of states, containing dynamic information, which we use as the initial states for the learning in imagination process. Performance of data-free RL. Our results, detailed in Figure <ref>, compare data-free RL to traditional offline RL with , as discussed in Section <ref>. We ablate the choice of using random samples from the connector-aligner to improve randomly sampled initial states. While data-free RL generally shows a slight decrease in overall performance, the differences are minimal across most domains, and it even outperforms in the kitchen domain. The use of states from the connector model enhances average scores and reduces variance, especially noticeable in the cheetah domain. By employing data-free learning, after pre-training, agents can master new tasks without data, often converging within only 30 minutes of GPU training. As we scale up foundation models for behavior learning, the ability to learn data-free will become crucial. Although very large datasets will be employed to train new foundation models, adapts well without direct access to original data, offering flexibility where data may be proprietary, licensed or unavailable. §.§ Training data distribution As demonstrated in Sections <ref> and <ref>, after training on a large dataset, a agent can adapt to multiple new tasks without additional data. The nature of the training data, detailed in Appendix <ref>, combines exploration and task-specific data. To identify critical data types for , we trained different MFWMs on various dataset subsets. Then, we employ data-free RL to train task behaviors, with analyses over subsets of the walker dataset provided in Figure <ref>, where `all' reports the data-free performance when training on the full dataset. The results confirm that a diverse data distribution is crucial for task success, with the best performance achieved by using the complete dataset, followed by the varied exploration data. Task-specific data effectiveness depends on task complexity, for instance, 'run' data proves more generalizable than 'walk' or 'stand' data across tasks. 
Crucially, 'stand' data, which shows minimal variation, limits learning for a general agent but can still manage simpler tasks like 'lying down' and 'sitting on knees' as detailed in Appendix <ref>. Moving forward with training foundation models in RL, it will be essential to develop methods that extract multiple behaviors from unstructured data and accurately handle complex behaviors from large datasets. Thus, the ability of to primarily leverage unstructured data is a significant advantage for scalability. § ADDITIONAL ANALYSIS A framework for behavior generation. A common challenge with using LLMs and VLMs involves the need for prompt tuning to achieve specific tasks. As relies on a foundation VLM, similar to previous approaches <cit.> it is not immune from this issue. However, uniquely allows for the visualization of targets obtained from specific prompts. By decoding the latent targets, using the MFWM decoder, we can visualize the interpreted prompt before training the corresponding behavior. This enables a much more explainable framework, which allows fast iteration for prompt tuning, compared to previous (model-free) approaches which often require training the agent to identify which behaviors are rewarded given a certain prompt. Video grounding. In Section <ref>, we have focussed our evaluation on language-driven RL, using text prompts to specify tasks. While language strongly simplifies specification of the task, in some cases providing visual examples of the task might be easier. Similarly as for language prompts, allows translating vision prompts (short videos) into behaviors. As this is a less common evaluation setting, we limit our assessment to qualitative results. In Figure <ref>, we present a set of video grounding examples, obtained by inferring the latent targets corresponding to the vision prompts (right image) and then using the decoder model to decode images (left image). We observe that the agent is able to translate short human action videos[Short videos are generated using <meta.ai>.] into the same actions but for the Stickman embodiment. By applying this approach, it would be possible to learn behaviors from a single video. Scaling to complex observations. Generalist embodied agents should be able to scale to open-ended learning settings. Using , we explored this by training an agent in the Minecraft environment using a small dataset collected by a DreamerV3 agent <cit.>. The primary challenge we found was the model's difficulty in reconstructing complex observations in this open-ended environment. Reconstructing complex observations is a common issue with world models <cit.>. To overcome this limitation, while keeping the method unaltered, we attempted to scale up the number of parameters of MFWM. Qualitative reconstruction results are presented in Figure <ref>. We observe that the agent is able to identify different biomes from language, even with the smaller size of the model. However, the reconstructions are significantly blurrier compared to the other environments we analyzed (e.g. Figure <ref>). When using a larger model, the reconstructions gain some details but the results still highlight the difficulty of the model in providing accurate targets from prompts. While this might not be an issue for simple high-level tasks, e.g. `navigate to a beach', unclear targets might make it difficult to perform more precise actions, e.g. `attack a zombie'. 
Future research should aim to address this issue, for instance, by improving our simple GRU-based architecture, leveraging transformers or diffusion models to improve the quality of the representation <cit.>. § DISCUSSION We introduced , a world-model based approach for grounding vision-language prompts into embodied domains and learning the corresponding behaviors in imagination. The multimodal foundation world models of can be trained using unimodal data, overcoming the lack of multimodal data in embodied RL domains. The data-free RL property of lays the groundwork for foundation models in RL that can generalize to new tasks without additional data. Limitations. Despite its strengths, presents some limitations, largely due to inherent weaknesses in its components. From the VLMs, inherits the issue related to the multimodality gap <cit.> and the reliance on prompt tuning. We proposed a connection-alignment mechanism to mitigate the former. For the latter, we presented an explainable framework, which facilitates prompt tuning by allowing decoding of the latent targets corresponding to the prompts. From the world model, inherits a dependency on reconstructions, which offers advantages such as explainability but also drawbacks, such as failure modes with complex observations. Future work. As we strive to develop foundation models for generalist embodied agents, our framework opens up numerous research opportunities. One such possibility is to learn multiple behaviors and have another module, e.g. an LLM, compose them to solve long-horizon tasks. Another promising area of research is investigating the temporal flexibility of the framework. We witnessed that for static tasks, greater temporal awareness could enhance performance. This concept could also apply to actions that extend beyond the time comprehension of the VLM. Developing general solutions to these challenges could lead to significant advancements in the framework. § ACKNOWLEDGMENTS Pietro Mazzaglia is funded by a Ph.D. grant of the Flanders Research Foundation (FWO). This research was supported by a Mitacs Accelerate Grant. abbrv
http://arxiv.org/abs/2406.19326v1
20240627165735
Simple homotopy invariance of the loop coproduct
[ "Florian Naef", "Pavel Safronov" ]
math.AT
[ "math.AT", "math.GT", "math.KT" ]
Trinity College Dublin, Dublin, Ireland naeff@tcd.ie School of Mathematics, University of Edinburgh, Edinburgh, UK p.safronov@ed.ac.uk § ABSTRACT We prove a transformation formula for the Goresky–Hingston loop coproduct in string topology under homotopy equivalences of manifolds. The formula involves the trace of the Whitehead torsion of the homotopy equivalence. In particular, it implies that the loop coproduct is invariant under simple homotopy equivalences. In a sense, our results determine the Dennis trace of the simple homotopy type of a closed manifold from its framed configuration spaces of ≤ 2 points. We also explain how the loop coproduct arises as a secondary operation in a 2-dimensional TQFT which elucidates a topological origin of the transformation formula. Simple homotopy invariance of the loop coproduct Pavel Safronov July 1, 2024 ================================================ § INTRODUCTION §.§ String topology In his description of the Poisson bracket on the character variety of an oriented (connected) surface Σ, Goldman <cit.> introduced a Lie bracket on the abelian group (Σ) of homotopy classes of loops S^1→Σ. The Goldman bracket is defined in terms of counting intersection points of two loops. Turaev <cit.> defined a Lie cobracket on (Σ) in terms of counting self-intersection points, so that (Σ) becomes a Lie bialgebra. In fact, the Lie cobracket takes values in ∧^2(Σ), where (Σ)=(Σ)/ is the quotient by the class of contractible loops. The generalizations of these operations to higher-dimensional manifolds is given by the string topology operations <cit.>. Let M be a closed oriented d-manifold and LM=(S^1, M) its free loop space. We will primarily be interested in the following two operations: * Loop product ∧_̋∙(LM)⊗_̋∙(LM)→_̋∙-d(LM). * Loop coproduct ∨_̋∙+d-1(LM)→_̋∙(LM× LM, M× LM∪ LM× M). For instance, if _̋∙(LM, M) has no torsion, the target of the loop coproduct is _̋∙(LM, M)⊗_̋∙(LM, M). We refer to <cit.> for an exposition of different approaches to string topology operations. Along with the BV operator Δ_̋∙(LM)→_̋∙+1(LM) given by the loop rotation, the loop product and coproduct give rise to the string bracket and cobracket on ^̋S^1_∙(LM). For M=Σ a surface the string bracket and string cobracket reduce to the Goldman bracket and Turaev cobracket on (Σ)≅^̋S^1_0(LΣ) and (Σ)≅^̋S^1_0(LΣ, Σ). §.§ Homotopy invariance of string topology The original construction of the string topology operations involved a subtle infinite-dimensional intersection theory on the loop space LM. In particular, these constructions require M to be a smooth manifold. It was realized early on that both the BV operator and the loop product are homotopy invariant <cit.>. Namely, if f M→ N is an orientation-preserving homotopy equivalence of closed oriented d-manifolds, then the diagram _̋∙(LM)⊗_̋∙(LM)^-∧_M[r] ^Lf⊗ Lf[d] _̋∙-d(LM) ^Lf[d] _̋∙(LN)⊗_̋∙(LN)^-∧_N[r] _̋∙-d(LN) commutes. However, it was also conjectured by Sullivan, see <cit.> and <cit.>, that the full range of string topology operations is not homotopy invariant. The previous results in the direction of this question are as follows. If M is simply-connected, then over the rationals/reals the coproduct was shown to be homotopy invariant in <cit.>. It was also shown to be invariant under homotopy equivalences satisfying certain regularity conditions in <cit.>. 
The case of non-simply-connected manifolds is more subtle: the first author showed <cit.> that in the case of a homotopy equivalence f L(7, 1)→ L(7, 2) of 3-dimensional lens spaces, the loop coproduct is not preserved. The above homotopy equivalence of 3-dimensional lens spaces is the simplest example of a homotopy equivalence f N→ M with a nonvanishing Whitehead torsion τ(f), an invariant lying in the Whitehead group (π_1(M)) = K_1([π_1(M)])/(±π_1(M)). The Dennis trace defines a map from K-theory to Hochschild homology, which in this case produces a map (π_1(M))→_̋1(LM, M). Consider the trace of the Whitehead torsion (τ(f))∈_̋1(LM, M). Denote its image under the antidiagonal map LM→ LM× LM given by γ↦ (γ^-1, γ) by ν' ⊗ν”∈⊕_i+j=1(_̋i(LM, M) ⊗_̋j(LM)) and similarly for ν'⊗ν”∈⊕_i+j=1(_̋i(LM) ⊗_̋j(LM, M)). Our first main result is a complete explanation of the non-homotopy invariance of the loop coproduct. [<Ref>] Let f M→ N be an orientation-preserving homotopy equivalence of closed oriented d-manifolds. For every α∈_̋n+d-1(LM, M) we have ∨_M(f(α)) - f(∨_N(α)) = (-1)^n+1ν'⊗ (ν”∧_N f(α)) - (f(α)∧_N ν')⊗ν”. The same result also holds for any other homology theory over which M and N are oriented and f is orientation-preserving. On the level of spectra the loop coproduct is a map Σ(LM^- M) →Σ^∞ LM/M ∧Σ^∞ LM/M, where LM^- M is the corresponding Thom spectrum. The same transformation formula holds for this operation as well, now for arbitrary (i.e. not necessarily orientation-preserving) homotopy equivalences. Homotopy equivalences with vanishing Whitehead torsion are known as simple homotopy equivalences, so we obtain that the loop coproduct is invariant under orientation-preserving simple homotopy equivalences. This is the first result to our knowledge which relates string topology operations to K-theoretic invariants such as the Whitehead torsion. While preparing this manuscript, we were informed that Lea Kenigsberg and Noah Porcelli have obtained an independent proof of a variant of Theorem A. The details will appear in their upcoming preprint <cit.>. To explain the appearance of the Whitehead torsion, let us recall from <cit.> that the loop product can be viewed as a fiberwise version of the intersection product _̋∙(M)⊗_̋∙(M)→_̋∙-d(M) on the manifold. Namely, given a space E→ M× M the fiberwise version of the intersection product is _̋∙(E)→_̋∙-d(E×_M× M M). Setting E = LM× LM M× M, where LM→ M is the evaluation of a loop at the basepoint, by <ref> we get the loop product as the composite _̋∙(LM)⊗_̋∙(LM)⟶_̋∙-d(LM×_M LM)⟶_̋∙-d(LM), where the last map is given by the composition of loops. The first map is a fiberwise intersection product and may be constructed from a pairing ϵ_p_M⊠_M→Δ_♯_M[d] in the ∞-category of local systems of chain complexes on M× M which is invariant under orientation-preserving homotopy equivalences. To present a similar point of view on the loop coproduct, we define the relative intersection product in <ref> following <cit.>. To define it we consider a commutative diagram F [r] [d] E [d] M [r] M× M The relative intersection product, see <ref>, lifts the intersection product to relative homology: _̋∙(E, F)⟶_̋∙-d(E×_M× M M, F). We have a commutative diagram LM⊔ LM ^-J_0⊔ J_1[r] [d] LM ^(_0, _1/2)[d] M ^-Δ[r] M× M where _t LM→ M are the evaluation maps at time t∈ [0, 1] and J_i LM→ LM are the reparametrization maps which make the loop constant either for times [0, 1/2] or [1/2, 1]. 
Then the loop coproduct _̋∙+d-1(LM)→_̋∙(LM× LM, M× LM∪ LM× M) is defined in terms of the relative intersection product with respect to the above commutative diagram, see <ref>. The relative intersection product consists of two essential ingredients: the pairing ϵ_p as well as a trivialization of the image of a certain canonical element, the Hochschild homology Euler characteristic, χ_(M)∈_0(LM) in _∙(LM, M). In other words, the second piece of data consists of a lift of χ_(M) along the inclusion of the constant loops _∙(M)→_∙(LM). The Hochschild homology Euler characteristic χ_(M)∈_0(LM) is well-defined for any finitely dominated space M and can be understood as the class of the constant local system _M∈(M) (which is a compact object since M is finitely dominated) in Hochschild homology _∙((M)^ω)≅((M)). Moreover, the image of χ_(M) in _0()= under the projection LM→ gives the Euler characteristic of M. Let us assume for simplicity that M is connected. Then we may define a version of χ_(M) over the sphere spectrum (see <ref> for details): * There is an element χ_(M)∈Ω^∞Σ^∞_+ LM≅(Σ^∞_+Ω M), the Euler characteristic. It coincides with the free loop transfer along M→ in the sense of <cit.>. * There is an element χ_A(M)∈ A(M) = K(Σ^∞_+ Ω M) in Waldhausen's A-theory, the A-theoretic Euler characteristic. It coincides with the homotopy-invariant Euler characteristic from <cit.>. Under the Dennis trace map A(M)→Ω^∞Σ^∞_+ LM we have χ_A(M)↦χ_(M). The inclusion of constant loops _∙(M)→_∙(LM) is an instance of the assembly map <cit.>: it is a universal colimit-preserving approximation to the functor M↦_∙(LM). There are similar assembly maps αΩ^∞_+ M⊗()⟶(M), αΣ^∞_+ M⟶Σ^∞_+ LM in A-theory and related by the Dennis trace map. The study of lifts of the A-theoretic Euler characteristic χ_A(M) along the assembly map is at the heart of simple homotopy theory. Namely, the fiber of the assembly map at χ_A(M) is the space of the structures of a simple homotopy type on a given finitely dominated homotopy type. An explicit description of the fiber of the assembly map in terms of the space of PL h-cobordisms is given by the parametrized h-cobordism theorem <cit.>. We use the following two basic facts from simple homotopy theory: * If M is a finite polyhedron (equivalently, a finite CW complex), there is a lift λ_(M) of χ_A(M) along the assembly map, see <ref>. * If f M_1→ M_2 is a homotopy equivalence of finite polyhedra, the difference of the lifts defines an element τ(f)∈(π_1(M_1)) in the Whitehead group given by the Whitehead torsion, see <ref>. The geometric input to the construction of the relative intersection product is a model of the Thom collapse for the diagonal ϵ_p which fixes the diagonal up to homotopy; namely, we require the composite Δ_♯_M→_M⊠_MΔ_♯_M[d] to be the pushforward of a map _M→_M[d] (the Euler class in ^d(M)) along the diagonal. Geometrically, for M a closed manifold we use a model of the Thom collapse given by the commutative diagram M [r] [d] _2(M) [d] M [r] M× M, where _2(M) is the Fulton–MacPherson compactification of the configuration space of two points M× M∖Δ and M→_2(M) is the inclusion of its boundary, which is the unit tangent bundle of M. This diagram is a pushout of spaces; so, passing to relative suspension spectra, we obtain a commutative triangle Δ_♯_M[r] _Δ_♯ e(M)[d] _M× M^ϵ_p[dl] Δ_♯^ M of parametrized spectra over M× M. We explain in <ref> that this diagram gives rise (and is in fact equivalent) to a lift λ_(M) of the Euler characteristic χ_(M) along the assembly map. 
So, at this point we have two lifts of χ_(M): one, λ_(M), constructed using the geometric model of the Pontryagin–Thom collapse along the diagonal and another one, (λ_(M)), constructed using a triangulation of M. The proof of <ref> then reduces to the following statement. [<Ref>] Let M be a closed d-manifold. Then the lifts λ_(M) and (λ_(M)) of the Euler characteristic χ_(M) are equivalent. To prove <ref>, we show that the lift λ_(M) is compatible with triangulations. As the assembly map is an equivalence when M is contractible and λ_(M) is constructed by covering the manifold by open stars of simplices of the triangulation, this gives the result. §.§ Connection to embedding calculus <Ref> has the following implication for embedding calculus <cit.>. Let _d be the ∞-category of smooth manifolds and embeddings. Let _d⊂_d be the full subcategory consisting of finite disjoint unions of ^d. Following the perspective of <cit.> we understand embedding calculus as the restricted Yoneda embedding _d→(_d)=(_d^, ) given by sending a manifold M to E_M := (-, M), the collection of spaces of embeddings of Euclidean spaces into M. For instance, for a _d-algebra A its factorization homology over M as in <cit.> is completely determined by E_M: ∫_M A ≅ E_M⊗__d A. On the level of homotopy types, our diagram (<ref>) can be written in terms of the _d-presheaf E_M as follows: E_M(^d) ×_O(d) E_^d(^d ⊔^d)/O(d)^× 2[r] [d] E_M(^d ⊔^d) /O(d)^× 2[d] M [r] M× M so that the lift λ_(M) is an invariant of E_M. We thus obtain the following consequence. Let f N → M be a homotopy equivalence between closed d-manifolds with Whitehead torsion τ(f)∈(π_1(M)). If (τ(f)) ≠ 0∈_̋1(LM, M), then f cannot be extended to an equivalence E_N→ E_M of _d-presheaves. As explained above, (τ(f)) only depends on the corresponding diagrams (<ref>). Diagrams of that shape (satisfying a certain non-degeneracy condition) are also studied in <cit.> (see also <cit.> for a comparison to our setting) under the name of Poincaré diagonals. It is furthermore suggested there that a C_2-refinement of (λ_(M)) is a complete invariant for d sufficiently large. §.§ String topology TQFT Cohen and Godin <cit.> described a 2-dimensional TQFT Z whose state space Z(S^1) is isomorphic to _̋∙(LM) and whose pair-of-pants product is given by the loop product. There are two subtleties related to the definition of this TQFT. First, the TQFT suffers an orientation anomaly: the operation associated to a surface Σ shifts the homological degree by χ(Σ)(M). We instead consider Z as a framed 2d TQFT, so that, for instance, the boundary circles are equipped with 2-framings; we denote by S^1_n the circle equipped with a 2-framing with n twists. The second issue is that the operation associated to a disk with an incoming circle is not defined; in other words, the 2d TQFT is a positive boundary one (in the terminology of <cit.>) or a non-compact one (in the terminology of <cit.>). The construction of the (closed) 2-dimensional TQFT of Cohen and Godin was extended to an open-closed TQFT (in the sense of Moore and Segal <cit.>) in the works <cit.> with branes corresponding to submanifolds of M. It was observed by Tamanoi <cit.> that while the pair-of-pants product in the Cohen–Godin TQFT is interesting (it is the loop product), the pair-of-pants coproduct is not interesting: it can be expressed purely in terms of the Euler characteristic of M. 
The proof of this fact can be given by using a Frobenius-type relation illustrated in <ref> as well as the computation of the value (1)∈ Z(S^1)⊗ Z(S^1)≅_̋∙(LM)⊗_̋∙(LM) of Z on the cylinder viewed as a cobordism with an empty incoming boundary. The key observation is that the element (1)∈_̋0(LM)⊗_̋0(LM) is the image of the homological Euler class e(M)∈_̋0(M) under _̋0(M)_̋0(M)⊗_̋0(M)⟶_̋0(LM)⊗_̋0(LM). In turn, if M is connected with a basepoint x, the homological Euler class is equal to χ(M)[x]. In <cit.> it was suggested how to extend the string topology TQFT to a fully extended (non-compact) framed 2d TQFT in the fully homotopical context. Namely, by the cobordism hypothesis such a TQFT Z is determined by a smooth dg category Z() which one can take to be (M). In a sense, it corresponds to an open-closed TQFT with a maximal set of branes; the branes corresponding to submanifolds i N↪ M in this picture correspond to the pushforward local systems i_♯_N. We show in <ref> that the pair-of-pants product in this TQFT coincides with the loop product; so, this 2d TQFT Z is indeed a homotopical and fully extended lift of the Cohen–Godin TQFT. The homotopy of cobordisms shown in <ref>, which realizes the proof of the triviality of the pair-of-pants coproduct Z(S^1_1)→ Z(S^1_0)⊗ Z(S^1_0), gives rise to a secondary operation on Z(S^1_1). Namely, given a trivialization of (1)∈_0(LM× LM), which is marked in <ref> in blue, we obtain a secondary coproduct ∨ Z(S^1_1)⟶ Z(S^1_0)⊗ Z(S^1_0)[-1]. For a general closed oriented manifold M, the lift λ_(M) provides a trivialization of the image of (1) in _0(LM× LM, M× M). Thus, using the lift λ_(M) we obtain the coproduct ∨_∙(LM)[-d]⟶_∙(LM, M)⊗_∙(LM, M)[-1]. [<Ref>] The secondary TQFT coproduct (<ref>) coincides with the loop coproduct. This theorem provides a conceptual explanation for the failure of homotopy invariance of the loop coproduct: while the TQFT Z is homotopy invariant, the loop coproduct is a secondary operation which requires a trivialization of (1) and this extra piece of information is not homotopy invariant. More precisely, any two trivializations differ by an element τ∈ Z(S^1_0) ⊗ Z(S^1_0)[-1]. The corresponding secondary TQFT coproduct then differ by the sum of the two sides of <ref>, where (1) is replaced by τ. This is exactly the formula in <ref>. This theorem also suggests how to generalize the loop coproduct for smooth dg categories other than =(M). Namely, any such smooth dg category defines a non-compact framed 2d TQFT Z_. In this case (1) is known as the Shklyarov copairing <cit.> (its analog for proper dg categories is the Mukai pairing <cit.>). Given a trivialization of (1) (or its partial trivialization like in the case of manifolds with non-vanishing Euler characteristic) we obtain a secondary coproduct ∨ Z(S^1_1)→ Z(S^1_0)⊗ Z(S^1_0)[-1]. In <ref> we describe several examples of smooth dg categories, where the Shklyarov copairing admits a trivialization (or a partial trivialization); in some of these cases we expect that the secondary coproduct coincides with the already known constructions of such coproducts: * For oriented manifolds M together with a non-vanishing vector field f (or equivalently a combinatorial Euler structure in the sense of <cit.>) one can fully trivialize (1) so that we obtain a loop operation ∨_f _∙(LM)[-d] →_∙(LM) ⊗_∙(LM)[-1]. We expect this to coincide with the constructions in <cit.>, <cit.> and <cit.>. 
We note that this coproduct depends on the non-vanishing vector field and is not preserved by general diffeomorphisms (not preserving the vector field). Instead, there is a transformation formula analogous to <ref>. * If M is a nondegenerate Liouville manifold of dimension 2n, the wrapped Fukaya category (M) is smooth and there is an isomorphism _∙((M))≅^∙(M) to the symplectic cohomology <cit.>. By <cit.> the Shklyarov copairing (1) vanishes when projected to the positive-energy symplectic cohomology ^∙_>0(M), so we obtain a secondary coproduct ^∙(M)⟶^∙_>0(M)⊗^∙_>0(M)[n-1]. We expect that it coincides with the loop coproduct defined geometrically <cit.>. We refer to <ref> for more on this. * Let A = k⟨ x_1, …, x_n⟩ be the free algebra. It is smooth and by <ref> the Shklyarov copairing (1) vanishes when projected to the reduced Hochschild homology _∙(A) defined as the quotient of _∙(A) by the span of the class [A] of the free module, so we obtain a secondary coproduct ^1(A)⟶_0(A) ⊗_0(A^). We expect that it coincides with the divergence map from <cit.>. In the case of the path algebra of a doubled quiver, the composite of the divergence map and the Connes operator coincides with the Lie cobracket on the necklace Lie bialgebra <cit.>. Related interpretations of the loop coproduct have appeared in the literature previously: * For a smooth dg category the work of Rivera, Takeda and Wang (see <cit.>) constructs an explicit homotopy G between Hochschild complexes of which we expect to be a chain-level model of the homotopy ∨' from <ref>. They conjecture that the secondary coproduct coincides with the loop coproduct for an appropriate choice of the lift of (1) (which is denoted by E in their paper). <Ref> states that an appropriate choice is the Pontryagin–Thom lift λ_(M), which by <ref> is the same as the Whitehead lift (λ_(M)). Moreover, they show in <cit.> that the corresponding secondary coproduct is coassociative for M ≥ 3. * For a closed oriented Riemannian manifold M the work of Cieliebak, Hingston and Oancea (see <cit.>) constructs a homotopy λ^F between certain Floer chain complexes of the unit disk cotangent bundles D^* M (following Abbondandolo and Schwarz <cit.>). The analog of (1) in that case is the secondary continuation quadratic vector Q_0^F. Moreover, they show in <cit.> that the resulting secondary coproduct on the reduced symplectic homology of D^* M coincides with the loop coproduct under the Viterbo–Abbondandolo–Schwarz isomorphism ^∙(D^* M)≅_̋-∙(LM). §.§ Acknowledgements We would like to thank Manuel Rivera and Nathalie Wahl for useful discussions. § DUALITY AND DIMENSIONS §.§ Categorical background Throughout the paper we use the language of ∞-categories. We denote by the ∞-category of spaces (homotopy types or ∞-groupoids). Given a topological space X, we denote the corresponding homotopy type by (X)∈. We denote by the ∞-category of spectra. As a convention, we denote by smash product in by ⊗ (more classically, it is denoted by ∧). Similarly, the direct sum in is denoted by ⊕ (more classically, it is denoted by ∨). For an ∞-category we denote by ^ω⊂ the subcategory of compact objects. We denote by the symmetric monoidal (∞, 2)-category whose objects are idempotent-complete small stable ∞-categories and 1-morphisms exact functors. The unit object ^ω∈ is the ∞-category of finite spectra. We denote by the symmetric monoidal (∞, 2)-category whose objects are stable presentable ∞-categories and 1-morphisms colimit-preserving functors. 
The unit object ∈ is the ∞-category of spectra. For an ∞-category we denote by _(-, -)∈ the mapping space. If ∈ we denote by _(-, -)∈ the mapping spectrum. §.§ Dimensions In this section we recall the formalism of traces in ∞-categories explained in <cit.>. We refer to <cit.> for a 1-categorical treatment. We denote by σ the braiding on a symmetric monoidal ∞-category. Let be a symmetric monoidal category. An object x∈ is dualizable if there is an object x^∨∈ and a pair of morphisms x⊗ x^∨→_ and _→ x^∨⊗ x such that the composites x x⊗ x^∨⊗ x x x^∨ x^∨⊗ x⊗ x^∨ x^∨ are the identities. An object of a symmetric monoidal ∞-category is dualizable if it is dualizable in the underlying homotopy 1-category. Explicitly, an object x∈ of a symmetric monoidal ∞-category is dualizable if it has a dual x^∨, evaluation and coevaluation morphisms x⊗ x^∨→_, _→ x^∨⊗ x and cusp 2-isomorphisms α (⊗𝕀_x)∘ (𝕀_x⊗)→𝕀_x, β (𝕀_x^∨⊗)∘ (⊗𝕀_x^∨)→𝕀_x^∨. We have the following uniqueness statements for duality data (see <cit.> and <cit.>): * For a given x, the space {x^∨, } of objects x^∨ and evaluation morphisms , for which there exist some compatible triple {,α,β}, is contractible. * For a given x, the space {x^∨, ,,α} of objects x^∨, evaluation morphisms , coevaluation morphisms and the cusp isomorphism α, for which there exists some compatible β, is contractible. In other words, we are allowed not to worry about the choices of the duality data as long as we choose the “right” amount of information. Given dualizable objects x, y∈ the object x⊗ y is dualizable with the dual y^∨⊗ x^∨, the evaluation morphism x⊗ y⊗ y^∨⊗ x^∨x⊗ x^∨_ and the coevaluation morphism _ y^∨⊗ y y^∨⊗ x^∨⊗ x⊗ y. Given a morphism of dualizable objects, we can pass to its dual. Let be a symmetric monoidal category, x,y∈ are dualizable objects and f x→ y is a morphism. The dual morphism f^∨ y^∨→ x^∨ is the composite y^∨x^∨⊗ x⊗ y^∨ x^∨⊗ y⊗ y^∨ x^∨. In particular, we have canonical commutative diagrams (see <cit.>) x⊗ x^∨^f⊗𝕀[dr] _^_x[ur] __y[dr] y⊗ x^∨ y⊗ y^∨_𝕀⊗ f^∨[ur] y⊗ y^∨^_x[dr] x⊗ y^∨_𝕀⊗ f^∨[dr] ^f⊗𝕀[ur] _ x⊗ x^∨__y[ur] Let be a symmetric monoidal ∞-category and x∈ a dualizable object with evaluation x⊗ x^∨→_ and coevaluation _→ x^∨⊗ x. Then there is a canonical 2-isomorphism ()^∨≅σ∘. By definition the dual ()^∨ is given by the composite _ x⊗ x^∨ x⊗ x^∨⊗ x⊗ x^∨ x⊗ x^∨. Applying the cusp 2-isomorphism β_x we obtain the 2-isomorphism ()^∨→. Dualizable objects have dimensions defined as follows. Let be a symmetric monoidal ∞-category and x∈ a dualizable object. Its dimension (x)∈_(_) is the composite (x)_ x^∨⊗ x x⊗ x^∨_. More generally, for an endomorphism f x→ x of a dualizable object we define the trace (f)∈_(_) to be the composite (f)_ x^∨⊗ x x^∨⊗ xx⊗ x^∨_. To simplify the notation, we will often omit the braiding, implicitly identifying coevaluation maps _→ x^∨⊗ x with maps _→ x⊗ x^∨ using the braiding. <Ref> makes it clear how to extend to a map ^→_() where ^ is the groupoid core of dualizable objects in (or, equivalently, the ∞-groupoid of tuples (x,x^∨, , , α) satisfying the above conditions). Now let be a symmetric monoidal (∞, 2)-category. In this case the construction of the dimension extends as follows. By definition, an adjunction in is the same as an adjunction in the corresponding homotopy 2-category. In other words, it is specified by the condition that there exist two 1-morphisms f x→ y and f^ y→ x together with two 2-morphisms η𝕀_x⇒ f^ f and ϵ ff^⇒𝕀_y together with 3-isomorphisms witnessing the triangle equalities. 
We refer to <cit.> for a proof that in this notion of adjunction is equivalent to the one given in <cit.>. Let be a symmetric monoidal (∞, 2)-category, f x→ y is a morphism of dualizable objects which admits a right adjoint f^ y→ x. The transfer map (f)(x)⟶(y) is given by traversing the diagram @C=1cm x⊗ x^∨_f⊗𝕀[dr] @=[rr] @[d]^(.2)="a"^(.63)="b" ^η_f@=> "a";"b" x⊗ x^∨^_x[dr] _^_x[ur] __y[dr] y⊗ x^∨_f^⊗𝕀[ur] ^𝕀⊗ (f^)^∨[dr] @[d]^(.2)="c"^(.63)="d" ^ϵ_f@=> "c";"d" _ y⊗ y^∨^-𝕀⊗ f^∨[ur] @=[rr] y⊗ y^∨__y[ur] where the two squares commute using (<ref>). In formulas, the transfer map is given by the composite (x) =_x ∘_x _x ∘ (f^ f ⊗𝕀)∘_x ≅_x ∘ (f^⊗ f^∨)∘_y ≅_y ∘ (𝕀⊗ (f^)^∨∘ f^∨)∘_y ⟶_y ∘_y=(y). We are mainly interested in the example where =. The following statement gives a description of some of the objects in ^ (a characterization of all dualizable objects in is given in <cit.>). Suppose ∈ is compactly generated, i.e. =(^ω). Then is dualizable. The dual is ^∨ = (^ω, ) and the evaluation pairing ⊗^∨→ is given by ind-extending the mapping spectrum functor _(-, -)^ω×^ω, →. Let I be a small ∞-category. Then ^I = (I, ) is dualizable with dual given by ^I^. The evaluation pairing ^I⊗^I^→ is (F, G) = ∫^i∈ I F(i)⊗ G(i) and the coevaluation pairing →^I^⊗^I≅^I^× I sends to the functor i, j↦Σ^∞_+ _I(i, j). In particular, (^I)≅∫^i∈ IΣ^∞_+ _I(i, i) and for a functor f I→ J the transfer (f_♯)(^I)⟶(^J) is given by the composite ∫^i∈ IΣ^∞_+_I(i, i)⟶∫^i∈ IΣ^∞_+_J(f(i), f(i))⟶∫^j∈ JΣ^∞_+_J(j, j). Suppose ,∈ are compactly generated and F→ is a functor preserving compact objects. Then it admits a colimit-preserving right adjoint; in other words, it is a right-adjointable 1-morphism in . By <cit.> we have ()≅(^ω) and the transfer map fits into the commutative diagram @C=2cm() ^(F)[r] ^∼[d] () ^∼[d] (^ω) ^(F)[r] (^ω). Suppose ∈ is compactly generated and x∈ is a compact object. Consider the functor F_x→ given by V↦ V⊗ x which admits a colimit-preserving right adjoint. By the previous example the class [x]∈Ω^∞(^ω) is computed as the image of the transfer map ≅()(). This is the Chern character of the object x <cit.>. The construction of transfer maps can be formulated coherently as follows. The authors of <cit.> define an (∞,1)-category ^ with the following informal description: * Objects of ^ are dualizable objects in . * For x, y ∈^ morphisms x→ y in ^ are the same as morphisms x→ y in admitting a right adjoint. The ∞-category ^ allows one to formulate the functoriality of dimensions as follows (see <cit.>). There exists a symmetric monoidal functor (-) ^→_(), given on objects by x↦(x) and on morphisms by (f x→ y)↦ ((f)(x)→(y)). Using the functoriality of the dimensions given by <ref>, we see that the Chern character (F_x)→() when x∈^ω ranges over compact objects of defines a map (^ω)^∼⟶Ω^∞() from the groupoid core of compact objects in . §.§ Relative dualizability We will now specialize the setting of the previous section to the case =. We begin with the following observation. Let ∈ be a stable presentable ∞-category and x∈ an object. Then x is compact if, and only if, the functor → given by V ↦ V ⊗ x admits a right adjoint in . By definition, the functor V↦ V⊗ x admits a right adjoint given by the mapping spectrum functor _(x, -)→. It preserves finite colimits since is stable; therefore, it defines a right adjoint in if it preserves filtered colimits. Suppose _(x, -)→ preserves filtered colimits. Then the composite _(x, -) preserves filtered colimits. Therefore, x∈ is compact. Conversely, suppose x∈ is compact. 
Then x[n]∈ is compact for every n∈. Therefore, Ω^∞+n_(x, -)→ preserves filtered colimits. As the functors {Ω^∞+n→}_n∈ are jointly conservative and preserve filtered colimits, this implies that _(x, -)→ preserves filtered colimits. The previous statement rephrases the property of an object of a stable presentable ∞-category being compact internally to the 2-category . We are now going to use this characterization to introduce a duality for compact objects in dualizable (for example, compactly generated) stable presentable ∞-categories. Let ∈ be a dualizable stable presentable ∞-category with the dual ^∨∈, evaluation functor ⊗^∨→ and the coevaluation functor →^∨⊗. We say an object x∈ is relatively (right) dualizable if there is an object x^∨∈^∨ together with morphisms ϵ x^∨⊠ x→() and η→(x, x^∨) satisfying the obvious analogs of the duality axioms in <ref>. Let ∈ be a dualizable stable presentable ∞-category and x∈. The following are equivalent: * x∈ has a relative dual x^∨∈^∨ with evaluation pairing ϵ x^∨⊠ x→(). * The functor → given by ↦ x admits a colimit-preserving right adjoint H_x→ such that the induced map x^∨→ (𝕀⊗ H_x)() is an isomorphism, i.e. the functor →^∨ given by ↦ x^∨ is dual to H_x. * x is compact. Assume x∈ has a relative dual x^∨∈^∨. The relative dualizability data is provided by maps ϵ x^∨⊠ x⟶() η ⟶(x, x^∨) where ϵ is a morphism in ^∨⊗ and η is a morphism in . Identifying ^∨⊗≅(, ) via y⊠ x↦(-, y)⊗ x the data of ϵ is equivalent to the data of the morphism ϵ(-, x^∨)⊗ x⟶𝕀 in (, ). We see that the relative dualizability data (ϵ, η) for (x, x^∨) is equivalent to the data witnessing the functors (↦ x, z↦(z, x^∨)) as being adjoint. In turn, the functor → given by z↦(z, x^∨) is dual to the functor →^∨ given by ↦ x^∨ which proves the equivalence of the first two statements. The equivalence of the last two statements is the content of <ref>. This perspective on compact objects allows us to give a formula for the Chern character. Suppose ∈ is dualizable and x∈ is compact. Then its Chern character (F_x)→() = (()) is given by the composite (x, x^∨)(()). The statement is obtained by unpacking <ref>. §.§ K-theory Given a small stable ∞-category we have the connective K-theory spectrum () defined as in <cit.>. We denote by K()=Ω^∞() the underlying space. By construction, there is a natural map ^∼→ K() from the groupoid core of objects in to K(); so, an object x∈ defines a point [x]∈ K(). There is a Dennis trace map ()⟶() defined, for instance, by the universal property in <cit.>. As explained in <cit.>, it is compatible with the dimension map defined previously as follows. Let be a stable compactly generated ∞-category. Then there is a commutative diagram (^ω)^∼^-(<ref>)[r] Ω^∞() K(^ω) [u] ^-[r] (^ω) ^∼[u] Recall from <cit.> that an additive invariant is a functor F→ which preserves filtered colimits, sends Morita equivalences to isomorphisms and sends split-exact sequences to exact sequences of spectra. We will use that algebraic K-theory → as well as topological Hochschild homology → are additive functors, see <cit.>. §.§ Smooth objects Throughout this section denotes a symmetric monoidal (∞, 2)-category. Let x∈ be a dualizable object. * x is smooth if admits a left adjoint ^Ł. * x is proper if admits a right adjoint ^. For any dualizable object the evaluation map is dual to the coevaluation map , so admits a left adjoint if, and only if, admits a right adjoint ^ and vice versa. In particular, for ∈ we obtain that is smooth if, and only if, (1) ∈⊗^∨ is compact. 
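As a standard illustration, recorded here only for orientation: for an _1-ring spectrum A, the ∞-category of right A-module spectra is dualizable with dual the ∞-category of left A-modules, their tensor product is the ∞-category of A-bimodules, and the image of the sphere spectrum under the coevaluation is the diagonal bimodule A. The criterion above therefore says that this module ∞-category is smooth precisely when A is compact as an A-bimodule, recovering the usual notion of a smooth algebra.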
Suppose = and ∈ is compactly generated. Using the duality data from <ref> we see that is proper if, and only if, for every compact objects x,y∈ the mapping spectrum _(x, y) is finite. If ∈ is smooth, by adjunction we obtain an equivalence ()≅_⊗^∨(^Ł(), ()). In the smooth case the data of a relative dualizability of an object x∈ can be phrased in terms of the evaluation ϵ x⊠ x^∨→() and coevaluation η^Ł()→ x⊠ x^∨ satisfying the duality axioms. In this case the Chern character of a relatively dualizable object x∈ in a smooth ∞-category can be written as the composite ^Ł() x⊠ x^∨() If is a monoidal ∞-category and a -module ∞-category equipped with an object M∈, the center of M is an object (M)∈ satisfying the universal property _(D, (M))≅_(D⊗ M, M) for every D∈, see <cit.> for more details. We will use the center in the following way. Let x∈ be an object. The center of x is an object (x)∈_() which is the center of 𝕀_x∈_(x), where we view _(x) as an _()-module ∞-category. As 𝕀_x is an _1-algebra in _(x), one can show that if the center (x)∈_() exists, then (x) upgrades to an _2-algebra in _(). Considering as the bicategory of small categories, we see that for a category ∈ its center () is identified with the set of natural endomorphisms of the identity functor. Let x∈ be a smooth object. Then ^∘ is the center (x). Since x is dualizable, we may identify _(x)≅_(, x^∨⊗ x) and under this identification 𝕀_x↦. Moreover, since x is smooth, has a right adjoint ^. For D∈_() we have natural isomorphisms __(x)(D⊗𝕀, 𝕀)≅__(, x^∨⊗ x)(∘ D, )≅__()(D, ^∘). § PARAMETRIZED SPECTRA In this section we describe ∞-categorical functoriality of the category of parametrized spectra introduced in <cit.> following <cit.>. §.§ Functor categories Let I be a small ∞-category and a stable presentable ∞-category. Consider (I, )∈. As it has small colimits and limits, it is tensored and cotensored over spaces. For a functor f I→ J of small ∞-categories we have the induced functors @C=2cm(I, ) @/^2.0pc/^f_♯[r] @/_2.0pc/^f_*[r] (J, ) _f^*[l] where (f^* G)(i) = G(f(i)) is the restriction functor and its left and right adjoints are the left and right Kan extensions given by the (co)ends (f_♯ F)(j) = ∫^i∈ I_J(f(i), j)⊗ F(i), (f_* F)(j) = ∫_i∈ I(_J(j, f(i)), F(i)), see <cit.>. We have f_♯⊣ f^* ⊣ f_*. In particular, f_♯ and f^* are colimit-preserving functors, but note that f_* is not, in general, colimit-preserving. Let us now state some basic properties of functor categories that we will use. Recall the notion of a smooth functor f I→ J from <cit.> (see also <cit.>). By <cit.> an equivalent way to define it is as follows. A functor f I→ J is proper if for every j∈ J the functor {j}×_J I→ I_/j is cofinal. A functor f is smooth if f^ I^→ J^ is proper. We will encounter smooth functors coming from the following two examples: * If f X→ Y is a morphism of ∞-groupoids, then it is smooth by <cit.>. * The projection p I→ is smooth by <cit.> for any I. Let ∈ be a stable presentable ∞-category. Suppose that Ĩ^g̃[r] ^f̃[d] I ^f[d] J̃^g[r] J is a Cartesian diagram of small ∞-categories, where g is smooth. Then the natural transformation f̃_♯g̃^*⇒ g^* f_♯ of functors (I, )→(J̃, ) is an equivalence. The dual claim (replacing by ^ and left Kan extensions by right Kan extensions) is shown in <cit.>. Suppose is compactly generated. Then the category (I, ) is compactly generated. If I is finite and _I(x, y) is a finite space for every x,y∈ I, then (I, ^ω) = (I, )^ω considered as subcategories of (I, ). 
For the first claim see <cit.> and for the second claim see <cit.>. For two small ∞-categories I, J and ,∈ there is a natural equivalence ⊠(I, )⊗(J, )⟶(I× J, ⊗). By <cit.> we have (I, )≅(I, )⊗, so it is enough to prove the claim for ==. Recall that by <cit.> there is a natural identification between ⊗ and the ∞-category ^(^, ) of limit-preserving functors ^→. Therefore, (I, )⊗(J, )≅^((I, )^, (J, )). By <cit.> we have ^((I, )^, (J, ))≅(I, (J, )) which is naturally equivalent to (I× J, ). §.§ Duality for stable presheaves Consider a small ∞-category I as before and the ∞-category ^I = (I, ) of functors. Recall from <ref> that ^I is dualizable with dual ^I^. Let f I→ J be a functor of small ∞-categories and f^ I^→ J^ its opposite. Consider the duality (^I)^∨≅^I^ and (^J)^∨≅^J^ from <ref>. Then f_♯^I→^J is dual to (f^)^*^J^→^I^. Moreover, the following conditions are equivalent: * The functor f^*^J→^I admits a colimit-preserving right adjoint f_*^I→^J. * The functor (f^)_♯^I^→^J^ admits a left adjoint (f^)^♯^J^→^I^. For the first statement we have to show that the diagram @C=2cm^[r] ^[d] ^I^⊗^I ^𝕀⊗ f_♯[d] ^J^⊗^J ^(f^)^*⊗𝕀[r] ^I^⊗^J naturally commutes. This follows from the natural equivalence Σ^∞_+_J(f(i), j)≅∫^i∈ IΣ^∞_+(k, i)⊗Σ^∞_+(f(k), j). Given the adjunction f^*⊣ f_*, passing to the duals we get an adjunction (f_*)^∨⊣ (f^*)^∨≅ (f^)_♯ which shows the second statement. Consider the projection p I→∗ and the diagonal map Δ I→ I× I. Denote by _I=p^*∈^I the constant object i∈ I↦. Consider the following additional assumption on the ∞-category I. A small ∞-category I admits duality if the functor Δ_♯ p^*→^I⊗^I is the coevaluation of a self-duality of ^I. The evaluation is dual to coevaluation, so the functor Δ_♯ p^*→^I⊗^I is the coevaluation of a self-duality of ^I if, and only if, the functor (p^)_♯ (Δ^)^*^I^⊗^I^→ is the evaluation of a self-duality of ^I^. This is similar, but not identical, to the notion of a Verdier poset from <cit.>, where a finite poset P is called Verdier if p_*Δ^* is the evaluation of a self-duality of ^P (note the difference in the choice of the pushforward functors). If I admits duality, then we may compare the self-duality of ^I to the duality between ^I and ^I^ from <ref> using the duality functor ^I^⟶^I given by ()(i) = _j (j, i)⊗(j). For an ∞-category I admitting duality the functor is an equivalence. This functor is a generalization of the Costenoble–Waner duality functor, see <ref>. Let f I→ J be a functor of small ∞-categories which admit duality and consider the corresponding self-duality of ^I and ^J. Then there is a natural transformation (f_♯)^∨⟶ f^*. If f is smooth, (<ref>) is an equivalence and the following conditions are equivalent: * The functor f^*^J→^I admits a colimit-preserving right adjoint f_*^I→^J. * The functor f_♯^I→^J admits a left adjoint f^♯^J→^I. Consider the Cartesian diagram of ∞-categories @C=2cm I ^(f×𝕀)∘Δ^I[r] ^f[d] J× I ^𝕀× f[d] J ^Δ^J[r] J× J and the corresponding base change natural transformation (f_♯⊠𝕀)Δ^I_♯ f^*⟶ (𝕀⊠ f^*)Δ^J_♯. Applying it to _J we get a morphism (f_♯⊠𝕀)Δ^I_♯_I⟶ (𝕀⊠ f^*)Δ^J_♯_J which produces a natural transformation (f_♯)^∨→ f^*. If f I→ J is smooth, then (f×𝕀) J× I→ J× J is smooth, since smooth functors are stable under base change <cit.>. In particular, by the base change formula from <ref> the morphism (f_♯⊠𝕀)Δ^I_♯_I→ (𝕀⊠ f^*)Δ^J_♯_J is an isomorphism. The second statement is proven analogously to the second statement in <ref>. If I is a small ∞-category which admits duality, the product I× I admits duality. 
Then (<ref>) gives a natural transformation Δ_♯^∨⟶Δ^*. The functor Δ^* defines a symmetric monoidal structure on ^I with the unit _I. Its left adjoint, Δ_♯, defines a symmetric comonoidal structure with the counit p_♯. Passing to the dual we obtain a new symmetric monoidal structure on ^I given by Δ_♯^∨ with the same unit: (p_♯)^∨()≅ p^*≅_I. In particular, there is the unitor natural isomorphism Δ_♯^∨(_I ⊠)≅. Then (<ref>) defines a lax symmetric monoidal structure on the identity functor with respect to the two symmetric monoidal structures Δ_♯^∨ and Δ^*. If we assume that I admits duality and _I is compact, we have further structure: * _I admits a relative dual ζ_I=p^♯∈^I. From <ref> we get the formula ζ_I = (p_*⊠𝕀)Δ_♯_I. The adjunction p^♯⊣ p_♯ is given by a natural equivalence _^I(ζ_I, -)≅ p_♯(-). * We obtain a formula for the evaluation as _^I_^I^∨≅ (p^*)^∨∘Δ_♯^∨ p_♯∘Δ_♯^∨_^I(ζ_I, Δ_♯^∨(- ⊠ -)). Under the equivalence _^I(_I ⊠ζ_I) _^I(ζ_I, Δ_♯^∨(_I ⊠ζ_I)) _^I(ζ_I, ζ_I), the coevaluation →_^I(_I ⊠ζ_I) of the relative duality between _I and ζ_I is mapped to the identity morphism ζ_I→ζ_I. First note that the relative duality between _I and ζ_I = p^♯ is established by the adjunction p^♯⊣ (p^*)^∨. By <ref> it follows that the unit of the adjunction → (p^*)^∨(ζ_I) = _^I(⊠ζ_I) is the coevaluation of the relative duality. But the unit of the adjunction is determined by the requirement that under the isomorphism _^I(_I ⊠ζ_I) = (p^*)^∨(ζ_I) (<ref>)≅ p_♯(ζ_I) (<ref>)≅_^I(ζ_I, ζ_I) it is sent to the identity. The statement thus follows once we show that the natural transformation (p^*)^∨(-) = _^I(_I ⊠ -) _^I(ζ_I, Δ_♯^∨(_I ⊠ -)) _^I(ζ_I, -), is equivalent to (p^*)^∨(-) (<ref>), (<ref>)≅_^I(ζ_I, -). Unpacking the definition of (<ref>) we obtain (p^*)^∨(-) = _^I(_I ⊠ -) (p^*)^∨∘Δ_♯^∨ (_I ⊠ -) (<ref>)≅ (p^*)^∨(-) _^I(ζ_I, -). We claim that the automorphism of (p^*)^∨ appearing is homotopic to the identity. Spelling out the definition of (<ref>) we arrive at the following presentation of that automorphism (p^*)^∨ = _^I∘ (p^* ⊗𝕀) (p^*)^∨∘Δ_♯^∨∘ (p^* ⊗𝕀) (p^*)^∨∘Δ_♯^∨∘ (p_♯^∨⊗𝕀) ≅ (p^*)^∨ (( p_♯⊗𝕀) ∘Δ_♯)^∨≅ (p^*)^∨. By passing to duals, it is given by p^* = (p^*)^∨∨ = ((p^*)^∨⊗𝕀)(Δ_♯ p^*) (p_♯⊗𝕀)(Δ_♯ p^*) = p^*. which is equivalent to p^* p_♯^∨ = (p_♯⊗𝕀)(Δ_♯ p^*) = p^*. To show that this composite is homotopic to the identity it suffices to show that the adjoint map p_♯ p^* →𝕀 is the counit of the adjunction p_♯⊣ p^*. But (<ref>) is defined by its adjoint map p_♯ p_♯^∨() = (p_♯⊗ p_♯)Δ_♯ p^*→ as the composition of (p_♯⊗ p_♯)Δ_♯ p^* ≅ (Δ_)_♯ p_♯ p^* = p_♯ p^* →𝕀, where we used the equalities (p × p)∘Δ_I = Δ_∘ p = p, so that precomposing with p^* ≅ (𝕀⊗ p_♯)Δ_♯ p^* gives the desired counit. §.§ The case of spaces In this section we specialize the previous discussion to the case when I=X is a space. The ∞-category of parametrized spectra is ^X = (X, ). Let M be a CW complex and X=(M) its underlying homotopy type. Then M is locally of singular shape in the sense of <cit.> and so one may identify ^X with the full subcategory _lc(M; )⊂(M; ) of locally constant sheaves of spectra over M <cit.>. All constructions and theorems in this section apply equally well to the ∞-category of local systems (X) = (X, _k), where k is an _∞-ring. To simplify the notation, in this section we only consider the case k=. A map of spaces f X→ Y induces functors @C=2cm^X @/^2.0pc/^f_♯[r] @/_2.0pc/^f_*[r] ^Y _f^*[l] with f_♯⊣ f^* ⊣ f_*. 
In this case the formulas (<ref>) reduce to (f_♯)_y≅_x∈ f^-1(y)_x, (f_*)_y≅lim_x∈ f^-1(y)_x as follows from <cit.>. For instance, for p X→ we have the constant parametrized spectrum _X=p^* and p_♯_X≅_x∈ X≅Σ^∞_+ X. The construction of parametrized spectra has the following compatibility with colimits. Consider a functor I→ given by i↦ X_i with colimit X∈ and denote the natural projections by f_i X_i→ X. Then the counits (f_i)_♯ (f_i)^*→𝕀 identify _i (f_i)_♯ (f_i)^*≅𝕀. Consider x∈ X and ∈^X. Then we have to show that _i _y∈ f_i^-1(x)_f_i(y)→_x is an isomorphism. This is equivalent to showing that _i _y∈ f_i^-1(x)≅_i f_i^-1(x)∈ is contractible. But since colimits in are universal (<cit.>), we have _i f_i^-1(x)≅𝕀^-1(x). Consider the ∞-category _/X of spaces over X and (_/X)_∗ of retractive spaces over X. There are fiberwise stabilization functors Σ^∞_X (_/X)_∗⟶^X, Σ^∞_+X_/X⟶^X. For a space f Y→ X over X we may identify f_♯_Y≅Σ^∞_+X Y. Our next goal is to establish a self-duality of the ∞-category of parametrized spectra. For this we will extend parametrized spectra to a bivariant functor using the results of <cit.>. Since is freely generated under colimits by ∈, there is a unique colimit-preserving functor _♯→ which sends ∈ to ∈. Moreover, the symmetric monoidal structure on induces a symmetric monoidal structure on the functor _♯. Explicitly, _♯ sends X to ^X and f X→ Y to f_♯^X→^Y. Consider the symmetric monoidal (∞, 2)-category () which has the following informal description: * Its objects are spaces. * Its 1-morphisms from X to Y are correspondences X← C→ Y of spaces. * Its 2-morphisms from X← C_1→ Y to X← C_2→ Y are morphisms or correspondences: diagrams of the shape C_1 [d] [ddl] [ddr] C_2 [dl] [dr] X Y We refer to <cit.> for a construction of this (∞, 2)-category. There is a natural symmetric monoidal functor →() which is the identity on objects and which sends f X→ Y to X X Y. In particular, any symmetric monoidal functor out of () restricts to a symmetric monoidal functor out of . There is a unique symmetric monoidal functor of (∞, 2)-categories _♯^*()⟶, whose restriction to is _♯. By <ref> the functor _♯→ satisfies the left Beck–Chevalley condition from <cit.>. Therefore, by <cit.> the functor _♯ uniquely extends to the functor _♯^* and by <cit.> the symmetric monoidal structure uniquely extends. The functor _♯^*()→ constructed in the previous theorem has the following informal description: * On the level of objects it sends X to ^X. * On the level of 1-morphisms it sends X C Y to g_♯ f^*^X→^Y. * On the level of 2-morphisms it sends C [d] _f[ddl] ^g[ddr] C̃^f̃[dl] _g̃[dr] X Y to the natural transformation g_♯ f^*⇒g̃_♯f̃^* induced by C→C̃. Let us state several important corollaries of this construction. First, ^X carries a natural symmetric monoidal structure given by ^X⊗^X≅^X× X^X, where Δ X→ X× X is the diagonal map. This symmetric monoidal structure gives rise to a self-duality datum on ^X. Let X be a space and denote by p X→ the natural projection. The functors ^X⊗^X^X and ^X^X× X≅^X⊗^X establish a self-duality of ^X∈. In particular, X admits duality in the sense of <ref>. By <cit.> (see also <cit.>) every object X∈() is self-dual with the evaluation map X× X X and the coevaluation map X X× X. Since _♯^* is symmetric monoidal, (X) becomes self-dual with the asserted evaluation and coevaluation functors. 
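To unwind this duality datum in a concrete case (a standard illustration, recorded only as a sketch and not used later; the notation (−)_{hG} and (−)^{hG} for homotopy orbits and homotopy fixed points is not fixed in the text): the evaluation p_♯Δ^* sends a pair of parametrized spectra to the colimit over X of their fiberwise smash product. For a discrete group G and X = BG, the ∞-category ^{BG} consists of spectra with a G-action, p_♯ and p_* are the homotopy orbit and homotopy fixed point functors, and the duality data becomes
ev(E ⊠ F) ≅ (E ⊗ F)_{hG},  coev(𝕊) ≅ Σ^∞_+ G,
where E ⊗ F carries the diagonal G-action and Σ^∞_+ G is regarded as an object of ^{BG × BG} through the left and right translation actions, in analogy with the diagonal bimodule in the module category illustration earlier.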
Using the self-duality data of ^X from <ref> we see that a relative duality for ∈^X consists of an object ^∨∈^X together with maps ⊠^∨→Δ_!_X and → p_♯(⊗^∨) satisfying the duality axioms. This data coincides with the notion of Costenoble–Waner duality of parametrized spectra from <cit.>. Using the duality data we will now compute the dimension of ^X. Consider the composite functor which sends X to (^X) and f X→ Y to (f_♯)(^X)→(^Y). There is a natural equivalence (^-)≅Σ^∞_+ L(-) of functors →. For instance, for a map f X→ Y the transfer (f_♯) fits into a commutative diagram (^X) ^∼[r] ^(f_♯)[d] Σ^∞_+ LX ^Lf[d] (^Y) ^∼[r] Σ^∞_+ LY We refer to <cit.> for the natural equivalence (X)≅ LX for X∈(). Post-composing with ^*_♯ and using the fact that is symmetric monoidal we get the result. Finally, let us describe the case when _X∈^X is compact. A space X∈ is finitely dominated if it is a retract of a finite space. Equivalently, X is finitely dominated if, and only if, it is a compact object in (see <cit.>). Concretely, suppose M is a CW complex. Then (M)∈ is finitely dominated in the above sense if there is a finite CW complex N together with maps i M→ N and r N→ M such that r∘ i is homotopic to the identity. Suppose X is finitely dominated and let p X→. Then the functor p_*^X→ preserves colimits. In other words, _X∈^X is compact. The functor p_*^X→ preserves finite limits, so by stability (see <cit.>) it preserves finite colimits. Therefore, we only have to show that it preserves filtered colimits. Let F I→^X be a functor, where I is filtered. The map _i∈ I (p_* F(i))⟶ p_*(_i∈ I F(i)) is given by _i∈ Ilim_x∈ X F(i)_x⟶lim_x∈ X_i∈ I F(i)_x. In filtered colimits preserve finite limits. By <cit.> it follows that filtered colimits preserve limits indexed by finitely dominated spaces. Let X be a finitely dominated space. Then ^X is smooth. The functor =Δ_♯∘ p^* admits a right adjoint p_*∘Δ^*. The functor Δ^* is always colimit-preserving and p_* is colimit-preserving by <ref> since X is finitely dominated. If X is finitely dominated, by <ref> _X∈^X is compact and, therefore, it admits a relative dual ζ_X∈^X. By <cit.>, if M is a finite CW complex and X=(M), the parametrized spectrum ζ_X is a suspension of the Spivak normal fibration of M. From <ref> and the explicit formula for the evaluation =p_♯Δ^* of the self-duality of ^X we get the following. Suppose X is a finitely dominated space. Then there is a natural isomorphism p_*(-)≅ p_♯Δ^*((-)⊠ζ_X) of functors ^X→. §.§ The case of posets In this section we consider functor categories (P, ), where P is a poset. We begin with the computation of additive invariants of (P, ). Let P be a finite poset and an idempotent-complete small stable ∞-category. Denote by P^δ the same set equipped with the trivial poset structure and π P^δ→ P the functor given by the identity map on objects. Let F→ be an additive invariant. Then the functors π_♯, π_*(P^δ, )→(P, ) exist and induce an equivalence π_♯, π_* F((P^δ,))⟶ F((P, )). The existence of the Kan extension functors π_♯,π_* follows since the relevant limits and colimits in (<ref>) are finite and admits finite limits and colimits. Let us now prove the claim that π_♯ F((P^δ,))→ F((P, )) is an equivalence. We will use induction on the cardinality of P by mimicking the proof of <cit.>. The claim is obvious for P empty. For an arbitrary finite poset P choose a minimum element m∈ P and let i{m}→ P be the inclusion of this element and j P∖ m→ P the inclusion of the complement. 
Consider the sequences @C=2cm(P∖ m, ) @/^1.0pc/^j_♯[r] (P, ) @/^1.0pc/_j^*[l] @/^1.0pc/^i^*[r] @/^1.0pc/_i_*[l] where the top functors are left adjoint to the bottom functors. From the formulas (<ref>) we see that the functors j_♯ and i_* are given by extension by zero, so this is a split exact sequence (see also <cit.>). Therefore, by additivity we get an exact sequence of spectra F((P∖ m, )) F((P, )) F(). Consider a commutative diagram (P∖ m, ) ^j_♯[r] (P, ) ^i^*[r] (P^δ∖ m, ) ^j_♯[r] ^π_♯[u] (P^δ, ) ^i^*[r] ^π_♯[u] ^𝕀[u] Here the bottom sequence is defined analogously to the top sequence replacing the posets by the same sets with the trivial partial order. The commutativity of the square on the left is obvious and the commutativity of the square on the right follows since m is a minimum. Therefore, we obtain a commutative diagram of spectra F((P∖ m, ))^j_♯[r] F((P, )) ^-i^*[r] F() F((P^δ∖ m, ))^j_♯[r] ^π_♯[u] F((P^δ, )) ^-i^*[r] ^π_♯[u] F() ^𝕀[u] where both rows are exact. The claim that the middle vertical map is an equivalence then follows by induction. The fact that π_* induces an equivalence is proven by an analogous induction by splitting off a maximum. Consider the poset P={0≤ 1}=Δ^1. Then (P^δ, )=× and the functors π_♯,π_*×⟶(Δ^1, ) are given by π_♯(x, y) = (x→ x⊕ y), π_*(x, y) = (x⊕ y→ y). The posets we encounter will be the face posets of triangulations. Namely, let M be a compact polyhedron and choose a triangulation of M with face poset T. For τ∈ T let (τ)⊂ M denote its open star, i.e. the union of the interiors of simplices containing τ. Let T be the face poset of a triangulation of a closed PL manifold M. Then: * (p^)^♯() = ζ_T^∈^T^ is locally constant and invertible, i.e. for every τ∈ T the spectrum ζ_T^(τ) is invertible and for every τ⊆σ the induced map ζ_T^(σ)→ζ_T^(τ) is an isomorphism. * T^ admits duality in the sense of <ref>. Let us first prove the first statement. Since T is finite, _T∈^T is compact. In particular, (p^)^♯ is well-defined. Let () ∈^T^⊗^T be the coevaluation from <ref>. Then by <ref> we get the formula ζ_T^≅ (𝕀⊠ p_*) (), i.e. ζ_T^(τ)≅lim_σ⊇τ. This object is Spanier–Whitehead dual to ξ(τ)≅_σ⊇τ. So, it is enough to show that ξ∈^T is locally constant and invertible. This object fits into a cofiber sequence _σ⊉τ⟶_σ⟶ξ(τ). The first colimit may be identified with Σ^∞_+ (M∖(τ)) and the second colimit with Σ^∞_+ M. Therefore, ξ(τ)≅Σ^∞_+ M / Σ^∞_+(M∖(τ)). Consider τ_1⊆τ_2 and a point x in the interior of τ_1. Then Σ^∞_+ M / Σ^∞_+(M∖(τ_i))⟶Σ^∞_+ M / Σ^∞_+(M∖{x}) is an isomorphism since M∖(τ_i)→ M∖{x} is a deformation retract. This shows that ξ(τ_1)→ξ(τ_2) is an isomorphism. Since M is a PL manifold, the cofiber M/(M∖{x}) is homotopy equivalent to S^n for some n which finishes the proof of the first statement. Next, let us prove the second statement. It is shown in <cit.> that ^T is self-dual with the evaluation pairing _T = p_*Δ^*. Using the duality between ^T and ^T^ from <ref> we transfer the self-duality of ^T to a self-duality of ^T^ with the coevaluation pairing given by _T^() = Δ_♯ζ_T^ by using that the coevaluation is dual to evaluation and the computation of dual functors from <ref>. Since ζ_T^ is locally constant and invertible, we have a natural isomorphism Δ_♯_T^≅ (ζ_T^^-1⊠_T^) ⊗Δ_♯ζ_T^ and hence _T^()=Δ_♯_T^ is also the coevaluation of a self-duality of ^T^. § ASSEMBLY MAPS AND THE EULER CHARACTERISTIC In this section we introduce assembly maps, the Euler characteristic and its lifts along the assembly map. 
§.§ Assembly map For the following definition see e.g. <cit.>. Let → be a functor from spaces to spectra. The universal excisive approximation is a colimit-preserving functor ^%→ together with a natural transformation ^%(-)→(-) which is an equivalence on . We call this morphism the assembly map. In particular, we can always write it as αΣ^∞_+X ⊗() ⟶(X). We will mainly be concerned with the following examples: * For X∈ a topological space let (X)∈ be the K-theory spectrum of the stable ∞-category (^X)^ω, the A-theory of the space X. Denote by A(X) = Ω^∞(X) the underlying space. Then we have the assembly map αΣ^∞_+X⊗()⟶(X). * Consider the functor X↦(^X)≅Σ^∞_+ LX (see <ref> for the equivalence). The assembly map is given by the inclusion of constant loops αΣ^∞_+ X⟶Σ^∞_+ LX. * The Dennis trace for A-theory defines a morphism of spectra (X)⟶Σ^∞_+ LX. By the universal property of the assembly map, we have a commutative diagram Σ^∞_+X⊗() ^α[r] ^[d] (X) ^[d] Σ^∞_+ X ^α[r] Σ^∞_+ LX. We follow the definition of the A-theory space given in <cit.>, which coincides with the space A(X) defined in <cit.> in terms of the Waldhausen category of (homotopy) finitely dominated retractive spaces over X. The original definition due to Waldhausen <cit.> uses the category of (homotopy) finite retractive spaces over X. The difference only affects π_0. Given a compact parametrized spectrum ∈^X, we will be interested in lifts of its class []∈ A(X) under the assembly map, so let us introduce the relevant space. For χ∈ A(X) denote by (χ,α) = π_0 _χ(αΩ^∞(Σ^∞_+X⊗())→ A(X)), where the homotopy fiber is taken at χ∈ A(X). We will use the same notation for the other assembly maps. Let us now introduce the Euler characteristic. Suppose X is a finitely dominated space. Then the constant parametrized spectrum _X∈^X is compact and hence it defines a class [_X]∈ A(X) in A-theory; applying the Dennis trace we obtain an element [_X]∈Ω^∞Σ^∞_+ LX. Let X be a finitely dominated space. The A-theoretic Euler characteristic is χ_A(X) = [_X]∈ A(X) and the Euler characteristic is χ_(X) = [_X]∈Ω^∞Σ^∞_+ LX. Denote by p X→ the natural projection. Then under the composite π_0(A(X))⟶π_0(A()) = the image of χ_A(X) is the usual Euler characteristic of X. So, χ_A(X) may be considered as a local version of the usual Euler characteristic. §.§ Lifts of the Euler characteristic In this section we recall connections between the assembly map for A-theory and (simple) homotopy theory. Recall the following space introduced in <cit.>. Let X be a space. The PL Whitehead spectrum ^PL(X) is the cofiber of the assembly map αΣ^∞_+X⊗()→(X). The PL Whitehead space is the underlying space ^PL(X) = Ω^∞^PL(X). Suppose X is connected and based. Then π_0(^PL(X))≅K̃_0([π_1(X)]) is the reduced K-theory group and π_1(^PL(X))≅(π_1(X)) is the Whitehead group. Let X be a finitely dominated space. Wall's finiteness obstruction w(X)∈π_0(^PL(X)) is the image of χ_A(X)∈ A(X) under A(X)→^PL(X). The existence of a fiber sequence Ω^∞(Σ^∞_+X⊗()) A(X)⟶^PL(X) implies the following: * (χ_A(X), α) is nonempty if, and only if, Wall's finiteness obstruction w(X) vanishes. * If (χ_A(X), α) is nonempty, it is a torsor over the Whitehead group π_1(^PL(X)). A fundamental result due to Wall <cit.> asserts that w(X) = 0 if, and only if, there is a finite CW complex M (equivalently, a finite polyhedron) such that X≅(M). We will need an explicit construction of the trivialization of Wall's finiteness obstruction. 
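Before giving the construction, let us record the simplest case as an illustration (these are standard facts from algebraic K-theory and are not needed for the construction itself): if X is connected and simply connected, then the relevant group ring is ℤ[π_1(X)] = ℤ, so
K̃_0(ℤ) = 0 and Wh(1) = 0,
because ℤ is a principal ideal domain and K_1(ℤ) = {±1} consists of units of ℤ. Hence Wall's finiteness obstruction vanishes automatically, every finitely dominated simply connected space is homotopy equivalent to a finite complex, the set of lifts of χ_A(X) along the assembly map is a singleton, and every homotopy equivalence between simply connected finite polyhedra is simple.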
Let M be a finite polyhedron with X=(M) and choose a PL triangulation of M with face poset T. The collection of open stars assembles into a functor T^→(M) to the category of open subsets of M. The inclusion (τ)⊂ M of open stars induces an isomorphism _τ∈ T^((τ))≅(M). For x∈ M denote by T^_x⊂ T^ the subcategory of simplices τ such that x∈(τ). If x lies in the interior of σ∈ T, then T^_x≅ (T_≤σ)^. The category T_≤σ is the face poset of the simplex σ and hence its nerve is weakly contractible. The claim then follows from <cit.>. The colimit _τ∈ T^∈ is the localization of the category T^ obtained by inverting all morphisms. So, we obtain a functor of ∞-categories t T^⟶ X. Therefore, restriction along t and the left Kan extension define an adjunction @C=2cm^T^@/^1.0pc/^t_♯[r] ^X @/^1.0pc/_t^*[l] with t^* fully faithful. In particular, t_♯ t^*_X≅_X. Since T is finite, _T=t^*_X∈^T^ is compact, and so we obtain a lift of [_X]∈ A(X) along t_♯(^T^, ω)⟶(X). Next, let T^δ be the set T equipped with the trivial poset structure and π T^δ→ T^ the functor given by the identity on objects. By <ref> applied to =^ω (note that (T^, ^ω)=(T^, )^ω by <ref>) the map π_♯(T^δ)⟶(^T^, ω) is an equivalence and hence we obtain a lift of [_X]∈ A(X) along (t∘π)_♯(T^δ)⟶(X). Now consider a commutative diagram of assembly maps (T^δ) [r] (X) Σ^∞_+T^δ⊗() ^t∘π[r] ^α[u] Σ^∞_+X⊗() ^α[u] Since T^δ is a finite set, the assembly map Σ^∞_+T^δ⊗ A()→ A(T^δ) is an equivalence and hence in this way we obtain a lift of [_X]∈ A(X) along the assembly map αΣ^∞_+ X⊗()⟶(X). Let M be a finite polyhedron with underlying homotopy type X=(M). The lift of [_X]∈ A(X) defined above is the Whitehead lift λ_(M)∈(χ_A(X), α). The construction of the assembly map is equivalent to <cit.>. More precisely, in loc. cit. it is defined as the dual of ^X →(T, ) (thought of as inclusion of locally constant into T-constructible sheaves) and Verdier duality is used to identify (T, )^∨≅(T, ). In the above description we instead take (T, )^∨≅(T^, ) and use <ref> to identify the dual functor. Let f M_1→ M_2 be a homotopy equivalence of finite polyhedra, inducing an isomorphism f X_1→ X_2 of their homotopy types. The commutative diagram Σ^∞_+X_1⊗() ^-α[r] ^f[d] (X_1) ^f_♯[d] Σ^∞_+X_2⊗() ^-α[r] (X_2) as well as the natural equivalence f_♯_X_1≅_X_2 gives a map f(χ_A(X_1), α)⟶(χ_A(X_2), α). In particular, we obtain an element τ(f) = f(λ_(M_1)) - λ_(M_2)∈π_1(^PL(X_2)). The following statement follows by comparing the Whitehead torsion of a bounded acyclic complex of free modules to the path in the K-theory space obtained using the additivity theorem (see <cit.>). Let f M_1→ M_2 be a homotopy equivalence of finite polyhedra, inducing an isomorphism f X_1→ X_2 of their underlying homotopy types. The element τ(f)∈π_1(^PL(X_2)) constructed above coincides with the Whitehead torsion of f. Homotopy equivalences f M_1→ M_2 with vanishing Whitehead torsion τ(f)∈π_1(^PL(X_2)) are known as simple homotopy equivalences. Therefore, we see that the lift λ_(M)∈(χ_A(X), α) is simple homotopy invariant, but is not homotopy invariant. If f M_1→ M_2 is a homeomorphism of finite polyhedra, it is a simple homotopy equivalence <cit.>. § PONTRYAGIN–THOM LIFT The goal of this section is to construct a lift of the Euler characteristic along the assembly map using intersection theory and compare it to the Whitehead lift from <ref>. More precisely, we construct a lift using a version of Pontryagin–Thom collapse, which we define using the configuration space of two points. 
Next, we show that this lift coincides with the trace of the Whitehead lift defined in the previous section. §.§ Lift diagrams To construct a lift of the Euler characteristic geometrically, it will be convenient to present the data of a lift in terms of a commutative diagram, which we call a lift diagram. In fact, with a view towards the proof of <ref>, we will define the notion of lifts for a nice class of ∞-categories, which, in particular, contains finitely dominated spaces (viewed as ∞-groupoids). Let I be a small ∞-category. Consider the commutative diagram @C=1cm^I ^Δ_♯[r] ^Δ_♯[d] ^I⊗^I ^𝕀⊠Δ_♯[d] ^I⊗^I ^-Δ_♯⊠𝕀[r] ^I⊗^I⊗^I Passing to right adjoints of the horizontal functors, we obtain a natural transformation Δ_♯∘Δ^*⇒ (Δ^*⊠𝕀)∘ (𝕀⊠Δ_♯). Pre- and post-composing it with _I and p_♯, we obtain the natural transformation 𝕀⇒(_I=(p_♯⊠𝕀)∘(Δ^*⊠𝕀) ∘ (𝕀⊠Δ_♯(_I))). Taking its trace we obtain a morphism β_I(^I, ω)≅(^I)⟶(_I)≅ p_♯Δ^*Δ_♯_I. The morphism β_I may be identified with ∫^i∈ IΣ^∞_+_I(i, i)⟶_i,j∈ IΣ^∞_+(_I(i, j)×_I(i, j)) induced by the map _I(i, i)→_I(i, i)×_I(i, i) sending f ↦ f ×𝕀_i. Moreover, (_I) can be identified with the suspension spectrum of the geometric realization of the category of parallel arrows (Δ^1 ⊔_∂Δ^1Δ^1, I). The unit map _I→Δ^*Δ_♯_I induces a morphism α_I p_♯_I⟶ p_♯Δ^*Δ_♯_I. If we further assume that _I∈^I is compact, then we may define the element χ_(I) = [_I]∈Ω^∞(^I). Let I be a small ∞-category with _I∈^I compact. A lift of χ_(I) is an element e(I)∈Ω^∞ p_♯_I together with a homotopy α_I(e(I))∼β_I([_I]). We denote by (β_I(χ_(I)), α_I) = π_0_β_I(χ_(I))(α_IΩ^∞ p_♯_I→(_I)) the set of lifts. If X is an ∞-groupoid, the morphism 𝕀⇒_X is an equivalence and hence the morphism β_X(^X)→ p_♯Δ^*Δ_♯_X is an equivalence. The composite Σ^∞_+ X(_X)(^X)≅Σ^∞_+ LX is given by the inclusion of constant loops, and hence it coincides with the assembly map. Therefore, if X is finitely dominated (so that _X∈^X is compact), the set of lifts (β_X(χ_(X)), α_X) is naturally isomorphic to the set of lifts (χ_(X), α) of the Euler characteristic along the assembly map. If T is a poset, the unit 𝕀→Δ^*Δ_♯ is an equivalence, which follows from the fact that _T(i, j)→_T(i, j)×_T(i, j) is an equivalence. In particular, α_TΣ^∞_+ |T|≅ p_♯_T⟶(_T) is an equivalence and (β_I(χ_(I)), α_I) is a singleton (assuming it is defined, i.e. _I is compact). Let I be a small ∞-category which admits duality (see <ref>). A lift diagram for I is given by the following data: * An object ζ_I∈^I. * A morphism ϵ_I ⊠ζ_I→Δ_♯_I which exhibits ζ_I as the relative dual of _I. * An Euler class e ζ_I →_I. * A homotopy commuting diagram Δ_♯ζ_I [r] [d, "Δ_♯ e"] ζ_I ⊠_I [dl, "ϵ"] Δ_♯_I By <ref> any ∞-groupoid X admits duality in the sense of <ref>. Therefore, the previous definition applies to ∞-groupoids. Our goal now is to relate lift diagrams to lifts of the Euler characteristic from <ref>. For this, we begin with a technical lemma. The following diagram commutes: _^I[r, "(<ref>)"] _^I∘ (_I ⊗𝕀) p_♯Δ_♯^∨[r, "(<ref>)"] [u, "(<ref>)", "∼" right] p_♯Δ^*, [u, "∼" right] where the right vertical map is the identification __I∘ (_I ⊗𝕀) (≅ p_♯Δ^*) ∘ (𝕀⊠__I) ∘ (𝕀⊠__I⊠𝕀) ≅ p_♯Δ^* Applying another coevaluation we instead show that the diagram () [r] (⊗𝕀)(()) Δ_♯_I [r] [u, "∼"] (Δ^*)^∨_I [u, "∼"] commutes. 
The top arrow is obtained from [column sep=2cm] ^I [r, "Δ_♯"] [d, "(Δ_♯⊗𝕀) ∘Δ_♯"] ^I ⊗^I [d, "Δ_♯⊗Δ_♯"] ^I ⊗^I ⊗^I [r, "𝕀⊗Δ_♯⊗𝕀"'] ^I ⊗^I ⊗^I ⊗^I, by passing to horizontal right adjoints and pre-/post-composing with _I ⊗_I and 𝕀⊗ p_♯⊗𝕀, respectively. Note that the diagram is expressing the fact that (f_♯⊗𝕀) ∘ (𝕀⊗ f_♯)Δ_♯≅Δ_♯ f_♯ for f = Δ (and up to a permutation of the factors). But this is exactly how the map f_♯→ (f^*)^∨ is constructed in <ref>. Let I be a small ∞-category which admits duality and such that _I∈^I is compact. Then the set (β_I(χ_(I)), α_I) is naturally isomorphic to the set of lift diagrams up to homotopy. Since _I∈^I is compact, by <ref> it admits a relative dual. Therefore, by the usual uniqueness of duality data the space of pairs (ζ_I, ϵ) is contractible. Given that, we see that the space of lift diagrams is identified with the fiber of Δ_♯_^I(ζ_I, _I)⟶_^I× I(Δ_♯ζ_I, Δ_♯_I) over the point given by the composition Δ_♯ζ_I →ζ_I ⊠_I Δ_♯_I ∈_^I × I(Δ_♯ζ_I, Δ_♯_I). So, the claim follows once we identify the first morphism with α_I and the given point with β_I([_I]). The first morphism is equivalent to _^I(ζ_I, _I) →_^I( ζ_I, Δ^* Δ_♯_I) where we postcompose with the unit 𝕀→Δ^*Δ_♯, thus coinciding with the map α_I by using p_♯≅_^I(ζ_I, -). The class β_I([_I]) is obtained by taking the trace of (, 𝕀_) _I→ (^I, 𝕀_^I) → (^I, _I) in . Hence by functoriality of traces and <cit.> it is computed by traversing the following diagram [r,equal] [d, equal] [r, equal] [d, "_I ⊠ζ_I"] [dl, Rightarrow, shorten = 2.5ex] [r,equal] [d, "_I ⊠ζ_I"'] [d,equal] [dl, Rightarrow, shorten = 2.5ex] [r, "Δ_♯_I"] [d, equal] ^I ⊗^I [r, equal] [d, equal] ^I ⊗^I [r, "_^I"] [d, equal] [dl, Rightarrow, shorten = 2.5ex] [d, equal] [r, "Δ_♯_I"] ^I ⊗^I [r, "⊗𝕀"] [rr, bend right=30, "p_♯Δ^*"', ""name=U] ^I ⊗^I [r, "_^I"] , where only the non-invertible two-cells are indicated. <Ref> shows that ^I ⊗^I [r, equal] [d, equal] ^I ⊗^I [d, equal] [dl, Rightarrow, shorten = 2.5ex] ^I ⊗^I [r, "⊗𝕀"] [rr, bend right=30, "p_♯Δ^*"', ""name=U] ^I ⊗^I [r, "_^I"] is equivalent to ^I ⊗^I [r, "Δ_♯^∨" pos=0.45, ""'name=S, bend left=30] [r, bend right=30, "Δ^*"', ""name=T] [from=S, to=T, "(<ref>)", Rightarrow] ^I [r, "p_♯"] , and <ref> shows that [r, equal] [d, "_I ⊠ζ_I"'] [d, equal] [dl, Rightarrow, shorten = 2.5ex] ^I ⊗^I [r, "_^I"] = [r, equal] [d, "_I ⊠ζ_I"'] [r, equal] [d, "ζ_I"] [dl, Rightarrow, shorten = 2.5ex] [d, equal] [dl, Rightarrow, shorten = 2.5ex] ^I ⊗^I [r, "Δ_♯^∨"] ^I [r, "p_♯"] . With that we simplify diagram (<ref>) and obtain [r, equal] [d, equal] [r, equal] [d, "_I ⊠ζ_I"] [dl, Rightarrow, shorten = 2.5ex] [r, equal] [d, "ζ_I"] [dl, Rightarrow, shorten = 2.5ex] [d, equal] [dl, Rightarrow, shorten = 2.5ex] [r, "Δ_♯_I"] ^I ⊗^I [r, "Δ_♯^∨", ""'name=S, bend left=0] [r, bend right=50, "Δ^*"', ""name=T] ^I [r, "p_♯"] [from=S, to=T, Rightarrow] . Finally, using the compatibility of the unit _I with Δ_♯^∨Δ^* this simplifies further to [r, equal] [d, equal] [r, equal] [d, "_I ⊠ζ_I"] [dl, Rightarrow, shorten = 2.5ex] [r, equal] [d, "ζ_I"] [d, equal] [dl, Rightarrow, shorten = 2.5ex] [r, "Δ_♯_I"'] ^I ⊗^I [r, "Δ^*"'] ^I [r, "p_♯"'] , which shows the claim. §.§ Poincare duality for manifolds In this section we give a version of (twisted) Poincaré duality for closed manifolds. We follow the notations of <cit.> apart from denoting the left adjoint to f^* by f_♯: * For a topological space M we consider the category _M of ex-spaces (alias, retractive spaces) over M as in <cit.>, i.e. 
topological spaces N together with continuous maps r N→ M and i M→ N satisfying r∘ i=𝕀_M. It admits a model structure with underlying ∞-category (_/(M))_∗ as well as a symmetric monoidal structure ∧_M. * For a continuous map f X→ Y of topological spaces we consider (X, f)_+ = X⊔ Y as a retractive space over Y. * For a closed subset K⊂ L over M we denote by _M(K, L)∈_M the unreduced relative mapping cone, see <cit.>. For morphisms K→ L→ M in we denote _M(K, L)=L∐_K M∈(_/M)_∗ the corresponding (homotopy) pushout. * For a vector bundle V→ M we denote by S^V∈_M the corresponding retractive space obtained by a fiberwise one-point compactification and by ^V=Σ^∞_X(S^V)∈^(M) its fiberwise stabilization. Let M be a closed manifold and X=(M) its underlying homotopy type. Consider an embedding M⊂ V into a Euclidean space. Let ν be the normal bundle of the embedding and η^V⟶ p_♯^ν the corresponding Pontryagin–Thom collapse map. We denote by the same letter η⟶ p_♯^- M the map obtained by tensoring it with ^- V. Consider the Riemannian metric on M induced by the embedding M⊂ V and the Pontryagin–Thom collapse along the diagonal _Δ (M× M, 𝕀)_+⟶Δ_♯ S^ M in (_M× M) induced by the diagram (M × M, 𝕀)_+ ⟶ (M^I, π)_+ ∧_M × M (S^ M× M) Δ_♯ S^ M, where the maps are defined as follows. Choose a tubular neighborhood U⊂ M× M of the diagonal which can be identified with the tangent bundle by sending a pair (n, m) to the tangent vector (which we will denote by m-n) at m of the unique geodesic ω(n, m) connecting n and m. Given a point (n, m)∈ M× M we send it to the pair (ω(n, m), m-n). Here we think of S^ M as the quotient of the ϵ-disk bundle inside M by its boundary, where ϵ is smaller than the injectivity radius, so that the formula is well-defined. The second map is given by the inclusion of constant paths and is a weak equivalence. Taking fiberwise suspension spectra we obtain a map _Δ_X× X⟶Δ_♯^ M. We denote by the same letter _Δ_X⊠^- M⟶Δ_♯_X the tensor product of the previous map with _M⊠^- M. The following is shown in <cit.>. From the description above we see that applying the flip map exchanging the two factors of X, we get the Pontryagin–Thom collapse map post-composed with the action of (-1) along the fibers of M. Consider the self-duality data of ^X from <ref>. Then the morphisms _Δ_X⊠^- M⟶Δ_♯_X=(), η⟶ p_♯^- M=(^- M, _X) establish a relative duality between _X and ^- M. In particular, ζ_X ≅^- M. It will be convenient to slightly modify the self-duality data of ^M which, essentially, hides the coevaluation of the Costenoble–Waner duality. The ∞-category ^M∈ has a self-duality data with () = Δ_♯^ M and ^Ł() = Δ_♯_M. With respect to this self-duality data the object _M∈^M is relatively self-dual with the coevaluation ^Ł()=Δ_♯_M⟶_M⊠_M adjoint to the identity map _M→_M and the evaluation _Δ_M⊠_M⟶Δ_♯^ M=(). The functor ^M→^M given by ↦⊗^- M is an equivalence and it transforms the self-duality data from <ref> to the coevaluation asserted in this statement. The new evaluation is then (-) = p_♯(^- M⊗Δ^*(-)). Let us denote by u𝕀→Δ^*Δ_♯ and cΔ_♯Δ^*→𝕀 the unit and counit of the adjunction between Δ_♯ and Δ^*. Then the new evaluation admits a left adjoint ^Ł() = Δ_♯_M whose unit is given by the composite p_♯(^- M) p_♯(^- M⊗Δ^*Δ_♯_M) = (^Ł()). Let us apply to the coevaluation ^Ł()=Δ_♯_M_M⊠_M. We obtain the morphism p_♯^- M p_♯(^- M⊗Δ^*Δ_♯_M) p_♯^- M. Using the adjunction axioms the last two morphisms compose to the identity, so the coevaluation coincides with the one from <ref> which proves the claim. 
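For a concrete instance of this duality (a routine check, recorded as a sketch): if M is stably parallelizable, for example a sphere S^d or a compact Lie group, then a choice of stable trivialization TM ⊕ ℝ^k ≅ ℝ^{d+k} induces an equivalence
ζ_X ≅ ^{-TM} ≅ Σ^{-d}_X
of parametrized spectra over X, so the Costenoble–Waner dual of the unit is simply a desuspension of the unit. For a general closed oriented M such a trivialization need not exist over the sphere spectrum, but after passing to local systems of k-modules for a ring spectrum k over which M is oriented one obtains the weaker identification ζ_M ≅ k[-d] used in the string topology sections below.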
The definition of the Pontryagin–Thom collapse along the diagonal (<ref>) does no longer depend on the embedding M⊂ V, but a priori does depend on the Riemannian metric on M. It will be useful to present another equivalent presentation for the Pontryagin–Thom collapse which does not use a Riemannian metric. Let M be a manifold. The Fulton–MacPherson compactification of the configuration space of two points _2(M) is the real oriented blowup of M× M along the diagonal. Let M = ( M∖ M) / _+ be the unit tangent bundle. By construction _2(M) is a manifold with boundary M and we have a commutative diagram M [r] [d] _2(M) [d] M [r] M× M. In particular, we obtain a morphism Δ_♯_M(M, M)⟶_M× M(M× M, _2(M)) of unreduced relative mapping cones. Suppose X ^f[r] ^π[d] Y ^π̃[d] X' ^f'[r] Y' is a coCartesian diagram in . Then Y'∐_X X'⟶ Y'∐_Y Y' is an equivalence. In particular, f'_♯_X'(X', X)⟶_Y'(Y', Y) is an equivalence in (_/Y')_∗. The first statement follows since the right-hand side is the iterated pushout of X ^f[r] ^π[d] Y ^π̃[r] Y' X', and hence the pushout of the outer square. The second statement follows from the definitions C_X(A,B) := X ∐_B A and f'_♯ A := Y' ∐_X' A. The map (<ref>) is a weak equivalence. The morphism M→_2(M) is an inclusion of the boundary; in particular, it is a q-cofibration. The diagram M [r] [d] _2(M) [d] M [r] M × M, is a pushout. Since the q-model structure on is left proper, it is a homotopy pushout. The claim follows from <ref>. We can thus define another version of the Pontryagin–Thom collapse via the composite (M × M)_+ →_M× M(M × M, _2(M)) Δ_♯_M(M, M)≅Δ_♯ S^ M. The induced maps (<ref>) and (<ref>) are homotopic. It suffices to show that the diagrams (<ref>) and (<ref>) fit into a commuting square. For that, note first that the first map in (<ref>), when restricted to _2(M) admits a canonical null-homotopy. Recall that the map sends a pair of nearby points (x,y) to the pair consisting of the connecting geodesic ω(x,y) and the connecting tangent vector (based at x). The assignment of the corresponding unit tangent vector extends continuously to _2(M). The required homotopy moves the point in S^ M along that unit tangent vector to the basepoint at ∞. We have thus constructed a commutative diagram (M × M)_+ [r] [dr] (M^I,π)_+ ∧_M × M (S^ M× M) [l] Δ_♯ S^ M _M × M(M × M, _2(M)) [u] [l] [u] Δ_♯_M(M, M), where we noticed that the above defined homotopy, when restricted to the diagonal, merely identifies _M(M, M) with S^ M. §.§ Pontryagin–Thom lift Let M be a closed manifold and X its underlying homotopy type. In this section we construct a lift of the Euler characteristic χ_(X)∈Ω^∞Σ^∞_+ LX along the assembly, i.e. along the inclusion of constant loops αΣ^∞_+ X⟶Σ^∞_+ LX. Consider the map 0_M (M, 𝕀)_+→ S^ M of ex-spaces given by the zero section of the tangent bundle. Its suspension gives rise to a morphism e(M)_X⟶^ M which we will call the Euler class. Note that _^X(_X, ^ M)≅Ω^∞Σ^∞_+ X and we may equivalently consider e(M)∈Ω^∞Σ^∞_+ X. Consider the Fulton–MacPherson compactification _2(M) of the configuration space of two points on M. It fits into a pushout diagram M[r] [d] _2(M) [d] M ^Δ[r] M× M. The unreduced relative mapping cone of the left vertical map gives the morphism 0_M (M, 𝕀)_+→ S^ M on ex-spaces. Taking relative suspensions, we obtain a commutative diagram Δ_♯_X [r] _Δ_♯ e(M)[d] _X× X[dl] Δ_♯^ M of parameterized spectra over X× X, where the diagonal map _X× X→Δ_♯^ M is the Pontryagin–Thom collapse map along the diagonal by <ref>. 
Twisting by ^- M, we obtain a lift diagram Δ_♯^- M[r] _Δ_♯ e(M)[d] _X⊠^- M[dl] Δ_♯_X where we note that by <ref> the diagonal map is nondegenerate. Thus, by <ref> from this lift diagram we obtain a lift (the Pontryagin–Thom lift) λ_(M)∈(χ_(X), α) of [_X]∈Ω^∞Σ^∞_+ LX along the assembly map. The composite X→ LX→ X is the identity, where the first map is the inclusion of constant loops and the second map is the evaluation at the basepoint of S^1. Therefore, the image of [_X] under the projection LX→ X coincides with the Euler class e(M)∈Ω^∞Σ^∞_+ X. The Dennis trace map from K-theory to gives a commutative diagram of assembly maps Σ^∞_+ X⊗() ^-α[r] ^[d] (X) ^[d] Σ^∞_+ X ^α[r] Σ^∞_+ LX As (χ_A(X)) = χ_(X), the above commutative diagram induces a map (χ_A(X), α)⟶(χ_(X), α). Let M be a closed manifold with underlying homotopy type X=(M). Then the lifts (λ_(M)) and λ_(M) of the Euler characteristic χ_(X) are equivalent. §.§ Proof of theorem <ref> Fix a triangulation of M with face poset T. Denote by T^δ the same set with the trivial poset structure and let π T^δ→ T^ be the functor given by the identity on objects. Let t T^→ X be the localization functor from (<ref>) and M_τ=(τ) the open star of the simplex τ∈ T. Let us recall that the Whitehead lift is obtained by traversing the diagram [d, "[_T^]"] [dr, "[_X]"] (^T^δ, ω) [r, "∼" below, "π_♯" above] (^T^, ω) [r, "t_♯"] (^X, ω) Σ^∞_+ T^δ⊗() [u, "α", "∼"'] [rr] Σ^∞_+ X ⊗(), [u, "α"] where we used that the natural morphism t_♯_T^→_X is an equivalence. By naturality of the Dennis trace map it follows that the trace of the Whitehead lift is obtained from the analog diagram where is replaced with . Using the definitions from <ref> the diagram can be extended as follows: [d, "[_T]"] [dr, "[_X]"] (^T^δ, ω) [r, "∼" below, "π_♯" above] [d, "∼" right, "β_T^δ" left] (^T^, ω) [r, "t_♯"] [d, "β_T" left] (^X, ω) [d, "∼" right, "β_X" left] (_T^δ) [r] (_T^) [r] (_X) Σ^∞ T^δ[r] [u, "∼" right, "α_T^δ" left] Σ^∞ |T^| [u, "∼" right, "α_T" left] [r, "∼"] Σ^∞ X. [u, "α_X" left] In other words, the Whitehead lift (λ_(M)) is the unique lift lying in the image of t_♯(β_T^(χ_(T^)), α_T^)⟶(β_X(χ_(X)), α_X) ≅(χ_(X), α), where the set (β_T^(χ_(T^)), α_T^) consists of a single element as α_T is an isomorphism. To show that (λ_(M))=λ_(M), it suffices to show that the Pontryagin–Thom lift is compatible with the triangulation, i.e. it also lies in the image of t_♯. For this, recall from <ref> that T^ admits duality, i.e. Δ_♯ p^*→^T^⊗^T^ defines the coevaluation of a self-duality of ^T^. In particular, we may identify elements of (β_T^(χ_(T^)), α_T^) with lift diagrams using <ref>. By <ref> the relative dual of _T is locally constant. To construct a model of the relative dual, we will use the following observation. Consider the self-dualities of ^T^ and ^X from <ref>. Consider an object ζ_X∈^X together with a morphism ϵ_T^⊠ t^*ζ_X⟶Δ^T^_♯_T^. Then ϵ exhibits t^*ζ_X as the relative dual of _T^ if, and only if, (t_♯⊠ t_♯)(ϵ)_X⊠ζ_X⟶Δ^X_♯_X exhibits ζ_X as the relative dual of _X. The condition that ϵ exhibits t^*ζ_X as the relative dual of _T^ is that the induced morphism ϵ' t^*ζ_X⟶ (p_* t_♯⊠𝕀) Δ^T_♯_T^≅ (p_*⊠ t^*)Δ^X_♯_X is an isomorphism. Similarly, the condition that (t_♯⊠ t_♯)(ϵ) exhibits ζ_X as the relative dual of _X is that the induced morphism t_♯ϵ'ζ_X⟶ (p_*⊠𝕀) Δ^X_♯_X is an isomorphism. But the two conditions are equivalent since t^*^X→^T^ is fully faithful. Let us now construct a version of the Pontryagin–Thom lift compatible with the cover { M_τ}_τ∈ T^. 
Consider a morphism f E→ M× M in . Then {E|_M_τ× M_μ} defines an open cover E, where E|_Mτ× M_μ⊂ E denotes the strict inverse image of M_τ× M_μ⊂ M× M. Consider the functor (τ, μ↦Σ^∞_+ E|_M_τ× M_μ)∈^T^× T^. There is a natural equivalence (t_♯⊠ t_♯)((τ, μ)↦Σ^∞_+ E|_M_τ× M_μ)≅Σ^∞_+(M× M)E in ^M× M. Consider the functor T^× T^→(E) to the category of open subsets of E given by τ,μ↦ E|_M_τ× M_μ. As in <ref> we get an equivalence _τ,μ((E|_M_τ× M_μ)→(M_τ× M_μ))≅ ((E)→(M× M)) in . Denote by i^E_τμ(E|_M_τ× M_μ)→(E) and i^M_τμ(M_τ× M_μ)→(M× M) the natural inclusions. For any x,y∈ M we have to show that the natural morphism _τ,μ∈ T^((i^M_τμ)_♯ (f|_M_τ× M_μ)_♯(i^E_τμ)^*→ f_♯ is an isomorphism. By functoriality this reduces to showing that _τ,μ∈ T^ (i^E_τμ)_♯(i^E_τμ)^*→𝕀 is an isomorphism which is the content of <ref>. Let us apply the above paradigm to (<ref>). Namely, restricting that diagram to M_τ× M_μ we obtain (M_τ∩ M_μ) [r] [d] _2|_M_τ× M_μ[d] M_τ∩ M_μ[r] M_τ× M_μ which is a pushout diagram for every (τ,μ) ∈ T^× T^. Taking relative suspension spectra we obtain a commutative diagram Δ_♯_T^[r] _Δ_♯ e(T)[d] _T^× T^[dl] Δ_♯^ M_∙ in ^T^× T^, where ^ M_∙∈^T^ is the functor τ↦^ M_τ. By <ref> this spectrum is locally constant and invertible. Therefore, twisting by its inverse we obtain a commutative diagram Δ_♯^- M_∙[r] _Δ_♯ e(T)[d] _T^⊠^- M_∙[dl] Δ_♯_T^ By <ref> applying t_♯⊠ t_♯ we obtain the commutative diagram (<ref>). In particular, by <ref> the morphism _T^⊠^- M_∙→Δ_♯_T^ exhibits ^- M_∙ as the relative dual of _T^ in ^T^. Therefore, (<ref>) is a lift diagram, i.e. it defines an element of (β_T^(χ_(T^)), α_T^) which is sent to λ_(M)∈(β_X(χ_(X)), α_X) under t_♯. As (β_T^(χ_(T^)), α_T^) consists of a single element, this finishes the proof that (λ_(M))=λ_(M). § STRING TOPOLOGY In this section we recall the string topology operations on the loop space homology of a manifold and study their homotopy invariance. We first introduce a relative intersection product that depends on a lift of the Euler characteristic. We then define the string topology operations using this relative intersection product and show that they agree with the definition given in <cit.> for the Pontryagin–Thom lift. The study of the homotopy invariance is thus reduced to the same question about the relative intersection product, which in turn is reduced to the invariance of lifts of the Euler characteristic, so we can apply the results of the previous section. For concreteness, we replace the ∞-category of parametrized spectra ^X=(X, ) by the ∞-category (X) = (X, _) of local systems of chain complexes of abelian groups. We denote by _∙(-)=_∙(-;) the singular chain complex with integral coefficients and _̋∙(-)=_̋∙(-;) the integral homology. Given f X → Y we denote by _̋∙(Y,X) = _̋∙(Y,X;) the homology of the cone of _∙(X) →_∙(Y) even if f is not a inclusion of a subspace. Note that the content in this section works verbatim when is replaced by another _∞-ring k, such that M is oriented over k, i.e. one fixes an equivalence ζ_M → k[-d]. §.§ Relative intersection product In this preliminary section we define the notion of a relative intersection product on a space equipped with a lift of the Euler characteristic. Let us briefly introduce the situation we wish to generalize. Recall the intersection product _̋∙+d(M × M) →_̋∙(M) for an oriented d-manifold M defined either by using Poincaré duality or by geometrically intersecting transverse chains. 
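As a reminder of the classical picture (included only for orientation; the signs depend on conventions): under Poincaré duality the intersection product corresponds to the cup product. Explicitly, for a ∈ H_i(M) and b ∈ H_j(M) with Poincaré duals α ∈ H^{d-i}(M) and β ∈ H^{d-j}(M), the cross product a × b ∈ H_{i+j}(M × M) is sent, up to sign, to
a · b = (α ∪ β) ∩ [M] ∈ H_{i+j-d}(M).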
Similarly, given a local system over M × M, one can define the intersection product with twisted coefficients _̋∙+d(M × M; ) →_̋∙(M; Δ^* ). Our goal is to generalize it to a relative setting as follows. Suppose we are given a local system over M together with a map Δ_♯→ or, equivalently, a map →Δ^*. Given a lift of the Euler characteristic we are going to refine the intersection product to a map _̋∙+d(M × M; /Δ_♯) →_̋∙(M; Δ^* /). Let M be a closed oriented d-manifold and consider the ∞-category (M) = (M, _) with its self-duality data given by <ref>. Denote by _M∈(M) the constant local system which is compact by <ref>. By <ref> its relative dual is _M[-d] (using the orientation) with the evaluation ϵ_p_M× M[-d]→Δ_♯_M given by the Pontryagin–Thom collapse _Δ along the diagonal. Let p_E E→ M× M be a Hurewicz fibration and =(p_E)_♯_E∈(M× M) the corresponding local system over M× M. The intersection product is the composite [-d]⊗Δ_♯_M≅Δ_♯Δ^*. Taking homology, we obtain the map _̋∙+d(E)⟶_̋∙(E×_M× M M). Let σ M× M≅ M× M be the isomorphism exchanging the two factors. Let p_E E→ M× M be a Hurewicz fibration and consider the space E'=E equipped with the map E M× M M× M. Then the intersection products ^E, ^E'_̋∙+d(E)⟶_̋∙(E×_M× M M) with respect to E and E' satisfy ^E = (-1)^d ^E'. From the description of the Pontryagin–Thom collapse map _Δ_M× M→Δ_♯^ M given in <ref>, we see that σ^* _Δ = inv∘_Δ, where inv^ M→^ M is given by acting by (-1) on the tangent bundle. The map ϵ_p_M× M[-d]→Δ_♯_M is a composite of the Pontryagin–Thom collapse map with the trivialization ^ M≅_M[-d]. Under this isomorphism the map inv becomes the multiplication by (-1)^d. By <ref> we may identify ((M))≅_∙(LM). We refer to the class [_M]∈_∙(LM) as the Euler characteristic. To define the relative intersection product, suppose we are given its lift along the assembly map, i.e. along the inclusion of constant loops α_∙(M)→_∙(LM). By <ref> we get a commutative diagram Δ_♯_M[-d] [r] [d, "Δ_♯ e(M)"] _M× M[-d] [d, "ϵ_p"] Δ_♯_M [r, equal] Δ_♯_M, where e(M)∈^d(M) is the Euler class. Consider a map p_F F→ M which fits into a commutative square F ^a[r] ^p_F[d] E ^p_E[d] M [r] M× M Let =(p_F)_♯_F∈(M) be the corresponding local system over M together with the map aΔ_♯→ and, by adjunction, a̅→Δ^*. Consider a commutative diagram @C=1.5cmΔ_♯[-d] [r] ^Δ_♯(𝕀⊗ e(M))[d] Δ_♯Δ^*Δ_♯[-d] [r] ^Δ_♯(𝕀⊗ e(M))[d] Δ_♯[-d] ^a[r] ^ϵ_p[d] [-d] ^ϵ_p[d] Δ_♯[r] Δ_♯Δ^*Δ_♯@=[r] Δ_♯Δ^*Δ_♯^-Δ_♯Δ^*(a)[r] Δ_♯Δ^*, where the middle square is obtained by tensoring (<ref>) with Δ_♯ on the left. This gives rise to a commuting square @C=1.5cmΔ_♯[-d] ^a[r] ^Δ_♯(𝕀⊗ e(M))[d] [-d] ^ϵ_p[d] Δ_♯_Δ_♯(a̅)[r] Δ_♯Δ^*. On the level of singular chains the relative intersection square (<ref>) is _∙(F)[-d] ^e(M)[d] ^a[r] _∙(E)[-d] [d] _∙(F) ^-a̅[r] _∙(E×_M× M M) The above construction can also be understood as follows. Since Δ_♯ is an oplax symmetric monoidal functor, the category of diagrams Δ_♯→, equivalently →Δ^*, has a pointwise symmetric monoidal structure. The diagram (<ref>) represents a morphism (Δ_♯_M[-d]→_M× M[-d])⟶ (Δ_♯_MΔ_♯_M). Diagram (<ref>) is then obtained by tensoring with that morphism and identifying the functor (Δ_♯→) ↦ (Δ_♯→) ⊗ (Δ_♯_M Δ_♯_M) with (Δ_♯→) ↦ (Δ_♯→Δ_♯Δ^*). Let M be a closed oriented manifold equipped with a lift of the Euler characteristic. The relative intersection product is the map (/Δ_♯)[-d]⟶Δ_♯(Δ^*/) induced by diagram (<ref>). Taking homology we obtain the map _̋∙+d(E, F)⟶_̋∙(E×_M× M M, F) which we also call the relative intersection product. 
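To preview how this construction enters string topology (the precise definitions are given in the next subsection, and nothing here is used before then): for the loop product one takes E = LM × LM over M × M via evaluation at the basepoints and F = ∅, so that, as explained in the remark that follows, no lift of the Euler characteristic is needed, while for the loop coproduct one takes E = LM over M × M via the two evaluations (ev_0, ev_{1/2}) and F = LM ⊔ LM mapping in by the reparametrizations J_0 and J_1, and there the choice of lift genuinely enters.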
Consider the case =0. Then the relative intersection product reduces to the usual intersection product [-d]→Δ_♯Δ^* for which no lift of the Euler characteristic is required. Consider the diagram (<ref>), where p_E E→ M× M is a Hurewicz fibration. Let E'→ M× M be the Hurewicz fibration with the two factors of M exchanged, as in <ref>. Choose the Pontryagin–Thom lift λ_(M) of the Euler characteristic. Then the relative intersection products ^E, F, ^E', F_̋∙+d(E, F)⟶_̋∙(E×_M× M M, F) with respect to the pairs (E, F) and (E', F) satisfy ^E, F = (-1)^d ^E', F. The diagram (<ref>) is invariant under the exchange of the two factors of M, if we further compose with the isomorphism M→ M given by acting by (-1) along the fibers. Therefore, the diagram (<ref>) used in the definition of the relative intersection product is invariant under the change of the factors of M if we replace ϵ_p by (-1)^dϵ_p and e(M) by (-1)^d e(M). Choose the Pontryagin–Thom lift λ_(M) of the Euler characteristic. We will now give a concrete formula for the relative intersection product. Denote by _M∈^̋d(S^ M, M) the Thom class and consider an embedding M⊂ V into a Euclidean space which induces a Riemannian metric on M. Let (-) be the Moore path space. Since p_E E→ M× M is a Hurewicz fibration, we may choose a path-lifting function (also known as a Hurewicz connection), i.e. a section of (E)→(M × M) ×_M× M E. Evaluating at the endpoints we obtain the transport function which we denote by (M × M) ×_M × M E → E ((p_1 × p_2), e) ↦ (p_1 × p_2).e. Recall the definition of the geometrically defined Pontryagin–Thom collapse map (<ref>), (M × M, 𝕀)_+ → (M^I, π)_+ ∧_M × M (S^ M× M) (x,y) ↦ (ω(x,y), y-x), where ω(x,y) is the geodesic connecting x to y and y-x ∈ T_xM is the corresponding tangent vector. Note that the same formula lifts to Moore paths, which is what we will use below. With that notation we can now define the morphism R E ⟶ (E×_M× M M)^ M e ⟼ (_x ×ω(x,y)^-1).e ∧ (y-x), where (x,y) = p_E(e). Suppose M is a closed oriented manifold equipped with the Pontryagin–Thom lift of the Euler characteristic. The relative intersection product is given by _̋∙(E, F)_̋∙((E×_M× M M)^ M, F^ M)_̋∙-d(E×_M× M M, F). By <ref> (and its proof) the square (<ref>) has a lift to ex-spaces over M × M and is equivalent to Δ_♯(M, 𝕀)_+ [r] [d, "0_M"] (M × M, 𝕀)_+ [d] Δ_♯(S^ M) [r, "∼"] (M)_+ ∧_M × M (S^ M× M). This allows us to write (<ref>) (again in ex-spaces over M× M) as Δ_♯ (F, p_F)_+ [r] ^0_M[dd] (E, p_E)_+ [d] (E, p_E)_+ ∧_M × M(M)_+ ∧_M × M (S^ M× M) Δ_♯ ((F, p_F)_+ ∧_M S^ M) [r] (E, p_E)_+ ∧_M × MΔ_♯ S^ M≅Δ_♯( Δ^* (E, p_E)_+ ∧_M S^ M) _∼[u] A retraction to the wrong-way map is given using the path-lifting function for E by the formula (e, p, u) ↦ ((_x × p^-1).e, u). With that the vertical composition on the right becomes exactly R. <Ref> may be applied to F=∅ to obtain the result about the usual intersection product. The intersection product is given by _̋∙(E)_̋∙((E×_M× M M)^ M)_̋∙-d(E×_M× M M). §.§ String topology operations We begin with the definition of the loop product as in <cit.>. Consider the path space M^I of continuous maps [0, 1]→ M. By <cit.> the projection M^I→ M× M to the endpoints is a Hurewicz fibration. Let LM be the space of continuous maps S^1→ M with LM→ M the evaluation at the basepoint 0∈ S^1. As LM=M^I×_M× M M, LM→ M is a Hurewicz fibration. Concatenation of loops defines a morphism LM×_M LM→ LM. 
It is a morphism of spaces over M, so on the level of corresponding local systems it gives rise to a morphism m_♯_LM⊗_♯_LM⟶_♯_LM. The loop product is given by the composite ∧_̋∙(LM)⊗_̋∙(LM)⟶_̋∙-d(LM×_M LM)_̋∙-d(LM), where the first morphism is given by the intersection product as in <ref>. The loop product is graded commutative: for α∈_̋n(LM) and β∈_̋m(LM) we have α∧β = (-1)^n+m+dβ∧α. The loop product is given by the composite _̋n(LM)⊗_̋m(LM)→_̋n+m(LM× LM)→_̋n+m-d(LM×_M LM)_̋n+m-d(LM). The first map given by the Künneth isomorphism introduces a sign (-1)^n+m when we exchange α and β. The second map given by the intersection product introduces a sign (-1)^d when we exchange the two LM factors by <ref>. Finally, the maps LM×_M LM→ LM given by (γ_1, γ_2)↦γ_1∗γ_2 and (γ_1, γ_2)↦γ_2 ∗γ_1 = γ_2∗ (γ_1∗γ_2)∗γ_2^-1 are homotopic, so they induce equal maps on homology. We now give a description of the loop product that matches the one given in <cit.> for the Chas–Sullivan product. Consider the Hurewicz fibration LM→ M given by evaluating at 0 and let × E=LM× LM→ M× M. Let us briefly recall the map R from (<ref>). We first choose the path-lifting function for the fibration LM→ M given by conjugating a loop in LM with a given Moore path. The corresponding transport function is (M) ×_M × M LM → LM (ω, γ) ↦ω^-1⋆γ⋆ω. We choose the path-lifting function for E→ M× M to be the product of the above. We thus obtain the map R_CS LM × LM → (LM×_M LM)^ M, which can be described as follows. Fix a tubular neighborhood of the diagonal M ⊂ U ⊂ M × M. Let (γ_1, γ_2) be an element in LM × LM with γ_1(0) = x and γ_2(0) = y. If (x,y) ∉U, then send it to the base point. Otherwise, we send it to the pair consisting of (γ_1, ω(x,y) ⋆γ_2 ⋆ω(x,y)^-1) ∈ LM ×_M LM and (x,y) ∈ U/∂ U ≅ S^ M, where ω(x,y) is the geodesic connecting x to y. The loop product is given by the composite _̋∙(LM) ⊗_̋∙(LM) _̋∙((LM ×_M LM)^ M) _̋∙ - d(LM ×_M LM) _̋∙-d(LM). Next, let us recall the definition of the loop coproduct (also known as the Goresky–Hingston coproduct). As before, M is a closed oriented manifold. Consider the space = LM×_M LM⟶ M of maps S^1∨ S^1→ M (the space of figure eights). We have a natural morphism ι_0∪ι_1 LM⊔ LM→ given by figure eights which are constant on each half. The natural projection → LM× LM fits into a commutative diagram @C=3cm[r] LM× LM LM⊔ LM ^-𝕀×∪×𝕀[r] ^ι_0∪ι_1[u] M× LM∪ LM× M ^i×𝕀∪𝕀× i[u] For t∈[0, 1] denote by _t LM→ M the evaluation map at time t. For instance, _0=_1 is the map considered before. Consider the reparametrization map J_s LM→ LM for s∈ [0, 1] defined by J_s(γ) = γ∘θ_1/2→ s, where θ_1/2→ s is the piecewise-linear map that fixes 0 and 1 and takes 1/2 to s. Consider the commutative diagram LM⊔ LM ^-J_0∪ J_1[r] [d] LM ^(_0, _1/2)[d] M ^-Δ[r] M× M The map (_0, _1/2) LM→ M× M is again a Hurewicz fibration: indeed, LM is homeomorphic to (M^I×_M M^I)×_M× M M and (_0, _1/2, _1) M^I×_M M^I→ M× M× M is a Hurewicz fibration. The pullback of the bottom right corner is exactly and the induced map LM⊔ LM→ coincides with ι_0∪ι_1. Consider the homotopy commutative diagram _∙(LM) [r] [d] 0 [d] _∙(LM)⊕_∙(LM) ^-J_0⊕ J_1[r] _∙(LM) where the left vertical map is x↦ (x, -x). This induces a morphism _̋∙-1(LM)→_̋∙(LM, LM⊔ LM). The loop coproduct is given by the composite ∨_̋∙+d-1(LM) →_̋∙+d(LM, LM ⊔ LM) ⟶_̋∙(, LM⊔ LM) ⟶_̋∙(LM× LM, M× LM∪ LM× M). where the second map is given by the relative intersection product using the commutative diagram (<ref>) and the second map is the pushforward along (<ref>). 
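Let us also record a standard compatibility of the loop product with classical operations (stated as a sketch and not used in the sequel): on constant loops the loop product restricts to the intersection product. Indeed, the inclusion of constant loops M × M ⊂ LM × LM is a map of fibrations over M × M whose pullback along the diagonal is M ⊂ LM ×_M LM, and concatenation restricts to the identity on constant loops, so by naturality of the intersection product the composite
H_n(M) ⊗ H_m(M) → H_n(LM) ⊗ H_m(LM) → H_{n+m-d}(LM)
factors through H_{n+m-d}(M) and agrees with the classical intersection product recalled above. In particular the fundamental class [M] ∈ H_d(M) ⊂ H_d(LM) is a unit for the loop product.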
It follows from <ref> that ∨ coincides with the Goresky–Hingston coproduct as described in <cit.>, see also <cit.>. Note that in loc. cit. the loop coproduct has as domain _̋∙(LM, M), i.e. it is shown that ∨ descends along _̋∙(LM) →_̋∙(LM,M). But since that map is split, no information is lost in the above presentation and we obtain the coproduct with domain _̋∙(LM,M) by precomposing with that splitting. If we work with field coefficients or the homology _̋∙(LM, M) has no torsion, there is the Künneth isomorphism _̋∙(LM× LM, M× LM∪ LM× M)≅_̋∙(LM, M)⊗_̋∙(LM, M). If k is an _∞-ring, such that M is oriented over k, the loop coproduct defines a map of spectra _∙+d-1(LM; k)→_∙(LM, M; k)⊗_∙(LM, M; k). If we denote by ∨^ the loop coproduct post-composed with the flip of the two LM factors, then for α∈_̋∙(LM) we have ∨^(α) = (-1)^d+1∨(α). We denote by the morphisms exchanging the two factors of LM. We have a morphism of pairs LM [r] LM LM⊔ LM ^[r] ^J_0∪ J_1[u] LM⊔ LM, ^J_0∪ J_1[u] where the top morphism LM→ LM is given by rotating the loop half-way, i.e. by the map t↦ t + 1/2. Consider the diagram @C=0.5cm_̋∙+d-1(LM) [r] ^𝕀[d] _̋∙+d(LM, LM ⊔ LM) [r] ^[d] _̋∙(, LM⊔ LM) [r] ^[d] _̋∙(LM× LM, M× LM∪ LM× M) ^[d] _̋∙+d-1(LM) [r] _̋∙+d(LM, LM ⊔ LM) [r] _̋∙(, LM⊔ LM) [r] _̋∙(LM× LM, M× LM∪ LM× M) where the squares commute up to signs and the two horizontal composites are both given by the loop coproduct ∨. The first morphism in the definition of the loop coproduct is given by (<ref>), which introduces a factor of (-1) when we exchange the two factors of LM. By <ref> the second morphism, which is the relative intersection product, introduces a factor of (-1)^d when we exchange the two factors. Finally, the diagram (<ref>) is invariant under the exchange of the two loops, so the last morphism does not introduce extra factors. §.§ Simple homotopy invariance of the string topology operations For M a closed oriented d-manifold <ref> provides an equivalence _Mζ_M_M[-d] in (M). Now let M and N be closed oriented d-manifolds and f M→ N a homotopy equivalence in which case we have a canonical isomorphism f^*ζ_N≅ζ_M. We say that f is orientation-preserving if the composite _M[-d]ζ_M≅ f^*ζ_N_M[-d] is equivalent to the identity. As the intersection product [-d]→Δ_♯Δ^* is constructed from the duality pairing _M⊠ζ_M→Δ_♯_M, which is homotopy invariant, as well as the trivialization _M of ζ_M, we see that the intersection product is invariant under orientation-preserving homotopy equivalences. Let us now study homotopy invariance of the relative intersection product. Suppose M and N are homotopy-equivalent closed oriented d-manifolds; we identify the underlying homotopy types using the given homotopy equivalence. Let _M, _N (/Δ_♯)[-d]⟶Δ_♯(Δ^*/) be the two relative intersection products defined using the Pontryagin–Thom lifts λ_(M),λ_(N) of the Euler characteristic. Using <ref> the difference between the two lifts can be identified with the trace of the Whitehead torsion of the homotopy-equivalence f N → M, i.e. (τ(f)) = f(λ_(N)) - λ_(M) ∈_(M)(_M[1-d], (Δ^*Δ_♯_M)/_M) ≅_̋1(LM, M). Let f M→ N be an orientation-preserving homotopy equivalence. The difference _M - _N is given by the composite (/Δ_♯)[-d] ⟶Δ_♯[1-d] Δ_♯( ⊗ (Δ^*Δ_♯_M)/_M ) ⟶Δ_♯( (Δ^*Δ_♯)/) Δ_♯(Δ^*/). 
On homology it simplifies to _̋∙ + d(E, F) ⟶_̋∙ + d -1(F) _̋∙(F×_M LM, F) ⟶_̋∙(E×_M× M M, F), where the second map is given by taking the (non-relative) intersection product with the class (τ(f)) and the third map is given by _̋∙(F×_M× M M^I, F)_̋∙(E×_M× M M^I, F)_̋∙(E×_M× M M, F), where the last map is an isomorphism since E→ M× M is a fibration. By construction, the two relative intersection products are obtained from the commutative diagram (<ref>) (see also <ref>). Their difference is induced by the diagram Δ_♯_M[-d] [r] [d, "Δ_♯ (e(M) - e(N))"] _M× M[-d] [d, "0"] Δ_♯_M [r, equal] Δ_♯_M, which is adjoint to _M[-d] [r, equal] [d, "(e(M) - e(N))"] _M[-d] [d, "0"] _M [r] Δ^*Δ_♯_M. This diagram naturally factors as _M[-d] [r, equal] [d, equal] _M[-d] [d] _M[-d] [r] [d, "(τ(f))"] 0 [d, equal] (Δ^*Δ_♯_M)/_M[-1] [r] [d] 0 [d] _M [r] Δ^*Δ_♯_M, which induces the diagram Δ_♯[-d] [r] [d, equal] [-d] [d] Δ_♯[-d] [r] [d, "(τ(f))"] 0 [d, equal] Δ_♯ (⊗ (Δ^*Δ_♯_M)/_M[-1]) [r] [d] 0 [d] Δ_♯[r] Δ_♯Δ^*. Passing to horizontal cofibers this gives the stated description. Taking =0 in the previous statement, we recover the fact that the usual intersection product [-d]→Δ_♯Δ^* is homotopy invariant. Let ∨_M,∨_N be the two loop coproducts defined using the relative intersection products _M,_N. Let ν = (τ(f))∈_̋1(LM, M). Consider the map Δ' LM→ LM× LM given by γ↦ (γ^-1, γ). Since _̋0(LM, M) and _̋0(LM) are torsion-free, we have the Künneth isomorphism _̋1(LM× LM, LM× M)≅⊕_i+j=1_̋i(LM)⊗_̋j(LM, M). Consider a morphism of pairs (LM, M) (LM× LM, LM× M). This map, along with the Künneth isomorphism, gives rise to a morphism Δ'_̋1(LM, M)⟶⊕_i+j=1_̋i(LM)⊗_̋j(LM, M) and we denote by ν'⊗ν” the image of ν under this map, where we use the Sweedler notation. We similarly denote by ν'⊗ν”∈⊕_i+j=1_̋i(LM, M)⊗_̋j(LM) the image of ν under the map of pairs (LM, M)(LM× LM, M× LM). Let f M→ N be an orientation-preserving homotopy equivalence. Then for every α∈_̋n+d-1(LM) we have ∨_M(f(α)) - f(∨_N(α)) = (-1)^n+1ν'⊗ (ν”∧_N f(α)) - (f(α)∧_N ν')⊗ν”. We apply <ref> to the case where F=F_0⊔ F_1 = LM⊔ LM and E=LM to obtain that the difference is given by _̋n + d-1(LM) ≅_̋n+d(LM, LM ⊔ LM) →_̋n+d-1(LM ⊔ LM) _̋n(LM×_M (LM⊔ LM), LM⊔ LM) →_̋n(, LM ⊔ LM) →_̋n(LM× LM, LM× M∪ M× LM). We have a commutative diagram @C=1cm_̋n+d-1(LM ⊔ LM) ^-(τ(f))[r] _̋n(LM×_M (LM⊔ LM), LM⊔ LM) [r] _̋n(, LM⊔ LM) _̋n+d-1(LM)^⊕ 2^-(τ(f))⊕(τ(f))[r] ^∼[u] _̋n(LM×_M LM, LM)^⊕ 2[r] [u] _̋n(, LM)^⊕ 2[u] The composite _̋n + d-1(LM) ≅_̋n+d(LM, LM ⊔ LM)→_̋n+d-1(LM⊔ LM)≅_̋n+d-1(LM)^⊕ 2 is given by x↦ (x, -x). Let us now describe the composite of the remaining maps from each summand in _̋n+d-1(LM)^⊕ 2. The map from the first summand is given by the composite _̋n+d-1(LM) _̋n+d-1(LM)⊗_̋1(LM, M) _̋n(LM×_M LM, LM) →_̋n(, LM) →_̋n(LM× LM, M× LM), where the map LM×_M LM→ is given by (γ_1, γ_2)↦ (γ_2^-1, γ_2∗γ_1). By naturality of the intersection product we have a commutative diagram _̋1(LM, M)⊗_̋n+d-1(LM) ^_N[r] ^Δ'⊗𝕀[dd] _̋n(LM×_M LM, LM) [d] _̋n(LM× LM, M× LM) ⊕_i+j=1_̋i(LM, M)⊗_̋j(LM) ⊗_̋n+d-1(LM) ^-𝕀⊗_N[r] ⊕_i+j=1_̋i(LM, M)⊗_̋n+j-1(LM×_M LM) ^𝕀⊗ m[u] Using the symmetry of the Künneth isomorphism and the intersection product (see <ref>) we see that the composites _̋n+d-1(LM) _̋n+d-1(LM)⊗_̋1(LM, M)_̋n(LM×_M LM, LM)_̋n(LM×_M LM, LM), where the last map is given by the flip of the two LM factors, and _̋n+d-1(LM) _̋1(LM, M)⊗_̋n+d-1(LM)_̋n(LM×_M LM, LM) differ by (-1)^n+d-1+d=(-1)^n+1. This shows that the map from the first summand is given by α↦ (-1)^n+1ν'⊗ (ν”∧α). 
Similarly, the map from the second summand is given by the composite _̋n+d-1(LM) _̋n+d-1(LM)⊗_̋1(LM, M) _̋n(LM×_M LM, LM) →_̋n(, LM) →_̋n(LM× LM, LM× M), where the map LM×_M LM→ is given by (γ_1, γ_2)↦ (γ_1∗γ_2^-1, γ_2). Similarly to the first summand, we see that the map from the second summand is given by α↦ (α∧ν')⊗ν”. Over the integers the error term in <ref> is in the image of the Künneth map _̋∙(LM,M) ⊗_̋∙(LM,M) →_̋∙(LM × LM, M × LM ∪ LM × M). Now suppose k is an _∞-ring, such that M is oriented over k. In this case we have the spectral-level loop coproduct from <ref>. Then the same transformation formula is valid over k, where both sides are understood as maps _∙+d-1(LM; k) →_∙(LM, M; k)⊗_∙(LM, M; k). § SHKLYAROV COPAIRING AND ITS NULLHOMOTOPY §.§ Mukai pairing and Shklyarov copairing Fix an _∞-ring k and let ∈_k be a k-linear stable presentable ∞-category. Let us recall the Mukai pairing and the Shklyarov copairing from <cit.>. Recall that for a proper ∞-category ∈_k the evaluation pairing ⊗^∨→_k admits a colimit-preserving right adjoint and for a smooth ∞-category the coevaluation pairing _k→^∨⊗ admits a colimit-preserving right adjoint. In particular, by <ref> we get an induced map on dimensions. Let ∈_k be a k-linear stable presentable ∞-category. * If is proper, the Mukai pairing is the composite []()⊗()≅(⊗^∨) k. * If is smooth, the Shklyarov copairing is the composite [] k(^∨⊗)≅()⊗(). A relationship between the Mukai pairing and copairing is given by the following straightforward observation, see <cit.> Suppose ∈_k is a smooth and proper ∞-category. The Mukai pairing and Shklyarov copairing establish a self-duality of ()∈_k. The assumptions guarantee that and are morphisms in (_k)^ and thus is dualizable in (_k)^. Using the functoriality of dimensions from <ref> we get that ()∈_k is dualizable with dual (^∨)≅(). In this section we will present several examples of smooth ∞-categories with the following additional structures: * There is a morphism ()→(). * The element []∈()⊗() lies in the image of ()⊗()→()⊗(). We will show in <ref> (see <ref>) that this structure allows us to define an operation ()→()⊗()[-1], where () is the cofiber of ()→(). §.§ Examples in algebra Let k be a field and _k the ∞-category of dg algebras over k. Let A^e = A⊗ A^ be the enveloping algebra. Recall the following notions: * A dg algebra A is finite cell if it is isomorphic as a graded algebra to the free algebra k⟨ x_1, …, x_n⟩ with the differential satisfying dx_i∈ k⟨ x_1, …, x_i-1⟩. * A dg algebra A is locally of finite presentation if it is compact in _k. * A dg algebra A is smooth if the diagonal bimodule (i.e. A as an A^e-module via the left and right actions of A on itself) is compact in the ∞-category _A^e of A^e-modules. By <cit.> a dg algebra A is locally of finite presentation if, and only if, it is a retract of a finite cell algebra. Moreover, a dg algebra locally of finite presentation is smooth. For any dg algebra A, the ∞-category = _A∈ of dg A-modules is dualizable with ^∨ = _A^ and _k→_A⊗_A^≅_A^e given by sending k to the diagonal bimodule A. So, a dg algebra A is smooth in the sense of the above definition precisely if is smooth in the sense of <ref>. If A is smooth, we have a natural point [A]∈ K(A^e) given by the class of the diagonal bimodule. Let [A^e]∈ K(A^e) be the class of A^e as a left A^e-module. Let A = (k⟨ x_1, …, x_n⟩, d) be a finite cell dg algebra. Then there is a homotopy [A]∼ [A^e](1-∑_i=1^n (-1)^|x_i|) in K(A^e). Consider the multiplication map m A⊗ A→ A. 
It defines a surjective morphism of A^e-modules, where A is the diagonal bimodule. Let Ω^1_A be the kernel of m. By <cit.> the A^e-module Ω^1_A is also finite cell on the generators {dx_1, …, dx_n}, where dx_i = x_i⊗ 1 - 1⊗ x_i. We have a short exact sequence 0⟶Ω^1_A⟶ A^e A⟶ 0 which produces a homotopy [A]∼ [A^e] - [Ω^1_A]. Since Ω^1_A is finite cell, it admits a filtration with associated graded given by the free dg A^e-module on the generators {dx_1,…, dx_n}. This defines a homotopy [Ω^1_A]∼ [A^e] ∑_i=1^n (-1)^|x_i|. It is announced by Efimov that, conversely, a dg algebra locally of finite presentation is quasi-isomorphic to a finite cell algebra if, and only if, [A] is homotopic to a multiple of [A^e]. Let A be a finite cell dg algebra and = _A∈ the derived ∞-category of dg A-modules. Then by <ref> we see that the class []∈()⊗() is proportional to [A^e]. In particular, if we let ()=k mapping to () via the class of the free module [A], then [] admits a lift along ()⊗()→()⊗(). §.§ Examples in homotopy theory Let X be a finitely dominated space and consider =^X. We have the following results: * (^X)≅Σ^∞_+ LX by <ref>. * ^X is smooth by <ref>. * _X∈^X is compact by <ref>. In particular, we have the Euler characteristic χ_(X) = [_X]∈Ω^∞Σ^∞_+ LX. * By <ref> and <ref> we have [] = (LΔ) χ_(X)∈Ω^∞Σ^∞_+(LX× LX), where LΔ LX→ LX× LX is the diagonal map. Let (^X) = Σ^∞_+ X mapping to (^X) via the inclusion of constant loops. From the above description of the assembly map, we see that if X is equipped with a lift of the Euler characteristic, then [] lifts along (^X)⊗(^X). Having an extra structure on X allows us to give a stronger statement. The homological Euler class e(X)∈Ω^∞Σ^∞_+ X is the image of χ_(X) under LX→ X given by evaluating the loop at the basepoint of S^1. We may identify π_0(Ω^∞Σ^∞_+ X)≅_̋0(X;). If X is a closed oriented d-manifold, we may also identify _̋0(X;)≅^̋d(X;) and under this identification the homological Euler class goes to the usual Euler class in cohomology. Let X be a finitely dominated space equipped with the following structure: * A lift of the Euler characteristic along the assembly map. * A trivialization e(X)∼ 0 of the homological Euler class. Then []∈(^X)⊗(^X) admits a nullhomotopy. The composite X⟶ LX⟶ X is the identity, where the first morphism is the inclusion of constant loops. Therefore, a lift of the Euler characteristic along the assembly map provides identifies χ_(X) with the image of e(X) under the assembly map. So, a trivialization of e(X) gives rise to a trivialization of χ_(X). The last statement follows from the fact that a connected finite CW complex carries the Whitehead lift of the Euler characteristic and π_0(Ω^∞Σ^∞_+ X)≅_̋0(X; )≅. Suppose M is a connected finite CW complex and let X=(M) the underlying homotopy type. Then X carries the Whitehead lift of the Euler characteristic and under the composite π_0(Ω^∞Σ^∞_+ X)≅_̋0(X; )≅ we have e(X)↦χ(X). In particular, if χ(X)=0, there is a trivialization e(X)∼ 0 of the homological Euler class. Given a finite CW complex M, the homological Euler class is represented by the 0-chain ∑_a∈ A(-1)^(a)α_a, where A is the set of cells of M and α_a is a point in the interior of a. A bounding 1-chain for the homological Euler class is called a combinatorial Euler structure in <cit.>. §.§ Examples in symplectic geometry Let M be a Liouville manifold of dimension 2n, i.e. an exact symplectic manifold with a convexity condition at infinity, satisfying the nondegeneracy condition (see e.g. <cit.>). 
For example, M=^*N for a compact manifold N satisfies the nondegeneracy condition <cit.>. More generally, any Weinstein manifold satisfies the non-degeneracy condition by <cit.>. In addition, choose a trivialization of 2c_1(_M) for some almost complex structure on M (again, for M=^* N there is a canonical such structure). Let k be a field. Let ^∙(M) be the symplectic cohomology of M (our grading conventions follow <cit.>) and (M) the wrapped Fukaya category. Non-degeneracy of M has the following implications: * The ∞-category _(M)=((M)^, _k)∈_k of (M)-modules is smooth, see <cit.>. * The natural open-closed map ^̋∙((_(M)))⟶^∙(M) is an isomorphism, see <cit.>. Let (_(M)) be the action zero part of symplectic cohomology, so that ^̋∙((_(M)))≅^̋∙+n(M) (see e.g. <cit.>). The cohomology of the cofiber (_(M)) is then the positive action symplectic cohomology ^∙_>0(M). It fits into a long exact sequence ⟶^̋∙+n(M)^∙(M)⟶_>0^∙(M)⟶^̋∙+n+1(M)⟶ We claim that []∈_∙((M× M))≅^∙(M× M) is equal to the image e^*[Δ] of the diagonal class [Δ]∈^̋2n(M× M). Let us sketch the proof. By <ref> the class [] can be computed in terms of the TQFT structure on Hochschild homology _∙((M)). We refer to <cit.> for a construction of a TQFT structure on symplectic cohomology ^∙(M). It can be shown that the open-closed map intertwines the TQFT structures similarly to the proof of <cit.>. In particular, [] = ψ_Q(1) in the notation of <cit.>. By <cit.> we get ψ_Q(1) = e^*(ψ_Q'(1)), where ψ_Q'(1)=[Δ] is the diagonal class. In particular, [] lifts along the map (_(M))⊗(_(M))→(_(M))⊗(_(M)). Consider a compact n-manifold N and let M=^* N. Then the diagonal class [Δ_M] is proportional to χ(N). In particular, in the case χ(N)=0 it vanishes. In fact, Abouzaid <cit.> shows that _(^* N) is equivalent to the ∞-category of k-local systems on N twisted by the second Stiefel–Whitney class w_2(N)∈^̋2(N; /2). So, this example fits both in the framework of symplectic geometry and homotopy theory as described in <ref>. §.§ Examples in algebraic geometry Throughout this section X will be a quasi-compact and quasi-separated scheme X over the base ring k⊃. We denote by p X→ k the natural projection. Let (X) be the unbounded derived ∞-category of quasi-coherent complexes on X. By <cit.> (see also <cit.> and <cit.>) (X) is compactly generated by the subcategory (X) of perfect complexes. The proof of the following statement is identical to the proof of <ref>. Let X be a quasi-compact and quasi-separated scheme. Then (X) is self-dual with the duality functors (X)⊗(X)(X)_k and _k(X)(X)⊗(X). Denote _∙(X) = _∙((X))≅((X)). From now on, we assume that X is a smooth and separated scheme, in which case (X) is a smooth ∞-category by <cit.>. By <cit.> we have a quasi-isomorphism _∙(X) ≅Γ(X, (Ω^1_X[1])). By <ref> the Shklyarov copairing is given by [] = (Δ_*_X)∈_∙(X)⊗_∙(X). Suppose X is a separated smooth affine variety of nonzero dimension. Then the diagonal class []∈_∙(X)⊗_∙(X) admits a nullhomotopy. We have [(Δ_*_X)]∈⊕_n ^̋n(X, Ω^n_X). By the Grothendieck–Riemann–Roch theorem <cit.> (Δ_* _X) = Δ_* ((X)^-1) whose nonzero components have n≥(X). But ^̋n(X, Ω^n_X) = 0 for n≥ 1 since X is affine. In the case when X is a curve, we can describe such nullhomotopies geometrically as follows. Suppose X is a smooth and separated curve equipped with a rational one-form ω∈Ω^1(k(X× X)) on X× X, which is regular away from the diagonal and with a simple pole along the diagonal with residue the constant function 1. Then there is a nullhomotopy of []∈_∙(X)⊗_∙(X). 
We have a resolution 0⟶_X× X(-Δ)⟶_X× X⟶Δ_*_X⟶ 0, where =_X× X(-Δ) is a line bundle, since Δ is a Cartier divisor. This gives (Δ_*_X) = 1 - (1 + c_1() + c_1()^2/2). In particular, a trivialization of c_1() gives rise to a trivialization of (Δ_*_X). A trivialization of c_1() is the same as a connection on . In the trivialization given by the rational section 1 of =_X× X(-Δ), a connection is given by ∇=-̣ω for some rational one-form ω which is regular away from the diagonal. The regularity of the connection on then implies that ω has a simple pole along the diagonal with residue 1. § TOPOLOGICAL QUANTUM FIELD THEORIES Given a smooth ∞-category ∈ together with a partial trivialization of the Shklyarov copairing [] as in <ref>, the goal of this section is to produce a secondary coproduct ()⟶()⊗()[-1], see <ref>. We describe the above operation in terms of framed two-dimensional topological quantum field theories. We will use the cobordism hypothesis, see <cit.>. Let us mention that for our purposes it is enough to work in the underlying (3, 2)-category of the (∞, 2)-category of two-dimensional extended bordisms, so <cit.> give us the relevant results. In <ref> we provide a decomposition of bordisms into elementary ones which allows one to forego the cobordism hypothesis and write the relevant operations in terms of the duality data; the cobordism hypothesis then asserts that the operations are independent of the choice of the decomposition of the bordism into elementary ones. §.§ Background For a manifold we denote by ^k the trivial k-dimensional vector bundle. Let M be a smooth k-manifold. An n-framing (for n≥ k) on M is a trivialization of the stabilized tangent bundle: _M⊕^n-k≅^n. We will work with framings on extended bordisms. We refer to <cit.> for precise definitions of bordisms and framings on those. Extended bordisms organize into the symmetric monoidal (∞, 2)-category _2^ which has the following informal description (see <cit.> for a precise construction): * Its objects are closed 2-framed 0-manifolds (finite collections of points). * Its 1-morphisms from X to Y are compact 2-framed 1-bordisms, i.e. compact 2-framed 1-manifolds B with specified incoming and outgoing boundaries: ∂ B≅ X⊔ Y (with a certain compatibility with the framing in a collared neighborhood of the boundary). * Its 2-morphisms from B_1 X→ Y to B_2 X→ Y are 2-framed 2-bordisms, i.e. compact 2-framed 2-manifolds Σ with corners whose boundary is decomposed into X× [0, 1]⊔ Y× [0, 1] and B_1⊔ B_2. Let be a symmetric monoidal (∞, 2)-category. An object x∈ is fully dualizable if it is dualizable and the evaluation and coevaluation morphisms have a tower of adjoints …⊣ (^Ł)^Ł⊣^Ł⊣⊣^⊣ (^)^⊣… It turns out that to check that an object is fully dualizable it is enough to perform finitely many checks. The following is <cit.> and <cit.>. Let be a symmetric monoidal (∞, 2)-category. An object of is fully dualizable if, and only if, it is smooth and proper. Let be a symmetric monoidal (∞, 2)-category and x∈ a smooth object. The inverse Serre morphism is the composite T x x⊗ x^∨⊗ x x^∨⊗ x⊗ x x. Despite the terminology, the morphism T x→ x is not always invertible. If we assume that x∈ is smooth and proper, one may use ^Ł to similarly define a morphism S x→ x inverse to T (see the proof of <cit.>). For a symmetric monoidal (∞, 2)-category we denote by (^)^∼ the ∞-groupoid consisting of fully dualizable objects of . The following is proven in <cit.> on the level of 2-categories and sketched in <cit.> on the level of (∞, 2)-categories. 
Let be a symmetric monoidal (∞, 2)-category. The evaluation functor ^⊗(_2^, )→ (^)^∼ given by Z↦ Z() is an equivalence. We will now consider a variant of the above statement. Consider the (∞, 2)-subcategory _2^, ⊂_2^ which has the same objects and 1-morphisms, while the 2-morphisms X [r, bend left=50, ""name=U, below, "B"] [r, bend right=50, ""name=D, "B'"below] Y [Rightarrow, from=U, to=D, "Σ"] are extended bordisms Σ, such that every connected component of Σ has a nonempty intersection with B'. Lurie in <cit.> considers a version _2^ of _2^,, where the bordisms are oriented and the condition on Σ is that every connected component has a nonempty intersection with B instead. Let be a symmetric monoidal (∞, 2)-category. The ∞-groupoid ^⊗(_2^, , ) is equivalent to the ∞-groupoid of smooth objects of with the property that the inverse Serre morphism T is invertible. We refer to symmetric monoidal functors _2^, → as positive boundary framed 2-dimensional TQFTs. The following is shown in <cit.>. Let be a symmetric monoidal (∞, 2)-category and Z_2^, → a positive boundary framed 2d TQFT. The inverse Serre morphism T Z()→ Z() is the image under Z of a 1-bordism → represented by the interval with one twist in the framing. §.§ TQFT operations Let be a symmetric monoidal (∞, 2)-category and fix a symmetric monoidal functor Z_2^, →. By <ref> it corresponds to a smooth object x=Z()∈ whose inverse Serre map T x→ x is invertible. In this section we describe some bordisms in _2^, and the corresponding operations provided by the positive boundary TQFT. Consider the circle S^1. The space of 2-framings compatible with the given orientation is a torsor over (S^1, (2))≅. Correspondingly, we may parametrize 2-framed circles by integers, and we denote the corresponding 2-framed circle by S^1_n for n∈, viewed as a 1-morphism ∅→∅ in _2^. Our conventions are as follows: * The disk provides a 2-morphism ∅→ S^1_1 in _2^,. * S^1_0 is the 2-framing on S^1 induced by the unique 1-framing. * The disk provides a 2-morphism S^1_-1→∅ in _2^. Note that it does not lie in _2^, as this bordism has an empty intersection with the outgoing boundary. We number the 2-framings on the circle as in <cit.>. The numbering in <cit.> is slightly different: the three 2-framings above are S^1_0, S^1_1, S^1_2, respectively. Using <ref> the values of Z(S^1_n)∈_() may be explicitly computed as follows. We have an equivalence Z(S^1_n)≅∘σ∘ (𝕀⊗ T^n)∘. In particular, Z(S^1_0)≅(x) and Z(S^1_1)≅(x). Consider the bordisms from <ref>. Our convention is that the 1-morphisms go from left to right and 2-morphisms go from bottom to top in the picture. Applying the functor Z we obtain the following operations: * A product ∧ Z(S^1_n)⊗ Z(S^1_m)→ Z(S^1_n+m-1). * A non-counital coproduct Z(S^1_1)→ Z(S^1_0)⊗ Z(S^1_0). In addition, there is a homotopy between 2-bordisms given by <ref> witnessing the Frobenius property. After applying Z it gives rise to the homotopies ∨' ((-)⊗ 1)∧(1)∼(-)∼(1)∧ (1⊗ (-)) of maps Z(S^1_1)→ Z(S^1_0)⊗ Z(S^1_0). We will explain a connection between this homotopy and the loop coproduct from <ref> in <ref>. §.§ Explicit description of the operations One can give a generators-and-relations presentation of the bordism category _2^, by equipping bordisms with Morse functions. Generators will then be given by bordisms equipped with Morse functions with a unique critical point in the interior. 
As an example of such approach, we refer to <cit.> for a presentation of the underlying 2-category of _2^ and <cit.> for a presentation of the underlying (3, 2)-category of _2^. All 2-bordisms we draw are equipped with a natural Morse function given by the height. In this section we split the bordisms given in <ref> into elementary ones to provide a definition of the TQFT operations independent of the cobordism hypothesis given in <ref>. We begin with a description of the duality data for ∈_2^,. Note that by <cit.> we have ^ = ∘σ∘(𝕀⊗ T). In particular, ^ coincides with up to a twist in the framing. Taking the dual of both sides, we obtain ^Ł = (T⊗𝕀)∘σ∘. The following is shown in <cit.>; we will not describe the framings on the bordisms and refer to <cit.> for a detailed description. The point ∈_2^, is a smooth object. The evaluation and coevaluation are shown in <ref>. The counit and unit of the adjunction ⊣^ are shown in <ref>. Using the symmetric monoidal functor Z_2^, →, we obtain the structure of a smooth object on x=Z()∈. We are now ready to describe the TQFT operations. The simplest operation is the unit on Z(S^1_1). The unit map for Z(S^1_1) is equivalent to the unit η𝕀_→^∘≅ Z(S^1_1) of the adjunction ⊣^. Next, we consider the multiplication on Z(S^1_1). The product ∧ Z(S^1_1)⊗ Z(S^1_1)→ Z(S^1_1) is equivalent to the composite Z(S^1_1)⊗ Z(S^1_1)≅^∘∘^∘^∘≅ Z(S^1_1). Similarly, the product ∧ Z(S^1_1)⊗ Z(S^1_0)→ Z(S^1_0) is equivalent to the composite Z(S^1_0)⊗ Z(S^1_1)≅∘∘^∘∘≅ Z(S^1_0). The cobordism representing the product S^1_1⊔ S^1_1→ S^1_1 can be decomposed as shown in <ref> which implies the claim. The coproduct Z(S^1_1)→ Z(S^1_0)⊗ Z(S^1_0) is equivalent to Z(S^1_1) ≅∘^Ł ≅ (⊗)∘(𝕀⊗∘^⊗𝕀)∘(⊗) (∘)⊗ (∘) ≅ Z(S^1_0)⊗ Z(S^1_0). The cobordism representing the coproduct S^1_1→ S^1_0⊔ S^1_0 can be decomposed as shown in <ref>, which implies the claim. The Shklyarov copairing []∈ Z(S^1_0)⊗ Z(S^1_0) is equivalent to (1). By <ref> [] is computed by traversing the diagram @C=0.5cm@R=1.5cm __x[dr] @=[rr] @[d]^(.2)="a"^(.63)="b" ^η@=> "a";"b" @=[dr] @=[ur] _(𝕀⊗_x^∨⊗𝕀)∘_x[dr] x⊗ x^∨__x^[ur] ^𝕀_x⊗ x^∨⊗_x⊗ x^∨^Ł[dr] @[d]^(.3)="c"^(.73)="d" ^ϵ@=> "c";"d" x⊗ x^∨⊗ x^∨∨⊗ x^∨^-𝕀_x⊗ x^∨⊗_x^∨[ur] @=[rr] x⊗ x^∨⊗ x^∨∨⊗ x^∨__x∘(𝕀⊗_x^∨⊗𝕀)[ur] By <ref> the diagram without η is equivalent to the coproduct Z(S^1_1)→ Z(S^1_0)⊗ Z(S^1_0). By <ref> the unit of Z(S^1_1) is η. Finally, we describe the homotopy ((-)⊗ 1)∧(1)∼(-) given by the left part of <ref>. It will be convenient to present the 2-bordisms in terms of sequences of horizontal slices, where the shown 2-morphism is understood to be acting on the green dashed area. The homotopy ((-)⊗ 1)∧(1)∼(-) is equivalent to the composite of the following sequence of homotopies. 
* Using <ref> the 2-morphism ((-)⊗ 1)∧(1) is given by the composite [figure: a sequence of horizontal slices of the 2-bordism, drawn as circles and pairs of pants, with the green dashed region marking where the 2-morphism acts] * Exchanging the last ϵ with the composite of the isomorphism and the first ϵ we obtain [figure: the same sequence of slices after this exchange] * Applying the cusp 3-isomorphism witnessing the adjunction axioms for ⊣^, we cancel the composite of the first two 2-morphisms and obtain (-): [figure: the resulting sequence of slices] The homotopy shown in <ref> consists of the following two operations: * The black critical point exchanges height with the leftmost orange critical point. * The two orange critical points collide and disappear, passing through a birth-death singularity. The first operation corresponds to an interchange 3-isomorphism. The second operation corresponds to the cusp 3-isomorphism witnessing the adjunction ⊣^ (see e.g. the proof of <cit.>).
Combining the homotopy ∨' ((-)⊗ 1)∧(1)∼(-)∼(1)∧ (1⊗ (-)) given in (<ref>) with a nullhomotopy of (1) we obtain a secondary operation (x)→(x)⊗(x)[-1]. In examples we will be interested in the cases when (1) is not trivialized in (x), but only in some quotient. For this, assume that _() is a stable ∞-category. Consider a morphism (x)→(x) and assume that (1) lifts along (x)⊗(x). In this case the endpoints of the homotopy ∨' lie in (x)⊗(x) and (x)⊗(x), respectively. Therefore, ∨' gives rise to a secondary operation in the quotient. Let x∈ be a smooth object together with a morphism (x)→(x) whose cofiber in the stable ∞-category _() is (x)→(x). Suppose that the Shklyarov copairing [] admits a lift along the morphism (x)⊗(x)→(x)⊗(x). The secondary coproduct ∨(x)⟶(x)⊗(x)[-1] is given by composing the homotopy ∨' with the projection (x)⊗(x)→(x)⊗(x). § STRING TOPOLOGY TQFT Let M be a closed oriented d-manifold. In this section we denote by the same letter its underlying homotopy type. We use the same notation as in <ref> and replace the smooth category ^M by (M) = ((M), _) for concreteness. So, we obtain the TQFT operations described in <ref>. The goal of this section is to show that they are equivalent to the string topology operations from <ref>. §.§ Marked spaces In this section we introduce a convenient pictorial notation for morphisms of local systems that will be useful in the future sections. Fix a space X∈. Consider the symmetric monoidal (∞, 2)-category () of cocorrespondences in spaces (see <cit.> for the construction) which has the following informal description: * Its objects are spaces. * Its 1-morphisms from S to T are cocorrespondences S→ C← T of spaces. * Its 2-morphisms from S→ C_1← T to S→ C_2← T are given by commutative diagrams C_1 [d] C_2 S [uur] [ur] T [ul] [uul] The symmetric monoidal structure is given by the disjoint union of spaces. We have a natural symmetric monoidal functor (-, X)()^2⟶() to the (∞, 2)-category of correspondences, where we recall that the symmetric monoidal structure is given by the Cartesian product. Recall that in <ref> we have also introduced the symmetric monoidal functor _♯^*()⟶_k. So, we obtain the composite ()^2()_. By construction _()(∅, ∅) ≅, __(_, _)≅_ and the induced composite on units is ^_. It is easy to see that in ()^2 we have an adjunction (→←∅)⊣(∅→←) which after the application of the composite (<ref>) corresponds to the adjunction p_♯⊣ p^*. We will later see that working with pictures in ^ is much easier than working the corresponding spectra obtained via (<ref>). Let us now assume that X is finitely dominated. By <ref> the pullback functor p^*_→(X) admits a colimit-preserving right adjoint p_*(X)→_ given by p_*=p_♯(ζ_X⊗(-)) for some ζ_X∈(X). It can be interpreted in ()^2 by formally adjoining a right adjoint to (∅→←) which we denote by (→[red, fill=red] (0, 0) circle (2pt); ←∅). In other words, we allow marking points on the space. For instance, the picture on <ref> represents ζ_X⊗Δ^*Δ_♯_X≅Δ^*Δ_♯ζ_X. The adjunction data for Δ_♯⊣Δ^* and p^*⊣ p_♯(ζ_♯⊗(-)) has the following pictorial representation (see <ref>, where dashed lines denote arbitrary spaces): * The unit η_Δ→Δ^*Δ_♯ of the Δ adjunction attaches a loop at a given point. * The counit ϵ_ΔΔ_♯Δ^*→ of the Δ adjunction breaks an edge. * The unit η_p→ p_♯ζ_X of the p adjunction attaches a disjoint basepoint marked by ζ_X. * The counit ϵ_p_X⊗ p_♯(ζ_X⊗)→ of the p adjunction connects a marked point to an unmarked point. 
§.§ TQFT from local systems Let us explain how string topology operations form a part of a TQFT (this perspective goes back to <cit.>). Throughout this section we return to the case of a closed oriented manifold M. As before, we denote by Δ M→ M× M the diagonal map and p M→ the projection. Denote by Δ_12 M× M→ M× M× M the map (x, y)↦ (x, x, y) and similarly for Δ_23. Denote by p_i M× M→ M the projections on each factor. Recall that by <ref> the ∞-category (M) is smooth with the duality data given by () = Δ_♯_M∈(M× M). Let η_p→ p_♯_M and ϵ_p_M× M[-d]→Δ_♯_M be the coevaluation and evaluation of the relative duality (_M,_M[-d]). The following immediately follows from <ref>. The right adjoint to coevaluation ^(M× M)→_ is given by ^() = p_♯Δ^*[-d]. The unit of the adjunction is p_♯_M[-d] p_♯Δ^*Δ_♯_M[-d]. The counit of the adjunction is Δ_♯_M⊗ p_♯Δ^*[-d]≅Δ_♯ (p_1)_♯(_M⊠Δ^*)[-d]Δ_♯ (p_1)_♯(Δ_♯_M⊗ (_M⊠Δ^*))≅Δ_♯Δ^*. Recall the inverse Serre morphism from <ref>. The inverse Serre functor T(M)→(M) is given by ↦[-d]. In particular, it is invertible. The inverse Serre functor is T() = (p_2)_♯Δ_12^*(⊠Δ_♯_M)[-d]. Consider the composition of correspondences M× M _p_1[dl] ^Δ_23[dr] M× M ^𝕀[dr] _Δ_12[dl] M M× M× M M× M Then ↦Δ_12^*(⊠Δ_♯_M) is given by the composition of ♯-pushforward and *-pullback functors along these correspondences. But the composite of the correspondences is (M M M× M), so we obtain a natural isomorphism Δ_12^*(⊠Δ_♯_M)≅Δ_♯. Since the Serre functor is invertible, by the cobordism hypothesis (see <ref>) we obtain a positive-boundary framed 2-dimensional TQFT Z_2^, ⟶_ such that Z() = (M). In fact, (M) has a left Calabi–Yau structure of dimension d by <cit.> and <cit.>. As sketched in <cit.>, in this case Z descends to a symmetric monoidal functor defined on a twisted version of the ∞-category of oriented positive-boundary extended bordisms. For every n∈ there is an equivalence Z(S^1_n)≅_∙(LM)[-nd]. We have Z(S^1_n) ≅∘σ∘ (𝕀⊗ T^n)∘ ≅ p_♯Δ^*Δ_♯_M[-dn] where the first equivalence is given by <ref> and the second equivalence follows from the self-duality of (M) constructed in <ref> and the description of the inverse Serre functor from <ref>. The result follows from the base change property associated to the Cartesian diagram LM [r] [d] M [d] M [r] M× M The proof shows that Z(S^1_n) = _∙(LM^- M^⊕ n) and the orientation of M is used so get a Thom isomorphism. In particular, without assuming that M is oriented, the various pair-of-pants products Z(S^1_n) ⊗ Z(S^1_m) → Z(S^1_n+m-1) give operations LM^- M^⊕ n⊗ LM^- M^⊕ m→ LM^- M^⊕ n+m-1, that differ by twisting with M. The natural TQFT secondary coproduct ∨ Z(S^1_1)[1] → Z(S^1_0) ⊗ Z(S^1_0) is then a map Σ LM^-TM→Σ^∞ LM/M ⊗Σ^∞ LM/M. The element (1)∈ Z(S^1_0)⊗ Z(S^1_0) is equivalent to (LΔ) [_M]∈_∙(LM× LM), where [_M]∈_∙(LM) is the Euler characteristic and LΔ_∙(LM)→_∙(LM× LM) is the diagonal map on the loop space. By <ref> (1)∈ Z(S^1_0)⊗ Z(S^1_0) is equivalent to []∈((M))⊗((M)). Since ()=Δ_♯_M, we have [] = (LΔ) [_M] by <ref>. Under the identification Z(S^1_n)≅_∙(LM)[-nd] given by <ref> the loop product _̋∙(LM)⊗_̋∙(LM)⟶_̋∙-d(LM) is equal on homology to the TQFT product ∧ Z(S^1_0)⊗ Z(S^1_0)→ Z(S^1_-1). According to <ref> the TQFT product is given by the composite p_♯Δ^*Δ_♯_M⊗ p_♯Δ^*Δ_♯_M ≅ p_♯Δ^*Δ_♯(p_1)_♯(_M⊠Δ^*Δ_♯_M) p_♯Δ^*Δ_♯(p_1)_♯(Δ_♯_M⊗ (_M⊠Δ^*Δ_♯_M))[d] ≅ p_♯Δ^*Δ_♯(p_1)_♯Δ_♯Δ^*Δ_♯_M[d] ≅ p_♯Δ^*Δ_♯Δ^*Δ_♯_M[d] p_♯Δ^*Δ_♯_M[d]. 
In terms of marked spaces, it can be represented by maps [figure: two circles with a red marked point between them, mapping to two circles, mapping to a single circle] The loop product is given by the composite p_♯_♯_LM⊗ p_♯_♯_LM (p_♯⊠ p_♯)(Δ_♯_M⊗(_♯_LM⊠_♯_LM))[d] ≅ (p_♯⊠ p_♯)Δ_♯(_♯_LM⊗_♯_LM)[d] ≅ p_♯(_♯_LM⊗_♯_LM)[d] p_♯_♯_LM[d]. In terms of marked spaces, it can be represented similarly: [figure: the same picture of marked circles] So, we are left with identifying m with ϵ_Δ in the second map. Reading the 1-morphisms horizontally, we can represent the counit ϵ_ΔΔ_♯Δ^*⇒𝕀 and Δ^*Δ_♯Δ^*Δ_♯_MΔ^*Δ_♯_M as [figure: on the left, two arcs mapping to two parallel segments; on the right, two circles mapping to a single closed curve] There is a unique homotopy class of based maps on the left, and the induced map on the right is easily seen to be homotopic to m. Next, let us describe the TQFT origin of the loop coproduct. The coproduct _∙(LM)[-d]→_∙(LM)⊗_∙(LM) is homotopic to the composite _∙(LM)[-d]⟶_∙()⟶_∙(LM)⊗_∙(LM), where the first map is given by the intersection product associated to (_0, _1/2) LM→ M× M and the second map is induced by the projection → LM× LM. The claim follows from the description of the coproduct from <ref> as well as the description of the counit ∘^→𝕀 given in <ref>. In terms of marked spaces, the coproduct can be described as the composite [figure: a circle with a red marked point, mapping to two circles, mapping to a single circle] By <ref> and <ref> we have ((M))≅ Z(S^1_1)≅_∙(LM)[-d], ((M))≅_∙(LM). Let ((M))≅_∙(M) with ((M))→((M)) given by the inclusion of constant loops i_∙(M)→_∙(LM). Its cofiber is given by the complex of relative chains ((M)) ≅_∙(LM, M). By <ref> the class []∈_∙(LM× LM) is equivalent to (LΔ)[_M]. Further, using the Pontryagin–Thom lift of the Euler characteristic, the class [_M] is equivalent to i(e(M)). Using the commutative square with top row Δ: M→ M× M, bottom row LΔ: LM→ LM× LM, and vertical maps i and i× i, we see that [] admits a lift along (i× i): _∙(M)⊗_∙(M)→_∙(LM)⊗_∙(LM). Thus, we can apply the construction of the secondary coproduct from <ref> to obtain an operation ∨: H_∙+d-1(LM)⟶ H_∙(_∙(LM, M)⊗_∙(LM, M))≅ H_∙(LM× LM, M× LM∪ LM× M). The secondary coproduct (<ref>) coincides with the loop coproduct from <ref>.
The middle row describes the coproduct by <ref> and the map _̋∙+d-1(LM)⟶_̋∙(LM× LM, LM× M∪ M× LM) in the definition of the loop coproduct is obtained by summing over the top and bottom rows and taking the cofiber of the resulting map into the middle row and postcomposing with the map induced by (LM × LM, LM × M ⊔ M × LM) → (LM × LM, LM × M ∪ M × LM). We will next make explicit the secondary coproduct. It is given by a homotopy (a⊗ 1)∧ ((i× i)∘Δ) e(M)∼(a)∼ ((i× i)∘Δ) e(M)∧ (1⊗ a) constructed from the following pieces: * The homotopy (<ref>) given by <ref> is (a⊗ 1)∧(1)∼(a)∼(1)∧ (1⊗ a). * Using <ref> we obtain homotopies (a⊗ 1)∧ (LΔ)[_M]∼ (a⊗ 1)∧(1) and (LΔ)[_M]∧ (1⊗ a)∼(1)∧ (1⊗ a). * Applying the lift of the Euler characteristic we obtain homotopies (a⊗ 1)∧ (LΔ) i(e(M))∼ (a⊗ 1)∧ (LΔ)[_M] and (LΔ)i(e(M))∧ (1⊗ a)∼ (LΔ)[_M]∧ (1⊗ a). * Using the commutative diagram M ^i[d] ^-Δ[r] M× M ^i× i[d] LM ^-LΔ[r] LM× LM we obtain homotopies (a⊗ 1)∧ ((i× i)∘Δ)(e(M))∼ (a⊗ 1)∧ (LΔ) i(e(M)), where the left-hand side is given by a morphism _∙(LM)[-d]→_∙(LM)⊗_∙(M), and ((i× i)∘Δ)(e(M))∧ (1⊗ a)∼ (LΔ)i(e(M))∧ (1⊗ a), where the left-hand side is given by a morphism _∙(LM)[-d]→_∙(M)⊗_∙(LM). Thus, we see that both the loop coproduct and the secondary coproduct are given by a homotopy commutative diagram of the form _∙(LM)⊗_∙(M) ^𝕀⊗ i[dr] _∙(LM)[-d] [ur] [dr] ^[rr] _∙(LM)⊗_∙(LM) _∙(M)⊗_∙(LM) _i⊗𝕀[ur] where the top and the bottom 2-morphisms are related by permuting the factors. So, to prove <ref> it is enough to identify the relevant two-morphisms appearing at the top of the loop coproduct and the secondary coproduct. It will be convenient to use the following notation. Consider a morphism ϕ_M[-d]→_♯_LM⊗_∙(LM). By adjunction it gives rise to a morphism ϕ(1)→_∙(LM)⊗_∙(LM). It also gives rise to an operation ϕ(a)_∙(LM)[-d]_∙()⊗_∙(LM)_∙(LM)⊗_∙(LM) as well as the morphism (a⊗ 1)∧ϕ(1) _∙(LM)[-d] _∙(LM)[-d] ⊗_∙(LM)⊗_∙(LM) _∙()⊗_∙(LM) _∙(LM)⊗_∙(LM). There is a homotopy ϕ(a)∼ (a⊗ 1)∧ϕ(1) given pictorially by (a ⊗ 1)∧ϕ(1) [r, "η_p"][dr, equal] [r, "ϕ"][d, "ϵ_p"] [d, "ϵ_p"] ϕ(a)[u, "∼"] [r, "ϕ"] [r, "ϵ_Δ"] Traversing this picture along the top part we obtain (a⊗ 1)∧ϕ(1) and traversing this picture along the bottom part we obtain ϕ(a). The left two-cell is given by the cusp isomorphism witnessing the adjunction p^*(-)⊣ p_♯(ζ_M⊗(-)) and the middle two-cell is given by the interchange two-cell. The strategy of the proof is as follows: * Step 1. We will construct four homotopic morphisms ϕ_1,ϕ_2,ϕ_3,ϕ_4_M[-d]→_♯_LM⊗_∙(LM), where ϕ_4 factors through _M[-d]→_M⊗_∙(LM)_♯_LM⊗_∙(LM). The homotopy ϕ(a)∼ (a⊗ 1)∧ϕ(1) gives rise to a homotopy commuting square (a⊗ 1)∧ϕ_1(1) [r, "∼"][d, "∼"] (a⊗ 1)∧ϕ_2(1) [r, "∼"][d, "∼"] (a⊗ 1)∧ϕ_3(1) [r, "∼"][d, "∼"] (a⊗ 1)∧ϕ_4(1) [d, "∼"] ϕ_1(a) [r, "∼"] ϕ_2(a) [r, "∼"] ϕ_3(a) [r, "∼"] ϕ_4(a). in __(_∙(LM)[-d], _∙(LM)⊗_∙(LM)). * Step 2. We will construct a homotopy (a)∼ϕ_1(a). * Step 3. We will identify the homotopy (a)∼ϕ_1(a)∼ (a⊗ 1)∧ϕ_1(1)∼ (a⊗ 1)∧ϕ_2(1)∼ (a⊗ 1)∧ϕ_3(1)∼ (a⊗ 1)∧ϕ_4(1) with a half of the homotopy (<ref>). * Step 4. We will identify the homotopy (a)∼ϕ_1(a)∼ϕ_2(a)∼ϕ_3(a)∼ϕ_4(a) with the bottom half of diagram in (<ref>) describing the loop coproduct. 
§.§.§ Step 1 We now define the homotopic elements ϕ_1, ϕ_2, ϕ_3, ϕ_4 ∈_(M)(_M[-d], _♯_LM⊗_∙(LM)) by ϕ_1 _M[-d] _♯_LM[-d] (_♯_LM)^⊗ 2_♯_LM⊗_∙(LM) ϕ_2 _M[-d] _♯_LM (_♯_LM)^⊗ 2_♯_LM⊗_∙(LM) ϕ_3 _M[-d] _M (_♯_LM)^⊗ 2_♯_LM⊗_∙(LM) ϕ_4 _M[-d] _M _M ⊗_∙(M) _♯_LM⊗_∙(LM) Note that with the previous notation we have ϕ_1(1) = (1) = [] = [Δ_♯∘ p^*] ϕ_2(1) = [Δ_♯] ∘ [p^*], ϕ_3(1) = [Δ_♯] ∘ i ∘ e(M), ϕ_4(1) = ((i× i)∘Δ)e(M) where [Δ_♯] = LΔ by <ref>. Pictorially, these morphisms are given as follows: ϕ_4 [ddd, "η_Δ⊔η_Δ"] ϕ_3 [u, "∼"] [ur, "ϵ_Δ"] [d, "η_Δ"] ϕ_2 [u, "∼"] [d, "η_Δ"] [ur, "e(M)"] [phantom, ur, ""name=T] [r, "ϵ_p"] [phantom, to=T, "!"] [d, "η_Δ"] ϕ_1 [u, "∼"] [r, "ϵ_p"] [r, "ϵ_Δ"] The triangle labeled by ! is provided by the lift of the Euler characteristic. The other two unlabeled two-cells are the interchange two-cells. §.§.§ Step 2 The homotopy (a)∼ϕ_1(a) is given pictorially by ϕ_1(a) [r, "η_Δ"] [dr, equal] [r, "ϵ_p"][d, "ϵ_Δ"] [r, "ϵ_Δ"][d, "ϵ_Δ"] [r, "ϵ_Δ"] (a) [u, "∼"] [r, "ϵ_p"] [urr, "ϵ_Δ"] §.§.§ Step 3 The composite (a)∼ϕ_1(a)∼ (a⊗ 1)∧ϕ_1(1)∼ (a⊗ 1)∧ϕ_2(1)∼ (a⊗ 1)∧ϕ_3(1)∼ (a⊗ 1)∧ϕ_4(1) in <ref> is given by (a ⊗ 1)∧ϕ_4(1) [-2em] [ddd, "η_Δ⊔η_Δ"] (a ⊗ 1)∧ϕ_3(1) [u, "∼"] [d, "η_Δ"] [ur, "ϵ_Δ"] (a ⊗ 1)∧ϕ_2(1)[u, "∼"] [ur, "e(M)"name=P] [r, "ϵ_p"][d, "η_Δ"][dl, equal] [d, "η_Δ"] [phantom, to=P, "!"] (a ⊗ 1)∧ϕ_1(1) [u, "∼"] [r, "η_p"][dr, equal] [r, "η_Δ"][d, "ϵ_p"] [r, "ϵ_p"][d, "ϵ_p"] [r, "ϵ_Δ"][d, "ϵ_p"] [r, "ϵ_p"] [d, "ϵ_p"] [r, "ϵ_Δ"] ϕ_1(a)[u, "∼"] [r, "η_Δ"] [dr, equal] [r, "ϵ_p"][d, "ϵ_Δ"] [r, "ϵ_Δ"][d, "ϵ_Δ"] [d, "ϵ_Δ"] [ur, equal] (a) [u, "∼"] [r, "ϵ_p"] [r, "ϵ_Δ"] [uurr, equal] The bottom three rows identify with the homotopy (a)∼ (a⊗ 1)∧(1), and the top 4 rows identify with the homotopies (a⊗ 1)∧(1)∼(a⊗ 1)∧ (LΔ)[_M]∼ (a⊗ 1)∧ (LΔ)i(e(M))∼ (a⊗ 1)∧ ((i× i)∘Δ)e(M) in (<ref>). §.§.§ Step 4 Let us first spell out the homotopy (a)∼ϕ_1(a)∼ϕ_2(a)∼ϕ_3(a)∼ϕ_4(a). It is obtained by gluing in an extra circle at the dashed line in (<ref>) and attaching the resulting diagram to (<ref>). We thus obtain ϕ_4(a) [ddd, "η_Δ⊔η_Δ"] ϕ_3(a) [u, "∼"] [ur, "ϵ_Δ"] [d, "η_Δ"] ϕ_2(a) [u, "∼"] [d, "η_Δ"] [ur, "e(M)"] [phantom, ur, ""name=T] [r, "ϵ_p"] [phantom, to=T, "!"] [d, "η_Δ"] ϕ_1(a) [u, "∼"] [r, "η_Δ"] [dr, equal] [ur, equal] [r, "ϵ_p"][d, "ϵ_Δ"] [r, "ϵ_Δ"][d, "ϵ_Δ"] [r, "ϵ_Δ"] (a) [u, "∼"] [r, "ϵ_p"] [urr, "ϵ_Δ"] To compare this to the definition of the loop coproduct as in (<ref>) (more precisely, the lower two rows) let us first spell out the relative intersection square. Specializing the definition of the relative intersection square (<ref>) we obtain [column sep=small] [rr, "η_Δ"] [rrrrrr, equal, bend left] [d, "e(M)"] [rr, equal] [d, "e(M)"] [rr, "ϵ_Δ"] [d, "e(M)"'] [dr, "ϵ_p"] [dr, phantom, ""name=T] [d, "ϵ_p"] [rr, "η_Δ"] [rr, equal] [r, "η_Δ"] [phantom, to=T, "!"] [rr, equal, bend right = 45] [r, "ϵ_Δ"] Here we used that the diagram (<ref>) determining a lift of the Euler characteristic is equivalent to the triangle ! using the Δ_♯⊣Δ^* adjunction. 
Namely, (<ref>) is given by Δ_♯_M[-d] [d, "e(M)"'] [rr, "η_Δ"] [dr, "ϵ_p"] [dr, phantom, ""name=T] _M× M[-d] [d, "ϵ_p"] Δ_♯_M [r, "ϵ_Δ"'] [phantom, to=T, "!"] [rr, bend right = 45, equal] Δ_♯Δ^*Δ_♯_M [r, "η_Δ"'] Δ_♯_M Adding the lower right square of (<ref>) we obtain [column sep=small] [rr, "η_Δ"] [rrrrrr, equal, bend left] [d, "e(M)"] [rr, equal] [d, "e(M)"] [rr, "ϵ_Δ"] [d, "e(M)"'] [dr, "ϵ_p"] [dr, phantom, ""name=T] [d, "ϵ_p"] [rr, "η_Δ"] [d, "ϵ_Δ"] [rr, equal] [d, "ϵ_Δ"] [r, "η_Δ"] [phantom, to=T, "!"] [rr, equal, bend right = 45] [r, "ϵ_Δ"] [rr, "η_Δ"] Simplifying and rearranging the diagram slightly we obtain [column sep=small, row sep = small] [rr, "η_Δ"] [rrrr, equal, bend left] [dd, "e(M)"'] [rr, "ϵ_Δ"] [dr, "ϵ_p"] [dd, "e(M)"'name=C] [d, "ϵ_p"] [phantom, to=C, "!"] [r, "ϵ_Δ"] [rr, "η_Δ"] [d, "ϵ_Δ"] [ur, "η_Δ"'] [urr, equal, bend right] [d, "ϵ_Δ"] [rr, "η_Δ"] Tensoring the !-triangle with the morphism η_Δ_♯_M→_♯_M⊗_♯_M we obtain a commutative prism. Two faces of the prism are given by the square as well as the !-triangle appearing above. We can hence replace this composite with the other three faces of the prism to obtain [column sep=small, row sep = small] [rr, "η_Δ"] [rrrr, equal, bend left] [dd, "e(M)"'name=T] [dr, "ϵ_p"] [rr, "ϵ_Δ"] [dr, "ϵ_p"] [d, "ϵ_p"] [to=T, phantom, "!"] [rr, "η_Δ"] [r, "ϵ_Δ"] [rr, "η_Δ"] [ur, "η_Δ"] [d, "ϵ_Δ"] [ur, "η_Δ"] [urr, equal, bend right] [d, "ϵ_Δ"] [rr, "η_Δ"] which finally coincides with diagram (<ref>).
http://arxiv.org/abs/2406.19102v1
20240627112850
Statements: Universal Information Extraction from Tables with Large Language Models for ESG KPIs
[ "Lokesh Mishra", "Sohayl Dhibi", "Yusik Kim", "Cesar Berrospi Ramis", "Shubham Gupta", "Michele Dolfi", "Peter Staar" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.IR" ]
§ ABSTRACT Environment, Social, and Governance (ESG) KPIs assess an organization's performance on issues such as climate change, greenhouse gas emissions, water consumption, waste management, human rights, diversity, and policies. ESG reports convey this valuable quantitative information through tables. Unfortunately, extracting this information is difficult due to high variability in the table structure as well as content. We propose /, a novel domain-agnostic data structure for extracting quantitative facts and related information. We propose translating tables to / as a new supervised deep-learning universal information extraction task. We introduce SemTabNet – a dataset of over 100K annotated tables. Investigating a family of T5-based Statement Extraction Models, our best model generates / which are 82% similar to the ground truth (compared to a baseline of 21%). We demonstrate the advantages of / by applying our model to over 2700 tables from ESG reports. The homogeneous nature of / permits exploratory data analysis on expansive information found in large collections of ESG reports. § INTRODUCTION It is invaluable to assess mankind's impact on climate. Climate change related information is often published in so-called "Environment, Social, and Governance (ESG)" reports. Corporations report valuable quantitative data regarding their efforts to improve their impact on environment, working conditions, and company culture in these ESG reports <cit.>. Like most technical documents, ESG reports present their key information in tables, making table understanding and information extraction (IE) an important problem <cit.>. This problem becomes further complicated due to the large variety and diversity of tabular representations used in these reports. Despite efforts to standardize these reports, this diversity makes the task of extracting information from these documents extremely challenging (see Appendix Fig. <ref> for an example table). Large Language Models (LLMs) have turned out to be excellent tools for IE, due to their ability to parse, understand, and reason over textual data <cit.>. This, in combination with their in-context learning ability, makes them excellent for IE from text <cit.>. This approach breaks down when applying the same techniques on tables <cit.>. In this paper, we present a general approach for universal IE from tables. Universal IE involves named entity recognition and relationship extraction among other tasks. To this end, we propose a new tree-like data structure, called `/', which can combine multiple (named) entities and (n-ary) relations (Fig. <ref>). It allows us to represent information in a homogeneous, domain-agnostic fashion. A / tree can contain content from different subjects, allowing for a universal IE approach to tables across multiple domains. With the introduction of /, the IE problem from tables becomes a translation problem which we call `statement extraction' – translating the original table into a set of statements. ESG reports, to this day, are manually analyzed by consultancy firms and professional organisations <cit.>. With our proposed statement extraction, this process can now be fully automated.
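To make the proposed data structure concrete before its formal definition in Sect. <ref>, the following minimal Python sketch shows how a single statement tree could be represented in memory. The field names mirror the subject/property/unit attributes described later; the company name and the emission figures are invented purely for illustration, and the assignment of content to the subject and property slots is our own guess at how Fig. <ref> organises it.

# One statement: a subject node with a list of predicate nodes,
# each predicate carrying one atomic quantitative fact.
statement = {
    "subject": "greenhouse gas emissions",   # illustrative
    "subject_value": "Company X, FY2022",    # illustrative
    "predicates": [
        {"property": "scope 1 emissions", "property_value": "12.4", "unit": "ktCO2e"},
        {"property": "scope 2 emissions", "property_value": "33.0", "unit": "ktCO2e"},
    ],
}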
To evaluate our model-generated statements, we propose a novel application of the well-established Tree Edit Distance <cit.>. We propose the Tree Similarity Score (t_s) for measuring the similarity between two trees. As a baseline, we experiment with in-context learning using state-of-the-art LLMs like Mistral <cit.>, Mixtral <cit.>, Llama2 <cit.>, and Falcon <cit.>. These models show an average t_s varying from 0% to 21%. On the other hand, our best-performing fine-tuned T5-based model shows a t_s of 82%. Our main contributions are: * We introduce a new knowledge model called / for mapping complex, irregular, and heterogeneous information to a uniform, domain-agnostic structure. * We present a new supervised deep-learning universal IE task called `statement extraction'. The fine-tuned models show significant improvement over baseline experiments, providing competitive benchmarks for the community. * We contribute to the field of table understanding by providing "SemTabNet", a dataset containing over 100K annotated ESG tables. All cells in these tables are annotated to reflect their semantic relationship with other cells. * We propose the Tree Similarity Score, which in a single number quantifies the quality of entity and relationship extraction in the statement. We begin in Sect. <ref> by discussing related works. In Sect. <ref> we explain the concept of `/' and present the SemTabNet dataset in Sect. <ref>. In Sect. <ref>, we discuss the various experiments we performed and their results. We end the paper with an application of our model on ESG reports. § RELATED WORKS <cit.> group the applications of deep learning methods to tables or tabular data into four broad categories. (1) Tree-based methods such as gradient-boosted decision trees <cit.> for predictions on tabular data. (2) Attention-based methods, which include developing models that learn tabular representations such as TAPAS <cit.> and TABERT <cit.>, and/or fine-tuning models for downstream tasks on tabular data like fact-checking <cit.>, question-answering <cit.>, and semantic parsing <cit.>. (3) Regularization methods, which attempt to modify model sensitivity to tabular features <cit.>. (4) Data transformation methods, which aim at converting heterogeneous tabular inputs to homogeneous data, like an image <cit.>, or at feature engineering <cit.>. Another class of problem which is similar to the data transformation approach is (generative) information extraction (IE), which involves adopting LLMs to generate structural information from an information source. Recent studies have found that LLMs can also perform universal IE <cit.>. In a universal IE task, a model is trained to generate desirable structured information y, given a pre-defined schema s and an information source x <cit.>. Using pre-trained language models, <cit.> perform IE in two steps: argument extraction and predicate extraction. Based on this, they introduced a text-based open IE benchmark. <cit.> presented DeepEx for extracting structured triplets from text-based data. <cit.> demonstrate that pre-training models on a task-agnostic corpus leads to performance improvements on tasks like IE, entity recognition, etc. However, these approaches are limited to textual data. <cit.> have shown that LLMs can perform IE on tabular data when prompted with a table and a relevant extraction schema. Their approach is based on human-in-the-loop in-context learning. A domain expert is necessary for producing a robust extraction schema, which instructs the model to generate structured records from a table.
This strongly limits the adaptability of their approach to different domains. Although limited to text, <cit.> also propose a schema-driven universal IE system. They use a structure extraction language which generates a structural schema prompt that guides the model in its IE tasks. As we show, the / data structure removes several limitations of previous universal IE approaches and is applicable to `wild', heterogeneous information sources. § DEFINITION OF / The / data structure aims to homogenize data coming from complex, irregular, heterogeneous information sources (text or tables). At its core, the / data structure is a tree structure (<ref>). From the root of the tree, we have `subject'-nodes, which contain information regarding the `subject' and the `subject-value' keys. From each subject-node, there are one or more predicate nodes, which define the `property', `property-value', and `unit' keys. Each predicate node carries an atomic piece of quantitative information. The / knowledge model can be applied to both text and tables. In Fig. <ref>, we show the same / structure which could be obtained from a text or a corresponding table. As such, the / structure is not bound only to tables; however, it shows its usefulness particularly when normalising information from heterogeneous tables. The details of how we create trees are presented with examples in <ref>. The tree structure of / allows us to quantify, with a single number, the transformation of information from a table. This is accomplished by computing the Tree Similarity Score (based on the Tree Edit Distance (TED) <cit.>) between predicted and ground-truth /. TED is defined as the minimum-cost sequence of node operations that transform one tree into another. Like the Levenshtein distance on strings <cit.>, TED involves three kinds of operations: node insertions, deletions, and renamings. The cost of each operation can be freely defined, which makes this metric both flexible and powerful. Two trees are exactly the same when their tree similarity score is 100%. To ensure high-quality statement extraction, we set up robust TED costs such that minor differences can lead to poor tree similarity scores. In <ref>, we demonstrate the tree similarity score with some examples. It is also instructive to look at the edit types which converted the predicted statements into ground-truth statements. For this, we measure the ratio of each edit type to the total number of edits. We find that the ratio of insertions and the ratio of deletions carry information about the structural similarity of two trees. If the model predicted too few nodes, the ratio of insertions will be high. Correspondingly, if the statements from the model's prediction have too many nodes, the deletion ratio dominates. If two trees are structurally similar, then the ratios of both insertions and deletions are low. In this case, the edits are dominated by renaming. While tree-based metrics are sensitive to both entity and relationship extraction, we also would like to understand the ability of a Statement Extraction Model (SEM) to extract entities alone [Here, `entity' refers to the values of attributes in a statement. For example, `scope 1 emissions' is an entity from the statement shown in <ref>.]. For this, we concatenate all the predicate nodes in a statement. We create sets of values corresponding to: subject, subject value, property, property value, unit. We count true positives when an entity is found in both the set from the model prediction and the ground-truth set, false negatives when an entity is present only in the ground-truth set, and false positives when an entity is present only in the predicted set. Based on these, we measure the standard precision, recall, and F1 measures.
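Both evaluation metrics just described can be prototyped in a few lines of Python; the sketch below is for illustration only. It relies on the open-source zss package for the Zhang–Shasha tree edit distance with unit costs, whereas the costs we actually use, as well as the exact normalisation behind t_s, are configurable; the normalisation by the larger node count below is therefore an assumption, not the paper's definition. The statement dictionaries follow the hypothetical field names used in the earlier sketch.

from zss import Node, simple_distance

def to_tree(statement):
    # Build a zss tree: root -> subject node -> one leaf per predicate.
    root = Node("statement")
    subject = Node(statement["subject"] + "|" + statement["subject_value"])
    for p in statement["predicates"]:
        subject.addkid(Node(p["property"] + "|" + p["property_value"] + "|" + p["unit"]))
    root.addkid(subject)
    return root

def count_nodes(node):
    return 1 + sum(count_nodes(child) for child in Node.get_children(node))

def tree_similarity(predicted, ground_truth):
    # Unit insert/delete/rename costs; 1.0 means the two trees are identical.
    a, b = to_tree(predicted), to_tree(ground_truth)
    ted = simple_distance(a, b)
    return max(0.0, 1.0 - ted / max(count_nodes(a), count_nodes(b)))

def entity_f1(predicted_entities, ground_truth_entities):
    # Inputs are sets of strings collected from the subject, subject value,
    # property, property value and unit attributes of all predicate nodes.
    tp = len(predicted_entities & ground_truth_entities)
    fp = len(predicted_entities - ground_truth_entities)
    fn = len(ground_truth_entities - predicted_entities)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0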
§ SEMTABNET: / DATA There are many large datasets of annotated tables which suffer from two major limitations: (1) they focus on understanding table structure only, i.e., demarcating table headers from table content, and (2) they contain little diversity in the shape, size, and complexity of the tables. Tables found in ESG reports are of high complexity with little common underlying pattern. In this work, we advance deep learning on table understanding by annotating the content of tables and by annotating complex tables. We used the Deep Search toolkit [Available via: https://ds4sd.github.io.] to collect over 10K ESG reports from over 2000 corporations. Deep Search crawled these PDF reports, converted them into machine-readable format, and provided this data along with the metadata of each report in JSON format. We compiled a list of keywords which capture many important concepts in ESG reports (see <ref>). Next, we select only those tables which have some relevance to the keywords. For this we used the following conditions: the ROUGE-L precision (longest common sub-sequence) score between the raw data and the keywords must be greater than 0.75, and there must be quantitative information in the table. We need a strategy for understanding the content of a table and extracting statements from it. After manually observing hundreds of tables, we decided on a two-step approach to prepare our ground-truth data. First, we classify all the cells in a table, based on the semantic meaning of their content, into 16 categories which help us in constructing statements. For each table, this step creates a `labels-table' with the same shape and structure as the original, but the cells of this labels-table only contain category labels (see <ref>). Secondly, we create a program which reads both the labels-table and the original table and extracts statements in a rule-based approach. The algorithm is described in <ref>; a simplified illustration is sketched below. The 16 labels are: * Property, Property Value * Sub-property * Subject, Subject Value * Unit, Unit Value * Time, Time Value * Key, Key Value * Header 1, Header 2, Header 3 * Empty, Rubbish During annotation, all cells are mapped to one of the above labels. For cells which contain information pertaining to more than one label, we pick the label which is higher in our ordered list of labels. So "Revenue (US$)" is labelled as `property'. The `property' and `sub-property' cells always have associated `property value' cell(s). The `header' cells never have an associated value and often divide the table into smaller sections. Empty cells are labelled `empty'. When a table contains unnecessary parts due to faulty table recovery or non-quantitative information, we label such cells as `rubbish'. When a property/property value pair carries supplementary information, those cells are annotated as `key'/`key values'. Additionally, we observed that most tables can be reasonably classified into three baskets: simple, complex, and qualitative. There are simple tables whose structure cannot be further subdivided into any smaller table. There are complex tables whose structure can be further divided into multiple smaller tables. Finally, there are qualitative tables (like tables of contents) which contain little valuable information for our endeavour.
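To make the rule-based construction concrete, here is a heavily simplified Python sketch. It only handles the easiest layout — one subject for the whole table, `property' cells followed by `property value' cells in the same row, and a single unit — whereas the actual algorithm in <ref> covers all 16 labels, headers, and complex tables; the function and field names are our own illustrative choices.

def statements_from_labels(table, labels_table, subject, subject_value, unit=""):
    # table and labels_table are lists of rows with identical shape; each cell of
    # labels_table holds one of the 16 category labels, e.g. "property" or
    # "property value". Only the property / property-value pattern is handled here.
    predicates = []
    for cells, labels in zip(table, labels_table):
        current_property = None
        for cell, label in zip(cells, labels):
            if label == "property":
                current_property = cell
            elif label == "property value" and current_property is not None:
                predicates.append({
                    "property": current_property,
                    "property_value": cell,
                    "unit": unit,
                })
    return [{
        "subject": subject,
        "subject_value": subject_value,
        "predicates": predicates,
    }]

For a complex table, the same idea would be applied once per sub-table after the header labels have been used to split it.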
We collected about 2,800 tables and found ∼ 20% had simple layout, ∼ 20% had complex layout (composed of multiple simpler tables arranged hierarchically), and ∼ 60% were qualitative. We discarded all qualitative tables from any further analysis. To ensure that our data is not biased towards either simple or complex tables, we manually annotated all the cells of 569 simple tables and 538 complex tables. In total, we annotated 1,107 tables (84,890 individual cells) giving rise to 42,982 statements. Due to the nature of our strategy, one can extract statements from tables either directly in a zero shot manner (direct SE) or by predicting cell labels and then using the rule-based approach to construct statements (indirect SE) (see Fig. <ref>. We have experimented with both approaches. We further augmented the annotated tables to create a large training data. We shuffle the rows and columns of tables corresponding to property-values to create new augmented tables, while keeping their contents the same. While this is straightforward for simple tables, special care was taken for complex tables such that only rows/columns which belonged together within a category were shuffled. The maximum number of augmented tables emerging from the shuffling operations was limited to 130, leading to over 120K tables. To promote further research and development, we open source this large dataset of semantic cell annotations as SemTabNet[Links for code and data, respectively: https://github.com/DS4SD/SemTabNethttps://github.com/DS4SD/SemTabNet https://huggingface.co/datasets/ds4sd/SemTabNethttps://huggingface.co/datasets/ds4sd/SemTabNet]. Table <ref> shows the data counts in SemTabNet. § EXPERIMENTS & RESULTS Fig <ref> presents Statement Extraction as a supervised deep learning task. Due to the nature of how tables are annotated (see <ref>), it is possible to train models for statement extraction statements both directly and indirectly. We consider the following three seq2seq experiments: (1) SE Direct: the model is presented with an input table as markdown in a prompt. The model generates the tabular representation of the resulting statements as markdown. (2) SE Indirect 1D: In this experiment, the model input is the individual table cell contents. For a table with n cells, we predict n labels sequentially (hence, 1D) and then use this information to construct statements. Individual cell labels predicted by the model are stitched together to form the labels table, which is then used to construct the predicted statement by using our rule-based algorithm. (3) SE Indirect 2D: As opposed to SE Indirect 1D, in this experiment, we predict the cell labels of all cells in a table simultaneously. The entire table, as markdown, is input to the model (hence 2D) and the model generates the labels table, as markdown. Using the rule-based algorithm, the predicted labels table is converted into predicted statements. We use six special tokens, which allow us to control and parse model output. * Input table start token: * Input table stop token: * Output start token: * Output stop token: * Newline token: * Separate list item token: This allows us to parse the predicted statements from a LLM. Once successfully parsed, the output statements can be trivially converted from one representation to another. This is crucial because we compare model predicted statements with ground truth by converting statements into a tree structure. These tokens are added to the tokenizer vocabulary before fine-tuning any model. 
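The literal token strings are not reproduced in the list above, so the values in the sketch below are placeholders. It only illustrates how such control markers might be registered with a Hugging Face T5 tokenizer and how the embedding matrix is resized before fine-tuning; it is not the exact setup used in this work.

```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

# Placeholder token strings -- the actual markers used in this work are not shown here.
CONTROL_TOKENS = [
    "[TABLE]", "[/TABLE]",      # input table start / stop
    "[OUT]", "[/OUT]",          # output start / stop
    "[NL]",                     # newline marker
    "[SEP_ITEM]",               # separator between list items
]

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Register the markers as special tokens so they are never split into sub-words,
# then grow the embedding matrix to cover the new vocabulary entries.
tokenizer.add_special_tokens({"additional_special_tokens": CONTROL_TOKENS})
model.resize_token_embeddings(len(tokenizer))
```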
Since the nature of these tasks naturally fits the paradigm of sequence-to-sequence models, we fine-tune T5 models <cit.>. T5 models are encoder-decoder transformer architecture models which are suitable for many sequence-to-sequence tasks. In our experiments, we train T5 variants (Small, Base, Large, and 3B) to create a family of Statement Extraction Models (SEM). In our training data for tables, the input token count is less than 512 for 50% of the data, and it is less than 1024 for 90% of the data. Thus, except where mentioned, we train T5 models (small, base, large) with context windows of 512 and 1024, and T5-3b with context window of 512. All models are fine-tuned in a distributed data parallel (DDP) manner simultaneously across 4 GPU devices (Nvidia A100-40GB for T5-Small, T5-Base, T5-Large and NVIDIA A100-80GB for T5-3B). Additionally, the largest possible batch size was used for all models. The batch size is impacted by factors like model size, GPU memory, and context window. In turn it affects the number of epochs we can fine-tune in a reasonable time. For all tasks, we stop the fine-tuning process either after 500,000 steps or after 7 days. We use the AdamW optimizer with β_1 = 0.9 and β_2 = 0.999. All models are trained with a maximum learning rate of 5 × 10^-4. There is a warm-up phase of 1000 steps in which the learning rate increases linearly from 10^-10 to 5 × 10^-4. After another 1000 steps, the learning rate is exponentially decayed until it reaches its lowest value of 10^-6, where it remains until the end of the training. Table <ref> presents the key results of our experiments. For each table, we evaluate the statements predicted by the model (directly or indirectly) against the ground truth statements. For each task and each model therein, we present the averaged tree similarity score (t_s) (measuring entity & relationship extraction) and the averaged F1 score (measuring entity extraction). Also present are the averaged ratios of tree edit types, which helps us understand t_s. For all reported values, assuming a normal distribution, the standard error of the mean is below 5 × 10^-5 and the 99% confidence interval for all values is about ∼ 0.1%. Baseline Experiments: For baseline experiments, several state of the art LLMs were tested for their in-context learning ability. In the prompt, we show the model an example of direct statement extraction (1-shot), followed by a test table. The models produce statements in markdown format, which are evaluated against ground truth statements. The average tree similarity score across 1100 annotated tables varies from 0% for Falcon40b to 20% for Mixtral (8×7b models). For entity extraction, Llama2-13b performed the best with an average F1 score of 38. Not all outputs generated by the model were in correct markdown format. Minor changes in the prompt were found to create vast differences in the quality of extracted statements. In <ref>, we show examples of the prompt and the model output for some cases. Statement Extraction Indirect 1D: All models trained on this task have context window of 512. Their performance tends to scale with model size. These models can learn to extract entities, but relationship extraction is difficult. For SEM-T5-small, the ratio of insertion is ≈ 98% which means that the predicted statements does not have enough nodes. Statement Extraction Indirect 2D: All models trained on this task perform well on entity extraction with average F1 scores of over 95%. 
The highest performing model is the SEM-T5-3b (512) with an average tree similarity score of 81.76 %. Statement Extraction Direct:Based on tree similarity score, most models show poor performance in direct SE. The best performing model is SEM-T5-base with a context window of 1024. It gets an average F1 score of 76.99% and an average tree similarity score of only 11%. To understand, why these models performs so poorly on direct SE, we look at the ratio of tree edits. We note that the ratio of deletions for all models in this task is close to 0. On the other hand, the ratio of insertions for all models is high (from 88% to 98%). This suggests that the statement trees produced by these models is missing vast number of nodes compared to the ground truth. In fact, perusing the model output shows that while the output is of high quality, it contains significantly less nodes than ground truth statements. Discussion: SE Indirect 1D shows good performance on entity extraction, but performs poorly for both entity and relationship extraction. In this task, the model only sees the content of one cell at a time which makes it easy to extract entities. However, this does not allow the model to develop a strong capability to learn tabular relationships. On the other hand, SE Direct, gives poor performance on both entity extraction and relationship extraction. Direct SE expects the models to unravel a dense table into /, for which they must produce many output tokens. For example, the average number of output tokens in the test data for SE direct is 5773 ± 51, which is significantly larger than the number of tokens for SE indirect 2D (346 ± 1). Thus, direct SE is a very challenging task and might require different strategies to be executed successfully. SE Indirect 2D, avoids the disadvantages of both the tasks. In this case, the model sees the entire input table (has the chance to learn tabular relationships) and is only tasked with producing a labels table (can finish generation in a reasonable number of tokens). Our experiments clearly demonstrate that statement extraction via the Indirect 2D approach gives better results. This is an unexpected finding of our study, and we hope it motivates other researchers to improve zero-shot statement extraction capability. § APPLICATION TO ESG RESULTS Due to their homogeneous structure, statements enable large-scale exploratory data analysis and data science. To demonstrate the advantage of statements over traditional tabular data science, we applied SEM-T5-large (512 SE Indirect 2D) over 2700 tables published in over 1000 ESG reports in 2022. This lead to 14,766 statements containing over 100k predicates. This dataset containing ESG related KPIs is invaluable to researchers, policy-makers, and analysts. We filter this large dataset to contain only those predicates with quantitative property values. This subset contains 47 901 predicates from 601 corporate ESG reports. We search the properties in this dataset for some keywords representative of ESG KPIs. Fig. <ref> (top) shows the distribution of the number of predicates and the number of distinct organizations which matched our simple keyword search. For example, using `emission' as a keyword, we obtain over 4000 hits with results coming from over 300 distinct corporations. Fig. <ref> (bottoms) shows the total scope 1 emissions (left) and total scope 2 emission (right). 
Each box shows the distribution of emission from multiple corporations across sectors (∼20 in Healthcare to ∼100 in Technology and Industrial Goods) containing data from several years. The data reported in the original report contained emissions in different units, which were harmonized for creating this plot. Since we only took a small subset of 1000 reports for this analysis, our data is incomplete and is only representative. The / dataset allows one to study how emissions from individual companies or across sectors have evolved over time. This dataset can also serve as a starting point for many other downstream applications like question-answering, fact checking, table retrieval, etc. § CONCLUSION & FUTURE WORKS We have presented a novel approach to map complex, irregular, and heterogeneous information to a uniform structure, /. We presented Statement Extraction which is a new supervised deep-learning information extraction task. We contribute the field of table understanding by open-sourcing SemTabNet consisting of 100K ESG tables wherein all cells. Investigating three variations of the statement extraction task, we found that using a model to generate table annotations and then construct / produces best results. This approach has the advantage, that it produces homogeneous structured data with reduced hallucinations. / are an advantageous vehicle for quantitative factual information. They enable down-stream tasks like data science over a large collection of documents. We extracted over 100K facts (predicates) from only 1000 ESG reports. This work can be easily extended to include domains other than ESG. It can also be extended towards multi-modality by including text data. We leave for future exploration, the use of statements in downstream tasks like QA or document summarization. § LIMITATIONS Although, the ideas and the techniques we describe in this paper are domain agnostic, we limit the scope of this paper to the domain of corporate Environment, Social, and Governance (ESG) reports. This choice is motivated by two observations. First, corporations report valuable quantitative data regarding their efforts to improve their carbon emissions, working conditions, and company culture in ESG reports. These reports contain valuable information regarding the environmental impact of businesses, and the urgency of climate change motivates us to target this domain. Secondly, there is a large variety and diversity of tabular representations used in these reports. Despite efforts to standardize these reports, this diversity makes the task of extracting information from these documents extremely challenging, motivating our choice. The scope of this work is limited to declarative, explicit knowledge only. All other kinds of knowledge such as cultural, implicit, conceptual, tacit, procedural, conditional, etc. are ignored. We focus on information which one colloquially refers to as `hard facts'. Additionally, we limit the scope of this work to quantitative statements i.e. statements whose property values are numerical quantities. We implement this restriction in the notion that we avoid qualitative statements i.e. statements which are not quantitative. Our model training strategy was biased against large models. We trained all models for either 500K steps or 7 days using the largest possible batch size. This means smaller models learn more frequently (more epochs) than larger models. However, we do not believe this severely impacted the outcome of our experiments. 
Our resources were enough to recover well-known trends: improved model performance with model size and context-length. § ESG KEYWORDS §.§ Environment * Scope 1 GHG Emissions Scope 1 are all direct emissions from the activities of an organization under their control. This includes fuel combustion on site such as gas boilers, fleet vehicles and air-conditioning leaks. * Scope 2 GHG Emissions Market Volume Scope 2 are indirect emissions from electricity purchased and used by the organization. Emissions are created during the production of the energy and eventually used by the organization. A market-based method reflects emissions from electricity that companies have actively chosen to purchase or reflects their lack of choice. * Scope 2 GHG Emissions Location Volume Scope 2 emissions are indirect emissions from the generation of purchased energy. A location-based method reflects the average emissions intensity of grids on which energy consumption occurs (using mostly grid-average emission factor data) * Scope 2 GHG Emissions Other Volume Scope 2 emissions are indirect emissions from the generation of purchased energy. Overall, if not clearly defined whether it is market-based calculation or location-based calculation * Scope 3 GHG Emissions Scope 3 emissions are all other indirect emissions (excluding Scope 2) that occur in the value chain of the reporting company, including both upstream and downstream emissions. * Environmental Restoration and Investment Initiatives Monetary Value The fields represent the monetary value spent on environmental initiatives. * Total Water Discharged The fields represent the overall volume of water discharged by a company. * Total Water Withdrawal The fields represent the total volume of water withdrawn by a company. * Total Water Recycled The fields represent the total volume of water recycled or reused by a company. * Toxic Air Emissions - NOx The fields represent the total amount of nitrous oxide (NOx )emissions emitted by a company. * Toxic Air Emissions - SOx The fields represent the total amount of sulfur oxide (Sox) emissions emitted by a company. * Toxic Air Emissions - Overall The fields represent the total amount of air emissions emitted by a company. * Toxic Air Emissions - VOC The fields represent the total amount of volatile organic compound (VOC) emissions emitted by the company. * Hazardous Waste - Disposed to Aquatic The fields represent the total amount of hazardous waste disposed to aquatic environment. * Hazardous Waste - Disposed to Land The fields represent the total amount of hazardous waste disposed to non aquatic or land environment. * Hazardous Waste - Total Recycled The fields represent the total amount of hazardous waste recycled. * Hazardous Waste - Total Amount Generated The fields represent the total amount of hazardous waste generated by a company. * Hazardous Waste - Total Amount Disposed The fields represent the total amount of hazardous waste disposed. * Non-Hazardous Waste - Disposed to Aquatic The fields represent the total amount of non-hazardous waste disposed to the aquatic environment. * Non-Hazardous Waste - Disposed to Land The fields represent the total amount of non-hazardous waste to non aquatic or land environment * Non-Hazardous Waste - Total Recycled The field represents the total amount of non-hazardous waste recycled. * Non-Hazardous Waste - Total Amount Generated The fields represent the total amount of non-hazardous waste Generated by a company. 
* Non-Hazardous Waste - Total Amount Disposed The fields represent the total amount of non-hazardous waste disposed. * Total Waste Produced The fields represent the total amount of waste produced by a company. * Total Waste Recycled The fields represents the total amount of waste recycled by a company. * Total Waste Disposed This fields represent the total amount of waste disposed by a company. * Number of Sites in Water Stress Areas The field represents the number of sites located in water stress areas. * E-Waste Produced The field identifies the mass volume of f E- waste produced which are electronic products that are unwanted, not working, and nearing or at the end of their life. Examples of electronic waste include, but not limited to : computers, printers, monitors, and mobile phones * E-Waste Recycled The field identifies the mass volume of E- Waste Recycled. * E-Waste Disposed The field identifies the mass volume of E- waste disposed. * Number of Sites Operating in Protected and/or High Biodiversity Areas The field identifies the number of sites or facilities owned,leased, managed in or adjacent to protected areas and areas of high biodiversity value outside protected areas. * Impacted Number of Species on International Union of Conservation of Nature (IUCN) List The field identifies the number of impacted species on International Union of Conservation of Nature (IUCN) red list. * Impacted Number of Species on National listed Species The field identifies the number of impacted species on National Listed Species. * Baseline Level The field identifies the value at baseline or year that target is set against. * Target Year The field identifies the year in which the renewable energy goal is set to be completed. * Target Goal The field identifies the target goal for renewable energy. * Actual Achieved The fields identifies the actual value achieved for the renewable energy goal. * Baseline Level The field identifies the baseline emissions value. * Target Year The field identifies the year in which GHG emission goal is set to be completed. * Target Goal The field identifies the target goal for GHG emission reduction. * Actual Achieved The field identifies the value achieved of GHG emissions reduced compare - in metric tons. §.§ Social * Training Hours Per Employee The fields identifies the numerical value of training hours per employee. * Training Hours Annually The fields identifies the numerical values of training hours conducted within a year. * Lost Time Injury Overall Rate The fields identifies the total number of injuries that caused the employees and contractors to lose at least a working day. * Lost Time Injury Rate Contractors The fields identifies the number of injuries that caused the contractors to lose at least a working day. * Lost Time Injury Rate Employees The fields identifies the number of injuries that caused the employees to lose at least a working day. * Employee Fatalities The fields identifies the number of employee fatalities during a one year period. * Contractor Fatalities The fields identifies the number of contractor fatalities during a one year period. * Public Fatalities The fields identifies the number of general public fatalities during a one year period. * Number of Other Fatalities The fields identifies the number of fatalities during a one year period not broken down by employee, contractor, or public. 
* Total Incident Rate Overall Workers The field identifies the number of work-related injuries per 100 overall workers during a one year period for both employees and contractors. * Total Incident Rate Contractors The field identifies the number of contractor work-related injuries per 100 overall workers during a one year period. * Total Incident Rate Employees The field identifies the number of work-related injuries per 100 overall workers during a one year period for employees. * Employee Turnover - Gender Male Rate The field identifies the absolute number turnover rate by males in a company . * Employee Turnover - Gender Female Rate The field identifies the absolute number turnover rate by females in a company. * Employee Turnover Overall Rate The field identifies the absolute number turnover rate for overall employees in a company. * Median Gender Pay Gap - Global The field identifies the gender pay gap median value of the company at a global level. * Mean Gender Pay Gap - Global The field identifies the gender pay gap mean or average value of the company at a global level. * Median Gender Pay Gap by Location The field represents the gender pay gap median value of the company at a location or country level. * Mean Gender Pay Gap by Location The field represents the gender pay gap mean/average value of the company at a location or country level. * Employee Turnover by Age - Lower Value The field Identifies the minimum age in a given range for employee turnover statistics. * Employee Turnover by Age - Upper Value The field identifies the maximum age in a given range for employee turnover statistics. * Employee Turnover by Age - Rate The field identifies the employee turnover rate. * Employee Turnover by Location Rate The field identifies the absolute number of employee turnover rate by location. * Workforce Breakdown Rate The field identifies the absolute number of employees of a company based on seniority, ethnicity or gender. * Workforce Breakdown Job Category Data: Value (ABS) The field represents the employee count absolute value at a category level within a workforce. * Number Of Product Recalls The fields identifies the number of product recalls. * Product Recalls Annual Recall Rate The fields identifies the product recall rate of a company. §.§ Governance * Percentage of Negative Votes on Pay Practices Year * Board of Director Term Limit The field identifies maximum amount of years a board member can serve. * Board of Director Term Duration The field identifies number of years a board member can serve before reelection. * Auditor Election Year The field identifies when the current lead auditor elected. * Independent Auditor Start Year The field represents the start year the company started having the audit company as its independent auditor. * Average/Mean Compensation of Company Employees-Global The field represents the average or mean compensation for company employeesat a global level. * Ratio Average Compensation of CEO to Employee - CEO- Global The field represents the ratio between the compensation paid to the companies CEO and the average compensations received by employees at a global level. * Compensation of Company Employees by Location The field identifies the average compensation for company employees at a location level. * Number of Suppliers Complying with Code of Conduct The field identifies the number of suppliers that comply with companies supplier code of conduct. * Share Class Numeric The field identifies the share class numeric component. 
* Voting Rights The field identifies the number of voting rights per each share of stock within each class. * Shares Outstanding The field identifies the number of shares outstanding within a companies common stock. * Chairman Effective Begin Year The field indicates the year when the current chairman assume his or her position. This field is used if a full effective date is not available. * Chairman Effective End Year The field indicates the year when the chairman left the position. * CEO Effective Begin Year The field identifies the year the CEO assumed his or her position. * CEO Effective End Year The field indicates the year when the CEO left the position. * CEO Compensation Salary The field identifies the current CEO salary. * CEO Compensation Overall The field identifies the CEO's overall compensation including salary, bonuses and all awards. * CEO Cash Bonus The field identifies the cash bonus value for the CEO. * CEO Stock Award Bonus The CEO Stock Award Bonus value * CEO Option Awards The CEO Option Awards bonus value * CEO Other Awards The fields identifies other compensation outside of salary, cash bonus, stock award bonus and option awards. This could include change in pension and values categorized as "all other compensation" * CEO Pension The fields identifies the CEO pension amount. * Cash Severance Value The fields identifies the amount of cash the severance policy for each category. * Total Severance Value The fields identifies the total value amount of the severance policy. * CEO Share Ownership The field identifies the number of shares the CEO owns in the company. * CEO Share Class Numeric The field identifies the share class numeric component. * Board Member Age The field identifies the age of the members of the board. * Board Member Term in Years The fields identifies how long the individual board member has been on the board which is determined in years. * Board Member Effective Year (Director Since) The fields identifies the year the individual board member started serving on the board. * Board Profile As of Year The field identifies the year of the board information. An example would be the year of the proxy statement. * Participation On Other Company Board The field identifies the number of boards a member is part of outside of the organization. * For Value Negative Votes on Directors The field identifies the number of for value votes the director received. * Against Value Negative Votes on Directors The field identifies the number of against votes the director received. * Abstain Value Negative Votes on Directors The field identifies the number of votes that were abstained for a given director. * Broker Non Vote Value Negative Votes on Directors The field identifies the number of broker non votes for given director. * Number of Board Meetings Attended by Board Member The field identifies the number of board meetings attended by a board member. * Number of Board Meetings Held by Company The field identifies the number of board meetings held by a company while member was on the board. * Total Members on Board per Skill Set The field identifies the number of board members within a specific skillset type. § EXAMPLES OF STATEMENTS A / is complete when it contains all the predicates needed to completely specify objective knowledge pertaining to a subject, i.e. a / includes all co-dependent predicates. We borrow this notion of completeness from the fields of natural science. 
An important implication of these definitions is that within a single statement, multiple predicates cannot carry information about the same `property'. This implies, for example, multiple measurements of the same variable in n different conditions will lead to n different statements. While complete / are extremely valuable, we find that incomplete / are quite resourceful, especially as we apply our ideas to domains outside of natural science. Examples of statements from other domains are shown below. Basic Sciences: Consider the following piece of text or unstructured data. “At a pressure of one atmosphere (atm), water freezes (solidifies) at 0 ^∘C and water boils at 100 ^∘C.” We note that to completely describe the phase changes of water, we need to specify both temperature and pressure. Leaving any one of temperature or pressure out makes the information regarding phase change incomplete. This information is presented as / in the Tables <ref> and <ref>. This example demonstrates that multiple statements can be extracted from even single sentences. Physics: Independent properties make independent statements, as shown below. § TED SIMILARITY SCORE §.§ Creating Trees The / data structure can be viewed in many representations: hypergraphs, tree, table, records, and transforming the representation of this data structure in other formats is trivial. In our setup, when represented as a tree, all nodes in a / has four attributes: name, type, value, and parent. We start a tree with the root node with name as `/root', type as `root', and no value. This node does not have any parent node. Next, the statement nodes emerge as branches from the root. Each statement node has a name like `/root/s0' or `/root/s2' (here, `s' indicates that this is a statement node and the number acts as an index), type as `statement', no value and the root node as its parent. Further, attached to each statement node are predicate node(s) with names like `/root/s1/p0' or `/root/s0/p3', type as `predicate', no value and a statement node as its parent. Finally, in our current implementation, each predicate node has five children nodes attached to it. These leaf nodes can be of type: subject, subject-value, property, property-value, unit and the value attribute is populated with the actual value. The leaf nodes may have names like `/root/s2/p1/subject' or `/root/s0/p3/property-value'. In this representation, the name of a node completely determines the location of the node in a tree. As an example, we show the tree structure for the statements shown in <ref>: [backgroundcolor=gray!10] §.§ Computing Tree Similarity Score For comparing two statement trees, we setup strict costs for each edit operation. The predictions are maximally punished for any structural deviation from the ground truth, i.e. deletion and insertion each have a cost of 1. For renaming of the node's value attribute, we only allow two nodes to be renamed if they are of the same type. If both nodes' value attribute is of type string, then we calculate a normalized Levenshtein edit distance between the two strings. If both nodes' value attribute is of numerical type, then the two values are directly compared. In this case, the cost is 0 if the two values are the same, and 1 in all other cases. If the value attribute of both the ground truth and the prediction node is empty, then the cost operation is also 0. We denote TED with t. We define normalized TED (nTED or t) as the ratio of the distance to the number of edits between two trees. 
Using the normalized TED, a normalized Tree Similarity score can be computed as t_s = 1 - t. Consider comparing the trees for the two statements s0 and s1, from the example above. These two trees differ only in their numeric value but are otherwise similar to each other. Two edits are required to convert one tree into another: one corresponding to the property-value of `time' and the other corresponding to the property-value of `scope 1 emissions'. If the numeric values are interpreted as floats, then our strict setup will maximally punish for each edit giving an edit distance of 2 renaming, 0 deletions, and 0 insertions. The normalized tree edit distance (ratio of distance to total number of edits) would be 2 / 2 = 1. Thus, the TED similarity score would be 1 - 1 = 0. However, our model outputs numeric values as strings, which can be compared via normalized Levenshtein distance. Then, the first rename edit of year values will give a distance of 1/4 = 0.25, and the other rename edit will give a distance of 2/3 = 0.66. In this case, the total tree edit distance is 0.9166, the normalized tree edit distance is 0.4583. This gives a TED similarity score of 0.54. We will interpret this by saying that “the two tree (when the numeric value are interpreted as strings) are 54% similar to each other”. Given that the two trees are similar in their structure and only differ in their numeric values, this shows that our setup of TED similarity score is very strict. For illustrative purposes, let us consider another example. We consider that the s0 in the above example is the ground truth statement: [backgroundcolor=blue!10] And we have a model which makes the following prediction: [backgroundcolor=orange!10] We observe that the predicted tree is missing an entire predicate with time property. This happens when models stop generating new tokens. Compared to the previous example, the ground truth and model prediction have a major structural deviation. In addition, the model also made a mistake in the value of the `property' node. Instead of `scope 1 emissions' as in ground truth, the model predicted `scope 2 emissions'. To convert one tree into another, we need a total of 7 edits: six nodes need to be deleted (or inserted) (5 leaf nodes and 1 predicate node) and 1 renaming edit. All deletions or insertions have equal score of 1 each, and the renaming costs 1/17 ≈ 0.0588. The total tree edit distance becomes 6.0588, the normalized tree edit distance is 0.8655. This gives us a tree similarity score of 0.1344. We interpret that the two trees are only 13% similar to each other. § BASELINE EXPERIMENTS Example of successful statement extraction: Consider the above table, with a simple layout, from the 2022 ESG report of Splunk Inc. We prompt Mixtral with the above table using the following prompt. For rendering, we replace our line-break token `<br>' with actual line-breaks and remove some aspect of the example statement for brevity. [backgroundcolor=gray!10] The model output for the above prompt with greedy decoding was: [backgroundcolor=orange!10] This is an example of correct statement extraction. For the same table with a different example in the prompt, the output of the same model was: [backgroundcolor=orange!10] This is an invalid output without any correct markdown structure or content. This shows that the in-context approach is sensitive to the prompt and thus is not robust. § ALGORITHM FOR STATEMENT EXTRACTION We present the algorithm we used to extract statements. 
For this algorithm, the inputs are the original table and the labels table.
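The full rule-based algorithm is not reproduced here; the sketch below is a deliberately simplified, single-subject version meant only to illustrate how a labels-table drives statement construction. The label spellings, dictionary keys, and example values are illustrative assumptions.

```python
def build_statements(table, labels, subject="organization", subject_value=None):
    """Toy reconstruction of a statement from (table, labels-table).

    `table` and `labels` are equally shaped lists of rows; each label is one of
    the 16 categories (only a few are handled in this simplified sketch).
    """
    statement = {"subject": subject, "subject_value": subject_value, "predicates": []}
    for row_cells, row_labels in zip(table, labels):
        prop, value, unit = None, None, None
        for cell, label in zip(row_cells, row_labels):
            if label in ("property", "sub-property"):
                prop = cell
            elif label == "property value":
                value = cell
            elif label in ("unit", "unit value"):
                unit = cell
            # 'empty' and 'rubbish' cells are simply ignored here
        if prop is not None and value is not None:
            statement["predicates"].append(
                {"property": prop, "property_value": value, "unit": unit}
            )
    return statement


# Example: a two-column table with a property column and a value column.
table = [["Scope 1 emissions", "1200"], ["Scope 2 emissions", "3400"]]
labels = [["property", "property value"], ["property", "property value"]]
print(build_statements(table, labels, subject_value="ACME Corp"))
```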
http://arxiv.org/abs/2406.18039v1
20240626033235
Diagnosis Assistant for Liver Cancer Utilizing a Large Language Model with Three Types of Knowledge
[ "Xuzhou Wu", "Guangxin Li", "Xing Wang", "Zeyu Xu", "Yingni Wang", "Jianming Xian", "Xueyu Wang", "Gong Li", "Kehong Yuan" ]
physics.med-ph
[ "physics.med-ph" ]
http://arxiv.org/abs/2406.17885v1
20240625184750
Enabling Regional Explainability by Automatic and Model-agnostic Rule Extraction
[ "Yu Chen", "Tianyu Cui", "Alexander Capstick", "Nan Fletcher-Loyd", "Payam Barnaghi" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Yu Chen^1,2, Tianyu Cui^1,2, Alexander Capstick^1,2, Nan Fletcher-Loyd^1,2, Payam Barnaghi^1,2 (p.barnaghi@imperial.ac.uk). [1] Department of Brain Sciences, Imperial College London, London, United Kingdom. [2] Care Research and Technology Centre, The UK Dementia Research Institute, London, United Kingdom. In Explainable AI, rule extraction translates model knowledge into logical rules, such as IF-THEN statements, crucial for understanding patterns learned by black-box models. This could significantly aid in fields like disease diagnosis, disease progression estimation, or drug discovery. However, such application domains often contain imbalanced data, with the class of interest underrepresented. Existing methods inevitably compromise the performance of rules for the minor class to maximise the overall performance. As the first attempt in this field, we propose a model-agnostic approach for extracting rules from specific subgroups of data, featuring automatic rule generation for numerical features. This method enhances the regional explainability of machine learning models and offers wider applicability compared to existing methods. We additionally introduce a new method for selecting features to compose rules, reducing computational costs in high-dimensional spaces. Experiments across various datasets and models demonstrate the effectiveness of our methods. Enabling Regional Explainability by Automatic and Model-agnostic Rule Extraction. XAI <cit.> has garnered increasing attention in recent years, driven by the widespread adoption of deep learning models. XAI generally aims to enable professionals to understand why a model has made a specific prediction (decision) and/or what a model has learned from the data. Rule extraction (also known as rule learning) <cit.> is a field in XAI that translates a model's knowledge through a set of logical rules, such as IF-THEN statements, to obtain explainability of a model's reasoning. Such rule-based explanations can offer interpretations that mimic human experts' reasoning in solving knowledge-intensive problems <cit.>. This type of interpretation is particularly desirable in applications with tabular data, such as drug discovery <cit.> and healthcare <cit.>. In practice, we often want to interpret a model's analysis over a specified subgroup of the data. For instance, interpreting the patterns captured by a model for diagnosing a specific disease is highly desirable in clinical applications (<ref>). Data samples labelled with the disease must share similar patterns that do not exist outside the specific subgroup. However, the subgroups of interest usually have sporadic occurrences in many real-world scenarios, including most applications that prefer rule explanations, such as disease diagnosis, drug discovery, and financial fraud detection <cit.>. Existing methods in rule extraction focus on global explainability, often compromising regional explainability of minor subgroups of data to achieve better overall performance across the entire data distribution. §.§ Regional rule extraction To better interpret data patterns for a specified subgroup, we propose a novel method for regional rule extraction, which aims to extract optimal rules from a specified data region. Formally, the outcome of regional rule extraction from a model is in the form of IF-THEN statements: IF x ∈ 𝕏, THEN y ∈ 𝕐, where 𝕏 denotes a subspace of a multivariate variable x confined by a set of rules, e.g.
value ranges of several features. 𝕐 denotes a subset of a target variable y. For instance, when extracting rules from a classifier, y often refers to the model's prediction and 𝕐 denotes a class of interest (e.g. positive class in binary classification). Compared to global rule extraction, regional rule extraction does NOT generate rules for the data region y ∉𝕐. It is expected to gain two advantages through this reduction: i) obtaining more accurate and generalised rules for the specified data region y ∈𝕐, and ii) providing more precise control on key properties of the rules (e.g. number of rules, number of samples that satisfy these rules) for the specified data region. A relevant topic in rule extraction is sequential covering, which iteratively learns a rule list for a data subgroup at each step to create a decision set that covers the entire dataset <cit.>. In sequential covering, the method applied to learn the rule list for a specified subgroup often involves a global rule extraction method, such as decision trees. The learned rules at each step then become the decision path to the purest node of the specified subgroup in a decision tree. To this end, regional rule extraction can be directly applied at each step of sequential covering to extract rules for specific data subgroups. §.§.§ Implementation challenges Conventional ways to extract rules for data subgroups or the entire dataset often utilise various search strategies, such as hill-climbing, beam search, and exhaustive search <cit.>, as the objective is to search the best subspace of features that significantly elevates the occurrence of data of interest. Due to the vastness of the search space, these strategies often require predefined discretisation for numerical features, For instance, association rule classifiers <cit.> use association rule mining <cit.> to transform rule extraction into finding frequent item sets – groups of items that frequently appear together in training samples. Consequently, numerical features must be transformed to discretised items before this process. Similarly, KDRuleEx <cit.> was developed to extract decision tables using categorical features. Moreover, Falling rule list <cit.>, Scalable Bayesian Rule Lists (SBRL) <cit.>, and Bayesian Rule Sets (BRS) <cit.> are probabilistic rule extraction methods that use distributions of discrete variables for modelling rule generation. However, predefined discretisation is often neither applicable nor desirable in real-world applications. Besides, when features are discretised individually, meaning the discretisation considers only the marginal distributions of features rather than their joint distributions, there is an implicit assumption that these features are independent – a condition not valid in many real-world scenarios. Moreover, the computational cost of automatic discretisation may increase exponentially when considering dependencies among a large number of features. Decision Tree (DT) classifier <cit.> is the most popular approach applied in rule extraction for neural networks due to its flexibility and accessibility <cit.>. The most attractive advantage of decision trees is their accessibility: they do not require predefined discretisation for numerical features. 
However, decision trees are not well-suited for regional rule extraction as: i) the selection of binary splits may still be affected by other data regions that are not of interest, especially in multi-mode scenarios; ii) the fact that they scan every observed value of a feature to find an optimal split, leading to a tendency to overfit the minimum support criterion on top features. §.§ AMORE In order to enable regional explainability with easy access, we propose AMORE – a novel approach that can automatically generate rules (e.g. key intervals of numerical features) through a model-agnostic algorithm. More specifically, our main contributions are three-fold: * we enhance regional rule extraction for black-box machine learning models, offering more accurate and generalised rules for underrepresented data regions than global rule extraction, and also allowing for local rule extraction for a given sample; * we enable automatic rule generation with numerical features, without requiring predefined discretisation or making any assumption about feature distributions; * we develop an efficient method for selecting features to compose rules, which is designed to seamlessly integrate with our rule generation method and reduce computational cost in high-dimensional feature spaces. The workflow of our methods is illustrated in <Ref>. To the best of our knowledge, this is the first attempt to specifically address regional rule extraction with automated rule generation in the literature of XAI. § RESULTS We conducted experiments on several tasks with a set of publicly available datasets and various models to demonstrate the effectiveness of the proposed methods, including: * diabetes prediction <cit.> using a logistic regression model; * sepsis prediction <cit.> using a NCDE model <cit.>; * molecular toxicity prediction <cit.> using a GNN; * handwriting digits (MNIST <cit.>) classification using a CNN; * brain tumour MRI image classification <cit.> using an EfficientNet <cit.> pre-trained on ImageNet-1K dataset <cit.>. A summarised description of all tasks is provided in Table 1 in Supplementary material. Most of these tasks are imbalanced binary classification, a common scenario in applications requiring rule explanation. As a result, we observe that models often exhibit significantly better recall than precision even with techniques of up-weighting the minority class. In all these binary classification tasks, the positive class (i.e. c = 1) is designated as the class of interest. For obtaining a more balanced performance, we choose the prediction threshold y_th by maximising the difference between the True Positive Rate (TPR) and False Positive Rate (FPR) (i.e., TPR - FPR) using the ROC curve on the training set. Consequently, we consider the model predicting a positive sample when the model's output is greater than the threshold: ŷ_p > y_th, where ŷ_p represents the probability of a sample being from positive class given by the model. The MNIST and brain tumour MRI dataset are used to illustrate how our method can be applied to multi-classification tasks and how it can interpret image data through latent states. For both tasks, we select the class with the maximal ŷ_p as the predicted class. Although these experiments focus on classification tasks, the proposed method can be readily applied to regression tasks by transforming the target 𝕐 from a discrete value to a value range of the target variable y. 
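A minimal sketch of the threshold choice described above (maximising TPR − FPR on the training set) using scikit-learn is shown below; the variable names are illustrative and the snippet is not the authors' implementation.

```python
import numpy as np
from sklearn.metrics import roc_curve


def choose_threshold(y_true, y_prob):
    """Pick the prediction threshold that maximises TPR - FPR on the ROC curve."""
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)
    return thresholds[np.argmax(tpr - fpr)]


# y_prob holds the model's positive-class probabilities on the training set:
# y_th = choose_threshold(y_train, y_prob_train)
# a test sample is then predicted positive when y_prob_test > y_th
```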
§.§ Evaluation criteria In the rest of this article, we denote the true label of a sample as y^*, the model's prediction as ŷ, the class of interest as the target class c, the sample index as n, the number of training samples as N, and the indicator function as 𝕀(·). Since all the tasks are classification, we also refer to the specified data region of interest as the target region, denoted by ŷ = c. We apply three criteria for evaluating the quality of an extracted rule set 𝕏 from a model: * Support – the number of training samples that satisfy the rule set. It is a metric for measuring the coverage of the rule set, acting a role like recall in classification evaluation. A larger support is not necessarily better as there is usually a trade-off between support and confidence. Support = ∑_n=1^N𝕀(x^(n)∈𝕏) * Confidence (also called rule accuracy) <cit.> – the proportion of samples that belong to the target region under the rule set. This metric quantifies the purity of the subspace defined by the rule set. It can be computed as: Confidence = ∑_n=1^N𝕀((ŷ^(n)=c) & (x^(n)∈𝕏))/∑_n=1^N𝕀(x^(n)∈𝕏) * Fitness – a new metric we propose for regional rule extraction, being the ratio difference between samples that belong to the target region and those that do not, under the rule set, divided by the total number of samples in the target region. The higher the score, the better, and in the best case Fitness = 1: Fitness = ∑_n=1^N𝕀((x^(n)∈𝕏) & (ŷ^(n)=c))/∑_n=1^N𝕀(ŷ^(n) = c) - ∑_n=1^N𝕀((x^(n)∈𝕏) & (ŷ^(n)≠ c))/∑_n=1^N𝕀(ŷ^(n) = c) Support and confidence are widely applied in the literature of rule extraction. Typically, support and confidence are negatively correlated because including more samples likely introduces additional noise into the subspace. A better rule set should exhibit reduced correlation by increasing confidence while maintaining a similar support. We define fitness for evaluating regional rule sets, which takes into account the trade-off between support and confidence. This metric measures how well the subspace defined by a rule set fits into the target data region. A rule set with larger support and lower confidence, or one with smaller support and higher confidence, could both result in a lower fitness. Fitness can be negative when the subspace is mostly out of the target region. §.§ Baseline method In all of our experiments, we chose the DT classifier as the baseline method because, similar to our method, it does not require predefined discretisation for numerical features, and we were unable to find another method with this capability. Additionally, it allows us to verify the expected difference between the global and regional rule extraction. The DT classifiers were trained to classify samples belonging to the target region as positive class and others as negative class. The rules from a DT classifier are formed by the decision path to the node with the highest fitness. The rules from AMORE are the rule set with the highest fitness among all candidate rule sets. All results are provided in <Ref>. Results from AMORE are deterministic when both the model and training set remain unchanged. Similarly, the DT classifier yields consistent results upon varying the random seeds, as all features are involved in split selection in decision trees. Therefore, there is no variance reported in <ref>. 
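The three criteria above, and the way the decision-tree baseline is scored, translate into a short sketch like the one below. It assumes boolean masks for "sample satisfies the rule set" and "sample lies in the target region", and it maps l_max to the tree depth and s_min to the leaf size, which is an interpretation of the shared hyperparameters rather than the authors' exact configuration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def rule_metrics(in_rule, in_target):
    """Support, confidence and fitness (as defined above) for one candidate rule set."""
    in_rule = np.asarray(in_rule, dtype=bool)
    in_target = np.asarray(in_target, dtype=bool)
    support = int(in_rule.sum())
    hits = int((in_rule & in_target).sum())      # satisfy rules and lie in target region
    misses = int((in_rule & ~in_target).sum())   # satisfy rules but lie outside it
    n_target = int(in_target.sum())
    confidence = hits / support if support else 0.0
    fitness = (hits - misses) / n_target if n_target else 0.0
    return support, confidence, fitness


def dt_baseline_best_node(X, in_target, l_max=3, s_min=50):
    """Train a DT on target-region membership and score every tree node by fitness."""
    clf = DecisionTreeClassifier(max_depth=l_max, min_samples_leaf=s_min, random_state=0)
    clf.fit(X, np.asarray(in_target, dtype=int))
    # decision_path returns a (n_samples, n_nodes) indicator of node membership
    membership = clf.decision_path(X).toarray().astype(bool)
    best = None
    for node in range(clf.tree_.node_count):
        support, confidence, fitness = rule_metrics(membership[:, node], in_target)
        if support >= s_min and (best is None or fitness > best["fitness"]):
            best = {"node": node, "support": support,
                    "confidence": confidence, "fitness": fitness}
    return clf, best
```

The rules reported for the baseline would then be the feature/threshold splits along the path from the root to the selected node, which can be read off `clf.tree_.feature` and `clf.tree_.threshold`.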
§.§ Hyperparameters In general, there are two hyperparameters required to constrain the extracted rules: i) the maximum number of features included in a rule set is limited by l_max, constraining the complexity of extracted rules to not be too large; ii) the minimum number of training samples that satisfy the extracted rules is limited by s_min, constraining the support of the rules to not be too small. We configure the same l_max for both methods in each task for consistency. However, the hyperparameter s_min is highly correlated to the optimal fitness that a rule set can achieve. Therefore, we conducted a grid search for s_min, alongside other method-specific hyperparameters, for both methods in each task. We provide the selected hyperparameters for both methods in Table 2 in Supplementary Material. <Ref> visualises the comparison between the two methods while specifying different values of s_min. To avoid selecting rules that have slightly higher fitness but significantly lower confidence, we specify a confidence lower bound ι for choosing the best rule set across all experiments. This lower bound filters out any rule sets with confidence levels below it when selecting the best rule set based on fitness. However, if no rule set with a confidence above this lower bound is obtained, we simply select the one with the highest fitness. To verify the robustness of AMORE with respect to ι, we conducted a sensitivity analysis by varying ι from 0.7 to 0.9 (Figure 1 in Supplementary Material). AMORE maintained better or equivalent performance compared to DT classifiers. We chose ι = 0.8 as it is considered reasonably good for rule confidence while allowing a higher support. §.§ Task 1 – Diabetes prediction The Diabetes prediction dataset is a public dataset <cit.> that contains medical and demographic information of 100,000 patients along with their diabetes status. It consists of eight raw features including two binary features (, ), two categorical features (, ), and four numerical features (, , , and ). We trained a logistic regression model to classify whether a patient has diabetes or not, and then applied our methods to extract rules that interpret the key knowledge acquired by the model in recognising diabetes patients. We first identify highly influential features using our feature selection method (<Ref>) and then extract rules for the target region using those features. The most informative feature identified by both methods is HbA1c level (). As a measure of a person's average blood sugar level over the past 2-3 months, it reasonably emerges as a key factor in predicting diabetes, demonstrating the effectiveness of our feature selection method. As only one rule is allowed in this task, AMORE achieved much better performance than the decision tree classifier (<Ref>) due to the trade-off issue in global rule extraction is magnified in such scenarios. Furthermore, we demonstrate obtaining local rules for three test samples by our method, as shown in <Ref>: one with a very high predicted probability, another close to the decision threshold, and a third below the threshold. The obtained local rules exhibit lower confidence and fitness when the predicted probability is lower, indicating that the quality of local rules correlates with the model's uncertainty on the given sample. For samples that are distant from the target region (e.g. ŷ_p ≈ 0), our approach may not return valid local rules (rules that give a higher confidence than without them). 
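Returning to the selection rule with the confidence lower bound ι described in the hyperparameters subsection above, a minimal sketch is given below; `candidates` is assumed to be a list of dicts carrying the support/confidence/fitness values from the previous sketch.

```python
def select_best_rule_set(candidates, conf_lower_bound=0.8):
    """Pick the candidate rule set with the highest fitness, preferring those whose
    confidence clears the lower bound; fall back to the overall best fitness if
    no candidate does."""
    eligible = [c for c in candidates if c["confidence"] >= conf_lower_bound]
    pool = eligible if eligible else candidates
    return max(pool, key=lambda c: c["fitness"])
```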
§.§ Task 2 – Sepsis prediction The Sepsis dataset <cit.> is a public dataset consisting of hourly vital signs and lab data, along with demographic data, for 40,336 patients obtained from two distinct U.S. hospital systems. The objective is to predict the onset of sepsis within 72 hours from the time a patient has been admitted to the ICU. We treat the data as time series, with each data sample comprising 72 time steps (hours), and each time step including 34 features representing vital signs and laboratory data. For patients who do not have records for the full 72 hours, we treated features of missing hours as missing values filled by NaN before feature augmentation. We conducted feature augmentation by three cumulative operations on the 34 original features: i). denotes the cumulative number of time steps that a raw feature has non-missing values; ii). denotes the maximum value of a specific feature among all passed time steps; iii). denotes the value sum of a specific feature over passed time steps. We added the time index as a suffix to the augmented feature. For example, denotes the cumulative number of time steps that have recorded (beats per minute) at the 45-th time step. After augmentation, each time step has 136 features, resulting in a total of 9,792 features for each sample. We trained a NCDE model <cit.> on this dataset to predict the onset of sepsis. NCDE is able to deal with irregular sampled time series using controlled differential equations. ODE solvers are involved in the approximation of the gradients of the model parameters. This way, we demonstrate that our approach can also be applied to this type of models. More information about NCDE can be found in Supplementary Materials. The extracted rules can be found in <Ref>. The feature indicates temperature (^∘C) and is Systolic BP (mm Hg). As this task has a high dimensional feature space, our method selected different features with decision trees. We also observe that the top rules of both methods are from features augmented by cumulative time steps. This suggests that the model considers the frequency of taking those vital signs and lab tests as key information for predicting sepsis. §.§ Task 3 – Molecular toxicity prediction Predicting the toxicity of new medicines using molecular structure is crucial in drug discovery, given that over 30% of promising pharmaceuticals fail in human clinical trials due to toxicity discovered in animal models <cit.>. The Tox21 (The Toxicology in the 21st Century) challenge contained in the MoleculeNet <cit.> is a public dataset that assesses the toxicity of 8,014 compounds across 12 different categories. These categories include 7 nuclear receptors and 5 stress response pathways. We trained a graph attention network <cit.> to predict the categories of compounds using their chemical structures as input. This is a multi-label classification task, and we chose the first two categories for rule extraction as demonstration: the androgen receptor (AR) and aryl hydrocarbon receptor (AhR). We utilise RDKit descriptors <cit.> to map molecule structures to interpretable features, providing a consistent set of features that describe essential molecule properties. As the results presented in <Ref>, AMORE obtained higher fitness than DT classifiers in both cases. §.§ Task 4 –MNIST digits classification In addition to tabular data, we showcase the capability of our method to interpret models for image data by using MNIST dataset <cit.>. 
We trained a neural network with a convolutional layer and two fully connected layers for this task. The target region is set to ŷ=7, which identifies the model's key knowledge for classifying the digit 7. We used the model's latent representations (input to the final linear layer) as features to generate rules, which dimension is 128. The performance of extracted rules can be found in <Ref> as well. For a visual demonstration, we illustrate the extracted rules by samples that satisfy each rule in <Ref>. It is clear to see that the first rule emphasises the upper half of digit 7, while the second rule focuses on the lower half. §.§ Task 5 – Brain tumour MRI classification The brain tumour MRI dataset <cit.> is a public dataset that includes brain MRI images with four classes: glioma tumour, meningioma tumour, pituitary tumour, and normal. We used an EfficientNet <cit.> architecture for this classification task which is pre-trained on ImageNet-1K dataset <cit.>. Like the MNIST task, we also use the latent representations from the model as features to generate rules for a target region in this experiment, which dimension is 1280. We extracted rules for identifying “normal" brain MRI images where the class “normal" is the minority class in this dataset. The sample means of the MRI images are not visually interpretable. Therefore, we showcase two samples that satisfy rules extracted by AMORE in <Ref>: one is from class “normal" and the other is malignant. We visualise each rule by colouring pixels according to each pixel's impact on the latent feature that defines the rule, thereby highlighting the most impactful areas for each rule in an MRI image. § DISCUSSION In the presented experiments, we illustrate that AMORE can extract representative rules with large support and high confidence for specific target regions, achieving better or equivalent performance compared to DT based explanation. Our proposed approach is also applicable to a broad range of machine learning models without requiring predefined discretisation for numerical features. We compare the best rule sets with only “AND" operations here, however, a more comprehensive view could be constructed by post-processing that merges candidate rule sets using “OR" operations according to customised requirements. We consider this as future work. While two hyperparameters are configured to limit the minimum support (s_min) and the maximum length (l_max) of extracted rules for both methods, we observe that the rules extracted by AMORE often have actual support and length that are closer to these configurations. This aligns with our expectations to have more precise control over regional rule extraction. We demonstrated how to apply AMORE with data types other than tabular data in several experiments. For instance, in the molecular toxicity prediction task, we map the molecular structures to RDKit descriptors, providing interpretable rules by the descriptors. Likewise, in the MNIST and Brain tumour experiments, we extract rules from the latent space of models and then interpret them through visualisation in the raw pixel space. In general, for data with features that are not directly comprehensible in the form of rules, one can map the raw features to an interpretable feature space so that AMORE can be applicable. For instance, the speech and text data can be interpreted by mapping raw features to semantic concepts, taking into account specific requirements of domain knowledge for an application. 
Regarding regression and unsupervised tasks with continuous target variables, AMORE can be applied to extract rules by defining a value range of the target variable as the target region. However, AMORE is not intended to reveal the dynamic patterns of continuous variables in regression tasks, such as patterns indicating an increase or decrease in the variable. Future work may instead focus on using the dynamics (e.g., the first- or second-order derivatives) of the target variable to enable extracting rules for dynamic patterns. § METHODS The objective of regional rule extraction is to find a subspace of data that maximises the purity of a specified subgroup within it. The subspace is defined by a rule set that must satisfy two constraints: the maximum length l_max and the minimum support s_min. We quantify the purity of a specified subgroup as its conditional probability within that subspace, which can be formulated as below: max_𝕏 ℙ(y∈𝕐|x ∈𝕏) s.t. |𝕏| ≤ l_max, ∑_n=1^N 𝕀(x^(n)∈𝕏) ≥ s_min, where |𝕏| denotes the length of the rule set. We define one rule by one feature, i.e. one value of a categorical feature or one value interval of a numerical feature. The length of the rule set is the number of features included in the rule set. It is computationally prohibitive to solve such an optimisation problem exactly, especially for numerical features in a high-dimensional space. The challenges are mainly due to: i) the large search space for selecting feature combinations to compose rules; and ii) the large search space for selecting value intervals of numerical features. We tackle these challenges heuristically through two procedures: i) a feature selection procedure, and ii) a rule extraction procedure. §.§ Feature selection for rule extraction Although the feature selection procedure is optional, we recommend conducting it before rule extraction for high-dimensional data. In line with the purpose of rule extraction, we focus on selecting features that frequently appear in a decision path, mirroring the principles of decision-making. This can be achieved by using feature importance measures to identify the most influential features for given samples and then mining frequent, highly important feature sets across those samples. We propose a new method (<Ref>) for effectively selecting features of rules. It can be performed using any feature importance measure. Nonetheless, we have also designed a new measure (<Ref>) for differentiable models, leveraging integrated gradients <cit.> to quantify feature importance, as it aligns better with the principle of decision rules. For non-differentiable models, such as tree models or ensemble models, one can use SHAP <cit.> or LIME <cit.> to measure feature importance. §.§.§ Preliminary: integrated gradients The integrated gradient (also known as the integrated Jacobian) <cit.> is a pairwise measure that quantifies the contribution of changing an input feature to the shift of a target variable given by a trained differentiable model. This measure can be applied to identify feature sets that most frequently have a joint high impact on shifting a target variable, which is crucial for determining the target variable's value. Assume we have a model 𝒢: ℝ^D →ℝ that infers a variable y ∈ℝ from an observation x ∈ℝ^D, i.e. y = 𝒢(x). Here D is the number of features. Let x_i denote the i-th feature of a baseline sample x and x̃_i the i-th feature of a test sample x̃.
We can compute the integrated gradient of the i-th feature with respect to the shift of y between the two samples as below <cit.>: 𝐣_i = (x_i - x̃_i)∫_0^1∂𝒢(x)/∂ x_i |_x=γ_i(λ) dλ, γ_i(λ) = x̃_i + λ (x_i - x̃_i), ∀λ∈ [0,1 ] Here γ_i(λ) represents a point between x_i and x̃_i. §.§.§ Feature importance measured by integrated gradients Based on <Ref>, we then estimate the importance of the i-th input feature on the shift of y as the following equation: 𝐣̃_i = |𝐣_i/y-ỹ|, where y = 𝒢(x), ỹ = 𝒢(x̃) In brief, 𝐣̃_i represents the ratio of the shift attributed to the i-th feature out of the total shift caused by all features. As the original integrated gradients are sensitive to the scale of the target variable, we normalise the integrated gradients by the absolute value of the shift of the target variable. This normalisation ensures that the importance of each feature is comparable across different sample pairs, which is crucial for identifying frequently important feature sets. Since this feature importance is measured using the shift between a baseline sample and a test sample, we suggest choosing class centres as baseline samples in classification tasks. Test samples should be a randomly selected subset from the training set to simulate decisions about the class origin of a random sample. We ensure the test samples are balanced across all classes, preventing the oversight of important features for minor subgroups of data . For regression or unsupervised tasks, the selection of baseline samples could depend on the specific application. A few general options are the mean, median, or mode of the training samples clustered by target regions. §.§.§ Frequently important feature sets The proposed feature importance (<Ref>) is measured by a pair of samples: a baseline sample and a test sample. When given B baseline samples, M test samples, and D features for each sample, one can produce an importance matrix 𝐉∈ℝ^B × M × D. For instance, 𝐉_b,m,i is the importance of the i-th feature on the shift of the target variable from the b-th baseline sample to the m-th test sample. This importance matrix 𝐉 can be readily transformed to a size of B M × D. For feature importance determined by other measures, one may directly construct the importance matrix of size N × D, where N is the number of samples. Then each row of the importance matrix 𝐉 can then be transformed into a sequence of feature indices, containing indices of features with an importance score greater than a threshold 𝐣_th. Through this transformation, we are able to apply the FP-Growth algorithm <cit.> to discover subsets of features that frequently have a high impact together. FP-Growth is an efficient algorithm for mining frequent item sets (i.e. items frequently appear in the same records). Here we treat a feature index as an item and one sequence as one record. The impact threshold 𝐣_th is a key parameter for filtering less important features. If it is too small, too many less important features may be included. If it is too large, there may be not enough features that satisfy the minimum frequency requirement. In principle, we prefer a threshold that can provide enough features and be as high as possible. 
To determine the threshold automatically, we suggest selecting it by the following equation: ∑_i=1^D 𝕀(∑_b=1^B∑_m=1^M𝕀(𝐉_b,m,i≥𝐣_th) ≥γ BM) = 1. This equation requires the threshold 𝐣_th to be sufficiently large so that only one feature can have an impact score larger than 𝐣_th in at least γ BM rows of the importance matrix 𝐉, where γ∈ (c_min/BM,1], and c_min is the minimum occurrence of a frequent set. For instance, when γ=1, it requires that the most frequent feature has impact scores larger than 𝐣_th for all samples, similar to the root node of a decision tree. We set γ = 0.99 in all of our experiments. When γ is smaller, 𝐣_th will be higher, and more features will be filtered out. Our implementation uses a stepwise scanning process to find the threshold. As illustrated in <Ref>, we initialise the threshold with a small value and increase it stepwise until only one feature has a frequency no less than γ BM (<ref>). It is clear that the number of features satisfying the γ BM frequency requirement decreases as the threshold increases. In this way, we ensure that the threshold is appropriate to include sufficiently representative features for rule extraction. The pseudo-code of feature selection is described in <ref>. In the FP-Growth algorithm, a frequent item set should have appeared at least c_min times among all records. Additionally, it is often required to include fewer than k_max items. Thus the FP-Growth algorithm takes c_min and k_max as two hyperparameters. c_min is determined by the least frequent item in a frequent item set, so we suggest a relatively small value for it, such as 10% of the training set size. The use of the threshold 𝐣_th automatically limits the number of items in a frequent item set, so we simply set k_max large enough to encompass the longest item set. When multiple frequent item sets are discovered, we choose the longest one, and if there are ties, we choose the most frequent one as the candidate feature set for rule extraction. §.§ AMORE After obtaining the frequently important feature set by using <ref>, the next step is to identify a subset of these features, with a length ≤ l_max, and determine the value ranges of those features that maximise the conditional probability in <Ref>. We can explicitly decompose the overall condition over the selected features, i.e. {x ∈𝕏}≜{x_f_1∈𝕏_f_1, …, x_f_l∈𝕏_f_l}, where {f_1,…,f_l} are the indices of the selected features and 𝕏_f_i (∀ 1 ≤ i ≤ l) denotes a value range of the f_i-th feature. According to the chain rule of conditional probability, we can then approximate the conditional probability of regional rule extraction as below: ℙ(y ∈𝕐|x_f_1∈𝕏_f_1, …, x_f_l∈𝕏_f_l) ∝∏_i=1^l ℙ(x_f_i∈𝕏_f_i|y∈𝕐, x ∈𝒳_l^i-1)/ℙ(x_f_i∈𝕏_f_i| x ∈𝒳_l^i-1), where {x ∈𝒳_l^i-1}≜{x_f_1∈𝕏_f_1,…, x_f_i-1∈𝕏_f_i-1}, ∀ 1 < i ≤ l, and 𝒳_l^0≜dom(x). Here 𝒳_l^i-1 denotes the condition composed of the previous i-1 features. We treat this decomposition as a sequence of operations. In the i-th operation, we select the value range of the f_i-th feature that maximises a probability ratio 𝐫_f_i, defined as follows: 𝐫_f_i≜ℙ(x_f_i∈𝕏_f_i|y∈𝕐, x ∈𝒳_l^i-1)/ℙ(x_f_i∈𝕏_f_i| x ∈𝒳_l^i-1). Through this decomposed objective for each feature, we can obtain candidate rules one by one for a given order of features. However, the result is not guaranteed to be globally optimal if we do not search all possible orders and value ranges of features.
As it is usually computationally prohibitive to find the exact global optimal solution, a feasible way is to select the feature f_i that has the maximum 𝐫_f_i among all remaining features in the frequently important feature set at the i-th step, which is analogous to adding branches in decision trees. This approach can achieve a global optimal solution when the features are independent of each other, as the order of features does not matter in such cases. Without any assumptions about the conditional distributions, we estimate the ratio 𝐫_f_i using training samples as below: 𝐫_f_i≈∑_n=1^N𝕀(x^(n)∈𝒳_l^i-1)/∑_n=1^N𝕀(x^(n)_f_i∈𝕏_f_i & x^(n)∈𝒳_l^i-1)× ∑_n=1^N𝕀(x^(n)_f_i∈𝕏_f_i & (x^(n),y^(n)) ∈ (𝒳_l^i-1,𝕐) )/∑_n=1^N𝕀((x^(n),y^(n)) ∈ (𝒳_l^i-1,𝕐)), One advantage of using this probability ratio as our stepwise objective is that we can directly discard candidate rules with a ratio not larger than 1 because it does not increase the overall conditional probability in <ref>. We provide detailed pseudo code of this process in <ref>. The input argument n_g is the number of grids for initialising value intervals of numerical features, which will be explained in <ref>. K is the maximum number of candidate rules that can be added at each step. We may obtain multiple rule sets when K > 1 to cope with multi-mode scenarios in data distributions. Note that there may be fewer than K rules added at each step as we discard rules with a ratio not larger than 1. We set K = 3 in all of our experiments. §.§.§ Generating rules for numerical features To generate candidate rules for categorical features, each category can be considered as a candidate for the designated rule. However, it is not trivial to generate candidate rules for numerical features. The question here is how to automatically identify intervals of numerical features that have high enough probability ratios and large enough support. We propose a histogram-based approach that initiates a value interval by a specified grid and iteratively expands the interval to its neighbour grids when certain criteria are met. This process begins with a binning step that divides the value range of a feature into a number of grids n_g. We offer a few options for the binning strategy in our implementation, including “uniform",“kmeans", and “quantile". The “uniform" strategy ensures all grids in each feature have identical widths; “kmeans" assigns values in each grid have the same nearest centre of a 1D k-means cluster; “quantile" results in grids with an equal number of points across each feature. We found that the “uniform" and “kmeans" strategies work better than “quantile" in all of our experiments. Intuitively, we should start with a peak grid that gives the highest ratio 𝐫_f_i and expand it when possible. Nonetheless, this might not be able to find the optimal interval if the ratios in its neighbour grids decrease rapidly. An example is demonstrated in <Ref>, in which the peak grid in the interval 𝕏_a is higher than the peak grid in the interval 𝕏_b, however, the overall ratio of the interval 𝕏_a is lower than the overall ratio of the interval 𝕏_b. It is also possible that there exist multiple intervals with similar ratios and we can not know which one is better before finishing the search for a whole rule set. Considering such multi-mode scenarios, we search intervals that start with every peak grid and choose candidate intervals with the top K ratios. 
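To illustrate the per-grid ratio on which this search is based, the sketch below estimates 𝐫 for a single numerical feature with “uniform" binning and marks candidate peak grids. For simplicity it ignores the conditions 𝒳 fixed by previously selected features; it is a simplified illustration, not the released implementation.

```python
import numpy as np

def ratio_histogram(feature, in_target, n_g=20):
    """Per-grid estimate of r = P(x in grid | target) / P(x in grid) for one feature."""
    edges = np.linspace(feature.min(), feature.max(), n_g + 1)   # "uniform" binning
    grid = np.clip(np.digitize(feature, edges) - 1, 0, n_g - 1)
    ratios = np.zeros(n_g)
    for g in range(n_g):
        in_grid = grid == g
        p_grid = in_grid.mean()                                          # P(x in grid)
        p_grid_target = in_grid[in_target].mean() if in_target.any() else 0.0
        ratios[g] = p_grid_target / p_grid if p_grid > 0 else 0.0
    return edges, ratios

def peak_grids(ratios):
    """Grids with ratio above 1 that are local maxima; starting points for intervals."""
    return [g for g, r in enumerate(ratios)
            if r > 1.0
            and (g == 0 or r >= ratios[g - 1])
            and (g == len(ratios) - 1 or r >= ratios[g + 1])]
```

Each candidate interval is then initialised at one of these peak grids and grown according to the criteria described next.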
We implemented a simple method based on first-order derivatives to find peaks of the ratio histogram, which can be replaced by more advanced methods. We only select peak grids with a ratio larger than 1. Here we choose the same K as in <Ref>. After initiating an interval by specifying a peak grid, we then expand it by the following criteria: * when the support of this interval is smaller than the minimum support s_min we expand the interval to its neighbour grids if possible; * when the support of this interval is not smaller than s_min, we only expand the interval to a neighbour grid when this neighbour grid has a higher ratio. This will guarantee that an interval with enough support only expands when its ratio gets increased (<Ref>). The pseudo-code of this rule generation process for numerical features are described in <ref>. <Ref> visualises this process with synthetic data in a 2-dimensional feature space. Additionally, we provide local rule extraction for a given sample (as demonstrated in <ref>), which forces the value interval of a feature to include the feature value of a given sample. If the feature intervals obtained by <ref> do not meet this requirement, we apply a local searching process. This can be directly achieved by starting with the grid that includes the given sample. The local rule extraction may not be able to obtain any valid rules (i.e. the ratios of any value intervals are not greater than 1) if the given sample is an outlier of the target region. Given two exclusive intervals of a feature: 𝕏_a, 𝕏_b, where 𝕏_a∩𝕏_b = ∅, and their corresponding ratios 𝐫_a, 𝐫_b, if 0 ≤𝐫_a < 𝐫_b, then the merged interval 𝕏_m = 𝕏_a∪𝕏_b has a ratio 𝐫_m that satisfies 𝐫_a < 𝐫_m < 𝐫_b. We first rewrite the density ratio as 𝐫_a = α_a/β_a, 𝐫_b = α_b/β_b, where α_a, α_b are non-negative integers, β_a,β_b are positive integers. According to <Ref>, then we have 𝐫_m = (α_a+α_b)/(β_a+β_b). So we can have: 𝐫_m - 𝐫_a = α_a+α_b/β_a+β_b - α_a/β_a = α_b - α_a β_b/β_a/β_a+β_b = α_b/β_b-α_a/β_a/β_a/β_b+1 > 0 Similarly, we can prove 𝐫_m - 𝐫_b < 0, which completes the proof. § DATA AVAILABILITY The Diabetes prediction dataset is available on: <https://www.kaggle.com/datasets/iammustafatz/diabetes-prediction-dataset>. The sepsis dataset is available on: <https://physionet.org/content/challenge-2019/1.0.0/>. The Tox21 challenge dataset is available on: <https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/tox21.csv.gz>. The MNIST dataset is available on: <http://yann.lecun.com/exdb/mnist/>. The brain tumour MRI dataset is available on: <https://www.kaggle.com/datasets/thomasdubail/brain-tumours-256x256/>. § CODE AVAILABILITY All source code is available on: <https://github.com/tmi-lab/AMORE>. The libraries and their versions and dependencies that are used in the code are also provided as a separate configuration file in JSON/YAML format. The code is available under the CC-BY-4.0 license. § ACKNOWLEDGEMENTS This study is funded by the UKRI Engineering and Physical Sciences Research Council (EPSRC) PROTECT Project (grant number: EP/W031892/1), and the UK Dementia Research Institute (UKDRI) Care Research and Technology Centre funded by the Medical Research Council (MRC), Alzheimer’s Research UK, Alzheimer’s Society (grant number: UKDRI-7002). Infrastructure support for this research was provided by the NIHR Imperial Biomedical Research Centre (BRC) and the UKRI Medical Research Council (MRC). The funders were not involved in the study design, data collection, data analysis or writing the manuscript. 
§ AUTHOR CONTRIBUTIONS Y.C. and P.B. conceptualised the study. Y.C. developed the method and drafted the manuscript. T.C. conducted molecular toxicity experiments and contributed to the writing. A.C. assisted with visualisation and open-sourcing the code. N.F. contributed to the writing. All authors reviewed the manuscript. § COMPETING INTERESTS The authors declare no competing interests. § SUPPLEMENTARY §.§ Summary of all tasks In this section, we provide details of the predictive model applied in each task (<ref>) and the selected hyperparameters of both rule extraction methods (<ref>). §.§ NCDE NCDE is a type of model that uses neural networks to simulate the dynamics of variables and ODE solvers to estimate the integrals of those dynamics. It is based on NODE <cit.>, which uses neural networks to approximate the dynamics of a latent state that has no explicit formulation. For instance: z(t_1) = z(t_0) + ∫_t_0^t_1 f_θ(z(t), t) dt. Here z(t) denotes the latent state at time t and f_θ(·) represents an arbitrary neural network. NCDE <cit.> uses a controlled differential equation, in which the dynamics of z(t) are assumed to be controlled or driven by an input signal x(t): z(t_1) = z(t_0) + ∫_t_0^t_1f̃_θ(z(t), t) dx(t) = z(t_0) + ∫_t_0^t_1f̃_θ(z(t), t) dx/dt dt, where f̃_θ: R^H→ R^H× D, H is the dimension of z, and D is the dimension of x. By treating f̃_θ(z(t), t) dx/dt as a single function g_θ(z(t), t, x), one can solve an NCDE by the regular methods for solving NODEs. The trajectory of the control signal x(t) in NCDE can be fitted independently of f̃_θ, which enables NCDE to deal with irregularly sampled time series and work with online streaming data. We cannot compute the integrated gradients of NCDE models readily by <Ref> because z ≠𝒢(x) in this case. Interestingly, we can obtain the Jacobian ∂ z/∂ x easily when i) observations are used as control signals and ii) x(t) is approximated by an invertible function. According to the chain rule: ∂ z/∂ x = ∂ z/∂ t dt/dx = ( f̃_θ(z(t), t) dx/dt) dt/dx = f̃_θ(z(t), t). This requires only a forward computation, instead of back-propagation, to obtain the Jacobian ∂ z/∂ x.
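As a concrete illustration of the normalised integrated-gradient importance defined in the Methods, the following PyTorch sketch approximates the integral with a Riemann sum for a generic differentiable model that maps a single feature vector to a scalar output. It is an illustrative sketch under those assumptions, not tied to the NCDE construction above.

```python
import torch

def ig_importance(model, x_base, x_test, steps=50):
    """Normalised integrated-gradient importance per feature (Riemann-sum approximation)."""
    diff = x_base - x_test                      # (x_i - x~_i) for every feature i
    grads = torch.zeros_like(x_base)
    for lam in torch.linspace(0.0, 1.0, steps):
        point = (x_test + lam * diff).detach().requires_grad_(True)
        out = model(point)                      # scalar y = G(point)
        g, = torch.autograd.grad(out, point)
        grads += g
    j = diff * grads / steps                    # integrated gradient j_i
    with torch.no_grad():
        shift = model(x_base) - model(x_test)   # total shift y - y~
    return (j / shift).abs()                    # importance |j_i / (y - y~)|
```

Stacking these importance vectors over pairs of baseline and test samples yields the importance matrix 𝐉 used for mining frequently important feature sets.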
http://arxiv.org/abs/2406.19221v1
20240627144721
Quantum-like product states constructed from classical networks
[ "Gregory D. Scholes", "Graziano Amati" ]
quant-ph
[ "quant-ph" ]
gscholes@princeton.edu Department of Chemistry, Princeton University, Princeton New Jersey 08544 U.S.A.=-1 § ABSTRACT The power of quantum states arises in part from superpositions of product states. Can complex classical systems be designed to exhibit such superpositions of tensor products of basis states, thereby mimicking quantum states? We exhibit a one-to-one map between the product basis of quantum states comprising an arbitrary number of qubits and the eigenstates of a construction comprising classical oscillator networks. Specifically, we prove the existence of this map based on Cartesian products of graphs, where the graphs depict the layout of oscillator networks. Quantum-like product states constructed from classical networks Graziano Amati July 1, 2024 =============================================================== Quantum states encode information differently from classical states, allowing certain processes to be carried out more effectively, that is, using less resources, than is possible by using classical states—a fact known as the quantum advantage. Underpinning the power of quantum states is their construction as superpositions of tensor products of the basis states<cit.>. However, it is a challenge to produce robust quantum states, especially incorporating many qubits, so as to leverage the quantum advantage. It is, moreover, difficult to imagine how a quantum advantage might be exploited in complex systems like large chemical or biological systems. The question we address in the present work is: Can complex classical systems be designed to exhibit superpositions of tensor products of basis states that mimic corresponding quantum states<cit.>? In recent work, we explored how special networks of oscillators, based on expander graphs, can be constructed so as to generate a discrete emergent state that represents a superposition of two states of coupled oscillator sub-networks<cit.>. We called that the `quantum-like' (QL) bit. The key advances were: (i) to demonstrate how the properties of the expander graph network enable the system to be remarkably resistant to decoherence and (ii) to exhibit a two-state system that serves as a QL bit. We must now show explicitly how to scale the QL states to accommodate an arbitrary number of QL bits. This is an important point because an advantage of quantum states is the special way they encode the basis as product states, thereby building correlations into the basis that enable the superposition states to have unique properties. Specifically, the quantum mechanical state space needed to encode all the information associated to a system with many qubits is built from the tensor product of the Hilbert spaces of each qubit in a system, ℋ = ℋ_1⊗ℋ_2⊗⋯⊗ℋ_n. One possible basis for describing the state of a quantum system can then be produced by setting the qubits in either the |0⟩ or |1⟩ state, to obtain 2^n unique product states in ℋ. This generates a basis of tensor products. The general states can comprise superpositions of product states—states of special interest because to date they have no classical counterpart. Here we provide an explicit construction of superpositions of product states of QL bits. We do this by producing new graphs formed from Cartesian products of an arbitrary number of QL bits. The emergent eigenstates of these product graphs are tensor products of the emergent states of the constituent QL bits. A QL-bit is constructed from a pair of interacting networks, or subgraphs. 
The model is abstract, but one way to make it concrete is to define the nodes of the network (vertices in the graph representation) to be classical oscillators and the links between nodes (edges in the graph) to be couplings between the oscillators. We thus represent the map of how the oscillators in a network are coupled to each other using graphs, and the properties of those graphs tell us about the properties of the corresponding network. A graph G(n,m), which we often write simply as G, comprises n vertices and a set of m edges that connect pairs of vertices. The size of a graph or subgraph, that is, the number of vertices, is written |G|. The spectrum of a graph G is defined as the spectrum (i.e. eigenvalues in the case of a finite graph) of its adjacency matrix A. For background see <cit.>. For the QL bits, we suggested that each of the subgraphs is an expander graph, that is, each has a special form of connectivity among the vertices that guarantees that the highest eigenvalue of the graph is associated with an emergent state—a state isolated from all other eigenstates of the graph<cit.>. We refer to the emergent state as the state of largest eigenvalue, consistent with standard work on spectra of graphs. However, in practice, if the sign of the coupling-edge entries in the adjacency matrix is set to negative values, then the emergent state is the least eigenvalue<cit.>. The spectral isolation of the emergent state protects it from decoherence<cit.>. Families of graphs with this property are known as expander graphs<cit.>. The d-regular graphs are prototypical expander graphs; here every vertex is adjacent to d other vertices in the graph: (d-regular graph) A graph G is d-regular if every vertex has degree (valency) d. That is, every vertex connects to d edges. Spectra of d-regular graphs on n vertices<cit.> comprise two distinct features: the emergent state with eigenvalue d and a set of `random states' with eigenvalues in the interval [-2√(d-1), 2√(d-1)]. For d-regular graphs, the gap between the first and second eigenvalues (λ_0 and λ_1 respectively) can be substantial, and is set by the Alon-Boppana bound<cit.>: (Alon-Boppana) Let G be a d-regular graph, then λ_1 ≤ 2√(d - 1) - o_n(1). To produce the QL-bit, define two d-regular graphs that are denoted basis graphs, labelled, for example, G_a1 and G_a2. These basis graphs are coupled to each other by randomly adding a small fraction of edges from vertices in G_a1 to G_a2. In the present paper we do this by randomly adding edges with probability p, set to 0.1 or 0.2. The resulting emergent states are the in- and out-of-phase linear combinations of the emergent states of the basis graphs. We write the emergent eigenstate of G_a1 as a_1 and that of G_a2 as a_2. Hence we write the emergent states of the QL bit as a_1 + a_2 and a_1 - a_2. In this notation, a_1 ± a_2 is not an algebraic sum. It denotes the coefficients of the eigenstates that are divided into two sets, one associated with subgraph G_a1, a_1, and one with subgraph G_a2, a_2. In prior work<cit.> we showed that QL bits can be connected pairwise to produce arbitrary new states, which we call QL states. Here we propose a general construction using products of QL bit graphs. We exhibit a one-to-one map between the tensor product basis of states of two-level systems (the basis described above used for systems of qubits) and the states generated by Cartesian products<cit.> of QL bits. (Cartesian product of graphs) G H is defined on the Cartesian product of vertex sets, V(G) × V(H).
Let u, v, …∈ V(G) and x, y, …∈ V(H). Let E(G) and E(H) be the sets of edges in G and H respectively. The edge set of the product graph G H is defined with respect to all edges in G and all edges in H as follows. We have an edge in G H when either [u,v] ∈ E(G) and x = y, or [x,y] ∈ E(H) and u = v. Some properties of the product might be useful. The Cartesian product of graphs G H is connected if and only if both G and H are connected. The Cartesian product of graphs is associative, i.e., for three graphs G, H and K, (G H) K ≅ G (H K), where ≅ denotes an equivalence after index relabeling. Thus, we have an explicit mapping from the usual product basis of the Bell states to the basis graphs G_a1 and G_a2 of QL bit A and G_b1 and G_b2 of QL bit B as follows: G_a1 G_b1 → |0⟩_A |0⟩_B G_a1 G_b2 → |0⟩_A |1⟩_B G_a2 G_b1 → |1⟩_A |0⟩_B G_a2 G_b2 → |1⟩_A |1⟩_B The map indicated by the arrow means that the graph eigenvector corresponding to the emergent state represents the product basis state. The map is made explicit by a projection onto a new basis that we now describe. The eigenstates of the graphs are determined with respect to the basis of the (many) vertices of the graph. In order to describe the usual quantum state space for interacting two-level systems, we define a map from the eigenvectors of each QL bit to an orthonormal basis of two-state systems in terms of the Hilbert space of each QL bit. The graph product states can thereby be related to the product states of a system of two-level entities. We have an orthonormal basis for the QL state { u_1, u_2, …, u_n, x_1, x_2, …, x_k }, where subgraph G_a1 with n vertices is associated with basis functions labelled u_1, u_2, …, u_n and subgraph G_a2 with k vertices is associated with basis functions labelled x_1, x_2, …, x_k. For the emergent eigenstate of the graph we define the vectors U_a1 = c_1u_1 + c_2u_2 + … + c_nu_n and X_a2 = d_1x_1 + d_2x_2 + … + d_kx_k, where c_i and d_i ∈ℝ are coefficients from the graph eigenvector. By construction, U_a1 and X_a2 are orthogonal. We define two orthogonal vectors, J_a1 = 1/√(n)(u_1 + u_2 + … + u_n) and J_a2 = 1/√(k)(x_1 + x_2 + … + x_k). J_a1 is the basis vector corresponding to the state |0⟩_A of the two-level representation of QL bit A, and J_a2 is the basis vector corresponding to the state |1⟩_A. Hence we can project G_A G_B → (U_a1 + X_a2) ⊗ (U_b1 + X_b2) to α_00 |0⟩_A ⊗ |0⟩_B + α_01 |0⟩_A ⊗ |1⟩_B + …, where the α_ij are coefficients, by evaluating α_00 = ⟨ U_a1, J_a1⟩⟨ U_b1, J_b1⟩, α_01 = ⟨ U_a1, J_a1⟩⟨ X_b2, J_b2⟩, etc. Here ⟨ , ⟩ means inner product. Using this method we can project any eigenstate of arbitrary graph products to an explicit tensor product state on the basis of G_a1→ |0⟩_A, G_a2→ |1⟩_A, G_b1→ |0⟩_B, etc. Now to prove Proposition 1, we simply need to define the spectrum of the Cartesian product (see, for instance <cit.>). (Spectrum of a Cartesian product of graphs) Given a graph G whose adjacency matrix A_G has eigenvalues λ_i and eigenvectors X_i, and a graph H whose adjacency matrix A_H has eigenvalues μ_j and eigenvectors Y_j, the spectrum of G H contains the eigenvalues λ_i + μ_j and the corresponding eigenvectors are X_i ⊗ Y_j. The validity of Proposition 1 is also evident from the structure of the adjacency matrix of the Cartesian product G H between two graphs G and H of size n and k, respectively, given by A_G H = A_G⊗ I_k + I_n⊗ A_H, where I_m is the identity matrix of size m.
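As a numerical sanity check of this statement, the following sketch (assuming NumPy and NetworkX are available; the small 3-regular graphs are stand-ins, not the graphs used here) builds the product adjacency matrix from two factor graphs and verifies that its eigenvalues are the pairwise sums of the factor eigenvalues.

```python
import numpy as np
import networkx as nx

# Two small d-regular graphs standing in for basis graphs of two QL bits.
G = nx.random_regular_graph(3, 8, seed=1)   # n = 8 vertices
H = nx.random_regular_graph(3, 8, seed=2)   # k = 8 vertices

A_G = nx.to_numpy_array(G)
A_H = nx.to_numpy_array(H)
I_n = np.eye(len(G))
I_k = np.eye(len(H))

# Adjacency of the Cartesian product as a sum of Kronecker products.
A_prod = np.kron(A_G, I_k) + np.kron(I_n, A_H)

# Eigenvalues of the product are pairwise sums of the factor eigenvalues,
# and the corresponding eigenvectors are Kronecker (tensor) products.
pairwise_sums = np.sort(np.add.outer(np.linalg.eigvalsh(A_G),
                                     np.linalg.eigvalsh(A_H)).ravel())
assert np.allclose(pairwise_sums, np.sort(np.linalg.eigvalsh(A_prod)))
```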
Arbitrary linear combinations of product states are produced when the basis graphs, for example G_a1 and G_a2, are coupled to produce the superposition states described above, G_a1 + a2, etc. For instance, we can produce the Bell states from graphs by suitable linear combinations of the following states: G_a1 + a2 G_b1 + b2 → |0⟩_A |0⟩_B + |0⟩_A |1⟩_B + |1⟩_A |0⟩_B + |1⟩_A |1⟩_B G_a1 - a2 G_b1 + b2 → |0⟩_A |0⟩_B + |0⟩_A |1⟩_B - |1⟩_A |0⟩_B - |1⟩_A |1⟩_B G_a1 + a2 G_b1 - b2 → |0⟩_A |0⟩_B - |0⟩_A |1⟩_B + |1⟩_A |0⟩_B - |1⟩_A |1⟩_B G_a1 - a2 G_b1 - b2 → |0⟩_A |0⟩_B - |0⟩_A |1⟩_B - |1⟩_A |0⟩_B + |1⟩_A |1⟩_B We begin by showing some examples of spectra of graph products, starting with a model cycle graph on five vertices (C_5). Spectra of products of the 5-cycle, C_5, are displayed in Figure <ref>a-c. The largest eigenvalue of C_5 is 2, and the gap to the second eigenvalue λ_0 - λ_1 = 1.38. Thus, for the products we find λ_0(C_5 C_5) = λ_0(C_5) + λ_0(C_5) = 4 and λ_1(C_5 C_5) = λ_0(C_5) + λ_1(C_5) = 1.38, and so on. Notice that the gap between the highest two eigenvalues, λ_0 - λ_1, remains constant as we take products. In Figure <ref>d we show how the product C_5 C_5 is produced explicitly. Let's label the graphs G and H. One of the graphs, say H (drawn in green), templates the product. For each vertex in the graph H we draw one copy of the the graph G (the blue graphs). The vertices are indexed by the index pair, one corresponding to the graph G and one index deriving from the vertex of H associated with each copy of G. We then use these second indices to draw edges between identical vertices of the copies of G templated according to the edges in H. One set are shown as the black edges. Finally we have the product graph, which can be drawn to display explicitly the graphs G, as shown in the figure, or we could draw a similar picture that clearly shows five copies of H. The graph product thereby gives a physical picture of the tensor product. We now show examples of d-regular random graphs, which are the basis for the QL bit subgraphs. A d-regular random graph on n vertices has a total of dn/2 edges, arranged such that each vertex connects to precisely d edges. We produce these graphs as described previously<cit.>. In these calculations d = 8 and the graph G(n,m) has n=12 vertices and m=44 edges. Each graph G(n,nd/2) is randomly generated and nd/2-m=4 edges are randomly deleted. We do not know precisely how the vertices are connected in each graph, highlighting that the building blocks for QL bits are weakly dependent on the precise details of the subgraphs. To further emphasize the resilience of these states to disorder<cit.>, we introduce `frequency disorder' by adding values to the diagonal elements of the adjacency matrix from a normal distribution. The spectrum plotted has a standard deviation of the distribution of σ = 2.0. In Figure <ref> we show an ensemble spectrum of a graph product of these d-regular random graphs, G_1 G_2 G_3. The three graphs G_1, G_2, G_3 differ by the random deletion of the four edges. We identify the set of random states, see labels in Figure <ref>, which are complemented by bands of `hybrid states' that comprise eigenvalue sums of random-states and one or two emergent-state eigenvalues. The emergent state is prominent, and in the product graph it remains separated from the second eigenvalue (a hybrid state) by the Alon-Boppana bound. In Figure <ref> we show spectra of representative QL states formed from products of QL bit states. 
The two QL bits are each set up so that QL bit A has the highest emergent state a_1 + a_2 and for QL bit B we have b_1 + b_2. Each subgraph of the QL bits (e.g. G_a1) comprises 20 vertices and is a d-regular random graph with d = 15. Edges connect the subgraphs with probability p, noted in the figure caption. This random addition of connecting edges introduces disorder in the spectrum. That is evident in the ensemble calculation for the spectrum of QL bit A, Figure <ref>a. This spectrum of one QL bit, composed of two coupled d-regular basis graphs, comprises two emergent states with eigenvalue d split or `repelled' by the coupling or magnitude Δ = n_c/n, where n_c is the number of coupling edges connecting the subgraphs<cit.>. In Figure <ref>b we show an example of one spectrum for the QL state, that is, the spectrum of the graph product. The emergent state with highest eigenvalue, labelled A, is associated with G_a1 + a2 G_b1 + b2 that projects to |0⟩_A |0⟩_B + |0⟩_A |1⟩_B + |1⟩_A |0⟩_B + |1⟩_A |1⟩. The other states are also contained in this spectrum: B labels the degenerate states from G_a1 - a2 G_b1 + b2 and G_a1 + a2 G_b1 - b2, while C labels G_a1 - a2 G_b1 - b2. By changing the signs of the edges in the graphs, we can associate any of these states with the highest emergent eigenvalue. The corresponding ensemble spectrum is shown in Figure <ref>c. These spectra display four emergent states. One state, A, has eigenvalue d_a + Δ_a + d_b + Δ_b, two states, B, have eigenvalues d_a - Δ_a + d_b + Δ_b and d_a + Δ_a + d_b - Δ_b, and one state, C, is located at d_a - Δ_a + d_b - Δ_b. We extend the numerical results to display spectra of graph products of three and four QL-bits, Figure <ref>d-f. The random states, hybrid states, and emergent states—defined above for the d-regular graph product—are noted. The spectrum of a Cartesian product of N QL bits can include all 2^N emergent eigenvalues, although it is likely that the lower eigenvalues will be hidden by the random state spectrum. Assuming for simplicity that all the QL bit basis graphs are d-regular and that each splitting is Δ then the greatest eigenvalue is Nd + NΔ. It is separated from the next highest eigenvalue by 2Δ, regardless of the value for N. The non-random part of the spectrum is centered at Nd and has a spectral width of [-NΔ, +NΔ]. In terms of physical scaling of the system of QL bits, the number of oscillators increases with the number of QL bits. For QL bits each comprising n total vertices, the total number of vertices in a product graph of N QL bits is n^N. Thus we have a polynomial scaling of the physical resources for a commensurate polynomial scaling of the state space. Owing to the design of the QL bits, it turns out that there is a more practical product graph construction that gives a physical scaling of n2^N and lifts to the full product graphs described here. Details will be described in future work. There are other practical considerations, but the key point is that the resources scale as a polynomial in the size of each QL bit, suggesting that QL states might provide a feasible resource for computation or function that is intermediate between the classical and quantum limits. Superpositions of product states are one key reason for the special properties of quantum systems, including entanglement and nonlocality. 
Here we have exhibited a one-to-one map between the product basis of quantum states, states in the state space comprising the tensor product of the Hilbert spaces of each qubit in a system, ℋ = ℋ_1⊗ℋ_2⊗⋯⊗ℋ_n, and the eigenstates of a construction comprising classical oscillator networks. Specifically, we have proved the existence of this map based on Cartesian products of graphs, where the graphs depict the layout of the oscillator networks. The complex structure of these graphs (networks) gives a physical basis from which to understand the power of the tensor product basis of quantum states. We have shown that a state space comprising superpositions of product states—the hallmark of a quantum states—can be generated, using polynomial resources, from classical systems. This raises an open question: Do these states have a `quantum-like' advantage for function intermediate between the limits defined by classical and quantum systems? This research was funded by the National Science Foundation under Grant No. 2211326 and the Gordon and Betty Moore Foundation through Grant GBMF7114.
http://arxiv.org/abs/2406.18142v1
20240626074704
Innovating for Tomorrow: The Convergence of SE and Green AI
[ "Luís Cruz", "Xavier Franch Gutierrez", "Silverio Martínez-Fernández" ]
cs.SE
[ "cs.SE", "cs.AI" ]
L.Cruz@tudelft.nl 0000-0002-1615-355X Delft University of Technology The Netherlands xavier.franch@upc.edu 0000-0001-9733-8830 Universitat Politècnica de Catalunya Barcelona Spain silverio.martinez@upc.edu 0000-0001-9928-133X Universitat Politècnica de Catalunya Barcelona Spain § ABSTRACT The latest advancements in machine learning, specifically in foundation models, are revolutionizing the frontiers of existing software engineering (SE) processes. This is a bi-directional phenomenon, where 1) software systems are now challenged to provide AI-enabled features to their users, and 2) AI is used to automate tasks within the software development lifecycle. In an era where sustainability is a pressing societal concern, our community needs to adopt a long-term plan enabling a conscious transformation that aligns with environmental sustainability values. In this paper, we reflect on the impact of adopting environmentally friendly practices to create AI-enabled software systems and consider the environmental impact of using foundation models for software development. Innovating for Tomorrow: The Convergence of SE and Green AI Silverio Martínez-Fernández July 1, 2024 =========================================================== § INTRODUCTION Software-related CO2 emissions from the ICT sector currently account for 2.1%–3.9% of global emissions <cit.>. In today's context of widespread AI adoption, industry leaders and AI experts have acknowledged that these emissions will increase further. OpenAI chief executive Sam Altman warned that the next wave of generative AI systems will consume vastly more power than expected <cit.>. Hugging Face Climate Lead Sasha Luccioni has shown the scalability impact of AI systems' inference: “While inference on a single example requires much less computation than that required to train the same model, inference happens far more frequently than model training — as many as billions of times a day for a model powering a popular user-facing product such as Google Translate” <cit.>. Focusing only on AI systems, in a middle-ground scenario, by 2027 AI servers could use between 85 and 134 terawatt hours (TWh) annually <cit.>. That is similar to the annual electricity usage of countries such as Argentina, the Netherlands and Sweden individually, and constitutes approximately 0.5% of the world's current electricity consumption <cit.>. The convergence of Software Engineering (SE) and AI is a bi-directional phenomenon raising key sustainability concerns. On the one hand, software systems in diverse domains are now challenged to provide AI-enabled features to their users. To reduce the emissions of these AI systems across their development and training, their usage and inference, and their retirement, emerging AI software development lifecycles shall incorporate energy-aware capabilities.
On the other hand, the potential of using AI and foundation models to automate tasks within the software development lifecycle is very promising, with dedicated events such as AIware[<https://2024.aiwareconf.org/> (visited on April 5, 2024)] and LLM4Code [<https://llm4code.github.io/> (visited on April 5, 2024)]. To fully embrace these technologies, the concerns regarding the emitted emissions from its usage shall be understood. Therefore, the objective of this position paper is twofold: * Identifying the trans-disciplinary dimensions and dichotomies in which the research from the SE community shall contribute to build greener AI systems, as well as reasoning on the evolution of SE practices in such dimensions and dichotomies. * Discussing the environmental sustainability of one application domain of AI systems: generative AI for SE tasks like generation of requirements, architecture, or code in which humans and intelligent agents jointly create software. § OVERVIEW OF GREEN AI We define Green AI as a trans-disciplinary field that aims to make AI systems environmentally sustainable. Environmental sustainability of software (including AI systems) refers to engineering systems having minimal impact in our planet throughout its whole lifecycle <cit.>. We distinguish this from AI for Sustainability or AI for Green, where AI is used to make different domains more environmentally friendly – e.g., using AI to make agriculture more sustainable. We argue that it is important to make a clear separation between Green AI and AI for Green as they require different approaches and different scientific backgrounds. Given the complexity of AI systems, Green AI needs to be tackled from different angles, having each of these angles an important contribution to the overall carbon emissions of the systems. Data, AI models, and the code of the software systems are the foundations of AI systems <cit.>. As such, to build and maintain sustainable AI systems, we shall focus in each of these three aspects. As depicted in Figure <ref>, we divide Green AI across three major dimensions: data-centric, model-centric, and system-centric. These dimensions are pinpointed below in <Ref>. The figure also intersects Green AI with two dichotomies: hardware vs. software and reporting & monitoring vs. best practices. Hardware ↔ Software With the current trend of specializing hardware for particular AI tasks, practitioners can no longer be agnostic of the hardware they use to run their software. The choice of hardware is essential to ensure that a particular AI pipeline runs properly with no issues downstream. Hence, practitioners now face the challenge of having to be hardware experts. Moreover, previous research demonstrates how different hardware configurations can significantly impact the energy consumption of training deep-learning models <cit.>. However, these differences are not obvious and require meticulous trade-off analysis, which is sub-optimal in real-world scenarios. Hardware design for AI poses more concerns. Since AI chips are specialized to particular AI tasks, updates to AI models might lead to different hardware requirements. This can significantly reduce the lifetime of hardware, posing challenges to the emerging “Right to repair” legislation efforts being discussed by the EU, the US, the United Nations and so on. 
Not surprising, the waste created by disposing hardware devices – coined as e-waste – is already a pressing environmental problem, as reported by the technology activist Murzyn[Murzyn is on a mission to report the hazards from handling electronic waste, a responsibility that has been delegated to Global South countries: <https://murzyn.asia/en/mission-00-24-42/> (visited on April 5, 2024)]. Reporting & Monitoring ↔ Best Practices It is important that software systems are designed in a way that it is possible to report and monitor sustainability indicators <cit.>). Practitioners need to provide accurate information about the carbon footprint of their experiments and models. This can be done e.g. via meta-data in repositories such as Hugging Face or using more elaborated and structured templates such as the concept of ID-card <cit.>. On the other side of the spectrum, we have best practices <cit.>. They provide practitioners with strategies that can be adopted to reduce the environmental footprint of their software. These two poles complement each other. Effective best practices can only be established when reliable measurement mechanisms are in place. Monitoring and reporting not only incentivize practitioners to adopt these best practices but also provide essential feedback on their implementation and effectiveness. §.§ Data-centric Green AI Data-centric Green AI revolves around preparing data in a way that is expected to reduce the overall energy footprint of AI systems without hindering their performance. Previous work has addressed this from multiple ways. Reducing data dimensionality, using feature selection or stratified random sampling already leads to linear gains in energy consumption <cit.>. Further strategies are being explored. Active learning & coreset extraction have promising potential by identifying the most valuable examples that contribute to the learning process and excluding redundant or non-informative ones with confidence <cit.>. Knowledge transfer/sharing consists of using knowledge rather than solely relying on data to train a model. For instance, existing pre-trained models can be used to train new models. Preliminary research showcases improvements in energy consumption by a factor of 15 with this technique <cit.>. Dataset distillation or dataset condensation involves synthesizing a smaller dataset derived from the original dataset, aiming to train a model on this reduced set while achieving test accuracy comparable to that of a model trained on the original dataset. This approach differs from coreset extraction by focusing on synthesizing informative samples rather than selecting existing ones. Data distillation has been used to reduce the size of the MNIST image set <cit.> into just 10 synthetic distilled images (one per class) and achieve close to original performance <cit.>. Curriculum learning consists of presenting training examples to the model in a specific order, starting with simpler examples and gradually increasing the difficulty of the examples as training progresses. This strategy provides the model with a structured learning experience that mimics how humans learn and requires less iterations to converge, with time reductions of 70% <cit.>. §.§ Model-centric Green AI Model-centric Green AI means developing experimental research to improve the AI model performance and energy efficiency. The target is to build and optimize AI models that can achieve similar outcomes while requiring fewer resources. 
A notable example is the BLOOM model, whose carbon footprint has been studied during the training and inference stages while it competes with other Large Language Models (LLMs) <cit.>. A more recent example is small language models (SLMs), like “Phi”, which has shown performance comparable to models 5x larger <cit.>. Indeed, there is evidence that there is not always a trade-off between green and performance metrics. A repository mining study on 1,417 models from Hugging Face could not find a correlation between model performance and carbon emissions of ML models <cit.>. This shows the potential for lightweight AI model architectures <cit.>. Another model-centric strategy is the compression of existing models (aka AI model optimization): structured and unstructured pruning, quantization and binarization, and efficient training and inference of models under different resource constraints. In this regard, PyTorch and TensorFlow already offer AI model optimization libraries <cit.>. §.§ System-centric Green AI System-centric Green AI refers to the decisions we can take regarding the software architecture and serving of ML systems to make them environmentally sustainable. The ecological footprint of an AI model extends beyond its algorithmic design to encompass the entire system infrastructure that supports it. For instance, suboptimal hardware choices can significantly inflate energy consumption <cit.>, while implementing batched inference can enhance energy efficiency <cit.>. It is essential to recognize the continuum from cloud to edge computing in considering energy footprints <cit.>. This continuum encompasses end-user devices (edge), cloud servers, and the network infrastructure that connects them. Each component plays a role in determining the overall environmental impact of AI systems, highlighting the need for holistic considerations in system design and deployment for sustainability. We anticipate that sustainability challenges will initially be tackled with a system-centric approach, given the overlap between AI systems and traditional software systems. However, we argue that emerging challenges lie in the data-centric and model-centric domains of AI, where SE plays a critical role <cit.>. The significance of data quality has already been acknowledged for its impact on the reliability, robustness, efficiency, and trustworthiness of modern software systems <cit.>. From a model-centric perspective, there is an unexplored potential to employ SE to challenge the status quo, for example by adopting energy-conscious practices for model adaptation <cit.>, hyperparameter tuning <cit.>, and so on. § REFLECTIONS ON THE FUTURE OF SE When reflecting on the future of SE, the emergence of Green AI introduces several challenges that warrant attention. In this section, we delve into these potential challenges, providing context and outlining their implications for SE. 1. Consideration of the business case. Context. Not all problems are alike. Diverse business cases require different positioning with respect to environmental sustainability. An image processing model for cancer diagnosis shall prioritize precision no matter the carbon footprint required for training.
On the contrary, a film affinity recommender system will hardly be considered life-critical, therefore little gains of accuracy at the cost of dramatic increments of energy consumption is not justified; instead, digital sobriety[<https://www.vox.com/climate/2024/3/28/24111721/ai-uses-a-lot-of-energy-experts-expect-it-to-double-in-just-a-few-years> (visited on April 5, 2024)] should start to be the norm, and not the exception. Implications for SE. Business case elaboration and analysis lies at the heart of the requirements engineering area <cit.>. Its application to AI systems, often denoted by RE4AI, is currently a mainstream in SE, yielding to a number of actionable findings, e.g., definition of new scopes for requirements <cit.> or new quality requirement types <cit.>. However, as Habibullah et al. remark, there is a research gap in establishing trade-offs among quality requirements <cit.>. Understanding and specifying these trade-offs would help to translate the business case into quantifiable requirements, associated metrics, and optimization functions allowing for a proper validation of the AI system. 2. Consolidation of fundamental concepts. Context. Design of AI systems is still a young research area. As a consequence, fundamental concepts are still not fully understood and lead to inconsistent or even wrong terminology. For instance, we may find papers using the terms `energy efficiency' with the meaning of `energy consumption', or defining metrics with an incorrect measurement unit. This situation hampers communication and interdisciplinary collaboration <cit.>. Implications for SE. Although there are several excellent works that focus on establishing a consolidated terminology for environmental sustainability <cit.>, the truth is that they have not made their way in the SE community. Observing what happens in other areas, a standard playing the same role as for instance ISO/IEC 25010 in the field of software quality <cit.> could be a significant asset towards this consolidation. Eventually, such a standard could be the upcoming ISO/IEC 20226 on AI environmental sustainability[<https://www.iec.ch/blog/importance-sustainable-ai> (visited on April 5, 2024)]. 3. Monitoring sustainability. Context. Collecting energy data is not a trivial task. Even after collecting it, practitioners have to process and analyze energy data and tracing it back to the AI pipelines, to hotspots and make necessary adjustments if possible. To make matters worse, improving sustainability is often a trade-off problem: we want to reduce energy consumption without hindering other requirements of the system. For example, improving energy efficiency at the cost of privacy is likely unacceptable, depending on the use case. On another note, the impacts of AI systems on the environment can hardly be simplified by looking at energy consumption alone. For the same energy consumption of executing a AI software, the carbon footprint can vary depending on the time, region, and the type electricity used to power the servers at that particular. Besides, carbon footprint, we also have the problematic of water footprint – i.e., the water being evaporated due to the software execution; mostly due to cooling down servers. Moreover, embodied carbon footprint also plays an important role. GPU usage needs to be optimized to make sure hardware is not stalling idle most of the time during their lifetime <cit.>. Implications to SE. 
There is an open challenge of developing user-friendly energy monitoring tools that seamlessly integrate with various environments, including Edge AI devices, and virtual ones like Docker containers. These tools should not only enable detailed analysis but also offer a comprehensive overview of the energy consumption associated with software products. Additionally, they need to accommodate diverse resource metrics and provide actionable insights to facilitate environmentally-conscious decision-making throughout the development and maintenance phases of these systems <cit.>. 4. Clarification of roles' involvement. Context. The emergence of AI demands knowledge and skills that go beyond those traditionally required in classical SE. The consideration of new roles and the interactions among them have been subject of investigation. Specially, the fit of data scientists in the SE team has been thoroughly investigated <cit.>. In the context of Green AI, knowledge on environmental sustainability is a must for the optimal consideration of carbon footprint in system design. However, experts on sustainability are hardly involved in the development of AI systems Implications for SE. Same as sometimes software engineers criticize data scientists for building code not adhering to SE principles, software engineers may be criticized by addressing environmental sustainability without having profound expertise in the team. The role of environmental sustainability expert should be explicitly recognized in the Green AI development team, same as data scientist or domain expert are. This role can be played either by real sustainability professionals, or by software engineers who have acquired along time the necessary knowledge and skills. 5. Changes in the ML lifecycle. Context. There has been ongoing efforts to reshape the development lifecycle of modern software systems to cover the inclusion of AI-enabled features. However, along this push to redefine the development lifecycle of AI-enabled software, environmental sustainability is often overlooked <cit.>. Implications. It is quintessential to adding sustainabilty concerns across the whole lifecycle, from the very early stages until the retirement of software and/or respective AI models. There is an unexplored potential to employ SE to challenge the status quo. For example, model retraining is still the default approach for model adaptation, leaving out other strategies that may be more cost-effective and sustainable <cit.>. Moreover, adhering to the three R's principle—reduce, reuse, and recycle (in that specific order)—is essential in fostering sustainability practices within AI systems <cit.>. We need frameworks that help practitioners 1) opt for competitive alternatives to AI when available (reduce), 2) consume existing models instead of training their own (reuse), 3) monitor model degradation and consider model adaptation strategies (recycle) before disposing a model and training a new one <cit.>. 6. Quest for open science. Context. Nowadays, the SE community is fully aware of the importance of open science as a basic principle supporting transparency and replication <cit.>. Initiatives at all levels (journals, conferences, national research programs, ...) push the general adoption of open science by researchers. In the AI field, it has become customary to give access to (preprint of) papers, data, software and models in public repositories such as arxiv, Zenodo, GitHub and Hugging Face. 
However, there is a lack of clear guidelines on what type of information to store in these repositories. For instance, in the case of Green AI, Castaño et al. report that carbon-emission-related information is only marginally reported in the most popular model repository, Hugging Face <cit.>. This means that, in spite of the community's willingness to go open, it is still difficult to replicate experiments or to conduct long-term cohort studies. Implications for SE. The research community should produce consolidated and agreed guidelines that allow software engineers to be systematic and rigorous in documenting the sustainability dimension of their experiments. Two different actions along this direction are needed: (i) determining what information related to environmental sustainability needs to be described, and (ii) creating domain-specific description sheets that can be used to consolidate such information. An example in this direction is the ID-card proposed by Abualhaija et al. in the NLP domain <cit.>. Another example focusing on environmental sustainability is GAISSALabel, which gathers a repository of training and inference emission footprints of ML models <cit.>. Alternatively, the information can be encoded as metadata in the repositories, although this usually results in less comprehensive descriptions.
7. The role of education. Context. Equally important is the imperative to educate the next generation of AI developers in sustainable AI literacy. Green SE is still a niche topic <cit.>, with very few available materials to support educators. Moreover, the rapidly evolving landscape of AI technologies presents challenges in adapting AI and Computer Science curricula to incorporate Green AI practices. In fact, many Green AI practices are very recent and experimental <cit.>. Implications for SE. SE education must explicitly incorporate topics on responsible AI and, more specifically, the environmental considerations of AI systems. Educational materials should be openly accessible to empower aspiring software engineers to prioritize sustainability within their organizations.
8. Construction of theories. Context. There is a large corpus of research works that report experiments of various kinds with an environmental sustainability dimension. In fact, the original set of 98 studies considered by Verdecchia et al. in their 2023 paper <cit.> is falling short today, after only a couple of years. While every individual experiment may be valuable per se, their results remain mainly disaggregated and are difficult to integrate into a holistic body of knowledge, i.e., a theory <cit.>. Implications for SE. Software engineers should gradually adopt what Stol and Fitzgerald call `a theory-focused research approach' <cit.>. Building theories is important not only because of the outcome: the theory construction process also brings the need for critical thinking to researchers and makes them aware of the need to report their experiments better, so that theories can be built on a solid basis. A critical point is the consideration of context <cit.>, determined by the independent and confounding variables of the experiment. Modeling context in the theory may facilitate, for instance, integrating the source of energy into the calculation of carbon emissions.
§ GREEN FOUNDATION MODELS FOR SE The usage of large language code models is becoming more and more popular.
It has been reported recently by Microsoft that, amongst GitHub Copilot users, 40% of the code they commit is `AI-generated and unmodified'[Scott Guthrie, Executive of the Cloud and AI group at Microsoft, as of April 5, 2024: <https://www.microsoft.com/en-us/Investor/events/FY-2023/Morgan-Stanley-TMT-Conference>]. Foundation models open the stage for developing software systems without developers writing a single line of code; whole software systems may soon be generated from natural language prompts. The wide adoption of foundation models in coding activities raises an important question about their impact on the environmental sustainability of SE. On the one hand, a reduction in the number of developers involved can lead to a smaller carbon footprint inherited from having teams working together. On the other hand, there is a concerning energy overhead from continuously prompting code generation models until they converge to a version of the software that can go to production. Furthermore, it is not clear whether generated code will abide by energy-efficient coding practices. State-of-the-art code generation models are purely statistical: we argue that the most frequent coding patterns are not necessarily the most energy efficient. Foundation models ought to provide transparency in terms of sustainability indicators at different stages of the software lifecycle. Such indicators should paint a clear picture of the ecological footprint of training those models, of using them, and also of using their outputs. Strategies should be adopted to make these models more likely to learn energy-efficient coding practices than regular ones.
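Such per-stage indicators can already be approximated with lightweight trackers. Below is a minimal sketch, not taken from any of the cited works, of how the energy and carbon cost of repeatedly prompting a code-generation model could be logged per prompt; it assumes the open-source codecarbon package with its documented EmissionsTracker start/stop interface, and generate_code() is a hypothetical placeholder for whatever model or API is actually being measured.

from codecarbon import EmissionsTracker  # assumed available via `pip install codecarbon`

def generate_code(prompt: str) -> str:
    # Hypothetical stand-in for a real code-generation model call (local LLM or remote API).
    return "def add(a, b):\n    return a + b\n"

def measured_generation(prompts):
    results = []
    for i, prompt in enumerate(prompts):
        tracker = EmissionsTracker(project_name=f"codegen-prompt-{i}")
        tracker.start()
        try:
            results.append(generate_code(prompt))
        finally:
            emissions_kg = tracker.stop()  # estimated kg CO2-eq attributed to this prompt
        print(f"prompt {i}: ~{emissions_kg:.6f} kg CO2-eq")
    return results

if __name__ == "__main__":
    measured_generation(["write an add function", "refactor it with type annotations"])

Logging at prompt granularity is what would allow the overhead of iterating towards a production-ready version, discussed above, to be reported alongside functional outcomes.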
http://arxiv.org/abs/2406.17988v1
20240626000829
DICE: End-to-end Deformation Capture of Hand-Face Interactions from a Single Image
[ "Qingxuan Wu", "Zhiyang Dou", "Sirui Xu", "Soshi Shimada", "Chen Wang", "Zhengming Yu", "Yuan Liu", "Cheng Lin", "Zeyu Cao", "Taku Komura", "Vladislav Golyanik", "Christian Theobalt", "Wenping Wang", "Lingjie Liu" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT Reconstructing 3D hand-face interactions with deformations from a single image is a challenging yet crucial task with broad applications in AR, VR, and gaming. The challenges stem from self-occlusions during single-view hand-face interactions, diverse spatial relationships between hands and face, complex deformations, and the ambiguity of the single-view setting. The first and only method for hand-face interaction recovery, Decaf <cit.>, introduces a global fitting optimization guided by contact and deformation estimation networks trained on studio-collected data with 3D annotations. However, Decaf suffers from a time-consuming optimization process and limited generalization capability due to its reliance on 3D annotations of hand-face interaction data. To address these issues, we present DICE, the first end-to-end method for Deformation-aware hand-face Interaction reCovEry from a single image. DICE estimates the poses of hands and faces, contacts, and deformations simultaneously using a Transformer-based architecture. It features disentangling the regression of local deformation fields and global mesh vertex locations into two network branches, enhancing deformation and contact estimation for precise and robust hand-face mesh recovery. To improve generalizability, we propose a weakly-supervised training approach that augments the training set using in-the-wild images without 3D ground-truth annotations, employing the depths of 2D keypoints estimated by off-the-shelf models and adversarial priors of poses for supervision. Our experiments demonstrate that DICE achieves state-of-the-art performance on a standard benchmark and in-the-wild data in terms of accuracy and physical plausibility. Additionally, our method operates at an interactive rate (20 fps) on an Nvidia 4090 GPU, whereas Decaf requires more than 15 seconds for a single image. Our code will be publicly available upon publication.
§ INTRODUCTION Hand-face interaction is a common behavior observed up to 800 times per day across all ages and genders <cit.>. Therefore, faithfully recovering hand-face interactions with plausible deformations is an important task given its wide applications in AR/VR <cit.>, character animation <cit.>, and human behavior analysis <cit.>. Given the speed requirement of downstream applications like AR/VR, fast and accurate 3D reconstruction of hand-face interactions is highly desirable. However, several challenges make monocular hand-face deformation and interaction recovery a cumbersome task: 1) the self-occlusion involved in hand-face interaction, 2) the diversity of hand and face poses, contacts, and deformations, and 3) ambiguity in the single-view setting. Most existing methods <cit.> reconstruct only the hand <cit.> and face <cit.> meshes, or treat them as parts of a unified whole body <cit.>, without capturing contacts and deformations. A seminal advance, Decaf <cit.>, recovers hand-face interactions with deformations and contacts taken into account.
However, it requires time-consuming optimization, which takes more than 15 seconds per image, rendering it unsuitable for interactive applications. The iterative fitting process of Decaf relies on an accurate estimation of hand and face keypoints and contacts on the hand and face surfaces, which could fail when significant occlusion is present in the image; See Fig. <ref> in the Appendix <ref>. Additionally, Decaf cannot scale up their training to fruitful hand-face interaction data in the wild, as they require 3D ground-truth annotations, i.e., contact labels and deformations. To tackle the issues above, we present , the first end-to-end approach for Deformation-aware hand-face Interaction reCovEry from a monocular image. We use a Transformer-based model with the attention mechanism to effectively capture hand-face relationships. Motivated by the global nature of the pose and shape of hand and face and the local nature of the deformation field and contact probabilities, we further propose disentangling the regression of deformation from the pose and shape of hand and face represented by mesh vertex positions into two network branches, which enhances the estimation of deformations and contacts while resulting in accurate and robust hand and face mesh recovery. Instead of directly regressing hand and face parameters, we learn an intermediate non-parametric mesh representation. We then use this representation to regress the pose and shape parameters of hand and face with a neural inverse-kinematics network. Compared with directly regressing the pose and shape parameters which learns the abstract parameters is a highly non-linear process and suffers from image-model misalignment, predicting vertex positions in Euclidean space and then applying inverse-kinematics enhances the reconstruction accuracy <cit.>. Consequently, our model achieves higher reconstruction accuracy than all previous regression and optimization-based methods. It also reaps the benefits of an animatable parametric hand and face representation that could be readily used by downstream applications. Meanwhile, despite containing rich annotations, the existing benchmark dataset <cit.> collected in a studio is still limited in the diversity of hand motions, facial expressions, and appearances. Training a model only on such a dataset limits its generalization capability when applied to in-the-wild data. To achieve robust and generalizable hand-face interaction and deformation recovery, we introduce a weak-supervision training pipeline that utilizes in-the-wild images without the reliance on 3D annotations. In addition to the 2D keypoint supervision for in-the-wild images, we propose a novel depth supervision pipeline. This pipeline leverages the robust depth prior from a diffusion-based monocular depth estimation model <cit.>, which provides essential geometric information for accurate mesh recovery and captures spatial relationships critical for contact state and deformation estimation. To improve our model's robustness, we further employ pose priors of the hand and face by introducing hand and face parameter discriminators that learn rich hand and face motion priors from multiple datasets on hand or face separately <cit.>. By incorporating a small set of real-world images alongside the Decaf dataset and leveraging our weak-supervision pipeline, we markedly enhance the accuracy and generalization capacity of our model. 
As a result, our method achieves superior performance in terms of accuracy, physical plausibility, inference speed, and generalizability. It surpasses all previous methods in accuracy on both standard benchmarks and challenging in-the-wild images. Fig. <ref> visualizes some results of our method. We conduct extensive experiments to validate our method. In summary, our contribution is three-fold: * We propose , the first end-to-end learning-based approach that accurately recovers hand-face interactions and deformations from a single image. * We propose a novel weak-supervised training scheme with depth supervision on keypoints to augment the Decaf data distribution with a diverse real-world data distribution, significantly improving the generalization ability. * achieves superior reconstruction quality compared to baseline methods while running at an interactive rate (20 fps). § RELATED WORK Extensive efforts have been made to recover meshes from monocular images, including human bodies <cit.>, hands <cit.>, and faces <cit.>. This also includes recovering the surrounding environments <cit.> and interacting objects <cit.> while reconstructing the mesh. The acquired versatile behaviors play a crucial role in various applications, including motion generation <cit.>, augmented reality (AR), virtual reality (VR), and human behavior analysis <cit.>. In the following, we mainly review the related works on hand, face and full-body mesh recovery. 3D Interacting Hands Recovery. Recent advancements have markedly enhanced the capture and recovery of 3D hand interactions. Early studies have achieved reconstruction of 3D hand-hand interactions utilizing a fitting framework, employing resources such as RGBD sequences <cit.>, hand segmentation maps <cit.>, and dense matching maps <cit.>. The introduction of large-scale datasets for interacting hands <cit.> has motivated the development of regression-based approaches. Notably, these include regressing 3D interacting hand directly from monocular RGB images <cit.>. Additionally, research has extended to recovering interactions between hands and various objects in the environment, including rigid <cit.>, articulated <cit.>, and deformable <cit.> objects. Following <cit.>, our work distinguishes itself by introducing hand interactions with a deformable face, characterized by its non-uniform stiffness—a significant difference from conventional deformable models. This innovation presents unique challenges in accurately modeling interactions. 3D Human Face Recovery. Research in human face recovery encompasses both optimization-based <cit.> and regression-based <cit.> methodologies. Beyond mere geometry reconstruction, recent approaches have evolved to incorporate training networks with the integration of differentiable renderers <cit.>. These methods estimate variables such as lighting, albedo, and normals to generate facial images and compare them with the monocular input. However, a significant limitation in much of the existing literature is the neglect of the face's deformable nature and hand-face interactions. Decaf <cit.> represents a pivotal development in this area, attempting to model the complex mimicry of musculature and the underlying skull anatomy through optimization techniques. In contrast, our work introduces a regression-based, end-to-end method for efficient problem-solving, setting a new benchmark in the field. 3D Full-Body Recovery. 
The task of monocular human pose and shape estimation involves reconstructing a 3D human body from a single-color image. Optimization-based approaches <cit.> employ the SMPL model <cit.>, fitting it to 2D keypoints detected within the image. Conversely, regression-based methods <cit.> leverage deep neural networks to directly infer the pose and shape parameters of the SMPL model. Hybrid methods <cit.> integrate both optimization and regression techniques, enhancing 3D model supervision. Distinct from these approaches, we follow parametric methods <cit.> due to its flexibility for animation purposes. Unlike most research in this domain, which primarily concentrates on the main body with only rough estimations of hands and face, our methodology uniquely accounts for detailed interactions between these components. § METHOD Problem Formulation. Following Decaf <cit.>, we adopt the FLAME <cit.> and MANO <cit.> parametric models for hand and face. Given a single RGB image 𝐈∈^224 × 224 × 3, the objective of this task is to reconstruct the vertices of hand mesh 𝐕_H ∈ ^778 × 3 and face mesh 𝐕_F ∈ ^5023 × 3, along with capturing the face deformation vectors 𝐃∈ ^5023 × 3 caused by hand-face interaction and its non-rigid nature, and per-vertex contact probabilities of hand 𝐂_H ∈ ^778 and face 𝐂_F∈ ^5023. §.§ Transformer-based Hand-Face Interaction Recovery Our model incorporates a two-branch Transformer architecture and integrates inverse-kinematic models, specifically MeshNet, InteractionNet, and IKNets. A differentiable renderer <cit.> is used to compute depth maps from the predicted mesh for depth supervision, and the hand and face discriminators are used as priors for constraining the hand and face poses; See Fig. <ref> for an overview. Given a monocular RGB image 𝐈, we use a pretrained HRNet-W64 <cit.> backbone to extract a feature map X_I∈^H × W × C, where H, W are the spatial dimension and C is the channel dimension. Following <cit.>, we flatten the image feature maps and upsample the H × W feature maps to N feature maps, one for each keypoint and coarse vertex of both hand and face. We then concatenate the 𝐅' ∈^N × C-dimensional feature maps with a N × 3-dimensional downsampled hand and face vertices and keypoints of the mean pose, as mesh vertices and joints queries for the transformer, resulting in a feature map 𝐅∈^N × (C+3). The mesh vertices and joints also serve as the positional encoding. To effectively model the interaction between the vertices, we mask the image feature maps corresponding to a random subset of vertices. We propose to use two separate branches, MeshNet and InteractionNet, splitting the regression of mesh vertices and a deformation field for their semantic difference: the mesh positions are more global while the deformation vectors and contact states are relatively local. Specifically, the network is followed by two progressively downsampling transformers: MeshNet, which takes the feature map 𝐅 as input and regresses the rough vertex positions of hand 𝐕_H' and face, 𝐕_F'; and InteractionNet, which first downsamples the feature map 𝐅, then uses it to predict the 3D deformation field 𝐃 at each face vertex along with the contact labels for each hand and face vertices, 𝐂_H and 𝐂_F. Note the contacts and deformations are regressed in the same encoder to model their close relationship: the contacts cause the deformations. We validate our design in Sec. <ref>. 
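To make the two-branch split concrete, the following is a simplified PyTorch sketch, not the authors' code: shared per-vertex and per-keypoint tokens feed a MeshNet-like encoder that regresses rough 3D positions, and an InteractionNet-like encoder that predicts the deformation field and contact probabilities. Feature widths, layer counts, keypoint counts, and the token ordering are illustrative assumptions; the actual model uses HRNet-W64 features and progressively downsampling encoders as described above.

import torch
import torch.nn as nn

# Coarse vertex counts follow the appendix (195 hand / 559 face); keypoint counts (21 / 68)
# and the feature width C are assumptions for illustration, not the paper's exact values.
N_FACE_V, N_HAND_V, N_FACE_K, N_HAND_K = 559, 195, 68, 21
N = N_FACE_V + N_HAND_V + N_FACE_K + N_HAND_K
C = 128

def encoder(dim, layers=2, heads=4):
    layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=layers)

class TwoBranchSketch(nn.Module):
    # MeshNet-like branch: global vertex/keypoint positions.
    # InteractionNet-like branch: local deformation field + per-vertex contact probabilities.
    def __init__(self):
        super().__init__()
        self.in_proj = nn.Linear(C + 3, 128)   # image feature + 3D template query -> token
        self.mesh_net = encoder(128)
        self.mesh_head = nn.Linear(128, 3)     # rough 3D position per query
        self.inter_proj = nn.Linear(128, 64)   # local branch works on a downsampled width
        self.inter_net = encoder(64)
        self.deform_head = nn.Linear(64, 3)    # per-face-vertex deformation vector
        self.contact_head = nn.Linear(64, 1)   # per-vertex contact logit

    def forward(self, img_feats, template_xyz):
        # img_feats: (B, N, C) per-query image features; template_xyz: (B, N, 3) mean-pose queries.
        tokens = self.in_proj(torch.cat([img_feats, template_xyz], dim=-1))
        rough_xyz = self.mesh_head(self.mesh_net(tokens))              # (B, N, 3)
        local = self.inter_net(self.inter_proj(tokens))                # (B, N, 64)
        deform = self.deform_head(local[:, :N_FACE_V])                 # assume face-vertex tokens come first
        contact = torch.sigmoid(self.contact_head(local)).squeeze(-1)  # (B, N)
        return rough_xyz, deform, contact

model = TwoBranchSketch()
rough_xyz, deform, contact = model(torch.randn(2, N, C), torch.randn(2, N, 3))

Keeping the global regression and the local interaction heads on separate encoders mirrors the design rationale above: contacts and deformations are local quantities, whereas vertex locations are global.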
Next, instead of directly regressing the hand and face vertices, we regress the pose and shape of the parametric hand and face models, making the output readily animatable for downstream applications. This is achieved by a neural inverse kinematics model, named IKNet, similar to Kolotouros et al. <cit.>. Our IKNet takes roughly estimated hand and face mesh vertices 𝐕_H' and 𝐕_F' as inputs and predict hand and face pose, shape and expression parameters (θ_h, β_h), (θ_f-pose, β_f, θ_f-exp), along with the position and orientation for hand <cit.> and face <cit.>, respectively. We use the predicted parameters to first obtain the hand mesh and undeformed face mesh 𝐕_H, 𝐕_F^*. Then, we apply the deformation 𝐃 predicted by the InteractionNet on 𝐕_F^* to get the final deformed face 𝐕_F. Regressing parameters offers several advantages: first, it enables readily animatable meshes; second, compared to non-parametric regression methods, where meshes typically contain artifacts such as spikes <cit.>, the mesh quality is significantly improved; third, the compact parameter space facilitates a more effective discriminator, which will be discussed in the following section. §.§ Weakly-Supervised Training Scheme Although the aforementioned benchmark, Decaf <cit.>, accurately captures hand, face, self-contact, and deformations, it consists of eight subjects and is recorded in a controlled environment with green screens. Training a model only with Decaf limits its generalization capability to in-the-wild images that have far more complex and diverse human identities, hand poses, and face poses. To further enhance the generalization capability of our model, we train our model with 500 diverse in-the-wild images of hand-face interaction collected from the internet without the reliance on the 3D ground truth annotations. First, we use 2D hand-and-face keypoints detected by <cit.> and <cit.>. Then, we propose to use Marigold <cit.>, a diffusion-based monocular depth estimator pre-trained on a large number of images to generate 2D affine-invariant depth maps for supervision in the direction of depth (see Eq. <ref>). The depth supervision provides a strong depth prior, which guides the spatial relationship between hand and face meshes, promoting accurate modeling of hand-face interaction. For supervision, we use a differentiable rasterizer <cit.> to compute a depth map from the predicted hand and face meshes and supervise the network using a depth loss calculated between the depth values of hand and face keypoints and corresponding points on the predicted depth map. The introduced weak-supervision pipeline significantly enhances our model's generalization capability and robustness, which we investigate in Sec. <ref>. In our experiment, we found that when training the model with only a small dataset of 500 images, we could significantly improve the model's accuracy and generalization capability. Moreover, we train adversarial priors on the hand and face parameter space on multiple hand and face pose datasets: the face-only RenderMe-360 <cit.>, the hand-only FreiHand <cit.>, and Decaf <cit.>. This ensures the plausibility of generated face and hand poses and shapes while allowing for flexible poses and shapes beyond the Decaf data distribution to handle in-the-wild cases. 
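As a rough illustration of the weak supervision, and not the authors' implementation, the sketch below combines the 2D reprojection residual with a scale-insensitive log-variance comparison between rendered keypoint depths and the pseudo-ground-truth depths from an off-the-shelf affine-invariant estimator. The exact loss used in the paper is spelled out in the Loss Functions subsection that follows; the keypoint count and the weighting here mirror only what the text states (a depth weight of 2.5).

import torch

def affine_invariant_depth_loss(pred_kp_depth, pseudo_kp_depth, eps=1e-7):
    # The variance of the log-depth difference is unchanged by a global scale on either
    # depth, so a relative (affine-invariant) depth estimate can still supervise the prediction.
    diff = torch.log(pred_kp_depth + eps) - torch.log(pseudo_kp_depth + eps)
    return diff.var().sqrt()

def weak_supervision_loss(pred_kp_2d, det_kp_2d, pred_kp_depth, pseudo_kp_depth, w_depth=2.5):
    reproj = (pred_kp_2d - det_kp_2d).abs().mean()   # 2D keypoints from an off-the-shelf detector
    depth = affine_invariant_depth_loss(pred_kp_depth, pseudo_kp_depth)
    return reproj + w_depth * depth

# toy usage with an arbitrary number of hand+face keypoints
loss = weak_supervision_loss(torch.rand(89, 2), torch.rand(89, 2),
                             torch.rand(89) + 0.5, torch.rand(89) + 0.5)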
§.§ Loss Functions Mesh losses ℒ_𝓂ℯ𝓈𝒽: For richly annotated data in Decaf <cit.>, we employ L_1 loss for 3D keypoints, 3D vertices, and 2D reprojected keypoints against their respective ground-truths, following common practice in human- and hand-mesh recovery <cit.>. We further apply a L_1 loss ℒ_params on the estimated hand and face pose, shape, and facial expression against the ground-truth parameters. For in-the-wild data, only the 2D reprojected keypoints are supervised, as this is the only type with corresponding ground truth. Interaction losses ℒ_𝒾𝓃𝓉ℯ𝓇𝒶𝒸𝓉𝒾ℴ𝓃: For data in Decaf <cit.>, we impose Chamfer Distance losses to enforce touch for predicted contact vertices and discourage collision. We also have a binary cross-entropy loss to supervise contact labels and a deform loss with adaptive weighting to supervise deform vectors, similar to <cit.>. For in-the-wild data, we also impose touch and collision losses since they do not require annotations. Adversarial loss ℒ_adv are applied to the predicted hand and face parameters for in-the-wild data to constrain their parameter space, and for decaf data to facilitate the training of the discriminators. The adversarial loss is given by: ℒ_adv(E)= 𝔼_θ_F∼ p_E[log(1-D_F(E(I)))] + 𝔼_θ_H∼ p_E[log(1-D_H(E(I)))]. The losses for the hand and face discriminators are given by: ℒ_adv(D_F)= -(𝔼_θ_F∼ p_E[log(1-D_F(E(I)))] + 𝔼_θ_F∼ p_data[log(D_F(θ))]) and ℒ_adv(D_H)=-(𝔼_θ_H∼ p_E[log(1-D_H(E(I)))] + 𝔼_θ_H∼ p_data[log(D_H(θ))]), where E jointly denotes the image backbone, the mesh encoder and the parameter regressor, θ_F = concat(θ_face-shape, θ_face-jaw, θ_face-exp), θ_H = concat(θ_hand-shape, θ_hand-pose). Depth loss ℒ_depth: To provide pseudo-3D hand and face keypoints supervision for in-the-wild data, we use a modified SILog Loss <cit.>, an affine-invariant depth loss as our depth supervision ℒ_depth. Formally, let K̂_D denote the pseudo-ground-truth affine-invariant depth of the face and hand keypoints, and K_D denote the rendered depth for the keypoints, ℒ_depth = [𝐕𝐚𝐫(log(K_D + ε) - log(K̂_D + ε)) ]^1/2, where 𝐕𝐚𝐫 is the standard variance operator and ε = 10^-7. Overall, our loss for the mesh and interaction networks is formulated by ℒ = λ_meshℒ_𝓂ℯ𝓈𝒽 + λ_interactionℒ_𝒾𝓃𝓉ℯ𝓇𝒶𝒸𝓉𝒾ℴ𝓃 + λ_advℒ_𝒶𝒹𝓋 + λ_depthℒ_𝒹ℯ𝓅𝓉𝒽, where λ_mesh = 12.5, λ_interaction = 5, λ_depth = 2.5, λ_adv = 1 for all the experiments in the paper; See more details in Appendix <ref>. § EXPERIMENTAL RESULTS §.§ Datasets and Metrics Datasets We employ Decaf <cit.> for reconstructing 3D face and hand interactions with deformations, along with the in-the-wild dataset we collected with 500 images. We use the hand and face shape, pose, and expression data from Decaf <cit.>, RenderMe-360 <cit.>, and FreiHand <cit.> for training the adversarial priors. We use the training set of the aforementioned datasets for network training. We utilize the Decaf test set for quantitative evaluation, and additionally, we visualize in-the-wild images from the test set for qualitative evaluation. Metrics. We adopt commonly-used metrics for mesh recovery accuracy following <cit.>: ∙ Mean Per-Joint Position Error (MPJPE): the average Euclidean distance between predicted keypoints and ground-truth keypoints. ∙ PAMPJPE: MPJPE after Procrustes Analysis (PA) alignment. ∙ Per Vertex Error: per vertex error (PVE) with translation. Following Decaf <cit.>, we use the plausibility metrics mentioned below: ∙ Collision Distance (Col. 
Dist.): the average collision distances over vertices and frames; ∙ Non-Collision Ratio (Non. Col.): the proportion of frames without hand-face collisions; ∙ Touchness Ratio: the ratio of hand-face contacts among ground truth contacting frames; ∙ F-Score: the harmonic mean of Non-Collision Ratio and Touchness Ratio. §.§ Implementation Details We train the MeshNet, InteractionNet, and IKNet along with the face and hand discriminators with three identical AdamW optimizers with a learning rate of 6 × 10^-4, and a learning rate decay of 1 × 10^-4, optimizing in an alternating manner. Our batch size is set to 16 during the training stage. Each training task takes 40 epochs using 48 hours. The model is trained and evaluated on 8 Nvidia A6000 GPUs with an AMD 128-core CPU. For a fair comparison, all baseline models used the settings in their original papers. Inference times are calculated on a single Nvidia A6000 GPU. §.§ Performance on Hand-Face Interaction and Deformation Recovery In addition to baselines considered in Decaf <cit.>, we compare our method with a representative work in human body/hand mesh recovery, METRO, an end-to-end transformer-based model. For a fair comparison, we compare our method with a modified version of METRO <cit.> for predicting hand and face meshes, with extra output heads added to predict contact and deformation. §.§.§ Quantitative Evaluations Reconstruction Accuracy In Tab. <ref>, our method surpasses all baseline methods in terms of reconstruction accuracy, achieving a 7.5% reduction in per-vertex error compared to the current state-of-the-art, Decaf. Note that our method is regression-based and allows inference at an interactive rate, while Decaf <cit.> uses a cumbersome test-time optimization process, taking more than 700x more time per image. Decaf also requires using temporal information in successive frames, while our method only uses a single frame. Our method shows a 30% reduction in reconstruction error compared to the modified METRO baseline, and up to 79% reduction compared to other end-to-end baselines. Plausibility In addition to high accuracy, our method achieves the highest overall physical plausibility (F-Score) among all regression-based methods. Note that Touchness and Non-Collision ratio are complement to each other and are meaningless when considered individually, while F-Score measures the two values as a whole. Our method has a much lower interpenetration distance (Col. Dist.) compared to Benchmark and PIXIE (hand+face), which consider hand and face separately, therefore generating implausible interactions. Note that PIXIE (whole body) and METRO* show lower collision distances with a much lower Touchness than our method, indicating that the reconstructed hands and faces often appear incorrectly as if they are not interacting. On the other hand, our method shows low collision distance with a high Touchness, indicating plausible hand-face interaction reconstruction. Contact Estimation In Tab. <ref>, achieves superior contact estimation performance on Decaf dataset, surpassing previous work <cit.> in F-Score for both face and hand contacts. Here, the F-score provides a comprehensive measure of both the precision and the recall ratio combined. These two metrics are complementary and less meaningful when only considered individually; See Fig. <ref> for qualitative results. §.§.§ Qualitative Evaluations As discussed in Sec. 
<ref>, the Decaf <cit.> dataset is collected in an indoor environment with a green screen, which doesn't reflect the complex environment where real-world hand-face interactions occur. Therefore, a model only trained with the Decaf dataset might have generalization issues when tested on in-the-wild data. Fig. <ref> confirms this result by our model's superior generalization performance on in-the-wild data with unseen identity and pose. As shown in Fig. <ref>, our method faithfully reconstructs hand-face interaction and deformation and accurately labels the area of contact. §.§ Ablation Study In-the-wild data As shown in Tab. <ref>, adding weak-supervision training and in-the-wild data for training improves all reconstruction error metrics (PVE*, MPJPE, PAMPJPE) while maintaining a high plausibility (F-Score). This is because the limited pose and identity distribution of the Decaf training dataset may cause the model to overfit, and the inclusion of in-the-wild images out of the Decaf data distribution effectively improves the generalization capability of . Depth Supervision Although depth supervision is only applied to in-the-wild data, as shown in Tab. <ref>, removing it also significantly affects performance on the Decaf validation set. Without depth loss, wrong predictions in depth are not penalized for in-the-wild data, introducing noise in the training process, and resulting in erroneous depth predictions in the Decaf dataset. As shown in Appendix Fig. <ref>, the absence of depth supervision introduces ambiguity in the z-direction, resulting in artifacts such as self-collision. Adversarial Prior The adversarial prior incorporates diverse but realistic pose and shape distribution beyond Decaf <cit.>, ensuring the reality of regressed mesh while allowing for generalization. As shown in Tab. <ref>, introducing adversarial supervision improves the accuracy and physical plausibility. Parameter Supervision Supervising parameters directly, in addition to the indirect supervision of parameters by the mesh losses, improves both plausibility and accuracy. This is because direct parameter supervision eliminates ambiguity, without which the network may resort to other parameter combinations that produce incorrect meshes similar in terms of Euclidean distance. Intermediate Supervision Removing the constraint that 𝐕_F', 𝐕_H' being rough meshes and treating them only as feature maps results in a substantial drop in accuracy and a slight drop in plausibility (F-Score). Also, note that there is a sharp increase in collision distance, which is attributed to the spatial inaccuracy of the final output mesh. This indicates that using an intermediate mesh feature instead of an ordinary feature map increases the spatial accuracy of the output meshes, which also benefits plausibility. Network Design In Tab. <ref>, adopting the two-branch architecture, which separates deformation and interaction estimation from mesh vertices regression, improves both accuracy and plausibility. §.§ Limitations and Future Works While our method achieves SotA accuracy on the Decaf <cit.> dataset and generalizes well to unseen scenes and in-the-wild cases, we still have failure cases when the hand-pose interactions are extremely challenging and have severe occlusions (see Appendix <ref>). Moreover, while our method effectively recovers hand and face meshes with visually plausible deformations, there remains room for improvement in deformation accuracy and physical plausibility. 
In the future, physics-based simulation <cit.> can be used as a stronger prior, producing more physically accurate estimations. In this paper, although we found using 500 in-the-wild images significantly improves the model's generalization ability, scaling up to a larger amount of in-the-wild data, on the order of millions or billions, would further enhance performance, which we will study in future work. § CONCLUSION In this work, we present , the first end-to-end approach for reconstructing 3D hand and face interaction with deformation from monocular images. Our approach features a two-branch transformer structure, MeshNet and InteractionNet, to model local deform field and global mesh geometry. An inverse-kinematic model, IKNet, is used to output the animatable parametric hand and face meshes. We also proposed a novel weak-supervision training pipeline, using a small amount of in-the-wild images and supervising with a depth prior and an adversarial loss to provide pose priors. Benefitting from our network design and training scheme, demonstrates state-of-the-art accuracy and plausibility, compared with all previous methods. Meanwhile, our method achieves a fast inference speed (20 fps), allowing for more downstream interactive applications. In addition to strong performance on the standard benchmark, also achieves superior generalization performance on in-the-wild data. splncs04 § IMPLEMENTATION DETAILS §.§ CNN Backbone The CNN backbone used in our framework is an HRNet-W64 <cit.>, initialized with ImageNet-pretrained weights. The weights of the backbone would be updated during training. We extract a (49 × H)-dim feature map from this network and upsamples it to a (N × H)-dim feature map, where N = N_h_k + N_f_k + N_h_v + N_h_v, the total number of head and hand keypoints N_h_k, N_f_k and vertices N_h_v, N_f_v. Then, we concatenate the keypoints and the vertices corresponding to the head and hand mean pose as keypoints and vertex queries, resulting in a ((N+3) × H)-dim feature map. Random masking of keypoints and vertex queries of rate 30% is applied, following <cit.>. §.§ MeshNet and InteractionNet Our MeshNet and InteractionNet have similar progressive downsampling transformer encoder structures, see Fig. <ref> for an illustration. The MeshNet has three component transformer encoders with decreasing feature dimensions. The InteractionNet starts with a fully connected layer that downsamples the feature dimension, followed by two transformer encoders. Each transformer encoder has a Multi-Head Attention module consisting of 4 layers and 4 attention heads. In addition to head and hand mesh features, MeshNet also regresses head and hand keypoints, which are only for supervision and not used by any downstream components. §.§ IKNet Our IKNets take in rough mesh features 𝐕_F', 𝐕_H' and output the pose and shape parameters (θ, β), as well as the global rotation and translation (R, T). They feature a Multi-Layer Perceptron (MLP) structure, each consisting of five MLP Blocks and a final fully connected layer. Each MLP Block contains a fully connected layer, followed by a batch normalization layer <cit.> and a ReLU activation layer. There are two skip-connections, connecting the output of the first block with the input of the third block, and the output of the third block with the input of the final fully connected layer. See Fig. <ref> for an illustration. The hand and head IKNets have the same structure, differing only in their input and output dimensions. 
The hidden dimensions of the two IKNets are 1024. §.§ Training and Testing Details To be consistent with the training setting of Decaf[Confirmed by the authors of Decaf] <cit.>, in the Decaf dataset, we use all eight camera views and the subjects S2, S4, S5, S7, and S8 in the training data split for training. For testing, we use only the front view (view 108) and the subjects S1, S3, and S6 in the testing data split. The low, mid, and high-resolution head mesh consists of 559, 1675, and 5023 vertices, respectively. The low and high-resolution hand mesh consists of 195 and 778 vertices, respectively. We use the middle-resolution head mesh and the high-resolution hand mesh as the inputs of head and hand IKNets. § MORE QUALITATIVE COMPARISONS We demonstrate qualitatively the effect of the absence of the depth loss in Fig. <ref>. When trained without depth loss, the network is only supervised with 2D information on in-the-wild data, without any constraints in the z-direction. As a result, artifacts such as self-penetration frequently occur in this case. The introduction of depth loss eliminates this ambiguity, allowing the correct relative positioning of hand and face. § ADDITION DETAILS ON LOSSES Here, we provide the details of the mesh losses and the interaction losses. The details of the adversarial loss and the depth loss are already mentioned in the main paper. §.§ Mesh losses The mesh loss ℒ_mesh consists of four components. ℒ_mesh = ℒ_reproj + 4ℒ_vert + 2ℒ_key + 2ℒ_params. Vertices Loss L_1 loss is used for predicted rough 3D face and hand vertices 𝐕_f', 𝐕_h', FLAME-regressed undeformed 3D face vertices 𝐕_f^* and MANO-regressed 3D hand vertices 𝐕_h against the ground-truth 3D undeformed face vertices 𝐕̂_f and 3D hand vertices 𝐕̂_h. ℒ_vert = λ_h (μ_nonpara𝐕_h' - 𝐕̂_h_1 + 𝐕_h - 𝐕̂_h_1) + λ_f (μ_nonpara𝐕_f' - 𝐕̂_h_1 + 𝐕_f^* - 𝐕̂_f_1), where λ_h, λ_f are empirically set to 3 and 1 respectively. μ_nonpara is set to 4 to emphasize the supervision on the more complex non-parametric mesh features. Keypoints Loss We use L_1 loss for predicted rough 3D face and hand keypoints 𝐊_f', 𝐊_h', 3D face and hand keypoints extracted from rough mesh 𝐊_f_mesh, 𝐊_h_mesh, FLAME-regressed 3D face keypoints 𝐊_f and MANO-regressed 3D hand keypoints 𝐊_h against the ground-truth 3D undeformed face keypoints 𝐊̂_f and 3D hand keypoints 𝐊̂_f. ℒ_key = μ_nonpara(𝐊_h' - 𝐊̂_h_1 + 𝐊_h_mesh - 𝐊̂_h_1 + 𝐊_f' - 𝐊̂_f_1 + 𝐊_f_mesh - 𝐊̂_f_1 ) + 𝐊_f - 𝐊̂_f_1 + 𝐊_h - 𝐊̂_h_1, where μ_nonpara is empirically set to 4, to put more weight on the non-parametric mesh with high degrees of freedom. Reprojection loss L_1 loss is used for reprojected rough 3D face and hand keypoints 𝐊_f', 𝐊_h', 3D face and hand keypoints extracted from rough mesh 𝐊_f_mesh, 𝐊_h_mesh, FLAME-regressed 3D face keypoints 𝐊̂_f and MANO-regressed 3D hand keypoints 𝐊̂_h against the ground-truth face and hand 2D keypoints 𝐊̂_f_2D, 𝐊̂_h_2D. ℒ_reproj = λ_h(Π(𝐊_h') - 𝐊̂_h_2D_1 + Π(𝐊_h_mesh) - 𝐊̂_h_2D_1 + Π(𝐊_h) - 𝐊̂_h_2D_1) + λ_f(Π(𝐊_f') - 𝐊̂_f_2D_1 + Π(𝐊_f_mesh) - 𝐊̂_f_2D_1 + Π(𝐊_f) - 𝐊̂_f_2D_1), where Π is the learned camera projection function. λ_h, λ_f are set to 4 and 1 respectively. Parameter loss We apply L_1 loss on the regressed hand and face pose, shape, and facial expression parameters against their respective ground truths. 
ℒ_face-params = (β_f - β̂_f_1 + θ_f-exp - θ̂_f-exp_1 + θ_f-pose - θ̂_f-pose_1 ) / 3 ℒ_hand-params = (β_h - β̂_h_1 + θ_h - θ̂_h_1) / 2 ℒ_params = ℒ_face-params + ℒ_hand-params §.§ Interaction losses The interaction loss ℒ_interaction consists of four components: ℒ_interaction = 0.2 ℒ_touch + 0.6ℒ_contact + ℒ_collision + 6ℒ_deform. Deformation loss Due to the human anatomy, some vertices on the face are more easily deformed than other vertices. Therefore, we impose an adaptive weighting on each vertex and use square loss to penalize large deformation. We also have a regularization term to penalize extremely large deformations. ℒ_deform = ∑_i ∈ℐ (1 + μd̂_̂î_2)d̂_̂î - d_i_2^2 + λ∑_i ∈ℒd_i, where ℐ is the set of indices of face vertices, d_i, d̂_̂î are the predicted and ground truth deformation vector for index i, and ℒ = {i ∈ℐ: d_i_2 > 3cm} the vertices of large deformations. μ and λ are empirically set to be 5000, 100 respectively. Touch loss Let 𝐕_F_C and 𝐕_H_C denote the set of face and hand vertices that are predicted by the model to have contact probability greater than 0.5. ℒ_touch = CD(𝐕_F_C, 𝐕_H_C) + CD(𝐕_H_C, 𝐕_F_C), where CD(X, Y) gives the mean Chamfer Distance (CD) between each point in X to the closest point in Y. Collision loss Let 𝐕_H_Col denote the set of hand vertices that penetrates the face surface, 𝐕_F and 𝐃_F denote the predicted face mesh vertices and deformations. ℒ_collision = CD(𝐕_H_Col, 𝐕_F - 𝐃_F). Contact loss Let 𝐂_H and 𝐂_F denote the predicted hand and face contact probabilities and Ĉ_H, Ĉ_F denote the ground-truth contact labels. ℒ_contact = BCE(𝐂_H, Ĉ_H) + BCE(𝐂_F, Ĉ_F), where BCE denotes the binary cross-entropy loss. § MORE DISCUSSIONS §.§ Performance under Challenging Occlusion. As seen in Fig. <ref>, our end-to-end method is robust under challenging self-occlusion cases, such as the hand covering more than half of the face. On the other hand, Decaf <cit.>, which requires an initial keypoint prediction for test-time optimization, performs poorly in this situation. §.§ Failure Cases In Fig. <ref>, we demonstrate the failure cases of our method. As shown in Fig. <ref> (a), when there is a complex interaction between the hand and face, such as the presence of a cleaning sponge, there is a drop in the reconstruction accuracy of the hand mesh recovery. Also, as in Fig. <ref> (b), When the face completely occludes the hand, a highly challenging scenario unseen in the training data, our model could not faithfully reconstruct the hand position.
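As a companion to the interaction terms defined above, here is a minimal PyTorch sketch of the one-way Chamfer distance, the touch loss on predicted contact vertices, and the BCE contact loss. It is illustrative rather than the released code, and it assumes vertex sets small enough for a dense pairwise distance matrix.

import torch

def chamfer_one_way(x, y):
    # Mean distance from each point in x (Nx, 3) to its nearest neighbour in y (Ny, 3).
    d = torch.cdist(x, y)
    return d.min(dim=1).values.mean()

def touch_loss(face_v, hand_v, face_contact_p, hand_contact_p, thr=0.5):
    # Encourage predicted contact vertices on the face and hand to actually meet.
    vf = face_v[face_contact_p > thr]
    vh = hand_v[hand_contact_p > thr]
    if len(vf) == 0 or len(vh) == 0:
        return face_v.new_zeros(())
    return chamfer_one_way(vf, vh) + chamfer_one_way(vh, vf)

def contact_loss(pred_p, gt_labels):
    return torch.nn.functional.binary_cross_entropy(pred_p, gt_labels)

# toy usage with the stated mesh sizes (5023 face / 778 hand vertices)
face_v, hand_v = torch.rand(5023, 3), torch.rand(778, 3)
face_p, hand_p = torch.rand(5023), torch.rand(778)
l = touch_loss(face_v, hand_v, face_p, hand_p) + contact_loss(face_p, torch.round(torch.rand(5023)))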
http://arxiv.org/abs/2406.18229v1
20240626102541
Advancing Robotic Surgery: Affordable Kinesthetic and Tactile Feedback Solutions for Endotrainers
[ "Bharath Rajiv Nair", "Aravinthkumar T.", "B. Vinod" ]
cs.RO
[ "cs.RO" ]
§ ABSTRACT The proliferation of robot-assisted minimally invasive surgery highlights the need for advanced training tools such as cost-effective robotic endotrainers. Current surgical robots often lack haptic feedback, which is crucial for providing surgeons with a real-time sense of touch. This absence can impact the surgeon's ability to perform delicate operations effectively. To enhance surgical training and address this deficiency, we have integrated a cost-effective haptic feedback system into a robotic endotrainer. This system incorporates both kinesthetic (force) and tactile feedback, improving the fidelity of surgical simulations and enabling more precise control during operations. Our system incorporates an innovative, cost-effective Force/Torque sensor utilizing optoelectronic technology, specifically designed to accurately detect forces and moments exerted on surgical tools with a 95% accuracy, providing essential kinesthetic feedback. Additionally, we implemented a tactile feedback mechanism that informs the surgeon of the gripping forces between the tool's tip and the tissue. This dual feedback system enhances the fidelity of training simulations and the execution of robotic surgeries, promoting broader adoption and safer practices.
§ INTRODUCTION Surgical robots are increasingly utilized for minimally invasive surgical procedures, allowing surgeons to operate through small incisions using specialized instruments and an endoscopic camera. These robots offer enhanced dexterity, greater precision, and reliability, while also minimizing the impact of hand tremors caused by surgeon fatigue. The benefits of robot-assisted surgery include reduced pain, fewer complications, quicker recovery times, and minimal scarring. During manual minimally invasive surgery, surgeons use all their senses, including their sense of touch, to operate surgical tools. However, most of the surgical robots available today are teleoperated, with physically disconnected master and slave sides. This in turn does not provide any haptic feedback to the surgeon. Surgeons control the master side of the robot to perform the surgery relying completely on visual feedback from 3D cameras <cit.>. Although visual feedback provides helpful information for performing robotic surgery, the absence of haptic feedback is an obvious disadvantage in terms of safety and ease of use <cit.>. Haptic feedback can be categorized into two types: kinesthetic (force) feedback and cutaneous (tactile) feedback <cit.>. Kinesthetic feedback provides the surgeon with a feel of the force acting on the surgical tool, whereas tactile feedback provides a sense of the gripping forces and textures. Many studies have been conducted that led to the development of kinesthetic feedback systems for robot-assisted minimally invasive surgery. Implementing kinesthetic feedback requires precise measurement of the forces acting on the surgical tool at the patient's side. The six-axis force/torque sensor developed by the German Aerospace Center (DLR) is probably one of the most advanced <cit.>. It utilises a Stewart platform for force measurements. Another popular and widely used Force/Torque sensor is the ATI Nano sensor (ATI Industrial Automation, Apex, NC, USA).
These sensors have outstanding sensing capabilities. However, these Force/Torque sensors are very expensive. These sensors also come in predefined shapes and sizes, which makes it difficult to integrate the sensor for particular applications. Hence, to overcome these limitations, a 3-axis Force/Torque sensor based on optoelectronic technology <cit.> was developed. The light intensity based measurement using optoelectronic sensors has many advantages including high resolution, low power consumption, lesser noise and low cost. This sensor was designed keeping in mind the need to effectively integrate it with the surgical tool. It is also extremely important for the surgeon to know the gripping force between the tool tip and the tissues <cit.>. Such information can be provided with the help of tactile feedback. Extensive research has been conducted on developing technologies for sensing forces <cit.> and providing tactile feedback across various applications <cit.>. Tactile sensors and tactile feedback systems have been designed to provide surgeons with information about the material characteristics of the tissues that the surgical tool is in contact with <cit.>. Hence, a simple tactile feedback system was designed which provides vibration feedback that varies with gripping force between the surgical tool tip and tissues. Incorporation of tactile feedback in the surgical robot will reduce tissue damage and breakage and hence will reduce healing time of the patient. § THE ROBOTIC ENDOTRAINER The robotic endotrainer (or surgical robot training system) was developed with the aim of providing actual robot based surgical training for surgeons at a low cost. Hence, the design is very similar to that of the daVinci Si surgical robot (Intuitive Surgical, Mountain View, CA, USA). §.§ Master Side The master side of the robotic endotrainer has two arms to control two manipulators on the slave side. The human arm has seven degrees of freedom (DOFs), hence each of the master arms needs at least seven DOFs. The master arms for the robot have seven DOFs, three for positioning and four for orientation. The prototype and structural diagram of the master side of the robotic endotrainer is shown in Fig. <ref>. The joints utilized for positioning are equipped with incremental rotary encoders (Autonics Corporation, South Korea) and the joints utilized for orientation are equipped with magnetic rotary encoders (Renishaw PLC, United Kingdom) to measure the joint angles. Commercially available gimbal motors are attached to the master manipulator joints to provide haptic feedback based on the force sensed by the novel force/torque sensor on the surgical tool. §.§ Slave Side The slave side includes two slave manipulators for handling two different surgical instruments, along with one camera arm for controlling an endoscope camera. Each slave arm features seven DOFs, with three dedicated to positioning and four for orientation. All the joints in slave manipulators are equipped with DC motors (Maxon, Switzerland) for precise control. Remote Centre of Motion <cit.> is achieved with the help of parallelogram mechanism. The camera arm has four DOFs, three for positioning of the endoscope camera and one for orientation. The joints are equipped with DYNAMIXEL actuators (Robotis, USA). The prototype and structural diagram of the slave side of the robotic endotrainer is shown in Fig. <ref>. §.§ System Architecture The flowchart of system architecture is shown in Fig. <ref>. 
The surgeon's movements on the master side are recorded by the encoders and sent to a Linux based PC. Based on these inputs, the controller will enable a similar but more precise and scaled down motion of the slave manipulators. The surgeon can also switch between controlling the slave manipulators and the endoscopic camera arm by using the foot pedal. The forces sensed by the force/torque sensor is sent to the PC which computes the individual torques to be exerted by each motor on the master side to provide kinesthetic feedback. Similarly, the vibration intensity of the vibration motor on the master side is computed based on the sensed gripping forces between tool tip and tissues to provide tactile feedback. § KINESTHETIC FEEDBACK The motors on the master arm should generate the same force and torques as that experienced by the slave manipulator in order to provide kinesthetic feedback. A three axis force/torque sensor based on optoelectronic technology was developed to measure the force acting on the slave manipulator. The torque required to be applied by the motors at each joint of the master arm are then computed based on the sensed forces. The combined effect of the joint torques at the master side produces forces similar to that experienced by the slave manipulator. §.§ Mechanical Design of Force/Torque Sensor A unique mechanical design was adopted for the three-axis Force/Torque sensor to measure the forces and torques acting on the slave side surgical tool. The exploded view of the force/torque sensor can be seen in Fig. <ref>. The main components of the force/torque sensor are the top plate, bottom plate, three springs and three photo sensors. The developed sensor has an outer diameter of 40mm and a height of 28mm. The sensor also has a through hole with a diameter of 8.5mm to allow for the passage of all the strings within the surgical tool. The upper and lower plates of force/torque sensor are made up of PLA (Polylactic Acid) type of plastic. These components were 3D printed by using the Fused Deposition Modeling (FDM) technique, which is fast and cheap. Stainless steel springs are used to maintain a gap between the sensor plates. The deflections of the springs measured by the photo sensors are directly proportional to the magnitude of force and torque acing on the instrument. A snap fit mechanism was adopted to attach the top plate with the bottom plate. The snap fit mechanism allows for relative motion of the top and bottom plates when a force is applied. The snap fit mechanism also allows easy assembling and dismantling of the sensor. The developed three-axis force/torque sensor is shown in Fig. <ref>. The surgical instrument was cut horizontally to incorporate the force/torque sensor for effective measurement of the force and torque. The collet mechanism on each sensor plate holds the outer casing of the surgical tool tightly without the use of any fastener. The integration of the force/torque sensor with the surgical instrument is depicted in Fig. <ref>. §.§ Sensing Forces and Torques The developed sensor is capable of measuring the value of force acting along the z-axis and moments along x-axis and y-axis. The structure of the force/torque sensor can be simplified to three springs symmetrically arranged within the sensor at an angle of 120° between them. The three photo sensors are placed diametrically opposite to the springs to measure the deflection of the springs. 
The positioning of the springs, photo sensors and assumed direction of the z-axis, x-axis and y-axis is shown in Fig. <ref>. When an external force or moment is applied, each of the three springs is deflected. The photo sensors can measure the deflections of the three springs. The force and moments are then computed from the values of spring deflections. Let k represent the stiffness of each of the springs and d represent the radial distance of the springs from the center of the sensor. In the presence of only F_z, all the springs will deflect by the same amount. This deflection of the springs in the presence of F_z is denoted by δ_1Fz, δ_2Fz and δ_3Fz. F_z = 3kδ δ_1Fz = δ_2Fz = δ_3Fz = F_z/3k In the presence of only M_x, spring 1 will elongate and spring 2 and spring 3 will compress. This deflection of the springs in the presence of M_x is denoted by δ_1Mx, δ_2Mx and δ_3Mx. M_x = k.δ_1M_x.d - k.δ_2M_x.d/2 - k.δ_3M_x.d/2 From basic trigonometry, δ_1M_x = -2δ_2M_x = -2δ_3M_x δ_1M_x = 2M_x3kd , δ_2M_x = -M_x3kd , δ_3M_x = -M_x3kd In the presence of only M_y, spring 2 will elongate and spring 3 will compress by equal amounts due to symmetry. spring 1 will not undergo any deflection. The deflection of the springs in the presence of M_y is denoted by δ_1M_y, δ_2M_y and δ_3M_y. M_y = k.δ_2M_y.√(3)d2 - k.δ_3M_y.√(3)d2 δ_1M_y = 0 , δ_2M_y = M_y√(3)kd , δ_3M_y = -M_y√(3)kd The total deflection of the three springs when acted upon by a combination of F_z, M_x and M_y is represented by δ_1, δ_2 and δ_3. δ_1 = δ_1F_z + δ_1M_x + δ_1M_y = 13kF_z + 23kdM_x δ_2 = δ_2F_z + δ_2M_x + δ_2M_y = 13kF_z - 13kdM_x + 1√(3)kdM_y δ_3 = δ_3F_z + δ_3M_x + δ_3M_y = 13kF_z - 13kdM_x - 1√(3)kdM_y Since the photo sensors are placed diametrically opposite to the springs, the change in distance measured by the sensors will be equal and opposite to the deflection of the springs. δ_A = -δ_1, δ_B = -δ_2, δ_C = -δ_3 Representing equations (<ref>) , (<ref>), (<ref>), (<ref>) in matrix form, we get: [ δ_A; δ_B; δ_C ] = - [ 1/3k 2/3kd 0; 1/3k -1/3kd 1/√(3)kd; 1/3k -1/3kd -1/√(3)kd ][ F_z; M_x; M_y ] The matrix equation that estimates the values of F_z, M_x and M_y when given the values of δ_A, δ_B and δ_C is shown below in equation (<ref>). On substituting the values of the spring constant k = 0.196N/mm and the distance between the center of the sensor to the spring d = 16mm, we get: [ F_z; M_x; M_y ] = - [ 1/3k 2/3kd 0; 1/3k -1/3kd 1/√(3)kd; 1/3k -1/3kd -1/√(3)kd ]^-1[ δ_A; δ_B; δ_C ] = - [ 0.196 0.196 0.196; 3.135 -1.567 -1.567; 0 2.717 -2.717 ][ δ_A; δ_B; δ_C ] Here, the force is obtained in Newtons and the torques are obtained in N-mm. §.§ Force and Torque Sensing: Experimental Findings The developed three-axis force/torque sensor was tested by applying different values of forces and moments. The results are shown in Fig. <ref>. The accuracy of the developed force/torque sensor was found out to be around 95%. It was observed that the experimental values for F_z were slightly above and below the theoretical values, while the experimental values for M_x and M_y were generally marginally higher than the theoretical values. § TACTILE FEEDBACK Humans can perceive both frequency and intensity changes in a vibration signal. Consequently, a variation in force can be communicated to the surgeon by changing the frequency or intensity of a vibration signal <cit.>. 
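The force/torque decoding derived in the previous section reduces to a single matrix-vector product per sample. The NumPy sketch below is not from the paper: it rebuilds the calibration matrix from the stated k = 0.196 N/mm and d = 16 mm, which reproduces the numeric matrix given above up to rounding, and the example reading at the end is invented purely for illustration.

import numpy as np

k = 0.196   # spring stiffness, N/mm (value given above)
d = 16.0    # radial distance of the springs from the sensor centre, mm

# Calibration matrix from the derivation above: loads = -C @ deflections,
# with photo-sensor readings delta_A/B/C in mm, F_z in N, and M_x, M_y in N*mm.
C = np.array([
    [k,      k,                          k                         ],   # F_z row
    [k * d, -k * d / 2.0,               -k * d / 2.0               ],   # M_x row
    [0.0,    np.sqrt(3) * k * d / 2.0,  -np.sqrt(3) * k * d / 2.0  ],   # M_y row
])

def decode_loads(delta_a, delta_b, delta_c):
    deltas = np.array([delta_a, delta_b, delta_c])
    fz, mx, my = -C @ deltas
    return fz, mx, my

# toy reading: the gap opposite spring 1 closes more than the other two
print(decode_loads(-0.10, -0.04, -0.04))   # approximately (0.035 N, 0.188 N*mm, 0.0 N*mm)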
A tactile feedback system outputting different intensities of vibration for different ranges of gripping force between surgical tool tip and tissue was designed to provide surgeons with better haptic feedback. A commercially available force sensor, the FlexiForce™ A101 Sensor (TekScan, Inc., South Boston, Massachusetts, USA), was used for estimating the gripping force. The sensing area of the sensor is a circle of 5 mm diameter. The sensor is attached to the inner side of the surgical tool tip as shown in Fig. <ref>. This setup is intended to aid in the development and refinement of surgical skills, not for use on patients. The value of the gripping force is obtained from the force sensor and is further used for providing vibration feedback to the surgeon. Two commercially available eccentric rotating mass vibration motors are integrated into the finger clutch of each manipulator of the master side to provide vibration feedback. The arrangement is shown in Fig. <ref>. In the designed tactile feedback system, each manipulator on the master side has two vibration motors attached to the finger clutch. One vibration motor is programmed to provide a linear vibration response proportional to the force sensed by the force sensor, giving the surgeon feedback on changes in the applied gripping force between the surgical tool tip and the tissues. The second vibration motor activates at the highest vibration frequency only when the sensed force exceeds a threshold safe limit, which the surgeon can set before the procedure. This ensures that the surgeon does not apply excessive force, thereby preventing tissue damage.
§ CONCLUSION In this study, a master-slave robotic endotrainer, akin to the DaVinci surgical robot, was developed to provide low-cost robotic surgical training for surgeons. Both kinesthetic and tactile feedback systems were effectively implemented in the robotic endotrainer to enhance the training experience. A novel, low-cost three-axis force/torque sensor was designed and integrated seamlessly into the surgical tool on the slave side. This sensor had 95% accuracy in capturing force information from the interaction between the surgical tool and its environment. The acquired force data was utilized to provide kinesthetic feedback, allowing surgeons to gain a better understanding of the forces at play during surgical procedures. The developed force/torque sensor not only enhances the capabilities of the robotic endotrainer but also holds potential for broader applications. With slight modifications to its mechanical design and ruggedness, this sensor could be adapted for use in industrial robots, opening avenues for its deployment in various fields requiring precise force measurement and feedback. Additionally, a simple yet effective vibration-based tactile feedback system was developed. This system offered clear indications of the gripping forces between the surgical tool tip and the tissues, ensuring that the user could discern subtle changes in force application, thereby improving their tactile sensitivity and precision. Overall, this study successfully demonstrates the feasibility of developing a cost-effective robotic endotrainer that integrates advanced haptic feedback systems, contributing to the improvement of surgical training. The authors acknowledge the DST-GITA funded, India - Republic of Korea Joint applied R&D Programme titled "Design and Development of Robotic Endotrainer" (Ref.No: GITA/DST/IKCP201401-005/2015, DT: 07.05.2015) for financial support for this work.
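The dual-motor logic described in the Tactile Feedback section can be summarized in a few lines of control code. The sketch below is illustrative only: the force range, duty-cycle scaling, and threshold value are assumptions, since the drive electronics are not specified in the text.

def vibration_commands(grip_force_n, f_max_n=10.0, threshold_n=5.0):
    # Motor 1: duty cycle proportional to the sensed gripping force (clamped to an assumed range).
    motor1_duty = max(0.0, min(grip_force_n / f_max_n, 1.0))
    # Motor 2: full-intensity warning only when the surgeon-set safe limit is exceeded.
    motor2_duty = 1.0 if grip_force_n > threshold_n else 0.0
    return motor1_duty, motor2_duty

for f in (0.5, 3.0, 6.5):
    print(f, vibration_commands(f, threshold_n=5.0))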
http://arxiv.org/abs/2406.18055v1
20240626041839
Filtering Reconfigurable Intelligent Computational Surface for RF Spectrum Purification
[ "Kaining Wang", "Bo Yang", "Zhiwen Yu", "Xuelin Cao", "Mérouane Debbah", "Chau Yuen" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Filtering Reconfigurable Intelligent Computational Surface for RF Spectrum Purification Kaining Wang, Bo Yang, Zhiwen Yu, Xuelin Cao, Mérouane Debbah, and Chau Yuen K. Wang, B. Yang, and Z. Yu are with the School of Computer Science, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, China. X. Cao is with the School of Cyber Engineering, Xidian University, Xi'an, Shaanxi, 710071, China. M. Debbah is with the Center for 6G Technology, Khalifa University of Science and Technology, P O Box 127788, Abu Dhabi, United Arab Emirates. C. Yuen is with the School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore. March 18, 2024 § ABSTRACT The increasing demand for communication is degrading the electromagnetic (EM) transmission environment due to severe EM interference, significantly reducing the efficiency of the radio frequency (RF) spectrum. Metasurfaces, a promising technology for controlling desired EM waves, have recently received significant attention from both academia and industry. However, the potential impact of out-of-band signals has been largely overlooked, leading to RF spectrum pollution and degradation of wireless transmissions. To address this issue, we propose a novel surface structure called the Filtering Reconfigurable Intelligent Computational Surface (FRICS). We introduce two types of FRICS structures: one that dynamically reflects resonance band signals through a tunable spatial filter while absorbing out-of-band signals using metamaterials, and another that dynamically amplifies in-band signals using computational metamaterials while reflecting out-of-band signals. To evaluate the performance of FRICS, we implement it in device-to-device (D2D) communication and vehicular-to-everything (V2X) scenarios. The experiments demonstrate the superiority of FRICS in signal-to-interference-plus-noise ratio (SINR) and energy efficiency (EE). Finally, we discuss the critical challenges faced and promising techniques for implementing FRICS in future wireless systems. FRICS, metamaterial, spectrum purification, interference cancellation. § INTRODUCTION A metasurface is a two-dimensional planar structure composed of meta-atoms arranged in a specific pattern, typically with a thickness smaller than the wavelength of electromagnetic (EM) waves. This structure exhibits unique EM properties, offering flexible and efficient control over various characteristics of EM waves, such as polarization, amplitude and phase. In this context, reconfigurable intelligent metasurfaces (RISs) have recently emerged as a representative technology capable of managing wireless signal propagation and enhancing communication quality <cit.>.
For example, in scenarios where users experience dead zones, implementing RISs, especially at the periphery of a cell, can create propagation paths that enable beamforming, thereby improving signal reception <cit.>. Although RISs are advancing as a technology for creating intelligent wireless RF environments, there remains a critical need to enhance their capability to adapt to complex radio frequency (RF) communication environments where multiple signals (e.g., the desired signals and interfering signals) overlap through passive reflection <cit.>. Addressing this challenge, Yang et al. proposed an innovative approach integrating task-oriented computing and communication functions using computational metamaterials <cit.>. They introduced a novel metasurface structure known as reconfigurable intelligent computational surface (RICS), designed to enable dynamic tunable signal reflection and amplification via computational metamaterials. However, in complex RF environments with significant interference, both RICS and traditional RISs struggle to effectively mitigate interferences. For instance, when signals out-of-band interact with a metasurface, they often undergo unpredictable and uncontrolled reflection. This unmanaged reflection sometimes will exacerbate interference issues, impacting receiver efficiency and causing severe spectrum pollution, ultimately degrading overall communication system performance. Various methods have been explored to achieve interference cancellation in wireless communications. In passive RISs, such as those discussed by Tang et al. <cit.>, interference elimination is achieved by utilizing signal reflection from the RISs. However, passive RISs can only degrade interference from the front side of the metasurface and are ineffective against interference from the back side. To address this limitation, STAR-RIS proposed by the authors in <cit.> divides the incident signal into two components, allowing for both reflection and refraction. This capability enables interference elimination from both the front and back sides through phase shift optimization. Nonetheless, these methods often result in significant signal attenuation after passing through the metasurface, thereby reducing the effectiveness of interference cancellation. In contrast, active RISs, as discussed in <cit.>, integrate amplifier components directly onto the metasurface. This design enhances the amplitude of the incident signal, thereby increasing signal power and maximizing interference cancellation at the receiving end. However, the incorporation of amplifiers increases energy consumption, which can impact the overall energy efficiency of the system. To tackle the challenges previously discussed, we propose a novel filtering reconfigurable intelligent computational surface, called FRICS, which can leverage absorbing metamaterials to effectively absorb interference and computational metamaterials to amplify the desired signal. The four types of metasurfaces are compared in Table <ref>. We observe that passive RISs leverage reflection to cancel interference and STAR-RIS enhances cancellation by combining reflection and refraction, active RISs offer better interference cancellation through signal amplification despite potential drawbacks in energy efficiency. In contrast with them, the innovative configuration enables FRICS to achieve comprehensive interference elimination while simultaneously reducing energy consumption. 
Compared to existing interference cancellation methods, FRICS enhances both the signal-to-interference-plus-noise ratio (SINR) and energy efficiency (EE) of the system. § FUNDAMENTALS OF FRICS As shown in Fig. <ref>, our proposed FRICS comprises three layers: a spatial filtering layer, a metamaterial intermediate layer, and a control layer. The control layer includes a circuit board hosting an intelligent controller responsible for adjusting the tunable parameters of the spatial filtering layer. This adjustment is commonly facilitated by a field-programmable gate array (FPGA). In the subsequent sections, we will detail the structure of the spatial filtering layer and explore its collaborative design with the metamaterial intermediate layer to achieve RF spectrum purification in typical application scenarios. §.§ Spatial Filtering Layer Based on the generalized Huygens' principle and the theory of metamaterial equivalent circuit models <cit.>, a metasurface can be conceptually represented as a series connection of capacitive grids and inductive grids, thereby forming a series LC resonant circuit. This equivalence arises because the metal resonators within the metasurface structure can be analogously considered inductors. In contrast, the gaps or spaces between these resonators can be analogously treated as capacitors in an electrical circuit model. In the equivalent LC resonant circuit, the capacitive reactance of the equivalent capacitance decreases with increasing frequency, while the inductive reactance of the equivalent inductance increases. This behavior impacts the performance of ring-shaped elements in the circuit, which exhibit a relatively high shunt impedance, thus behaving similarly to an open circuit. Consequently, EM waves can effectively pass through the FRICS at both higher and lower frequencies. However, the transmission performance of the FRICS is weakest in the intermediate frequency range, resulting in a bandstop characteristic, as illustrated in Fig. <ref>(a). At intermediate frequencies, the combination of the decreasing capacitive reactance and the increasing inductive reactance creates conditions that inhibit the efficient transmission of EM waves, leading to a notable dip in performance. On the other hand, as illustrated in Fig. <ref>(b), aperture-shaped elements exhibit the opposite behavior to ring-shaped elements, resulting in a bandpass characteristic. This means that EM waves can pass through the structure more effectively at certain intermediate frequencies, rather than being blocked. The unit structure of the spatial filtering layer of Design B is depicted on the left side of Fig. <ref>(b). In this figure, the yellow color represents the substrate, the green color represents the metal patches, and the blue color represents the variable capacitor diodes. These diodes can adjust their capacitance values, which in turn influences the transmission characteristics of the spatial filtering layer. The right figure in Fig. <ref>(b) shows the transmission coefficient of the spatial filtering layer with different capacitance values of the variable capacitor diodes. By varying the capacitance, the transmission properties of the filter can be tuned, allowing for the selective passage of certain frequency ranges while blocking others. This tunability is key to achieving the desired bandpass characteristic, enabling the filter to efficiently manage the passage of EM waves based on their frequency. 
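To make the tuning behaviour concrete, the resonance of the equivalent series LC circuit shifts as the loading capacitance changes, following the textbook relation f_res = 1/(2π√(LC)). The sketch below is illustrative only: the effective inductance L_eff is an assumed placeholder rather than a value from the paper, while the capacitance range matches the tuning diode discussed in the next paragraph.

```python
import numpy as np

L_eff = 1.0e-9                        # assumed effective unit-cell inductance [H] (placeholder)
C_pF = np.linspace(0.24, 2.31, 8)     # varactor capacitance range [pF]

# series LC resonance: f_res = 1 / (2*pi*sqrt(L*C))
f_res_GHz = 1.0 / (2.0 * np.pi * np.sqrt(L_eff * C_pF * 1e-12)) / 1e9
for C, f in zip(C_pF, f_res_GHz):
    print(f"C = {C:4.2f} pF  ->  f_res ~ {f:5.2f} GHz")
```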
In practical scenarios, the frequencies of the desired signal and the interfering signal can vary due to the communication environment. To address this, we employ variable capacitors, commonly referred to as tuning diodes, to dynamically adjust the operating frequency range of the FRICS by controlling the bias voltage. This allows the metasurface to be integrated into the spatial filtering layer to achieve frequency tuning. Specifically, when a reverse bias voltage is applied to the variable capacitor, its capacitance changes accordingly. In this paper, we utilize the variable capacitor SMV2019-079LF as an example <cit.>, in which the capacitance changes with the reverse bias voltage. For example, when the bias voltage of the diode changes from 0V to -19V, the capacitance of the diode changes from 2.31pF to 0.24pF. Consequently, the tunable control layer can dynamically adjust the reflection frequency range of the FRICS by controlling the reverse bias voltage of each unit using a FPGA. We further explore the CST simulations and we vary the values of the RLC series circuit to simulate the electrical characteristics of the variable capacitor under different bias voltages. By implementing this dynamic tuning capability, the FRICS can effectively adapt to varying signal frequencies, enhancing its performance in diverse communication environments. §.§ Metamaterial Intermediate Layer To fully leverage the potential of the metamaterial intermediate layer, it is crucial to appropriately design the surface structure. This can be achieved by using absorbing materials to absorb interference signals or employing computational materials to amplify desired signals. Specifically, as a novel type of material that can significantly weaken or absorb EM wave energy, absorber metamaterials can help to reduce EM interference <cit.>. Additionally, absorber metamaterials possess characteristics such as being lightweight, corrosion-resistant, and temperature-resistant, making them suitable for deployment in wireless communication environments. According to the material compositions, the common absorber materials can be classified into carbon-based absorber materials, iron-based absorber materials, ceramic-based absorber materials, and chiral materials. In our design, iron oxide material is implemented as the intermediate layer due to its high magnetic permeability, high resistivity, and good impedance matching characteristics. For example, BaCo_0.9Si_0.95Fe_10.1O_19 exhibits an absorption rate of up to 99% within the frequency range of 11-13 GHz <cit.>. On the other hand, computational metamaterials refer to a specially designed block of metasurface that allows mathematical operations on the incident waves, such as spatial differentiation, integration, and convolution <cit.>. To be specific, a computational metamaterial can achieve amplification of the incident signal that impings the FRICS. Therefore, leveraging the properties of the computational metamaterial can enhance the signal-to-noise ratio by amplifying the desired signal while avoiding additional energy consumption. To sum up, by incorporating absorbing materials, the intermediate layer can effectively mitigate unwanted/interfering signals, thereby enhancing the overall signal quality. Meanwhile, using computational materials can help to amplify the desired signals, improving their strength and clarity. 
This dual capability of absorption and amplification makes the metamaterial intermediate layer a vital component in the FRICS design, enabling it to adapt to various signal environments and maintain efficient communication. Therefore, the intermediate layer's design should be tailored to the specific requirements of the application, ensuring optimal performance of the FRICS system. §.§ FRICS Architecture Design According to the configuration of the spatial filtering layer and the metamaterial intermediate layer, FRICS has two designs, which are described as follows. * FRICS Design A: bandstop filtering layer + absorbing intermediate layer. In this design, the spatial filtering layer achieves a bandstop filtering effect while an absorber material is used as the metamaterial intermediate layer. As simulated using CST in the right of Fig. <ref>(a), the transmission coefficient exhibits a notable decrease within the frequency range of 16 GHz to 18 GHz, indicating reflective characteristics. In the remaining frequency range, the transmission coefficient remains relatively unchanged, demonstrating transmission characteristics. Therefore, as shown in the left figure of Fig. <ref>(a), the in-band signal (f_2) can be reflected and the out-of-band signals (f_1 and f_3) can be passed through the filtering layer. Then absorbing material can absorb these signals to achieve interference cancellation. * FRICS Design B: bandpass filtering layer + computational intermediate layer. In this design, the spatial filtering layer achieves a bandpass filtering effect while the computational metamaterial is used as the intermediate layer. As illustrated in Fig. <ref>(b), there is a notable decrease in the reflection coefficient within the frequency range of 17 GHz to 19 GHz, indicating transmission characteristics. In the out-of-band range, the reflection coefficient remains relatively unchanged, demonstrating reflective characteristics. Consequently, as illustrated in the left figure of Fig. <ref>(b), the desired signal (f_2) passes through the spatial filtering layer and then can be amplified with the aid of computational metamaterials. Meanwhile, the out-of-band signals (f_1 and f_3) can be reflected through the filtering layer. Based on the two designs of FRICS, for the communication scenario where the desired signal and interfering signal coexist, the proposed FRICS can effectively separate the desired signals from the interfering signals. Meanwhile, the unwanted interfering signals can be absorbed and the desired signal can be amplified by configuring the design of FRICS appropriately, thereby improving energy efficiency. § APPLICATIONS AND PERFORMANCE EVALUATION Based on the two designs introduced above, we validate the superiority of FRICS in two typical scenarios. §.§ Application Scenarios §.§.§ Interference signal filtering and absorbing As illustrated in Fig. <ref>(a), we examine a device-to-device (D2D) communication scenario for efficient transmission, where a D2D indoor communication pair operating at the frequency f_1 and may experience significant co-frequency interference from outdoor cellular communications. To eliminate the co-frequency interference from outdoor cellular communication, the FRICS is implemented on the exterior wall of the building. This setup allows the incoming interfering signal at f_1 or f_3 to pass through the spatial filtering layer of the FRICS. 
Once through this layer, the signal is absorbed by the metamaterial intermediate layer, which consists of absorbing materials designed to mitigate interference. At the same time, the signal between the users and the base station (BS) at f_2 will be reflected to enhance the communication quality. It is observed that the implementation of FRICS plays a crucial role in maintaining high-quality indoor communication by mitigating external interferences. §.§.§ Desired signal filtering and amplification As illustrated in Fig. <ref>(b), we consider a vehicular-to-everything (V2X) scenario, where a vehicle is communicating with a roadside unit (RSU) at frequency f_2. Concurrently, a vehicle-to-vehicle (V2V) pair transmits at frequencies f_1 or f_3. To improve the SINR for the RSU, when the uplink signal from the vehicle impinges the FRICS, the signal is allowed to go through the filtering layer and then amplified through computational metamaterial. This amplification ensures that the uplink signal is strong and clear when it reaches the RSU, thereby boosting the intended signal's strength and thus enhancing communication reliability and efficiency. At the same time, V2V communication is occurring. The FRICS can reflect the V2V communication signals, thereby enhancing their communication quality. By reflecting these signals, the FRICS helps to maintain robust and uninterrupted V2V communication, which is crucial for safety and coordination between vehicles. §.§ Simulation Results and Discussions The configuration parameters for the spatial filtering layer are shown in Table <ref>. To demonstrate the benefits of FRICS, we evaluated energy efficiency with the bandwidth set to 1 MHz. To accurately compare the energy consumption of different metasurfaces designs with interference mitigation when using variable capacitors, the power consumption of the RIS reflection units is 0, the power consumption of the driver generator is 250 mW, the power consumption of the bias voltage amplifier is 180 mW, and the FPGA power consumption is 1.5 W <cit.>. §.§.§ Simulation results of Design A In this scenario, we compare the SINR and EE of our proposed FRICS with the latest RIS models (passive RIS, and active RIS and star RIS) in terms of interference mitigation. The D2D transmission power ranges from 15 dBm to 39 dBm. For the communication simulation scenarios, we assume a D2D communication distance of 10 meters and a cellular communication distance of 20 meters. As illustrated in Fig. <ref>(a), a comparison is presented between the SINR after interference mitigation utilizing FRICS, passive RIS, active RIS, and star RIS at the D2D receiver indoor. It is observed that FRICS achieves the highest SINR due to its ability to completely absorb the interfering signals. This superior performance can be attributed to the efficient absorption capabilities of the metamaterial intermediate layer, which effectively eliminates interference. In comparison, passive RIS experiences a greater degree of signal attenuation when eliminating interference, which ultimately affects its ability to eliminate interference signals. The star RIS, which can simultaneously transmit and reflect signals, even experiences a significant reduction in reflected signals compared to passive RIS, which affects its interference mitigation performance. 
Active RIS, on the other hand, amplifies the signals by incorporating additional amplifying circuits, resulting in stronger signals reaching the receiving end and effectively mitigating interference, thus achieving a higher SINR. Fig. <ref>(b) illustrates the energy efficiency of FRICS for interference cancellation. As the D2D transmit power increases, the SINR at the D2D communication receiver can be improved, resulting in a gradual increase in the energy efficiency for interference cancellation. Notably, the energy efficiency of FRICS remains the highest due to its ability to effectively cancel interference without introducing additional energy, resulting in near-zero power consumption for the driving generator. Given the related power consumption parameters, the energy efficiency of amplifying signals for active RIS (with the amplification factor of 2 and each reflection unit consumes 150 mW of energy) is higher than the others. §.§.§ Simulation results of Design B In this scenario, we evaluate the SNR achieved at the RSU when the transmission power of the vehicle ranges from 15 dBm to 39 dBm. We assume that the gain of star RIS and FRICS is 2. Fig. <ref>(c) illustrates a comparison between the FRICS and the benchmark RISs designs in terms of SNR for V2X communication, with the transmission power of the vehicle ranging from 15 to 39 dBm. We observe that for the RSU, the FRICS achieves the highest SNR performance. This is because the computational metamaterials amplify the desired signal. In contrast, passive RIS has no signal amplification capability, resulting in the worst performance. Similarly, Fig. <ref>(d) illustrates that only FRICS can extend the performance of V2X by amplifying in-band signal while improving the quality of V2V communication via out-of-band signal reflection. To sum up, we observe from Fig. <ref>(a)-(d) that the proposed FRICS demonstrates superior interference mitigation capabilities without the need for additional energy input, achieving the highest SINR/SNR among the compared benchmark schemes. Its ability to absorb interfering signals and amplify the desired signals completely provides a significant advantage in maintaining high communication quality and energy efficiency. § CHALLENGES AND FUTURE DIRECTIONS The introduction of the FRICS designs has significant implications for future wireless communication systems, particularly in the context of upcoming sixth-generation (6G) networks with a more complicated RF communication environment. In the following, we outline the challenges and potential directions, which aim at paving the way for more feasible solutions to achieving RF spectrum purification in future 6G networks. §.§ FRICS Design for Energy Harvesting In this paper, we have utilized absorbing materials as the intermediate layer to achieve interference cancellation. However, we believe that the metamaterial intermediate layer holds even greater potential. By implementing energy-conversion metamaterials, such as photovoltaic conversion metamaterials, as the intermediate layer, we can achieve energy harvesting when the filtered signals, including interference signals, impinge on the intermediate layer <cit.>. This approach enhances the deployment flexibility of the metasurfaces by enabling the collection of energy from various sources, such as solar power or wireless signals. It eliminates the need for additional power sources, allowing metasurface deployment on various object surfaces. 
This capability can significantly extend the sustainability of the metasurfaces, making them more adaptable to diverse environments and applications. Looking ahead, there are opportunities to explore different types of metasurfaces to achieve additional functionalities. For example, time-targeting metasurfaces can dynamically adjust their properties based on temporal changes, enabling precise control over when certain signals are absorbed or amplified. By integrating such advanced functionalities, future metasurfaces can offer even greater capabilities and versatility, paving the way for innovative applications in wireless communication, energy harvesting, and beyond. §.§ FRICS Design for Phase Modulation We have observed that although FRICS can effectively eliminate interference signals, controlling the phase of the reflected signals remains a challenge since precise phase control is crucial for several metasurfaces-based applications such as beamforming. Recent works have achieved filtering while adjusting the phase <cit.>. However, their multi-layer structures and additional amplifiers consume significant energy, which violates the principle of metasurface design. Moreover, they ignore the issue of same-frequency interference, which is more serious in practice. Therefore, it holds promise to investigate how to directly modulate the phase of FRICS. Furthermore, when we consider the coexistence of the FRICS and existing RISs within the same physical space, the functionalities provided by FRICS can complement the limitations of traditional RISs. For instance, when the metamaterial intermediate layer of FRICS employs optoelectronic conversion materials, the energy obtained by the FRICS can not only sustain its operation but also serve as an energy supplement for other RISs. Meanwhile, the beamforming capability of the RISs can compensate for the limitations of FRICS by enabling phase adjustment. §.§ Interplay between FRICS Design and ISAC Integrated sensing and communications (ISAC) has emerged as a crucial technology in the context of 6G aiming to achieve both perception and communication <cit.>. Traditional approaches to integrated sensing often prioritize either communication-centric or perception-centric designs, making it challenging to achieve a unified ISAC system. However, the proposed FRICS holds the potential to address this challenge by filtering signals in different frequency bands, allowing the reflected signals to be used for communication and the filtered signals for sensing. For instance, the signal filtered to the intermediate layer of FRICS can directly serve as information for environmental sensing, such as humidity, pollutant density, and more. Furthermore, to effectively suppress interferences using the metamaterial intermediate layer of FRICS, the transmission coefficient of FRICS should be configured appropriately based on the frequency of signals. To obtain this frequency information, ISAC may play a crucial role in detecting the frequency of the signals, e.g., with the aid of deep learning methods <cit.>. This dual functionality enables FRICS to enhance the capabilities of ISAC systems, providing a seamless integration of communication and sensing tasks within a single framework. §.§ Low-Energy and Low-Complexity Transceiver Design In traditional wireless communication systems, the receiver's antenna captures wireless signals, which then require filtering to extract the desired signal. 
However, these conventional designs often result in increased receiver complexity and higher energy consumption. In contrast, the proposed FRICS offers several advantageous characteristics, including low energy consumption and tunability. By integrating FRICS into receiver designs, it mitigates receiver complexity and reduces energy consumption significantly. Moreover, FRICS facilitates dynamic adjustment of the filtering frequency, which aligns with principles of green and low-carbon design. This capability not only provides enhanced flexibility but also holds the potential to enhance communication performance. If we go one step further and suppose that the receiver is surrounded by multiple FRICS unis, by assigning different operating frequencies to each directional FRICS, we can selectively ensure that the desired signal is received while filtering out interfering signals. This approach leverages the directional capabilities of FRICS to achieve significant improvements in signal reception quality, interference management, and overall communication performance. § CONCLUSION This paper investigated a novel filtering RICS (FRICS) structure and introduced two different designs. Design A uses a tunable spatial filter to dynamically reflect signals within the resonance band while absorbing out-of-band signals. Design B enables the transmission and amplification of the desired signals within the resonant band while reflecting out-of-band signals. Furthermore, this paper explores two typical application scenarios where FRICS can be employed and evaluates its performance in these contexts. Finally, we outline several challenges and opportunities, highlighting promising directions for future research in this field. IEEEtran
http://arxiv.org/abs/2406.18726v1
20240626195153
Data-driven identification of port-Hamiltonian DAE systems by Gaussian processes
[ "Peter Zaspel", "Michael Günther" ]
eess.SY
[ "eess.SY", "cs.LG", "cs.NA", "cs.SY", "math.NA" ]
Data-driven identification of port-Hamiltonian DAE systems by Gaussian processes Peter Zaspel and Michael Günther § ABSTRACT Port-Hamiltonian systems (pHS) allow for a structure-preserving modeling of dynamical systems. Coupling pHS via linear relations between input and output defines an overall pHS, which is structure preserving. However, in multiphysics applications, some subsystems do not allow for a physical pHS description, because (a) such a description is not available or (b) it is too expensive. Here, data-driven approaches can be used to deliver a pHS for such subsystems, which can then be coupled to the other subsystems in a structure-preserving way. In this work, we derive a data-driven identification approach for port-Hamiltonian differential algebraic equation (DAE) systems. The approach uses input and state space data to estimate nonlinear effort functions of pH-DAEs. As the underlying technique, we use (multi-task) Gaussian processes. This work thereby extends the current state of the art, in which only port-Hamiltonian ordinary differential equation systems could be identified via Gaussian processes. We apply this approach successfully to two applications from network design and constrained multibody system dynamics, based on pH-DAE systems of index one and three, respectively. Keywords: Port-Hamiltonian systems, data-driven approach, Gaussian process regression, coupled dynamical systems, structure preservation § INTRODUCTION In multiphysics modelling, one is faced with solving coupled systems of differential-algebraic equations (DAEs), which describe the individual PDE subsystems after a semidiscretization with respect to space. The automatic generation of the subsystem models, usually based on a network approach, involves redundant information and hence leads to DAE models instead of ODE models. Structure preservation is mandatory here: if the single subsystems are dissipative or energy-preserving, for example, the joint system has to inherit these properties. Hence one has to choose a coupling approach which preserves these structures. Here, the port-Hamiltonian framework provides the right tool. It allows for designing overall port-Hamiltonian systems (pHS) that inherit the structural properties of the subsystems, provided that all subsystems are pHS and a linear (power-conserving) interconnection between the inputs and outputs of the subsystems is used <cit.>. In realistic applications this approach reaches its limits if, for a specific subsystem, either (a) no knowledge that would allow the definition of a physics-based pHS is available, or (b) one is forced to use user-specified simulation packages with no information on the intrinsic dynamics (which may follow a pHS dynamics or not), and thus only the input-output characteristics are available, or (c) a pHS description is available, but too expensive to be used in the overall simulation. However, there is an interest in constructing a model that is guaranteed to follow pHS properties in order to be used as part of a bigger coupled pHS system. Data-driven models allow us to overcome these limitations.
In case of an unknown physical pHS model, measurement data is collected and used to fit a pHS surrogate model, while in the case of too expensive model evaluations or black-box simulation packages, data is collected from simulation runs. This surrogate model guarantees structural pHS properties, is cheap to be evaluated and can hence be efficiently used in realistic multiphysics applications. Data-driven identification of dynamic pHS models data has been a topic of research during the last years, with a focus on ODE systems, especially on linear systems. For the latter a frequency approach is obvious, see, for example <cit.>, where the Loewner framework has been proposed. Other approaches include a parameterization approach based on using unconstrained optimization solvers during identification <cit.> or infer frequency response data using time-domain input-output data <cit.>. Dynamic mode decompositions are used in <cit.> for identifying a linear pH-ODE model based on input, state-space and output time-domain data and a given quadratic Hamiltonian. Further, a two-step identification procedure has been proposed in <cit.> based on deriving first a best-fit linear state-space model and then inferring a nearest linear pH-ODE model. As a one-step procedure, two iterative gradient-based and adjoint-based approaches have been derived in <cit.>. For nonlinear pH-ODE systems, Koopman theory is used in <cit.> to shift the nonlinear problem to a linear one. Regarding pH-DAE systems, <cit.> discusses an approach based on discrete-time input-output data, however, assuming data steeming from symplectic integration schemes. Another rather general approach is given by physics-informed neural networks (PINNs), which combine data-based and physics-based approaches. Such approaches have been introduced in  <cit.> for applications in stiff chemical reaction kinetics (nonlinear pH-ODEs) and power network (nonlinear pH-DAEs). Recently, Beckers et al. <cit.> have proposed an alternative approach for nonlinear pH-ODE systems based on Gaussian processes (GPs). Data-driven function approximation based on GPs, also often referred to as Gaussian process regression (GPR), is a well known statistics oriented machine learning approach that can easily deal with noisy data. This makes it an ideal candidate for realistic data-driven applications. Prior to the work of Beckers et al. on pH-ODE identification, GP based approaches for Hamiltonian ODE identification <cit.> and dissipative ODE system identification <cit.> have been developed, in which the Hamiltonian of the system is found from input and state space data. These works share that the scalar Hamiltonian is modeled as a Gaussian process and learned via its gradient. Dissipation and the full pHS structure is treated by linear transformations and shifts of the GP, respectively. Beckers et al. <cit.> have in particular shown that their data-driven approximation of the pH-ODE system fulfills the pHS properties, hence is structure preserving. Thereby, the GP based data-driven approach can both deal with noisy data and is structure preserving, hence becomes a very appealing method for data-driven pHS identification. In this work, we extend prior work on data-driven pH-ODEs using Gaussian processes towards the case of nonlinear pH-DAE systems, which – to the best of our knowledge – has not been done before. In difference to prior approaches, we identify the vector-valued effort function of the pH-DAE system, i.e. no longer the (gradient of) the scalar Hamiltonian. 
To this end, we have to treat the case of vector-valued function approximation, which we achieve by multi-task Gaussian processes <cit.>. Moreover, we need to cover the algebraic constraints in the system. In our numerical results, in which we cover a basic test cases from electric network design and constrained multibidy system dynamics, we empirically observe excellent approximations up to the data's discretization accuracy with only very few training samples. Overall, our proposed approach (a) fills the gap between nonlinear pH-ODE and pH-DAE systems, (b) constructs models which are preserving structure exactly and (c) does not demand synthetic data arising from structure-preserving discretization as it can deal with noisy data. The paper is organized as follows. In Section <ref>, we describe the identification task by introducing pH-ODEs and stating the identification problem. Section <ref> introduces the novel data-driven identification of the effort function in pH-DAEs. To this end (multi-task) Gaussian process regression its application in the pH-DAE case is discussed with some implementation details. In Section <ref>, we introduce the two model systems and give a detailed analysis of prediction/identification errors of the novel method on some exemplary test data. This is followed by conclusions. § IDENTIFICATION OF PH-DAE SYSTEMS §.§ Port-Hamiltonian differential algebraic equations We consider port-Hamiltonian differential-algebraic (short-hand: pH-DAE) systems <cit.> of the special structure E(𝐱) ·𝐱̇ = (J(𝐱)- R(𝐱)) 𝐳(𝐱) + B(𝐱) 𝐮, 𝐲 = B(𝐱)^⊤𝐳(𝐱) without feed-through, with time-dependent state 𝐱, input 𝐮, output 𝐲 and effort function 𝐳(𝐱). The solution depenent matrices J(𝐱), R(𝐱) and B(𝐱) denote the skew-symmetric structure, the positiv-semidefinit dissipation and the port matrix. In DAEs, the solution-dependent flow matrix E(𝐱) is singular. Then, if the compatability condition E(𝐱)^⊤𝐳(𝐱) = ∇ H(𝐱) holds, with H the Hamiltonian of the system, which is at least continously differentiable, the pH-DAE fulfills the dissipativity inequality d/dt H(𝐱) = ∇ H(𝐱)^⊤𝐱̇ = - ∇ H(𝐱)^⊤ R(𝐱) H(𝐱) + 𝐲^⊤𝐮≤𝐲^⊤𝐮. The main advantage of PHS modeling is that coupled pH-DAE systems define again a pH-DAE system fulfilling a dissipativity inequality <cit.>: consider s subsystems of pH-DAE systems E_i d/dt𝐱_𝐢 = (J_i(𝐱) - R_i(𝐱)) 𝐳_𝐢(𝐱_𝐢) + B_i(𝐱) 𝐮_𝐢, 𝐲_𝐢 = B_i(𝐱)^⊤𝐳_𝐢(𝐱_𝐢), i=1,…,s with J_i=-J_i^⊤ and R_i=R_i^⊤≥ 0. The inputs 𝐮_𝐢, the outputs 𝐲_𝐢 and B_i(𝐱_𝐢) are split into 𝐮_𝐢= [ û_𝐢; 𝐮̅_𝐢 ], 𝐲_𝐢= [ ŷ_𝐢; 𝐲̅_𝐢 ], B_i = [ B̂_i B̅_i ] according to external and coupling quantities. The subsystems are coupled via external inputs and outputs by [ û_1; ⋮; û_𝐤 ] + Ĉ[ ŷ_1; ⋮; ŷ_𝐤 ] = 0, Ĉ = - Ĉ^⊤. These s systems can be condensed to one large pH-DAE system E(𝐱) d/dt𝐱 = (J̃( 𝐱) -R( 𝐱)) 𝐳( 𝐱) + B̅𝐮̅, 𝐲̅ = B̅^⊤𝐳( 𝐱) with J̃=J - B̂ĈB̂^⊤, the Hamiltonian H(𝐱) given by H(𝐱):=∑_i=1^s H_i(𝐱_𝐢) and the condensed quantities 𝐯^⊤ = (𝐯_1^⊤,…, 𝐯_𝐬^⊤) for 𝐯∈{𝐱, 𝐮̅, 𝐲̅, 𝐳}, F= (F_1,…, F_s) for F ∈{E, Q, J, R, B̂, B̅}. Equation (<ref>) defines now a pH DAE of type (<ref>). For many pH-DAE systems arising in engineering and science applications the system matrix E(𝐱) is given in block-diagonal form of the type E(x)=diag(D,0) with a constant regular matrix D. If we split 𝐱:=(𝐱_1,𝐱_2) and 𝐳(𝐱):=(𝐳_1(𝐱_1,𝐱_2),𝐳_2(𝐱_1,𝐱_2)) with respect to the dimensions of the block structure above, the compatability condition now reads [ 𝐳_1(𝐱_1,𝐱_2); 0 ] = [ D^-⊤·∇_𝐱_1 H(𝐱_1,𝐱_2); ∇_𝐱_2 H(𝐱_1,𝐱_2) ], and thus H(𝐱_1,𝐱_2) =H(𝐱_1). 
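The interconnection rule above lends itself to a direct implementation: the condensed matrices are block-diagonal concatenations of the subsystem matrices, and only the structure matrix picks up the coupling term J̃ = J - B̂ĈB̂^⊤. The following NumPy/SciPy sketch is a minimal illustration; the function name and data layout are our own choices, not taken from the paper.

```python
import numpy as np
from scipy.linalg import block_diag

def couple_ph_dae(E_i, J_i, R_i, Bhat_i, Bbar_i, C_hat):
    """Assemble the condensed coupled pH-DAE from lists of subsystem matrices.

    C_hat is the skew-symmetric interconnection matrix acting on the
    stacked coupling inputs/outputs of all subsystems."""
    assert np.allclose(C_hat, -C_hat.T), "interconnection must be power-conserving"
    E = block_diag(*E_i)
    J = block_diag(*J_i)
    R = block_diag(*R_i)
    Bhat = block_diag(*Bhat_i)
    Bbar = block_diag(*Bbar_i)
    J_tilde = J - Bhat @ C_hat @ Bhat.T   # stays skew-symmetric since C_hat is
    return E, J_tilde, R, Bbar
```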
The system with E(𝐱)=diag(D,0) is analytically equivalent to the semi-explicit pH-DAE system [ I 0; 0 0 ]·𝐱̇ = ( J̃(𝐱)- R̃(𝐱)) 𝐳̃(𝐱) + B̃(𝐱) 𝐮, 𝐲 = B̃(𝐱)^⊤𝐳̃(𝐱) with J̃(𝐱)= [ D^-1 0; 0 I ] J(𝐱) [ D^-⊤ 0; 0 I ], R̃(𝐱)= [ D^-1 0; 0 I ] R(𝐱) [ D^-⊤ 0; 0 I ], B̃(𝐱)= [ D^-1 0; 0 I ] B(𝐱), 𝐳̃(𝐱) = [ D^⊤ 0; 0 I ]𝐳(𝐱) and compatibility condition [ 𝐳̃_1( 𝐱_1,𝐱_2); 0 ] = [ ∇_𝐱_1 H(𝐱_1,𝐱_2); ∇_𝐱_2 H(𝐱_1,𝐱_2) ]. The differential index is one if ∂/∂𝐱_2( [ 0 I ]·[ (J̃(𝐱)-R̃(𝐱))𝐳̃(𝐱) + B̃(𝐱) 𝐮 ]) = ∂/∂𝐱_2( [ 0 I ]·[ (J(𝐱)-R(𝐱)) 𝐳(𝐱) + B(𝐱) 𝐮 ]) is regular in a neighborhood of the solution. The skew-symmetric structure matrix J is usually defined by the structure or topology of the system and is thus constant in most cases. Hence we set J to be constant in the following. In addition, for simplicity, also R and B are assumed to be constant. §.§ Identification problem The identification task discussed in this paper is given as follows: for given data {(𝐮(t_i), 𝐱(t_i))}_i=1^N_T, t_i∈ [0,T] ∀ i=1, … N_T, the nonlinear effort function 𝐳 has to be identified. For this, one first has to derive derivative data {𝐱̇(t_i) }_i=1^N_T from the given data (see section <ref> for details). Then (<ref>) yields for all time points t_i the identity (D,0) ·𝐱̇(t_i) - B 𝐮(t_i) = (J-R) 𝐳(𝐱(t_i)), which allows for reconstructing {𝐳(t_i)}_i=1^N_T provided that J-R is regular. However, if J-R is singular, then the non-trivial part of 𝐳(𝐱) lying in the kernel of J-R cannot be identified unless additional information is available. The regularity of J-R is often demanded in engineering and science applications when a unique solution of the operating point analysis is required: the system should have a unique constant solution for a constant input, i.e., if 𝐱̇ is set to zero. In this case system (<ref>) yields the linear equation (J-R) 𝐳(𝐱) = - B 𝐮, which has a unique solution only if J-R is regular. §.§ Construction of derivative data It remains to obtain derivative data {𝐱̇(t_i) }_i=1^N_T from the given data {(𝐮(t_i), 𝐱(t_i))}_i=1^N_T. This can be done either by numerical finite differences or by first constructing an interpolation function based on Gaussian processes, cubic splines or any other interpolation procedure, and then differentiating the interpolation function analytically and evaluating the derivative at t_i. The error will remain small if the data is sampled at a sufficiently high rate. However, large differences between time points, caused by a low sampling rate or an inappropriate choice of training data, will yield large approximation errors for the obtained derivatives. § DATA-DRIVEN IDENTIFICATION VIA GAUSSIAN PROCESSES We model the effort function 𝐳(𝐱) as a vector-valued Gaussian process. In the following, standard Gaussian processes, their use in function approximation / regression and the necessary extension to vector-valued functions by multi-task Gaussian processes (MT-GP) are reviewed. Based on this, the identification of the effort function in the pH-DAE setting is introduced. §.§ Gaussian process regression Let us assume for now that we are interested in finding a function z:ℝ^d→ℝ from (training) data 𝒯={(𝐱_i,y_i)}_i=1^N_T. We follow the common notation in the GP literature and introduce the set of inputs as 𝐗={𝐱_i}_i=1^N_T. This allows us to write the vector of outcomes of function z on inputs 𝐗 as z(𝐗)=[z(𝐱_1) ⋯ z(𝐱_N_T)]^⊤. Similarly, the data vector is Y= [y_1 ⋯ y_N_T]^⊤.
GP regression starts from putting a Gaussian process as prior distribution on the unknown function z, hence z∼𝒢𝒫(m,k) , where m:ℝ^d→ℝ is the mean function and k:ℝ^d×ℝ^d →ℝ is the covariance of the Gaussian process. The covariance function is also often called the kernel. It is common in GP regression to manually choose the positive definite kernel function, where a typical choice is the Gaussian kernel k(𝐱,𝐱^') = exp(-‖𝐱-𝐱^'‖^2/2ϕ^2) with scaling parameter ϕ, which will be used throughout this work. With the choice of z following a GP, we obtain z(𝐗) ∼𝒩(m(𝐗),k(𝐗,𝐗)) , hence the vector of outcomes of function z follows a multivariate normal distribution with mean m(𝐗), which is the vector of outcomes of m applied to the set 𝐗, and covariance matrix k(𝐗,𝐗) = (k_ij)_i,j=1^N_T, with k_ij=k(𝐱_i,𝐱_j). In GP regression, it is common to use a zero mean, hence m ≡ 0, giving z(𝐗) ∼𝒩(0,k(𝐗,𝐗)). We are now interested in making predictions. To this end, we consider N_eval evaluation points 𝐗_∗ = {𝐱_∗,1, …𝐱_∗,N_eval}, giving z(𝐗_∗)=[z(𝐱_∗,1) ⋯ z(𝐱_∗,N_eval)]^⊤. In a first step we restrict ourselves to noise-free observations, i.e. we have the interpolatory condition y_i = z(𝐱_i), i=1,…, N_T , thus Y = z(𝐗). Then, the joint distribution of the training outputs Y and the predicted outputs Y_∗ is (using zero mean) [[ Y; Y_∗ ]] = [[ z(𝐗); Y_∗ ]] ∼𝒩( 0, [[ k(𝐗,𝐗) k(𝐗,𝐗_∗); k(𝐗_∗,𝐗) k(𝐗_∗, 𝐗_∗) ]] ) . Here, we have used, e.g., the notation k(𝐗,𝐗_∗) to describe the kernel matrix of size N_T× N_eval with entries k_ij=k(𝐱_i,𝐱_∗,j). Conditioning and a simple linear algebra argument then allow us to derive the posterior distribution for the predicted outputs as Y_∗| z, 𝐗_∗,𝐗, ϕ∼𝒩( k(𝐗_∗,𝐗) k(𝐗,𝐗)^-1Y, k(𝐗_∗,𝐗_∗) - k(𝐗_∗,𝐗) k(𝐗,𝐗)^-1 k(𝐗,𝐗_∗)) . The predictions are given via the mean of the posterior distribution, while the covariance gives a measure of the uncertainty in the prediction. To capture and appropriately regularize the case of noisy data, we further assume a Gaussian noise model on the observed data. In other words, instead of the interpolatory condition in (<ref>), we assume y_i = z(𝐱_i) + ϵ_i, ϵ_i ∼𝒩(0,σ^2), i=1,…,N_T . Then the observed data y_i, conditioned on the GP z, the input 𝐱_i and the noise variance σ^2, follows a normal distribution with mean z(𝐱_i) and variance σ^2, i.e. y_i | z,𝐱_i,σ^2,ϕ∼𝒩(z(𝐱_i),σ^2) . It is immediate that the vector of observed outputs Y then has the corresponding likelihood Y | z,𝐗,σ^2,ϕ∼𝒩(z(𝐗),σ^2 I_N_T) , with I_N_T the N_T-dimensional identity matrix. It can be shown that the joint distribution of the training outputs Y and the predicted outputs Y_∗ is then [[ Y; Y_∗ ]] ∼𝒩( 0, [[ k(𝐗,𝐗) + σ^2 I_N_T k(𝐗,𝐗_∗); k(𝐗_∗,𝐗) k(𝐗_∗, 𝐗_∗) ]] ) , and conditioning gives Y_∗| z, 𝐗_∗,𝐗, σ^2, ϕ∼𝒩( k(𝐗_∗,𝐗) [k(𝐗,𝐗)+ σ^2 I_N_T]^-1Y, k(𝐗_∗,𝐗_∗) - k(𝐗_∗,𝐗) [k(𝐗,𝐗)+ σ^2 I_N_T]^-1 k(𝐗,𝐗_∗)) . The hyperparameter ϕ of the chosen kernel and the noise variance σ^2 are typically estimated jointly in the inference process. A common approach for this is to use a maximum likelihood estimator. That is, parameters σ^2,ϕ are found that maximize the marginal (log) likelihood of the outputs 𝐘, i.e. log p(Y|𝐗,σ^2,ϕ). In contrast to eq. (<ref>), the marginal likelihood has been marginalized with respect to the GP z, i.e. z is integrated out. As further discussed in <cit.>, the log marginal likelihood of 𝐘 is given by log p(Y|𝐗,σ^2,ϕ) = -1/2Y^⊤ [k(𝐗,𝐗)+ σ^2 I_N_T]^-1Y - 1/2log | k(𝐗,𝐗)+ σ^2 I_N_T | - N_T/2log 2π .
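The posterior and the log marginal likelihood above translate almost line by line into code. The following NumPy sketch is for illustration only; the implementation used in this work relies on GPyTorch (see the implementation details below), and the function names here are ours.

```python
import numpy as np

def gauss_kernel(X1, X2, phi):
    """Gaussian kernel k(x, x') = exp(-||x - x'||^2 / (2 phi^2)) for rows of X1, X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * phi**2))

def gp_posterior(X, Y, X_star, phi, sigma2):
    """Posterior mean and covariance of the noisy GP regression model."""
    K = gauss_kernel(X, X, phi) + sigma2 * np.eye(len(X))
    K_s = gauss_kernel(X_star, X, phi)
    K_ss = gauss_kernel(X_star, X_star, phi)
    mean = K_s @ np.linalg.solve(K, Y)
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, cov

def log_marginal_likelihood(X, Y, phi, sigma2):
    """Objective maximized over (phi, sigma2) during hyperparameter estimation."""
    K = gauss_kernel(X, X, phi) + sigma2 * np.eye(len(X))
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * Y @ np.linalg.solve(K, Y) - 0.5 * logdet - 0.5 * len(X) * np.log(2.0 * np.pi)
```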
While GP regression for fixed hyperparameters has a computational complexity of O(N^3_T) due to the inversion of the covariance or kernel matrix k(𝐗,𝐗), the joint estimation of the hyperparameters by the maximum likelihood approach increases the computational complexity to O(N_opt N_T^3), where N_opt is the number of optimization steps required to maximize the (log) marginal likelihood. §.§ Multi-task Gaussian process regression To be able to identify the vector-valued effort function z:ℝ^d→ℝ^D, we require vector-valued Gaussian processes. Here, we start from data 𝒯={(𝐱_i,𝐲_i)}_i=1^N_T, with 𝐲_i=[y^(1)_i ⋯ y_i^(D)]^⊤∈ℝ^D, for i=1,…, N_T. While 𝐗={𝐱_i}_i=1^N_T, is as before, we introduce 𝐳(𝐗) as the N_T· D-dimensional vector of outcomes of the vector-valued 𝐳 for all inputs. The entries of this vector are sorted such that first all first components, then all second components, etc. are concatenated, i.e. 𝐳(𝐗)=[z^(1)(𝐱_1) ⋯ z^(1)(𝐱_N_T) z^(2)(𝐱_1) ⋯ z^(2)(𝐱_N_T) ⋯]^⊤ . The N_T· D-dimensional data vector 𝐘 then has the same ordering, which is 𝐘=[y^(1)_1 ⋯ y^(1)_N_T y^(2)_1 ⋯ y^(2)_N_T⋯]^⊤ . In the vector-valued case, we have 𝐳∼𝒢𝒫(𝐦,𝐤) , where 𝐦:ℝ^d→ℝ^D is the vector-valued mean function. The kernel function 𝐤 is matrix-valued, hence 𝐤:ℝ^d×ℝ^d →ℝ^D×ℝ^D, where the entry (𝐤(𝐱,𝐱^'))_j,j^' gives the covariance between the jth and j^'th dimension of the output. While many different choices are possible, see <cit.>, we choose in this work the kernel that has been introduced in <cit.> as multi-task kernel. More precisely, see <cit.>, this is a separable kernel of type (𝐤(𝐱,𝐱^'))_j,j^' = k(𝐱,𝐱^')k_T(j,j^') , where we use a standard scalar kernel k, i.e. here the Gaussian kernel, to describe the covariance in the data, and the intertask covariance k_T to capture the covariance between the output dimensions. In <cit.>, k_T is simply a D× D matrix with entries that are treated as hyperparameters. To reduce the number of to be identified hyperparameters, <cit.> approximates this matrix is further by a low-rank approximation. We specifically use a rank-1 matrix, i.e. we have k_T(j,j^') = (𝐯𝐯^⊤)_j,j^' and need to estimate the D-dimensional vector 𝐯 as hyperparameter. This hyperparameter and any hyperparameter for the scalar kernel are collected in a set Φ. Similar to the single-valued case, we obtain 𝐳(𝐗) ∼𝒩(𝐦(𝐗),𝐤(𝐗,𝐗)) , where the N_T· D-dimensional mean vector 𝐦(𝐗) concatenates the vector-valued outputs as done for 𝐳(𝐗). The N_T· D × N_T· D covariance matrix 𝐤(𝐗,𝐗) is given by 𝐤(𝐗,𝐗) = [[ (𝐤(𝐗,𝐗))_1,1 ⋯ (𝐤(𝐗,𝐗))_1,D; ⋮ ⋱ ⋮; (𝐤(𝐗,𝐗))_D,1 ⋯ (𝐤(𝐗,𝐗))_D,D; ]] As before, a zero mean is assumed, giving 𝐳(𝐗) ∼𝒩(0,𝐤(𝐗,𝐗)). When making predictions, we consider again N_eval evaluation points 𝐗_∗ = {𝐱_∗,1, …𝐱_∗,N_eval}, giving the N_eval· D-dimensional vector 𝐳(𝐗_∗). We immediately consider the noisy case with zero mean, where the Gaussian noise model 𝐲_i = 𝐳(𝐱_i) + ϵ_i, ϵ_i ∼𝒩(0,Σ), i=1,…,N_T , and noise covariance Σ = (σ_1^2,…, σ_D^2), is applied. The observed data 𝐲_i conditioned to the GP 𝐳, the input 𝐱_i and the noise covariance Σ hence is distributed as 𝐲_i | 𝐳,𝐱_i,Σ,Φ∼𝒩(𝐳(𝐱_i),Σ) , where Φ collects all potential hyperparameters of the kernel, as indicated above. For the vector of observed outputs 𝐘, the likelihood is given by 𝐘 | 𝐳,𝐗,Σ,Φ∼𝒩(𝐳(𝐗),Σ) , with Σ = Σ⊗ I_N_T. 
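For the separable multi-task kernel with a rank-1 intertask covariance, the full (N_T·D)×(N_T·D) covariance matrix is a Kronecker product whose task-major block ordering matches the stacking of 𝐳(𝐗) above. A small illustrative sketch follows; in practice this is handled by GPyTorch's multi-task machinery (cf. the implementation details below), and the function name is ours.

```python
import numpy as np

def multitask_kernel_matrix(X1, X2, phi, v):
    """Separable multi-task kernel: block (j, j') equals (v v^T)_{j,j'} * k(X1, X2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2.0 * phi**2))   # data covariance, shape (N1, N2)
    K_T = np.outer(v, v)               # rank-1 intertask covariance, shape (D, D)
    return np.kron(K_T, k)             # shape (N1*D, N2*D), task-major ordering
```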
It can be shown that then the joint distribution of the training outputs Y and the predicted outputs Y_∗ is (with zero mean) [[ 𝐘; 𝐘_∗ ]] ∼𝒩( 0, [[ 𝐤(𝐗,𝐗) + Σ 𝐤(𝐗,𝐗_∗); 𝐤(𝐗_∗,𝐗) 𝐤(𝐗_∗, 𝐗_∗) ]] ) , with, e.g., 𝐤(𝐗,𝐗_∗) the N_T· D × N_eval· D kernel matrix combining evaluations on the training and the evaluation inputs. Condition finally gives 𝐘_∗| 𝐳, 𝐗_∗,𝐗, Σ, Φ∼𝒩( 𝐤(𝐗_∗,𝐗) [𝐤(𝐗,𝐗)+ Σ]^-1𝐘, 𝐤(𝐗_∗,𝐗_∗) - 𝐤(𝐗_∗,𝐗) [𝐤(𝐗,𝐗)+ Σ]^-1𝐤(𝐗,𝐗_∗)) . The hyperparameters can be found by maximum likelihood estimation with the log marginal likelihood, which is given, following <cit.>, by log p(𝐘|𝐗,σ^2,Φ) = -1/2𝐘^⊤ [𝐤(𝐗,𝐗)+ Σ]^-1𝐘 - 1/2log | 𝐤(𝐗,𝐗)+ Σ | - N_TD/2log 2π . Estimation of vector-valued GPs, when including the hyperparameter optimization, is very computaionally challenging, with O(N_opt N_T^3 D^3) operations. §.§ pH-DAE identification via MT-GP regression As anticipated in Section <ref>, the main idea behind the GP-based identification of pH-DAEs lies in the search for an effort function 𝐳(𝐱) that fits the modified problem E(𝐱) ·𝐱̇ - B(𝐱) 𝐮 = (J(𝐱)- R(𝐱)) 𝐳(𝐱) for given input-to-output data pairs { (𝐱(t_i), E(𝐱(t_i)) 𝐱̇(t_i) - B(𝐱(t_i)) 𝐮(t_i)) }_i=1^N_T . As before, one defines the inputs 𝐱_i := 𝐱(t_i), i=1,…,N_T and collects them in 𝐗. More importantly the output training data is defined as 𝐲_i := E(𝐱(t_i)) 𝐱̇(t_i) - B(𝐱(t_i)) 𝐮(t_i)) and collected in a vector of outputs 𝐘. While, with this data, one cannot learn the effort function directly, one can still learn its linear transformation (J(𝐱)- R(𝐱)) 𝐳(𝐱). As discussed in <cit.>, and applied to the setting of pH-ODEs in <cit.>, a linearly transformed GP is again a GP. This gives rise to a modified regression problem, where the linearly transformed function 𝐳_JR(𝐱):= (J(𝐱)-R(𝐱))𝐳(𝐱) is defined and replaced by a vector-valued Gaussian process 𝐳_JR∼𝒢𝒫(0,𝐤_JR) . In this construction, 𝐤_JR is the linearly transformed kernel 𝐤_JR(𝐱,𝐱^')= (J(𝐱)-R(𝐱))𝐤(𝐱,𝐱^')(J(𝐱)-R(𝐱))^⊤ . All ideas from above identically carry over to the linearly transformed kernel, immediately giving 𝐘_∗| 𝐳_JR, 𝐗_∗,𝐗, Σ, Φ∼𝒩( 𝐤_JR(𝐗_∗,𝐗) [𝐤_JR(𝐗,𝐗)+ Σ]^-1𝐘, 𝐤_JR(𝐗_∗,𝐗_∗) - 𝐤_JR(𝐗_∗,𝐗) [𝐤_JR(𝐗,𝐗)+ Σ]^-1𝐤_JR(𝐗,𝐗_∗)) . Hyperparameter optimization is done using the log marginal likelihood log p(𝐘|𝐗,σ^2,ϕ) = -1/2𝐘^⊤ [𝐤_JR(𝐗,𝐗)+ Σ]^-1𝐘 - 1/2log | 𝐤_JR(𝐗,𝐗)+ Σ | - N_TD/2log 2π . The only step that is missing is to recover 𝐳(𝐱) from the learned 𝐳_JR(𝐱). Similar to an idea in <cit.>, this is solved by conditioning. Hence, we derive for the joint distribution of the predicted outputs 𝐘_∗ and the to be found effort function outputs 𝐳(𝐗_∗) [[ 𝐘_∗; 𝐳(𝐗_∗) ]] ∼𝒩( 0, [[ 𝐤_JR(𝐗_∗,𝐗_∗) 𝐤_JR,I_D(𝐗_∗,𝐗_∗); 𝐤_I_D,JR(𝐗_∗,𝐗_∗) 𝐤(𝐗_∗, 𝐗_∗) ]] ) , with the D-dimensional identity matrix I_D in “mixing” kernels 𝐤_JR,I_D(𝐱,𝐱^')= (J(𝐱)-R(𝐱))𝐤(𝐱,𝐱^')(I_D)^⊤ = (J(𝐱)-R(𝐱))𝐤(𝐱,𝐱^') , 𝐤_I_D,JR(𝐱,𝐱^')= I_D𝐤(𝐱,𝐱^')(J(𝐱)-R(𝐱)) = 𝐤(𝐱,𝐱^')(J(𝐱)-R(𝐱))^⊤ . Then, the necessary mean of the Gaussian process for the effort function is found via conditioning to be 𝐳(𝐗_∗) = 𝐤_I_D,JR(𝐗_∗,𝐗_∗) 𝐤_JR(𝐗_∗,𝐗_∗)^-1𝐘_∗ . To summarize, the algorithm to identify the effort function for given data 𝒯={(𝐱(t_i),𝐮(t_i))}_i=1^N_T is as follows: * Construct the derivative data {𝐱̇(t_i)}_i=1^N_T * Assemble the set of inputs 𝐗 with 𝐱_i := 𝐱(t_i), i=1,…,N_T and the vector of outputs 𝐘 with 𝐲_i := E(𝐱(t_i)) 𝐱̇(t_i) - B(𝐱(t_i)) 𝐮(t_i)). * Select evaluation points 𝐗_∗. * Optimize hyperparameters using the log marginal likelihood log p(𝐘|𝐗,σ^2,ϕ). 
* Derive (via conditioning) the mean of the posterior 𝐘_∗| 𝐳_JR, 𝐗_∗,𝐗, Σ, Φ = 𝐤_JR(𝐗_∗,𝐗) [𝐤_JR(𝐗,𝐗)+ Σ]^-1𝐘, i.e. the predictor for the output data. * Estimate the mean effort function 𝐳(𝐗_∗) = 𝐤_I_D,JR(𝐗_∗,𝐗_∗) 𝐤_JR(𝐗_∗,𝐗_∗)^-1𝐘_∗. §.§ Implementation details The implementation of the identification of pH-DAEs largely relies on the use of the GPyTorch package <cit.> in version 1.11 as well as the packages PyTorch 2.0.1 and NumPy 1.26.0. Visualizations are done using Matplotlib 3.8.0. In contrast to the default of the GPyTorch package, all calculations are carried out in double precision. To construct the derivative data, we build for each dimension of the data {𝐱(t_i)}_i=1^N_T a Gaussian process without approximations (). The derivative data is then obtained by finding the exact derivatives of the Gaussian process using standard techniques of PyTorch. One challenge is the construction of the linearly transformed multi-task Gaussian process 𝐳_JR∼𝒢𝒫(0,𝐤_JR). Here, we start from the class , which is already present in GPyTorch, and derive a new class. In that class, the application of the kernel, i.e. the method, is implemented by evaluating the matrix-valued multitask kernel 𝐤(𝐱,𝐱^') from the parent class and then updating it according to equation (<ref>). For convenience, we also add methods to evaluate the kernels 𝐤_JR,I_D and 𝐤_I_D,JR. Similarly, we derive a new mean class for the linearly transformed Gaussian process. The constructed GP model is then derived from an , hence we do not apply any approximation techniques present in GPyTorch. Moreover, we can reuse the existing . A manual construction of the log marginal likelihood is not necessary, as GPyTorch does this automatically. Similarly, the minimization of the likelihood is done using standard techniques and the ADAM optimizer present in GPyTorch. The same holds for the estimation of the mean posterior 𝐘_∗| 𝐳_JR, 𝐗_∗,𝐗, Σ, Φ. To calculate the mean effort function 𝐳(𝐗_∗), we extract the kernel coefficient vector α= 𝐤_JR(𝐗_∗,𝐗_∗)^-1𝐘_∗ via from the trained linearly transformed multi-task GP model and apply to it the properly evaluated, previously implemented, kernel 𝐤_I_D,JR. § NUMERICAL RESULTS §.§ Model systems We consider two model systems from electrical network design and constrained multibody systems modeling. §.§.§ Electrical network test example The first model is given by an electrical network model. It consists of a circuit with two nodes and three network elements, see Figure <ref>. It has a a capacitance at node e_1 with charge function q_C=q(u_C)=u_C^2/2, a linear resistor with resistance R between nodes e_1 and e_2 and a time-dependent voltage source with input u(t) at node e_2 and current I_V through the voltage source. Using the charge-conserving modelling approach defined in <cit.> yields a semi-explicit index-1 pH-DAE with 𝐱=(x_1, x_2, x_3)^T=(q_C, u_2, I_V)^⊤ and nonlinear function 𝐳(𝐱)=(√(2 q_C),u_2, I_V)^⊤: E ·𝐱 = (J-R) 𝐳(𝐱) + B𝐮, 𝐲 = B^⊤𝐳(𝐱) with system matrices E=[ 1 0 0; 0 0 0; 0 0 0 ], J=[ 0 0 0; 0 0 1; 0 -1 0 ], R=[ G -G 0; -G G 0; 0 0 0 ], B= [ 0; 0; 1 ] and Hamiltonian H(𝐱)= 2/3√(2) x_1^3/2, which only depends on x_1. Note that J-R is regular for G≠ 0. One easily verifies the compatibility condition E^⊤𝐳(𝐱) = ∇ H(𝐱), which reads in this case: [ z_1(𝐱) ] = ∇_x_1 H(x_1), ∇_x_2,x_3 H(x_1) =0. 
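For the electrical network example, the constant matrices and the effort function can be written down directly; since J-R is regular for G≠0, the effort samples can also be reconstructed pointwise from the identity stated in the identification-problem section. The NumPy sketch below is illustrative and is not the GPyTorch implementation used for the actual experiments.

```python
import numpy as np

# Electrical network test example: constant system matrices (here G = 1).
G = 1.0
E = np.diag([1.0, 0.0, 0.0])
J = np.array([[0.0,  0.0, 0.0],
              [0.0,  0.0, 1.0],
              [0.0, -1.0, 0.0]])
R = np.array([[ G, -G, 0.0],
              [-G,  G, 0.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])

JR = J - R
assert abs(np.linalg.det(JR)) > 1e-12      # J - R is regular for G != 0

def z_true(x):
    """Effort function z(x) = (sqrt(2 q_C), u_2, I_V), for reference."""
    return np.array([np.sqrt(2.0 * x[0]), x[1], x[2]])

def reconstruct_z(xdot, u):
    """Pointwise reconstruction of z from E*xdot - B*u = (J - R) z."""
    rhs = E @ xdot - (B @ np.atleast_1d(u))
    return np.linalg.solve(JR, rhs)
```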
§.§.§ Constrained multibody system test example

The equation of motion for a constrained multibody system, described by the augmented Hamiltonian H(𝐪,𝐩,λ):= 1/2𝐩^⊤ M^-1𝐩 + U(𝐪) - g(𝐪)^⊤λ with constraint 0=g(𝐪), is given by the port-Hamiltonian DAE system for 𝐱=(𝐪,𝐩,λ) as diag(I,I,0) 𝐱̇ = ( J-R ) 𝐳(𝐱) + B 𝐮 with J = [ 0 I 0; -I 0 0; 0 0 0 ], R = [ 0 0 0; 0 0 0; 0 0 I ], B = [ 0; 𝐛; 0 ], 𝐳(𝐱) = [ H_𝐪(𝐪,𝐩,λ); H_𝐩(𝐪,𝐩,λ); H_λ(𝐪,𝐩,λ) ]= [ ∂ U(𝐪)/∂𝐪 - ∂ g(𝐪)/∂𝐪^⊤λ; M^-1𝐩; -g(𝐪) ], if an external force B 𝐮(t) is applied to the system. The compatibility condition diag(I,I,0)^⊤𝐳(𝐱) = ∇ H(𝐱) holds due to g(𝐪)=0.

For the simple rigid pendulum in the plane of length l, mass m and gravitational constant g̃ <cit.>, we have M= (m,m,m l^2/12), U(𝐪)= g̃ m q_2, g(𝐪)=[ q_1-l/2cos(q_3); q_2 - l/2sin(q_3) ], 𝐛= [ 0; 1; 0 ], if an external force is applied to the pendulum acting in y-direction. In the numerical experiments below, we use the simple mathematical pendulum in the plane of length l, mass m and gravitational constant g̃ <cit.> shown in Fig. <ref>, for which M= (m,m), U(𝐪)=g̃ m q_2, g(𝐪)=𝐪^⊤𝐪 - l^2, 𝐛= [ 0; m ], if an external force is applied to the pendulum acting in y-direction.

§.§ Benchmark data sets

The data utilized in our numerical results is generated by numerically integrating the respective test examples. For the first test example, we set G=1 and u(t)=a ·(1+sin(ω t)) and choose the parameters a= 1 and ω = 1. For these parameter choices, we first solve the scalar ODE ẋ_1 = G · (u(t)-√(2 x_1)), x_1(0)=2 on t ∈ [0, 10 π/ ω] using the explicit Euler scheme with constant step size 10 π/ (3000 ω). For the derived data { (t_i,x_1(t_i))}_i=0^N_T with t_0=0 and t_N_T=10 π, we then define the algebraic components {x_2(t_i)}_i=0^N_T and {x_3(t_i)}_i=0^N_T by x_2(t_i) := u(t_i), x_3(t_i) :=G ·( x_2(t_i)-√(2 x_1(t_i))). If needed, the derivative data {ẋ_1(t_i)}_i=0^N_T can be obtained by ẋ_1(t_i) = G ·(u(t_i)-√(2 x_1(t_i))).

For the second test example, we set m=l=g̃=1 and choose the input u(t)=βg̃cos(2 ·π/τ t) for τ = 1 and β = 1. For these parameter choices, we solve the equation of motion in minimal coordinates, given by the scalar second order ODE α̈= -1/l( g̃-u(t) ) sin(α(t)), α(0)=π/4, α̇(0)=0, on t ∈[0,30] using the explicit Euler scheme with step size h=0.01 applied to the transformed first order system. For the derived data { (t_i,α(t_i))}_i=0^N_T and { (t_i,α̇(t_i))}_i=0^N_T with t_0=0 and t_N_T=30, we then define the solution of the pH-DAE (<ref>) {q(t_i)}_i=0^N_T, {p(t_i)}_i=0^N_T and {λ(t_i)}_i=0^N_T by q(t_i) := l [ sin(α(t_i)); - cos(α(t_i)) ], p(t_i) := ml α̇(t_i) [ cos(α(t_i)); sin(α(t_i)) ], λ(t_i) :=m/2l( cos(α(t_i)) (u(t_i)-g̃) - l α̇(t_i)^2). If needed, the derivative data {q̇(t_i)}_i=0^N_T and {ṗ(t_i)}_i=0^N_T can be obtained by q̇(t_i) = 1/m p(t_i), ṗ(t_i) = [ 0; -m g̃ ] + 2 q(t_i) λ(t_i) + [ 0; m ] u(t_i).

§.§ Benchmark setup

In the construction of the Gaussian processes for the derivative data, we use the Gaussian kernel, where the hyperparameters are automatically determined by minimizing the respective negative log marginal likelihoods using the Adam optimizer with a learning rate of 0.1. The optimizer is stopped after 200 iterations, at which point the likelihood stagnates.
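For concreteness, the generation of the benchmark data for the first test example can be sketched in a few lines. The script below follows the description above (explicit Euler for the scalar ODE, then the algebraic components and, if needed, the derivative data); it is an illustration of the procedure, not the authors' original code.

```python
# Sketch of the benchmark data generation for the electrical network example,
# following the description above (explicit Euler, G = 1, u(t) = a(1 + sin(w t))).
import numpy as np

G, a_amp, w = 1.0, 1.0, 1.0
T_end = 10.0 * np.pi / w
N_T = 3000
h = T_end / N_T                      # = 10*pi / (3000*w)

u = lambda t: a_amp * (1.0 + np.sin(w * t))

t = np.linspace(0.0, T_end, N_T + 1)
x1 = np.empty(N_T + 1)
x1[0] = 2.0
for i in range(N_T):                 # explicit Euler for x1' = G*(u - sqrt(2 x1))
    x1[i + 1] = x1[i] + h * G * (u(t[i]) - np.sqrt(2.0 * x1[i]))

x2 = u(t)                            # algebraic component u_2 = u(t)
x3 = G * (x2 - np.sqrt(2.0 * x1))    # I_V from the second row of the DAE
x1dot = G * (u(t) - np.sqrt(2.0 * x1))   # derivative data, if needed

X = np.stack([x1, x2, x3], axis=1)                       # inputs x(t_i)
Y = np.stack([x1dot, np.zeros_like(t), -u(t)], axis=1)   # outputs E x' - B u
print(X.shape, Y.shape)
```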
Note that the approximation of the derivatives has an impact on the prediction. It is necessary to distinguish the case of first calculating the derivatives on the full dataset and then selecting subsets of this dataset for training from the case of first extracting these subsets and only afterwards applying the derivative calculation to the subsets. While in the first case a consistent (good) approximation of the derivative data is obtained, the derivatives in the second case are constructed from potentially much more poorly resolved datasets, leading to potentially strongly deteriorated derivative approximations. Still, the second case is the more realistic setting from the practical perspective of real data. While our main numerical error analysis of the identification method in the next subsection works with idealized data, i.e. the first case, we provide a separate analysis of the impact of the derivative data approximation in Section <ref>.

The main regression task using the linearly transformed multi-task GP is also carried out with the Gaussian kernel. As before, the set of hyperparameters is automatically found with 200 steps of the Adam optimizer with learning rate 0.1. Errors are always reported as root mean square errors (RMSE). To test the prediction error of the constructed Gaussian processes, N_test samples of the original dataset are randomly sub-selected and set aside as test set 𝒯_test, which is kept identical throughout the full analysis. Prediction errors are then reported using learning curves, i.e. we report the RMSE of the model on the test set for a growing number of training samples in a double-logarithmic plot. The training samples are randomly sub-selected from the dataset 𝒯∖𝒯_test, such that in a single test run N_T=2,4,8,16,… samples are extracted that form nested sets. The RMSE for each of the training sets then forms the data for a single learning curve. Since each learning curve is subject to the randomization of the data selection, the process is repeated such that five such learning curves are constructed and averaged. In general, we expect the learning curves to decrease for a growing number of training samples. However, it may happen that – starting from a given amount of training data – the model no longer improves. This can have various reasons, including noise in the data, the selected regularization, etc., which are discussed where necessary.

§.§ Identification error analysis: Electrical network test example

In the following, we study the identification error that we obtain when using the newly introduced pH-DAE effort function identification method. The identification comprises two parts. In a first step, the GP 𝐳_JR:=(J-R) 𝐳 is estimated, which is the main learning task. The output samples in this electrical network test example are 𝐲_i=(ẋ_1(t_i),0,-u(t_i)), with i=0,…,N_T. When analyzing this data, one observes that only the first component z_JR,1(𝐱) depends on data (ẋ_1) that is “noisy”; more precisely, it comes from the solution of a discretized pH-ODE, as discussed in Section <ref>. The output data for the second component z_JR,2(𝐱) is constant zero, while the third trained component 𝐳_JR,3(𝐱) only depends on the analytically given input. This has an impact on the prediction error reported for a growing number of training samples as learning curve in Figure <ref>, on the left-hand side. While the errors of the second and third component consistently decrease until they reach an error level of about 10^-5 with 128 training samples, the error in the first component already stagnates in an error range of 10^-2 for only 16 samples. The stagnation of the error at the error level of 10^-5 comes from the fact that the optimizer selects the regularizing noise variance σ^2 in the range of 10^-5 for the various runs.
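For completeness, the learning-curve protocol described in the benchmark setup can be summarized by the following sketch. The function fit_and_rmse is a placeholder for the actual GP training and prediction pipeline and is assumed here to return the RMSE on the held-out test set; the nested training subsets and the fixed test set follow the description above.

```python
# Sketch of the learning-curve protocol. `fit_and_rmse` is a placeholder for
# the GP training / prediction pipeline (assumed to return the test RMSE).
import numpy as np

def learning_curves(X, Y, fit_and_rmse, n_test=200,
                    sizes=(2, 4, 8, 16, 32, 64, 128), n_repeats=5, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]

    # fixed test set, kept identical throughout the analysis
    test_idx = rng.choice(n, size=n_test, replace=False)
    pool = np.setdiff1d(np.arange(n), test_idx)

    curves = np.zeros((n_repeats, len(sizes)))
    for r in range(n_repeats):
        perm = rng.permutation(pool)
        for k, n_train in enumerate(sizes):
            train_idx = perm[:n_train]          # nested training sets
            curves[r, k] = fit_and_rmse(X[train_idx], Y[train_idx],
                                        X[test_idx], Y[test_idx])
    return np.array(sizes), curves.mean(axis=0)  # averaged learning curve
```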
The surprisingly early stagnation of the first component at about 10^-2 perfectly matches the numerical noise of the state space data. As a reminder, we generate the training data using the first-order accurate explicit Euler method with a step size of 10^-2. This noise is directly translated into the prediction error of the method for the first component.

In a second step, the actual effort function 𝐳(𝐱) is derived. Evaluation errors for the effort function on the test data are reported in Figure <ref>, on the right-hand side. The matrix (J-R) couples the first two components of the effort function, while the third component of the identified function 𝐳_JR(𝐱) is connected to the second component of the effort function. As a consequence, the first two components of 𝐳(𝐱) show a similar error stagnation in the range of 10^-2 as the first component of 𝐳_JR(𝐱), while the third component shows the exact same error behavior as 𝐳_JR,2(𝐱), which is the expected behavior. Overall, we obtain a prediction error in the range of the discretization error, hence the data error, of the training data with only 16 training samples, which is quite a significant result.

To get an intuition for the identified effort function, we evaluate in Figure <ref> the effort function trained on N_T=32 training samples, which are visualized by black crosses, on all data points and report both the evaluated effort function and its point-wise error with respect to the exact solution. Visually, there is no way to distinguish the model from the exact solution. The error is rather equally distributed over the full time interval. In the first and second component of the effort function, it becomes noticeable that the random selection of the training samples did not place training samples towards the beginning of the time interval. Therefore, the effort function initially shows an increased error, which is not visible in the full RMSE due to averaging. To avoid such artifacts, one may want to enforce a more even distribution of training samples.

In a further study, depicted in Figure <ref>, we limit the random selection of the training data to the time interval t∈ [0,20] and still evaluate the model on the full available state space data. Thereby, we can observe how well the model extrapolates using the given data. Even in this more challenging analysis, the model with N_T=32 training samples matches the exact solution with a similar error range as before. In fact, this is not a surprising result, as the effort function does not depend directly on time but only on the state of the system. By picking samples from the time interval t∈ [0,20], we already properly cover the state space (of this limited test case), hence the effort function gets properly resolved.

§.§ Identification error analysis: Constrained multibody system example

We repeat the same study as for the electrical network test case for the constrained multibody system, i.e. the pendulum, to verify that the approach is generally applicable to different applications. As a small deviation from the electrical network test case, we here approximate the derivative information for 𝐱̇ by simple finite differences. All other parameters remain identical. In Figure <ref>, the learning curves for the transformed effort function 𝐳_JR and the finally identified effort function 𝐳 are depicted.
The prediction error of the actually “trained” transformed effort function 𝐳_JR decreases until reaching the discretization error limit for the first four components. Only the error for the fifth component reaches a lower level. The reason for this is that the first four components rely on the (discretized) state space information and its time derivative, hence they are subject to the time discretization error. The fifth component depends only weakly on this data via the input features to the model, while in fact approximating the constant zero function. This can apparently be done at higher accuracy. Overall, the discretization error level is reached with about N_T=64 training samples. The result for the finally identified effort function, shown in the same Figure <ref>, is very similar. Again, all components have reached the discretization error limit with about N_T=64 training samples.

Beyond that, Figure <ref> depicts the predicted effort function output for N_T=32. Visually, the predicted effort function output and the exact solution are in full agreement, while the error for this model, depicted in the same figure, is typically in the range of the discretization error of the state space data, i.e. 10^-2. In Figure <ref>, the results of the final extrapolation study are presented. With N_T=32 training samples taken from the time interval t∈ [0,20], the effort function is still properly extrapolated to the unobserved time interval t∈ [20,30].

§.§ Influence of derivative approximation

We return to the electrical network test example in order to analyze the influence of the approximation of the derivatives 𝐱̇ on the identification of the effort function. To this end, we redo the numerical experiments with learning curves from Section <ref>. However, this time we compare results for (1) analytically exact derivative data, (2) derivative data generated with GPR on the full data set, as used before, and (3) derivative data generated with GPR only on the individual training sets. We motivated earlier, in Section <ref>, that the third case is indeed the most realistic one; however, it is expected to introduce a higher error in the derivative approximation. The results for this study are depicted in Figure <ref> for the first component of the transformed effort function z_JR,1 and for the first component of the finally identified effort function z_1. When using exact derivative data, the error no longer stagnates at the discretization error level of 0.01. This is an expected result, as the discretization error has no influence on the analytically derived derivative. Of higher relevance is the comparison between the use of GPR derivative approximations on the full data set and on the training data set only. Indeed, even though the derivatives are generated in the latter case from much fewer points, the prediction error, for a growing number of training samples, also reaches the discretization level. This shows that, in this test case, the discretization and the prediction error are properly balanced: both the prediction error and the derivative approximation error decay in a similar way for growing training set size, leading to an overall converging scheme. Still, it needs to be noted that the more realistic GPR derivative approximation on only the training set requires more training samples for a model of similar accuracy, compared to the other GPR approximation approach.
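For reference, the following sketch illustrates the two derivative-approximation routes discussed in this work for a single state component: a GP-based derivative and simple finite differences. Whereas the paper differentiates the fitted GPyTorch model by automatic differentiation, the sketch uses the closed-form derivative of an RBF predictive mean; the hyperparameter values and the sine test signal are illustrative assumptions only.

```python
# Sketch of derivative-data construction for one state component: closed-form
# derivative of an RBF-GP predictive mean vs. simple finite differences.
import numpy as np

def gp_fit_rbf(t, x, lengthscale=0.5, outputscale=1.0, noise=1e-4):
    # kernel matrix and representer weights alpha = (K + noise*I)^{-1} x
    K = outputscale * np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / lengthscale ** 2)
    alpha = np.linalg.solve(K + noise * np.eye(len(t)), x)
    return alpha, lengthscale, outputscale

def gp_derivative(t_train, alpha, lengthscale, outputscale, t_eval):
    # d/dt* mu(t*) = sum_i alpha_i * d k(t*, t_i) / d t*
    diff = t_eval[:, None] - t_train[None, :]
    dk = -diff / lengthscale ** 2 * outputscale * np.exp(-0.5 * diff ** 2 / lengthscale ** 2)
    return dk @ alpha

t = np.linspace(0.0, 10.0, 200)
x = np.sin(t)                                   # stand-in for one state trajectory
alpha, ls, osc = gp_fit_rbf(t, x)
xdot_gp = gp_derivative(t, alpha, ls, osc, t)   # GP-based derivative estimate
xdot_fd = np.gradient(x, t)                     # simple finite differences
print(np.max(np.abs(xdot_gp - np.cos(t))), np.max(np.abs(xdot_fd - np.cos(t))))
```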
§ CONCLUSIONS

In this work, we have introduced an approach for the identification of the effort function of a nonlinear pH-DAE system from input and state space data. The effort function is replaced by a multi-task Gaussian process, which allows its identification from noisy data. In our results, we have shown that the developed approach succeeds in identifying the nonlinear effort function in two model examples up to the discretization error of the provided training data. First tests indicate good generalization properties beyond the provided training data. It is worth noting that the proposed approach using Gaussian processes has no difficulty in identifying pH-DAE systems with a higher index: the constrained multibody system has index 3. Future work needs to cover larger and more realistic test cases, in which larger parts of the pH-DAE, i.e. not only the effort function, are identified. While the presented study could easily be executed on a laptop, future work also needs to cover fast methods for multi-task GPR in order to reduce the high computational complexity for larger-scale applications.
http://arxiv.org/abs/2406.19122v1
20240627120854
Mitigation of fine hydrophobic liquid aerosols by polydispersed uncharged and charged water droplets
[ "Debabrat Biswal", "Bahni Ray", "Debabrata Dasgupta", "Rochish M. Thaokar", "Y. S. Mayya" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
IITD]Debabrat Biswal IITD]Bahni Raymycorrespondingauthor IITD]Debabrata Dasgupta IITB]Rochish M. Thaokar IITB]Y.S. Mayya [IITD]Department of Mechanical Engineering, Indian Institute of Technology Delhi, Hauz Khas, New Delhi - 110016, India [IITB]Department of Chemical Engineering, Indian Institute of Technology Bombay, Powai, Maharashtra - 400076, India [mycorrespondingauthor]Corresponding authors: bray@iitd.ac.in(Bahni Ray), Tel: (91)-11-2659-6393, Fax: (91)-11-2658-2053 § ABSTRACT One of the harmful contaminants in the atmosphere, which negatively affects the well-being of both humans and animals, is the suspended respirable particles. The most difficult aspect of the study is now removing these fine respirable particles from the atmosphere. This study investigates the scavenging phenomenon of fine hydrophobic liquid aerosols (10 nm to 1050 nm) by uncharged and charged droplets in a self-made scaled test rig. In this study, a hollow cone nozzle with a 1 mm orifice diameter uses tap water to disperse liquid into fine droplets. The paraffin oil and Di-Ethyl-Hexyl-Sebacat (DEHS) solution are aerosolized to be scavenged by water droplets. This research employs a high-speed imaging technique and theoretical modeling approach to measure the size distribution and charge acquired by water droplets respectively. The findings of this study show that uncharged droplets dispersed through a hollow cone nozzle exhibit a scavenging efficiency of approximately 55%. On the other hand, charged droplets demonstrate significantly higher scavenging efficiency, about 99%. The investigation also reveals that the scavenging efficiency of both uncharged and charged droplets at lower potentials (1 and 2 kV) is nearly identical. Charged droplets liquid aerosol DC electric field § INTRODUCTION One of the primary concerns for both human health and the environment is the release of particulate matter carried in flue gases emitted from industrial and automotive exhausts <cit.>. Among all the pollutants, the most detrimental to air quality are the very fine aerosols found in solid, liquid, and gaseous forms. In recent years, the concentration of these fine (≤ 2 μ m) and ultrafine (≤ 0.2 μ m) <cit.> particles in the atmosphere has surpassed that of micron-sized particles <cit.>. These tiny particles can directly infiltrate the bloodstream and human lungs. Prolonged exposure to unhealthy environments leads to various incurable ailments, such as cardiovascular and pulmonary diseases <cit.>. Due to the adverse catalytic effect of such pollutants on the environment, researchers and scientists are facing a challenging task in controlling and reducing the concentration of fine and ultra-fine particles in the atmosphere. This challenge has driven them to devise various mechanisms and equipment to tackle the issue effectively <cit.>. Several conventional air filtration methods have been developed to eliminate these hazardous particles from the environment. The techniques such as electrostatic precipitators (ESP) <cit.>, cyclones <cit.>, wet scrubbers <cit.>, and fibrous filters <cit.> have been employed for this purpose. Additionally, HEPA (High-Efficiency Particulate Air) filters are commonly used to capture tiny particles. Through various mechanisms, such as inertial impaction, inertial interception, and Brownian diffusion <cit.>, these tiny particles are effectively removed in the HEPA filter. Nevertheless, with extended use, the pores of the HEPA filter become blocked, resulting in a substantial drop in pressure. 
This increase in pressure ultimately leads to a reduction in the filter's effectiveness over time, necessitating its eventual replacement <cit.>. The Stokes number and Reynolds number are crucial in influencing inertial impaction, while the Peclet number and Reynolds number affect the diffusion mechanism. The conventional mechanisms, such as cyclone separators, and wet scrubbers, are effective for larger particle sizes, typically exceeding 10 μm. Electrically driven devices like electrostatic precipitators, electro-spray-based air purifiers, and charged droplets have been developed to remove the fine particulate matter smaller than 2 μm. The functioning of electrostatic scrubbers and electro spray is based on the basic principle of Coulomb attraction between charged droplets and particles. The particle removal mechanism using electrospray and ESP has been the subject of numerous experimental and numerical studies. These studies are delineated in the next section. T. Gary et al. <cit.>, conducted research on an electrospray technique that utilizes zero pressure drop and ozone for the removal of small aerosols through electrospray wick sources. The study revealed that this electrospray-based purification method performed effectively for a wide range of particle sizes while consuming less water and power. The efficiency of air purification is influenced by particle size, air flow rate, and system design factors. W. Balachandran et al. <cit.>, performed experiments to remove uncharged cigarette smoke and charged smoke obtained through a corona charger using fine droplets produced by a rotary atomizer. The results showed that charged droplets significantly enhanced the removal efficiency compared to uncharged droplets. Specifically, the removal efficiency was found to be four times higher for charged smoke particles compared to uncharged smoke. An experimental investigation was carried out to assess the particle removal efficiency of ESP and electrospray by K. Jung–Ho et al. <cit.>. The combined performance of electrospray and ESP yielded improved results in particle collection. Furthermore, it was observed that the utilization of ESP in conjunction with electrospray consumed less energy. W. Tessum et al. <cit.> investigated the influence of surfactant in the spray, particle charge, and diameter on the scavenging efficiency. The results showed that larger particles (with a diameter exceeding 2 μm) were more efficiently removed compared to smaller ones. Furthermore, employing a charged spray with surfactant (opposite polarity to the particle charge) led to the significant removal of highly charged particles. S. Singh et al. <cit.>, experimentally studied the removal of three different types of particles i.e., smoke, NaCl, and metallic aerosol using a cloud of charged droplets generated by an electrohydrodynamic atomizer. The investigation revealed that the scavenging efficiency for smoke particles and NaCl varied from 19% to 90% respectively within the size range of 30 to 200 nm. Additionally, metallic particles exhibited significant scavenging effectiveness for sizes larger than 30 nm. A. Jaworek et al. <cit.> studied the trajectory and deposition of small particles by charged droplets. It was observed that uncharged droplets were ineffective for removing particles smaller than 10 μm. However, charged droplets demonstrated efficient capturing of such particles, with smaller droplets exhibiting notably high collection efficiency. 
An investigation was conducted on the removal of particulate matter emitted from diesel engines using an electrostatic water spraying scrubber by H. H. Tran et al. <cit.>. The study revealed that as the engine's load increased, the concentration of particulate matter also increased, and the removal efficiency by uncharged spray also improved. Additionally, it was observed that the scavenging efficiency of particles by charged droplets increased by 4 to 7 times compared to uncharged droplets under constant engine load conditions. S. Lipeng et al. <cit.>, experimentally examined how particle morphology and hydrophilicity influenced the scavenging efficiency of particles by uncharged and charged droplets. Uncharged droplets achieved higher collection efficiency for nearly spherical particles in comparison to agglomerated particles, but this efficiency decreased when the droplets were charged. Moreover, charging the droplets improved the removal efficiency of hydrophobic particles. The removal of particles by ESP and electrospray has been the subject of numerous experimental and numerical research. The majority of the existing literature has studied the removal of bigger-sized particles. However, it is rare to find research that concentrates on scavenging nanometre-sized particles. According to the literature, in order to remove nanometre-sized particles, a higher voltage should be used. This may cause electric breakdown and the atmosphere may be harmed by the creation of ozone. There should be the optimum working parameter (applied potential, spray parameter, and selection of aerosol) to make an effective electrospray-based air cleaner for fine particles. To preserve these factors for future generations, the present experimental work shows how charged and uncharged droplets remove hydrophobic liquid aerosol in the nanometre size range. This research also emphasizes determining the removal efficiency of aerosol along with the water consumption required for this study. § EXPERIMENTAL SETUP AND METHODOLOGY §.§ Experimental setup The layout of the experimental apparatus used to carry out this study is depicted in Figure <ref>. This experimental configuration comprises five distinct sections, namely: (1) An experimental chamber designed for particle droplet interaction, (2) Aerosol generation and feeding setup, (3) A mechanism for creating sprays, (4) The droplet charging system, and (5) A sampling unit for measuring particle size and number distribution. We use a transparent 5 mm thick acrylic sheet to make the experimental chamber for droplet and particle interaction. The scavenging chamber has dimension of 600 mm × 200 mm × 200 mm (height × length × width). The volume of the chamber is a maximum of 24 liters. A spraying hollow cone nozzle with a 1 mm orifice is mounted on the top of the experimental chamber. A copper ring with a 40 mm diameter is placed below the nozzle at a distance of 10 mm separation. The nozzle and copper ring are fixed so that the center of the nozzle and ring coincide with each other. The cross-flow of aerosols relative to the droplet falling direction occurs into and out of the chamber through 8 mm vents. The 8 mm vents are provided on two opposite side walls of the chamber. The 8 mm vents are located at a distance of 500 mm from the top of the chamber. To avoid the accumulation of water coming from the spray, a 20 mm drainage passage is provided at the bottom of the chamber. 
The paraffin oil and Di-Ethyl-Hexyl-Sebacat (DEHS) solution (Sigmaaldrich) are aerosolized using the Topas ATM 228 atomizer. The aerosol concentration can be varied by changing the operating vapor pressure of the aerosol generator i.e., 0-800 hPa. As the generated aerosol possesses low pressure and low velocity, an external blower (BOSCH GBL620 Air Blower) with variable flow rate supplies pressurized air to feed the aerosol into the experimental chamber. The flow rate of compressed air is measured using a 10 liter/min air rotameter. §.§ Methodology Aerosol-air-laden flows into the experimental chamber through an inlet. After supplying the aerosol to the experimental chamber, a period of around 30 minutes is kept to maintain the steady-state flow of the aerosol. once the aerosol comes to a steady state, the initial concentration and size distribution of the aerosol are measured using the GRIMM Scanning Mobility Particle Sizer (SMPS) with Condensation Particle Counter (CPC) Model 5416. Water droplets are dispersed from the top of the experimental chamber through a hollow conical nozzle on the aerosol. Here, water is pumped from a water reservoir to the hollow cone nozzle using a Crompton self-priming pump (Model mini strong I). The water flow rate and pressure are measured to be 0.6 lpm and 2.5 bar respectively, using a rotameter (maximum reading of the rotameter is 10 lpm) and pressure gauge (maximum reading is 8 bar). After a continuous spray, a 30-minute time period is similarly maintained to get a steady state of particle removal. Here to avoid the entrapment of water droplets into the SMPS, a diffusion dryer is kept at the inlet of SMPS. Then the final aerosol concentration after spray impinging on it is measured. using SMPS. To study the effect of the electric field in the spray on aerosol removal, the droplet is charged. To charge the water droplets, the ring electrode with a diameter of 40 mm is grounded, and the spray nozzle (hollow cone nozzle/needle) is connected to the positive polarity of a high-power DC source (10 kV, 10 mA, dual polarity, Ionics Pvt. Ltd). When the charged droplets impinge on the aerosol, the remaining aerosol concentration is sampled by SMPS. § RESULTS AND DISCUSSIONS The electrostatic force of attraction and the size of the droplets have a significant role in removing fine aerosols in a wet electrostatic scrubber. This study focuses on analyzing (1) the droplet size distribution using a high-speed imaging technique, (2) the non-uniform electric field distribution and charge acquired by droplets using theoretical modeling, and (3) the effectiveness of the uncharged and charged droplets in removing aerosols. §.§ Analysis of droplet size distribution using image processing technique A high-speed camera (Photron MINI UX 100) connected to a high-magnification lens (Navitar 12X) is used to visualize and record the spray phenomenon to evaluate droplet size distribution. Here the spray is generated using a hollow cone nozzle with a flow rate of 0.6 liter/min and a pressure of 2.5 bar. The droplets are charged using a high-power DC source with various applied potentials from 1 kV to 9 kV with a 1 kV step-size increment. An LED light source is utilized to illuminate the spray regime. A series of spray images at different locations is visualized and captured using a high-speed camera with various fps (frames per second) as shown in Figure <ref>. During the analysis of droplet size distribution, the following assumptions are considered. 
* There is no evaporation of water droplets. * No entrapment of air molecules in droplets. * No coalescence occurs among the droplets. * Droplets are assumed to be spherical. Figures <ref> (a) and (b) show the image of the spray captured at 500 fps and 4000 fps respectively. In this image, the droplets appear distorted and spike. Therefore at lower fps (500 and 4000 fps), it is difficult to determine the size of the droplets generated through the spray. Figure <ref> (c) represents the images of the droplets captured at a camera speed of 32000 fps. This image distinguishes the droplet and the shape of the droplet looks spherical. However, the series of images captured at higher camera speed (32000 fps) are considered for droplet size distribution with the help of image processing method. Subsequently, the high frame rate (32000 fps) images are analyzed using image processing software known as "Image J (version 2.9)". The steps followed to do the droplet size analysis are discussed below. The hundred images are selected from different spray regimes for image processing. Each raw image is imported to "Image J" software and converted to an 8-bit binary image. The image is subtracted from a reference image and processed for auto-thresholding. The binary image is filtered through a "fill hole" algorithm to avoid empty holes inside the droplets that appear due to illumination effects. The area of each spherical droplet is determined using the "particle analyze" algorithm with maintaining a sphericity ratio between 0.9 - 1. The roundness of an object equals 1 for a perfect spherical and < 1 otherwise. The equivalent diameter of each droplet is determined using equation <ref> <cit.>. D_eq=2√(A/π) Here D_eq and A are the equivalent diameter of each droplet and each detected area respectively. Here we initially determine the droplet diameter in terms of pixels. To obtain the diameter of the droplets from pixels to physical units (mm or μm), the camera and lens are calibrated using a measuring scale. After the diameter of the droplets is determined and converted into μm. The size distribution of the droplets is plotted as shown in Figure <ref>. It is observed that this distribution graph follows a normal distribution pattern. Figure <ref> reveals that the maximum size of the charged droplets at 9 kV falls within the range of 70 – 150 μm. However, since the spray is mechanically generated using a hollow cone nozzle rather than electrically driven, the droplet size is not significantly changed at various applied potentials (1 - 9 kV). But when the spray is generated because of an external electric field, the droplet size is the function of the applied potentials. When the applied potentials are increased, the droplet size decreases. As the applied potential increases, the electric field also increases which causes enhancement of electric stress. The electric stress leads to the breakage and fragmentation of the droplets into tiny ones at higher applied potential. The high electric stress causes a reduction in droplet diameter <cit.>. As from the existing collision mechanisms theory, all the collision efficiency inversely depends on the droplet diameter. Consequently, when the applied potential is increased, the droplet diameter decreases, leading to a higher collision efficiency. 
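As a complement to the ImageJ workflow described above, the conversion from detected droplet areas to equivalent diameters via D_eq = 2√(A/π) and the subsequent binning into a size distribution can be sketched as follows. The synthetic area values and the pixel-to-micron calibration factor are placeholders and not values from this study; in practice the areas would come from the ImageJ "Analyze Particles" export.

```python
# Illustrative post-processing of droplet areas into an equivalent-diameter
# distribution. The area data and calibration factor below are placeholders.
import numpy as np

def equivalent_diameters_um(areas_px2, um_per_px):
    """D_eq = 2*sqrt(A/pi), converted from pixels to microns."""
    d_px = 2.0 * np.sqrt(np.asarray(areas_px2) / np.pi)
    return d_px * um_per_px

# Placeholder for the ImageJ "Analyze Particles" export (areas in pixel^2)
areas_px2 = np.random.lognormal(mean=6.0, sigma=0.4, size=500)
um_per_px = 2.5   # placeholder camera/lens calibration (microns per pixel)

d_um = equivalent_diameters_um(areas_px2, um_per_px)
counts, edges = np.histogram(d_um, bins=np.arange(0.0, 200.0, 10.0))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:5.0f}-{hi:5.0f} um: {c}")
```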
§.§ Determination of charge induced on the droplet through mathematical modeling To remove fine particles through the electrostatic scrubbing method, the charged spray can be generated through various conventional mechanisms, (1) through the method of induction <cit.>: the charge is induced due to high voltage applied on the surface of a jet of water ejected from mechanically forced nozzle <cit.>. (2) Electrospraying: The spray is produced from a capillary solely through the electrodynamic interaction of the electric field acting on the liquid's surface <cit.>. (3) Corona charging: the droplets produced by the nozzle traverse through ionic current created by an electric discharge <cit.>. Out of the above three mechanisms of charged spray generation, the charging of droplets through induction is employed for this experimental work. After droplet size distribution is determined, the second key parameter of this study is to find out the charge of the droplets. Here the droplets get charged due to the method of induction. Different experimental studies are utilized to measure the charge of the droplets. Due to the constraints of experimental equipment in this study, a theoretical approach is adapted to evaluate the charge of the droplets. A mathematical expression given in Eq. <ref> is employed to determine the charge of the droplets <cit.>. q = 4πϵ_0 (1.65Ea) Where, ϵ_0 is the permittivity of free space, E is the electric field, and a is droplet radius. Due to the non-uniformity of the electric field in the pin-ring electrode configuration, evaluating the electric field strength is not a straightforward task. The expression for the electric field distribution <cit.> in this configuration is shown in Eqs. <ref>, <ref>. E_y(r,y) = a V ∑_i=0^∞[ξ_i (y-z_i)/[(y-z_i)^2+r^2]^2/3 - ξ_i (y+z_i)/[(y+z_i)^2+r^2]^2/3] E_r(r,y) = a V ∑_i=0^∞[ξ_i/[(y-z_i)^2+r^2]^2/3 - ξ_i/[(y+z_i)^2+r^2]^2/3] Here a is the radius of the droplet, V is the applied potential, z_i and ξ_i are the axial position and normalized charge of the i^th image charge given by recurrent relations. z_i = a^2/z_0+z_i-1 ξ_i = q_i/q_0 To obtain the charge values of droplets, the mathematical expressions representing the electric field along both the axial and radial directions are numerically solved in MATLAB. The estimated charges of various-size droplets at different applied potentials are given in Table <ref>. It is observed that with the enhancement of applied potential and droplet size, the charge of the droplet increases as shown in Table <ref>. 140-micron size droplet acquires a maximum charge of 0.8 pC at 9 kV applied potential. 70-micron size droplet acquires a minimum charge of 0.02 pC at an applied potential of 1 kV. tableCharge acquired by droplets: V is the applied potential, q_70, q_100, q_120, and q_140 are the charge acquired by 70, 100, 120, and 140 μm size droplets. 
V (kV) q_70 (C) q_100 (C) q_120 (C) q_140 (C) 1 2.26× 10^-14 4.61× 10^-14 6.65× 10^-14 9.05× 10^-14 2 4.51× 10^-14 9.22× 10^-14 1.33× 10^-13 1.81× 10^-13 3 6.77× 10^-14 1.38× 10^-13 1.99× 10^-13 2.72× 10^-13 4 9.02× 10^-14 1.84× 10^-13 2.66× 10^-13 3.62× 10^-13 5 1.13× 10^-13 2.31× 10^-13 3.32× 10^-13 4.53× 10^-13 6 1.35× 10^-13 2.77× 10^-13 3.99× 10^-13 5.43× 10^-13 7 1.58× 10^-13 3.23× 10^-13 4.65× 10^-13 6.34× 10^-13 8 1.80× 10^-13 3.69× 10^-13 5.32× 10^-13 7.24× 10^-13 9 2.03× 10^-13 4.15× 10^-13 5.98× 10^-13 8.15× 10^-13 §.§ Scavenging phenomenon of liquid aerosol by charged and uncharged droplets In the wet electrostatic scrubber, aerosol-laden air collides with the charged droplets when it passes from one region to another. During the collision between the aerosol and droplets, the aerosols are captured due to inertial impaction, interception, Brownian diffusion, and electrostatic force of attraction <cit.>. Once the aerosols are attached to the droplets, the removal mechanics of the aerosol is entirely controlled by the dynamics of the droplets and electrostatic force. This current research focuses on the removal efficiency of two different hydrophobic aerosols. The findings of this study are elucidated in the subsequent sections. §.§.§ Comparison of number and mass distribution of test aerosols In this study, paraffin oil and DEHS solution are atomized for fine liquid aerosol generation. These aerosols are effectively used to determine the removal efficacy of uncharged and charged droplets. Table <ref> presents the physical and chemical properties of these liquid solutions. These solutions are hydrophobic. In this study, both solutions are compared based on the number and mass concentration of the aerosol produced by the aerosol generator. The two solutions are aerosolized at different vapor pressures of the aerosol generator. If the solution is aerosolized at high vapor pressure, the aerosol concentration will be more and vice versa. Therefore, the generation of aerosol concentration strongly depends on the operating vapor pressure of the aerosol generator. In this experiment, a series of aerosol concentrations is considered by varying vapor pressure from 100 to 500 hPa with a 50 hPa interval. Out of all the different vapor pressures, the mass and number concentration of aerosols generated from paraffin oil and DEHS solution are sampled using SMPS at 300 hPa vapor pressure. The mass and number concentration of aerosol produced from two solutions are depicted in Figure <ref> and <ref>. It is observed that when the aerosol generator atomizes the DEHS solution, high concentrations of aerosol are generated as shown in Figures <ref> and <ref>. A comparatively lower aerosol concentration is produced when paraffin oil is considered. When the DEHS solution is aerosolized at 300 hPa vapor pressure, the maximum number and mass concentration of aerosol produced are 2.33× 10^8 1/cc and 3.14 × 10^7 μ g/m^3 respectively. Whereas, in the case of paraffin oil, the maximum number and mass concentration of aerosol generated at the same vapor pressure were 7.44× 10^7 1/cc and 8.5× 10^6 μ g/m^3 respectively. Figure <ref> shows that when DEHS solution is aerosolized, the maximum mass of the aerosol is produced in the size range of 800 - 1000 nm. When paraffin oil is aerosolized, the maximum mass of aerosols is produced in the 600 - 800 nm size range. 
Figure <ref> shows that the maximum number of aerosols are produced with the size range of 600 - 800 nm and 400 - 600 nm for DEHS solution and paraffin oil respectively. This concludes that high concentrations of aerosol are generated from DEHS solution. Whereas, very fine aerosols are produced when paraffin oil is considered. tablePhysical and chemical properties DEHS solution and paraffin oil at 293K. Solution name DEHS Paraffin Color White White Chemical formula C_26H_50O_4 C_15H_11ClO_7 Solubility insoluble in water insoluble in water Density (kg/m^3) 912 827 - 890 Vapor pressure (Pa) ≤ 1 0.5 Melting point (^∘C) -48 -24 Surface tension (N/m) 3.2×10^-2 3.5×10^-2 Dynamic viscosity (mPa s) 22 - 24 5 - 17 §.§ Effect of low electric field intensity on aerosol removal This section delineates the efficacy of uncharged and charged droplets on aerosol removal with a low-intensity non-uniform electric field. Here the droplets are charged at lower applied potential (1 and 2 kV). The 300 hPa vapor pressure of the aerosol generator is operated to aerosolize the DEHS solution. Before the aerosols and droplet interaction, the aerosol generated at 300 hPa vapor with compressed air is fed to the experimental chamber at a flow rate of 10 liter/min. The outlet of the experimental chamber is connected to SMPS. The sampling unit SMPS samples the aerosol at a flow rate of 0.3 L/min. The initial number and mass concentration of aerosol generated at 300 hPa are measured as 2.33× 10^8 1/cc and 3.14 × 10^7 μ g/m^3 respectively. In the first case of this work, the efficacy of uncharged droplets produced from the spray on removing fine aerosols is studied. Once the aerosol-air-laden flow becomes steady inside the experimental chamber, the spray without any external electric field impinges on the aerosols. 10 mins of the period are maintained to make a steady flow of the droplets coming from the spray. After 10 minutes of droplets and aerosol interaction, the remaining aerosol present in the chamber is sampled by SMPS. It is found that the remaining mass of DEHS aerosols present in the chamber is 1.76× 10^7 μ g/m^3 as shown in Figure <ref>. This signifies that the uncharged droplets remove around half the aerosol from the experimental chamber. In the next series of experiments, the droplets are charged at applied potentials of 1 kV and 2 kV. The charged droplets fall on the aerosols in the chamber. Five sets of experiments are conducted for each 1 kV and 2 kV applied potentials. It is observed that when the charged droplet at 1 kV collides with aerosols, the remaining mass of the aerosol left in the chamber is 1.7× 10^7 μ g/m^3. Similarly, the mass of the particles accumulated in the chamber after the charged droplet at 2 kV is 1.68× 10^7 μ g/m^3. However, with uncharged and charged droplets at 1 and 2 kV applied potential, around half the aerosols are removed as shown in Figures <ref>, <ref>. This indicates wet electrostatic scrubber at lower applied potentials i.e., up to 2 kV achieves the same effectiveness as a wet scrubber. This also implies that the electrostatic force of attraction has no significant role in capturing more aerosols. The above findings lead to a hypothesis that the aerosols are captured because of inertial impaction, interception, and Brownian diffusion. 
§.§ Effect of high electric field intensity on aerosol removal From the above-obtained results, particles removed by uncharged and charged droplets at lower applied potentials ( 1 kV and 2 kV) are not quite an emerging method for removing large amounts of the particles. The main objective of this study is to maximize the removal capacity of the charged droplet for cleaning the aerosols. This study employs a high-electric field intensity between the electrodes. The droplets are charged at higher applied potentials i.e., 3 to 9 kV. It is observed that with this nozzle-ring electrode configuration, there is no occurrence of electric breakdown at 9 kV. But the electric breakdown happens at 10 kV. Similar to the previous experiment, the removal of both the aerosols i.e., paraffin oil and DEHS solution are studied using charged droplets at higher applied potential. Both Figures <ref>, <ref> depict that the mass and number concentration of the paraffin oil possesses a lower value compared to the DEHS solution. It is observed that when the charged droplets at higher applied potentials (3 to 9 kV) collide with aerosol present in the experimental chamber, a significant of the particles are removed as shown in Figures. <ref>, <ref> <ref>, and <ref>. Before the spray on the aerosols, the initial mass and number concentrations of the DEHS aerosol are sampled as 3.14 × 10^7 μ g/m^3 and 2.33× 10^8 1/cc respectively as shown in Figures <ref> and <ref>. When the charged droplet at 3 kV interacts with aerosol, the remaining aerosol left in the chamber is found to be 4.51 × 10^6 μ g/m^3 in terms of mass (as shown in Figure <ref>) and 1.09× 10^8 1/cc in terms of number (as shown in Figure <ref>). However, when the charged droplet at 9 kV impinged on the DEHS aerosol, the remaining mass concentration of the aerosol in the chamber is 1.22 × 10^5 μ g/m^3. Similarly for paraffin oil, before the commencement of the spray in the experimental chamber, the aerosol's initial mass concentration is 8.5 × 10^6 μ g/m^3. When charged droplets 3 kV and 9 kV make interaction with aerosol, the remaining mass concentrations of the aerosol in the chamber are sampled to be 5.95 × 10^6 μ g/m^3 and 3.47 × 10^5 μ g/m^3 respectively as shown in Figure <ref>. The finding of this study signifies that when the droplets are charged at 3 kV or more, the removal of the aerosols becomes significant. Therefore, this indicates that the electrostatic force of attraction dominates other capturing mechanisms (inertial impaction, interception, and Brownian diffusion) for removing more concentration of the particles. From above the obtained results and graphs, it is observed that the aerosols whose size lies between 10 nm to 300 nm are non-removable. This criticizes this study and recommends conducting more studies in the future for removing very fine (10 nm to 300 nm) liquid aerosols. §.§ Analysis of total aerosol concentration The above findings illustrate the aerosol removals by uncharged and charged droplets considering each diameter (10 - 1050 nm) of the aerosol. This section will provide a comprehensive idea about total aerosol removal (including all the diameters) by uncharged and charged droplets. The total concentration of the aerosol is determined by adding the concentration of all the individual-size aerosols. When the DEHS aerosol is used, the initial mass concentration is 1.9× 10^7 μ g/m^3 in the experimental chamber as shown in Figure <ref>. 
When the charged droplets at 3 kV, 4 kV, 5 kV, 6 kV, 7 kV, 8 kV, and 9 kV applied potentials collided with aerosol, the aerosol concentration (μ g/m^3.) of the chamber with corresponding applied potentials changes to 4.5× 10^6, 1.8× 10^6, 8.3× 10^5, 4.3× 10^5, 2.2× 10^5, 1.5× 10^5, and 1.2× 10^5 respectively as shown in Figure <ref>. Similarly, the total initial number concentration of DEHS aerosol is 1.35× 10^8 1/cc. After the charged droplets interact with particles, the number concentration (1/cc)of the particles changes to 1.1× 10^8 at 3 kV, 9.9× 10^7 at 4 kV, 6.7× 10^7 at 5 kV, 4.8× 10^7 at 6 kV, 3.9× 10^7 at 7 kV, 3.2× 10^7 at 8 kV, and 2.7× 10^7 at 9 kV as shown in Figure <ref>. This indicates that with the enhancement of applied potentials, the removal capacity of aerosol by charged droplets gradually increases. Similarly for paraffin oil, the same trend of particle removal in terms of total number and mass concentration is observed as shown in Figures <ref>, <ref>. §.§ Analysis of scavenging efficiency of the aerosols Since the scavenging efficiency is the prime factor for this experimental study. The scavenging efficiency of the particles for uncharged and charged droplets is determined using mathematical expression as mentioned in Eq. <ref> <cit.>. η = (1 -C_f/C_i) × 100 Here, η is the percentage of scavenging efficiency, C_f and C_i are final and initial concentrations (in terms of mass or number) respectively. Since, the number concentration signifies which size of the particles can be removed using charged droplets, while the mass concentration represents the total amount of particles scavenged from the chamber. So it is necessary to determine the scavenging efficiency of the particles based on both number and mass concentrations. §.§.§ Effect of aerosol size on scavenging efficiency In the initial case of this work, the dependency of the size of the aerosol on scavenging efficiency is studied. It is observed that uncharged droplets only remove aerosols whose size is larger than 500 nm as illustrated in Figure <ref> (a). The scavenging efficiency of aerosols by uncharged droplets is determined for the two ranges of aerosol size. Here we consider the aerosol whose size lies between 500 - 700 nm as shown in Figure <ref> (a) and 700 - 1050 nm as shown in Figure <ref> (b). The uncharged droplets remove only 4% of 500 nm size aerosol and 50% for 700 nm size aerosol as shown in Figure <ref> (a). It is observed that with the increase of aerosol size from 500 to 700 nm, the scavenging efficiency linearly increases as shown in Figure <ref> (a). Whereas, when the size of aerosols increases from 700 to 1050 nm, the scavenging efficiency varies polynomial with order 2 as shown in Figure <ref> (b). The maximum scavenging efficiency by uncharged droplets is found to be 90% for larger-sized aerosol (1050 nm). The next analysis of this study focuses on determining the scavenging efficiency of aerosol by charged droplets. Figure <ref> (c) shows that the minimum size of aerosol that can be removed by charged droplets at 9 kV is 300 nm. This signifies that the charged droplets at 9 kV fail to remove the aerosol whose size is less than 300 nm. It is observed that charged droplets at 9 kV clean 55% of 300 nm size aerosols. With the increase of aerosol size, the scavenging efficiency is exponentially increased as shown in Figure <ref> (c). The maximum scavenging efficiency obtained by charged droplets at 9 kV is 99.8% for 1050 nm size aerosol. 
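A short sketch of the scavenging-efficiency calculation of Eq. <ref>, applied per SMPS size bin and to the total concentration, is given below. The concentration arrays are hypothetical example values, not measured data from this study.

```python
# Sketch of the scavenging-efficiency calculation from SMPS data, per size bin
# and for the total concentration, using eta = (1 - C_f/C_i) * 100.
import numpy as np

def scavenging_efficiency(c_initial, c_final):
    c_initial = np.asarray(c_initial, dtype=float)
    c_final = np.asarray(c_final, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        eta = (1.0 - c_final / c_initial) * 100.0
    return np.where(c_initial > 0, eta, np.nan)   # undefined where C_i = 0

# Hypothetical per-bin concentrations (same size bins before and after spraying)
c_i = np.array([1.2e7, 8.0e6, 5.5e6, 2.1e6])   # initial, e.g. ug/m^3 per bin
c_f = np.array([1.1e7, 5.0e6, 9.0e5, 4.0e3])   # after charged spray

print("per-bin efficiency [%]:", scavenging_efficiency(c_i, c_f))
print("overall efficiency [%]:", scavenging_efficiency(c_i.sum(), c_f.sum()))
```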
Figure <ref> (d) represents that with the increase of aerosol size and applied potential, the scavenging efficiency of aerosol also increases. From the above study, it is observed that a minimum applied potential of 6 kV is necessary to remove 99% of aerosols of size larger than 500 nm by charged droplets. From the above discussion, it is observed that applied potential has a significant effect on the aerosol removal from the experimental chamber. Here we investigate to estimate a correlation between the scavenging efficiency and applied potential. The scavenging efficiency of the particles is plotted with various applied electric potentials. It is observed that with the enhancement of applied potential, the scavenging efficiency of DEHS and paraffin aerosol by charged droplets also increases. The higher applied potential causes droplets to acquire more charge leading to more electrostatic force of attraction between aerosol and droplets. The overall scavenging efficiency of DEHS aerosol and paraffin oil by uncharged droplets are determined to be 55% as shown in Figure <ref> and 20% as shown in Figure <ref> respectively. The maximum scavenging efficiency by charged droplets at 9 kV is 99% for both the DEHS and paraffin oil as depicted in Figures <ref> and <ref>. Similarly, the scavenging efficiency of aerosols based on the number concentration is illustrated. It is found that overall scavenging efficiency by uncharged droplets is 15% for DEHS solution as shown in Figure <ref> and 10% for paraffin oil as shown in Figure <ref>. The maximum scavenging efficiency by charged droplets at 9 kV is determined to be 80% for both DEHS and paraffin oil. It is also observed that at higher applied potentials i.e., 7 kV, 8 kV, and 9 kV, the efficiency curve becomes asymptotic. It means that at higher applied potentials the efficiency of aerosol is almost equal. A relation is estimated between the scavenging efficiency of the particles and applied electric potentials with statistical curve fittings. As shown in Figures <ref>, <ref>, it is discovered that in cases of mass concentration, the scavenging efficiency follows an exponential relation with applied potential with 0.99 R-squared value. As demonstrated in Figures <ref>, <ref>, the scavenging efficiency for number concentration follows a similar trend to mass concentration with a 0.99 R-squared value. §.§.§ Effect of aerosol concentrations on scavenging efficiency Since, this study is conducted by considering different concentrations (depending on the different vapor pressure of the aerosol generator i.e., 100 - 500 hPa) of aerosols, the scavenging efficiency of each concentration is estimated. It is found that the scavenging efficiency at higher electric applied potentials is very similar with all particle concentrations, as shown in Figure <ref>. This indicates that the effectiveness of charged droplets in scavenging the fine aerosols is independent of the aerosol concentration for this experimental test rig. This signifies that if the droplets acquire sufficient charge i.e., enough electrostatic force of attraction between the aerosols and particles, the charged droplets can remove 99% of the particles, regardless of whether the concentration is high or low. §.§.§ Scavenging efficiency based on mass concentration and number concentration As discussed in the above sections, all sizes of the aerosols are not removed by either uncharged or charged droplets. 
It tells that some sizes of the aerosols are inseparable through the wet electrostatic scrubber. It is necessary to compare the scavenging efficiency of the aerosols based on their number concentration and mass concentrations. Comparisons are made between the mass concentration and number concentration of the particles eliminated by uncharged and charged droplets. The results show that the scavenging efficiency of particles based on mass concentration is significantly higher than that of particles based on number concentration as shown in Figure <ref> for DEHS aerosol and Figure <ref> for paraffin oil. The maximum scavenging efficiency based on mass concentration is around 99%, whereas it is around 80% based on number concentration. This is because some of the fine aerosols are inseparable through charged droplets. §.§.§ Scavenging efficiency comparison of paraffin oil and DEHS aerosol The last comparison is made between the two test aerosols i.e., aerosol generated from DEHS solutions and paraffin oil. The scavenging efficiency of DEHS solution and paraffin oil is compared at a fixed concentration of aerosols generated with 300 hPa operating vapor pressure of aerosol generator. It is obtained that the DEHS solution possesses higher scavenging efficiency than paraffin oil at an applied potential between 0 to 4 kV. Paraffin oil and DEHS solution possess equal scavenging efficiency at an applied potential between 5 to 9 kV as shown in Figure. <ref>. This is because of smaller particles generated from paraffin oil compared to DEHS solution. At applied potential (1 to 4 kV), impaction, interception, Brownian, and electric force both play a key role in removing the particles, whereas higher potential electric potential ( 5 - 9 kV), electrostatic force of attraction between aerosols and droplets dominates over other mechanisms. §.§ Theoretical discussion on particle removal mechanisms Based on the above study, it is observed that charged droplets demonstrate superior capture efficiency compared to uncharged droplets. To understand this hypothesis, some theoretical analysis is discussed. When the uncharged/charged droplet falls on the space occupied with particles/aerosols, the droplets interact with the particles through the following key mechanism. * Inertial impaction * Direct interception * Brownian diffusion * Electrostatic attraction §.§.§ Inertial impaction This process is essential in eliminating particles with a diameter exceeding 5 μm <cit.>. In this mechanism, particles larger in size deviate from the streamline and directly impact the surface of the droplet due to their substantial size. The efficiency of the inertial mechanism is quantified by a dimensionless parameter, the Stokes number (stk), as articulated in Eq.<ref> stk = ρ_p^2d_p^2 U/18μ D Here ρ_p and d_p are the density and diameter of the particle. μ is the dynamic viscosity of air. D is the droplet diameter and U is the relative velocity between the droplet and particle. Litch <cit.> derived inertial impaction collection efficiency of the particles in wet scrubber as mentioned in Eq.<ref>. η_imp = (stk/stk+0.35)^2 §.§.§ Direct interception This process aids in the elimination of particles of intermediate size, ranging from 1 to 2.5 microns. In this scenario, particles adhere to the streamlined path, and because their radius exceeds the thickness of the boundary layer, they are captured when passing within one particle radius from the surface of the water droplet. 
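Before the remaining mechanisms are quantified below, the impaction- and diffusion-related quantities can be collected in a short sketch. The Stokes number is written in its standard form ρ_p d_p^2 U/(18 μ D); the Cunningham slip-correction coefficients and the example parameter values are common literature values assumed here, not taken from this study.

```python
# Impaction and diffusion quantities for a particle of diameter d_p approaching
# a droplet of diameter D with relative velocity U. Constants are assumptions.
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant [J/K]

def cunningham(d_p, mfp=68e-9):
    """Cunningham slip correction with commonly used coefficients."""
    kn = 2.0 * mfp / d_p
    return 1.0 + kn * (1.257 + 0.4 * np.exp(-1.1 / kn))

def stokes_number(rho_p, d_p, U, mu, D):
    return rho_p * d_p**2 * U / (18.0 * mu * D)

def eta_impaction(stk):
    return (stk / (stk + 0.35)) ** 2          # Licht's impaction efficiency

def diffusion_coefficient(d_p, T=293.0, mu=1.81e-5):
    return K_B * T * cunningham(d_p) / (3.0 * np.pi * mu * d_p)

def peclet_number(D_drop, U, d_p):
    return D_drop * U / diffusion_coefficient(d_p)

# Example: 500 nm DEHS-like particle, 100 um droplet, 1 m/s relative velocity
d_p, D_drop, U, mu, rho_p = 500e-9, 100e-6, 1.0, 1.81e-5, 912.0
stk = stokes_number(rho_p, d_p, U, mu, D_drop)
print(stk, eta_impaction(stk), peclet_number(D_drop, U, d_p))
```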
The effective description of particle captures by droplets, achieved through the inertial interception mechanism, is accurately described by a non-dimensional parameter denoted as R. This parameter is defined in Eq. <ref> as the ratio of the particle diameter to the droplet diameter. R = d_p/D Jung and Lee <cit.> also established the collection efficiency of a single liquid sphere due to interception as follows: η_int = [(1-α/3+σ K)1/D]d_p + [(1-α/3+σ K)(3σ+4/2D^2)]d_p^2 §.§.§ Brownian diffusion Brownian diffusion emerges as the primary mechanism for collecting small particles (with sizes below 1 micron) in wet scrubbers. Given that the diffusion coefficient is inversely related to particle size, smaller particles exhibit higher diffusion coefficients. Jung and Lee <cit.> formulated the subsequent expression for the diffusive collection efficiency of an individual liquid sphere, accounting for the impact of induced internal circulation within a liquid droplet: η_diff = 0.7{4/√(3)(1-α/J+σ K)^1/2Pe^-1/2 + 2(√(3)π/4Pe)^2/3[(1-α)(3σ+4)/J+σ K]^1/3} Where σ is the density ratio of liquid to air, α is the packing density. J = 1 - 6/5α^1/3 + 1/5α^2 K = 1 - 9/5α^1/3 + α + 1/5α^2 Pe is the Peclet number defined as Pe = DU/D_diff Where D_diff is the diffusion coefficient of the particles defined in Eq.<ref> D_diff = k_BTC_c/3πμ d_p Where k_B is the Boltzmann constant, T is the absolute temperature, and C_c is the Cunningham slip correction factor. §.§.§ Electrostatic attraction The electrostatic plays a crucial role when the particles and droplets possess sufficient charges. The particles are attracted towards the charged droplet due to the electrostatic force of the attraction. Nielsen and Hill <cit.> developed imperial relation for obtaining the collision efficiency between the droplet particles due to the electrostatic force of attraction given in Eq.<ref> η_ES = {15π8(ϵ_p - 1ϵ_p + 2)4C_c[4q/(π a^2)]^2d_p^23πμ_g Uϵ_0a}^0.4 The overall collision efficiency of particles with a single charged droplet is determined using the above theoretical expressions, incorporating experimental data such as droplet size, droplet charge, the intensity of the electric field, aerosol size, and properties. Figure <ref> indicates that as the size of the particles increases, the collision efficiency through interception and inertial impaction rises. The collision efficiency of the particle due to impaction and interception is directly proportional to the square of the diameter of the particle. In contrast, the collision efficiency for Brownian diffusion is higher with smaller particles and vice versa. As the collision efficiency due to Brownian diffusion inversely depends on the size of the aerosols. Among all the mechanisms considered, electrostatic interaction predominantly contributes to the collision efficiency over other mechanisms. § CONCLUSIONS This study delves into the scavenging of aerosols, considering both their mass and number concentrations before and after spraying. Concerning the number concentration, a binomial distribution of particles is noted. The small distribution signifies the presence of atmospheric aerosols in the chamber before feeding particles into the chamber, along with tiny droplets generated through spray entering the Scanning Mobility Particle Sizer (SMPS). This occurrence might be attributed to the diffusion dryer's inefficiency, used to absorb water, as observed in multiple experiments. 
Since the SMPS counts fine water droplets as particles, the efficiency in terms of number concentration is lower than the mass-based efficiency. It is therefore crucial to distinguish between the efficiencies based on mass and number concentrations. From this experimental study, the following key outcomes are observed.

* Uncharged droplets effectively remove aerosols larger than about 500 nm, whereas charged droplets at 9 kV remove aerosols down to about 300 nm. Aerosols in the 10-290 nm range are not removed by either uncharged or charged droplets.

* The maximum scavenging efficiency of the aerosols by uncharged droplets in this experimental test rig is around 55%, whereas the highest collection efficiency with charged droplets is found to be 99%.

* The efficacy of particle removal using uncharged droplets and charged droplets at the lower potentials of +1 kV and +2 kV is approximately the same, so operating the electrospray at such low potentials is ineffective for this system. The electrospray becomes effective for the abatement of tiny particles at an electric potential of at least +3 kV.

* As the potential applied to the droplets is increased from 3 kV to 9 kV, the collection efficiency of the particles also increases. At the higher potentials of 7, 8, and 9 kV, the scavenging efficiency is the same for all the aerosol concentrations tested.

Acknowledgment
The authors acknowledge the Department of Science and Technology (DST/TMD/CERI/Air Pollution/2018/009), India, for providing financial support for this experimental work.

Nomenclature
stk: Stokes number [-]
ρ_p: Density of particle [kg/m^3]
U: Relative velocity between particle and droplet [m/s]
d_p: Diameter of particle [m]
μ: Dynamic viscosity of air [Pa s]
η_imp: Impaction efficiency [-]
η_int: Interception efficiency [-]
η_diff: Diffusion efficiency [-]
σ: Density ratio of liquid to air [-]
α: Packing density [-]
Pe: Peclet number [-]
D_diff: Diffusion coefficient [m^2/s]
k_B: Boltzmann constant [kg m^2/s^2/K]
C_c: Cunningham slip correction factor [-]
T: Absolute temperature [K]
C_i: Initial mass/number concentration of aerosols without spray
C_0: Mass/number concentration of particles with uncharged spray
C_1: Mass/number concentration of particles with charged spray at 1 kV
C_2: Mass/number concentration of particles with charged spray at 2 kV
C_3: Mass/number concentration of particles with charged spray at 3 kV
C_4: Mass/number concentration of particles with charged spray at 4 kV
C_5: Mass/number concentration of particles with charged spray at 5 kV
C_6: Mass/number concentration of particles with charged spray at 6 kV
C_7: Mass/number concentration of particles with charged spray at 7 kV
C_8: Mass/number concentration of particles with charged spray at 8 kV
C_9: Mass/number concentration of particles with charged spray at 9 kV
P: Vapour pressure of aerosol generator [hPa]
q: Charge on droplet [C]
V: Applied DC potential [kV]
E: Non-uniform electric field distribution [V/m]
E_r: Non-uniform radial electric field distribution [V/m]
E_z: Non-uniform axial electric field distribution [V/m]
a: Radius of droplet [m]
ϵ_0: Permittivity of free space [F/m]
η_m: Mass efficiency of the aerosol [%]
η_n: Number efficiency of the aerosol [%]
http://arxiv.org/abs/2406.18249v1
20240626105144
Foundational Models for Pathology and Endoscopy Images: Application for Gastric Inflammation
[ "Hamideh Kerdegari", "Kyle Higgins", "Dennis Veselkov", "Ivan Laponogov", "Inese Polaka", "Miguel Coimbra", "Junior Andrea Pescino", "Marcis Leja", "Mario Dinis-Ribeiro", "Tania Fleitas Kanonnikoff", "Kirill Veselkov" ]
cs.LG
[ "cs.LG", "cs.CV" ]
§ INTRODUCTION In this section, we first explain how AI can transform the detection and surveillance of upper gastrointestinal (GI) cancer, followed by a discussion on foundation models as a new era in medical imaging, specifically in endoscopy and pathology. §.§ AI in Upper GI Cancer: Transforming Detection and Surveillance Gastric cancer (GC) is one of the leading causes of cancer mortality globally. It’s pathogenesis is related with chronic inflammation which causes changes in the mucosa including atrophy, intestinal metaplasia (IM), and dysplasia <cit.> as shown in Figure <ref>.A. Recognising these conditions early through regular and precise endoscopic surveillance as presented in Figure <ref>.B is pivotal for enhancing early diagnosis and treatment outcomes <cit.>. The advent of artificial intelligence (AI) in medical imaging and diagnostics brings a promising solution to these challenges, especially in the realm of GI endoscopy. AI and FM offer a revolutionary approach to interpreting endoscopy and pathology images for risk stratification and determination of the appropriate surveillance intervals for patients with upper GI precancerous conditions. These AI-driven models can potentially transform the management of patients at risk for upper GI cancers by providing precise, real-time analysis of endoscopic images, identifying premalignant lesions with high accuracy, and predicting the risk levels of patients based on the characteristics of detected lesions. This emphasis on AI's role in enhancing the identification and surveillance of high-risk patients marks a significant step forward. By leveraging AI for the interpretation of endoscopic and pathology images, healthcare providers can achieve more accurate risk stratification and timely intervention, ultimately aiming to increase the surveillance rate among high-risk patients. This not only addresses the current shortfall in surveillance adherence but also paves the way for a more proactive and prevention-oriented approach in managing the risk of upper GI cancers. §.§ Expanding horizons: Foundation Models in Endoscopy and Pathology Imaging The advent of FM has marked a pivotal shift in the landscape of medical imaging analysis, particularly in the domains of pathology and endoscopy. These models undergo training on large and varied datasets, often employing self-supervision methods on an extensive scale. After this initial training, they can be further refined—through processes like fine-tuning—to perform a broad spectrum of related downstream tasks, enhancing their applicability based on the original dataset. By leveraging vast amounts of data to learn rich representations, they have the potential to facilitate the diagnosis, for a personalized treatment approach. This review aims to explore the cutting-edge advancements in FM applied to pathology and endoscopy imaging, elucidating their impact, challenges, and the promising avenues they pave for future research. Pathology and endoscopy, the two critical fields in medical diagnostics, generate a wealth of image data that encapsulate intricate details vital for accurate disease diagnosis and management. Traditionally, the analysis of such images has been heavily reliant on the expertise of highly trained professionals. While indispensable, this approach is time-consuming, subject to variability, and scales linearly with the volume of data. 
FM emerge as a powerful solution to these challenges, offering a way to automate and enhance the analysis process through textually and visually prompted models. In the context of pathology, textually prompted models leverage textual data as prompts to guide the analysis of images. These textual prompts could be descriptions of histological features, diagnostic criteria, or other relevant annotations that guide the model's interpretation of the images. On the other hand, visually prompted models operate by utilizing visual cues, such as points, boxes, or masks, to guide the model's focus within an image. For a pathology image, a visually prompted model could be prompted to concentrate on specific areas of a tissue slide that are marked by a pathologist or identified by preliminary analysis. In the context of endoscopy, only visually prompted models are utilized since clinical routines for endoscopy videos do not involve text data. However, the integration of FM into pathology and endoscopy poses unique challenges. These include ensuring model interpretability, managing the privacy and security of sensitive medical data, and addressing the potential biases inherent in the training datasets. Moreover, the dynamic nature of medical knowledge and the continuous evolution of diseases necessitate that these models are adaptable and capable of learning from new data incorporating previously acquired knowledge. This paper provides an introductory overview, followed by an in-depth analysis of the principles underlying FM for medical imaging, with a focus on pathology and endoscopy (see Figure <ref>.A for a taxonomy of vision-language FM). We classify the current state-of-the-art (SOTA) models based on their architectural designs, training objectives, and application areas. Furthermore, we discuss the recent works in the field, highlighting the innovative approaches that have been developed to address the specific needs of pathology and endoscopy image analysis. Finally, we outline the challenges faced by current models and propose several directions for future research, aiming to guide and inspire further advancements in this rapidly evolving field. Our major contributions include: * We conduct a thorough review of FM applied in the field of pathology and endoscopy imaging, beginning with their architecture types, training objectives, and large-scale training. Then, they are classified into visually and textually prompted models (based on prompting type), and then their subsequent application/utilization is discussed. * We also discuss the challenges and unresolved aspects linked to FM in pathology and endoscopy imaging. § INCLUSION AND SEARCH CRITERIA For this review, we have focused on identifying and evaluating studies related to the application of FM in pathology and endoscopy imaging. We included original research articles, review articles, and preprints that discuss their development, validation, and application. Only articles published in English were included to maintain consistency and accessibility of the content. We selected studies specifically addressing the use of FM in the detection, diagnosis, and management of GI conditions, including gastric inflammation and cancer. The databases searched included PubMed, Scopus, IEEE Xplore, Web of Science, Google Scholar, and specialized databases like Embase for medical and biomedical literature. 
Keywords and Medical Subject Headings (MeSH) terms used included "foundation models," "deep learning," "machine learning," "pathology," "endoscopy," "AI diagnostics," "pretrained models," and "transfer learning." Boolean operators were utilized to refine the search: AND was used to combine different concepts (e.g., "foundation models" AND "pathology"), OR to include synonyms or related terms (e.g., "endoscopy" OR "gastroscopy"), and NOT to exclude unwanted terms if necessary. Filters were applied for human studies, years of publication, and types of articles, excluding editorials and reviews if focusing on primary research. The search strategy involved a comprehensive review of electronic databases using the specified keywords. Initial search results were screened based on titles and abstracts to assess relevance, and full texts of potentially relevant articles were retrieved and reviewed. References from these articles were also examined to identify additional relevant studies. Comprehensive searches were conducted in the mentioned databases, and titles and abstracts were screened to identify studies meeting the inclusion criteria. Articles that did not meet these criteria were excluded at this stage. Full texts of the remaining articles were reviewed for relevance and inclusion, with any discrepancies in study selection resolved through discussion among the authors. Studies involving human subjects undergoing pathology analysis or endoscopy and those using FM on pathological data and endoscopic images or videos were included. Only studies employing FM, such as large-scale pretrained machine learning models that are then adapted or fine-tuned for specific tasks in pathology and endoscopy, were considered. By applying these inclusion and search criteria, we aimed to provide a comprehensive and focused review of the current state and future directions of FM in pathology and endoscopy imaging, specifically for the application in gastric inflammation and cancer. § FOUNDATIONAL MODELS IN COMPUTER VISION §.§ Architecture Type Vision language models primarily utilize four distinct architectural frameworks, as depicted in Figure <ref>.B. The initial framework, known as Dual-Encoder, employs parallel visual and textual encoders to produce aligned representations. The second framework, Fusion, integrates image and text representations through a fusion decoder, facilitating the learning of combined representations. The third framework, Encoder-Decoder, features a language model based on encoder-decoder mechanisms alongside a visual encoder, enabling sequential joint feature encoding and decoding. Finally, the Adapted LLM framework incorporates an LLM as its foundational element, with a visual encoder that transforms images into a format that LLM can understand, thus capitalizing on the LLM's enhanced generalization capabilities. Following this overview, we will explore the loss functions utilized to train these various architectural types. §.§ Training objectives §.§.§ Contrastive Objectives Contrastive objectives train models to form distinct representations, effectively narrowing the gap between similar sample pairs and widening it between dissimilar ones in the feature space <cit.>. Image contrastive loss is designed to enhance the uniqueness of image features. 
It achieves this by aligning a target image more closely with its positive keys—essentially, versions of itself that have undergone data augmentation—and ensuring that it remains clearly differentiated from its negative keys, which are distinct, unrelated images, within the embedding space. Image-Text Contrastive (ITC) loss is a type of contrastive loss function that aims to create distinctive image-text pair representations. This is accomplished by bringing together the embeddings of matched images and texts and pushing apart those that do not match <cit.>. Given a batch of N examples, ITC loss aims to match correct image-text pairs among N × N possible configurations. ITC loss maximizes cosine similarity between N correct pairs and minimizes it among N^2 - N incorrect pairs. Let (x_i, t_i) be i-th image-text example and (v_i, t_i) be its corresponding representations, then Image-Text Contrastive (ITC) loss is calculated as follows: loss_ITC = - log[ exp(sim(v_i, t_i)/τ)/∑_j=1^nexp(sim(v_i, t_j)/τ)] This loss is calculated by concentrating on the relationship between images and texts, taking into account the temperature parameter τ. ITC loss was used by <cit.> to learn to predict correct image-text pairs. Image-Text Matching (ITM) loss <cit.> is another type of contrastive loss that aims to correctly predict whether a pair of images and text is positively or negatively matched. To achieve this, a series of perceptron layers are introduced to estimate the likelihood of a pair being matched. Subsequently, the loss is computed using the cross-entropy loss function. Additionally, several other contrastive loss functions have been utilized for various applications. They include variants of ITC losses such as FILIP loss <cit.>, Text-to-Pixel Contrastive loss <cit.>, Region-Word Alignment <cit.>, Multi-label Image-Text Contrastive <cit.>, Unified Contrastive Learning <cit.>, Region-Word Contrastive loss <cit.>, and image-based self-supervision loss like simple contrastive learning of representations (SimCLR) <cit.>. §.§.§ Generative Objectives Generative objectives focus on training networks to create images or textual contents, enabling them to learn semantic attributes through activities such as image generation <cit.> and language production <cit.>. A prevalent generative loss function in computer vision is Masked Image Modeling (MIM) <cit.>. This approach involves the acquisition of cross-patch correlations by employing masking and image reconstruction methods. In MIM, certain patches of an input image are randomly obscured, and the network's encoder is tasked with reconstructing these hidden patches using the visible sections as a reference. For a given batch of N images, the loss function is computed as follows: LMIM = -1/N∑_i=1^Nlog f_θ( x_i^-I | x_i^∧ I) where x_i^-I and x_i^∧ I represent the masked and unmasked patches within x_i^I, respectively. Similarly, Masked Language Modeling (MLM) <cit.> is a widely adopted generative loss technique in Natural Language Processing (NLP). In MLM, a certain percentage of input text tokens are randomly masked, and the model is trained to predict these masked tokens based on the context provided by the unmasked tokens. The loss function used in MLM is akin to that of the MIM, focusing on the reconstruction of masked elements. However, the difference lies in the elements being reconstructed: in MLM, the focus is on masked and unmasked tokens, as opposed to the masked and unmasked patches in MIM. 
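The ITC objective above is written for the image-to-text direction; in practice it is usually applied symmetrically over both directions, as in CLIP-style training. The PyTorch sketch below is a minimal, generic implementation of that symmetric variant with a temperature parameter, not the exact loss of any specific model cited here.

```python
import torch
import torch.nn.functional as F

def itc_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric Image-Text Contrastive loss over a batch of N matched pairs.

    image_emb, text_emb: (N, d) embeddings from the two encoders.
    Matched pairs share a row index; the loss pulls the N correct pairs
    together and pushes the N^2 - N incorrect pairings apart.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature      # (N, N) scaled cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)          # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)      # text -> image direction
    return 0.5 * (loss_i2t + loss_t2i)

# Toy usage with random embeddings
img = torch.randn(8, 512)
txt = torch.randn(8, 512)
print(itc_loss(img, txt))
```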
In similar fashion, various other generative loss techniques have been proposed. Examples include Masked Multimodal Modeling (MMM) loss <cit.>, Semi-Casual Language Modeling <cit.>, Image-conditioned Masked Language Modeling (IMLM) loss <cit.>, Image-grounded Text Generation (ITG) loss <cit.>, and Captioning with Parallel Prediction (CapPa) loss <cit.>. §.§ Large-scale Training Large-scale data is central to the development of vision and language foundational models. The datasets used for pre-training these models are categorized into three main types: image-text datasets, partially synthetic datasets, and combination datasets. Among these, image-text datasets like WebImageText, which are used in CLIP <cit.>, demonstrate the significant impact of web-scale image-text data in training FM. Such data is typically extracted from web crawls and the final dataset emerges from a rigorous filtering process designed to eliminate noisy, irrelevant, or detrimental data points. Unlike image-text datasets, partially synthetic datasets are not readily available on the web and require significant human annotation effort. A cost-effective approach is to utilize a proficient teacher model to transform image-text datasets into mask-description datasets <cit.>. To address the challenge of curating and training on web-scale datasets, combination datasets have been employed <cit.>. These datasets amalgamate standard vision datasets, including those featuring image-text pairs such as captioning and visual question answering, and sometimes modify non-image-text datasets using template-based prompt engineering to transform labels into descriptions. Furthermore, large-scale training, coupled with effective fine-tuning and strategic prompting at the inference stage, has been an essential component of vision foundational models. Fine-tuning adjusts the model's parameters on a task-specific dataset, optimizing it for particular applications like image captioning or visual question answering. It enables leveraging the vast knowledge captured by pre-trained vision-language models, making them highly effective for a wide range of applications with relatively less data and computational resources than required for training from scratch. Prompt engineering, meanwhile, involves the strategic creation of input prompts to guide the model in generating accurate and relevant responses, leveraging its pre-trained capabilities. Both practices are vital for customizing vision language models to specific needs, enabling their effective application across various domains that require nuanced interpretations of visual and textual information. § PATHOLOGY FOUNDATION MODELS §.§ Visually Prompted Models This section reviews visually prompted pathology foundational models that have been designed for segmentation and classification of pathology images. §.§.§ Pathology Image Segmentation Semantic segmentation plays a crucial role in digital pathology, involving the division of images into distinct regions that represent different tissue structures, cell types, or subcellular components. The segment anything model (SAM) <cit.> emerges as the first promptable foundation model specifically designed for image segmentation tasks. Trained on the SA-1B dataset, SAM benefits from a vast amount of images and annotations, granting it outstanding zero-shot generalization capabilities. Utilizing a vision transformer-based image encoder, SAM extracts image features and computes image embeddings. 
Additionally, its prompt encoder embeds user prompts, enhancing interaction. The combined outputs from both encoders are then processed by a lightweight mask decoder, which generates segmentation results by integrating the image and prompt embedding with output tokens as illustrated in Figure <ref>.C. While SAM has proven effective for segmenting natural images, its potential for navigating the complexities of medical image segmentation, particularly in pathology, invites further investigation. Pathology images present unique challenges, such as structural complexity, low contrast, and inter-observer variability. To address these, the research community has explored various extensions of SAM, aiming to unlock its capabilities for pathology image segmentation tasks. For example, Deng et al. <cit.> evaluated SAM in the context of cell nuclei segmentation on whole slide imaging (WSI). Their evaluation encompassed various scenarios, including the application of SAM with a single positive point prompt, with 20 point prompts comprising an equal number of positive and negative points, and with comprehensive annotations (points or bounding boxes) for every individual instance. The findings indicated that SAM delivers exceptional performance in segmenting large, connected objects. However, it falls short in accurately segmenting densely packed instances, even when 20 prompts (clicks or bounding boxes) are used for each image. This shortfall could be attributed to the significantly higher resolution of WSI images relative to the resolution of images used to train SAM, coupled with the presence of tissue types of varying scales in digital pathology. Additionally, manually annotating all the boxes during inference remains time-consuming. To address this issue, Cui et al. <cit.> introduced a pipeline for label-efficient finetuning of SAM, with no requirement for annotation prompts during inference. Such a pipeline surpasses previous SOTA methods in nuclei segmentation and achieves competitive performance compared to using strong pixel-wise annotated data. Zhang et al. <cit.> introduced SAM-Path for semantic segmentation of pathology images. This approach extends SAM by incorporating trainable class prompts, augmented further with a pathology-specific encoder derived from a pathology FM. SAM-Path improves upon SAM's capability for performing semantic segmentation in digital pathology, eliminating the need for human-generated input prompts. The findings highlight SAM-Path's promising potential for semantic segmentation tasks in pathology. In another study, CellSAM <cit.>, proposed as a foundational model for cell segmentation, generalizes across a wide range of cellular imaging data. This model enhances the capabilities of SAM by introducing a novel prompt engineering technique for mask generation. To facilitate this, an object detector named CellFinder was developed, which automatically detects cells and cues SAM to produce segmentations. They demonstrated that this approach allows a single model to achieve SOTA performance in segmenting images of mammalian cells (both in tissues and cell cultures), yeast, and bacteria, across different imaging modalities. Archit et al. <cit.> presented segment anything for microscopy as a tool for interactive and automatic segmentation and tracking of objects in multi-dimensional microscopy data. They extended SAM by training specialized models for microscopy data that significantly improve segmentation quality for a wide range of imaging conditions. 
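The point- and box-prompted workflow described above can be sketched with the publicly released segment-anything package, assuming its standard predictor interface; the checkpoint path, the stand-in tile array, and the prompt coordinates below are placeholders rather than values from any of the cited studies.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM backbone from a locally downloaded checkpoint (path is a placeholder).
sam = sam_model_registry["vit_b"](checkpoint="checkpoints/sam_vit_b.pth")
predictor = SamPredictor(sam)

# `tile` stands in for an H x W x 3 uint8 RGB crop from a whole-slide image.
tile = np.zeros((1024, 1024, 3), dtype=np.uint8)
predictor.set_image(tile)

# One positive point prompt (label 1), e.g. placed on a nucleus of interest.
point = np.array([[512, 512]])
label = np.array([1])
masks, scores, _ = predictor.predict(
    point_coords=point,
    point_labels=label,
    multimask_output=True,   # several candidate masks at different scales
)
best = masks[np.argmax(scores)]   # keep the highest-scoring candidate mask
print(best.shape, scores)
```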
In addition to image segmentation, SAM was also utilized to generate pixel-level annotations to train a segmentation model for pathology images. Li et al. <cit.> investigated the feasibility of bypassing pixel-level delineation through the utilization of SAM applied to weak box annotations in a zero-shot learning framework. Specifically, SAM's capability was leveraged to generate pixel-level annotations from mere box annotations and employed these SAM-derived labels to train a segmentation model. Results demonstrated that the proposed SAM-assisted model significantly reduces the labeling workload for non-expert annotators by relying solely on weak box annotations. A summary of mentioned FM used in pathology image segmentation is presented in Table <ref>.A. To conclude, SAM delivers satisfactory performance on histopathological images, particularly with objects that are sharply defined, and significantly facilitates the annotation process in segmentation tasks where dedicated deep learning models are either unavailable or inaccessible. However, SAM's application in annotating histopathological images encounters several challenges. Firstly, SAM faces difficulties with objects that are interconnected or have indistinct borders (e.g., vascular walls), are prone to prompt ambiguity (e.g., distinguishing between an entire vessel and its lumen), or blend into the background due to low contrast (e.g., sparse tumor cells within an inflammatory backdrop). These limitations can be attributed not only to the absence of microscopic images in the training set but also to the intrinsic characteristics of histopathological images compared to conventional images: (1) the color palette is often limited and similar across different structures, and (2) histopathological tissues are essentially presented on a single plane, contrasting with the three-dimensional perspectives captured in real-world photography, which naturally enhances object delineation. Additionally, generating accurate masks for non-object elements, such as the stroma or interstitial spaces, remains challenging even with extensive input. Technical artifacts such as edge clarification in biopsy samples, and tearing artifacts further impair segmentation performance. Secondly, SAM's overall efficacy falls short of the benchmarks set by SOTA models specifically designed for tasks like nuclei segmentation, particularly in semantic segmentation tasks. §.§.§ Pathology Image Classification Pathology image classification leverages computational methods to categorize and diagnose diseases from medical images acquired during pathology examinations. This process plays a vital role in medical diagnostics, as the precise and prompt classification of diseases can greatly influence the outcomes of patient treatments. The images used in pathology, often derived from biopsies, are intricate, featuring detailed cellular and tissue structures that signify a range of health conditions, such as cancers, inflammatory diseases, and infections. Table <ref>.B presents a summary of recent FM used in pathology image classification. The subsequent paragraph provides detailed explanations of these models. In recent years, numerous self-supervised techniques for computational pathology image classification have been proposed. For example, Hierarchical Image Pyramid Transformer (HIPT) <cit.> is a Vision Transformer (ViT) with less than 10 million parameters, trained on approximately 100 million patches extracted from 11,000 WSIs from The Cancer Genome Atlas (TCGA) <cit.>. 
HIPT utilized student-teacher knowledge distillation <cit.> at two successive representation levels: initially at the local image patch level and subsequently at the regional image patch level, which is derived from the learned representations of multiple local patches. CTransPath, a Swin Transformer equipped with a convolutional backbone, featuring 28 million parameters was proposed by <cit.>. It was trained on 15 million patches extracted from 30,000 WSIs sourced from both The TCGA and the Pathology AI Platform (PAIP) <cit.>. Similarly, Ciga et al. <cit.> trained a 45 million parameter ResNet on 25 thousands WSIs with the addition of 39 thousands patches, all collected from TCGA and 56 other small datasets. Recently, Filiot et al. <cit.> utilized a ViT-Base architecture, which has 86 million parameters, and employed the Image BERT pre-training with Online Tokenizer (iBOT) framework <cit.> for its pre-training. This model was pre-trained on 43 million patches derived from 6,000 WSIs from the TCGA and surpassed both HIPT and CTransPath in performance across various TCGA evaluation tasks. In a similar endeavor, Azizi et al. <cit.> developed a model with 60 million parameters using the SimCLR framework <cit.> and trained it on 29,000 WSIs, covering nearly the entire TCGA dataset. The mentioned studies highlight models that possess up to 86 million parameters and incorporate a teacher distillation objective for training on the extensive TCGA dataset, which includes over 30,000 WSIs. In contrast, Virchow <cit.> distinguishes itself by its significantly larger scale and more extensive training data. It boasts 632 million parameters, marking a 69-fold increase in size compared to the largest models mentioned in the previous studies. Virchow employs the vision transformer and is trained using DINOv2 <cit.> self-supervised algorithm which is based on a student-teacher paradigm. When evaluated on downstream tasks, such as tile-level pan-cancer detection and subtyping, as well as slide-level biomarker prediction, Virchow surpasses SOTA systems. UNI <cit.>, a ViT-large model trained on 100 thousand proprietary slides (Mass-100k: a large and diverse pretraining dataset containing over 100 million tissue patches from 100,426 WSIs across 20 major organ types including normal tissue, cancerous tissue, and other pathologies) is another foundation model for pathology image classification. They assessed the downstream performance on 33 tasks including tile-level tasks for classification, segmentation, and retrieval and slide-level classification tasks and showed generalizability of the model in anatomic pathology. Roth et al. <cit.> benchmarked the most popular pathology vision FM like DINOv2 ViT-S, DINOv2 ViT-S finetuned, CTransPath and RetCCL <cit.> as feature extractors for histopathology data. The models were evaluated in two settings: slide-level classification and patch-level classification. Results showed that finetuning a DINOv2 ViT-S yields at least equal performance compared to CTransPath and RetCCL but in a fraction of domain specific training time. Campanella et al. <cit.> trained the largest academic foundation model on 3 billion image patches from over 400,000 slides. They compared pre-training of visual transformer models using the masked autoencoder (MAE) and DINO algorithms. The results demonstrate that pre-training on pathology data is beneficial for downstream performance compared to pre-training on natural images. 
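A common way these pretrained encoders are evaluated is as frozen feature extractors with a linear probe on tile labels. The sketch below illustrates that protocol; an ImageNet-pretrained ViT from timm stands in for the pathology-specific checkpoints discussed above, which are distributed separately but would be used in the same way.

```python
import timm
import torch
from sklearn.linear_model import LogisticRegression

# Frozen encoder (num_classes=0 returns pooled features instead of class logits).
encoder = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
encoder.eval()

@torch.no_grad()
def embed(batch):                       # batch: (B, 3, 224, 224) preprocessed tiles
    return encoder(batch).cpu().numpy()

# Toy stand-in data: 64 "tiles" with binary labels.
tiles = torch.randn(64, 3, 224, 224)
labels = (torch.rand(64) > 0.5).long().numpy()

features = embed(tiles)
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("linear-probe train accuracy:", probe.score(features, labels))
```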
Also, the DINO algorithm achieved better generalization performance across all tasks tested. Furthermore, Dippel et al. <cit.> demonstrated that integrating pathological domain knowledge carefully can significantly enhance pathology foundation model performance, achieving superior results with the best available performing pathology foundation model. This achievement comes despite using considerably fewer slides and a model with fewer parameters than competing models. Although the mentioned studies successfully applied FM for pathology image analysis, Prov-GigaPath <cit.> stands out from them due to its unique combination of a larger and more diverse dataset (1.3 billion 256×256 pathology image tiles in 171,189 whole slides), and its innovative use of vision transformers with dilated self-attention. §.§ Textually Prompted Models Textually prompted models are increasingly recognized as foundational in the field of medical imaging, particularly in computational pathology. These models learn representations that capture the semantics and relationships between pathology images and their corresponding textual prompts (shown in Figure <ref>.D). By leveraging contrastive learning objectives, they bring similar image-text pairs closer together in the feature space while pushing dissimilar pairs apart. Such models are crucial for tasks related to pathology image classification and retrieval. Architectural explorations have included dual-encoder designs—with separate visual and language encoders—as well as fusion designs that integrate image and text representations using decoder and transformer-based architectures. The potential of these models for pathology image classification is highlighted in numerous studies, which are discussed in the following section and summarized in Table <ref>.C. §.§.§ Pathology Image Classification In the context of pathology image classification, TraP-VQA <cit.> represents the pioneering effort to utilize a vision-language transformer for processing pathology images. This approach was evaluated using the PathVQA dataset <cit.> to generate interpretable answers. More recently, Huang et al. <cit.> compiled a comprehensive dataset of image-text paired pathology data sourced from public platforms, including Twitter. They employed a contrastive language-image pre-training model to create a foundational framework for both pathology text-to-image and image-to-image retrieval tasks. Their methodology showcased promising zero-shot capabilities in classifying new pathological images. PathAsst <cit.> utilizes FM, functioning as a generative AI assistant, to revolutionize predictive analytics in pathology. It employs ChatGPT/GPT-4 to produce over 180,000 samples that follow instructions, thereby activating pathology-specific models and enabling efficient interactions based on input images and user queries. PathAsst is developed using the Vicuna-13B language model in conjunction with the CLIP vision encoder. The outcomes from PathAsst underscore the capability of leveraging AI-powered generative FM to enhance pathology diagnoses and the subsequent treatment processes. Lu et al. <cit.> introduced MI-Zero, an intuitive framework designed to unlock the zero-shot transfer capabilities of contrastively aligned image and text models for gigapixel histopathology whole slide images. This framework allows multiple downstream diagnostic tasks to be performed using pre-trained encoders without the need for additional labeling. 
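The zero-shot classification idea behind these textually prompted models can be sketched generically: class names are turned into text prompts, both modalities are embedded into the shared space, and the tile is assigned to the most similar prompt. The encoders, tokenizer, and prompt template below are placeholders for whichever aligned pathology vision-language model is being used.

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_encoder, text_encoder, tokenizer, image, class_names,
                       template="an H&E image of {}", temperature=0.01):
    """Assign `image` to the class whose prompt embedding is most similar."""
    prompts = [template.format(c) for c in class_names]
    with torch.no_grad():
        txt = F.normalize(text_encoder(tokenizer(prompts)), dim=-1)   # (C, d)
        img = F.normalize(image_encoder(image), dim=-1)               # (1, d)
    probs = (img @ txt.t() / temperature).softmax(dim=-1)             # (1, C)
    return dict(zip(class_names, probs.squeeze(0).tolist()))

# Toy stand-ins so the sketch runs end-to-end without a real model.
fake_img_enc = lambda x: torch.randn(1, 256)
fake_txt_enc = lambda toks: torch.randn(len(toks), 256)
fake_tok = lambda prompts: prompts
print(zero_shot_classify(fake_img_enc, fake_txt_enc, fake_tok,
                         image=None, class_names=["tumor", "normal", "stroma"]))
```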
MI-Zero reimagines zero-shot transfer within the context of multiple instance learning, addressing the computational challenges associated with processing extremely large images. To pre-train the text encoder, they utilized over 550,000 pathology reports along with other available in-domain text corpora. By harnessing the power of strong pretrained encoders, their top performing model, which was pretrained on more than 33,000 histopathology image-caption pairs, achieves an average median zero-shot accuracy of 70.2% across three distinct real-world cancer subtyping tasks. Lu et al. <cit.> proposed CONCH, a foundational model framework for pathology that integrates a vision-language joint embedding space. Initially, they trained a ViT on a dataset comprising 16 million tiles from 21,442 proprietary in-house WSIs using the iBOT <cit.> self-supervised learning framework. Subsequently, leveraging the ViT backbone, they developed a vision-language model utilizing the CoCa framework <cit.>, trained on 1.17 million image-caption pairs derived from educational materials and PubMed articles. The model's efficacy was evaluated across 13 downstream tasks, including tile and slide classification, cross-modal image-to-text and text-to-image retrieval, coarse WSI segmentation, and image captioning. Another study <cit.> proposed the Connect Image and Text Embeddings (CITE) method to improve pathological image classification. CITE leverages insights from language models pre-trained on a wide array of biomedical texts to enhance FM for better understanding of pathological images. This approach has shown to achieve superior performance on the PatchGastric stomach tumor pathological image dataset, outperforming various baseline methods particularly in scenarios with limited training data. CITE underscores the value of incorporating domain-specific textual knowledge to bolster efficient pathological image classification. § ENDOSCOPY FOUNDATION MODELS Endoscopic video has become a standard imaging modality and is increasingly being studied for the diagnosis of gastrointestinal diseases. Developing an effective foundational model shows promise in facilitating downstream tasks requiring analysis of endoscopic videos. §.§ Visually Prompted Models Since clinical routines for endoscopy videos typically do not involve text data, a purely image-based foundational model is currently more feasible. In response to this need, Wang et al. <cit.> developed the first foundation model, Endo-FM, which is specifically designed for analyzing endoscopy videos. Endo-FM utilizes a video transformer architecture to capture rich spatial-temporal information and is pre-trained to be robust against diverse spatial-temporal variations. A large-scale endoscopic video dataset, comprising over 33,000 video clips, was constructed for this purpose. Extensive experimental results across three downstream tasks demonstrate Endo-FM's effectiveness, significantly surpassing other SOTA video-based pre-training methods and showcasing its potential for clinical application. Additionally, Cui et al. <cit.> demonstrated the effectiveness of vision-based FM for depth estimation in endoscopic videos. They developed a foundation model-based depth estimation method named Surgical-DINO, which employs a Low-rank Adaptation (LoRA) <cit.> of DINOv2 specifically for depth estimation in endoscopic surgery. The LoRA layers, rather than relying on conventional fine-tuning, were designed and integrated into DINO to incorporate surgery-specific domain knowledge. 
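For gigapixel slides, tile-level zero-shot scores have to be aggregated into a slide-level prediction, which is the multiple-instance view taken by MI-Zero. The snippet below shows one simple pooling choice (top-k mean pooling per class) as a generic illustration; the cited work defines its own pooling operators.

```python
import torch

def slide_level_zero_shot(tile_logits, k=50):
    """Aggregate tile-level class scores to a slide-level prediction.

    tile_logits: (num_tiles, num_classes) similarities between each tile
    embedding and the class-prompt embeddings.
    """
    k = min(k, tile_logits.shape[0])
    topk = tile_logits.topk(k, dim=0).values      # (k, num_classes)
    slide_scores = topk.mean(dim=0)               # (num_classes,)
    return slide_scores.argmax().item(), slide_scores

logits = torch.randn(1000, 3)                     # toy slide: 1000 tiles, 3 subtypes
pred, scores = slide_level_zero_shot(logits)
print(pred, scores)
```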
During the training phase, the image encoder of DINO was frozen to leverage its superior visual representation capabilities, while only the LoRA layers and the depth decoder were optimized to assimilate features from the surgical scene. The results indicated that Surgical-DINO significantly surpasses all other SOTA models in tasks related to endoscopic depth estimation. Furthermore, trends in the development of video FM indicate promising applications for endoscopy, such as video segmentation <cit.> for identifying lesions in endoscopy footage, and enhancing endoscopy videos by reconstructing masked information <cit.> to reveal obscured lesions. § CHALLENGES AND FUTURE WORK The pathology and endoscopy FM discussed in this review have their respective shortcomings and open challenges. This section aims to provide a comprehensive overview of the common challenges these approaches face, as well as highlight the future directions of FM in pathology and endoscopy analysis along with FUTURE-AI guidelines that guide their deployments. Despite the potential of FM for disease diagnosis, their application in the medical domain including pathology and endoscopy image analysis, faces several challenges. Firstly, FM are susceptible to “hallucination” <cit.>, where they generate incorrect or misleading information. In the medical domain, such hallucinations can lead to the dissemination of incorrect medical information, resulting in misdiagnoses and consequently, inappropriate treatments. Secondly, FM in vision and language can inherit and amplify “biases” <cit.> present in the training data. Biases related to race, underrepresented groups, minority cultures, and gender can result in biased predictions or skewed behavior from the models. Addressing these biases is crucial to ensure fairness, inclusivity, and the ethical deployment of these systems. Additionally, patient privacy and ethical considerations present significant hurdles that must be overcome to ensure the ethical and equitable use of FM in medical practice. Moreover, training large-scale vision and language models demands substantial computational resources and large datasets, which can limit their application in real-time inference or on edge devices with limited computing capabilities. The lack of evaluation benchmarks and metrics for FM also poses a challenge, hindering the assessment of their overall capabilities, particularly in the medical domain. Developing domain-specific and FM-specific benchmarks and metrics is essential. Despite these challenges, the future direction of FM in pathology and endoscopy analysis is likely to include several innovative and transformative approaches. These models are set to significantly enhance diagnostic accuracy, efficiency, and the overall understanding of disease processes. A key future direction is the integration with multimodal data. FM are expected to evolve beyond text and incorporate the integration of multimodal data, combining pathology images, endoscopic video data, genomic information, and clinical notes. This will enable a more comprehensive and nuanced understanding of patient cases, facilitating more accurate diagnoses and personalized treatment plans. FM could also automate the generation of pathology and endoscopy reports, synthesizing findings from images, patient history, and test results into coherent, standardized, and clinically useful reports. This would streamline workflows, reduce human error, and allow pathologists and endoscopists to concentrate on complex cases. 
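The adapter idea used by Surgical-DINO, keeping the pretrained encoder frozen and training only small low-rank updates, can be illustrated with a generic LoRA wrapper around a linear layer. This is a minimal sketch of the standard LoRA formulation, not the specific placement or hyperparameters used in that work.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Low-rank adapter wrapped around a frozen linear layer (generic LoRA sketch)."""
    def __init__(self, base: nn.Linear, rank=4, alpha=8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # frozen pretrained weights
        self.A = nn.Parameter(torch.zeros(rank, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        nn.init.normal_(self.A, std=0.01)           # B stays zero, so the adapter
        self.scale = alpha / rank                   # is a no-op at initialization

    def forward(self, x):
        return self.base(x) + (x @ self.A.t() @ self.B.t()) * self.scale

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable, "trainable parameters")
```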
In endoscopy, FM could provide real-time analysis and guidance, identifying areas of interest or concern during a procedure, which could assist less experienced endoscopists and potentially reduce the rate of missed lesions or abnormalities. Future FM are likely to feature more sophisticated algorithms for detecting and classifying diseases from pathology slides and endoscopy videos, trained on vast datasets to recognize rare conditions, subtle abnormalities, and early disease stages with high accuracy. Despite their major advances, the deployment and adoption of FM like other medical AI tools remain limited in real-world clinical practice. To increase adoption in the real world, it is essential that medical AI tools are accepted by patients, clinicians, health organizations and authorities. However, there is a lack of widely accepted guidelines on how medical AI tools should be designed, developed, evaluated and deployed to be trustworthy, ethically sound and legally compliant. To address this challenge, the FUTURE-AI guidelines <cit.> were proposed that aim to guide the development and deployment of AI tools in healthcare that are ethical, legally compliant, technically robust, and clinically safe. It consists of six guiding principles for trustworthy AI including Fairness, Universality, Traceability, Usability, Robustness, and Explainability as shown in Figure <ref>.A. This initiative is crucial, especially in domains like gastric cancer detection where AI's potential for early detection and improved patient outcomes is significant. The specificity of vision FM in detecting such conditions necessitates adherence to guidelines ensuring the models' ethical use, fairness, and transparency. By adopting FUTURE-AI's guidelines, researchers and developers can mitigate risks like biases and errors in AI models, ensuring these tools are trustworthy and can be seamlessly integrated into clinical practice. The structured approach provided by FUTURE-AI facilitates the creation of AI tools that are ready for real-world deployment, encouraging their acceptance among patients, clinicians, and health authorities. Additionally, Figure <ref>.B shows our proposed AI development framework which provides a structured outline for documenting AI models for the transparency and reliability of gastric cancer detection tools. The Model Cards section details the model's purpose, its limitations, the nature of the training data, and how the model's performance is measured. Training and Versioning are recorded, tracing the evolution of the model through updates and refinements. Privacy and Security considerations are paramount, detailing the protective measures such as encryption and anonymisation to ensure patient data confidentiality during model training and deployment. This multi-dimensional approach to documentation is essential for the end-users and developers to understand the model's capabilities, limitations, and to ensure its responsible use in the future healthcare settings. § CONCLUSION In this survey, we have conducted a comprehensive review of the recent advancements in FM for pathology and endoscopy imaging. Our survey begins with an introductory section, followed by a discussion on the principles of vision FM, including architecture types, training objectives (i.e., contrastive and generative), and large-scale training. Section <ref> delves into pathology FM, which are classified into visually (section <ref>) and textually (section <ref>) prompted models. 
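As a concrete, if simplified, illustration of the documentation framework described above, a model-card record can be kept as structured data alongside the model. The fields below mirror the categories named in the text (purpose, limitations, training data, evaluation, versioning, privacy); all values are placeholders and do not describe any released model.

```python
# Minimal, illustrative model-card record; field values are placeholders.
model_card = {
    "model": "gastric-inflammation-classifier",
    "version": "0.1.0",
    "intended_use": "Research-only triage of gastric biopsy tiles (IM/atrophy vs. normal)",
    "limitations": ["Not validated for clinical use",
                    "Trained on a single staining protocol"],
    "training_data": {"source": "placeholder WSI cohort", "num_slides": None},
    "evaluation": {"metrics": ["balanced accuracy", "AUROC"],
                   "external_validation": False},
    "privacy": {"phi_removed": True, "encryption_at_rest": True},
}
print(model_card["model"], model_card["version"])
```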
Visually prompted models are applied to pathology image segmentation and classification, whereas textually prompted models are utilized solely for pathology image classification. Section <ref> describes recent works on endoscopy FM, which are exclusively visually prompted models. In conclusion, our survey not only reviews recent developments but also lays the groundwork for future research in FM. We propose several directions for future investigations (section <ref>), offering a roadmap for researchers aiming to excel in the field of FM for pathology and endoscopy imaging. This research is part of AIDA project, funded by the European Union (grant number 101095359) and UK Research and Innovation (grant number 10058099). -0cm References 999 yoon2015diagnosis Yoon, H.; Kim, N. Diagnosis and management of high risk group for gastric cancer. Gut And Liver 2015, 9, 5. pimentel2019management Pimentel-Nunes, P.; Libânio, D.; Marcos-Pinto, R.; Areia, M.; Leja, M.; Esposito, G.; Garrido, M.; Kikuste, I.; Megraud, F.; Matysiak-Budnik, T; & Others. Management of epithelial precancerous conditions and lesions in the stomach (maps II): European Society of gastrointestinal endoscopy (ESGE), European Helicobacter and microbiota Study Group (EHMSG), European Society of pathology (ESP), and Sociedade Portuguesa de Endoscopia Digestiva (SPED) guideline update 2019. Endoscopy 2019,51, 365-388. matysiak2020recent Matysiak-Budnik, T.; Camargo, M.; Piazuelo, M.; Leja, M. Recent guidelines on the management of patients with gastric atrophy: common points and controversies. Digestive Diseases And Sciences 2020, 65, 1899-1903. chen2020simple Chen, T.; Kornblith, S.; Norouzi, M; & Hinton, G. A simple framework for contrastive learning of visual representations. International Conference On Machine Learning. 2020, 1597-1607. radford2021learning Radford, A.; Kim, J.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J; & Others. Learning transferable visual models from natural language supervision. International Conference On Machine Learning. 2021, 8748-8763. jia2021scaling Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.-T.; Parekh, Z.; Pham, H.; Le, Q.; Sung, Y.-H.; Li, Z.; and Duerig, T. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning. 2021, 4904–4916. li2021align Li, J.; Selvaraju, R.; Gotmare, A.; Joty, S.; Xiong, C.; and Hoi, S.C.H. Align before fuse: Vision and language representation learning with momentum distillation. Advances in Neural Information Processing Systems, 34, 2021, 9694–9705. yao2021filip Yao, L.; Huang, R.; Hou, L.; Lu, G.; Niu, M.; Xu, H.; Liang, X.; Li, Z.; Jiang, X.; and Xu, C. Filip: fine-grained interactive language-image pre-training. arXiv preprint arXiv:2111.07783, 2021. wang2022cris Wang, Z.; Lu, Y.; Li, Q.; Tao, X.; Guo, Y.; Gong, M.; and Liu, T. Cris: Clip-driven referring image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 11686–11695. li2022grounded Li, L.H.; Zhang, P.; Zhang, H.; Yang, J.; Li, C.; Zhong, Y.; Wang, L.; Yuan, L.; Zhang, L.; Hwang, J.-N.; et al. Grounded language-image pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 10965–10975. xu2022groupvit Xu, J.; De Mello, S.; Liu, S.; Byeon, W.; Breuel, T.; Kautz, J.; and Wang, X. Groupvit: Semantic segmentation emerges from text supervision. Computer Vision And Pattern Recognition, 2022. 
yang2022unified Yang, J.; Li, C.; Zhang, P.; Xiao, B.; Liu, C.; Yuan, L.; and Gao, J. Unified contrastive learning in image-text-label space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 19163–19173. zhang2022glipv2 Zhang, H.; Zhang, P.; Hu, X.; Chen, Y.-C.; Li, L.; Dai, X.; Wang, L.; Yuan, L.; Hwang, J.-N.; and Gao, J. Glipv2: Unifying localization and vision-language understanding. Advances in Neural Information Processing Systems, 35, 2022, 36067–36080. bao2021beit Bao, H.; Dong, L.; and Wei, F. Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254, 2021. liu2019roberta Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019. zhang2023toward Zhang, X.; Zeng, Y.; Zhang, J.; and Li, H. Toward building general foundation models for language, vision, and vision-language understanding tasks. arXiv preprint arXiv:2301.05065, 2023. singh2022flava Singh, A.; Hu, R.; Goswami, V.; Couairon, G.; Galuba, W.; Rohrbach, M.; and Kiela, D. Flava: A foundational language and vision alignment model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 15638–15650. hao2022language Hao, Y.; Song, H.; Dong, L.; Huang, S.; Chi, Z.; Wang, W.; Ma, S.; and Wei, F. Language models are general-purpose interfaces. arXiv preprint arXiv:2206.06336, 2022. li2023blip2 Li, J.; Li, D.; Savarese, S.; and Hoi, S. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023. tschannen2023image Tschannen, M.; Kumar, M.; Steiner, A.; Zhai, X.; Houlsby, N.; and Beyer, L. Image captioners are scalable vision learners too. arXiv preprint arXiv:2306.07915, 2023. chen2020uniter Chen, Y.-C.; Li, L.; Yu, L.; El Kholy, A.; Ahmed, F.; Gan, Z.; Cheng, Y.; and Liu, J. Uniter: Universal image-text representation learning. In Computer Vision–ECCV 2020: 16th European Conference, Part XXX, 2020, 104–120. tsimpoukelli2021multimodal Tsimpoukelli, M.; Menick, J.L.; Cabi, S.; Eslami, S.M.; Vinyals, O.; and Hill, F. Multimodal few-shot learning with frozen language models. Advances in Neural Information Processing Systems, 34, 2021, 200–212. xu2022unifying Xu, H.; Zhang, J.; Cai, J.; Rezatofighi, H.; Yu, F.; Tao, D.; and Geiger, A. Unifying flow, stereo and depth estimation. arXiv preprint arXiv:2211.05783, 2022. kirillov2023segment Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, 4015–4026. deng2023segment Deng, R.; Cui, C.; Liu, Q.; Yao, T.; Remedios, L.W.; Bao, S.; Landman, B.A.; Wheless, L.E.; Coburn, L.A.; Wilson, K.T.; et al. Segment anything model (sam) for digital pathology: Assess zero-shot segmentation on whole slide imaging. arXiv preprint arXiv:2304.04155, 2023. cui2023allin Cui, C.; Deng, R.; Liu, Q.; Yao, T.; Bao, S.; Remedios, L.W.; Tang, Y.; and Huo, Y. All-in-sam: from weak annotation to pixel-wise nuclei segmentation with prompt-based finetuning. arXiv preprint arXiv:2307.00290, 2023. zhang2023sampath Zhang, J.; Ma, K.; Kapse, S.; Saltz, J.; Vakalopoulou, M.; Prasanna, P.; and Samaras, D. Sam-path: A segment anything model for semantic segmentation in digital pathology. arXiv preprint arXiv:2307.09570, 2023. 
http://arxiv.org/abs/2406.18992v1
20240627083335
Semi-supervised Concept Bottleneck Models
[ "Lijie Hu", "Tianhao Huang", "Huanyi Xie", "Chenyang Ren", "Zhengyu Hu", "Lu Yu", "Di Wang" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
Investigating a new neutral heavy gauge boson within the mono-Z^' model via simulated pp collisions at √(s) = 14 TeV at the HL-LHC. S. Elgammal July 1, 2024 =================================================================================================================================== § ABSTRACT Concept Bottleneck Models (CBMs) have garnered increasing attention due to their ability to provide concept-based explanations for black-box deep learning models while achieving high final prediction accuracy using human-like concepts. However, the training of current CBMs heavily relies on the accuracy and richness of annotated concepts in the dataset. These concept labels are typically provided by experts, which can be costly and require significant resources and effort. Additionally, concept saliency maps frequently misalign with input saliency maps, causing concept predictions to correspond to irrelevant input features - an issue related to annotation alignment. To address these limitations, we propose a new framework called SSCBM (Semi-supervised Concept Bottleneck Model). Our SSCBM is suitable for practical situations where annotated data is scarce. By leveraging joint training on both labeled and unlabeled data and aligning the unlabeled data at the concept level, we effectively solve these issues. We proposed a strategy to generate pseudo labels and an alignment loss. Experiments demonstrate that our SSCBM is both effective and efficient. With only 20% labeled data, we achieve 93.19% (96.39% in a fully supervised setting) concept accuracy and 75.51% (79.82% in a fully supervised setting) prediction accuracy. § INTRODUCTION Recently, deep learning models, such as ResNet <cit.>, often feature complex non-linear architectures, making it difficult for end-users to understand and trust their decisions. This lack of interpretability is a significant obstacle to their adoption, especially in critical fields such as healthcare <cit.> and finance <cit.>, where transparency is crucial. Explainable artificial intelligence (XAI) models have been developed to meet the demand for transparency, providing insights into their behavior and internal mechanisms <cit.>. Concept Bottleneck Models (CBMs) <cit.> are particularly notable among these XAI models for their ability to clarify the prediction process of end-to-end AI models. CBMs introduce a bottleneck layer that incorporates human-understandable concepts. During prediction, CBMs first predict concept labels from the original input and then use these predicted concepts in the bottleneck layer to determine the final classification label. This approach results in a self-explanatory decision-making process that users can comprehend. A major issue in original CBMs is the need for expert labeling, which is costly in practice. Some researchers address this problem through unsupervised learning. For example, <cit.> proposes a Label-free CBM that transforms any neural network into an interpretable CBM without requiring labeled concept data while maintaining high accuracy. Similarly, Post-hoc Concept Bottleneck models <cit.> can be applied to various neural networks without compromising performance, preserving interpretability advantages. However, these methods have three issues. First, those unsupervised methods heavily rely on large language models like GPT-3, which have reliability issues <cit.>. Second, the concepts extracted by these models lack evaluation metrics, undermining their interpretability. 
Third, the assumption that no concept labels are available is too stringent in practice. In reality, obtaining a small portion of concept labels is feasible and cost-effective. Therefore, we can maximize the use of this small labeled concept dataset. This is the motivation for introducing our framework, which focuses on the semi-supervised setting in CBM. In this paper, we introduce a framework called the SSCBM (Semi-supervised Concept Bottleneck Model). Compared to a supervised setting, semi-supervised CBMs have two main challenges. First, obtaining concept embeddings requires concept labels, so we need to generate pseudo labels to obtain these concept embeddings. To achieve this, SSCBM uses a KNN-based algorithm to assign pseudo-concept labels for unlabeled data. While such a simple pseudo-labeling method is effective and has acceptable classification accuracy, we also find that the concept saliency map often misaligns with the input saliency map, meaning concept predictions frequently correspond to irrelevant input features. This misalignment often arises from inaccurate concept annotations or unclear relationships between input features and concepts, which is closely related to the broader issue of annotation alignment. In fact, in the supervised setting, there is a similar misalignment issue <cit.>. Existing research seeks to improve alignment by connecting textual and image information <cit.>. However, these methods only focus on the supervised setting and cannot be directly applied to our settings because our pseudo-labels are noisy. Our framework achieves excellent performance in both concept accuracy and concept saliency alignment by leveraging joint training on both labeled and unlabeled data and aligning the unlabeled data at the concept level. To achieve this, we leverage the relevance between the input image and the concept and get other pseudo-concept labels based on these similarity scores. Finally, we align these two types of pseudo-concept labels to give the concept encoder the ability to extract useful information from features while also inheriting the ability to align concept embeddings with the input. Comprehensive experiments on benchmark datasets demonstrate that our SSCBM is both efficient and effective. Our contributions are summarized as follows. * We propose the SSCBM, a framework designed to solve the semi-supervised annotation problem, which holds practical significance in real-world applications. To the best of our knowledge, we are the first to tackle these two problems within a single framework, elucidating the behavior of CBMs through semi-supervised alignment. * Our framework addresses the semi-supervised annotation problem alongside the concept semantics alignment problem in a simple and clever manner. We first use the KNN algorithm to assign a pseudo-label to each unlabeled data, which has been experimentally proven to be simple and effective. Then, we compute a heatmap between concept embeddings and the input. After applying a threshold, we obtain the predicted alignment label. Finally, we optimize the alignment loss between these two pseudo-concept labels to mitigate the misalignment issue. * Comprehensive experiments demonstrate the superiority of our SSCBM in annotation and concept-saliency alignment, indicating its efficiency and effectiveness. With only 1% labeled data, we achieved 88.38% concept accuracy and 62.19% predicted accuracy. 
With 20% labeled data, we achieve 93.19% (96.39% in a fully supervised setting) concept accuracy and 75.51% (79.82% in a fully supervised setting) predicted accuracy. § RELATED WORK Concept Bottleneck Models. Concept Bottleneck Model (CBM) <cit.> is an innovative deep-learning approach for image classification and visual reasoning. By introducing a concept bottleneck layer into deep neural networks, CBMs enhance model generalization and interpretability by learning specific concepts. However, CBMs face two primary challenges: their performance often lags behind that of models without the bottleneck layer due to incomplete information extraction, and they rely heavily on laborious dataset annotation. Researchers have explored various solutions to these challenges. <cit.> extended CBMs into interactive prediction settings by introducing an interaction policy to determine which concepts to label, thereby improving final predictions. <cit.> addressed CBM limitations by proposing a Label-free CBM, which transforms any neural network into an interpretable CBM without requiring labeled concept data, maintaining high accuracy. Post-hoc Concept Bottleneck models <cit.> can be applied to various neural networks without compromising performance, preserving interpretability advantages. Related work in the image domain includes studies <cit.>. In the graph concept field, <cit.> provide a global interpretation for Graph Neural Networks (GNNs) by mapping graphs into a concept space through clustering and offering a human-in-the-loop evaluation. <cit.> extend this approach by incorporating both global and local explanations. For local explanations, they define a concept set, with each neuron represented as a vector with Boolean values indicating concept activation. However, existing works rarely consider semi-supervised settings, which are practical in real-world applications. Our framework addresses these issues effectively. Semi-supervised Learning. Semi-supervised learning (SSL) combines the two main tasks of machine learning: supervised learning and unsupervised learning <cit.>. It is typically applied in scenarios where labeled data is scarce. Examples include computer-aided diagnosis <cit.>, medical image analysis <cit.>, and drug discovery <cit.>. In these cases, collecting detailed annotated data by experts requires considerable time and effort. However, under the assumption of data distribution, unlabeled data can also assist in building better classifiers <cit.>. SSL, also known as self-labeling or self-teaching in its earliest forms, involves the model iteratively labeling a portion of the unlabeled data and adding it to the training set for the next round of training <cit.>. The expectation-maximization (EM) algorithm proposed by <cit.> uses both labeled and unlabeled data to produce maximum likelihood estimates of parameters. <cit.> and <cit.> focus on consistency regularization. 2-model <cit.> combines both supervised cross-entropy loss and unsupervised consistency loss while perturbing the model and data based on the consistency constraint assumption. A temporal ensembling model integrates predictions from models at various time points. Mean teacher <cit.> addresses the slow updating issue of the temporal ensembling model on large datasets by averaging model weights instead of predicting labels. 
MixMatch <cit.> unifies and refines the previous approaches of consistency regularization, entropy minimization, and traditional regularization into a single loss function, achieving excellent results. Pseudo labeling, as an effective tool for reducing the entropy of unlabeled data <cit.>, has been increasingly attracting the attention of researchers in the field of semi-supervised learning. <cit.> proposes that directly using the model's predictions as pseudo-labels can achieve good results. FixMatch <cit.> predicts and retains the model, generating high-confidence pseudo-labels. <cit.> continuously adjusts the teacher based on feedback from the student, thereby generating better pseudo-labels. While there has been a plethora of work in the semi-supervised learning field, the focus on semi-supervised concept bottleneck models remains largely unexplored. Our work focuses on this new area. § PRELIMINARIES Concept Bottleneck Models <cit.>. We consider a classification task with a concept set denoted as 𝒞 ={p_1, ⋯, p_k} with each p_i is a concept given by experts or LLMs, and a training dataset represented 𝒟 = {(x^(i), y^(i), c^(i))}_i=1^N. Here, for i∈ [N], x^(i)∈𝒳⊆ℝ^d represents the feature vector (e.g., an image’s pixels), y^(i)∈𝒴⊆ℝ^l denotes the label (l is the number of classes), c^(i) =(c_i^1, ⋯, c_i^k) ∈ℝ^k represents the concept vector (a binary vector of length k, where each value indicates whether the input x^(i) contains that concept). In CBMs, the goal is to learn two representations: one called concept encoder that transforms the input space to the concept space, denoted as g: ℝ^d →ℝ^k, and another called label predictor that maps the concept space to the downstream prediction space, denoted as f: ℝ^k →ℝ^l. Usually, the map f is linear. For any input x, we aim to ensure that its predicted concept vector ĉ=g(x) and prediction ŷ=f(g(x)) are close to their underlying counterparts, thus capturing the essence of the original CBMs. Concept Embedding Models <cit.>. As the original CBM relies solely on concept features to determine the model’s predictions, compared to canonical deep neural networks, it will degrade the prediction performance. To further improve the performance of CBMs, CEM was developed by <cit.>. It achieves this by learning interpretable high-dimensional concept representations (i.e., concept embeddings), thus maintaining high task accuracy while obtaining concept representations that contain meaningful semantic information. For CEMs, we use the same setting as that of <cit.>. For each input x, the concept encoder learns k concept representations ĉ_1, ĉ_2, …, ĉ_k, each corresponding to one of the k ground truth concepts in the training dataset. In CEMs, each concept c_i is represented using two embeddings ĉ_i^+,ĉ_i^-∈ℝ^m, each with specific semantics, i.e., the concept is TRUE (activate state) and concept is FALSE (negative state), where hyper-parameter m is the embedding dimension. We use a DNN ψ(x) to learn a latent representation h∈ℝ^n_h, to be used as input of the CEM's embedding generator, where n_h is the dimension of the latent representation. CEM's embedding generator ϕ feeds h into two concept-specific fully connected layers in order to learn two concept embeddings in ℝ^m. ĉ_i=ϕ _i( h) =a(W_ih+b_i). Then we use a differential scoring function s:ℝ^2m→[ 0,1 ], to achieve the alignment of concept embeddings ĉ_i^+,ĉ_i^- and ground-truth concepts c_i. 
It is trained to predict the probability p̂_i:=s( [ ĉ_i^+,ĉ_i^-] ^⊤) =σ( W_s[ [ ĉ_i^+,ĉ_i^-] ^⊤] +b_s ) of concept c_i being active in embedding space. We get the final concept embedding ĉ_i, as follows: ĉ_i:=p̂_iĉ_i^++(1-p̂_i)ĉ_i^-. At this point, we understand that we can obtain high-quality concept embeddings rich in semantics through CEMs. In the subsequent section <ref>, we will effectively utilize these representations of concepts and further optimize their interpretability through our proposed framework SSCBM. Semi-supervised Setting. Now, we consider the setting of semi-supervised learning for concept bottleneck models. As mentioned earlier, a typical training dataset for CBMs can be represented as 𝒟 = {(x^(i), y^(i), c^(i))}_i=1^N, where x^(i)∈𝒳 represents the input feature. However, in semi-supervised learning tasks, the set of feature vectors typically consists of two parts, 𝒳 = {𝒳_L, 𝒳_U}, where 𝒳_L represents a small subset of labeled data and 𝒳_U represents the remaining unlabeled data. Generally we assume |𝒳_L| ≪ |𝒳_U|. We assume that x^(j)∈𝒳_L is labeled with a concept vector c^(j) and a label y^(j), and for any x^(i)∈𝒳, there only exists a corresponding label y^(i)∈𝒴. Note that our method can be directly extended to the fully semi-supervised case where even the classification labels for feature vectors in 𝒳_U are unknown. Under these settings, given a training dataset 𝒟 = 𝒟_L∪𝒟_U that includes both labeled and unlabeled data, the goal is to train a CBM using both the labeled data 𝒟_L and unlabeled data 𝒟_U. This aims to get better mappings g: ℝ^d →ℝ^k and f: ℝ^k →ℝ^l than those trained by using only labeled data, ultimately achieving higher task accuracy and its corresponding concept-based explanation. § SEMI-SUPERVISED CONCEPT BOTTLENECK MODELS In this section, we will elaborate on the details of the proposed SSCBM framework, which is shown in Figure <ref>. SSCBM follows the main idea of CEM. Specifically, to learn a good concept encoder, we use different processing methods for labeled and unlabeled data. Labeled data first passes through a feature extractor ψ to be transformed into a latent representation h, which then enters the concept embedding extractor to obtain the concept embeddings and predicted concept vector ĉ for the labeled data, which is compared with the ground truth concept to compute the concept loss. Additionally, the label predictor predicts ŷ based on ĉ, and calculates the task loss. For unlabeled data, we first extract image features V from the input using an image encoder. Then, we use the KNN algorithm to assign a pseudo-label ĉ_img to each unlabeled data, which has been experimentally proven to be simple and effective. In the second step, we compute a heatmap between concept embeddings and the input. After applying a threshold, we obtain the predicted alignment label ĉ_align. Finally, we compute the alignment loss between ĉ_img and ĉ_align. During each training epoch, we simultaneously compute these losses and update the model parameters based on the gradients. §.§ Label Anchor: Concept Embedding Encoder Concept Embeddings. As described in Section <ref>, we obtain high-dimensional concept representations with meaningful semantics based on CEMs. Thus, our concept encoder should extract useful information from both labeled and unlabeled data. For the labeled training data 𝒟_L = {(x^(i), y^(i), c^(i))}_i=1^|𝒟_L|, we follow the original CEM <cit.>, i.e., using a backbone network (e.g., ResNet50) to extract features h = {ψ(x^(i)) }_i=1^|𝒟_L|. 
Then, for each feature, it passes through an embedding generator to obtain concept embeddings ĉ_i ∈ℝ^m × k for i ∈ [k]. After passing through fully connected layers and activation layers, we obtain the predicted binary concept vector ĉ∈ℝ^k for the labeled data. The specific process can be represented by the following expression: ĉ^(j)_i = σ(ϕ(ψ(x^(j)))) and h^(j) = ψ(x^(j)), for i = 1, …, k, j = 1, …, |𝒟_L|, where ψ, ϕ, and σ represent the backbone, embedding generator, and activation function, respectively. To enhance the interpretability of concept embeddings, we calculate the concept loss using binary cross-entropy, optimizing the accuracy of concept predictions by computing ℒ_c from the predicted binary concept vector ĉ and the ground-truth concept labels c: ℒ_c = BCE(ĉ, c), where BCE is the binary cross-entropy loss. Task Loss. Since our ultimate task is classification, we also need to incorporate the task loss for the final prediction performance. After obtaining the predicted concept ĉ, we use a label predictor to predict the final class ŷ. We then define the task loss using the categorical cross-entropy loss to train our classification model as follows: ℒ_task = CE(ŷ, y). Note that for the unlabeled data, we can also calculate the task loss, since their class labels are known, in order to make full use of the data. §.§ Unlabel Alignment: Image-Textual Semantics Alignment Pseudo Labeling. Unlike for the labeled data, CEM cannot directly extract useful information from the unlabeled data, as the concept encoder is a supervised training architecture. Thus, in practical situations lacking labeled data, one direct approach is to obtain high-quality pseudo concept labels to train the model. Below, we introduce the method we use to obtain pseudo concept labels. First, it is natural to measure the similarity between images by calculating their distance in cosine space. Based on this idea, we can assign pseudo labels to unlabeled data by finding labeled data with similar image features. Specifically, for each unlabeled training sample x ∈𝒟_U = {(x^(i), y^(i))}_i=1^|𝒟_U|, we calculate its cosine distance to all labeled data points x^(j)∈𝒟_L = {(x^(j), y^(j), c^(j))}_j=1^|𝒟_L| and select the k samples with the smallest distances: dist(x, x^(j)) = 1 - x · x^(j)/(||x||_2·||x^(j)||_2), j = 1, …, |𝒟_L|. Then, we normalize the reciprocals of the cosine distances between the nearest k data points and x to form weights, and we use these weights to compute a weighted average of the concept labels of these k data points, obtaining the pseudo concept label for x. In this way, we obtain pseudo concept labels ĉ_img for all x ∈𝒟_U. In our experiments, we find that directly feeding the pseudo-concept labels generated by KNN to CEM already yields satisfactory performance. However, this simple labeling method can lead to alignment issues in the concept embeddings learned by the CEM's concept encoder. Specifically, the predicted concepts ĉ might have no relation to the corresponding features V in the image, hindering the effectiveness of CEM as a reliable interpretability tool. Moreover, due to this misalignment, the prediction performance of the concept encoder also degrades. In the following, we address this misalignment issue. Generating Concept Heatmaps. The pseudo concept labels obtained via KNN already contain useful information for prediction. Our goal is therefore to provide these labels with further information about their relation to the corresponding features.
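Before describing the heatmaps, we make the weighted-KNN pseudo-labeling step above concrete with a minimal sketch. It is an illustrative re-implementation under our own (hypothetical) array names rather than the exact training code; per the implementation details in the appendix, the features come from a ResNet34 encoder and the number of neighbors is 2.

```python
import numpy as np

def knn_pseudo_concept_labels(feats_unlabeled, feats_labeled, concepts_labeled,
                              n_neighbors=2, eps=1e-8):
    """Weighted-KNN pseudo concept labels (c_img) for unlabeled samples.

    feats_unlabeled  : (n_u, d) image features of unlabeled samples
    feats_labeled    : (n_l, d) image features of labeled samples
    concepts_labeled : (n_l, n_concepts) binary concept annotations
    Returns soft pseudo concept labels of shape (n_u, n_concepts), in [0, 1].
    Note: n_neighbors is the KNN parameter (2 in the paper's implementation
    details), distinct from the number of concepts k used elsewhere in the text.
    """
    # Cosine distance between every unlabeled/labeled pair.
    u = feats_unlabeled / (np.linalg.norm(feats_unlabeled, axis=1, keepdims=True) + eps)
    v = feats_labeled / (np.linalg.norm(feats_labeled, axis=1, keepdims=True) + eps)
    dist = 1.0 - u @ v.T                                   # (n_u, n_l)

    # Indices and distances of the nearest labeled samples.
    nn_idx = np.argsort(dist, axis=1)[:, :n_neighbors]     # (n_u, n_neighbors)
    nn_dist = np.take_along_axis(dist, nn_idx, axis=1)

    # Weights: normalized reciprocals of the cosine distances.
    w = 1.0 / (nn_dist + eps)
    w = w / w.sum(axis=1, keepdims=True)

    # Weighted average of the neighbors' concept labels.
    neighbor_concepts = concepts_labeled[nn_idx]           # (n_u, n_neighbors, n_concepts)
    return (w[..., None] * neighbor_concepts).sum(axis=1)
```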
To provide this grounding, we compute a second pseudo-concept label from the similarity between the concept embeddings and the input image, captured by concept heatmaps. Specifically, given an image x, its feature map V ∈ℝ^H × W × m is extracted by V = Ω(x), where Ω is the visual encoder, and H and W are the height and width of the feature map. Given V and the i-th concept embedding e_i, we obtain a heatmap ℋ_i, i.e., a similarity matrix that measures the similarity between the concept and the image, by computing their cosine similarity: ℋ_p,q,i = e_i^⊤ V_p,q/(||e_i||· ||V_p,q||), p = 1, … , H, q = 1, … , W, where p, q index positions in the heatmap and ℋ_p,q,i represents a local similarity score between V and e_i. Intuitively, ℋ_i represents the relation of each part of the image to the i-th concept. We then derive heatmaps for all concepts, denoted {ℋ_1, ℋ_2, …, ℋ_k}. Calculating Concept Scores and Concept Labels. As average pooling performs better in downstream classification tasks <cit.>, we apply average pooling to the heatmaps to deduce the connection between the image and the concepts: s_i = 1/(H · W)∑_p=1^H∑_q=1^Wℋ_p,q,i. Intuitively, s_i is the refined similarity score between the image and concept e_i. Thus, a concept vector s can be obtained, representing the similarity between an image input x and the set of concepts: s = (s_1, … , s_k)^⊤. The vector s can be viewed as a soft concept label obtained from similarity. Next, we transform it into a hard concept label ĉ_align: we determine the presence of a concept attribute in an image based on a threshold chosen empirically. If the value s_i exceeds this threshold, we consider the image to possess that specific concept attribute and set the concept label to True. In this way, we obtain predicted concept labels for all unlabeled data. We set the threshold to 0.6. Alignment of Image. Based on the above discussion, on the one hand, the concept encoder should learn information from ĉ_img; on the other hand, it should also produce concept embeddings whose similarity-based concept labels ĉ_align align well with the input image. Thus, we need a further alignment loss to achieve both goals. Specifically, we compute the alignment loss as follows: ℒ_align = BCE(ĉ_img, ĉ_align). §.§ Final Objective In this section, we describe the overall optimization objective. First, we have the concept loss ℒ_c in (<ref>) for enhancing the interpretability of concept embeddings. Also, as the concept embeddings are fed into the label predictor to produce the final prediction, we have a task loss between the predictions given by the concept bottleneck and the ground truth, as shown in (<ref>). For binary classification tasks, we employ binary cross-entropy (BCE) as the loss function; for multi-class classification tasks, we use cross-entropy. Finally, to align the images with the concept labels, we compute the alignment loss in (<ref>). Formally, the overall loss function of our approach can be formulated as: ℒ = ℒ_task + λ_1 ·ℒ_c + λ_2 ·ℒ_align, where λ_1, λ_2 are hyperparameters trading off interpretability and accuracy.
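To make the overall procedure concrete, the sketch below assembles the heatmap-based alignment labels and the combined objective ℒ = ℒ_task + λ_1 ·ℒ_c + λ_2 ·ℒ_align into a single function. It is a simplified, non-authoritative illustration: module names such as concept_encoder, label_predictor, and visual_encoder, the batching of labeled versus unlabeled data, and the treatment of the thresholding step are our own assumptions rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def alignment_labels(feat_map, concept_emb, threshold=0.6):
    """Hard alignment labels c_align from concept heatmaps.

    feat_map    : (B, H, W, m) visual feature map V = Omega(x)
    concept_emb : (k, m) one embedding e_i per concept
    """
    v = F.normalize(feat_map, dim=-1)                 # unit-normalize each spatial vector V_{p,q}
    e = F.normalize(concept_emb, dim=-1)
    heatmaps = torch.einsum('bhwm,km->bhwk', v, e)    # cosine similarities H_{p,q,i}
    scores = heatmaps.mean(dim=(1, 2))                # average pooling over H x W -> (B, k)
    return (scores > threshold).float()               # thresholded at 0.6, as in the text

def sscbm_objective(x_l, c_l, y_l, x_u, y_u, c_img_u,
                    concept_encoder, label_predictor, visual_encoder,
                    lam1=1.0, lam2=0.1):
    """L = L_task + lam1 * L_c + lam2 * L_align (illustrative sketch only)."""
    # Labeled branch: concept loss and task loss (c_l is a float tensor of 0/1 concepts).
    c_hat_l = concept_encoder(x_l)                    # concept probabilities in [0, 1], (B_l, k)
    loss_c = F.binary_cross_entropy(c_hat_l, c_l)
    loss_task = F.cross_entropy(label_predictor(c_hat_l), y_l)

    # Unlabeled branch: class labels are assumed known, so the task loss also applies.
    c_hat_u = concept_encoder(x_u)
    loss_task = loss_task + F.cross_entropy(label_predictor(c_hat_u), y_u)

    # Alignment between KNN pseudo labels c_img and heatmap-based labels c_align.
    feat_map_u = visual_encoder(x_u)                  # (B_u, H, W, m)
    c_align_u = alignment_labels(feat_map_u, concept_encoder.concept_embeddings)
    loss_align = F.binary_cross_entropy(c_img_u, c_align_u)   # mirrors L_align as written

    return loss_task + lam1 * loss_c + lam2 * loss_align
```

Since the thresholding in ĉ_align is not differentiable, an implementation might instead keep the soft scores s in the alignment term so that gradients reach the concept embeddings; the sketch keeps the hard labels to stay close to the formulas above.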
§ EXPERIMENTS In this section, we conduct experimental studies on the performance of our framework. Specifically, we evaluate its utility and interpretability. Details and additional results are in Appendix <ref> due to space limits. §.§ Experimental Settings Datasets. We evaluate our methods on three real-world image tasks: CUB, CelebA, and AwA2. See Appendix <ref> for a detailed introduction. Baseline Models. As there is no existing baseline designed for the semi-supervised setting, we compare our SSCBM with the two supervised baselines mentioned in Section <ref>. Concept Bottleneck Model (CBM) <cit.>: We adopt the same setting and architecture as in the original CBM. Concept Embedding Model (CEM): We follow the same setting as in <cit.>. Evaluation Metrics. To evaluate utility, we consider the accuracy of both class and concept label prediction. Specifically, concept accuracy measures the model's prediction accuracy for concepts: 𝒞_acc = 1/N·1/k∑_i=1^N ∑_j=1^k 𝕀(ĉ_j^(i)=c_j^(i)). Task accuracy measures the model's performance in predicting downstream task classes: 𝒜_acc = 1/N∑_i=1^N 𝕀(ŷ^(i)=y^(i)). To evaluate interpretability, in addition to concept accuracy (which, given the structure of CBMs, also reflects interpretability), we show visualization results, similar to previous work <cit.>. Moreover, we evaluate the performance of test-time intervention. Implementation Details. All experiments are conducted on a Tesla V100S PCIe 32 GB GPU and an Intel Xeon Processor (Skylake, IBRS) CPU. See Appendix <ref> for more details. §.§ Evaluation Results on Utility Performance of SSCBM. We first study the performance of SSCBM. As we focus on the semi-supervised setting, we use different ratios (ranging from 1% to 80%) of all data samples as labeled data and treat the rest as unlabeled data. The results are shown in Table <ref>. For the CUB dataset, the concept accuracy improves as the labeled ratio increases, peaking at 95.04% when the labeled ratio is 0.8. The class accuracy shows a consistent upward trend, from 62.19% at a 1% labeled ratio to 79.27% at a 60% ratio, with a slight dip at 80%. Notably, compared with the concept and task accuracy of CEM trained on the whole dataset, SSCBM already achieves comparable performance with only 40% of the data labeled. Compared to CUB, the performance on CelebA and AwA2 fluctuates, i.e., both concept accuracy and task accuracy can decrease as the labeled ratio increases. This is potentially because the CUB dataset has a more diverse set of concepts (112 selected) compared to CelebA (only 6 selected <cit.>) and AwA2 (85 selected), as well as factors such as higher variability in the data or more complex feature relationships. Nevertheless, these results remain very close to those of CEM under fully supervised training. Comparison with Baselines. We then compare both concept and class accuracy with the baselines on the three datasets when the labeled ratio is 10%. Results in Table <ref> show that our method significantly outperforms the baselines when labeled data is scarce. These results also confirm that the classical CBM and CEM are unsuitable for the semi-supervised setting. The success of SSCBM comes from the alignment loss increasing concept prediction accuracy, which in turn increases class accuracy; this indicates that by jointly training on labeled and unlabeled data and aligning the unlabeled data at the concept level, we extract more information.
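Before turning to interpretability, we note that the two utility metrics above, 𝒞_acc and 𝒜_acc, reduce to simple element-wise comparisons; the short sketch below (with illustrative array names of our own choosing, and an assumed 0.5 binarization threshold for concept probabilities) makes the definitions explicit.

```python
import numpy as np

def concept_accuracy(c_pred, c_true, threshold=0.5):
    """C_acc: fraction of correctly predicted concept bits over N samples and k concepts."""
    c_hat = (c_pred >= threshold).astype(int)      # binarize predicted concept probabilities (assumed 0.5 cutoff)
    return float((c_hat == c_true).mean())

def task_accuracy(y_logits, y_true):
    """A_acc: fraction of samples whose predicted class matches the label."""
    return float((y_logits.argmax(axis=1) == y_true).mean())
```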
§.§ Interpretability Evaluation For interpretability, we showed in the previous section that the concept accuracy of SSCBM with a small labeled ratio is very close to that of CEM, indicating that it inherits CEM's interpretability. Beyond concept accuracy, note that SSCBM also has a pseudo label derived from the alignment between the concept embeddings and the input saliency map, and the alignment loss trains the model to preserve this alignment. We therefore evaluate the alignment performance here to show the faithfulness of the interpretability provided by SSCBM. We measure alignment performance by comparing the correctness of the concept saliency map against the concept semantics in Figure <ref>. See Appendix <ref> for the detailed experimental procedure for generating saliency maps. The results show that our concept saliency maps match the concept semantics, indicating the effectiveness of our alignment loss. In Appendix <ref>, we also provide additional interpretability evaluations in Figures <ref>-<ref>. §.§ Test-time Intervention Test-time intervention enables human users to interact with the model at inference time. We test test-time intervention by correcting between 10% and 100% of the concept labels in the concept predictor. We adopt individual intervention for CelebA and AwA2, as there are no grouped concepts. For CUB, we perform group intervention, i.e., we intervene on concepts with associated attributes. For example, breast color::yellow, breast color::black, and breast color::white belong to the same concept group, so we only need to correct the concept labels within the group. We expect model performance to steadily increase with the ratio of concept intervention, indicating that the model learned the corrected label information and automatically corrected other labels. Results in Figure <ref> (a) demonstrate our model's robustness and an increasing performance trend as it incorporates the corrected concept information, supporting both its interpretability and its prediction performance. This stems from our alignment loss effectively learning the correct information pairs across labeled and unlabeled data. Results in Figure <ref> (b) also show that by changing the wing color to brown, we successfully caused the model to predict the Great Crested Flycatcher instead of the Swainson Warbler. More results are in Appendix <ref>. §.§ Ablation Study Finally, we conduct ablation experiments. Notably, when λ_2=0 in (<ref>), SSCBM reduces to the original CEM. Thus, by comparing SSCBM with CEM in Table <ref>, we confirm that the alignment loss is essential for our method. We then study the two kinds of pseudo-labels in more detail to demonstrate that each one plays an indispensable role in the efficacy of SSCBM. Specifically, based on our method in Section <ref>, we have three types of pseudo labels available: the pseudo concept label ĉ_img, the alignment label ĉ_align, and the label ĉ predicted from the concept embeddings. We first remove ĉ_img and calculate the alignment loss using ĉ_align and ĉ; conversely, we remove ĉ_align and calculate the alignment loss using ĉ_img and ĉ. From the results presented in Table <ref>, it is evident that removing the ĉ_img component significantly degrades the performance of SSCBM at both the concept level and the class level. This indicates that the pseudo-concept labels obtained via KNN contain necessary information about the ground truth, and that the concept encoder needs to extract information from these labels to achieve better performance.
This situation is similar when removing the ĉ_align component, indicating that aligning the concept embeddings with the input saliency map extracts further useful information from the input image and is thus beneficial to performance. Our observations underscore how effectively the two kinds of pseudo-concept labels work together within our objective function, collectively improving both model prediction and concept label prediction. § CONCLUSION The training of current CBMs heavily relies on the accuracy and richness of annotated concepts in the dataset. These concept labels are typically provided by experts, which can be costly and require significant resources and effort. Additionally, concept saliency maps frequently misalign with input saliency maps, causing concept predictions to correspond to irrelevant input features, an issue related to annotation alignment. To address these problems, we propose SSCBM, which combines a pseudo-label generation strategy with an alignment loss. Our results demonstrate its effectiveness. § DETAILS OF EXPERIMENTAL SETUP §.§ Datasets We evaluate our methods on three real-world image tasks: CUB, CelebA, and AwA2. * CUB <cit.>: the Caltech-UCSD Birds-200-2011 (CUB) dataset, which includes a total of 11,788 avian images. The objective is to accurately categorize these birds into one of 200 distinct species. Following <cit.>, we use k = 112 binary bird attributes representing wing color, beak shape, etc. * CelebA <cit.>: the Large-scale CelebFaces Attributes dataset. In the CelebA task, there are 6 balanced, incomplete concept annotations, and each image can belong to one of 256 classes. * AwA2 <cit.>: Animals with Attributes 2 consists of 37,322 images in total, distributed across 50 animal categories. AwA2 also provides a category-attribute matrix, which contains an 85-dimensional attribute vector (e.g., color, stripe, furry, size, and habitat) for each category. §.§ Implementation Details First, we resize the images to an input size of 299 x 299 (64 x 64 for CelebA). We then employ ResNet34 <cit.> as the backbone to transform the input into a latent code, followed by a fully connected layer that converts it into concept embeddings of size 16 (32 for CUB). During pseudo-labeling, we also utilize ResNet34, with the KNN algorithm using k = 2. We set λ_1=1 and λ_2=0.1 and use the SGD optimizer with a learning rate of 0.05 and a regularization coefficient of 5e-6. We train SSCBM for 100 epochs with a batch size of 256 (for AwA2, the batch size is 32 due to the large size of individual images). We repeat each experiment 5 times and report the average results. To construct the concept saliency map, we first upsample the heatmaps {ℋ_1, ℋ_2, …, ℋ_k} calculated in Section <ref> to the size H × W (the original image size). Then, we create a mask based on the value intensities, with higher values corresponding to darker colors. § TEST-TIME INTERVENTION Results in Figure <ref> demonstrate our model's robustness and an increasing performance trend as it incorporates the corrected concept information, supporting both its interpretability and its prediction performance. This stems from our alignment loss effectively learning the correct information pairs across labeled and unlabeled data. Here, we present some successful examples of test-time intervention, illustrated in Figure <ref>. The first two on the left show examples from the CUB dataset.
In the top left image, by changing the wing color to brown, we successfully caused the model to predict the Great Crested Flycatcher instead of the Swainson Warbler. In the bottom left, because the model initially failed to notice that the upper part of the bird was black, it misclassified the bird as a Vesper Sparrow; through test-time intervention, we successfully made it predict the bird as a Grasshopper Sparrow. The results on the right side of the figure are from the AwA2 dataset. We successfully made the model predict correctly by modifying concepts at test time. For example, in the top right image, by modifying the 'fierce' concept for the orca, we prevented it from being predicted as a horse. In the bottom right, we made the model recognize the bat through its color. § ADDITIONAL INTERPRETABILITY EVALUATION We provide additional interpretability evaluations in Figures <ref>-<ref>. § LIMITATIONS While we alleviate part of the annotation problem through semi-supervised learning, semi-supervised models may not be suitable for all types of tasks or datasets; they are most effective when the data distribution is smooth. However, this is a limitation of semi-supervised learning in general, not of our method. § BROADER IMPACT The training of current CBMs heavily relies on the accuracy and richness of annotated concepts in the dataset. These concept labels are typically provided by experts, which can be costly and require significant resources and effort. Additionally, concept saliency maps frequently misalign with input saliency maps, causing concept predictions to correspond to irrelevant input features, an issue related to annotation alignment. To address these problems, we propose SSCBM, which combines a pseudo-label generation strategy with an alignment loss. Our results demonstrate its effectiveness, and the method has practical uses in real-world settings.
http://arxiv.org/abs/2406.19314v1
20240627164742
LiveBench: A Challenging, Contamination-Free LLM Benchmark
[ "Colin White", "Samuel Dooley", "Manley Roberts", "Arka Pal", "Ben Feuer", "Siddhartha Jain", "Ravid Shwartz-Ziv", "Neel Jain", "Khalid Saifullah", "Siddartha Naidu", "Chinmay Hegde", "Yann LeCun", "Tom Goldstein", "Willie Neiswanger", "Micah Goldblum" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
Simple homotopy invariance of the loop coproduct Pavel Safronov July 1, 2024 ================================================ § ABSTRACT Test set contamination, wherein test data from a benchmark ends up in a newer model's training set, is a well-documented obstacle for fair LLM evaluation and can quickly render benchmarks obsolete. To mitigate this, many recent benchmarks crowdsource new prompts and evaluations from human or LLM judges; however, these can introduce significant biases, and break down when scoring hard questions. In this work, we introduce a new benchmark for LLMs designed to be immune to both test set contamination and the pitfalls of LLM judging and human crowdsourcing. We release , the first benchmark that (1) contains frequently-updated questions from recent information sources, (2) scores answers automatically according to objective ground-truth values, and (3) contains a wide variety of challenging tasks, spanning math, coding, reasoning, language, instruction following, and data analysis. To achieve this, contains questions that are based on recently-released math competitions, arXiv papers, news articles, and datasets, and it contains harder, contamination-free versions of tasks from previous benchmarks such as Big-Bench Hard, AMPS, and IFEval. We evaluate many prominent closed-source models, as well as dozens of open-source models ranging from 0.5B to 110B in size. LiveBench is difficult, with top models achieving below 65% accuracy. We release all questions, code, and model answers. Questions will be added and updated on a monthly basis, and we will release new tasks and harder versions of tasks over time so that LiveBench can distinguish between the capabilities of LLMs as they improve in the future. We welcome community engagement and collaboration for expanding the benchmark tasks and models. § INTRODUCTION In recent years, as large language models (LLMs) have risen in prominence, it has become increasingly clear that traditional machine learning benchmark frameworks are no longer sufficient to evaluate new models. Benchmarks are typically published on the internet, and most modern LLMs include large swaths of the internet in their training data. If the LLM has seen the questions of a benchmark during training, its performance on that benchmark will be artificially inflated <cit.>, hence making many LLM benchmarks unreliable. Recent evidence of test set contamination includes the observation that LLMs' performance on Codeforces plummet after the training cutoff date of the LLM <cit.>, and before the cutoff date, performance is highly correlated with the number of times the problem appears on GitHub <cit.>. Similarly, a recent hand-crafted variant of the established math dataset, GSM8K, shows evidence that several models have overfit to this benchmark <cit.>. To lessen dataset contamination, benchmarks using LLM or human prompting and judging have become increasingly popular <cit.>. However, using these techniques comes with significant downsides. While LLM judges have multiple advantages, such as their speed and ability to evaluate open-ended questions, they are prone to making mistakes and can have several biases. For example, we will show in <ref> that for challenging reasoning and math problems, the pass/fail judgments from have an error rate of up to 46%. Furthermore, LLMs often favor their own answers over other LLMs, and LLMs favor more verbose answers <cit.>. 
Additionally, using humans to provide evaluations of LLMs can inject biases such as formatting of the output, and the tone and formality of the writing <cit.>. Using humans to generate questions also presents limitations. Human participants might not ask diverse questions, may favor certain topics that do not probe a model's general capabilities, or may construct their prompts poorly <cit.>. In this work, we introduce a framework for benchmarking LLMs designed to be immune to both test set contamination and the pitfalls of LLM judging and human crowdsourcing. We use this framework to create , the first benchmark with these three desiderata: (1) contains frequently-updated questions based on recent information sources; (2) is scored automatically according to the objective ground truth without the use of an LLM judge; and (3) questions are drawn from a diverse set of six categories. We ensure (2) by only including questions that have an objectively correct answer. questions are difficult: no current model achieves higher than 65% accuracy. Questions will be added and updated on a monthly basis, and we will release new tasks and harder versions of tasks over time so that can distinguish among the capabilities of LLMs as they improve in the future. Overview of tasks. currently consists of tasks across 6 categories: math, coding, reasoning, language, instruction following, and data analysis. Each task falls into one of two types: (1) tasks which use an information source for their questions, e.g., data analysis questions based on recent Kaggle datasets, or fixing typos in recent arXiv abstracts; and (2) tasks which are more challenging or diverse versions of existing benchmark tasks, e.g., from Big-Bench Hard <cit.>, IFEval <cit.>, or AMPS <cit.>. The categories and tasks included in are: * Math: questions from high school math competitions from the past 12 months (AMC12, AIME, USAMO, IMO, SMC), as well as harder versions of AMPS questions <cit.> * Coding: code generation questions from Leetcode and AtCoder (via LiveCodeBench <cit.>), as well as a novel code completion task * Reasoning: a harder version of Web of Lies from Big-Bench Hard <cit.>, and Zebra Puzzles (e.g., <cit.>) * Language Comprehension: Connections word puzzles, a typo-fixing task, and a movie synopsis unscrambling task for recent movies on IMDb and Wikipedia * Instruction Following: four tasks to paraphrase, simplify, summarize, or generate stories about recent new articles from The Guardian <cit.>, subject to one or more instructions such as word limits or incorporating specific elements in the response * Data Analysis: three tasks using recent datasets from Kaggle and Socrata, specifically, table reformatting (among JSON, JSONL, Markdown, CSV, TSV, and HTML), predicting which columns can be used to join two tables, and predicting the correct type annotation of a data column We evaluate dozens of models, including proprietary models as well as open-source models with sizes ranging from 0.5B to 8x22B. We release all questions, code, and model answers, and we welcome community engagement and collaboration. Our codebase is available at <https://github.com/livebench/livebench>, and our leaderboard is available at <https://livebench.ai>. § LIVEBENCH DESCRIPTION In this section, we introduce . It currently has six categories: math, coding, reasoning, data analysis, instruction following, and language comprehension. Categories are diverse with two to four tasks per problem. 
Each task either includes recent information sources (such as very recent news articles, movie synopses, or datasets) or is a more challenging, more diverse version of an existing benchmark task. Each task is designed to have roughly 50 questions which span a range of difficulty, from easy to very challenging, while loosely aiming for an overall 30-70% success rate on the top models for each task. Prompts are tailored for each category and task but typically include the following: zero-shot chain of thought <cit.>, asking the model to make its best guess if it does not know the answer, and asking the LLM to output its final answer in a way that is easy to parse, such as in **double asterisks**. In the following sections, we give a summary of each task from each category. See <ref> for additional details. §.§ Math Category Evaluating the mathematical abilities of LLMs has been one of the cornerstones of recent research in LLMs, featuring prominently in many releases and reports <cit.>. Our benchmark includes math questions of three types: questions from recent high school math competitions, fill-in-the-blank questions from recent proof-based USAMO and IMO problems, and questions from our new, harder version of the AMPS dataset <cit.>. Our first two math tasks, and , use expert human-designed math problems that offer a wide variety in terms of problem type and solution technique. First, we include questions from AMC12 2023, SMC 2023, and AIME 2024 in and from USAMO 2023 and IMO 2023 in . These are challenging and prestigious competitions for high school students in the USA (AMC, AIME, USAMO), in the UK (SMC), and internationally (IMO). The competitions test mathematical problem solving with arithmetic, algebra, counting, geometry, number theory, probability, and other secondary school math topics <cit.>. Finally, we release synthetically generated math questions in the task. This task is inspired by the math question generation used to create the MATH and AMPS datasets <cit.>. We generate harder questions by drawing random primitives, using a larger and more challenging distribution than AMPS across the 10 hardest tasks within AMPS. §.§ Coding Category The coding ability of LLMs is one of the most widely studied and sought-after skills for LLMs <cit.>. We include two coding tasks in : a modified version of the code generation task from LiveCodeBench (LCB) <cit.>, and a novel code completion task combining LCB problems with partial solutions collected from GitHub sources. The assesses a model's ability to parse a competition coding question statement and write a correct answer. We include 50 questions from LiveCodeBench <cit.> which has several tasks to assess the coding capabilities of large language models. The task specifically focuses on the ability of models to complete a partially correct solution—assessing whether a model can parse the question, identify the function of the existing code, and determine how to complete it. We use LeetCode medium and hard problems from LiveCodeBench's <cit.> April 2024 release, combined with matching solutions from <https://github.com/kamyu104/LeetCode-Solutions>, omitting the last 15% of each solution and asking the LLM to complete the solution. §.§ Reasoning Category The reasoning ability of large language models is another highly benchmarked and analyzed skill of LLMs <cit.>. In , we include two reasoning tasks: our harder version of a task from Big-Bench Hard <cit.>, and Zebra puzzles. 
The task is an advancement of the similarly named task included in Big-Bench <cit.> and Big-Bench Hard <cit.>. The task is to evaluate the truth value of a random Boolean function expressed as a natural-language word problem. Already by October 2022, LLMs achieved near 100% on this task, and furthermore, there are concerns that Big-Bench tasks leaked into the training data of LLMs such as GPT-4, despite using canary strings <cit.>. For , we create a new, significantly harder version by including additional deductive components and red herrings. Next, we include , a well-known reasoning task <cit.> that tests the ability of the model to follow a set of statements that set up constraints, and then logically deduce the requested information. We build on an existing repository for procedural generation of Zebra puzzles <cit.>; the repository allows for randomizing the number of people, the number of attributes, and the set of constraint statements provided. Below, we provide an example question from the task. 15pt [breakable, colback=white, colbacktitle=blue!5!white, colframe=black, boxrule=1pt, title=An example question from the task.] There are 3 people standing in a line numbered 1 through 3 in a left to right order. Each person has a set of attributes: Food, Nationality, Hobby. The attributes have the following possible values: - Food: nectarine, garlic, cucumber - Nationality: chinese, japanese, thai - Hobby: magic-tricks, filmmaking, puzzles and exactly one person in the line has a given value for an attribute. Given the following premises about the line of people: - the person that likes garlic is on the far left - the person who is thai is somewhere to the right of the person who likes magic-tricks - the person who is chinese is somewhere between the person that likes cucumber and the person who likes puzzles Answer the following question: What is the hobby of the person who is thai? Return your answer as a single word, in the following format: **X**, where X is the answer. 26pt §.§ Data Analysis Category LiveBench includes three practical tasks in which the LLM assists in data analysis or data science: column type annotation, table join prediction, and table reformatting. Each question makes use of a recent dataset from Kaggle or Socrata. The first task is to predict the type of a column of a data table. To create a question for the column type annotation task (), we randomly sample a table and randomly sample a column from that table. We use the actual name of that column as the ground truth and then retrieve some samples from that column. We provide the name of all the columns from that table and ask the LLM to select the true column name from those options. Data analysts often also require a table to be reformatted from one type to another, e.g., from some flavor of JSON to CSV or from XML to TSV. We emulate that task in by providing a table in one format and asking the LLM to reformat it into the target format. Finally, another common application of LLMs in data analysis is performing table joins. In the task, the LLM is presented with two tables with partially overlapping sets of columns. The LLM is tasked with creating a valid join mapping from the first to the second table. §.§ Instruction Following Category An important ability of an LLM is its capability to follow instructions. 
To this end, we include instruction-following questions in our benchmark, inspired by IFEval <cit.>, which is an instruction-following evaluation for LLMs containing verifiable instructions such as “write more than 300 words” or “Finish your response with this exact phrase: {end_phrase}.” While IFEval used a list of 25 verifiable instructions, we use a subset of 16 that excludes instructions that do not reflect real-world use-cases. See Appendix <ref>. Furthermore, in contrast to IFEval, which presents only the task and instructions with a simple prompt like “write a travel blog about Japan”, we provide the models with an article from The Guardian <cit.>, asking the models to adhere to multiple randomly-drawn instructions while asking the model to complete one of four tasks related to the article: , , , and . We score tasks purely by their adherence to the instructions. §.§ Language Comprehension Category Finally, we include multiple language comprehension tasks. These tasks assess the language model's ability to reason about language itself by, (1) completing word puzzles, (2) fixing misspellings but leaving other stylistic changes in place, and (3) reordering scrambled plots of unknown movies. First, we include the category. Connections is a word puzzle popularized by the New York Times (although similar ideas have existed previously). In this task, we present questions of varying levels of difficulty with 8, 12, and 16-word varieties. The objective of the game is to sort the words into sets of four words, such that each set has a `connection' between them, e.g., types of fruits, homophones, or words that come after the word `fire'. Due to the variety of possible connection types, this task is challenging for LLMs, as shown by prior work that tested the task on the GPT family of models <cit.>. Next, we include the task. The idea behind this task is inspired by the common use case for LLMs in which a user asks the LLM to identify typos and misspellings in some written text but to leave other aspects of the text unchanged. It is common for the LLM to impose its own writing style onto that of the input text, such as switching from British to US spellings or adding the serial comma, which may not be desirable. We create the questions for this task from recent ArXiv abstracts, which we ensure originally have no typos, by programmatically injecting common human typos into the text. Below is an example question from the task. 15pt [breakable, colback=white, colbacktitle=blue!5!white, colframe=black, boxrule=1pt, title=An example question from the task.] Please output this exact text, with no changes at all except for fixing the misspellings. Please leave all other stylistic decisions like commas and US vs British spellings as in the original text. We inctroduce a Bayesian estimation approach forther passive localization of an accoustic source in shallow water using a single mobile receiver. The proposed probablistic focalization method estimates the timne-varying source location inther presense of measurement-origin uncertainty. In particular, probabilistic data assocation is performed to match tiome-differences-of-arival (TDOA) observations extracted from the acoustic signal to TDOA predicitons provded by the statistical modle. The performence of our approach is evaluated useing rela acoustic data recorded by a single mobile reciever. 26pt Finally, we include the task, which takes the plot synopses of recently-released movies from IMDb or Wikipedia. 
We randomly shuffle the synopses sentences and then ask the LLM to simply reorder the sentences into the original plot. We find that this task is very challenging for LLMs, as it measures their abilities to reason through plausible sequences of events. § EXPERIMENTS In this section, first we describe our experimental setup and present full results for LLMs on all tasks of . Next, we give an empirical comparison of to existing prominent LLM benchmarks, and finally, we present ablation studies. Experimental setup. Our experiments include LLMs total, with a mix of top proprietary models, large open-source models, and small open-source models. In particular, for proprietary models, we include six GPT models: , , , , , <cit.>; four Anthropic models: , , , <cit.>; two Mistral models: , <cit.>; and two Google models: and <cit.>. For large open-source models, we include , <cit.>, , , , <cit.>, <cit.>, , <cit.>, , <cit.>, , , and <cit.>. For small open-source models, we include <cit.>, <cit.>, <cit.>, , , , <cit.>, , , , , , , <cit.>, <cit.>, <cit.>, , <cit.>, <cit.>, , and <cit.>. For all models and tasks, we perform single-turn evaluation with temperature 0. All models run with their respective templates from FastChat <cit.>. We run all open-source models with . For each question, a model receives a score from 0 to 1. For each model, we compute the score on each task as the average of all questions, we compute the score on each of the six categories as the average of all their tasks, and we compute the final score as the average of all six categories. §.§ Discussion of Results We compare all models on according to the experimental setup described above; see <ref> and <ref>. We find that performs the best across all six categories and performs the best overall, 6% better than all other models. substantially outperforms all other models in the reasoning and coding categories. The next-best model is , with and not far behind. Although is third overall, it is on-par or better than across all categories except language. The best-performing open-source model is , seventh on the leaderboard and ahead of , due to its impressive coding, math, and reasoning performance. The best-performing 8B or smaller open-source model is , whose performance in reasoning especially is far ahead of other 7B models. §.§ Comparison to Other LLM Benchmarks Next, we compare to two prominent benchmarks, ChatBot Arena <cit.> and Arena-Hard <cit.>. In <ref>, we show a bar plot comparison among models that are common to both benchmarks, and in <ref>, we compare the performance of these models to a best-fit line. We also compute the correlation coefficient of model scores among the benchmarks: has a 0.91 and 0.88 correlation with ChatBot Arena and Arena-Hard, respectively. Based on the plots and the correlation coefficients, we see that there are generally similar trends to , yet some models are noticeably stronger on one benchmark vs. the other. For example, and perform substantially better on Arena-Hard compared to , likely due to the known bias from using itself as the LLM judge <cit.>. We hypothesize that the strong performance of some models such as the models and on ChatBot Arena compared to may be due to having an output style that is preferred by humans. These observations emphasize the benefit of using ground-truth judging, which is immune to biases based on the style of the output. 
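As a concrete illustration of the scoring hierarchy described above (question → task → category → overall), the following short sketch shows the aggregation; the nested-dictionary layout and function name are hypothetical and are not taken from the released evaluation code.

```python
from statistics import mean

def aggregate_scores(per_question_scores):
    """Aggregate per-question scores (each in [0, 1]) into task, category, and overall scores.

    per_question_scores: {category: {task: [score_1, score_2, ...]}}
    """
    task_scores = {
        cat: {task: mean(scores) for task, scores in tasks.items()}
        for cat, tasks in per_question_scores.items()
    }
    category_scores = {cat: mean(ts.values()) for cat, ts in task_scores.items()}
    overall_score = mean(category_scores.values())
    return task_scores, category_scores, overall_score
```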
§.§ Comparison between Ground-Truth and LLM-Judging In this section, we run an ablation study to compare the result of ground-truth judging with LLM judging, by taking three math sub-tasks and one reasoning task and scoring them by either matching with the ground-truth answer or by asking an LLM judge to score the answer as either correct or incorrect. We use a judge prompt based on the MT-Bench judge prompt (see <ref> for details), and we use as the judge. We judge the model outputs of both and in <ref>. We find that the error rate for all tasks is far above a reasonable value, indicating that LLM judges are not appropriate for challenging math and logic tasks. Interestingly, the lowest error rates are on AIME 2024, which is also the task with the lowest overall success rate according to ground-truth judgment. § RELATED WORK We describe the most prominent LLM benchmarks and the ones that are most related to our work. For a comprehensive survey, see <cit.>. The Huggingface Open LLM Leaderboard <cit.> is a widely-used benchmark suite that consists of six static datasets: ARC <cit.>, GSM8K <cit.>, HellaSwag <cit.>, MMLU <cit.>, TruthfulQA <cit.>, and Winogrande <cit.>. While this has been incredibly useful in tracking the performance of LLMs, its static nature has left it prone to test set contamination by models. LLMs-as-a-judge. AlpacaEval <cit.>, MT-Bench <cit.>, and Arena-Hard <cit.> are benchmarks that employ LLM judges on a fixed set of questions. Using an LLM-as-a-judge is fast and relatively cheap. Furthermore, this strategy has the flexibility of being able to evaluate open-ended questions, instruction-following questions, and chatbots. However, LLM judging also has downsides. First, LLMs have biases towards their own answers <cit.>. In addition to favoring their own answers <cit.>, GPT-4 judges have a noticeable difference in terms of variance and favorability of other models compared to Claude judges. Additionally, LLMs make errors. As one concrete example, question 2 in Arena-Hard asks a model to write a C++ program to compute whether a given string can be converted to `abc' by swapping two letters. GPT-4 incorrectly judges 's solution as incorrect <cit.>. Humans-as-a-judge. ChatBot Arena <cit.> leverages human prompting and feedback on a large scale. Users ask questions and receive outputs of two randomly selected models and have to pick which output they prefer. This preference feedback is aggregated into Elo scores for the different models. While human evaluation is great for capturing the preferences of a crowd, using a human-as-a-judge has many disadvantages. First, human-judging can be quite labor-intensive, especially for certain tasks included in such as complex math, coding, or long-context reasoning problems. Whenever humans are involved in annotation (of which judging is a sub-case), design choices or factors can cause high error rates <cit.>, and even in well-designed human-annotation setups, high variability from human to human leads to unpredictable outcomes <cit.>. Other benchmarks Perhaps the most-related benchmark to ours is LiveCodeBench <cit.>, which also regularly releases new questions and makes use of ground-truth judging. However, it is limited to only coding tasks. Concurrent work, the SEAL Benchmark <cit.>, uses private questions with expert human scorers, however, the benchmark currently only contains the following categories: Math, Coding, Instruction Following, and Spanish. 
In <cit.>, the authors modify the original MATH dataset <cit.> by changing numbers in the problem setup. They find drastic declines in model performance for all LLMs including the frontier ones. However, while such work can evaluate LLMs on data that is not in the pretraining set, the data still ends up being highly similar to the kind of data likely seen in the pretraining set. In addition, the hardness of the benchmark remains the same over time. Finally, we discuss benchmarks that were the basis for tasks in . In IFEval <cit.>, the authors assess how good LLMs are at following instructions by adding one or more constraints in the instruction as to what the output should be. They limit the set of constraints to those in which it can provably be verified that the generation followed the constraints. In Big-Bench <cit.>, a large number of tasks are aggregated into a single benchmark with the aim of being as comprehensive as possible. Big-Bench-Hard <cit.> investigates a subset of Big-Bench tasks that were particularly challenging for contemporaneous models as well as more complex prompting strategies for solving them. § CONCLUSIONS, LIMITATIONS, AND FUTURE WORK In this work, we introduced LiveBench, an LLM benchmark designed to mitigate both test set contamination and the pitfalls of LLM judging and human crowdsourcing. LiveBench is the first benchmark that (1) contains frequently updated questions from new information sources, in which questions become harder over time, (2) scores answers automatically according to objective ground-truth values, without the use of LLM judges, and (3) contains a wide variety of challenging tasks, spanning math, coding, reasoning, language, instruction following, and data analysis. LiveBench contains questions that are based on recently released math competitions, arXiv papers, and datasets, and it contains harder, `contamination-proof' versions of previously released benchmarks. We released all questions, code, and model answers, and questions will be added and updated on a monthly basis. We welcome community collaboration for expanding the benchmark tasks and models. Limitations and Future Work. While we attempted to make as diverse as possible, there are still additions from which it would benefit. For example, we hope to add non-English language tasks in the future. Furthermore, while ground truth scoring is beneficial in many ways, it still cannot be used for certain use cases, such as `write an email to my boss', or `write a travel guide to Hawaii' in which it is hard to define a ground truth. Finally, while we attempted to make all tasks and categories fair for all models, there are still biases due to certain LLM families favoring certain prompt types. We plan to update the prompts (at the start and end of each question) in the future, as new prompt strategies are developed. Similarly, we plan to update the leaderboard as new LLMs are released. plain § ADDITIONAL DETAILS ABOUT LIVEBENCH EXPERIMENTS In this section, we detail further descriptions about the benchmark itself and our experiments. For example, we include further depictions of the comparisons of to ChatBot Arena and Arena-Hard in <ref>. We display the full results table for in <ref>. We display the list of all verifiable instructions in <ref>. Other details are below. §.§ Additional Experimental Results Discussion Instruction Following Results Discussion To better understand why Opus achieves a higher ranking than GPT-4o in LiveBench, we manually examined some samples. 
We noticed that certain instructions were somewhat unusual and could be challenging for aligned models to perform correctly. For instance, here is a case where Claude-3-Opus succeeds while GPT-4o utterly fails: 15pt [colback=blue!5!white, colframe=blue!5!white, left=4pt, right=4pt, top=0pt, bottom=0pt] Question: {{Prompt}} In your response, the letter i should appear less than 1 times. Claude-3-Opus: <<The Strateg<<c>> Battleground of D<<g>>tal Campa<<gn>>ng>>\n\nAs the UK gears up for the upcom<<ng>> general elect<<on>>, pol<<t>>cal part<<es>> are d<<v>>ng headf<<rst>> <<nto>> the realm of d<<g>>tal campa<<gn>>ng. GPT-4o: <<The Impact of Digital Advertising on Election Timing>>\n\nWhy schedule an election for November? This query was posed by digital marketing expert Mike Harris in a message to his friend, Labour\u2019s campaign manager, Morgan McSweeney, earlier this year. 26pt Additionally, we examined the aggregate scores for various constraints, such as letter frequency, word frequency, word existence, and the number of placeholders. We discovered that while Claude-3-Opus could follow these types of instructions, GPT-4o could not - see <ref>. We also analyze the performance of Claude-3-Opus and GPT-4o on the original IFeval benchmark (<Ref>). We consider both strict accuracies, which require exact matching, and loose accuracies, where the evaluator reduces false negatives by giving partial points for partial matches (see <cit.>). While the performance is similar in the strict setting, Claude-3-Opus exhibits a higher upper bound in the loose accuracy setting. Moreover, our hypothesis about the larger performance gap observed in LiveBench compared to IFeval is that LiveBench is more challenging. In LiveBench, the models must understand the document, the task, and the instructions, which can involve complex compositions. In contrast, the original IFeval presents only the task and instructions. The increased difficulty in LiveBench comes from the fact that instead of a simple prompt like "write a blog about Japan," it requires summarizing or analyzing a Guardian article, necessitating comprehension of the text. This increased difficulty in parsing and understanding the context likely contributes to the larger performance gap observed in LiveBench. §.§ Details from Ablation Studies In this section, we give more details from <ref>. Recall that in <ref>, we ran an ablation study by taking three math sub-tasks and one reasoning task, and scoring them by either matching with the ground truth answer, or by asking an LLM judge to score the answer as either correct or incorrect. We used a judge prompt similar to MT-Bench, which we duplicate below. Furthermore, to complement <ref>, we give the model performance scores for ground-truth and LLM judging for the respective models and tasks, in <ref>. [Instruction] Please act as an impartial judge and evaluate the quality of the response provided by an AI assistant to the user question displayed below. Your evaluation should consider correctness alone. Identify and correct any mistakes. Be as objective as possible. After providing your explanation, you must rate the response on a scale of 1 to 10 by strictly following this format: "[[rating]]", for example: "Rating: [[5]]". [Question] question [The Start of Assistant's Answer] answer [The End of Assistant's Answer] §.§ Detailed Description of LiveBench Categories In this section, we describe the categories and tasks of in more detail. 
§.§.§ Math Category Evaluating the mathematical abilities of LLMs has been one of the cornerstones of recent research in LLMs, featuring prominently in many releases and reports <cit.>. Our benchmark includes math questions of three types: questions from recent high school math competitions, fill-in-the-blank questions from recent proof-based USAMO and IMO problems, and questions from our new, harder version of the AMPS dataset <cit.>. Math competitions. Our first math category uses expert human-designed math problems that offer a wider variety in terms of problem type and solution technique. We focus on high school math competition questions from English-speaking countries: AMC12, AIME, SMC, and USAMO, and also IMO, the international competition. First, we include the American Mathematics Competition 12 (AMC12), both AMC12A and AMC12B 2023, released on November 8, 2023 and November 14, 2023, respectively, and the Senior Mathematical Challenge (SMC) 2023, released on October 3, 2023. All three are challenging multiple-choice competitions for high school students in the USA (AMC) and UK (SMC) that build in difficulty, meant as the first step for high school students to qualify for their country's team for the International Mathematical Olympiad (IMO). The questions test mathematical problem solving with arithmetic, algebra, counting, geometry, number theory, probability, and other secondary school math topics <cit.>. An example of a problem of this type from the AMC12A 2023 problem set is below: [breakable, colback=white, colbacktitle=blue!5!white, colframe=black, boxrule=1pt, title=An example question from the Math Competitions task. How many complex numbers satisfy the equation z^5=z̅, where z̅ is the conjugate of the complex number z? (A) 2 (B) 3 (C) 5 (D) 6 (E) 7 If you cannot determine the correct multiple-choice answer, take your best guess. Once you have your answer, please duplicate that letter five times in a single string. For example, if the answer is F, then write FFFFF. ] Ground Truth: EEEEE 26pt Next, we include the American Invitational Mathematics Examination (AIME), both AIME I and AIME II 2024, released on January 31, 2024 and February 7, 2024, respectively. These are prestigious and challenging tests given to those who rank in the top 5% of the AMC. Each question's answer is an integer from 000 to 999. An example of a problem of this type from the AIME I 2024 problem set is below: [breakable, colback=white, colbacktitle=blue!5!white, colframe=black, boxrule=1pt, title=An example question from the Math Competitions task. Real numbers x and y with x,y>1 satisfy log_x(y^x)=log_y(x^4y)=10. What is the value of xy? Please think step by step, and then display the answer at the very end of your response. The answer is an integer consisting of exactly 3 digits (including leading zeros), ranging from 000 to 999, inclusive. For example, the answer might be 068 or 972. If you cannot determine the correct answer, take your best guess. Remember to have the three digits as the last part of the response. ] Ground Truth: 025 Proof-based questions. Lastly, we consider the USA Math Olympiad (USAMO) 2023 and International Math Olympiad (IMO) 2023 competitions, released on March 21, 2023 and July 2, 2023, respectively. These contests are primarily proof-based and non-trivial to evaluate in an automated way. One possibility is to use LLMs to evaluate the correctness of the natural language proof. However, we then have no formal guarantees on the correctness of the evaluation.
Another possibility is to auto-formalize the proofs into a formal language such as Lean and then run a proof checker. However, while there have been notable recent improvements in auto-formalization, such a process still does not have formal guarantees on the correctness of the auto-formalization – and thus that of the evaluation. To tackle this, we formulate a novel task which can test the ability of an LLM in the context of proofs. Specifically, for a proof, we mask out a subset of the formulae in the proof. We then present the masked out formulae in a scrambled order to the LLM and ask it to reinsert the formulae in the correct positions. Such a task tests the mathematical, deductive, and instruction following abilities of the LLM. In particular, if the LLM is strong enough to generate the correct proof for a question, then one would expect it to also solve the far easier task of completing a proof which has some missing formulae – especially if the formulae are already given to it in a scrambled order. Note that this also allows us to easily control the level of difficulty of the question by changing the number of formulae that we mask. We generate 3 hardness variants for each problem, masking out 10%, 50% and 80% of the equations in the proof. We evaluate by computing the edit distance between the ground truth ranking order and the model predicted ranking order. [NB : in preliminary testing we also evaluated using the accuracy metric and the model rankings remained nearly the same]. Models perform worse on IMO compared to USAMO, in line with expectations. We also looked at the performance as separated by question hardness. The scores are greatly affected by question hardness going from as high as 96.8 for the easiest questions (10% masked out, GPT-4o) to as low as 36 for the hardest (80% masked out). The full results are in <ref> and <ref>. An example of a problem of this type from the IMO problem set can be found https://huggingface.co/datasets/livebench/math/viewer/default/test?f Synthetically generated math questions. Finally, we release synthetic generated math questions. This technique is inspired from math question generation used to create the MATH and AMPS datasets <cit.>. In particular, we randomly generate a math problem of one of several types, such as taking the derivative or integral of a function, completing the square, or factoring a polynomial. We generate questions by drawing random primitives, using a larger (and therefore more challenging) distribution than AMPS. Note that, for problem types such as integration, this simple technique of drawing a random function and taking its derivative results in a wide variety of integration problems of varying difficulty. For example, problem solutions may involve applying the chain rule, the product/quotient rule, trigonometric identities, or use a change of variables. In order to extract the answer, we ask the model to use the same `latex boxed answer' technique as in the MATH dataset <cit.>. We judge the correctness of answers as in the EleutherAI Eval Harness <cit.> using Sympy <cit.> where we check for semantic as well as numerical equivalence of mathematical expressions. An example of a integral problem is as follows: [breakable, colback=white, colbacktitle=blue!5!white, colframe=black, boxrule=1pt, title=An example question from the AMPS Hard task. Find an indefinite integral (which can vary by a constant) of the following function: 5 ^2(5 x+1)-8 sin (7-8 x). Please put your final answer in a boxed{}. 
] Ground Truth: -sin (7) sin (8 x)-cos (7) cos (8 x)+tan (5 x+1) §.§.§ Coding Category The coding ability of LLMs is one of the most widely studied and sought-after skills for LLMs <cit.>. We include two coding tasks in : a modified version of the code generation task from LiveCodeBench <cit.>, and a novel code completion task combining LiveCodeBench problems with partial solutions collected from GitHub sources. Examples of questions from the Coding tasks can be found https://huggingface.co/datasets/livebench/codinghere. Code generation. In the task, we assess a model's ability to parse a competition coding question statement and write a correct answer. LiveCodeBench <cit.> included several tasks to assess the coding capabilities of large language models. We have taken 50 randomly selected problems from the April 2024 release of LiveCodeBench, selecting only problems released in or after November 2023. The problems are competition programming problems from LeetCode <cit.> and AtCoder<cit.>, defined with a textual description and solved by writing full programs in Python 3 code. These problems are presented as in LiveCodeBench's Code Generation task, with minor prompting differences and with only one chance at generating a correct solution per question, per model. We report pass@1, a metric which describes the proportion of questions that a given model solved completely (a solution is considered correct if and only if it passes all public and private test cases). Code completion. In this task, we assess the ability of the model to successfully complete a partially provided solution to a competition coding question statement. The setup is similar to the Code Generation task above, but a partial (correct) solution is provided in the prompt and the model is instructed to complete it to solve the question. We use LeetCode medium and hard problems from LiveCodeBench's <cit.> April 2024 release, combined with matching solutions from <https://github.com/kamyu104/LeetCode-Solutions>, omitting the last 15% of each solution and asking the LLM to complete the solution. As with Code Generation, we report pass@1. §.§.§ Reasoning Category The reasoning abilities of large language models is another highly-benchmarked and analyzed skill of LLMs <cit.>. In , we include two reasoning tasks: a harder version of a task from Big-Bench Hard <cit.>, and Zebra puzzles. Web of lies v2. Web of Lies is a task included in Big-Bench <cit.> and Big-Bench Hard <cit.>. The task is to evaluate the truth value of a random Boolean function expressed as a natural-language word problem. In particular, the LLM must evaluate f_n(f_n-1(...f_1(x)...)), where each f_i is either negation or identity, and x is True or False. We represent x by the sentence: X_0 {tells the truth, lies}, and we represent f_i by a sentence: X_i says X_i-1 {tells the truth, lies}. The sentences can be presented in a random order for increased difficulty. For example, a simple n=2 version is as follows: `Ka says Yoland tells the truth. Yoland lies. Does Ka tell the truth?' Already by October 2022, LLMs achieved near 100% on this task, and furthermore, there are concerns that Big-Bench tasks leaked into the training data of GPT-4, despite using canary strings <cit.>. For , we create a new, significantly harder version of Web of Lies. We make the task harder with a few additions: (1) adding different types of red herrings, (2) asking for the truth values of three people, instead of just one person, and (3) adding a simple additional deductive component. 
For (1), we maintain a list of red herring names, so that the red herrings do not affect the logic of the answer while still potentially leading LLMs astray. For example, `Fred says Kayla lies,' where Fred is in the true `web of lies', while Kayla may lead to a series of steps ending in a dead end. Overall, the number of total red herring sentences is drawn from a uniform distribution ranging from 0 to 19. For (3), we simply assign each name to a location and give sentences of the form `Devika is at the museum. The person at the museum says the person at the ice skating rink lies.' We find that this makes the task significantly harder for leading LLMs, even without shuffling the sentences into a random order. [breakable, colback=white, colbacktitle=blue!5!white, colframe=black, boxrule=1pt, title=An example question from the Web of Lies v2 task. In this question, assume each person either always tells the truth or always lies. Tala is at the movie theater. The person at the restaurant says the person at the aquarium lies. Ayaan is at the aquarium. Ryan is at the botanical garden. The person at the park says the person at the art gallery lies. The person at the museum tells the truth. Zara is at the museum. Jake is at the art gallery. The person at the art gallery says the person at the theater lies. Beatriz is at the park. The person at the movie theater says the person at the train station lies. Nadia is at the campground. The person at the campground says the person at the art gallery tells the truth. The person at the theater lies. The person at the amusement park says the person at the aquarium tells the truth. Grace is at the restaurant. The person at the aquarium thinks their friend is lying. Nia is at the theater. Kehinde is at the train station. The person at the theater thinks their friend is lying. The person at the botanical garden says the person at the train station tells the truth. The person at the aquarium says the person at the campground tells the truth. The person at the aquarium saw a firetruck. The person at the train station says the person at the amusement park lies. Mateo is at the amusement park. Does the person at the train station tell the truth? Does the person at the amusement park tell the truth? Does the person at the aquarium tell the truth? Think step by step, and then put your answer in **bold** as a list of three words, yes or no (for example, **yes, no, yes**). If you don't know, guess. ] Ground Truth: no, yes, yes Zebra puzzles. The second reasoning task we include is Zebra puzzles. Zebra puzzles, also called Einstein's riddles or Einstein's puzzles, are a well-known <cit.> reasoning task that tests the ability of the model to follow a set of statements that set up constraints, and then logically deduce the requested information. The following is an example with three people and three attributes: [breakable, colback=white, colbacktitle=blue!5!white, colframe=black, boxrule=1pt, title=An example question from the Zebra Puzzle task. There are 3 people standing in a line numbered 1 through 3 in a left to right order. Each person has a set of attributes: Food, Nationality, Hobby. The attributes have the following possible values: * Food: nectarine, garlic, cucumber * Nationality: chinese, japanese, thai * Hobby: magic-tricks, filmmaking, puzzles and exactly one person in the line has a given value for an attribute. 
Given the following premises about the line of people: * the person that likes garlic is on the far left * the person who is thai is somewhere to the right of the person who likes magic-tricks * the person who is chinese is somewhere between the person that likes cucumber and the person who likes puzzles Answer the following question: What is the hobby of the person who is thai? Return your answer as a single word, in the following format: ***X***, where X is the answer.] Ground Truth: filmmaking We build on an existing repository for procedural generation of Zebra puzzles <cit.>; the repository allows for randomizing the number of people, the number of attributes, and the set of constraint statements provided. For the attribute randomization, they are drawn from a set of 10 possible categories (such as Nationality, Food, Transport, Sport) and for each of these categories there are between 15 and 40 possible values to be taken. For the constraint statements, the implementation allows for up to 20 `levels' of constraint in ascending order of intended difficulty. For example, level 1 could include a statement such as `The person who likes garlic is on the left of the person who plays badminton' and a level 10 statement could be `The person that watches zombie movies likes apples or the person that watches zombie movies likes drawing, but not both'. Higher levels also include lower level statements in their possible set of statements to draw from, but this set narrows progressively as the level increases from 12 to 20 by removing the possibility of having lower-level statements (starting with removing level 1, then removing level 2, etc). The repository also includes a solver for the puzzles, which we use to ensure there is a (unique) solution to all of our generated puzzles. Our modifications to the original repository primarily target the reduction of ambiguity in the statements (e.g. changing `X is to the left of Y' to `X is to the immediate left of Y'). For generation, we pick either 3 or 4 people with 50% probability, either 3 or 4 attributes with 50% probability, and we draw the levels from the integer interval [10, 20] with uniform probability. In preliminary testing, we found that larger puzzles proved exceedingly difficult for even the top performing LLMs. §.§.§ Data Analysis LiveBench includes three practical tasks in which the LLM assists in data analysis or data science: column type annotation, table join prediction, and table reformatting. Each question makes use of a recent dataset from Kaggle or Socrata. Owing to the limited output context lengths of the current generation of LLMs and the comparatively high per-token costs of generating responses, we upper bound the size of our tables with respect to cell length, column count and row count. Even with these limitations, we find that our tasks remain sufficiently challenging even for the current state-of-the-art models. Example questions from the Data Analysis category can be lengthy, so examples can be viewed https://huggingface.co/datasets/livebench/data_analysis/viewer/default/test?row=0here. Column type annotation. Consider a table A with t columns and r rows. We denote each column C ∈ A as a function which maps row indices to strings; i.e., for 0 ≤ i < t, we have C_i : ℕ→Σ_*, where i is the column index. Let L ⊆Σ_* denote a label set; these are our column types to be annotated. Standard CTA assumes a fixed cardinality for this label set, indexed by a variable we call j. 
Given the above definitions, we define single-label CTA ⊂ A × L as a relation between columns and labels: ∀ C_i ∈ A, ∃ l_j ∈ L such that (C_i, l_j) ∈ CTA. We seek a generative method M : Σ_* → Σ_* that comes closest to satisfying the following properties: M(σ, L) ∈ L and, for every column C_i ∈ A with sampled strings σ drawn from C_i, (C_i, M(σ, L)) ∈ CTA. For further details on the task, please refer to <cit.>. Implementation details. For each benchmark instance, we retrieve a random A from our available pool of recent tables. We randomly and uniformly sample C from A, use the actual column name of C as our CTA ground-truth label, and retrieve σ_1, …, σ_5 column samples from C, with replacement, providing them as context for the LLM. Metrics. We report Accuracy @ 1 over all instances, accepting only case-insensitive exact string matches as correct answers. Table reformatting. Given a table A rendered according to a plaintext-readable and valid schema for storing tabular information a_s, we instruct the LLM to output the same table with the contents unchanged but the schema modified to a distinct plaintext-readable valid schema b_s. Implementation details. We use the popular library Pandas to perform all of our conversions to and from text strings. We allow the following formats for both input and output: "JSON", "JSONL", "Markdown", "CSV", "TSV", "HTML". As tabular conversion from JSON to Pandas is not standardized, we accept several variations. At inference time, we ingest the LLM response table directly into Pandas. Metrics. We report Accuracy @ 1 over all instances. An instance is accepted only if it passes all tests (we compare column count, row count, and exact match on row contents for each instance). Join-column prediction. Given two tables A and B, with columns a_1, … and b_1, … respectively, the join-column prediction task is to suggest a pair (a_k, b_l) of columns such that the equality condition a_k=b_l can be used to join the two tables in a way that matches with the provided ground-truth mapping M : A → B. The mapping is usually partial and injective: not every column in B is mapped from A, not every column in A is mapped to B. For further details, please refer to <cit.>. Implementation details. We randomly sample columns with replacement from our entire collection of tables, generating a fixed column pool C. We retain half the rows of A to provide as context to the LLM. The remaining rows are used to generate a new table B. For each instance, we randomly sample columns from both the target table and the column pool and join them to B. We anonymize the column names in B, then pass both A and B to the LLM and ask it to return a valid join mapping M. Metrics. We report the F1 score over columns, with TPs scored as exact matches between ground truth and the LLM output, FPs scored as extraneous mappings, FNs scored as missing mappings, and incorrect mappings counting as FP + FN. §.§.§ Instruction Following An important ability of an LLM is its capability to follow instructions. To this end, we include instruction following questions in our benchmark, inspired by IFEval <cit.>. Generating live prompts and instructions. IFEval, or instruction-following evaluation for LLMs, contains verifiable instructions such as “write more than 300 words” or “Finish your response with this exact phrase: {end_phrase}.” These instructions are then appended to prompts like “write a short blog about a visit to Japan”. We use this modular nature between the prompt and instruction to construct live prompts.
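As a rough illustration of this modular construction, the sketch below appends verifiable instructions to a task prompt and then checks a response against them programmatically. The instruction templates, names, and check functions are simplified stand-ins written for this example; they are not the IFEval or LiveBench implementation.

```python
# Illustrative sketch: verifiable instructions are rendered from templates, appended
# to a task prompt, and later checked against the model's response. All names and
# templates here are hypothetical stand-ins.
INSTRUCTIONS = {
    "min_words": {
        "template": "Your response should contain at least {n} words.",
        "check": lambda text, n: len(text.split()) >= n,
    },
    "end_phrase": {
        "template": 'Finish your response with this exact phrase: "{phrase}"',
        "check": lambda text, phrase: text.rstrip().endswith(phrase),
    },
}

def compose_prompt(task_prompt: str, drawn: list[tuple[str, dict]]) -> str:
    """Append the rendered instruction templates to the task prompt."""
    extra = " ".join(INSTRUCTIONS[name]["template"].format(**kw) for name, kw in drawn)
    return f"{task_prompt} {extra}"

def instruction_level_accuracy(response: str, drawn: list[tuple[str, dict]]) -> float:
    """Fraction of the drawn instructions that the response satisfies."""
    results = [INSTRUCTIONS[name]["check"](response, *kw.values()) for name, kw in drawn]
    return sum(results) / len(results)

drawn = [("min_words", {"n": 300}), ("end_phrase", {"phrase": "Any other questions?"})]
prompt = compose_prompt("Summarize the article above in your own words.", drawn)
```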
For our live source, we considered news articles from The Guardian; we are able to obtain 200 articles using their API[<https://open-platform.theguardian.com/>]. Using the first n sentences article text as the source text, we consider four different tasks using the text: paraphrase, summarize, simplify, and story generation. The exact prompts can be seen in <ref>. For the instructions, we use the code provided by <cit.>, making a few modifications such as increasing the max number of keywords from two to five. Additionally, we compose different instructions together by sampling from a uniform distribution from 2 to 5. However, since the instructions can be conflicting, we deconflict the instructions. This results in approximate normal distribution of the number of instructions per example with the majority of the containing two or three instructions. A full list of the instructions can be found in Appendix <ref>. To construct, the full prompt, containing the news article sentences, the prompt, and the instructions, we use the following meta prompt: “The following are the beginning sentences of a news article from the Guardian.\n——-\n{guardian article}\n——-\n{subtask prompt} {instructions}”. Scoring. To evaluate the model's performance on instruction following, we use a scoring method that considers two key factors: whether all instructions were correctly followed for a given prompt, i.e. Prompt-level accuracy, and what fraction of the individual instructions were properly handled, i.e. Instruction-level accuracy. The first component of the score checks if the model successfully followed every instruction in the prompt and assigns 1 or 0 if it missed any of the instructions. The second component looks at each individual instruction and checks whether it was properly followed or not. The final score is the average of these two components, scaled to lie between 0 and 1. A score of 1 represents perfect adherence to all instructions, while lower scores indicate varying degrees of failure in following the given instructions accurately. Example questions from the Instruction Following category can be lengthy, so examples can be viewed https://huggingface.co/datasets/livebench/instruction_following/viewer/default/test?row=0here. §.§.§ Language Comprehension Finally, we include multiple language comprehension tasks. These tasks assess the language model's ability to reason about language itself by, (1) completing word puzzles, (2) fixing misspellings while leaving other stylistic changes in place, and (3) reordering scrambled plots of unknown movies. Connections. First we include the `Connections' category[See <https://www.nytimes.com/games/connections>.]. Connections is a word puzzle category introduced by the New York Times (although similar ideas have existed previously). Sixteen words are provided in a random order; the objective of the game is to sort these into four sets of four words, such that each set has a `connection' between them. Such connections could include the words belonging to a related category, e.g., `kiwi, grape, pear, peach' (types of fruits); the words being anagrams, the words being homophones, or being words that finish a certain context, e.g., `ant, drill, island, opal' being words that come after the word `fire' to make a phrase. 
Due to the variety of possible connection types that can exist, the wider knowledge required to understand some connections, as well as some words potentially being `red herrings' for connections, this task is challenging for LLMs – prior work <cit.> has comprehensively tested the task on the GPT family of models, as well as on sentence embedding models derived from, e.g., BERT <cit.> and RoBERTa <cit.>. The authors found that GPT-4 has an overall completion rate below 40% on the puzzles (when allowed multiple tries to get it correct), concluding that `large language models in the GPT family are able to solve these puzzles with moderate reliability, indicating that the task is possible but remains a formidable challenge.' In our work, we assess the single-turn performance and test performance on a much larger set of models. The original task provided for a number of `retry' attempts in the event of an incorrect submission for a category. To fit into the framework of our benchmark we take the model's answer from a single turn; to ameliorate the increased difficulty of this setting, we use fewer words/groups for some questions. The split we use is 15 questions of eight words, 15 questions of twelve words and 20 questions of sixteen words. An example prompt is as follows: [breakable, colback=white, colbacktitle=blue!5!white, colframe=black, boxrule=1pt, title=An example question from the task. You are given 8 words/phrases below. Find two groups of four items that share something in common. Here are a few examples of groups: bass, flounder, salmon, trout (all four are fish); ant, drill, island, opal (all four are two-word phrases that start with 'fire'); are, why, bee, queue (all four are homophones of letters); sea, sister, sin, wonder (all four are members of a septet). Categories will be more specific than e.g., '5-letter-words', 'names', or 'verbs'. There is exactly one solution. Think step-by-step, and then give your answer in **bold** as a list of the 8 items separated by commas, ordered by group (for example, **bass, founder, salmon, trout, ant, drill, island, opal**). If you don't know the answer, make your best guess. The items are: row, drift, curl, tide, current, press, fly, wave. ] Ground Truth: current, drift, tide, wave, curl, fly, press, row The score for this task is the fraction of groups that the model outputs correctly. Typo Corrections Next, we include details about the task. The idea behind this task is inspired by the common use-case for LLMs where a user will ask the system to identify typos and misspellings in some written text. The challenge for the systems is to fix just the typos or misspellings, but to leave other aspects of the text unchanged. It is common for the LLM to impose its own writing style onto that of the input text, such as switching from British to US spellings or adding the serial comma, which may not be desirable. To create the questions for this task, we take text from recent ArXiv abstracts. These abstracts may themselves start with misspellings and grammatical errors. Therefore, our first step is to manually pass over the abstracts and fix typos and grammar issues. Next, we assemble a list of common misspellings as found online. This is done so as to replicate common misspellings performed by humans, even though we synthetically generate the questions. Finally, for each question, we sample a probability p∼ U(0.5,0.7) of flipping correctly spelled words to misspelled words. 
We then use that probability to replace every correctly spelled word with a common misspelling with that probability p. This allows there to be variability in the difficulty of the problem included in the benchmark. In our first release, we include 50 questions. Finally, to score this problem, we merely ask whether the ground truth abstract is contained in the output provided by the LLM. [breakable, colback=white, colbacktitle=blue!5!white, colframe=black, boxrule=1pt, title=An example question from the task. Please output this exact text, with no changes at all except for fixing the misspellings. Please leave all other stylistic decisions like commas and US vs British spellings as in the original text. We introducehten consept of a k-token signed graph adn studdy some of its combinatorial and algebraical properties. We prove that twpo switching isomorphic signed graphs ahve switching isomorphic token graphs. Moreover, we sohw tyhat the Laplacian spectum of a balanced signed graph is contained in the Laplacian spectra of its k-token signed graph. Besides, we introduce and studdyther unbalance levle of a signed graph, which is a new parameter tyhat measures how far a signed graph is frome beng balanced. Moreover, we study the relation bewteen the frustration index anbdther unballance level of signed graphs adn their token signed graphs.] Ground Truth: We introduce the concept of a k-token signed graph and study some of its combinatorial and algebraic properties. We prove that two switching isomorphic signed graphs have switching isomorphic token graphs. Moreover, we show that the Laplacian spectrum of a balanced signed graph is contained in the Laplacian spectra of its k-token signed graph. Besides, we introduce and study the unbalance level of a signed graph, which is a new parameter that measures how far a signed graph is from being balanced. Moreover, we study the relation between the frustration index and the unbalance level of signed graphs and their token signed graphs. Plot unscrambling. Finally, we include a movie synopsis unscrambling task. We obtain movie plot synopses from IMDb or Wikipedia for feature-length films released after January 1st 2024. These synopses are then split into their constituent sentences and are randomly shuffled. The lengths of the synopses vary from as few as 7 sentences to as many as 66 sentences; at the upper end, this is a very challenging task. The LLM is provided the shuffled sentences with the prompt: `The following plot summary of a movie has had the sentences randomly reordered. Rewrite the plot summary with the sentences correctly ordered. Begin the plot summary with <PLOT_SUMMARY>.'. Scoring the task involves two decision points: 1) how to deal with transcription errors - those in which the model modifies the lines when producing its output 2) given the ground truth ordering of sentences and the LLM's ordering, choosing an appropriate scoring metric. For 1), one option is to permit only strict matching – that is, the LLM must transcribe perfectly. However, although the strongest models do perform well on this (we find they achieve over 95% transcription accuracy), we find that LLMs often correct grammatical errors or spelling mistakes in the source data when transcribing. 
As we are primarily interested in testing the models' capabilities for causal language reasoning in this task, rather than precise transcription accuracy, we instead apply a fuzzy-match using difflib <cit.> to determine the closest match using a version of the Ratcliff/Obershelp algorithm <cit.>. For 2), we calculate the score as 1 - d/n_sentences_gt, where n_sentences_gt is the number of sentences in the ground truth synopsis, and d is the Levenshtein distance <cit.> of the model's sentence ordering to the ground truth synopsis ordering. Thus if the model's sentence ordering perfectly matches the ground truth, the distance d would be 0, and the score would be 1 for that sample. One might think that it is plausible that synopsis unscrambling cannot always be solved with the information provided. However, note that even if the set of sentences do not create a distinct causal ordering, the task is essentially asking the LLM to maximize the probability that a given arrangement of sentences is a real movie. In addition to causal reasoning, the LLM can use subtle cues to reason about what ordering creates the most compelling plot. Furthermore, even if there does exist an upper bound on the score that can be achieved that is strictly below 100%, it can still be a useful metric for distinguishing models' relative strengths. An analogous metric is that of next-token perplexity in language modelling; although it is likely that a perfect prediction of the next token is impossible to achieve, and we do not even know what the obtainable lower bound on perplexity is, it is still a powerful metric for determining language-modelling performance. Example questions from the plot unscrambling task can be lengthy, so examples can be viewed https://huggingface.co/datasets/livebench/language/viewer/default/test?f § ADDITIONAL DOCUMENTATION In this section, we give additional documentation for our benchmark. For the full details, see <https://github.com/livebench/livebench>. §.§ Author responsibility and license We, the authors, bear all responsibility in case of violation of rights. The license of our repository is the Apache License 2.0. §.§ Maintenance plan The benchmark is available on HuggingFace at <https://huggingface.co/livebench>. We plan to actively maintain and update the benchmark, and we welcome contributions from the community. §.§ Code of conduct Our Code of Conduct is from the Contributor Covenant, version 2.0. See <https://www.contributor-covenant.org/version/2/0/code_of_conduct.html>. The policy is copied below. “We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.” §.§ Datasheet We include a datasheet <cit.> for LiveBench at <https://github.com/livebench/livebench/blob/main/docs/DATASHEET.md>. §.§ Benchmark statistics Here, we give statistics on the number of questions and average number of output tokens per task, and the total cost of running with common API models. For the number of questions for each task, as well as the mean and std. dev number of input and output tokens per question for , see <ref>. Across 960 questions, the mean number of input tokens per question is 1568, and the mean number of output tokens is 371. 
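As a concrete illustration of the plot-unscrambling metric defined above (the score 1 - d/n_sentences_gt, with d the Levenshtein distance between sentence orderings), here is a minimal sketch; the helper names are ours, and the released implementation may differ in details such as the fuzzy-matching step.

```python
# Minimal sketch of the plot-unscrambling score 1 - d / n_sentences_gt, where d is
# the Levenshtein (edit) distance between the ground-truth sentence order and the
# order implied by the model's output (after matching each output sentence to an
# index in the ground truth).
def levenshtein(a: list[int], b: list[int]) -> int:
    dp = list(range(len(b) + 1))        # distances for the empty prefix of a
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i          # prev holds the diagonal (previous row, j-1)
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def unscramble_score(gt_order: list[int], model_order: list[int]) -> float:
    d = levenshtein(gt_order, model_order)
    return max(0.0, 1.0 - d / len(gt_order))

# A model output that swaps two adjacent sentences out of five:
print(unscramble_score([0, 1, 2, 3, 4], [0, 2, 1, 3, 4]))  # 0.6
```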
See <ref> for the price of running the GPT and Claude models on , as of June 21, 2024: and are the most expensive at $25.72, while is the cheapest at $0.82.
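A back-of-the-envelope sketch of how such per-model costs follow from the token statistics above; the per-million-token prices in the example are placeholders, not quoted provider rates.

```python
# Estimate the cost of one full benchmark run for a single model from the mean token
# counts reported above. Prices below are placeholder values; substitute a provider's
# actual per-million-token pricing, and note that real output lengths vary by model.
N_QUESTIONS = 960
MEAN_INPUT_TOKENS = 1568
MEAN_OUTPUT_TOKENS = 371

def estimate_cost(price_in_per_m: float, price_out_per_m: float) -> float:
    """Estimated USD cost of a full run, given prices per 1M input/output tokens."""
    input_cost = N_QUESTIONS * MEAN_INPUT_TOKENS * price_in_per_m / 1e6
    output_cost = N_QUESTIONS * MEAN_OUTPUT_TOKENS * price_out_per_m / 1e6
    return input_cost + output_cost

print(f"${estimate_cost(10.0, 30.0):.2f}")  # hypothetical $10 / $30 per 1M tokens
```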
http://arxiv.org/abs/2406.18490v1
20240626165419
Parameter selection and optimization of a computational network model of blood flow in single-ventricle patients
[ "Alyssa M. Taylor-LaPole", "L. Mihaela Paun", "Dan Lior", "Justin D Weigand", "Charles Puelz", "Mette S. Olufsen" ]
q-bio.QM
[ "q-bio.QM" ]
^1Department of Mathematics, North Carolina State University, Raleigh, NC, USA; ^2School of Mathematics and Statistics, University of Glasgow, Glasgow, UK; ^3Division of Cardiology, Department of Pediatrics, Baylor College of Medicine and Texas Children's Hospital, Houston, TX, USA. Keywords: Computational modeling, Parameter inference, Applied mathematics, Single ventricle disease. Correspondence: Mette S Olufsen, msolufse@ncsu.edu. § ABSTRACT Hypoplastic left heart syndrome (HLHS) is a congenital heart disease responsible for 23% of infant cardiac deaths each year. HLHS patients are born with an underdeveloped left heart, requiring several surgeries to reconstruct the aorta and create a single ventricle circuit known as the Fontan circulation. While survival into early adulthood is becoming more common, Fontan patients suffer from reduced cardiac output, putting them at risk for a multitude of complications. These patients are monitored using chest and neck MRI imaging, but these scans do not capture energy loss, pressure, wave intensity, or hemodynamics beyond the imaged region. This study develops a framework for predicting these missing features by combining imaging data and computational fluid dynamics (CFD) models. Predicted features from models of HLHS patients are compared to those from control patients with a double outlet right ventricle (DORV). We use parameter inference to render the model patient-specific. In the calibrated model, we predict pressure, flow, wave-intensity (WI), and wall shear stress (WSS). Results reveal that HLHS patients have higher vascular stiffness and lower compliance than DORV patients, resulting in lower WSS and higher WI in the ascending aorta and increased WSS and decreased WI in the descending aorta. § INTRODUCTION Hypoplastic left heart syndrome (HLHS) is a congenital heart disease responsible for 23% of infant cardiac deaths and up to 9% of congenital heart disease cases each year <cit.>. The disease arises in infants with an underdeveloped left heart, preventing adequate transport of oxygenated blood to the systemic circulation <cit.>. Characteristics of HLHS include an underdeveloped left ventricle and ascending aorta, as well as small or missing atrial and mitral valves. Three surgeries are performed over the patients' first 2-3 years of life, resulting in a fully functioning, univentricular circulation. This system, called the Fontan circuit, is shown in Figure <ref>. The first surgical stage in the creation of the Fontan circulation involves moving the aorta from the left to the right ventricle and widening it with tissue from the pulmonary artery. This surgically modified aorta is hereafter referred to as the “reconstructed” aorta. Next, the venae cavae are removed from the right atrium and attached to the pulmonary artery. As a result, flow to the pulmonary circulation is achieved by passive transport through the systemic veins <cit.>. The Fontan circuit has near-normal arterial oxygen saturation. However, the lack of a pump pushing blood into the pulmonary vasculature causes a bottleneck effect, increasing pulmonary impedance and decreasing venous return to the heart <cit.>.
Moreover, the single-pump system degenerates over time due to vascular remodeling accentuated by the system's attempt to compensate for reduced cardiac output <cit.>. Concerning clinical implications of the Fontan physiology include reduced cerebral and gut perfusion, increasing the risk of stroke <cit.> and the development of Fontan-associated liver disease (FALD) <cit.>. The study by Saiki et al. <cit.> shows that increased stiffness of the aorta, head, and neck vessels in HLHS patients with reconstructed aortas reduces blood flow to the brain. This reduced flow, combined with increased arterial stiffness, is associated with ischemic stroke <cit.>. Regarding FALD, the single pump system decreases both supply and drainage of blood in the liver. Hypertension within the liver vasculature typically occurs 5-10 years after Fontan surgery <cit.>. Reduced cardiac output and passive venous flow lead to liver fibrosis, a characteristic of FALD. With a heart transplant, FALD is reversible if caught early. Advanced FALD is irreversible and can lead to organ failure <cit.>. Another single ventricle pathology is double outlet right ventricle (DORV), which is also treated by creating a Fontan circulation. DORV patients, like HLHS patients, have a non-functioning left ventricle, but are born with a fully functioning aorta attached to the right ventricle. This physiology makes aortic reconstruction unnecessary <cit.>. Figure  <ref> shows a healthy (a) and a single ventricle (b) heart and circulation. The gray patch on the aorta indicates the reconstructed portion, required in HLHS but not DORV patients. Studies <cit.> suggest that DORV patients undergo less vascular remodeling and, therefore, have better cardiac function. Rutka et al. <cit.> found that Fontan patients with reconstructed aortas had poorer long-term survival, and Sano et al. <cit.> noted that these patients have an increased need for surgical intervention and increased mortality rates. type=figure < g r a p h i c s > Comparison of healthy (a) and single ventricle (b) heart and peripheral circulation. The homographic patch augmentation highlighted in gray on the single ventricle heart is only needed in HLHS patients. DORV patients have Fontan circulation but do not need aortic reconstruction. All single ventricle patients are monitored throughout life to assess the function of their single ventricle pump <cit.>. Typically, this is done via time-resolved magnetic resonance imaging (4D-MRI) <cit.> and magnetic resonance angiography (MRA) of the vessels in the neck and chest. In the imaged region, 4D-MRI provides time-resolved blood velocity fields in large vessels, while the MRA provides a high-fidelity image of the vessel anatomy. These imaging sequences do not measure energy loss, blood pressure, wave intensity, or any hemodynamics outside of the imaged region. Additional hemodynamic information can be predicted by combining imaging data with computational fluid dynamics (CFD) modeling <cit.>. Many studies have examined the Fontan circuit using three-dimensional (3D) CFD models <cit.>. While such models are ideal for assessing complex velocity patterns, they only provide insight within the imaged region and are, in general, computationally expensive. One-dimensional (1D) CFD models provide an efficient and accurate alternative modeling approach. Several studies have shown that flow and pressure waves predicted from 1D models are comparable to those obtained with 3D models <cit.>. 
Most computational work on the Fontan circulation focuses on venous blood flow. This includes studies investigating the hemodynamics and function of the total cavopulmonary connection <cit.>. However, computational studies assessing the Fontan circuit from the systemic arterial perspective are relatively scarce. A few have examined systemic arterial hemodynamics in individual patients. This includes the study by Taylor-LaPole et al. <cit.> that used a 1D CFD model to compare DORV and HLHS flow and pressure wave propagation in the cerebral and gut vasculature under rest and exercise conditions. Their study is promising, but results are calibrated manually and only include a single DORV and HLHS patient pair. The study by Puelz et al. <cit.> used a 1D CFD model to explore the effects of two different Fontan modifications (fenestration versus hepatic vein exclusion) on blood flow to the liver and intestines. Their study compared model predictions to ranges of clinical data obtained from literature. Still, they lacked a systematic methodology to determine model parameters. These studies successfully built patient-specific networks and predict hemodynamics outside the imaged region, demonstrating the importance of calibrating models to data but lack a systematic methodology to determine model parameters. This study addresses this significant limitation by devising a framework to combine imaging and hemodynamic data with a 1D CFD model. The model is systematically calibrated to patient-specific data using sensitivity analysis and parameter inference, an improvement over previous studies that rely on manual tuning of parameters. This approach presented here is then used to predict hemodynamics in four matched HLHS and DORV patient pairs. Calibration of the model is crucial, especially when it is used for predictions in a clinical setting. For this purpose, an identifiable subset of parameters is needed. Calibrating the model by estimating identifiable parameters provides unique patient-specific biomarkers, which can be compared between patients. Two steps are used to determine an identifiable parameter subset: sensitivity analyses (local and global), used to determine the effect of a given parameter on a specified quantity of interest <cit.>, and subset selection, required to determine interactions among parameters <cit.>. Several recent studies use these methodologies. Schiavazzi et al. <cit.> combine local and global sensitivity analyses to find a set of parameters that fit a zero-dimensional model to data from single-ventricle patients. This study demonstrates that both techniques lead to consistent and accurate subset selection. The study by Colebank et al. <cit.> uses local and global sensitivity analyses to determine the most influential parameters. They combine results with covariance analysis to quantify parameter interactions and further reduce the parameter subset to be inferred. The study calibrates the model using experimental data, but they only have pressure data at one location. Colunga & Colebank et al. <cit.> used local and global sensitivity analyses for subset selection. Their study incorporated multi-start inference to ensure identifiability and low coefficient of variation for all parameters. Inspired by these studies, we use local and global sensitivity analyses to determine parameter influence and covariance analysis to identify correlated parameters. The latter technique relies on local sensitivities. 
Once a parameter subset is identified, we use multistart inference to estimate the identifiable parameters. Once calibrated, we predict hemodynamic quantities, including pressure, flow, wave intensity, and wall shear stress to compare cardiac function between HLHS and DORV patients. To our knowledge, this is the first study that uses multiple data sets to calibrate a 1D, patient-specific CFD model for single-ventricle patients. § METHODS Medical imaging and hemodynamic data are combined to construct a patient-specific 1D arterial network model. Magnetic resonance angiography (MRA) images <cit.> are segmented, creating a 3D representation of vessels within the imaged region. For each HLHS and DORV patient, we extract vessel dimensions (radius and length) and vessel connectivity. The MRA images are registered to the 4D-MRI images, allowing for the extraction of flow waveforms in the aorta, head, and neck vessels using the high-fidelity MRA anatomy. The network includes vessels in the imaged region extended for one generation to ensure sufficient resolution of wave reflections. We solve a 1D CFD model in this network and use sensitivity analyses and subset selection to determine a subset of identifiable parameters. Parameter inference uses a gradient-based optimization scheme to minimize the residual sum of squares (RSS). §.§ Data acquisition Data for this study comes from four pairs of age- and sized-matched HLHS and DORV patients seen at Texas Children's Hospital, Houston, TX. Data collection was approved by the Baylor College of Medicine Institutional Review Board (H-46224: “Four-Dimensional Flow Cardiovascular Magnetic Resonance for the Assessment of Aortic Arch Properties in Single Ventricle Patients”). Imaging and pressure data. Cuff pressures are measured in the supine position. Images are acquired on a 1.5 T Siemens Aera magnet. MRA images are acquired in the sagittal plane and contain a reconstructed voxel size of 1.2 mm^3. 4D-MRI images are acquired during free-breathing with a slice thickness of 1-2.5 mm and encoding velocity 10% larger than the highest anticipated velocity in the aorta. The imaged region contains the ascending aorta, aortic arch, descending thoracic aorta, and innominate artery vessels (Figure <ref>). Patient characteristics, including age (years), height (cm), sex (m/f), weight (kg), systolic and diastolic blood pressure (mmHg, measured with a sphygmomanometer), and cardiac output (L/min), are listed in Table <ref>. 3D rendering. 3D rendered volumes are generated from both the MRA and 4D-MRI images, but vessel dimensions are only extracted from the MRA image. The aorta, head, and neck vessels are segmented using 3D Slicer, a popular open-source software from Kitware <cit.>. Using thresholding, cutting, and islanding (tools within 3D Slicer), we create a 3D rendered volume. The threshold for the 4D-MRIs is 0.51 to 1.00 Hounsfield Units (HU), and the threshold for MRAs is 430 to 1262 HU. The rendered volumes are saved to a stereolithography (STL) file and imported into Paraview <cit.>, from which they are converted to a VTK polygonal data file. Centerlines. Using VMTK, centerlines are generated from maximally inscribed spheres. At each point along the centerline, the vessel radius is obtained from the associated sphere <cit.>. Users manually select an inlet and outlet points. With this information, VMTK recursively generates centerlines, beginning at the outlets and determining pathways to the inlet. 
Junctions are defined as the points where two centerlines intersect. This procedure does not place the junction point at the barycenter for most vessels, especially those branching from the aorta. To correct this discrepancy, we use an in-house algorithm <cit.>, which is detailed in the supplement.

Image registration. Volumetric flow waveforms at multiple locations along the patient's vasculature are extracted from the 4D-MRI images. The 4D-MRI sequence measures the time-resolved blood velocity field at each voxel in the imaged region, sampled 19-28 times over the course of a cardiac cycle. 4D-MRIs provide vascular geometry, but only at certain phases of the cardiac cycle and with low spatial resolution. Hence, both an MRA, for the anatomy, and a 4D-MRI, for the velocity, are acquired for each patient. These two images are obtained in the same session, but shifting and deformation may occur between scans due to patient movement or misalignment of the respiratory cycles. Without correction, this can lead to inaccurate blood flow and volume estimates. To compensate, the MRA and 4D-MRI are aligned via an image registration procedure (shown in Figure <ref>) <cit.>. Details on this procedure can be found in the supplement.

Volumetric flow data. Flow waveforms are extracted from the 4D-MRI image using the aligned MRA segmentation. A detailed description of this procedure can be found in the supplement. The velocity field in the 4D-MRI image is generally not divergence-free, in part because of noise in the measurements and averaging performed over several cardiac cycles. Thus, the sum of the flows in the branching vessels is not expected to equal the flow in the ascending aorta. The 1D CFD model assumes that blood is incompressible and that flow through vessel junctions is conserved. To ensure convergence of the optimization, we impose volume conservation by scaling the flow waveforms extracted from the 4D-MRI images. The scaling uses a linear system to calculate the smallest possible scale factors needed to enforce mass conservation. The cardiac output of the ascending aorta is held fixed, and flows in the peripheral branching vessels are scaled as follows: q_inlet = ∑_i=1^4 (1+α_i) q_i. The term q_inlet is the flow through the ascending aorta, α_i is the scaling factor for the i^th vessel, and q_i is the flow in the i^th vessel. A linear system for the scale factors α_i is constructed from the Lagrangian of the optimization problem min (1/2)∑_i=1^4 α_i^2 such that q_inlet = ∑_i=1^4 (1+α_i) q_i. It should be noted that each flow waveform is determined by extracting five flow waveforms from the 4D-MRI image that lie in close proximity on the centerline within a straight section of the vessel; these waveforms are averaged to create a representative flow waveform.

[Figure: Segmentations and centerlines for each patient, shown in their respective pairs.]

§.§ Network generation
A labeled directed graph is generated for each patient using in-house algorithms <cit.> to extract vessel radii and lengths from the VMTK output. The centerlines are defined as edges, containing x,y,z coordinates along the centerline, and junctions as nodes. Vessel radii are specified at each x,y,z coordinate of an edge from the radius of the maximally inscribed sphere. The vessel length is calculated as the sum of the Euclidean distances between consecutive centerline points. Nodes shared between edges spanning more than one vessel are called "junction nodes."
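The constrained scaling described above has a closed-form solution obtained from its Lagrangian. The following minimal sketch (Python, with hypothetical flow values) illustrates the computation; we assume here that the constraint is imposed on the cycle-averaged branch flows, with each time-varying waveform then multiplied by its factor (1+α_i).

```python
import numpy as np

def flow_scale_factors(q_inlet, q_branches):
    """Smallest-norm factors alpha_i enforcing q_inlet = sum_i (1 + alpha_i) q_i.

    Solves min 1/2 * sum_i alpha_i^2 subject to the constraint above.  The
    Lagrangian stationarity condition gives alpha_i = lam * q_i, and the
    constraint fixes lam = (q_inlet - sum_i q_i) / sum_i q_i^2.
    """
    q = np.asarray(q_branches, dtype=float)
    lam = (q_inlet - q.sum()) / np.dot(q, q)
    return lam * q

# Hypothetical cycle-averaged flows (mL/s): inlet vs. four branch vessels
q_in = 50.0
q_br = np.array([18.0, 7.0, 6.0, 16.5])                 # sums to 47.5, not 50
alpha = flow_scale_factors(q_in, q_br)
assert np.isclose(((1.0 + alpha) * q_br).sum(), q_in)   # conservation restored
```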
Terminal vessels are defined as those with no further branching. A connectivity matrix is used to specify how vessels connect to each other in the network. For example, in Figure <ref>, vessel 1 (the ascending aorta) forms a junction with vessel 2 (the aortic arch) and vessel 3 (the innominate artery). The generated network is a tree (it has one input and multiple outputs). The tree has a natural direction since flow moves from the heart towards the peripheral vasculature. As a result, the network is represented by a labeled directed graph, where labels include the vessel radii and lengths. The directed graph generated from the images is adjusted in order to move junctions to the barycenter and determine representative vessel radii. This adjustment is done using an in-house recursive algorithm <cit.>. The network is extended beyond the imaged region to ensure predicted hemodynamics have the appropriate wave reflections <cit.>. In total, 17 vessels are included in the network, with 9 created from the imaged region. Allometric scaling <cit.> is performed on vessel dimensions outside the imaged region and is based on values obtained from the literature <cit.>.

[Figure: The network used in the fluids model. (a) Arteries extracted from the MRA images are displayed in dark red, and red lines denote the extended vessels outside of the imaged region. Small vessels, represented by structured trees, are attached to the end of each large vessel. (b) An example of a structured tree.]

Vessel length is scaled as L_2 = L_1 (W_1/W_2)^α, where α = 0.35, W_1 (kg) and L_1 (cm) are literature bodyweight and vessel length values, W_2 (kg) is the bodyweight of the patient, and L_2 (cm) is the unknown vessel length. Vessel radii not specified in the imaged region are also calculated this way. At the terminal ends of the large vessels, asymmetric structured trees are used as outlet boundary conditions for the fluids model. These structured trees represent arterial branching down to the capillary level. A schematic of the network, including the structured trees, can be seen in Figure <ref>.

§.§ Fluid dynamics
A 1D fluid dynamics model derived from the Navier-Stokes equations is used to compute pressure p(x,t) (mmHg), flow q(x,t) (mL/s), and cross-sectional area A(x,t) (cm^2). In the large vessels, the equations are solved explicitly, and in the small vessels, a wave equation is solved in the frequency domain to predict the impedance at the root of the small vessels.

Large vessels. We assume the blood to be incompressible, Newtonian, and homogeneous with axially symmetric flow, and the vessels to be long and thin. Under these assumptions, flow, pressure, and vessel cross-sectional area satisfy mass conservation and momentum balance equations of the form ∂A/∂t + ∂q/∂x = 0 and ∂q/∂t + ∂/∂x(q^2/A) + (A/ρ) ∂p/∂x = -(2πνR/δ)(q/A), where 0 ≤ x ≤ L (cm) is the axial position along the vessel, μ (g/cm/s) is the dynamic viscosity, ρ (g/cm^3) is the blood density, ν = μ/ρ (cm^2/s) is the kinematic viscosity, R (cm) is the radius, and t (s) is time. These equations rely on the assumption of a Stokes boundary layer, u_x(r,x,t) = u̅_x for r < R-δ and u_x(r,x,t) = u̅_x (R-r)/δ for R-δ < r ≤ R. Here δ = √(νT/2π) (cm) is the boundary layer thickness, T (s) is the duration of the cardiac cycle, u_x is the axial velocity, and u̅_x is the axial velocity outside of the boundary layer.
To close our system of equations, we define a pressure-area relationship of the form p(x,t) - p_0 = (4/3)(Eh/r_0)(1 - √(A_0/A)), with Eh/r_0 = k_1 exp(-k_2 r_0) + k_3, where E (g/cm/s^2) is Young's modulus, h (cm) is the vessel wall thickness, p_0 (g/cm/s^2) is a reference pressure, r_0 (cm) is the inlet radius, and A_0 (cm^2) is the cross-sectional area when p(x,t) = p_0. The system (<ref>)-(<ref>) is hyperbolic with Riemann invariants propagating in opposite directions. Therefore, boundary conditions are required at the inlet and outlet of each vessel. At the ascending aorta, corresponding to the root of the labeled tree, an inlet flow waveform is imposed using the patient-specific flow waveform extracted from the 4D-MRI data. At junctions, we impose mass conservation and pressure continuity: q_p = q_d1 + q_d2 and p_p = p_d1 = p_d2. The subscripts p and d1,2 refer to the parent and daughter vessels, respectively. Outlet boundary conditions are imposed by coupling the terminals of the large vessels to an asymmetric structured tree model (described below) representing the small vessels <cit.>. The model equations are nondimensionalized and solved using the explicit two-step Lax-Wendroff method <cit.>.

Small vessels. In the small vessels, viscous forces dominate; therefore, we neglect the nonlinear inertial terms. As a result, equations (<ref>) and (<ref>) are linearized and reduced assuming periodicity of solutions, resulting in iωQ + (A_0(1-F_J)/ρ) ∂P/∂x = 0 and iωCP + ∂Q/∂x = 0, with F_J = 2J_1(ω_0)/(ω_0 J_0(ω_0)) and C = ∂A/∂p ≈ 3A_0 r_0/(2Eh), where J_0 and J_1 are the zeroth and first order Bessel functions, ω_0, defined by ω_0^2 = i^3 r_0^2 ω/ν, is the Womersley number, and C denotes the vessel compliance. The term Eh/r_0 has the same form as (<ref>) but with the small vessel stiffness parameters ks_1, ks_2, and ks_3. Analytical solutions of equations (<ref>) and (<ref>) are Q(x,ω) = a cos(ωx/c) + b sin(ωx/c) and P(x,ω) = (i/g_ω)(-a sin(ωx/c) + b cos(ωx/c)), where a, b are integration constants and g_ω = √(C A_0(1-F_J)/ρ) <cit.>. From these solutions, it is possible to determine the impedance at the beginning of each vessel as a function of the impedance at the end of the vessel, Z(0,ω) = (i g_ω^-1 sin(ωL/c) + Z(L,ω) cos(ωL/c)) / (cos(ωL/c) + i g_ω Z(L,ω) sin(ωL/c)), where c = √(A_0(1-F_J)/(ρC)) is the wave propagation velocity. Similar to the large vessels, junction conditions impose pressure continuity and mass conservation. The terminal impedance of each structured tree is assumed to be zero for all frequencies, and the impedance is propagated recursively through the bifurcations to compute the impedance at the root of each structured tree <cit.>.

Wave-intensity analysis. The propagated pressure and flow waves are decomposed using wave-intensity analysis (WIA), a tool that quantifies the incident and reflected components of these waves <cit.>. Components are extracted from the pressure and average velocity using Γ_±(t) = Γ_0 + ∫_0^t dΓ_±, Γ = p, u̅, with dp_± = (1/2)(dp ± ρc du̅) and du̅_± = (1/2)(du̅ ± dp/(ρc)), where c (cm/s) is the pulse wave velocity, dp is the change in pressure, and du̅ is the change in average velocity across a wave <cit.>. The time-normalized wave intensity is defined as WI_± = (dp_±/dt)(du̅_±/dt), where the positive subscript refers to the incident wave and the negative to the reflected wave. Both incident and reflected waves can be classified as compressive or expansive. Compressive waves occur when WI_±, dp_± > 0, while expansive waves occur when WI_±, dp_± < 0.
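As an illustration of this post-processing step, the sketch below computes the incident and reflected components from sampled pressure and average-velocity traces. It is a minimal version that assumes a constant, user-supplied pulse-wave velocity c and simple finite differences for the differentials; it is not the implementation used in the study.

```python
import numpy as np

def wave_intensity(p, u, t, rho, c):
    """Decompose p(t), ubar(t) into incident (+) and reflected (-) components.

    Implements dp_pm = 0.5*(dp +/- rho*c*du), du_pm = 0.5*(du +/- dp/(rho*c))
    and the time-normalized wave intensity WI_pm = (dp_pm/dt)*(du_pm/dt),
    with a constant pulse-wave velocity c.
    """
    dt = np.gradient(t)
    dp = np.gradient(p)
    du = np.gradient(u)
    dp_f, dp_b = 0.5 * (dp + rho * c * du), 0.5 * (dp - rho * c * du)
    du_f, du_b = 0.5 * (du + dp / (rho * c)), 0.5 * (du - dp / (rho * c))
    wi_f = (dp_f / dt) * (du_f / dt)     # incident (forward) wave intensity
    wi_b = (dp_b / dt) * (du_b / dt)     # reflected (backward) wave intensity
    return wi_f, wi_b, dp_f, dp_b
```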
A wave-reflection coefficient is calculated as the ratio of the amplitudes of reflected compressive pressure waves to incident compressive pressure waves, I_R = Δp_-/Δp_+. The incident (forward) wave begins at the network inlet. Forward compression waves (FCW) increase pressure and flow velocity as blood travels downstream to the outlet, while forward expansion waves (FEW) decrease pressure and flow velocity. All reflected (backward) waves begin at the outlet of the vessel. Backward compression waves (BCW) increase pressure and decrease flow velocity as blood is reflected upstream in the vessel, while backward expansion waves (BEW) decrease pressure and accelerate flow <cit.>.

Wall shear stress. The stress τ_w (g/cm/s^2) that the fluid exerts on the vessel wall, called the wall shear stress (WSS), is defined using the Stokes boundary layer from equation (<ref>): τ_w = -μ ∂u_x/∂r = 0 for r < R-δ and τ_w = μu̅/δ for R-δ < r ≤ R, where μ is the dynamic viscosity, δ is the thickness of the boundary layer, and u̅(x,t) = q(x,t)/A(x,t) is the average velocity <cit.>.

§.§ Model summary
The 1D fluid dynamics model, described in section <ref>, predicts pressure, area, and flow in all 17 large vessels within the network shown in Figure <ref>. Solutions of the model rely on parameters that specify the fluid and vessel properties. The latter include geometric parameters that determine the vessel lengths, radii, and connectivity, material parameters that determine the vessel stiffnesses, and boundary condition parameters that determine the inflow at the root of the network and the outflow conditions at the terminal vessels. The model consists of both large vessels modeled explicitly and small vessels described by the structured tree framework.

Fluid dynamics. Parameters required to specify the fluid dynamics include the blood density ρ (g/cm^3), the dynamic viscosity μ (g/cm/s), the kinematic viscosity ν = μ/ρ (cm^2/s), and the boundary layer thickness δ = √(νT/2π) (cm): θ_fluid = {ρ, μ, ν, δ}. Hematocrit and viscosity are assumed to be the same for all patients, and, therefore, we keep these fixed for all simulations. The parameters ρ, μ, and ν are used in both the large and small vessels. Standard values for these parameters (Table <ref>) are obtained from the literature <cit.>.

Geometry parameters. For each patient, geometric parameters are determined by segmenting the MRA images as described in section <ref>. The network includes 17 vessels, so there are 3×17 parameters corresponding to the vessel lengths L_i, inlet radii r_in,i, and outlet radii r_out,i: θ_geometry = {r_in,i, r_out,i, L_i}, i = 1,…,17. Values for these parameters (Table <ref>) are extracted from the imaging data and are kept constant for each patient.

Material parameters. The model assumes vessels have increasing stiffness with decreasing radii. This trend is modeled by equation (<ref>), which contains three stiffness parameters: k_1 (g/cm/s^2), k_2 (1/cm), and k_3 (g/cm/s^2). Vessels are grouped by similarity, and we keep these parameters fixed within each group. Referring to Figure <ref>, vessels 1, 2, 4, and 7 (the aortic vessels) are in group 1, vessels 3, 6, 8, and 16 (vessels leading to the arms) are in group 2, vessels 5, 12, 13, 14, and 15 (carotid vessels) are in group 3, and vessels 9 and 17 (vertebral vessels) are in group 4. In summary, we have 12 material parameters for the large vessels: θ_material = {k_1,g, k_2,g, k_3,g}, where the subscript g = 1,…,4 enumerates the vessel groups.
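To illustrate how the material parameters enter the model, the following sketch evaluates the wall model and tube law for a single vessel group. The numerical values of k_1, k_2, and k_3 below are hypothetical placeholders, not the nominal values of Table <ref>.

```python
import numpy as np

# Hypothetical group-1 (aortic) stiffness values -- placeholders only.
K1, K2, K3 = 1.0e5, 10.0, 5.0e4          # g/cm/s^2, 1/cm, g/cm/s^2

def wall_stiffness(r0, k1=K1, k2=K2, k3=K3):
    """Eh/r0 = k1*exp(-k2*r0) + k3: stiffness increases as the radius decreases."""
    return k1 * np.exp(-k2 * r0) + k3

def pressure_from_area(A, A0, r0, p0=0.0, k1=K1, k2=K2, k3=K3):
    """Tube law p - p0 = (4/3)*(Eh/r0)*(1 - sqrt(A0/A)) used in the large vessels."""
    return p0 + (4.0 / 3.0) * wall_stiffness(r0, k1, k2, k3) * (1.0 - np.sqrt(A0 / A))

# Transmural pressure (g/cm/s^2) for a 10% area increase in a 1 cm radius vessel
r0 = 1.0
A0 = np.pi * r0**2
print(pressure_from_area(1.1 * A0, A0, r0))
```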
In our previous study <cit.>, these parameter values were determined by hand-tuning for a single DORV and HLHS patient pair. For this study, we use the parameter values from <cit.> for k_1,g and k_2,g, while nominal values for k_3,g were obtained using equations (<ref>) and (<ref>). Nominal parameter values are listed in Table <ref>.

Boundary condition parameters. The last set of parameters corresponds to either the inflow or the structured tree boundary conditions. For each patient, at the inlet of the network, an inflow waveform is extracted from the 4D-MRI image. For the outflow boundary conditions, seven structured tree parameters are needed for each terminal vessel (vessels 7, 8, 9, 12, 13, 14, 15, 16, and 17). The parameters include the radius scaling factors α and β, which govern the asymmetry of the structured tree, lrr, which specifies the length-to-radius ratio, and r_min, the minimum radius used to determine the depth of the structured tree. In addition, the small vessels also have three stiffness parameters, ks_1, ks_2, and ks_3. The small vessel stiffness parameters are grouped by vessel type: vessel 7, the descending aorta, is in group 1, vessels 8 and 16, the brachial arteries, are in group 2, vessels 12 to 15, the carotid arteries, are in group 3, and vessels 9 and 17, the vertebral vessels, are in group 4. This gives a total of 16 structured tree parameters: θ_boundary = {ks_1,g, ks_2,g, ks_3,g, α, β, lrr, r_min}. Nominal values for α, β, lrr, r_min, ks_1,g, and ks_2,g are taken from <cit.>, and ks_3,g is calculated in the same way as k_3,g. For all terminal vessels except the descending aorta, the parameter r_min is set to 0.001 cm. This value is consistent with the average radius of the arterioles. The parameter r_min for the descending aorta is set to 0.01 cm, since this vessel is terminated at a larger radius.

Quantity of interest. We estimate identifiable unknown parameters to quantify differences between the two patient groups. To do so, we define a quantity of interest that measures the discrepancy between model predictions and the available data. Figure <ref> marks the locations of the flow waveform measurements. In this study, we extract flow waveforms from the ascending aorta, descending aorta, and the innominate, left common carotid, and left subclavian arteries. The flow measured in the ascending aorta is used as the inflow boundary condition, and the remaining four flow waveforms are used to calibrate the model. In addition, we have systolic and diastolic pressure measurements from the brachial artery. These are not measured simultaneously with the flows but are still obtained in the supine position. Using these data, we construct residual vectors for both the flow and the pressure. More precisely, for each flow q_i we compute r_q_i(t_j) = (q_i^m(t_j) - q_i^d(t_j))/q_i,max^d, i = 1,…,4, j = 1,…,T_p, where r_q_i(t_j) denotes the residual between the flow data q_i^d(t_j) (mL/s) and the associated model prediction q_i^m(t_j) (mL/s) at each of the four locations, q_i,max^d is the maximum of the measured flow in vessel i, and t_j (j = 1,…,T_p) denotes the j^th time step within the cardiac cycle. T_p refers to the number of time point measurements (19-28) per cardiac cycle; this number differs between patients. The combined flow residual is defined as r_q = [r_q_1, r_q_2, r_q_3, r_q_4], where r_q_i is the residual vector for flow i defined in (<ref>). It should be noted that r_q has dimensions 4×T_p.
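A minimal sketch of the flow residual defined above is given below; it assumes the model and measured waveforms are stored as (vessel × time) arrays sampled at the same T_p time points.

```python
import numpy as np

def flow_residual(q_model, q_data):
    """Stacked relative flow residual r_q for the four measured vessels.

    q_model, q_data: arrays of shape (4, T_p); each row is normalized by the
    peak of the measured waveform in that vessel before stacking.
    """
    q_model = np.asarray(q_model, dtype=float)
    q_data = np.asarray(q_data, dtype=float)
    q_max = np.abs(q_data).max(axis=1, keepdims=True)
    return ((q_model - q_data) / q_max).ravel()      # length 4 * T_p
```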
Pressure measurements are only available from one location at peak systole and diastole; therefore, the pressure residual is defined as r_p = [r_p_1, r_p_2] = [(p_sys^m - p_sys^d)/p_sys^d, (p_dia^m - p_dia^d)/p_dia^d], where p_k^d and p_k^m, k = sys, dia (mmHg), denote the data and model predictions, respectively. Since we do not have the exact location of the cuff measurement, we match model predictions to data at the midpoint of the brachial artery. We perform sensitivity and identifiability analysis to determine a subset of influential and identifiable parameters. The initial set of parameters to be explored includes those not determined from the patient data, i.e., the material and structured tree parameters, θ_SA = {k_1,g, k_2,g, k_3,g, ks_1,g, ks_2,g, ks_3,g, α, β, lrr}.

§.§ Sensitivity and identifiability analysis
Local sensitivity analysis provides insight into the influence of nominal parameters on the model predictions. However, local sensitivities can be inaccurate if the model is highly nonlinear and the optimal parameters vary significantly from their nominal values. Global sensitivities provide additional information in the form of parameter influences over the entire parameter space. Local and global sensitivity analyses are performed on the flow residual vector r_q defined in (<ref>) with respect to the parameters θ_SA defined in (<ref>).

Local sensitivity analysis. Using a derivative-based approach, we compute local sensitivities of the quantity of interest (the residual vector) with respect to each parameter. Sensitivities are evaluated by varying one parameter and fixing all others at their nominal values <cit.>. We compute local sensitivities with respect to log-scaled parameters (θ̃_SA = log θ_SA), ensuring both positivity and that parameter values are on the same scale. With these assumptions, the local sensitivity of r_q with respect to the n^th component of the parameter vector is given by S_n = ∂r_q/∂θ̃_SA^n = [(∂q_1/∂θ_SA^n)(θ_SA^n/q_1,max^d), …, (∂q_4/∂θ_SA^n)(θ_SA^n/q_4,max^d)], n = 1,…,B, where B denotes the total number of parameters. Sensitivities are estimated using centered finite differences, ∂r_q/∂θ̃_SA^n ≈ (r_q(θ̃_SA^n + e_n ψ) - r_q(θ̃_SA^n - e_n ψ))/(2ψ), where θ̃_SA^n is the parameter of interest, ψ is the step size, and e_n is the unit basis vector in the n^th direction <cit.>. Parameters are ranked from most to least influential by calculating the 2-norm of each sensitivity <cit.>, S̅_n = ||S_n||_2.

Global sensitivities. Morris screening <cit.> is used to compute global sensitivities. This method predicts elementary effects defined as d_n(θ_SA^n) = (r_q(θ_SA^n + e_n Δ) - r_q(θ_SA^n))/Δ, where the number of samples is set to K = 100 and the number of levels of the parameter space is set to ℳ = 60, resulting in a step size Δ = ℳ/(2(ℳ-1)) ≈ 0.508. Elementary effects are determined by sampling K values from a uniform distribution for a particular parameter θ_SA^n. Similar to the local sensitivities, each elementary effect is reduced to a scalar by computing its 2-norm, d_n^j = ||d_n^j(θ)||_2, where j denotes the j^th sample. Using the algorithm by Wentworth et al. <cit.>, the results are integrated to determine the mean and variance of the elementary effects. These are defined as μ_n^* = (1/K)∑_j=1^K |d_n^j| and σ_n^2 = (1/(K-1))∑_j=1^K (d_n^j - μ_n^*)^2, where μ_n^* is the sensitivity of the quantity of interest with respect to the specified parameter and σ_n^2 is the variability in the sensitivities due to parameter interactions and model nonlinearities. Parameters are ranked by computing √((μ_n^*)^2 + σ_n^2) to account for both the magnitude and the variability of each elementary effect.
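The screening step can be summarized by the following sketch. It is a simplified reading of the description above (random base points in the bounded parameter box and one-at-a-time steps of size Δ), not the trajectory design of the cited algorithm, and it treats the 1D model as a black-box residual function; each of the K samples costs B+1 model evaluations.

```python
import numpy as np

def morris_screening(residual, lb, ub, K=100, M=60, seed=0):
    """Simplified one-at-a-time Morris screening of a residual function.

    residual : callable returning the residual vector r_q for a parameter vector.
    lb, ub   : arrays of lower/upper bounds of the (scaled) parameters.
    Returns mu_star, sigma2, and the ranking score sqrt(mu_star^2 + sigma2).
    """
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    B = lb.size
    delta = M / (2.0 * (M - 1))                      # ~0.508 in the unit hypercube
    d = np.empty((B, K))
    for j in range(K):
        x = rng.uniform(0.0, 1.0 - delta, size=B)    # random base point
        r0 = residual(lb + x * (ub - lb))
        for n in range(B):
            xp = x.copy()
            xp[n] += delta                           # perturb one parameter at a time
            rn = residual(lb + xp * (ub - lb))
            d[n, j] = np.linalg.norm(rn - r0) / delta   # 2-norm of the elementary effect
    mu_star = np.abs(d).mean(axis=1)
    sigma2 = d.var(axis=1, ddof=1)
    return mu_star, sigma2, np.sqrt(mu_star**2 + sigma2)
```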
To stay within a physiological range of parameter values, the large and small vessel stiffness parameters are perturbed by ±10%, and α, β, and lrr are perturbed by ±5%.

Covariance analysis. Pairwise correlations between parameters can be determined using covariance analysis. From the local sensitivities, we construct the covariance matrix V = s^2 [S^T S]^-1, where S = [S_1, …, S_B] is the local sensitivity matrix and s^2 is a constant observation variance, and compute the correlation coefficients v_nj = V_nj/√(V_nn V_jj). Studies using this method have defined correlated parameters using thresholds for v_nj ranging from 0.8 to 0.95 <cit.>. Here, we assume that parameter pairs are correlated if v_nj > 0.9. The information gained from the sensitivity and covariance analyses allows us to determine a parameter set for inference, denoted θ_inf, which, as we will show in Section <ref>, is given by θ_inf = {α, k_3,1:4, ks_2, ks_3,1:4}.

§.§ Parameter inference
The parameter subset θ_inf defined in equation (<ref>) is inferred by minimizing the residual sum of squares, RSS = ∑_i=1^4 (∑_j=1^T_p r_q_i(t_j)^2) + ∑_k∈{dia, sys} r_p_k^2, where r_q_i(t_j) and r_p_k are defined in equations (<ref>) and (<ref>). To minimize equation (<ref>), we use the sequential quadratic programming (SQP) method, a gradient-based algorithm <cit.>, as implemented in Matlab's constrained optimization solver with the algorithm option 'sqp'. We use a tolerance of 1.0×10^-8 <cit.>. Parameter bounds for each patient are set to ensure that model predictions remain within the physiological range. Exact nominal values and ranges for each patient are listed in the supplemental material. To complement the global sensitivity analysis, we use multistart inference to test whether the locally identifiable parameter subset remains identifiable. The optimizer is initialized at 12 different sets of parameter values. These sets are specified by sampling from a uniform distribution obtained by varying the parameters by ±30% of their nominal values. We record the final value of the cost function and check for convergence across optimizations. Parameters with a coefficient of variation (CoV, the standard deviation divided by the mean) greater than 0.10 are removed from the subset. The multistart inference process is repeated until all parameters in the subset converge and the coefficient of variation of each parameter is below 0.10. The parameters in the final subset, collected in the vector θ_inf, are then estimated through one round of optimization with different initializations to avoid trapping in local minima.

§ RESULTS
§.§ Image analysis
The DORV and HLHS patients have significantly different vessel radii. Figure <ref> shows that remodeling of the reconstructed aorta in the HLHS group widens the ascending aorta and aortic arch as compared to the native aorta in the DORV group. For most HLHS patients, the aortic arch is wider than the ascending aorta. However, the two groups have approximately the same radii at the distal end of the descending aorta.

[Figure: Radii along the aorta for a DORV and an HLHS patient pair. In the HLHS patient, the radii increase along the ascending aorta and then decrease, while in the DORV patient, the radii decrease along the length of the aorta. The two vessels have similar radii at the distal end; only the beginning of the descending aorta differs between the two patients. Error bars (one standard deviation of the radii measurements within each vessel) are plotted at the inlet of the ascending aorta, aortic arch I, aortic arch II, and descending aorta.]

The geometric results in Table <ref> also show that remodeling is heterogeneous.
The variance of the vessel dimensions along the ascending aorta and aortic arch, computed with the changepoint method described in the supplement, is significantly smaller in the DORV group than in the HLHS group, while the variance within each patient is similar (Figure <ref>). Remodeling also impacts the vessels in contact with the reconstructed tissue, namely the innominate artery and aortic arch. As noted above, the aortic arch is widened, and the innominate artery is significantly shorter in HLHS patients. The same holds for the ascending aorta, which is also shorter in HLHS patients. The vessels further away (the head and neck vessels and the descending aorta) have similar dimensions and variance in both groups. However, as noted in Section <ref>, these vessels are stiffer in HLHS patients. Ranges of vessel dimensions for each patient type are listed in Table <ref>, and exact vessel dimensions for each patient are reported in the supplement.

§.§ Parameter inference
Parameter identifiability. An identifiable parameter subset is constructed by combining the results of the local and global sensitivity analyses, covariance analysis, and multistart inference. Local sensitivity analysis reveals that the most influential parameters are α, β, and lrr. These parameters are followed by the stiffness parameters k_3,g, ks_3,g, and ks_2,g (see Figure <ref>). Globally, k_3,g and ks_3,g are the most influential, followed by the structured tree parameters α, β, and lrr. However, the relative sensitivities of these five parameters are similar, i.e., they are all good candidates for the parameter subset. This result should be contrasted with the parameters k_1,g and k_2,g, which are less influential. Covariance analysis revealed a correlation between the structured tree parameters. Since α is the most influential parameter, we inferred α and fixed β and lrr. The large vessel stiffness parameters k_3,g and k_1,g are also correlated, and so are k_2,g and k_1,g. Informed by the sensitivity analyses, we chose to fix k_1,g and k_2,g and inferred only k_3,g. For the small vessels, ks_2,g is more influential; therefore, we fixed ks_1 and inferred ks_2,g and ks_3,g. Multistart inference is used to determine the final subset of identifiable parameters. The initial parameter subset (θ_CA = {α, k_3,1:4, ks_2,1:4, ks_3,1:4}) includes parameters that are influential and locally uncorrelated. Multistart inference of θ_CA resulted in a CoV > 0.1 for ks_2,g. Given that most structured trees are similar, we set ks_2,g = ks_2. Repeated estimation including ks_2 as a global parameter gave an identifiable subset θ_inf = {α, k_3,1:4, ks_2, ks_3,1:4}, i.e., all 12 samples estimated parameters with a CoV < 0.1. Results of the multistart are shown in Figure <ref>(c).

Model calibration. Parameter inference provided successful calibration of the model to data. Results in Figure <ref> show that the model fits both the main and reflected features of the flow waveforms and the systolic and diastolic brachial pressures. Optimal model predictions are plotted with a solid red line and the averaged measured flow waveforms with solid black lines. Error bars were obtained by averaging waveforms extracted at nearby points within the vessel, as described in Section <ref>. These bars correspond to one standard deviation above and below the mean.
The gray silhouette represents predictions generated by sampling parameters from a uniform distribution within the bounds used for parameter inference, demonstrating the ability of the optimization to generate model outputs that fit the data.

§.§ Model predictions
Inferred parameters. Figure <ref> compares the estimated parameters, scaled from 0 to 1. These results show that the aortic stiffness (k_3,1) is higher and more variable in HLHS patients, which appears to be consistent with our results for the vessel geometry. Even though vessels further from the reconstruction have similar geometry between patient groups, vessel stiffness (k_3,2:4) is increased in HLHS patients, indicating that remodeling affects the vasculature as a whole. Moreover, peripheral vessels are stiffer in HLHS patients, but remodeling has not affected the peripheral branching structure (estimated values for α are the same for all vessels). This result seems to agree with the finding that geometry is not altered in the peripheral vessels.

Flow, pressure, and area predictions. Figure <ref>(a) shows that average systolic and pulse pressures are higher in the aortic and cerebral arteries for the HLHS patients. Figure <ref>(b) and (c) show pairwise comparisons of flow predictions. Averaging over all patients within each group shows no differences, but pairwise comparisons of pulse flow (maximum flow minus minimum flow) reveal different trends. In (b), pulse flow in the aortic vessels is lower in HLHS patients or shows no significant difference, except in HLHS patient 2 (from patient pair 2). In (c), on average, pulse flow is decreased in the HLHS cerebral vasculature. This can be seen better by comparing the values vessel-wise (see supplement). Paired HLHS and DORV patients have similar cardiac outputs; therefore, this pairwise comparison provides greater insight. Larger pulse flows in the cerebral vessels indicate greater perfusion to the brain, while similar pulse flows in the descending aorta suggest that both groups are possibly at the same risk for FALD (see supplement). We remark that the reported pulse flows are relative to the total cardiac output.

Wave intensity analysis. The incident and reflected waves shown in Figure <ref> differ significantly between the two patient groups. Overall, the wave-reflection coefficient I_R is higher in HLHS patients, suggesting that reconstructed aortas induce more reflections. This might be in part due to larger differences in vessel size and stiffer vessels for patients in the HLHS group. Moreover, DORV patients have significantly higher forward compression and expansion waves, and HLHS patients have higher backward compression waves. The higher backward compression wave is likely a result of the widened ascending aorta and aortic arch, increasing the wave reflections. The exception to this trend is pair number 2. The DORV patient of this pair does not differ significantly from the other patients in the group, but HLHS patient 2 has significantly higher forward and backward waves. This patient does not have significantly different geometry compared to the other patients. However, they do have increased pulse flow in their aortic vessels. Our modeling approach is able to predict abnormalities in this patient by taking into account the complex interactions of these quantities.

Wall shear stress. We computed the wall shear stress (WSS) in the center of the aortic vessel segment for each patient (Figure <ref>).
DORV patients have higher WSS values that peak between 25 to 50 g/cm·s^2 as compared to HLHS patients, with WSS values peaking between 10 to 29 g/cm·s^2. All patients have similar WSS values in the ascending aorta and aortic arch, but it is significantly higher in the DORV patients in these regions. Both groups have increased wall shear stress in the descending aorta, but the increase is higher in HLHS patients. This is likely a result of the stiffening of the descending aorta in this patient group. Again, HLHS patient 2 has larger WSS in the descending aorta compared to the other patients. § DISCUSSION This study describes a framework for building patient-specific 1D CFD models and then applies these models to predict hemodynamics for two groups of single ventricle patients. We begin with the analysis of medical images to extract patient geometries and to quantify differences in vessel dimensions. Then, we perform local and global sensitivity analyses, covariance analysis, and multistart inference to determine an influential and identifiable subset of parameters. We then infer this subset, θ_inf={k_3,g,ks_2,ks_3,g,α}, corresponding to vascular stiffnesses and downstream resistance. Our findings show that HLHS patients have wider ascending aortas, increased vascular stiffnesses throughout the network, increased pressures in the aorta and cerebral vasculature, and increased wave reflections. Additionally, model predictions reveal that HLHS patient 2 has quantitatively different pressure and flow patterns compared to the other HLHS patients. §.§ Image analysis Patient geometry derived from medical imaging data is one of the known parameters that is integrated into the model. It has a substantial influence on the model predictions. Colebank et al. <cit.> and Bartolo et al. <cit.> detail the importance of accurate vascular geometry and its influence on model predictions. HLHS patients have greater ascending aorta radii. Two HLHS patients have significant remodeling, with ascending aortas increasing along the aortic arch rather than decreasing. However, all patients have similarly sized descending aortas. This abnormal geometry inherent to the HLHS cohort likely contributes to abnormal flow patterns and disturbances, promoting continued vascular and ventricular remodeling over time <cit.>. Hayama et al. <cit.> noted that aortic roots in HLHS patients continue to dilate over time, contributing to both ventricular and downstream remodeling and eventual Fontan failure. Therefore, monitoring the change of aortic geometry over time is essential for single ventricle patients with reconstructed aortas. §.§ Parameter inference Local sensitivity analysis reveals that the structured tree parameters α and β have the greatest influence on the quantity of interest. Stiffness parameters k_3,g and ks_3,g also influence the predictions significantly. This result agrees with findings by Paun et al. <cit.> who used a similar 1D CFD model. Their study only considered local sensitivities and parameters were not vessel specific. They concluded that α was the most influential, followed by β and k_3. Our findings also agree with work from Clipp and Steele <cit.>, which emphasized the importance in tuning of the structured tree boundary condition parameters to fit the model to data. Morris screening results are consistent with local sensitivity results in the sense that the five most influential parameters are the same in each case. 
Figure <ref> shows that while k_3,g and ks_3,g are slightly more influential than the structured tree parameters, there are no significant differences in the values of the relative sensitivities. Our findings are also consistent with experimental studies investigating the effects of aortic stiffness and resistance on hemodynamics. In the aortic vasculature, it has been found that increased aortic stiffness can decrease stroke volume and increase pressure <cit.>. Through an in-vitro investigation focused on the impact of aortic stiffness on blood flow, Gulan et al. <cit.> found that increasing aortic stiffness greatly influences velocity patterns and blood volume through the aorta. Covariance analysis revealed a correlation between the structured tree parameters, as well as between k_1,g and the stiffness parameters k_2,g and k_3,g. We inferred α and fixed β and lrr. The parameter α is the most influential, and it is not correlated with any of the stiffness parameters. We fixed k_1 and k_2 for all vessels to reduce the number of inferred parameters. Covariance analysis is performed using the local sensitivity matrix, therefore there is no guarantee that parameters remain correlated after being estimated. Multistart inference results showed that ks_2,g had a CoV > 0.1, and estimated parameters varied significantly. Due to the influence that ks_2 has on the model, we inferred that parameter but used a global rather than vessel-specific value. These findings are consistent with studies from Colebank et al. <cit.> and Paun et al. <cit.> in which they performed related analyses on similar models and chose to infer large artery stiffness, small vessel stiffness, and at least one structured tree parameter. Figure <ref> compares the inferred parameter values between the two patient groups. Overall, DORV patients have lower large vessel stiffnesses in the aorta and peripheral vessels. The stiffness varies less between vessel types, and hemodynamic predictions are more uniform. Estimated values for the α and ks_2 parameters did not differ significantly between groups. However, small and large vessel stiffnesses, ks_3,g and k_3 respectively, were substantially higher in the HLHS group, with DORV patients having lower downstream resistance vascular stiffness than HLHS patients. This finding is consistent with those from Cardis et al. <cit.> and Schafer et al. <cit.>, who found that Fontan patients with HLHS had higher vascular stiffness compared to other Fontan patient types. The increased stiffness might occur in part due to properties of the non-native tissue used to surgically reconstruct the aorta <cit.>. For most HLHS patients, reconstruction is performed with a homograft material that comprises at least 50% of the reconstructed vessel <cit.>. This homograft material differs significantly from the native aortic tissue, and remodeling over time generates tissue that is significantly stiffer than the native aorta <cit.>. On a related note, clinicians have discussed the utility of aortic stiffness in helping to determine when medical intervention is needed. The retrospective study by Hayama et al. <cit.> found that increased aortic stiffness in Fontan patients is correlated with exercise intolerance, vascular and ventricular remodeling, and heart failure. They postulated that surgical intervention and vasodilation/hypertension medication may help offset vascular remodeling <cit.>. §.§ Model predictions Hemodynamic predictions. 
Figure <ref> demonstrates that our parameter inference method generates model predictions that fit the data reasonably well. We found that the HLHS group has increased average systolic and pulse pressures, with the interquartile range (IQR) also being higher in the aortic and cerebral vasculature. Given the heterogeneous geometry present in the HLHS group and the small sample size, the latter is only evident in comparisons between patient pairs. Since HLHS requires surgical interventions over multiple years, we found that using matched DORV patients provided a clear and systematic way to understand the effects of surgical aortic reconstruction and subsequent remodeling on model-predicted hemodynamics. All HLHS patients are hypertensive <cit.> according to model predictions, despite three out of four HLHS patients receiving medication to reduce blood pressure. In particular, cerebral blood pressure was high. Decreased pulse flows are consistent with research that describes inadequate perfusion and oxygen transport to the brain in HLHS patients with reconstructed aortas <cit.>. Brain perfusion appears to be an important clinical endpoint, since it has recently been shown that HLHS patients have abnormal cerebral microstructure and delayed intrauterine brain growth <cit.>. With respect to vessel area deformation over a cardiac cycle, the aortic vessels in the HLHS patients (except for HLHS patient 2) deform less than the same vessels in the DORV patients (see supplement). Vessels with small deformations over a cardiac cycle tend to have increased stiffness and tend to appear in patients with larger aortic radii and abnormal wall properties <cit.>.

Wave intensity analysis. Figure <ref> shows that DORV patients have forward waves that are larger than their reflected waves, whereas HLHS patients have smaller forward waves and increased reflected waves. In HLHS patients, the ascending aorta has smaller forward waves and larger backward waves compared to the DORV group. The wave-reflection coefficients I_R (Figure <ref>) confirm these differences. For each matched patient pair, the HLHS patient has a larger I_R. The I_R values we found for DORV patients agree with those reported in the literature <cit.>. The results of our WIA for the ascending aorta of the HLHS group are consistent with a similar analysis by Schafer et al. <cit.>, who noted increased ratios of backward to forward waves in HLHS patients with reconstructed aortas. Increased backward waves in the ascending aorta are indicative of blood flowing backwards towards the heart, leading to an increase in afterload and a decrease in ventricular performance <cit.>. In particular, HLHS patient 2 has a significantly higher I_R in the aortic vessels compared to the other patients.

Wall shear stress. Except for DORV patient 3, WSS peak values in the DORV group decrease in the descending aorta compared to the ascending aorta. This finding is consistent with studies using CFD models to compute WSS in healthy patients <cit.>. The HLHS group has lower WSS values in the ascending aorta and aortic arch as compared to the DORV group. Reduced WSS can be indicative of hypertension and stiffening of the vessel wall. For example, Traub et al. <cit.> found that consistently low WSS values were correlated with upregulation of vasoconstrictive genes. This promoted smooth muscle cell growth, which led to a loss of vessel compliance. As for the larger WSS values in the descending aorta, Voges et al.
<cit.> found that the descending aorta, in HLHS patients with reconstructed aortas, began to dilate over time due to increased WSS. Notably, HLHS patient 2 has a large increase in WSS in the descending aorta. Of the HLHS group, this patient has the smallest inlet radius for the descending aorta. As the body responds and begins to remodel, this WSS could decrease as the inlet radius widens. WSS is an important clinical marker that cannot be measured in vivo. However, it can be determined from CFD models, as demonstrated by Loke et al. <cit.>. They used CFD modeling and surgeon input to develop a Fontan conduit that minimized power loss and shear stress, thereby improving flow from the gut to the pulmonary circuit. Many studies have focused on WSS in the Fontan conduit and pulmonary arteries. However, there is a lack of work devoted to studying WSS within the aorta and systemic arterial vasculature for single ventricle patients. §.§ Future work and limitations This study describes the construction of patient-specific, 1D CFD arterial network models that include the aorta and head/neck vessels for four DORV and four HLHS patients. A main contribution is the development of a parameter inference methodology that uses multiple datasets. To obtain reliable parameter inference results, we limited the number of vessels in the network. The small size of the network makes it challenging to predict cerebral and gut perfusion. A way to overcome this limitation is to add more vessels, e.g., to use the network defined in the study by Taylor-LaPole et al. <cit.>. Another limitation is that HLHS patients generally have abnormal aortic geometries due to surgical reconstruction. These geometries most likely cause energy losses as blood flows from the wider arch into the narrow descending aorta. Our model does not predict these energy losses. However, previous studies have included energy loss terms in 1D arterial network models <cit.>. This approach could be adapted for this study, but more work is needed to calibrate parameters required for these energy loss models. Calibration could be informed by the analysis of velocity patterns from 4D-MRI images or 3D fluid-structure interaction models. With regards to optimization, the current study minimizes the Euclidean distance between measurements and model predictions to infer the biophysical parameters of interest, ignoring the correlation structure of the measurement errors due to the temporal nature of the data. In future studies, we aim to capture the correlation by assuming a full covariance matrix for the errors by using Gaussian Processes <cit.>. We will also incorporate model mismatch to account for discrepancies between data and model predictions (Figure <ref>) caused by numerical errors or model assumptions by using the methodology presented in <cit.>. Finally, many modeling studies devoted to the Fontan circulation focus only on venous hemodynamics, in part because venous congestion impacts blood returning to the heart from the liver and likely contributes to the progression of FALD. In the future, a two-sided vessel network model could be incorporated into this framework that includes a description of both the arterial and venous vasculature, see e.g. <cit.>. There are limitations related to the clinical data that was used in this study. The 4D-MRI images are averaged over several cardiac cycles. This averaging and noise in the scans likely contributes to a lack of volume conservation in the flow data created from these images. 
We performed parameter inference using one pressure reading from one vessel. In the future, it would be helpful for parameter estimation to have multiple pressure readings from multiple vessels, e.g., from the arms and the ankles. Finally, five out of the nine patients received some form of hypertension medication. This is a factor that our model does not take into account.

§ CONCLUSIONS
This study defines a patient-specific 1D CFD model and a parameter inference methodology to calibrate the model to 4D-MRI velocity data and sphygmomanometer pressure data. Results from the parameter inference give insight into physiological properties such as vascular stiffness and downstream resistance. The reconstructed aortas in the HLHS patients were wider than the native aortas of the DORV patients, and parameter inference revealed that HLHS patients have increased vascular stiffness and downstream resistance. Model predictions showed that vessels in HLHS patients do not distend over a cardiac cycle as much as those in DORV patients, which is indicative of hypertension. WIA predicted increased backward waves in the ascending aortas of HLHS patients, suggesting abnormal blood flow. Results show decreased WSS in HLHS patients, indicative of hypertension and a precursor to remodeling. HLHS patient 2 in particular has the highest pressures, largest backward waves, and largest WSS of the HLHS patient group, indicating that this patient may be in need of additional clinical, possibly surgical, intervention. To our knowledge, this study is the first patient-specific 1D CFD model of the Fontan systemic arterial vasculature that is calibrated using multiple data sets from multiple patients.

Software and code for the fluids model and optimization can be found at github.com/msolufse. AMT designed the study, performed all analyses and hemodynamic simulations, and drafted the manuscript. DL performed image registration. DL, JDW, and CP provided all patient data, advised its use, and contributed to writing and editing the manuscript. LMP provided parameter inference/statistical expertise and contributed to writing and editing the manuscript. MSO conceived and coordinated the study and helped write and edit the manuscript. We have no competing interests. This work was supported by the National Science Foundation (grant numbers DGE-2137100, DMS-2051010). Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. Work carried out by LMP was funded by EPSRC, grant reference number EP/T017899/1. We thank Yaqi Li for segmenting the patient geometries for the image registration.
http://arxiv.org/abs/2406.18613v1
20240625170701
Inducing Riesz and orthonormal bases in $L^2$ via composition operators
[ "Yahya Saleh", "Armin Iske" ]
math.FA
[ "math.FA", "cs.LG", "cs.NA", "math.NA", "47B33, 42C15" ]
Inducing Riesz and orthonormal bases in L^2 via composition operators Yahya Saleh, Armin Iske July 1, 2024 ======================================================================================

§ ABSTRACT
We investigate perturbations of orthonormal bases of L^2 via a composition operator C_h induced by a mapping h. We provide a comprehensive characterization of the mapping h required for the perturbed sequence to form an orthonormal or Riesz basis. Restricting our analysis to differentiable mappings, we reveal that all Riesz bases of the given form are induced by bi-Lipschitz mappings. In addition, we discuss implications of these results for approximation theory, highlighting the potential of using bijective neural networks to construct complete sequences with favorable approximation properties.

§ INTRODUCTION
Let Ω⊆ℝ^d be an open Lebesgue-measurable set and (γ_n)_n=0^∞ be an orthonormal basis of the space L^2(Ω) of real-valued functions. Suppose that h: Ω→Ω is a measurable mapping that induces a composition operator C_h: L^2(Ω) → L^2(Ω) defined by C_h f := f ∘ h for any f in L^2(Ω). In this work, we investigate completeness properties of perturbed sequences of the form (C_h γ_n)_n=0^∞. A composition operator C_h that maps L^2(Ω) into itself is necessarily bounded. Consequently, a sequence of the form (<ref>) is an orthonormal basis of L^2(Ω) or a Riesz basis of L^2(Ω) precisely when the composition operator C_h is unitary or bijective, respectively. A variety of works derived necessary and sufficient conditions for C_h to satisfy these various criteria, see, for example, <cit.>. However, due to the generality of the measure spaces considered in these works, the provided conditions were given in terms of conditions on the operator C_h and the σ-subalgebra induced by h. Moreover, such results were not applied to the study of sequences of the form (<ref>). We build upon several studies of composition operators on L^2(Ω) to characterize direct conditions on h under which the sequence (<ref>) forms an orthonormal or a Riesz basis of L^2(Ω). Specifically, we show that the sequence (C_h γ_n)_n=0^∞ is an orthonormal basis if and only if h is measure-preserving, see <Ref>. Similarly, we provide the precise conditions under which the perturbed sequence is a Riesz basis and derive its dual, see <Ref> and <Ref>. Motivated by the observation that h must be measure-preserving for (<ref>) to form an orthonormal basis, we further study perturbations of orthonormal bases via a weighted composition operator. Similarly, we derive here conditions for the induced sequence to form an orthonormal basis of L^2(Ω), see <Ref>. Given the potential importance of such induced sequences in optimization problems, we restrict our analysis to the case where h is a differentiable mapping. Our results demonstrate that all Riesz bases of the form (<ref>) are induced precisely by bi-Lipschitz mappings, see <Ref>. Finally, we provide an approximation-theory perspective on the applicability of our results. In particular, we argue that given a certain problem, a complete sequence of the form (<ref>) with favorable approximation properties can be constructed by choosing a proper inducing mapping h. The fact that Riesz bases induced via composition operators are characterized by bi-Lipschitz mappings suggests the use of various bijective neural networks to this end. We provide a numerical example to illustrate this idea.
Further, we link our results to recent studies, where bijective neural networks were used to enhance the expressivity of orthonormal bases for solving differential equations.

Organization. In <Ref> we collect some important results on the well-posedness of composition operators and introduce the definitions of different complete sequences in L^2(Ω). In <Ref> we derive necessary and sufficient conditions for the induced sequence to satisfy various completeness criteria. In <Ref> we restrict our analysis to the case where h is differentiable and show that Riesz bases of the form (<ref>) are induced precisely by bi-Lipschitz mappings. In <Ref> we provide an approximation-theory perspective on the applicability of our results.

§ NOTATION
In what follows we consider the measure space (Ω, ℬ, μ), where Ω⊆ℝ^d is open, d is a positive integer, ℬ is the Borel σ-algebra generated by Ω, and μ is the Lebesgue measure. We denote by ⟨·, ·⟩ the inner product of L^2. We omit the notational dependence of all function spaces on the domain Ω for simplicity. For any operator A: L^2 → L^2 we denote by ||A||_op its operator norm. Given a measurable mapping h: Ω→Ω we denote by h_#μ the push-forward measure induced by h, i.e., the measure given by h_#μ(B) = μ(h^-1(B)) for any B ∈ℬ. If this measure is absolutely continuous with respect to μ, we write h_#μ ≪ μ. In such a case we call h non-singular and denote by g_h the Radon-Nikodym derivative of h_#μ with respect to μ. We call the mapping h measure-preserving if h_#μ(B) = μ(B) for any B ∈ℬ. We say that the measurable mapping h is surjective if there exists a measurable mapping ζ: Ω→Ω such that h ∘ ζ = id_Ω almost everywhere, where id_Ω denotes the identity map on Ω. We say that h is injective if there exists a measurable mapping ζ: Ω→Ω such that ζ ∘ h = id_Ω almost everywhere. We say that h is bijective if it is both surjective and injective.

§ PRELIMINARIES
Any measurable mapping h that induces an absolutely continuous measure h_#μ induces a linear composition operator C_h from L^2 into the space of all measurable functions on Ω. However, for the completeness of the induced sequence (<ref>) in L^2, it is necessary that the composition operator maps L^2 into itself. The following result provides necessary and sufficient conditions on h to this end. A measurable mapping h: Ω→Ω induces a composition operator from L^2 into itself if and only if h_#μ ≪ μ and g_h is essentially bounded. In this case, ||C_h||_op = ||g_h||_L^∞^1/2. See <cit.>. The main question of this work is whether the induced sequence (ϕ_n)_n=0^∞ := (C_h γ_n)_n=0^∞ is complete in L^2, i.e., whether there exists for any f in L^2 a sequence of coefficients (c_n(f))_n=0^∞, c_n ∈ℝ for any n ∈ℕ, such that f = ∑_n=0^∞ c_n(f) ϕ_n. We discuss in our work two notions of completeness. First, that of orthonormal bases. Here, the coefficients are unique and given by the inner product of f with the basis functions. Second, that of Riesz bases. Here, the coefficients are also unique, but they are given by the inner product of f with another Riesz basis (ϕ̃_n)_n=0^∞ that is bi-orthogonal to the original Riesz basis, i.e., ⟨ϕ_n, ϕ̃_m⟩ = δ_nm for any n,m ∈ℕ. An important result for our analysis is the fact that these complete sequences can be characterized by operators acting on an orthonormal basis <cit.>. In particular, let (γ_n)_n=0^∞ be an orthonormal basis of L^2. Then * The Riesz bases of L^2 are precisely the sequences (U γ_n)_n=0^∞, where U:L^2→ L^2 is a bounded bijective operator.
* The orthonormal bases of L^2 are precisely the sequences (U γ_n)_n=0^∞, where U:L^2 → L^2 is a unitary operator. § PERTURBATION OF BASES VIA COMPOSITION AND WEIGHTED COMPOSITION OPERATORS We start by deriving necessary and sufficient conditions on h such that the induced sequence (<ref>) forms either a Riesz or an orthonormal basis of L^2. §.§ Induced Bases via Composition Operators We start by citing a very important result for our analysis, which only holds for standard Borel spaces, , measurable spaces that can be written as Polish spaces with their Borel σ-algebra. Consider the measure space (Ω, ℬ(Ω), μ) and let C_h be a composition operator from L^2 into itself. If C_h is invertible, its inverse is a composition operator from L^2 into itself. Observe that (Ω, ℬ(Ω), μ) is a standard Borel space. Therefore, the result follows from <cit.> and <cit.>. We now characterize conditions on h for the induced sequence (<ref>) to form a Riesz basis. Let (γ_n)_n=0^∞ be an orthonormal basis of L^2. Let h: Ω→Ω be a measurable mapping that induces a composition operator from L^2 into itself. The sequence (C_h γ_n)_n=0^∞ is a Riesz basis of L^2 if and only if h is injective and there exists r>0 such that r ≤ g_h almost everywhere. Let (C_h γ_n)_n=0^∞ be a Riesz basis of L^2. This implies that C_h is bijective. Denote its inverse by C_h^-1. Under our measurable space of interest, C_h^-1 is a composition operator from L^2 into itself, see <Ref>. It follows that there exists a mapping ζ such that C_h^-1 = C_ζ. Since C_h C_ζ f = f for any f in L^2, it follows that ζ∘ h = id_Ω almost everywhere. Therefore, h is injective. Similarly, one can see that h ∘ζ = id_Ω, , h is surjective as well. Hence, h is bijective with inverse h^-1 = ζ. For any B ∈ℬ denote by χ_B the characteristic function and observe that μ(B) = ∫_Ωχ_B dμ = ∫_Ωχ_B ∘ h ∘ h^-1 dμ = ∫_Ωχ_B ∘ h g_h^-1 dμ ≤g_h^-1_L^∞ h_#μ (B). Set r = 1/g_h^-1_L^∞. It straightforwardly follows that r ≤ g_h almost everywhere. Now suppose that h is injective and g_h ≥ r almost everywhere for some r >0. Since h is injective, it follows that C_h has a dense range in L^2 <cit.>. Together with the lower bound on g_h, this implies that C_h is surjective, see <cit.>. To see that C_h is one-to-one, we observe that for any f in L^2 and any B ∈ℬ we have h_#μ(B) = ∫_Ωχ_B g_h dμ ≥μ (B). It thus follows that μ≪ h_#μ. This immediately implies that C_h is one-to-one. Since C_h is also bounded we conclude that (C_h γ_n)_n=0^∞ is a Riesz basis. In comparison to other characterizations of the invertibility of a composition operator C_h <cit.>, our result provides a characterization in terms of direct conditions on the mapping h. This is, indeed, only possible because we restrict our analysis to a standard Borel space. Combining <Ref> and <Ref> demonstrates that all Riesz bases of the form (C_h γ_n)_n=0^∞ are induced by mappings h that satisfy the following. * h is a non-singular injective mapping. * there exist r, R >0 such that r ≤ g_h ≤ R almost everywhere. We turn now to deriving the expansion coefficients of a Riesz basis of the form (C_h γ_n)_n=0^∞. To this end we state the following result. Let h: Ω→Ω be a non-singular injective measurable mapping such that g_h ≥ r almost everywhere for some r >0. Then h is bijective. The hypothesis on h implies that C_h is injective, see proof of <Ref>. This, in turn, implies that h is surjective, see <cit.>. The previous result shows that any Riesz basis of the form (C_h γ_n)_n=0^∞ is induced by a bijective mapping h. 
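As an illustrative numerical check (ours, not part of the original text), consider Ω = (0,1) with the orthonormal basis γ_n(x) = √2 sin(nπx), n ≥ 1, and the injective mapping h(x) = (e^x−1)/(e−1). Here g_h is bounded above and away from zero, so the theorem predicts that (C_hγ_n) is a Riesz basis; the eigenvalues of a truncated Gram matrix should therefore remain within the essential range of g_h, namely [(e−1)/e, e−1].

```python
import numpy as np

# Omega = (0,1), gamma_n(x) = sqrt(2) sin(n*pi*x), warp h(x) = (exp(x)-1)/(e-1).
# The eigenvalues of the truncated Gram matrix of (gamma_n o h) give empirical
# Riesz bounds, which should lie in the essential range of g_h.
N = 30
x = np.linspace(0.0, 1.0, 20001)
h = (np.exp(x) - 1.0) / (np.e - 1.0)
Phi = np.sqrt(2.0) * np.sin(np.pi * np.outer(np.arange(1, N + 1), h))
w = np.full_like(x, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5    # trapezoid weights
G = (Phi * w) @ Phi.T                                          # Gram matrix
eig = np.linalg.eigvalsh(G)
print(eig.min(), eig.max())   # should lie within [(e-1)/e, e-1] ~ [0.63, 1.72]
```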
This allows us to derive the dual of a Riesz basis of the given form. Given a measurable function k, we define the multiplication operator M_k as M_k f(x) := k(x) f(x) for any function f in L^2 and any x in Ω. Let (γ_n)_n=0^∞ be an orthonormal basis of L^2 and let h be a measurable mapping such that (C_h γ_n)_n=0^∞ is a Riesz basis. Then (M_g_h^-1 C_h γ_n)_n=0^∞ is the bi-orthogonal dual of (C_h γ_n)_n=0^∞. To prove that the sequence (M_g_h^-1 C_h γ_n)_n=0^∞ is a Riesz basis it suffices to show that the operator M_g_h^-1C_h is a bounded bijection. Since C_h is well-posed from L^2 into itself, it is a bounded operator. Moreover, (C_h γ_n)_n=0^∞ being a Riesz basis implies that C_h is a bijection with inverse C_h^-1. This, in turn, implies that h^-1 is non-singular and g_h^-1 is bounded almost everywhere, see <Ref> and <Ref>. By <Ref> it holds that g_h^-1 is bounded away from zero almost everywhere. By combining these results we deduce that M_g_h^-1 is a bounded bijection. Therefore, M_g_h^-1 C_h is a bounded bijection. To see that (C_h γ_n)_n=0^∞ and (M_g_h^-1 C_h γ_n)_n=0^∞ are bi-orthogonal observe that for any n, m ∈ℕ we have ∫_Ω (γ_n ∘ h) (γ_m ∘ h) g_h^-1 dμ = ∫_Ω (γ_n ∘ h) (γ_m ∘ h) dh^-1_#μ = ∫_Ω (γ_n ∘ h ∘ h^-1) (γ_m ∘ h ∘ h^-1) dμ = δ_nm. Thus, it remains to show that the pair (C_h γ_n)_n=0^∞ and (M_g_h^-1 C_h γ_n)_n=0^∞ are dual. In other words, it remains to show that for any f in L^2 we have f = ∑_n=0^∞⟨ f, M_g_h^-1 C_h γ_n ⟩ C_h γ_n. Note that C_h^-1 f ∈ L^2 for any f ∈ L^2. Since (γ_n)_n=0^∞ is an orthonormal basis of L^2 it follows that C_h^-1 f = ∑_n=0^∞ ⟨ C_h^-1 f, γ_n ⟩ γ_n = ∑_n=0^∞ (∫_Ω f ∘ h^-1 γ_n d μ) γ_n = ∑_n=0^∞ (∫_Ω f γ_n ∘ h g_h^-1 d μ) γ_n = ∑_n=0^∞ ⟨ f, M_g_h^-1 C_h γ_n ⟩ γ_n. The result follows by applying C_h to both sides. We now turn to derive necessary and sufficient conditions on h for the induced sequence (<ref>) to form an orthonormal basis. Let (γ_n)_n=0^∞ be an orthonormal basis of L^2. Let h: Ω→Ω be a measurable mapping that induces a composition operator C_h from L^2 into itself. The sequence (C_h γ_n)_n=0^∞ is an orthonormal basis of L^2 if and only if h is measure-preserving. Let (C_h γ_n)_n=0^∞ be an orthonormal basis of L^2. It follows that C_h is a unitary operator. Hence, for any f,p ∈ L^2 we have ⟨ f, p ⟩ = ⟨ C_h f, C_h p ⟩ = ∫_Ω f p g_h dμ This implies that g_h=1 almost everywhere and hence h is measure-preserving. Now let h be measure-preserving. It follows that g_h=1 almost everywhere and hence h preserves the L^2 inner product. Using <cit.> we conclude that C_h is surjective. <Ref> demonstrates the impossibility of generating non-trivial orthonormal bases via composition operators. Essentially, the problem here is the difficulty of conserving the orthonormality of the underlying basis. To see this, note, , that for some m ∈ℕ, ∫_Ω (C_hγ_m)^2 dμ = ∫_Ωγ_m^2 g_h dμ. Requiring that this equals 1 indeed poses a severe restriction on h, namely that g_h=1 almost everywhere. In <Ref> we suggest remedying this by perturbation via a weighted composition operator. For the completeness of our results. we conclude this section by the following remark on the possibility of generating frames of the form (<ref>). Another important type of complete sequences in L^2 is that of frames. In contrast to orthonormal and Riesz bases, expansion coefficients of frames are not necessarily unique. One might wonder what the precise conditions on h are for the induced sequence (<ref>) to form a frame. 
It can be demonstrated that frames of the form (C_h γ_n)_n=0 ^∞ are induced precisely by non-singular mappings h that are injective and such that g_h is bounded away from zero almost everywhere on its support. Deriving a canonical dual frame for such a frame is, however, not as straightforward as in the case of Riesz bases of the form (<ref>), since h is not necessarily bijective. §.§ Induced Orthonormal Bases via a Weighted Composition Operator We saw in the previous section that requiring the induced sequence (<ref>) to form an orthonormal basis imposes the very restrictive condition that g_h = 1 almost everywhere. Therefore, to generate non-trivial orthonormal bases we consider perturbations via a weighted composition operator. Let h be a bijective mapping that induces a composition operator C_h from L^2 into itself. Further, assume that its inverse h^-1 is non-singular and g_h^-1_L^∞ < ∞. Under this assumption on h^-1, g_h^-1 induces a multiplication operator M_g_h^-1 from L^2 into itself defined by M_g_h^-1 f(x) := g_h^-1(x) f(x) for any f in L^2 and any x in Ω. Under this setting, we consider the weighted composition operator W_h defined by W_h f := (f ∘ h) · g^1/2_h^-1 = M_g^1/2_h^-1 C_h f for all f ∈ L^2. Note that this setting is very similar to the one of <Ref>, except that we now require h to be bijective. This is necessary to ensure that the weighted composition operator is well-defined. Further, note that W_h maps L^2 into itself. Given an orthonormal basis (γ_n)_n=0^∞ we study the completeness of perturbed sequences of the form (W_hγ_n)_n=0^∞. To the best of our knowledge, the completeness of (<ref>) was only studied in <cit.> under very smooth assumptions on h. Moreover, only sufficient conditions were derived. In the following we provide a more general analysis. Let (γ_n)_n=0^∞ be an orthonormal basis of L^2, and h be a bijective measurable mapping that induces the weighted composition operator (<ref>). Then (W_hγ_n)_n=0^∞ is an orthonormal basis of L^2 if and only if g_h ≥ r almost everywhere for some r>0. Let g_h ≥ r>0 almost everywhere for some r>0. It follows that C_h is invertible with inverse C_h^-1, see the proof of <Ref>. Consider the operator W_h^-1:L^2 → L^2 defined by W_h^-1 f := C_h^-1 M_g^-1/2_h^-1 f = (f.g^-1/2_h^-1)∘ h^-1. Observe that this operator is well-defined from L^2 into itself. It clearly holds that W_h^-1 W_h f = f for all f in L^2. Hence, W_h is bijective with inverse W_h^-1. Observe also that for any f,p ∈ L^2 we have that ∫_Ω (W_h f) · p dμ = ∫_Ω (f ∘ h) (p ∘ h^-1∘ h) g_h^-1^1/2 dμ = ∫_Ω (f ∘ h) (p ∘ h^-1∘ h) g_h^-1^1/2g_h^-1^1/2/g_h^-1^1/2 dμ = ∫_Ω (f ∘ h) (p ∘ h^-1∘ h) 1/g_h^-1^1/2 dh_#^-1μ = ∫_Ω f ((p g_h^-1^-1/2 )∘ h^-1) dμ = ∫_Ω f (W_h^-1 p) dμ. Therefore, W_h^-1 is the adjoint of W_h. Now let (W_h γ_n)_n=0^∞ be an orthonormal basis of L^2. It follows that 1 = W_h^2_op = sup_f ∈ L^2, f_L^2=1∫_Ω (f ∘ h)^2 g_h^-1 dμ = sup_f ∈ L^2, f_L^2=1∫_Ω f^2 g_h^-1 g_h dμ Hence, g_h^-1 g_h = 1 almost everywhere. Since we required that g_h^-1∈ L^∞ in the definition (<ref>), we can set r = 1/g_h^-1_L^∞ and observe that g_h ≥ r almost everywhere. This concludes the proof. § INDUCED BASES VIA DIFFERENTIABLE MAPPINGS Complete sequences of the form (<ref>) are of interest in optimization problems, particularly those involving differential operators. To facilitate the use of such sequences in applications, we state in this section the necessary and sufficient conditions for completeness in L^2 when h is a differentiable mapping. 
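Before the formal statements of this section, the two results established above can be checked numerically in a simple differentiable setting. In the sketch below (our own verification, with the arbitrary choices Ω=(0,1), h(x)=(x+x^2)/2 and the shifted-Legendre basis) the density g_h^-1 appearing in the dual system and in W_h reduces to the derivative h', an identification that is made precise below; the single matrix computed there certifies both the bi-orthogonality of (γ_n∘h) with (γ_n∘h)·g_h^-1 and the orthonormality of W_hγ_n = (γ_n∘h)·(g_h^-1)^1/2.

# A sketch (our own check, with our own choice of h): on Omega = (0,1) take the
# orthonormal shifted-Legendre basis gamma_n and the bijective bi-Lipschitz map
# h(x) = (x + x^2)/2, for which g_{h^{-1}} coincides with h'.  Then
#   M_{nm} = int gamma_n(h) gamma_m(h) h' dx
# is at the same time the pairing of (gamma_n o h) with its claimed dual and the
# Gram matrix of W_h gamma_n = (gamma_n o h) (h')^{1/2}; it should be the identity.
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

N = 15
t, w = leggauss(400)
x, w = (t + 1) / 2, w / 2

def gamma(n, pts):
    c = np.zeros(n + 1); c[n] = 1.0
    return np.sqrt(2 * n + 1) * legval(2 * pts - 1, c)

h = (x + x**2) / 2
dh = (1 + 2 * x) / 2                      # h', i.e. g_{h^{-1}} for this map

Phi = np.stack([gamma(n, h) for n in range(N)])
M = (Phi * (w * dh)) @ Phi.T

print("max deviation from the identity:", np.abs(M - np.eye(N)).max())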
Under the conditions of <Ref> it immediately follows that h_#μ is locally finite. Hence, assuming that h is everywhere differentiable, it follows by the Lebesgue differentiation theorem <cit.> that g_h = 1/|J_h| almost everywhere, where J_h denotes the Jacobian of h. Using the inverse function theorem, we have 1/|J_h| = |J_h^-1| almost everywhere. In this case, the weighted composition operator W_h, defined in (<ref>), can be written as W_h f = (f ∘ h) |J_h|^1/2. Based on the gained regularity we can now state the following. Let (γ_n)_n=0^∞ be an orthonormal basis of L^2. Let h: Ω→Ω be an everywhere differentiable mapping that induces a composition operator from L^2 into the space of all measurable mappings. Then (C_h γ_n)_n=0^∞ is a Riesz basis of L^2 and (W_h γ_n)_n=0^∞ is an orthonormal basis of L^2 if and only if r ≤ |J_h| ≤ R for some r, R >0. Note that when h is differentiable, the lower and upper bound on |J_h| imply that h is bijective. Therefore, the result follows immediately from <Ref>, <Ref> and the application of the inverse function theorem. Observe that in the case where (C_h γ_n)_n=0^∞ is a Riesz basis, its dual is given by (M_|J_h| C_h γ_n)_n=0^∞ = (γ_n ∘ h · |J_h|)_n=0^∞. Also observe that r ≤ |J_h| ≤ R is equivalent to the condition that h is a (√(r), √(R))-bi-Lipschitz mapping, i.e., √(r) ‖x-y‖_2 ≤ ‖h(x) - h(y)‖_2 ≤ √(R) ‖x-y‖_2 for all x,y ∈Ω, where ‖·‖_2 denotes the Euclidean norm. Often, continuous differentiability is assumed to use the inverse function theorem. However, differentiability everywhere is enough, see <cit.> or "The inverse function theorem for everywhere differentiable maps" by Terence Tao (https://terrytao.wordpress.com/2011/09/12/the-inverse-function-theorem-for-everywhere-differentiable-maps/). In summary, we showed that when the inducing mapping h is differentiable, then Riesz bases of the form (<ref>) and orthonormal bases of the form (<ref>) are characterized by bi-Lipschitz mappings.
§ AN APPROXIMATION-THEORY PERSPECTIVE
The use of complete sequences for approximation problems is ubiquitous in applied and engineering sciences. Here, an unknown function f is approximated in the linear span of a truncated complete sequence (γ_n)_n=0^N-1 for some N ∈ℕ. The fact that the sequence is complete ensures that the approximation error converges to zero as N →∞. However, in practice, the accuracy of the approximation and the convergence rate highly depend on the choice of the complete sequence <cit.>. This dependence renders methods to choose an optimal complete sequence for a given problem highly desirable. We argue here that one can construct a problem-specific complete sequence by optimizing a perturbation to an orthonormal basis via a well-posed composition operator. The following example suggests that a perturbation induced by a function as simple as a shifting mapping can lead to a significant improvement in the approximation of a target function. [Approximating a Shifted Gaussian] Let d=1 and set (γ_n)_n=0^∞ to be the sequence of Hermite functions. These functions are given by γ_n(x) = a_n h_n(x) exp(-x^2/2), where h_n is the n-th Hermite polynomial and a_n is a normalization constant. Let f be a normalized Gaussian function centered around a point a, i.e., f(x) = √(2/√(π))exp(-(x-a)^2/2), for x ∈ℝ. Consider approximating f in the linear span of (γ_n)_n=0^N-1 for some N ∈ℕ. Hermite functions form an orthonormal basis of L^2(ℝ). Thus, approximating f in the linear span of Hermite functions is well-posed and the approximation error in L^2 converges to zero as N→∞.
However, the approximation error in L^2 becomes small for N ≫ (e/2) a^2 <cit.>, where e is Euler's number. In other words, to have a good approximation one requires a number of functions that depends nonlinearly on the center of the Gaussian a. Consider now the modified basis (C_h γ_n)_n=0^∞ where h(x) = x-a. Note that f = γ_0 ∘ h, i.e., one needs only one function of the sequence (C_h γ_n)_n=0^∞ to reproduce the target function exactly. Perturbing orthonormal bases via composition with linear mappings is a common practice in computational mathematics <cit.>. Such mappings are often chosen based on insight or formal analytical reasoning <cit.>. Perturbations via non-linear mappings are, however, more scarce in the literature, since it is harder to identify effective nonlinear mappings based on insight. However, given a class ℌ of adequate mappings, one can use numerical optimization techniques to choose an optimal mapping h^* ∈ℌ for a given problem, i.e., a mapping that minimizes a certain objective. In general, one can end up with a mapping h^* that leads to a sequence (C_h^*γ_n)_n=0^∞ that is not complete. Our results demonstrate that this can be avoided by setting ℌ to be a class of bi-Lipschitz mappings. While there are several ways to construct classes ℌ of bi-Lipschitz mappings, one popular approach is to use neural networks. Indeed, there exist many bi-Lipschitz neural-network architectures <cit.>, such as, e.g., invertible residual neural networks <cit.>. Here, the network h can be written as a composition of a sequence of M blocks, i.e., h = h_1 ∘ h_2 ∘…∘ h_M, where each block h_i is given by h_i(x) = x + NN(x), and NN is a standard multi-layer perceptron. Requiring NN to be Lipschitz with a Lipschitz constant L<1 ensures that h_i is (1-L, 1+L)-bi-Lipschitz. This, in turn, implies that h is bi-Lipschitz. Such networks are often used to generate complex probability measures from simple ones and are called normalizing flows. Our results suggest that the same flows can be used to generate problem-specific complete sequences. The following numerical example illustrates the idea. [Numerical Evidence for Improved Approximation using Nonlinear Perturbations] Consider the univariate function f given by f(x) = sin(2 |x|) exp(-x^2/2), plotted in <Ref>. Clearly, the function f belongs to L^2(ℝ) and hence it can be approximated in the linear span of Hermite functions (γ_n)_n=0^N-1 for some N ∈ℕ. However, the convergence is not rapid due to the non-differentiability of f at x=0. Consider now approximating f in the linear span of (γ_n ∘ h)_n=0^N-1 where h is a bi-Lipschitz mapping that we modeled using a linear mapping composed with an invertible residual neural network of one block, i.e., h is given by h(x) = α (x + NN_θ(x)) + β, where NN_θ denotes a multi-layer perceptron of one layer composed of 8 hidden units that use nonlinear activation functions. Its parameters θ, along with the linear parameters α, β∈ℝ, were optimized to minimize the L^2-error in approximating f in the linear span of (γ_n ∘ h)_n=0^9. <Ref> shows the perturbing function h. Following our theoretical analysis, (γ_n ∘ h)_n=0^∞ is a basis of L^2 and its dual is given by (γ_n ∘ h · h')_n=0^∞. Using these results, we computed the expansion coefficients for multiple values of N and compared the error of the resulting approximation with the approximation using Hermite functions.
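To make the procedure concrete, the following self-contained sketch (our own code, not the authors') repeats the comparison for the target f above, but with a fixed, hand-picked bi-Lipschitz map h(x) = x + 2 tanh(3x) standing in for the trained one-block invertible network; the reported errors therefore need not match, or even improve on, the figures discussed below. The point is only to show how the expansion coefficients are obtained from the dual system (γ_n∘h)·h'.

# Sketch of the comparison described above; h below is a fixed hand-picked
# bi-Lipschitz map (h' ranges in (1, 7]), not the optimized network of the example.
import numpy as np

def hermite_functions(N, pts):
    # orthonormal Hermite functions gamma_0, ..., gamma_{N-1}, stable recurrence
    G = np.zeros((N, pts.size))
    G[0] = np.pi ** -0.25 * np.exp(-pts ** 2 / 2)
    if N > 1:
        G[1] = np.sqrt(2.0) * pts * G[0]
    for n in range(1, N - 1):
        G[n + 1] = np.sqrt(2.0 / (n + 1)) * pts * G[n] - np.sqrt(n / (n + 1)) * G[n - 1]
    return G

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
f = np.sin(2 * np.abs(x)) * np.exp(-x ** 2 / 2)     # target function of the example

h = x + 2.0 * np.tanh(3.0 * x)                      # hand-picked bi-Lipschitz map
dh = 1.0 + 6.0 / np.cosh(3.0 * x) ** 2              # its derivative h'

def truncation_error(basis, dual, N):
    # L2 error of the N-term expansion  f ~ sum_n <f, dual_n> basis_n
    coeffs = (dual[:N] * f).sum(axis=1) * dx
    approx = coeffs @ basis[:N]
    return np.sqrt(((f - approx) ** 2).sum() * dx)

G_plain = hermite_functions(40, x)                  # gamma_n
G_pert = hermite_functions(40, h)                   # gamma_n o h
for N in (5, 10, 20, 40):
    e_plain = truncation_error(G_plain, G_plain, N)      # orthonormal: dual = basis
    e_pert = truncation_error(G_pert, G_pert * dh, N)    # Riesz: dual = (gamma_n o h) h'
    print(f"N={N:2d}  Hermite error {e_plain:.3e}   perturbed error {e_pert:.3e}")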
<Ref> shows the convergence behavior of the approximation error in L^2 as a function of the number of basis functions N for both the Hermite basis and the perturbed basis. The results show that more accurate approximations can be obtained with our choice of the perturbing function h. <Ref> shows how the Hermite basis functions transform under the composition operator C_h. Clearly, the perturbed basis functions exhibit sharper behavior around the origin, at which the function f is non-differentiable. In summary, we showed in the previous examples that the approximation properties of Hermite functions can be improved by composing them with a suitable linear or nonlinear bi-Lipschitz function. This result is, indeed, not restricted to Hermite functions and holds for other bases. Further, we demonstrated that such bi-Lipschitz mappings can be constructed via normalizing flows, i.e., invertible neural networks. While the examples are rather simple, we note that a few recent works have already made important first steps towards more complex applications. Specifically, <cit.> and <cit.> utilized bi-Lipschitz normalizing flows to optimize non-linear perturbations to orthonormal bases of L^2 for solving differential equations. <cit.> reported several orders-of-magnitude increased accuracy upon optimizing a perturbation of the underlying basis. Our results serve as a theoretical framework for such applications and open up new avenues for the design and numerical analysis of problem-specific complete sequences.
§ ACKNOWLEDGMENTS
The authors would like to thank Eleonora Ficola and Stephan Wojtowytsch for useful discussions and constructive feedback.
http://arxiv.org/abs/2406.18305v1
20240626124543
S3: A Simple Strong Sample-effective Multimodal Dialog System
[ "Elisei Rykov", "Egor Malkershin", "Alexander Panchenko" ]
cs.CL
[ "cs.CL", "cs.AI" ]
E. Rykov et al. Skolkovo Institute of Science and Technology, Russia Artificial Intelligence Research Institute, Russia mailto:e.rykov@skol.tech{e.rykov, egor.malkershin, a.panchenko}@skol.tech S^3: A Simple Strong Sample-effective Multimodal Dialog System Elisei Rykov1 Egor Malkershin1 Alexander Panchenko1,2 July 1, 2024 ================================================================ § ABSTRACT In this work, we present a conceptually simple yet powerful baseline for multimodal dialog task, an S^3 model, that achieves near state-of-the-art results on two compelling leaderboards: MMMU and AI Journey Contest 2023. The system is based on a pre-trained large language model, pre-trained modality encoders for image and audio, and a trainable modality projector. The proposed effective data mixture for training such an architecture demonstrates that a multimodal model based on a strong language model and trained on a small amount of multimodal data can perform efficiently in the task of multimodal dialog. § INTRODUCTION In the dynamic landscape of artificial intelligence (AI), the advent of multimodal systems has marked a transformative shift, enabling machines to interpret and analyze heterogeneous data streams with unprecedented finesse. These systems, which seamlessly integrate multiple forms of data such as text, images, and audio, are becoming progressively adept at mirroring human cognitive capabilities. However, one of the principal challenges confronting researchers in this domain has been the necessity for considerable volumes of data and substantial computational resources to train state-of-the-art models. Against this backdrop, our study introduces a novel paradigm that posits that a powerful multimodal system is feasible with minimal data and computational resources. This paper presents a simple yet effective baseline model which challenges the conventional premise that large datasets and excessive computational power are prerequisites for developing competitive multimodal AI systems. By using a compact corpus of less than 150,000 multimodal samples, a pre-trained frozen modality encoder, a 7B language model, and by exploiting the computational economy of a single A100-80GB GPU, we have created a model with an elegantly simple architecture that delivers performance on par with the more complex systems that currently dominate the field. The core of our approach is a modality projector that uses a simple multi-layer perceptron (MLP) to map multimodal features into token embeddings. Our contributions can be summarized as follows: * We apply a well-known pipeline for training multimodal projectors to multiple modalities (image, audio, and text) to train a multimodal dialog model. * We introduce a high-quality effective data mixture for training multimodal dialog models. * We demonstrate that mapping of the whole image into 4 textual tokens is enough for multimodal dialog task. * We openly release the obtained model, which shows comparable performance to state-of-the-art models.[<https://github.com/s-nlp/s3>] § RELATED WORK In this section we consider relevant research on deep pre-trained models of various modalities (text, image, audio), the related multimodal dataset used for training such models, and the existing multimodal architectures based on both multimodal encoders and language models. §.§ Text Modality: Large Language Models (LLMs) State-of-the-art large language models are based on deep pre-trained transformers. 
Popular representatives of this kind are LLaMA <cit.> and Mistral <cit.>, which are used in our work. LLaMA was trained on a mixed dataset from different sources covering different domains. Compared to LLaMA, Mistral introduces some changes related to sliding window attention, pre-filling and chunking strategies, etc. §.§ Image Modality ImageBind <cit.> serves as a universal multimodal encoder, using a multi-layer architecture that facilitates the extraction of universal image features. One of the modern neural network encoders is CLIP <cit.>, which is based on a combined architecture of a convolutional neural network and a transformer. CLIP is trained on large datasets of images and text and is involved in contrastive tasks. The CLIP architecture enables the linking of visual and textual representations, providing an effective representation of both content and meaning. The effectiveness of CLIP allows the model to successfully classify distorted or context-dependent images and to integrate different knowledge domains into a unified space. §.§ Audio Modality Whisper <cit.>, an audio encoder, has received considerable attention in recent studies as an innovative encoder architecture. Unlike traditional encoders, Whisper is designed to produce compact and high-quality representations of input data. The architecture uses stacked autoencoders and introduces a pioneering training approach known as "WhisperNet". In addition, Imagebind can also be used to provide audio features. §.§ Datasets In the context of image captioning, datasets such as COCO <cit.> and TextCaps <cit.> have been created. The COCO dataset was created by collecting a large number of images with corresponding captions from the web, creating a diverse data set for training and evaluating image captioning models. The TextCaps dataset was created by extending the COCO dataset with additional descriptions obtained from contextual user queries. For the Visual Question Answering (VQA) task, VisDial <cit.> was created using images and a series of dialogs about them to train models to answer questions about the images provided. ScienceQA <cit.> was created by automatically extracting questions and answers from textbooks and tests on science topics. VizWiz <cit.> was created using a mobile application that allows blind and visually impaired people to ask questions about unfamiliar images. Visual Genome is a dataset of images, attributes and relationships between objects in those images, created with the help of annotators. The GQA <cit.> dataset was created using a powerful question engine based on the scene graph structures of Visual Genome, generating 22 million different questions about the visual world, with functional programs for answer control and bias smoothing. LLaVA dataset <cit.> is automatically compiled using GPT-4 with various prompts, including regular VQA dialogs, highly complex questions and more. For the optical character recognition (OCR) task, OCR_books was used. OCR_books contains images of text obtained after scanning books of various genres. For the task of audio captioning, one of the well-known datasets is CLOTHO <cit.>, a fully crowdsourced dataset with multiple captions for each audio file. The OpenAssistant dataset <cit.> is a large collection of conversational data collected through crowdsourcing with over 13,000 volunteers. 
§.§ Multimodal Models The FROMAGe <cit.> architecture is based on the idea of combining a pre-trained frozen LLM with an equally pre-trained and frozen visual image encoder through just one layer of linear projection. This linear layer, although having a small number of trainable parameters, plays a crucial role in establishing the connection between the image and text modalities. For LLaVA model <cit.> the authors employed GPT-4 to create a multimodal visual-text instructional dataset for training. LLaVA combines a LLM Vicuna and a visual encoder based on ViT-L/14 from CLIP, linked by a linear projection layer. Qwen-VL <cit.> demonstrates a similar approach to LLaVA with fine-tuning of the projector and multi-level training. HoneyBee <cit.> shows that classical approaches to building projectors from LLaVA could be improved by applying complex and effective resempling to reduce the number of modality tokens. CogVLM <cit.> introduces a so-called "visual expert" module: a copy of some transformer blocks inside the language model that activate and update their weights only on image tokens. § METHODS §.§ Dataset preprocessing We developed a model designed to engage in multimodal conversations with users. To achieve this, we formatted each dataset in a standard chat layout. This format involves representing each message as a JSON object containing 'role' (indicating whether the message is from a user or the bot), 'type' (indicating whether the message contains an image, audio, or text), and the message content itself (this would be the file path in case of images and audios). Our chat layout allows media content to be inserted at any point in the conversation, possibly multiple times. For each dataset, we created a custom system prompt that was tailored to elicit bot responses that closely matched the original dataset. For example, for the TextCaps dataset, we chose a prompt such as “Answer the question with a single word or phrase” to reflect the fact that the dataset contains primarily short responses. By implementing such prompts, we can guide the model to respond either concisely or with elaborate explanations, depending on the context. When adapting captioning datasets such as COCO or CLOTHO-Captions for conversational use, we formulated a set of basic synthetic questions, such as “What do you see in this picture?” or “What could make this sound?”. These artificial questions serve as prompts to the user about a particular image or sound. In our setup, the appearance of an image within a conversation is flexible. We randomised the order of questions and corresponding images within each dataset, allowing the media content to precede or follow the question. This randomness addresses the issue of brevity that is present in many datasets, which typically consist of single pairs of questions and answers. To create more extended dialogs and overcome this limitation, we randomly combined several short dialogs into extended sequences. We introduced a unique mixture of data specifically designed for our task, as detailed in Table <ref>. Our goal was to create a diverse collection, for which we included a range of tasks such as Optical Character Recognition (OCR), Visual Question Answering (VQA), Audio Question Captioning (AQA), image captioning, casual conversation, and more. In total, we trained the model on around 145,000 samples. 
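As a concrete illustration of this preprocessing (our own sketch; the key names, prompt strings and merging rule are guesses based on the description rather than the authors' exact implementation), a captioning sample can be wrapped into the chat layout as follows.

# A sketch of the chat layout described above; field names and prompts are assumptions.
import random

SYNTHETIC_QUESTIONS = {
    "image": ["What do you see in this picture?", "Describe this image."],
    "audio": ["What could make this sound?", "Describe this audio."],
}

def captioning_to_chat(media_path, caption, modality="image",
                       system_prompt="Give a short caption for the provided content."):
    # wrap one captioning sample (media file + caption) into the chat layout
    question = {"role": "user", "type": "text",
                "content": random.choice(SYNTHETIC_QUESTIONS[modality])}
    media = {"role": "user", "type": modality, "content": media_path}
    # the media item may precede or follow the synthetic question
    user_turn = [media, question] if random.random() < 0.5 else [question, media]
    answer = {"role": "bot", "type": "text", "content": caption}
    return {"system": system_prompt, "dialog": user_turn + [answer]}

def merge_dialogs(records):
    # several short dialogs can be concatenated into one extended conversation
    merged = {"system": records[0]["system"], "dialog": []}
    for r in records:
        merged["dialog"].extend(r["dialog"])
    return merged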
§.§ Special tokens and post processing
We integrated additional special tokens into the tokenizer of the base model and unfroze both the language model head and the embedding layer to facilitate the training of these new tokens. Specifically, we introduced paired modality tokens to mark the beginning and end of different modality objects in the data, and we encoded the different modality objects themselves using dedicated placeholder tokens for audio and image content, respectively. To represent speakers within the dialogs, we included special tokens for the bot and user roles. We also included a token to indicate the start and end of each message in a dialog. The modality tokens were specifically used to indicate the scope of modality objects. Consequently, each dialog in the dataset is processed into a single string containing these special tokens. In the data processing stage, an image processor handles all the images, which are then transformed into image embeddings via an encoder. Our multimodal dialog model is designed to accept embeddings as the primary form of input data. Therefore, in the preprocessing phase, we tokenize the processed chat data and retrieve the embeddings for each token. For the multimodal placeholder tokens, we replace them with the output embeddings of the corresponding image, which are divided into N segments. In our configuration, we chose to split the modality embeddings into four different tokens.
§.§ Model architecture
We used a widely accepted “shallow alignment” architecture, which consists of three main components: a basic Large Language Model, a modality encoder, and a modality projector. The role of the modality encoder is to generate image representations, which are then transformed into token embeddings by the modality projector, allowing the integration of visual information into the language model. The architecture of the whole multimodal dialog model is shown in Figure <ref>.
§.§ Pre-trained modality encoder
Modality encoders are designed to preprocess different types of modality objects, such as images and audio, and transform them into embeddings. These embeddings are then made compatible with the Large Language Model by a modality projector. To process audio inputs, we experimented with the ImageBind multimodal encoder, which can handle various modalities, including audio, image, and video. For image inputs, we used CLIP as the encoder of choice. Typically, image encoders such as CLIP aggregate the output of the processed modality object, usually by pooling, to produce a single, final embedding. However, for the purposes of our task, we hypothesized that using the individual patch embeddings would yield more advantageous results.
§.§ Modality projector
The role of the modality projector is to adjust the embeddings of various modality objects, such as images and audio, to ensure they are compatible with the language model. Its configuration can vary from a straightforward linear layer to a more intricate multilayer perceptron, among other options. In our research, we initially implemented a basic architectural design in which the hidden states from the modality encoder are mapped directly to the token embeddings of the language model using several linear layers. The complete architecture of our MLP modality projector is shown in Figure <ref>. This modality projector converts an image into only 4 token embeddings by extending the output of its final linear layer.
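A minimal sketch of such a projector is given below (our reconstruction in PyTorch; the hidden width, activation, pooled input and the concrete dimensions are assumptions, as the text only specifies a small MLP whose final projection is extended to produce several token embeddings).

# Sketch of the MLP modality projector: encoder features -> num_tokens LM token embeddings.
import torch
import torch.nn as nn

class ModalityProjector(nn.Module):
    def __init__(self, enc_dim: int, lm_dim: int, num_tokens: int = 4, hidden: int = 2048):
        super().__init__()
        self.num_tokens = num_tokens
        self.lm_dim = lm_dim
        self.mlp = nn.Sequential(
            nn.Linear(enc_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, num_tokens * lm_dim),   # "extended" final projection
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, enc_dim) features from the frozen modality encoder
        out = self.mlp(feats)                               # (batch, num_tokens * lm_dim)
        return out.view(-1, self.num_tokens, self.lm_dim)   # (batch, num_tokens, lm_dim)

# e.g. 1024-dimensional visual features into a 4096-dimensional LM embedding space
# (both dimensions are illustrative assumptions)
proj = ModalityProjector(enc_dim=1024, lm_dim=4096)
tokens = proj(torch.randn(2, 1024))          # -> torch.Size([2, 4, 4096])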
To illustrate, if the dimension of the token embeddings within the language model is 32, we can instruct the output projection of the modality projector to produce a 128-dimensional embedding, which we then divide into 4 separate token embeddings. We employed the same architectural design for the modality projector across both image and speech modalities. In contrast to state-of-the-art models such as LLaVA, we map the modality object into 4 tokens, regardless of the number of output patches within the modality encoder. We assume that the small number of output modality tokens is sufficient for basic visual understanding. Also, using only 4 tokens significantly reduces the length of the sequence we pass to Transformer. §.§ Language model We incorporated parameter-efficient fine-tuning LoRA adapters into our baseline language model to enhance its interaction capabilities in dialog contexts. Starting with basic pre-trained models such as LLaMA or Mistral, which do not inherently have the ability to participate in user dialogs, we postulated that the integration of a PEFT adapter would solve this problem. In the course of our experiments, we found that the Mistral-7b model served as an effective base model. By using PEFT adapters, we aimed not only to provide the language model with improved conversational capabilities, but also to do so without significantly increasing the number of trainable parameters; this efficient approach leverages the existing knowledge of the pre-trained model and complements it with targeted tunability for specific dialog-oriented tasks. §.§ Training details The entire architecture was trained using the CrossEntropyLoss function. To speed up the training process, we used DeepSpeed, especially at optimization level 2, which is designed to provide significant speedups by introducing optimizations that reduce memory usage and improve computational performance. For the training procedure, we created a custom training pipeline rooted in the HuggingFace trainer framework. Within this configuration, we used the AdamW optimizer. To regulate the learning rate, we used a cosine annealing scheduler, starting with an initial learning rate of 1e-4. The batch size was set to 128. All training was done on a single A100-80GB GPU. § RESULTS In this section, we thoughly test our model in two recent compeltive leaderboards namely AI Journey Content 2023[<https://dsworks.ru/en/champ/super-aintelligence#overview>] and MMMU <cit.> §.§ Experiment 1: AI Journey The first trial of our multimodal framework was tailored for the AI Journey Contest 2023. AI Jorney Contest is an annual competition related to ML and AI with cash prizes from large companies, an analog of the Kaggle platform. Our system was developed to solve the “Strong Intelligence” task in the AI Jorney Contest 2023. The goal of this contest was to develop a system with the ability to interact seamlessly across three modalities: text, image, and audio. For reasoning, we used the instruction “Answer questions thoroughly and in detail”, which was given to encourage the model to provide comprehensive answers with ample elaboration. Thus, an effective system should demonstrate adeptness in managing input across images, audio, and text. During the evaluation, all systems were constrained by the following inference container conditions: 243 GB RAM, 16 CPU cores, 1 Tesla A100 (80 GB) GPU, 3.5 hours to run, and no access to Internet resources. 
dataset: The evaluation phase challenged the system by presenting dialogs of heterogeneous composition: some consisted of only textual prompts, some combined text and images, some intertwined text and audio, and some even contained all three modalities. The dialogs presented to the system could involve contexts spanning more than two exchanges, allowing for the possibility of recurrence of image modality objects within a single dialog. Furthermore, in scenarios involving multiple modalities, there was potential for intermodality relationships, such as a user asking the bot to identify differences between two provided images. A notable detail of the contest setup was the absence of a specifically designated evaluation dataset, as provided by the contest organizers. Because the evaluation process was private, the evaluation data and the exact description of the other teams' submissions are not available. Metric: The evaluation of our multimodal system was based on two metrics: METEOR <cit.> and a Hidden Metric based on the perplexity of the model's answer. HM(X) = exp(1/t∑_i^tlogp_θ(x_i|x_<i)), where logp_θ(x_i|x_<i) is the likelihood of the ith token given all x < i tokens. The final Integral Metric: ∑_j=1^Jω_j/N_j∑_i=1^N_j METEOR_i + HM_i/2, where j is type of dialog, N_j is the number of dialogs for of type j, and ω_j is the weight of examples for the dialogs of the j type. For textual only dialogs, authors propose ω_j = 0.1. For dialogs with both text and images, ω_j = 0.2. For dialogs with audios and text, ω_j = 0.3. And, finally, for dialogs that consists of all three modalities, ω_j = 0.4. Result: In the AI Journey contest, our approach secured 4th place out of a total of 30 participating teams. The top 10 systems from the contest, along with detailed metrics, are presented in the table <ref>. The best-performing approach demonstrated, similar to our system setup, additionally included a small sample of self-generated supervised high-quality training data for the LoRA fine-tuning phase. Therefore, the author of the system freezes the language model at each stage of training, except the last one, using self-generated data. §.§ Experiment 2: MMMU The model we developed was then evaluated on the MMMU benchmark to determine its visual comprehension abilities and to compare it to existing models. When generating responses for the MMMU dataset, we instructed the model using the prompt “Answer with the letter of the option directly from the given choices” to ensure that the responses were concise and directly related to the given answer choices. Dataset: The MMMU benchmark serves as a comprehensive resource for evaluating the capabilities of multimodal dialog models. It includes a dataset of over 11,500 questions curated from college-level exams, quizzes, and textbooks spanning six academic disciplines: arts, business, science, health, humanities, and engineering. In total, the questions cover 30 subjects and delve into 183 specific subfields, incorporating a wide range of 30 different image types, including but not limited to graphs, diagrams, and chemical structures. Metric: The performance on the MMMU benchmark is quantified by calculating the average accuracy across the different types of tasks included in the benchmark. Result: Our system, which utilizes a low-count data mixture, surpassed many of the existing models in terms of performance. Across open-sourced 7B models, it demonstrates a competitive performance. 
It even closely competed with state-of-the-art models that were trained on significantly larger datasets and with larger LLMs, falling short only by a marginal difference. Our model's standing, along with comparisons to other models, can be found in the segment of the MMMU leaderboard shown in Table <ref>.
§.§ Conclusion
Our research demonstrates that it is possible to develop a highly competitive multimodal dialog model without the need for large datasets or enormous computing power. Using less than 150,000 multimodal samples and a single A100-80GB GPU, we constructed a system that performs comparably to state-of-the-art models in the field. In particular, our model features a simple architecture consisting of a modality projector that uses a simple multi-layer perceptron (MLP) to effectively integrate a substantial amount of information into token embeddings. Future work should focus on increasing dataset size and diversity, especially in the audio modality, as this could potentially lead to further performance gains. In addition, exploring the integration of more complex architectures for modality adaptation could also be beneficial for further enhancing its capabilities.
§ ACKNOWLEDGEMENTS
The research of Alexander Panchenko was supported by the Russian Science Foundation grant 20-71-10135.
http://arxiv.org/abs/2406.17734v1
20240625172123
Non-periodic not everywhere dense trajectories in triangular billiards
[ "Julia Slipantschuk", "Oscar F. Bandtlow", "Wolfram Just" ]
math.DS
[ "math.DS", "37C83, 37C79, 37B20, 37E30" ]
J. Slipantschuk]J. Slipantschuk J. Slipantschuk Department of Mathematics University of Warwick Coventry CV4 7AL UK. julia.slipantschuk@warwick.ac.uk O.F. Bandtlow]O.F. Bandtlow O.F. Bandtlow School of Mathematical Sciences Queen Mary University of London London E3 4NS UK. o.bandtlow@qmul.ac.uk W. Just]W. Just W. Just Institut für Mathematik Universität Rostock D-18057 Rostock Germany. wolfram.just@uni-rostock.de [2010]Primary 37C83; Secondary 37C79, 37B20, 37E30 Non-periodic not everywhere dense trajectories in triangular billiards [ June 13, 2024 ====================================================================== § ABSTRACT Building on tools that have been successfully used in the study of rational billiards, such as induced maps and interval exchange transformations, we provide a construction of a one-parameter family of isosceles triangles exhibiting non-periodic trajectories that are not everywhere dense. This provides, by elementary means, a definitive negative answer to a long-standing open question on the density of non-periodic trajectories in triangular billiards. § INTRODUCTION AND RESULTS Billiards, that is, the ballistic motion of a point particle in the plane with elastic collisions at the boundary, are among the simplest mechanical systems producing intricate dynamical features and thus serve as a paradigm in applied dynamical systems theory <cit.>. The seemingly trivial case of billiards with piecewise straight boundaries, known as polygonal billiards, offers surprisingly hard challenges <cit.>. When the inner angles of the polygon are rational multiples of π the billiard dynamics is dominated by a collection of conserved quantities and a rigorous and sophisticated machinery for its treatment becomes available, see, for example, <cit.> for overviews. Much less is known in the irrational case. A notable exception is the proof of ergodicity of Lebesgue measure for a topologically large class of irrational polygonal billiards <cit.>. It is however not clear what this result means for numerical simulations of the billiard dynamics, as numerical studies of polygonal billiards are inconclusive. Depending on the shape of the polygon, correlations in irrational billiards may or may not exhibit decay <cit.>, and even the ergodicity of Lebesgue measure has been questioned <cit.>. The relevance of symmetries has been emphasised as an explanation for this conundrum <cit.> and these predominantly numerical studies are underpinned by well established rigorous results on recurrence in polygonal billiards, see for instance <cit.>. In this article we shall be concerned with the simplest examples of polygonal billiards, namely those of triangular shape. In particular we shall revisit a hypothesis formulated by Zemlyakov, see <cit.>, according to which trajectories are either periodic or cover the billiard table densely. While <cit.> shows that this dichotomy does not hold in convex[For the simpler case of non-convex billiards, McMullen constructed an L-shaped example for which this dichotomy fails.] polygonal billiards with more than three sides, the proof is flawed for triangular billiards, as pointed out recently in <cit.>. Thus the existence of non-periodic and not everywhere dense trajectories in triangular billiards remains an open problem, see also <cit.> and references therein. We will fill this gap by constructing trajectories of this type for a large set of symmetric triangular billiards. 
For this purpose, similarly to <cit.>, we reduce this problem to the properties of an induced one-dimensional map, a technique more commonly used in the case of rational billiards. Leaving details of the notation for later sections, we will prove the following. Consider a billiard map in the isosceles triangle determined by inner angles (α,α,π-2α) with base angle α∈ (α_*,3π/10) for some α_* satisfying π/4<α_*<2π/7. Then there exist an angle ϕ_* ∈ (0, π) and an induced map on the base of the triangle {[k=1,ϕ_*,x]: x∈ [0,1]} which is a rotation on the unit interval x ↦ x+ω 1 with ω =cos(3α)/(2 cos(α)cos(4 α)). As the rotation number ω is continuous and strictly mononotic for the given range of α, this theorem implies that non-periodic not everywhere dense trajectories exist in the billiard dynamics of the isosceles triangle for all but a countable subset of α∈ (α_*,3π/10), providing a negative answer to the hypothesis by Zemlyakov. More precisely, we have the following corollary. For all α∈ (α_*,3π/10) with cos(5α)/cos(3 α) ∈ℝ∖ℚ, in particular for all α∈ (α_*,3π/10) with α/π algebraic and irrational, the billiard dynamics in the isosceles triangle contains trajectories which are non-periodic and not everywhere dense in the triangle. The main idea of the proof can be gleaned from the Zemlyakov-Katok unfolding of the billiard dynamics <cit.>. Unfolding the dynamics in a particular direction determined by an orbit starting and ending at a vertex (known as a generalised diagonal), it can be seen that the dynamics takes place in two recurrent cylinders, see Figure <ref>. This occurs, for instance, for an isosceles triangle with base angle α=π√(3)/6. For the geometrically minded reader we summarise the essence of the proof. The existence of the required cylinders can be verified by an explicit coordinatisation of the vertices in Figure <ref>. Moreover, symmetry considerations show that this cylinder configuration persists for all values α in a certain neighbourhood of π√(3)/6. As a result it is then possible to introduce an induced map (on the base of the triangle), which turns out to be a rotation with rotation number varying continuously with α, in particular taking irrational values for a full-measure subset of the admissible range of α values. The construction also reveals that the cylinders do not cover the whole interior of the triangle, thus yielding non-periodic and not everywhere dense trajectories, together forming a non-trivial flat strip in the sense of <cit.>. Most of this article is devoted to making this argument rigorous and completely explicit by an algebraic approach. The idea for the geometric construction depicted in Figure <ref> has been reported in <cit.> where anomalous dynamics and recurrence in triangular billiards has been studied by a combination of numerical computations and analytic arguments. The exposition contained in that reference provides compelling numerical evidence that the dynamics in a particular direction is governed by an irrational rotation map, but a rigorous proof for this observation has not been provided so far. We note that the proof of Corollary <ref> implies that the constructed billiard trajectories never visit a certain neighbourhood around a tip of the isosceles triangle. This neighbourhood can be replaced by a polygonal one, forming a convex n-gon for any n≥ 4, thus providing an alternative, accessible, and elementary proof of Theorem 1 in <cit.> on the existence of non-periodic and not everywhere dense billiard trajectories in convex n-gons. 
We would like to mention in passing, that the complementary question of characterising billiards satisfying the so-called “Veech dichotomy”, that is those with the property that each direction is either completely periodic or uniquely ergodic, has received significant attention in the literature in the case of rational billiards; see for example <cit.>. This article is organised as follows. After fixing notation in Section <ref>, the existence of the generalised diagonal will be established by Lemma <ref> in Section <ref>. We then turn to the existence and the properties of the two recurrent cylinders in Proposition <ref> in Section <ref> and Proposition <ref> in Section <ref>, respectively. The symmetry of the triangle will be instrumental in setting up these cylinders and Lemma <ref> of Section <ref> summarises the main impact of the symmetry. The proof of the main results follows standard arguments and will be presented in Section <ref>. Our construction works for a limited range of base angles α∈(α_*,3π/10). For base angles outside this range the particular generalised diagonal or the recurrent cylinders of period ten and four cease to exist. Nevertheless we suspect that the main conclusion of Corollary <ref>, the existence of non-periodic not everywhere dense orbits holds for almost all isosceles triangles. Analogous constructions can be performed for other angles, but a more systematic approach would be needed to cover the general case. For trivial reasons analogous statements hold in right-angled triangles. § NOTATION AND BILLIARD MAP Consider a triangle with positively oriented boundary. The sides are labelled by a cyclic index k=1,2,3. We denote by s_k the length of side k. The side with label k=1 is called the base. We chose units of length such that s_1=1. Denote by γ_2 and γ_3 the left and right inner angle on the base, respectively. The angle opposite to the base is denoted by γ_1. We shall focus exclusively on the case of isosceles triangles, that is, γ_2=γ_3=α. It readily follows that s_2=s_3=1/(2 cos(α)). The ballistic motion of a point particle with elastic bounces on the sides of the triangle traces out a planar curve consisting of straight line segments. We call this curve the trajectory. We denote by x_t^[k], 0<x_t^[k]<s_k, the location of the bounce of the particle (at discrete time t) at side k, and by ϕ_t^[k]∈ (0,π) the angle between the oriented side and the outgoing ray of the trajectory. We call a move counter-clockwise (ccw) if a bounce on side k is followed by a bounce on side k+1. Similarly we call a move clockwise (cw) if a bounce on side k is followed by a bounce on side k-1. Subsequent bounces are related by the billiard map (x_t^[k_t], ϕ_t^[k_t] )↦ (x_t+1^[k_t+1], ϕ_t+1^[k_t+1] ) where ϕ_t+1^[k_t+1] = {[ π-ϕ_t^[k_t] -γ_k_t-1 k_t+1=k_t+1; π-ϕ_t^[k_t] + γ_k_t+1 k_t+1=k_t-1 ]. x_t+1^[k_t+1] = {[ (s_k_t-x_t^[k_t]) sin(ϕ_t^[k_t])/sin(ϕ_t+1^[k_t+1]) k_t+1=k_t+1; s_k_t+1-x_t^[k_t]sin(ϕ_t^[k_t])/sin(ϕ_t+1^[k_t+1]) k_t+1=k_t-1 ]. As it will be useful to keep track of the sequence of bouncing sides, we use a slightly non-standard notation and call an orbit a finite or infinite sequence of triplets ([k_t,ϕ_t^[k_t],x_t^[k_t]])_t ∈ I which obeys the billiard map (<ref>). Each orbit corresponds to a trajectory in the plane, and vice versa. We call a point [k,ϕ^[k],x^[k]] singular, if it corresponds to one of the corners of the triangle, that is, if x^[k]=0 or x^[k]=s_k. We call an orbit regular if all its points are non-singular. 
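For readers who wish to experiment numerically, the recurrence (<ref>) can be implemented directly. The sketch below (our own code, not part of the paper) realises one step of the billiard map for the isosceles triangle; the rule deciding whether the next bounce is counter-clockwise or clockwise, namely comparing ϕ with the direction of the vertex opposite to side k as seen from x, is our addition, since (<ref>) only specifies the update once the next side is known.

# Sketch of one step of the billiard map (<ref>) in the isosceles triangle with
# base angle alpha; exact vertex hits are not handled.
import numpy as np

def triangle_data(alpha):
    s = {1: 1.0, 2: 1.0 / (2 * np.cos(alpha)), 3: 1.0 / (2 * np.cos(alpha))}
    gamma = {1: np.pi - 2 * alpha, 2: alpha, 3: alpha}
    return s, gamma

def billiard_step(k, phi, x, alpha):
    s, gamma = triangle_data(alpha)
    nxt = {1: 2, 2: 3, 3: 1}     # k+1 (cyclic)
    prv = {1: 3, 2: 1, 3: 2}     # k-1 (cyclic)
    # direction, seen from x on side k, of the vertex opposite to side k:
    # side k+1 is visible under angles in (0, theta_v), side k-1 under (theta_v, pi)
    vx = (s[k] - x) - s[nxt[k]] * np.cos(gamma[prv[k]])
    vy = s[nxt[k]] * np.sin(gamma[prv[k]])
    theta_v = np.arctan2(vy, vx)
    if phi < theta_v:                                  # counter-clockwise move
        k1 = nxt[k]
        phi1 = np.pi - phi - gamma[prv[k]]
        x1 = (s[k] - x) * np.sin(phi) / np.sin(phi1)
    else:                                              # clockwise move
        k1 = prv[k]
        phi1 = np.pi - phi + gamma[nxt[k]]
        x1 = s[k1] - x * np.sin(phi) / np.sin(phi1)
    return k1, phi1, x1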
An orbit which starts or ends at a singular point will be referred to as a singular orbit, while a billiard orbit which starts and ends at a singular point is called a generalised diagonal[When viewing the billiard flow as a flow on a translation surface, this is also referred to as a saddle connection.]. For a fixed side k of the triangle and a fixed angle ϕ, we will refer to the family of parallel trajectory segments reflecting from k at angle ϕ and returning to k with the same angle ϕ after a fixed sequence of bouncing sides as a recurrent cylinder. § PROOF OF RESULTS Our proof consists of several steps. We will first establish the existence of a suitable induction angle, such that an orbit emanating from the left endpoint of the base at this angle forms a generalised diagonal with a certain length-5 sequence of bouncing sides. We will then show that every orbit emanating from the base at this angle returns to the base with the same angle after a fixed number of bounces (either 10 or 4, depending on the initial point). The two corresponding sets of billiard trajectories will form two recurrent cylinders in the plane, crucially bounded away by a positive distance from one of the triangle's vertices. This construction will yield an induced map, forming an interval exchange transformation over two subintervals of the triangle base. The rotation number of this interval exchange transformation will depend continuously on the angle of the isosceles triangle, implying an irrational rotation and hence dense trajectories in the union of the recurrent cylinders for a large set of angles of the triangle. §.§ Induction angle We begin by proving several lemmas, which will be used to establish that for a suitable range of values of α there exists an angle ϕ_* (depending on α), such that the orbit emanating from the left endpoint of the base at angle ϕ_* forms a generalised diagonal. For α∈ (π/4, 3 π/10) the equation g(α)=sin(7 α)-sin(3 α)+ sin(α)=0 has a unique solution α_* ∈ (π/4, 2 π/7). We have that g(π/4)=sin(7π/4)<0 and g(3 π/10)=sin(3π/10)>0. Since 7 α∈ (7 π/4, 21π/10), 3α∈(3 π/4,9π/10) and α∈ (π/4, 3 π/10) it follows that g'(α)=7 cos(7 α)-3cos(3 α)+ cos(α)>0 . Existence and uniqueness of α_*∈ (π/4, 3 π/10) now follow from a variant of the intermediate value theorem. For the remaining assertion observe that g(2π/7)=sin(2π/7)-sin(6π/7)=sin(2π/7)-sin(π/7)>0. The angle α_* established by Lemma <ref> is the value of the base angle where the geometry shown in Figure <ref> starts to break down, since the generalised diagonal hits the top vertex of the triangle, as we will show shortly. Let α∈ [π/4,3 π/10]. The equation g(α,ϕ)=sin(6 α+ϕ)-sin(2 α+ϕ)+sin(ϕ)=0 has a unique solution ϕ=ϕ_*(α) in (0,π). We have g(α,0)=2 sin(2 α) cos(4 α)<0. Existence and uniqueness of the solution in (0,π) follow from the observation that g(α,ϕ) is a Fourier polynomial in ϕ containing only the two first order terms. For the range α∈(α_*,3π/10) of base angles, Lemma <ref> defines a direction ϕ_* of the directional billiard flow which determines the unfolding shown in Figure <ref>. This flow will be instrumental in proving our main result. Let α∈ (α_*, 3 π/10). The solution to (<ref>) obeys 0<ϕ_*<α, α+ϕ_* > π/2, 3 α+ϕ_*>π, 6 α+ϕ_*<2 π < 7 α+ ϕ_* . Using the substitution α= 3 π/10+x, ϕ_*= π/5+y with -π/20<x<0 (equivalent to α∈ (π/4,3π/10)), equation (<ref>) reads g̅(x,y)=sin(6x + y) + 2 sin(x+y) sin(3π/10+x)=0 . We have that g̅(x,-x)=sin(5 x)<0, g̅(x,-6x)=2 sin(-5x) sin(3π/10+x) >0 . 
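The quantities introduced so far are easy to evaluate numerically. The sketch below (our own check, reusing the billiard_step function from the previous sketch) computes α_* by bisection, computes ϕ_*(α) from the equation sin(6α+ϕ)-sin(2α+ϕ)+sin(ϕ)=0 solved in the next lemma, evaluates the rotation number ω of the Theorem for the arbitrary admissible choice α=0.9, and then verifies that the first return of [1,ϕ_*,x] to the base with angle ϕ_* is, up to rounding, the rotation x ↦ x+ω mod 1.

# Numerical check of alpha_*, phi_*(alpha), omega(alpha) and the induced rotation.
# Requires the billiard_step sketch above.
import numpy as np

def bisect(f, a, b, tol=1e-14):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

alpha_star = bisect(lambda a: np.sin(7 * a) - np.sin(3 * a) + np.sin(a),
                    np.pi / 4, 3 * np.pi / 10)
print("alpha_* =", alpha_star, " (should lie in (pi/4, 2*pi/7))")

alpha = 0.9                                   # any base angle in (alpha_*, 3*pi/10)
assert alpha_star < alpha < 3 * np.pi / 10
phi_star = bisect(lambda p: np.sin(6 * alpha + p) - np.sin(2 * alpha + p) + np.sin(p),
                  0.0, np.pi)
omega = np.cos(3 * alpha) / (2 * np.cos(alpha) * np.cos(4 * alpha))
print("phi_* =", phi_star, "  omega =", omega)

def induced_map(x, steps=64):
    # first return of [1, phi_*, x] to the base with angle phi_*
    k, phi = 1, phi_star
    for t in range(1, steps):
        k, phi, x = billiard_step(k, phi, x, alpha)
        if k == 1 and abs(phi - phi_star) < 1e-9:
            return x, t
    raise RuntimeError("no return found")

for x0 in (0.05, 0.35, 0.7):
    x1, t = induced_map(x0)
    print(f"x={x0:.2f} -> {x1:.6f} after {t} bounces;  x + omega mod 1 = {(x0 + omega) % 1:.6f}")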
It follows that -x<y<-6x with -π/20 < x < 0, and therefore α+ϕ_* = 3 π/10+π/5 + x +y > π/2, 6 α + ϕ_* = 9 π/5 + π/5 + 6x +y < 2 π, 3 α+ϕ_* = 9 π/10 +3 x + π/5 +y = π + x+y+π/10+2x > π, 7 α + ϕ_* = 6 α + α + ϕ_* > 3 π/2 + π/2 . Treating y in (<ref>) as a function of x, implicit differentiation yields 0 = dy/dx(cos(6x+y)+2 cos(x+y)sin(3π/10+x)) +6 cos(6x+y)+2 cos(x+y) sin(3π/10+x)+ 2 sin(x+y) cos(3π/10+x) . Since -3π/10<6x+y<0, 0<x+y<3π/10, and π/4<3π/10+x<3π/10, all trigonometric terms are positive and dy/dx<0. Hence the solution ϕ_*(α) is a strictly monotonic decreasing function for α∈ (π/4,3π/10). Since Lemma <ref> and <ref> imply ϕ_*(α_*)=α_* the final assertion follows. For the remainder of the paper we will refer to the value obtained in Lemma <ref> as α_*, and for α∈ (α_*, 3π/10) we will write ϕ_* = ϕ_*(α), omitting the dependence on the angle α where there is no risk of ambiguity. §.§ Generalised diagonal Next, we proceed to show the existence of a generalised diagonal starting from the left endpoint of the base at angle ϕ_*. For this, we will ascertain that the formal recurrence equations (<ref>) and (<ref>) are satisfied by a given sequence of bouncing sides, angles, and spatial coordinates, which therefore form a valid (that is, realisable) orbit. We define the sequence of bouncing sides (m_t)_0≤ t ≤ 5=(1,2,3,1,3,1) and introduce the sequence of angles [ ψ_0=ϕ_*, ψ_1=π-α-ϕ_*, ψ_2=-π+3α+ϕ_*,; ψ_3=2π-4 α-ϕ_*, ψ_4=-π+5α+ϕ_*, ψ_5=2π-6α-ϕ_* . ] It is straightforward to check that the angles (<ref>) together with (<ref>) satisfy the recurrence (<ref>). Furthermore, Lemma <ref> yields the following result. Let α∈ (α_*,3π/10). The angles defined by (<ref>) and (<ref>) obey 0<ψ_t<π, 0≤ t ≤ 5. Define, for δ∈ℝ, the spatial coordinates ξ_0(δ) = δ, ξ_1(δ) = (s_1-δ) sin(ψ_0)/sin(ψ_1), ξ_2(δ) = s_2 sin(ψ_1)/sin(ψ_2) -(s_1-δ)sin(ψ_0)/sin(ψ_2), ξ_3(δ) = s_3 sin(ψ_2)/sin(ψ_3)- s_2 sin(ψ_1)/sin(ψ_3) +(s_1-δ)sin(ψ_0)/sin(ψ_3), ξ_4(δ) = s_3-s_3 sin(ψ_2)/sin(ψ_4)+ s_2 sin(ψ_1)/sin(ψ_4) -(s_1-δ)sin(ψ_0)/sin(ψ_4), ξ_5(δ) = s_3 sin(ψ_2)/sin(ψ_5)- s_2 sin(ψ_1)/sin(ψ_5) +(s_1-δ)sin(ψ_0)/sin(ψ_5) . It is again straightforward to check that the expressions in (<ref>) together with (<ref>) and (<ref>) obey the formal recurrence in (<ref>). Let α∈ (α_*,3π/10). The coordinates defined in (<ref>), (<ref>), and (<ref>) satisfy ξ_0(0)=0, ξ_5(0)=1, and 0<ξ_t(0)<s_k_t, 1≤ t ≤ 4. The initial coordinate ξ_0(0)=0 is obvious. From Lemma <ref> we have 0 = 2 cos(α) (sin(6 α+ ϕ_*)-sin(2 α+ϕ_*) +sin(ϕ_*)) = 2 cos(α) sin(6 α+ ϕ_*) -sin(3 α + ϕ_*)-sin(α+ϕ_*) + 2 cos(α) sin(ϕ_*) . With (<ref>) and 1=s_1=2 cos(α) s_2/3 this reads sin(ψ_5) = s_3 sin(ψ_2)-s_2 sin(ψ_1)+ s_1 sin(ψ_0) , which implies ξ_5(0)=1. Using (<ref>) and Lemma <ref> we have ξ_4(0)=s_3-sin(ψ_5)/sin(ψ_4) <s_3 . Furthermore, by Lemma <ref> we have sin(ψ_4)-2 cos(α)sin(ψ_5) = sin(7 α + ϕ_*) >0 , which implies ξ_4(0)>0. Again using (<ref>) and Lemma <ref> we have ξ_3(0)=sin(ψ_5)/sin(ψ_3) > 0 , and by Lemma <ref> and 2α>π/2 we obtain sin(ψ_3)-sin(ψ_5)=2 sin(α) cos(5 α+ϕ_*)>0 , which implies ξ_3(0)<1=s_1. Lemma <ref> implies 0<sin(α-ϕ_*)=sin(α+ϕ_*)-2 cos(α)sin(ϕ_*) so that, using the abbreviations (<ref>) we have sin(ψ_0)<s_2 sin(ψ_1) . Hence ξ_2(0)>0. Furthermore, (<ref>) and Lemma <ref> yield 0<s_3 -s_2 sin(ψ_1)/sin(ψ_2)+s_1 sin(ψ_0)/sin(ψ_2) , which is equivalent to ξ_2(0)<s_3. Finally, ξ_1(0)>0 is obvious, and ξ_1(0)<s_2 follows from (<ref>). Lemma <ref> and <ref> now yield the following conclusion. Let α∈ (α_*,3π/10). 
With the definitions (<ref>), (<ref>), and (<ref>) the sequence ([m_t,ψ_t,ξ_t(0)])_0≤ t ≤ 5 defines a generalised diagonal. Lemma <ref> establishes the generalised diagonal, shown in Figure <ref> as a dashed yellow line, by purely algebraic means. The generalised diagonal determines the direction ϕ_* of the unfolding. If the base angle of the triangle, α, drops below the critical value α_* this connection ceases to exist. At α=α_* the generalised diagonal hits the top vertex of the first triangle in the unfolding, as can be gleaned from Figure <ref>. This geometric condition poses the major constraint on the existence of the generalised diagonal. §.§ Recurrent cylinder of length ten In this section we will establish the existence of a point x_D on the base of the triangle, such that all points in (0, x_D) ×{ϕ_*} share the same length-10 sequence of bouncing sides. Using a symmetry of the triangle, this sequence will be shown to consist of the length-5 sequence (<ref>), followed by a `mirrored' variant of the same sequence, in a sense made precise below. Moreover we will observe that the image of (0, x_D) ×{ϕ_*} under the 10th iteration of the billiard map is (1-x_D, 1) ×{ϕ_*}. The orbit of the point (x_D, ϕ_*) itself will be singular, giving rise to a discontinuity of the induced map on the base. We begin by defining x_D=1-sin(2 α+ϕ_*)/sin(ϕ_*) . Let α∈ (α_*, 3π/10). The quantity defined by (<ref>) obeys x_D ∈ (0,1) and x_D=sin(ψ_5)/sin(ψ_0)=1-cos(3 α)/2 cos(α) cos(4 α) . Lemma <ref> implies 2 α+ϕ_*<π so that x_D<1. Furthermore sin(2 α+ϕ_*)-sin(ϕ_*)=2 sin(α)cos(α+ϕ_*)<0 so that x_D>0. Using Lemma <ref> we have x_D=-sin(6 α+ϕ_*)/sin(ϕ_*)=sin(ψ_5)/sin(ψ_0) . Furthermore, (<ref>) yields (cos(6 α)-cos(2 α)+1) sin(ϕ_*) + (sin(6 α)-sin(2 α)) cos(ϕ_*)=0 so that sin(2 α + ϕ_*)/sin(ϕ_*) = -sin(2 α)cos(6 α)-cos(2 α)+1/sin(6 α)- sin(2 α)+cos(2 α) = sin(4 α)-sin(2 α)/sin(6 α)-sin(2 α)=cos(3 α)/2 cos(α) cos(4 α) . Let α∈ (α_*,3π/10). The coordinates defined in (<ref>), (<ref>), and (<ref>) obey 0<ξ_t(x_D)<s_m_t, t=0,1, ξ_2(x_D)=ξ_4(x_D)=s_3, and ξ_3(x_D)=ξ_5(x_D)=0. Since ξ_0(x_D)=x_D, Lemma <ref> yields the assertion for t=0. Using (<ref>) and Lemma <ref> we have ξ_5(x_D)=1-x_D sin(ψ_0)/sin(ψ_5) =0 . The assertions ξ_3(x_D)=0 and ξ_2(x_D)= ξ_4(x_D)=s_3, follow from the equalities ξ_3(δ)=ξ_5(δ) sin(ψ_5)/sin(ψ_3), ξ_2(δ)=s_3-ξ_5(δ) sin(ψ_5)/sin(ψ_2), and ξ_4(δ)=s_3-ξ_5(δ)sin(ψ_5)/sin(ψ_4). By Lemma <ref> we have sin(ψ_0)>sin(ψ_5), which implies ξ_1(x_D)=sin(ψ_0)/sin(ψ_1)-sin(ψ_5)/sin(ψ_1)>0 . Finally, using (<ref>) we obtain ξ_1(x_D)=sin(ψ_0)/sin(ψ_1)-sin(ψ_5)/sin(ψ_1) < sin(ψ_0)/sin(ψ_1) < s_2 . Since the angles and spatial coordinates defined in (<ref>), (<ref>), and (<ref>) obey the formal recurrence scheme determined by the billiard map (<ref>), Lemmas <ref> and <ref> yield the following result. Let α∈ (α_*,3π/10). Then the following holds. Given any δ∈ (0,x_D), the sequence ([m_t,ψ_t,ξ_t(δ)])_0≤ t ≤ 5 with components defined by (<ref>), (<ref>), and (<ref>) constitutes a regular orbit of the billiard map (<ref>). The symmetry of the triangle has implications for the structure of orbits. Reflecting an orbit at the symmetry axis of the triangle yields again an orbit. In formal terms, this type of reflection is expressed as [k,ϕ^[k],x^[k]] ↦ [k̅,π-ϕ^[k],s_k-x^[k]] where the adjoint index k̅ is given by 1̅=1, 2̅=3, 3̅=2. Similarly, reversing the motion gives again an orbit. In formal terms, the corresponding transformation reads [k,ϕ^[k],x^[k]] ↦ [k,π-ϕ^[k],x^[k]]. 
Combining both operations maps an orbit to another orbit. If ([k_t,ϕ_t^[k_t], x_t^[k_t]])_0≤ t ≤ T denotes a finite regular orbit in a symmetric triangular billiard then ([ℓ_t,φ_t^[ℓ_t], z_t^[ℓ_t]])_0≤ t ≤ T gives a finite regular orbit of the same length where ℓ_t=k̅_T-t, φ_t^[ℓ_t]=ϕ_T-t^[k_T-t] and z_t^[ℓ_t]=s_k_T-t - x_T-t^[k_T-t]. Here k̅ denotes the adjoint index defined by 1̅=1, 2̅=3, 3̅=2. We first note the identity k± 1=k̅∓ 1. The symmetry of the triangle is equivalent to γ_k=γ_k̅ and s_k=s_k̅. We consider a fixed time t, 0≤ t <T. Case A: Assume that the move T-t-1 → T-t in the original orbit is counter-clockwise, that is, k_T-t=k_T-t-1+1. Then ℓ_t=k̅_T-t= k̅_T-t-1-1=ℓ_t+1-1 (that is, the move t→ t+1 in the image orbit is counter-clockwise as well). Equation (<ref>) tells us that for the original angles we have ϕ_T-t^[k_T-t]= π- ϕ_T-t-1^[k_T-t-1]-γ_k_T-t-1-1 . Observing that γ_k_T-t-1-1= γ_k_T-t-1-1= γ_k̅_T-t-1+1 =γ_ℓ_t+1+1=γ_ℓ_t-1 , we have φ_t^[ℓ_t]= π-φ_t+1^[ℓ_t+1]-γ_ℓ_t-1 which is the angle dynamics of the billiard map for the image orbit. Similarly, (<ref>) implies for the spatial coordinates of the original orbit that x_T-t^[k_T-t]=(s_k_T-t-1-x_T-t-1^[k_T-t-1]) sin(ϕ_T-t-1^[k_T-t-1])/sin(ϕ_T-t^[k_T-t]) so that s_k_T-t-z_t^[ℓ_t]=z_t+1^[ℓ_t+1] sin(φ_t+1^[ℓ_t+1])/sin(φ_t^[ℓ_t]) . Recalling that s_k_T-t=s_k̅_T-t=s_ℓ_t we obtain the position dynamics of the billiard map for the image orbit. Case B: The proof in case the move T-t-1 → T-t in the original orbit is clockwise, that is, k_T-t=k_T-t-1-1, is similar. The symmetry allows us to extend the regular orbit derived in Lemma <ref> to a recurrent orbit with ϕ_0^[k_0]=ϕ_T^[k_T]. Let α∈ (α_*,3π/10). For any δ∈ (0,x_D) there exists a recurrent regular orbit of length 10 given by ([k_t, ϕ_t^[k_t], x_t^[k_t]])_0≤ t ≤ 10 with initial condition [k_0,ϕ_0^[k_0],x_0^[k_0]]=[1,ϕ_*,δ] and endpoint [k_10,ϕ_10^[k_10],x_10^[k_10]]=[1,ϕ_*,δ+1-x_D]. The explicit expression for the orbit is given by k_t=m_t, ϕ_t^[k_t]=ψ_t, and x_t^[k_t]=ξ_t(δ) for 0≤ t ≤ 5 and k_t=m̅_10-t, ϕ_t^[k_t]=ψ_10-t, and x_t^[k_t]=s_m̅_10-t-ξ_10-t(x_D-δ) for 6≤ t ≤ 10. Let δ∈ (0,x_D). Lemma <ref> provides us with the regular orbit of length 5, ([m_t,ψ_t,ξ_t(δ)])_0≤ t ≤ 5, with initial condition [1,ϕ_*,δ] and endpoint [1,ψ_5,ξ_5(δ)]. Replacing δ by x_D-δ, Lemma <ref> yields the regular length-5 orbit given by ([m_t,ψ_t,ξ_t(x_D-δ)])_0≤ t ≤ 5. Applying Lemma <ref> we obtain the regular orbit ([m̅_5-t,ψ_5-t,s_m̅_5-t-ξ_5-t(x_D-δ)])_0≤ t ≤ 5 with initial condition [1,ψ_5,s_1-ξ_5(x_D-δ)] and endpoint [1,ϕ_*,1-x_D+δ]. Recalling that (<ref>) and (<ref>) imply ξ_5(δ)=1-δsin(ψ_0)/sin(ψ_5) and using Lemma <ref>, we obtain s_1-ξ_5(x_D-δ)=ξ_5(δ). The assertions of the proposition follow by transitivity of orbits of the billiard map. Proposition <ref> establishes by purely algebraic means the recurrent cylinder of length 10 depicted in Figure <ref> as the red parallelogram. Due to the symmetry of the underlying triangle, the entire structure shown in Figure <ref> is point symmetric about the midpoint of this parallelogram. Hence the top and bottom side of the parallelogram are guaranteed to be parallel ensuring recurrence of the scattering angle of the billiard dynamics. The same symmetry ensures that the right side of the parallelogram also contains a generalised diagonal and links up with another periodic cylinder. The width of the parallelogram, that is, the distance between the two long sides is essentially given by (<ref>). 
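As a further purely numerical sanity check, the closed-form angles ψ_t and coordinates ξ_t(δ) can be evaluated directly. The sketch below assumes s_1 = 1 and s_2 = s_3 = 1/(2 cos α) for the isosceles triangle with base angle α, picks one admissible α and one δ ∈ (0, x_D) (both arbitrary), and confirms the boundary values of the generalised diagonal and the endpoint of the length-10 cylinder stated in the Proposition; it evaluates the explicit formulas only and does not re-derive the billiard-map recurrence.

from math import atan2, cos, pi, sin

alpha = 0.9                                   # any angle in (alpha_*, 3*pi/10); sample value
phi = atan2(sin(2 * alpha) - sin(6 * alpha), 1 - cos(2 * alpha) + cos(6 * alpha))  # phi_*(alpha)
s = {1: 1.0, 2: 1 / (2 * cos(alpha)), 3: 1 / (2 * cos(alpha))}                     # side lengths (assumption)

m = [1, 2, 3, 1, 3, 1]                        # bouncing sides (m_t)
psi = [phi, pi - alpha - phi, -pi + 3 * alpha + phi,
       2 * pi - 4 * alpha - phi, -pi + 5 * alpha + phi, 2 * pi - 6 * alpha - phi]

def xi(delta):
    # spatial coordinates xi_t(delta) exactly as defined above
    a = (s[1] - delta) * sin(psi[0])
    return [delta,
            a / sin(psi[1]),
            (s[2] * sin(psi[1]) - a) / sin(psi[2]),
            (s[3] * sin(psi[2]) - s[2] * sin(psi[1]) + a) / sin(psi[3]),
            s[3] - (s[3] * sin(psi[2]) - s[2] * sin(psi[1]) + a) / sin(psi[4]),
            (s[3] * sin(psi[2]) - s[2] * sin(psi[1]) + a) / sin(psi[5])]

x_D = 1 - sin(2 * alpha + phi) / sin(phi)

# generalised diagonal: xi_0(0) = 0, xi_5(0) = 1 = s_1, interior points strictly inside their sides
d0 = xi(0.0)
assert abs(d0[0]) < 1e-12 and abs(d0[5] - 1.0) < 1e-9
assert all(0 < d0[t] < s[m[t]] for t in range(1, 5))

# length-10 recurrent cylinder: for delta in (0, x_D) the orbit is regular and, after the
# mirrored second half, returns to the base with angle phi_* at position delta + 1 - x_D
bar = {1: 1, 2: 3, 3: 2}
delta = 0.4 * x_D
xs, xs_back = xi(delta), xi(x_D - delta)
assert all(0 < xs[t] < s[m[t]] + 1e-12 for t in range(6))
sides = m + [bar[m[10 - t]] for t in range(6, 11)]
coords = xs + [s[bar[m[10 - t]]] - xs_back[10 - t] for t in range(6, 11)]
angles = psi + [psi[10 - t] for t in range(6, 11)]
assert sides[10] == 1
assert abs(angles[10] - phi) < 1e-12
assert abs(coords[10] - (delta + 1 - x_D)) < 1e-12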
Both sides approach each other when increasing the base angle α, and the width of the parallelogram vanishes at the upper critical angle α=3π/10. Our algebra ensures that no further geometric obstruction, that is, no further vertex, appears within the cylinder of period 10, as can be gleaned from Figure <ref>. §.§ Recurrent cylinder of length four In an analogous way we can define a recurrent cylinder of length 4 for initial conditions x_0^[1] in (x_D,1). For that purpose we define (ℓ_t)_0≤ t ≤ 2 = (1,2,1), θ_0=ϕ_*, θ_1= π-α-ϕ_*, θ_2= 2 α+ϕ_*, η_0(δ) = δ, η_1(δ) = (s_1-δ) sin(θ_0)/sin(θ_1), η_2(δ) = s_1-(s_1-δ) sin(θ_0)/sin(θ_2) . Formally, the angles and spatial coordinates defined in (<ref>), (<ref>), and (<ref>) obey the recursion scheme of the billiard map (<ref>). In a similar vein to Lemma <ref> we have the following result. Let α∈ (α_*,3π/10). Then the following holds. Given any δ∈ (x_D,1), the sequence ([ℓ_t,θ_t,η_t(δ)])_0≤ t ≤ 2 with components defined by (<ref>), (<ref>), and (<ref>) constitutes a regular orbit of the billiard map (<ref>). By Lemma <ref> we have that 0<θ_t<π for 0≤ t ≤ 2. Fixing δ∈ (x_D,1), Lemma <ref> yields 0<η_0(δ)<1. We next observe that η_2(1)=1, while (<ref>) yields η_2(x_D)=0, and hence 0<η_2(δ)<1. Finally, we have η_1(1)=0; furthermore, since 2cos(α) sin(2α+ ϕ_*) - sin(α+ϕ_*) =sin(3α+ϕ_*)<0 by Lemma <ref>, we have η_1(x_D)=sin(2α+ϕ_*)/sin(α+ϕ_*)<1/2cos(α), and so 0<η_1(x_D)<s_2, which yields 0<η_1(δ)<s_2 for δ∈(x_D,1). Again employing the symmetry of the triangle yields the following result. Let α∈ (α_*,3π/10). For any δ∈ (x_D,1) there exists a recurrent regular orbit of length 4 given by ([k_t, ϕ_t^[k_t], x_t^[k_t]])_0≤ t ≤ 4 with initial condition [k_0,ϕ_0^[k_0],x_0^[k_0]]=[1,ϕ_*,δ] and endpoint [k_4,ϕ_4^[k_4],x_4^[k_4]]=[1,ϕ_*,δ-x_D]. The explicit expression for the orbit is given by k_t=ℓ_t, ϕ_t^[k_t]=θ_t, and x_t^[k_t]=η_t(δ) for 0≤ t ≤ 2 and k_t=ℓ̅_4-t, ϕ_t^[k_t]=θ_4-t, and x_t^[k_t]=s_ℓ̅_4-t-η_4-t(1+x_D-δ) for 3≤ t ≤ 4. Let δ∈ (x_D,1). Lemma <ref> provides us with the regular orbit of length 2, ([ℓ_t,θ_t,η_t(δ)])_0≤ t ≤ 2, with initial condition [1,ϕ_*,δ] and endpoint [1,θ_2,η_2(δ)]. Replacing δ by 1+x_D-δ, Lemma <ref> yields the regular length-2 orbit given by ([ℓ_t,θ_t,η_t(1+x_D-δ)])_0≤ t ≤ 2. Applying Lemma <ref> we obtain the regular orbit ([ℓ̅_s-t,θ_s-t,s_ℓ̅_2-t-η_2-t(1+x_D-δ)])_0≤ t ≤ 2 with initial condition [1,θ_2,s_1-η_2(1+x_D-δ)] and endpoint [1,ϕ_*,δ-x_D]. Recalling that (<ref>), (<ref>) and (<ref>) imply x_D=1-sin(θ_2)/sin(θ_0) and η_2(δ)=1-(1-δ) sin(θ_0)/sin(θ_2) we obtain s_1-η_2(1+x_D-δ)=η_2(δ). The assertions of the proposition follow by transitivity of orbits of the billiard map. §.§ Proof of the theorem and its corollary Propositions <ref> and <ref> constitute the proof of Theorem <ref> with the expression for the rotation number ω=1-x_D following readily from Lemma <ref>. The proof of the corollary will be based on the following lemma which summarises the findings in Lemma <ref> and <ref>. Let α∈ (α_*,3π/10). There exists ε>0 such that any infinite regular orbit ([k_t,ϕ_t^[k_t],x_t^[k_t])_t≥ 0 of the billiard map with initial condition ([k_0,ϕ_0^[k_0],x_0^[k_0]]=[1,ϕ_*,x_0^[1]] satisfies the conditions x_t^[k_t]≤ s_2-ε whenever k_t=2, and x_t^[k_t]≥ε whenever k_t=3. The spatial coordinate x_t^[k_t] does not take the values 0,1, or x_D if k_t=1 as those are singularities or are mapped to singularities, see Lemma <ref>. Furthermore, by Proposition <ref> and <ref> the orbit is recurrent. 
Hence it is sufficient to consider the 4- and 10-recurrent pieces of the orbit. Consider [k_0,ϕ_0^[k_0] ,x_0^[k_0]]=[1,ϕ_*,δ] with x_D<δ<1, that is, a piece of the orbit in a 4-recurrent cylinder. Since by (<ref>) and (<ref>) η_1(δ) ≤η_1(x_D) = sin(θ_2)/sin(θ_1) s_3-η_1(1+x_D-δ) ≥ s_3-η_1(x_D)= s_3- sin(θ_2)/sin(θ_1) we conclude from Proposition <ref> that for 0≤ t ≤ 4 we have x_t^[k_t]≤sin(θ_2)/sin(θ_1) whenever k_t=2 and x_t^[k_t]≥ s_3-sin(θ_2)/sin(θ_1) whenever k_t=3. Similarly, consider [k_0,ϕ_0^[k_0] ,x_0^[k_0]]=[1,ϕ_*,δ] with 0<δ<x_D, that is, a part of the orbit in a 10-recurrent cylinder. Then by (<ref>) and (<ref>) ξ_1(δ) ≤ξ_1(0) = sin(ψ_0)/sin(ψ_1) ξ_2(δ) ≥ξ_2(0)= s_3-sin(ψ_5)/sin(ψ_2) ξ_4(δ) ≥ξ_4(0) = s_3 -sin(ψ_5)/sin(ψ_4) s_2-ξ_4(x_D-δ) ≤ s_2-ξ_4(0) = sin(ψ_5)/sin(ψ_4) s_2 - ξ_2(x_D-δ) ≤ s_2-ξ_2(0)= sin(ψ_5)/sin(ψ_2) s_3-ξ_1(x_D-δ) ≥ s_3-ξ_1(0)=s_3-sin(ψ_0)/sin(ψ_1) . Hence we conclude from Proposition <ref> that for 0≤ t ≤ 10 we have x_t^[k_t]≤ε whenever k_t=2 and x_t^[k_t]≥ s_3-ε whenever k_t=3, where ε=min{s_2-sin(ψ_0)/sin(ψ_1), s_2-sin(ψ_5)/sin(ψ_2), s_2-sin(ψ_5/sin(ψ_4)} . Altogether, the claim of the lemma is valid with the choice ε= min{s_2-sin(θ_2)/sin(θ_1), s_2-sin(ψ_0)/sin(ψ_1), s_2-sin(ψ_5)/sin(ψ_2), s_2-sin(ψ_5)/sin(ψ_4)} . Lemma <ref> and <ref> ensure that ε>0. Since 2cos(α)cos(4 α)=cos(5 α)-cos(3 α) it follows that choosing α∈ (α_*,3π/10) such that cos(5 α)/cos(3 α) ∈ℝ∖ℚ ensures that the map in Theorem <ref> is an irrational rotation. In fact, the ratio cos(5 α)/cos(3 α) is continuous and strictly monotonic for α∈ (α_*,3π/10), so that apart from a countable set of α values we obtain an irrational rotation. For explicit examples of such angles we invoke the Gelfond–Schneider Theorem (see, for example, <cit.>), according to which for any algebraic a∈ℝ∖{0,1} and algebraic irrational b, the number a^b is transcendental. Thus, if α=πβ with β≠ 0 an algebraic irrational number, then cos(5 α)/cos(3 α) must be irrational, otherwise exp(iα)=exp(iπβ)=(-1)^β would be algebraic, which contradicts the Gelfond-Schneider Theorem. Hence, if α∈ (α_*,3π/10) with cos(5 α)/cos(3 α) ∈ℝ∖ℚ, it follows that Lebesgue almost all initial values x_0^[1] will give rise to a regular non-periodic orbit with initial condition [1,ϕ_*,x_0^[1]]. By Lemma <ref> the corresponding trajectory does not have bounces on the sides within a distance ϵ>0 of the tip of the triangle (when distance is measured along the bouncing side). Hence, the trajectory does not enter a small symmetric triangular region at the tip of the triangle and is thus not everywhere dense. A graphical illustration of this type of trajectory is shown in Figure <ref>. § ACKNOWLEDGEMENT The authors thank an anonymous reviewer for many constructive and insightful suggestions, which greatly improved the exposition of this article. § FUNDING AND COMPETING INTERESTS All authors gratefully acknowledge the support for the research presented in this article by the EPSRC grant EP/RO12008/1. J.S. was partly supported by the ERC-Advanced Grant 833802-Resonances and W.J. acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) SFB 1270/2 - 299150580. The authors have no competing interests to declare that are relevant to the content of this article. 10 ArCaGu_PRE97 R. Artuso, G. Casati, and I. Guarneri. Numerical study on ergodic properties of triangular billiards. Phys. Rev. E, 55:6384, 1997. BoTr_FM11 J. Bobok and S. Troubetzkoy. Does a billiard orbit determine its (polygonal) table? 
Fundamenta Mathematicae, 212:129, 2011. BuTu_2004 E.B. Burger and A. Tubbs. Making Transcendence Apparent. New York, Springer, 2004. CaPr_PRL99 G. Casati and T. Prosen. Mixing property of triangular billiards. Phys. Rev. Lett., 83:4729, 1999. ChMa:06 N. Chernov and R. Markarian. Chaotic Billiards. Mathematical Surveys and Monographs, vol. 127, American Mathematical Society, 2006. DaDiRuLa_AG18 Diana Davis, Kelsey DiPietro, Jenny Rustad, and Alexander St Laurent. Negative refraction and tiling billiards. Adv. Geom., 18:133, 2018. Galp_CMP83 G.A. Galperin. Non-periodic and not everywhere dense billiard trajectories in convex polygons and polyhedrons. Comm. Math. Phys., 91:187, 1983. Gutk_ETDS84 E. Gutkin. Billiards on almost integrable polyhedral surfaces. Ergod. Th. & Dynam. Sys., 4:569, 1984. Gutk_JSP96 E. Gutkin. Billiards in polygons: Survey of recent results. J. Stat. Phys., 83:7, 1996. Gutk_CHAOS12 E. Gutkin. Billiard dynamics: An updated survey with the emphasis on open problems. Chaos, 22:026116, 2012. KeSm_CMV00 R. Kenyon and J. Smillie. Billiards on rational-angled triangles. Comment. Math. Helv., 75:65, 2000. KeMaSm_AM86 S. Kerckhoff, H. Masur, and J. Smillie. Ergodicity of billiard flows and quadratic differentials. Ann. Math., 115:293, 1986. MaTa:02 H. Masur and S. Tabachnikov. Rational billiards and flat structures. In B. Hasselblatt and A. Katok, editors, Handbook of Dynamical Systems, Vol 1A, page 1015. North Holland, Amsterdam, 2002. Mcmu_IM06 C.T. McMullen. Teichmüller curves in genus two: torsion divisors and ratios of sines. Inv. Math., 165:651, 2006. Toka_CMP15 G.W. Tokarsky. Galperin's triangle example. Comm. Math. Phys., 335:1211, 2015. Trou_RCD04 S. Troubetzkoy. Recurrence and periodic billiard orbits in polygons. Regul. Chaotic Dyn., 9:1, 2004. Trou_AIF05 S. Troubezkoy. Periodic billiard orbits in right triangles. Ann. Inst. Four., 55:29, 2005. Veec_IM89 W.A. Veech. Teichmüller curves in moduli space, Eisenstein series and an application to triangular billiards. Inv. Math., 97:553, 1989. WaCaPr_PRE14 J. Wang, G. Casati, and T. Prosen. Nonergodicity and localization of invariant measure for two colliding masses. Phys. Rev. E, 89:042918, 2014. ZaSlBaJu_PRE22 K. Zahradova, J. Slipantschuk, O. F. Bandtlow, and W. Just. Impact of symmetry on ergodic properties of triangular billiards. Phys. Rev. E, 105:L012201, 2022. ZaSlBaJu_PD23 K. Zahradova, J. Slipantschuk, O.F. Bandtlow, and W. Just. Anomalous dynamics in symmetric triangular irrational billiards. Physica D, 445:133619, 2023. ZeKa_MNAS75 A. N. Zemlyakov and A. B. Katok. Topological transitivity of billiards in polygons. Math. Notes Acad. Sci. USSR, 18:760, 1975.
http://arxiv.org/abs/2406.19091v1
20240627111706
SubLock: Sub-Circuit Replacement based Input Dependent Key-based Logic Locking for Robust IP Protection
[ "Vijaypal Singh Rathor", "Munesh Singh", "Kshira Sagar Sahoo", "Saraju P. Mohanty" ]
cs.CR
[ "cs.CR" ]
Understanding the Security Benefits and Overheads of Emerging Industry Solutions to DRAM Read Disturbance Oğuzhan Canpolat A. Giray Yağlıkçı Geraldo F. Oliveira Ataberk Olgun Oğuz Ergin Onur Mutlu ETH Zürich TOBB University of Economics and Technology July 1, 2024 ================================================================================================================================================================================= § ABSTRACT Intellectual Property (IP) piracy, overbuilding, reverse engineering, and hardware Trojan are serious security concerns during integrated circuit (IC) development. Logic locking has proven to be a solid defence for mitigating these threats. The existing logic locking techniques are vulnerable to SAT-based attacks. However, several SAT-resistant logic locking methods are reported; they require significant overhead. This paper proposes a novel input dependent key-based logic locking (IDKLL) that effectively prevents SAT-based attacks with low overhead. We first introduce a novel idea of IDKLL, where a design is locked such that it functions correctly for all input patterns only when their corresponding valid key sequences are applied. In contrast to conventional logic locking, the proposed IDKLL method uses multiple key sequences (instead of a single key sequence) as a valid key that provides correct functionality for all inputs. Further, we propose a sub-circuit replacement based IDKLL approach called SubLock that locks the design by replacing the original sub-circuitry with the corresponding IDKLL based locked circuit to prevent SAT attack with low overhead. The experimental evaluation on ISCAS benchmarks shows that the proposed SubLock mitigates the SAT attack with high security and reduced overhead over the well-known existing methods. § INTRODUCTION Recent advancement in technology (i.e., artificial intelligence, Internet of Thing (IoT) and cyber-physical systems) encourages using high-functioning, automated and intelligent electronic devices, specifically IC, for various mission-critical applications such as financial, national defence, health care, transportation, and energy. Security is a critical concern in all these mission-critical applications. Due to the unaffordable cost of constructing and maintaining a foundry with advanced fabrication capabilities, semiconductor industries are becoming fabless. Further, critical time-to-market forces companies to use third-party intellectual property (IP) blocks to design an integrated circuit (IC). Due to these economic and timing constraints, outsourcing from third parties is unavoidable in IC development. The involvement of several untrusted third parties and people in IC development makes the IC vulnerable to various hardware-based attacks such as IP piracy, overbuilding, reverse engineering (RE), and hardware Trojan (HT) insertion <cit.>, <cit.>. These attacks pose serious security concerns during the IC life cycle as they can compromise the whole system's security. For example, the insertion of hardware Trojan not only deviates IC from its normal function but also can leak sensitive information during infield operation <cit.>. Further, as a result of these supply chain assaults, the IC industries lose billions of dollars every year <cit.>, <cit.>. To mitigate these attacks, different design-based defence techniques such as removing rare-triggered nets <cit.>, <cit.>, HT detection <cit.>, <cit.>, logic locking/obfuscation <cit.> <cit.>, <cit.>, logic camouflaging <cit.>, <cit.>, etc. 
are reported in the literature to mitigates these threats. In the last five years, logic locking has emerged as the most effective and prominent method to thwart these attacks during the IC life cycle. Logic locking locks/obfuscates the design functionality by embedding a secret key sequence during the IC development, as shown in Figure <ref>. The design provides correct functionality only when a valid secret key sequence is applied. The correct key sequence is stored in on-chip tamper-evident memory, which is inaccessible to the attacker <cit.>, <cit.>. The security of the logic locking techniques mainly depends on the secrecy of the key. Since the attacker always has access to the unlocked IC from the open market, he/she reveals the correct key by applying different attack mechanisms to the locked IC. In the last six years, boolean satisfiability (SAT) has emerged as the most effective attack that can compromise the security of a logic locking technique within a few minutes, even using a large key size. This attack iteratively eliminates the wrong keys by applying the distinguishing input patterns (DIPs) and identifying the correct key <cit.>, <cit.> . To mitigate the SAT attack, several SAT resilience logic locking techniques have been reported in the literature such as Anti-SAT block (ASB) <cit.>, <cit.>, SARLock <cit.>, stripped functionality logic locking (SFLL) <cit.>, cascaded locking (CAS-Lock) <cit.> and Strong Anti-SAT (SAS) <cit.>. These methods either vulnerable to other attacks such as removal <cit.>, <cit.>, AppSAT <cit.>, Bypass <cit.>, Functional Analysis (FALL) <cit.>, Identify flip signal (IFS) attack <cit.> or provide trade-off between security and effectiveness with large overhead. However, recently an input dependent key-based logic locking (IDKLL) method called GateLock to prevent the SAT attacks <cit.>. This method may not be vulnerable to structural analysis attacks due to using fixed structures for IDKLL gates. This paper proposes a new lightweight IDKLL based SAT resilient method that can effectively mitigate SAT and structural analysis attacks and provides robust IP protection. The contributions of this paper are presented in the next section. § CONTRIBUTIONS OF THIS PAPER This section first presents the problem being addressed in this paper, the proposed solution, followed by the novelty and significance of the proposed solution. §.§ Problem Addressed in this Paper The IC is vulnerable to hardware-based attacks, i.e., piracy, overbuilding, RE, HT, etc. Though several logic locking techniques attempt to mitigate these attacks, they are vulnerable to SAT-based attacks and require significant design overhead. Most of the existing SAT-resistant logic locking techniques are vulnerable to SAT variants such as App-SAT, Bypass, removal, and FALL and require high implementation costs. Further, the recent methods such as CAS-Lock and SAS also provide the trade-off between SAT and other attacks and require a large overhead. Further, to the best of our knowledge, none of the existing SAT-resistant logic locking techniques has focused on preventing SAT attacks instead of increasing SAT attack time or DIPs. This paper considers preventing SAT-based attacks completely with low overhead and providing effective protection during the IC life cycle. §.§ Solution Proposed in this Paper This paper proposes a novel lightweight sub-circuit replacement based input dependent key-based logic locking (IDKLL) technique to prevent SAT-based attacks. 
The proposed method uses multiple key sequences as the correct key to unlock the design. The IDKLL divides the input patterns into multiple sets and uses a different key sequence as a valid key for each set of inputs. This means a separate key sequence is used to unlock the circuit for a specific set of inputs. None of the key sequences exists out of total key space that can achieve correct functionality for all the input patterns. Therefore, the design functions correctly for all sets of inputs only when their respective correct key sequences are applied. Further, in the proposed methods, design functionality is locked by replacing the original sub-circuits with their corresponding IDKLL based locked circuits. Due to sub-circuit replacement, it is hard to apply structural analysis attacks. §.§ Novelty and Significance of the Solution Unlike existing techniques, the proposed IDKLL based technique utilizes multiple key sequences (instead of a single key sequence) as a valid key to lock/unlock the design. Design can provide correct functionality for all inputs only when their respective correct key sequence is applied. Instead of storing single key sequences in normal tamper-proof memory, the proposed IDKLL uses the LUT-based tamper-proof memory to store and retrieve the corresponding valid key sequences to the applied inputs. In the proposed method, the SAT attack eliminates all the key sequences as all the key sequences are individually incorrect. Thus SAT attack failed to identify the correct key sequences. The existing techniques suffer from a trade-off between SAT and effectiveness and other attacks and require a large overhead. In contrast, the proposed IDKLL provides high security against SAT attacks while providing effective protection against other attacks such as removal <cit.>, App-SAT <cit.>, and Bypass <cit.>. The quantitative security analysis of the proposed IDKLL shows that the attacker has to require more than brute force attempts to decipher the correct key sequences. Further, the experimental evaluation on ISCAS and ITC benchmarks shows that the proposed technique effectively prevents SAT-based attacks with low overhead over the recent SAT-resistant logic locking. The rest of the paper is organized as follows: Section <ref> analyses the existing SAT-resistant logic locking techniques. Section <ref> presents a novel concept of input dependent key-based logic locking to prevent SAT attacks. Further, Section <ref> presents a lightweight sub-circuit replacement based IDKLL method followed by its quantitative security analysis. The experimental evaluation and comparative analysis of our technique are presented in Section <ref>. The conclusion of the paper is provided in Section <ref>. § RELATED PRIOR WORKS The researchers have proposed various logic locking techniques which either insert the additional key gates like XOR/XNOR <cit.>, <cit.>, AND/OR <cit.>, MUX <cit.> or replaces the existing logic with corresponding new locked logic <cit.>, LUTs <cit.>. However, several attacks such as key-sensitization <cit.>, <cit.>, logic cone analysis <cit.>, hill climbing<cit.>, SAT <cit.>, <cit.> are reported to compromise the security above logic locking methods. Locking a design using an interference graph-based method and inserting an additional AES module in the locked design improves the security against these attacks <cit.>. However, the interference graph-based method is topology dependent and vulnerable to logic cone analysis attacks. 
Also, AES insertion causes a large design overhead <cit.>. A lightweight and topology independent key-gate replacement based logic locking is reported in <cit.> to neutralize the key-sensitization and logic cone analysis attacks. The SAT-resistant logic locking techniques such as SARLock <cit.>, Anti-SAT block <cit.>, and AND Tree insertion <cit.> are reported to mitigate the SAT attack with reduced overhead. The AND tree insertion-based approach is vulnerable to sensitization-guided SAT attack <cit.>. Whereas, the SARLock and Anti-SAT are vulnerable to removal <cit.>, <cit.>, App-SAT <cit.>, double DIP <cit.>, and Bypass <cit.> attacks. Moreover, the security of SARALock and Anti-SAT block can also be compromised using SAT-based Signature <cit.> and Bit-flipping <cit.> attacks. Similar to the removal attacks, these attacks also separate the SAT-resistant block from the locked circuit. Although, the security of the Anti-SAT block against removal attack is increased <cit.> by obfuscating it using LUT-based design withholding and wire entanglement approach <cit.>. The use of these LUT-based approaches for obfuscation introduces a large design overhead <cit.>. A lightweight Anti-SAT design and obfuscation technique is reported in <cit.> to reduce the design overhead and increase the security against removal attacks. This technique increases security with reduced design overhead by constructing the Anti-SAT block using existing circuitry. Besides the above, a tenacious and traceless logic locking called TTlock is reported in <cit.> where the SARLock is re-architected to mitigate the SAT and removal attacks simultaneously. The TTLock is an extension of SARLock where the original netlist is modified to protect a secret input pattern such that the output of the original and modified netlist differ only for one input pattern. Similar to SARLock and Ant-SAT, TTLock is also vulnerable to App-SAT <cit.>, double DIP <cit.> and Bypass <cit.> attacks. These attacks mainly compromise the security of the above SAT-resistant methods due to their low output corruptibility. Though it is reported that output corruptibility of Anti-SAT can be increased <cit.>, <cit.>, increasing the output computability may decrease the effectiveness of the logic locking against SAT attack <cit.>, <cit.>, <cit.>. In order to mitigate the App-SAT, Double DIP and Bypass attacks, an extension of TTLock called stripped functionality logic locking (SFLL) has also been proposed in <cit.>. The SFLL selects multiple protected input patterns to increase the output corruptibility and increase the security against App-SAT, Bypass and Double DIP attacks. However, increasing the number of protected patterns increases the corruptibility; it decreases SAT iterations and increases the required overhead <cit.>, <cit.>. Thus, SFLL creates the trade-off between security and effectiveness <cit.>. Besides, the SFLL is also proven vulnerable to Functional Analysis (FALL) attack as reported in <cit.>. An attack framework has been proposed in <cit.> that breaks the SFLL by exploiting structural traces left in the locked design. However, a CAS-Lock <cit.>, and strong Anti-SAT called SAS <cit.> based logic locking methods are proposed to mitigate the threats of App-SAT, Bypass and simultaneously ensure the effectiveness against SAT attack. The CAS-Lock basically adopts the merits of SARlock, and Anti-SAT uses cascaded key controlled AND/OR gates. But it is found that CAS-Lock is vulnerable to removal attacks. 
Therefore, the authors <cit.> have mirrored the CAS-Lock called M-CAS in the original design to mitigate the removal attacks at the increased design overhead. Although M-CAS increase the security against removal, the design becomes vulnerable to SAT attack. The authors in <cit.> have proposed IFS/Key-bit mapping & SAT called IFS-SAT/KBM-SAT that exploits the structural traces such as single point function and breaks the CAS and M-CAS locking. Another side, SAS also adopts the merits of Anti-SAT and SFLL, where it introduces the error in the design for the selected set of input minterms called critical minterms. Though SAS based locking achieves high effectiveness without compromising security against SAT attacks over SFLL, it increases design overhead significantly. Further, it will also require additional design overhead for obfuscating and integrating the SAS block with standard logic locking to thwart removal and other attacks. The summary of the different related SAT-resistant logic locking techniques and their identified limitations is presented in Table <ref>. It is also reported in the literature that SAS and SFLL provide low output corruptibility. Therefore, two methods called CORruption adaptable logic locking (CORALL) <cit.> and DisORC <cit.> are reported that increased the output corruptibility in comparison to SAS and SFLL techniques. In addition to SAT attacks, these methods also increase the security against structural or netlist analysis attacks. Besides these methods, the generalized Anti-SAT (G-Anti-SAT) <cit.> and SKG-Lock+ <cit.> are also reported, which significantly increase output corruption without compromising the security of logic locking against SAT and other related attacks. A ChaoLock method is reported in <cit.> that uses asymmetric chaotic Boolean gates and dummy inputs to increase the security against SAT attack. However, the G-Anti-SAT is susceptible to a vulnerability assessment tool called Valkyrie <cit.>, SigAttack <cit.> and generalized <cit.> attacks and SKG-Lock+ and ChaoLock incur around 10% area overhead in most of the ISCAS-85 benchmarks even with low size key. In addition to these techniques, some other SAT-resistant logic locking methods are proposed. In <cit.>, the authors have added the dummy cycles in the locked design to significantly increase the attacker's efforts while deciphering the correct key using SAT attack. Since cyclic logic locking failed against the CycSAT attack <cit.>, the unreachable states are added in <cit.> to improve the security of cyclic logic locking against the CycSAT attack. Recently, an extension of cyclic logic locking called LoopLock 2.0 <cit.> and LoopLock 3.0 <cit.> are proposed to enhance the security against the CycSAT and SAT attacks simultaneously. However, LoopLock 3.0 provides better security than LoopLock 2.0. All the previously proposed techniques primarily focus on increasing the number of SAT iterations rather than preventing them at the cost of significant overhead. This is because the traditional SAT attacks work in such a manner that it requires exponential time/iterations to break them. However, the analysis of the SAT attack presented in <cit.> shows that the SAT attack can defeat any logic locking method in linear time by using a different relation between keys using DIPs and their respective oracle responses. Hence, it is highly desirable to develop a method that can prevent the SAT attack instead of just enhancing security against SAT attacks. 
However, the input-dependent key-based logic locking (IDKLL) method called GateLock is reported in <cit.> that attempts to prevent the SAT attacks. Since this method uses IDKLL-based gates, thus it may be vulnerable to structural analysis attacks. Therefore, this paper proposes a novel sub-circuit replacement-based IDKLL method called SubLock to effectively prevent the SAT attack and its variants with less overhead and provide robust IP protection. Due to using sub-circuit replacement, the proposed method also provides effective security against structural analysis-based attacks compared to GateLock. The comparison of the efficacy of proposed and existing methods for mitigating different attacks is presented in Table <ref>. The next section discuss the concept of IDKLL in details and its use in logic locking followed by the proposed SubLock method. § IDKLL: INPUT DEPENDENT KEY-BASED LOGIC LOCKING In the previous logic locking techniques, the locked design provides the correct output for all the input patterns when a valid key sequence is applied. A key sequence is considered correct or valid if it gives the correct output for all the input patterns; otherwise, it is incorrect. Therefore, the designer stores only that valid key sequence in a tamper-proof memory to activate or unlock the design. The SAT-based attacks eliminate the incorrect key sequences and identify the correct key sequence by iteratively applying DIPs <cit.>, <cit.>. Since DIP is an input pattern that provides different outputs for two key sequences, SAT attack eliminates a key sequence that provides incorrect output. This process repeats until all the wrong keys are eliminated. The computation time of SAT attacks mainly depends on the number of iterations or DIPs required to eliminate the incorrect keys. Though the existing logic locking techniques increase SAT attack computation time, they fail to prevent it completely. Therefore, this paper introduces a novel idea of logic locking called input dependent key-based logic locking (IDKLL) that effectively prevents SAT-based attacks. §.§ Concept of IDKLL The proposed idea for preventing the SAT attack is to lock a design utilizing several key sequences (rather than a single key sequence) as the valid key so that the locked design only gives correct output for all input patterns when their respective valid key sequence is applied. In other words, each set of input patterns has a separate correct key sequence in the locked design. As a result, the inputs determine the correct key sequence for unlocking the design. It implies that a key sequence can only unlock the design for a single set of input patterns. A different key sequence must be used to unlock the functionality for the second set of input patterns. Consequently, other key sequences could be used to unlock the design functionality for the third, fourth, and other sets of inputs. Consider a circuit with n primary inputs and k key-inputs supposed to be locked. Suppose we employ m key sequences out of a total of 2^k to unlock the design. In that case, the 2^n input patterns must be separated into m sets so that the locked design gives accurate output for each input set when its corresponding valid key sequence (KS) is applied. If we choose key sequences KS_1, KS_2,..., KS_m sequences as valid for input patterns sets XS_1, XS_2,..., XS_m respectively, then the locked design produces correct output for XS_1, XS_2,..., XS_m only when KS_1, KS_2,..., KS_m key sequences are applied respectively. 
Moreover, there must not exist any key sequence that can unlock the design or provide correct output for all input patterns. The correct outputs for all the input patterns can only be generated when their corresponding key sequences are applied. §.§ Locking a Design using IDKLL To use IDKLL to lock the circuit, we make the original circuit's functionality dependent on key inputs such that circuit only produces correct output for all input patterns when their respective valid key sequence is applied. For example, consider a half-adder circuit as shown in Figure <ref>(a). We lock this half-adder circuit using input dependent key-based logic locking, as shown in Figure <ref>(c). To lock the functionality of this circuit, we divide the input patterns into two sets, i.e., XS_1={“00", “01"} and XS_2={“10", “11"}. The original circuit is locked by making the functionalities of its original outputs, i.e., sum (S) and/or carry (C), dependent on two key-inputs K_1 and K_2 according to the truth table as shown in Figure <ref>(b). Here, we use m=2 key sequences KS_1 = “01" and KS_2 = “10" as a valid key for the input sets XS_1 and XS_2 respectively. The locked functionality of circuit is represented with S_L and C_L, the correct output would be obtained for the input sets XS_1={“00", “01"} and XS_2={“10", “11"} only when the key sequences KS_1 = “01" and KS_2 = “10" will be applied respectively. It is also ensured that KS_1 and KS_2 only provide correct output for the specified set of inputs and must provide complementary or incorrect output for the other set of inputs. Otherwise, it may be possible that a single key sequence provides the correct output for all inputs. Therefore, we keep S_L as complementary of S (i.e., S_L = !S) in this example for the inputs {“0110", “0111", “1000", “1001"}. Since here we mainly lock the functionality of the sum (S) logic of the half adder circuit, we keep the C_L don't care (x) for all the other input patterns except {“0100", “0101", “1010", “1011"}. It is observed that the overhead for implementing IDKLL can be reduced when a design is locked by keeping the output value don't care for the invalid key sequences. Therefore, we also lock the circuit functionality (S_L and C_L) by keeping them don't care (x) for the remaining key-values (i.e., “00" and “11"). But due to optimization, the expression (obtained from the K-map) of S_L does not depend on all the inputs (i.e, A and B). Thus, to ensure the dependency of S_L on both inputs (A and B), we keep S_L `1'/`0' instead of don't care (x) for a few patterns, i.e., “0000" and “0010". However, a designer can also select different patterns to do the same. If the designer is not worried about the dependency, then he/she can keep don't care for the remaining key values. In case the locked functionality already depends on all inputs while retaining, don't care (x) for incorrect key-values. Then, we should keep don't cares (x) to produce an optimized locked design as illustrated in Figure <ref>. In this figure, the original circuit is locked using two key inputs K_1 and K_2, where m=2 key sequences, i.e., {“01", “10"} out of four key sequences, i.e., {“00", “01", “10", “11"} are selected to provide correct output for the sets of input patterns, i.e., {“000", “001", “010", “011"} and {“100", “101", “110", “111"} respectively. It is clear from this figure that the Boolean expression of the locked circuit already depends on all the inputs while keeping Y_L don't care for the reaming key sequences, i.e. “00" and “11". 
We also lock this circuit by keeping output bits `0'/`1' for the incorrect key sequences with different input combinations. Finally, we discovered that compared to all other locked circuits, the circuit locked by preserving the output value don't care for the rest of the key sequences always requires fewer number literals. However, keeping the wrong output (!Y) instead of don't care (x) for the remaining key sequences results in high output corruption. As a result, one significant advantage of our method is that it allows the designer to achieve desired output corruption, which may not be possible with existing logic locking solutions. §.§ Use of LUT based Tamper-Proof Memory for IDKLL In the prior logic locking works, a single key sequence was employed as a valid key to unlock the design for all the input patterns. As a result, the designer stores a single key sequence in the tamper-proof memory. However, the proposed logic locking method unlocks the design by using numerous key sequences as the valid key. Therefore, our method keeps all of these correct key sequences in a tamper-evident memory to unlock the design for all input patterns. The different key sequences are valid for different input patterns. Therefore, we employ LUT-based tamper-proof memory to extract the correct key sequence for the applied input pattern. Figure <ref>(a) and (b) presents the use of LUT based memory for the locked circuits presented in Figure <ref> and Figure <ref>, respectively. It is clear from Figure <ref>(a) that for the set of input patterns {“000", “001", “010", “011"} the correct key sequence, i.e., “01" can be easily fetched from the tamper-proof memory by applying any of these patterns at the selection line of the multiplexer. In the same manner, Figure <ref>(b) shows the extraction of the right key sequence for the applied input. However, it can be realized from Figure <ref>(a) and <ref>(b) that the key sequences, i.e. “01" and “10", are repeated numerous times in memory, which results in increased design cost. Therefore, to save the required memory, we should store unique, valid key sequences as shown in Figure <ref>(c). Moreover, in Figure <ref>(a) and<ref>(b), the value of the correct key for different patterns only changes (from “01" to “10") with input A and it is independent of the other inputs, i.e. B, C. Therefore, we remove these two inputs for optimized implementation of LUT based tamper-proof memory to store correct key sequences “01" and “10" as shown in Figure <ref>(c). However, one thing can also be observed from the above that using LUT based tamper-proof memory requires more overhead compared to the conventional normal tamper-proof memory. This is because the LUT based memory stores multiple key sequences valid for different input sets. Whereas the conventional normal memory stores only a single key sequence that is valid for all input patterns. Further, LUT based memory uses an additional multiplexer unit to fetch the respective valid key for different sets of inputs. Due to these reasons, the LUT based tamper-proof memory requires more overhead than the conventional normal tamper-proof memory. However, analysis of this is out of scope in this paper. This paper uses LUT based memory as one solution to store multiple key sequences and fetch respective valid key sequences on applying an input. One can also explore any other cost-effective solution to achieve the same. The next section presents the generalized structure of IDKLL and its integration with LUT based memory. 
§.§ Complete Generalized Structure of IDKLL after Integrating LUT based Memory The truth tables of the above-locked circuit, as shown in Figure <ref>(b) and Figure <ref>(b), exhibit don't care values for some outputs. However, the boolean expression or circuit obtained after solving the K-Map will always produce deterministic outputs. Therefore, we also present the truth table in Figure <ref>(a), which exhibits deterministic output for every input of the locked circuit/expression as shown in Figure <ref>(c). We observed from this table that the locked circuit also provides correct output for the inputs “11" and “10" on applying key sequences“00" and “11" respectively. Though these key sequences cannot produce the correct output for all patterns, they may be used in combination with other valid key sequences to achieve the correct output. For example, utilizing “00" and “11" key sequences (instead of “10") along with “01" (i.e., key set {“00", “01", “11"}) can also achieve correct output for all patterns i..e, “11", “10" and {“00", “01"} respectively. Hence, it cannot be ensured that only a single valid key set exists that provide correct output for a locked circuit. However, one can easily ensure that only a single valid set can lock/unlock the design at the cost of increased overhead. To do this, we can specify the complementary (i.e., wrong output) output instead of don't care (x) for all the remaining/unused key sequences (which are not part of valid key sequences). In this case, only valid key sequences will only provide correct output for the input patterns, and all the unused/remaining (invalid) key sequences will guaranteed provide incorrect output for the input patterns. However, the locked design requires a large overhead if we mention complementary/wrong output for all unused key sequences. Therefore, the proposed method use don't care (x) for the remaining key sequences to obtain the optimized design and ensure that no single key sequence exists that provides correct output for all input patterns by manually verifying the truth table of synthesized locked sub-circuits. Since none of the single key sequences generates the correct output, the attacker cannot apply SAT attack, or SAT attack will either return a single key that will not make it successful. In addition to the above, we cannot ensure in the proposed method that a returned key sequence does not belong to the valid key set because SAT attack always eliminates wrong key sequences and retain the key sequence, which provides the correct output for at least one pattern. However, if a key sequence provides correct output for a few patterns, then still SAT attack cannot be applied/worked on the proposed method even if returned key sequences provide correct output for a few patterns or belong to a valid key set. This is because, in our method, all key sequences are incorrect individually; SAT attack eliminates all key sequences and returns a single key sequence. Though in our method, the returned key sequence may belong to a set of correct key sequences or a set of incorrect/unused key sequences, it can never provide correct output for all patterns, which will make SAT unsuccessful. Therefore, in our method, we do not consider that the key returned by SAT attack must not belong to the valid key set. Finally, we combine the locked design with the LUT based tamper-proof memory to obtain the correct output. 
The integration of the locked circuit(shown in Figure <ref>(c)) with its LUT based tamper-proof memory is given in Figure <ref>. It is clear from this that correct output can be easily obtained for all patterns, as shown in Truth Table. Furthermore, Figure <ref> shows the generalized structure of LUT based tamper-proof memory and its integration for generating the input dependent key (or storing valid key sequences) for unlocking the design locked using the proposed IDKLL. Only the key sequences that produce correct output are considered valid and stored in this memory. In the proposed IDKLL, all the key sequences are individually invalid, and no key sequence exists that can provide the correct output for all the inputs. Therefore, the attacker will fail to determine the valid key sequences by applying DIPs in SAT-based attacks. This is because the application of a DIP produces different outputs for two valid key sequences. The SAT solver eliminates one correct key sequence, which provides incorrect output for that DIP. Similarly, the second DIP will eliminate the second key sequence. Finally, the SAT solver either eliminates all the key sequences or returns a single valid/invalid key sequence. The returned key sequence would be insufficient to obtain the correct output for all the inputs because the locked circuit cannot produce the correct output for all the inputs until all their respective correct key sequences are applied. The implementation of the proposed IDKLL and LUT-based tamper-proof memory for storing the valid key sequences for the small circuit is explained above. The concept of LUT based tamper-proof memory can also be easily represented at the block level, as shown in Figure <ref>. However, locking a small circuit using the proposed IDKLL is explained above. Locking a large design that exhibits many inputs/outputs using the above procedure (where we also make the output dependent on the key inputs) would be very complex and challenging. Further, it may also require significant overhead, which may be unaffordable sometimes. Therefore, we also propose a low-cost approach for locking a design using input dependent key-based logic locking in the next section. § SUB-LOCK: SUB-CIRCUIT REPLACEMENT BASED IDKLL The locking design by considering the whole functionality (discussed in Section <ref>) is very challenging and may require unaffordable design overhead. However, we can insert an additional locked circuit similar to Anti-SAT to lock the design functionality. The additional insertion may also incur a large overhead. Therefore, we propose an approach that locks the design by replacing the original sub-circuitry of the design with the corresponding new circuit that is locked using input dependent key-based logic locking. In the proposed approach, we select several sub-circuits in a design and then the selected circuits are replaced with their corresponding locked circuits, which are locked using proposed logic locking as discussed in Section <ref>. The example of locking a design by replacing its sub-circuit is shown in Figure <ref>. Here, an example circuit, as shown in Figure <ref>(a), is considered for locking by replacing its sub-circuit (Y = cd + de) with the corresponding circuit, which is locked using input dependent key-based logic locking. To achieve this, we first construct the lock version of the selected sub-circuit as shown in Figure <ref>(b) using proposed input dependent key-based logic locking. 
Finally, the original circuit is locked by replacing the selected sub-circuit (Y = cd + de) with its locked version as shown in Figure <ref>(c)). This figure presents the complete locked design, where the locked version of the originally selected sub-circuit is mentioned with the dotted box. In this locked circuit, if we analyzed the overhead in terms of gate count, then only three additional gates are required due to replacing the existing sub-circuitry. Hence, locking a design by replacing multiple sub-circuits with their corresponding locked circuits can significantly reduce the design overhead compared to the insertion-based logic locking technique, i.e., Anti-SAT <cit.>. The selection of sub-circuits for locking the design can be random or judicious. Though random selection can introduce some non-determinism while locking the design, it may fail to provide high output corruption. On the other hand, a judicious replacement can provide high output corruption. For example, selecting the sub-circuits whose output exhibits the highest fault impact point <cit.> for replacement can significantly increase output corruption. The designer can combine both the above approaches to achieve high security. It can also be observed in proposed logic locking that no additional special structure or key-gates (i.e., XOR/XNOR, MUX etc.) are inserted <cit.>, <cit.>. Hence, it would be very difficult to identify the replaced/locked circuitry in the design exactly. Moreover, the designer can also lock the design by inserting the additional locked circuits and by replacing the existing sub-circuits with their locked circuits, as reported in <cit.>. In this case, it is very difficult for the attacker to distinguish which part of the circuit is additionally inserted or which part is an existing part. The quantitative security analysis of the proposed logic locking is given in the next section. § SECURITY ANALYSIS OF PROPOSED TECHNIQUE It is clear from the above that the proposed input dependent key-based logic locking technique completely prevents SAT-based attacks. The attacker cannot apply the SAT-based attack to decipher the correct key from the circuit locked using the proposed input dependent key-based logic locking. Hence, the attacker has to employ only a brute force attack to extract the key. Therefore, in this subsection, we also quantitatively analyze and compare the security of the proposed technique against the brute force attack. Let us consider the above discussion as shown in Section <ref>, where a circuit with n primary inputs is locked by inserting k key-inputs. If this design is locked using standard logic locking techniques, the attacker has to apply all key sequences or brute force attempts to unlock the design <cit.>, <cit.> <cit.>. If the total key sequences or brute force attempts for k bit-length key are denoted with l, then l can be represented as l = 2^k Although, the SAT-based attacks can break these standard logic locking techniques within a few attempts <cit.>. The use of SAT-resistant logic locking techniques such as Anti-SAT <cit.>, <cit.> forces the attacker to apply at least λ attempts/iterations to extract the correct key sequence. Here, the value of λ for k bits key is represented with Eq. (<ref>) <cit.>. λ = 2^k/2 On the other hand, if we use m key key-sequences out of l to lock the entire design functionality using the proposed technique, then to know the correct key sequences and their order, the attacker has first to know the output corresponding to each key sequence. 
Next, he/she has to identify correct combinations of key sequences as well as their order of application as a secret key, such that it provides correct output for all input patterns. The number of attempts required to know the output of each key sequence would be equivalent to the brute force attempts, i.e, l =2^k as shown in Eq. <ref>. Though we utilize m key sequences as a correct key out of l, the attacker would not be aware of how many key sequences, which combination, and which order (i.e. which key sequence is used for which input pattern) they have used to lock design. Therefore, the attacker has to apply all possible permuted combinations of l key sequences to determine a correct combination and order of application of key sequences. If the total permuted combinations for l key sequences are denoted with PC, then PC can be represented as follows. PC = [l]1.l! + [l]2 .2! + [l]3 .3! + ..... + [l]l .l! The above equation can also be represented in the form of permutation as follows PC = [l]1 + [l]2 + [l]3 + ..... + [l]l Since, [l]r = l!/(l-r)! Thus, we can also write the Eq. <ref> as follows. PC = l!/(l-1)! + l!/(l-2)! + l!/(l-3)! + ..... + l!/(l-l)! PC = l! (1/(0)! + 1/(1)! + 1/(2)! + ..... + 1/(l-1)!) PC = l! ∑_r=1^l-11/(r)! However, the above equation can also be represented in other forms; this is out of scope in this paper. Thus, we compute the total attempts required by the attacker to decipher the correct key sequences by adding the attempts of finding the output, i.e., l (Eq. (<ref>)) with the total permuted combinations (Eq. (<ref>)) of key sequences as follows. Total_Attempts = l + PC Finally, the total number of brute force attempts required by the attacker to extract correct key sequences in the proposed technique can be computed by placing the value of l and PC from Eq. (<ref>) and Eq. (<ref>) into the Eq. (<ref>) as follows Total_Attempts = 2^k + (2^k)!∑_r=1^2^k-11/(r)! Now, it can be observed from Eq. (<ref>) and Eq. (<ref>) that existing logic locking techniques are only able to maintain 2^k brute force attempts without preventing SAT attack. They can provide security against SAT attacks while maintaining only 2^k/2 attempts/iterations. On the other hand, the proposed technique significantly increases the number of brute force attempts for the attacker to decipher the correct key over the existing techniques while preventing SAT attack, as shown in Eq. (<ref>). This equation basically presents the attack complexity for the entire locked circuit. The attacker may also attempt to identify the locked sub-circuits/sub-cones and break them individually. However, it will require knowing the original functionality of the replaced sub-circuit. But due to replacement, it will be approximately impossible for the attacker to know the original functionality of the replaced sub-circuit. Further, due to the use of multiple key sequences as valid for each sub-circuit, the attacker cannot apply SAT attack even on individual sub-circuits. Therefore, the attacker must apply brute force similar to Eq. (<ref>), which will require exponential complexity. Due to the replacement, if the attacker also tries to break the multiple sub-circuits simultaneously, he/she also cannot solve them simultaneously. Further, the output of one locked sub-circuit may also interfere with the output of another locked sub-circuit. Due to this and using multiple key sequences as the valid key, the attacker cannot even apply sensitization and any other attack to break the proposed IDKLL method. 
Hence, an attacker will always require exponential complexity (or brute force) to break the proposed method in every case. In the proposed method, the number of increased brute force can be computed by subtracting the Eq. (<ref>) from Eq. (<ref>) as shown below. Increased_Attempts = (2^k)!∑_r=1^2^k-11/(r)! It is clear from the above equation that the proposed input dependent key-based logic locking technique prevents SAT-based attacks and significantly increases the required number of brute force attempts over all the existing techniques. We also justify this claim with the experimental evaluation of the proposed technique, as presented in the next section. § EXPERIMENTAL RESULTS AND ANALYSIS This section first presents the experimental setup, followed by simulation results and comparative analysis. §.§ Experimental Setup We locked the ISCAS/ITC benchmark circuits by randomly replacing the 3-input and 4-input sub-circuities with their corresponding locked circuits which are locked using the proposed IDKLL method as discussed in Section <ref>. To analyze the effectiveness of the proposed technique, the varying number of keys (i.e., 16, 32, 64, 128) are embedded in the benchmarks, and different locked circuits are generated, namely Prop_16K, Prop_32K, Prop_64K and Prop_128K. The security of the proposed technique against SAT-based attacks is evaluated using the SAT attack tool as reported in <cit.>. Finally, to analyze the implementation overhead, the locked circuits are synthesized using Cadence RTL compiler with Nangate Open Cell Library 45nm <cit.>, and different design metrics (area, power, delay) are extracted. We compared our approach with the existing methods. §.§ SAT Attack Results and Analysis To validate the effectiveness of the proposed method against SAT attacks, the locked versions of the selected sub-circuits are also individually verified for functionality and security against SAT attacks. Afterwards, we locked the benchmark circuits by randomly replacing their sub-circuits with the corresponding IDKLL based locked versions. We have embedded the different number of keys (16, 32, 64, 128) in each benchmark while locking them. Finally, we evaluate the effectiveness of the proposed sub-circuit replacement based IDKLL method against SAT attack by applying SAT attack on the above-locked benchmarks using SAT solver tool <cit.>. As a result of the SAT solver, we found that the SAT attack cannot extract the correct key from the individual locked sub-circuits as well as from the locked benchmarks circuits. It either provides the “UNSAT Model" or returns a key that has been proven wrong on its verification. The screen-shots of the demonstration of SAT attack on the c7552 and c1908 benchmark circuits locked by inserting 16 keys are shown in Figure <ref>. SAT attack completely failed to identify the correct key even with applying the Partial-Break algorithm along with fault analysis <cit.>. It can be observed from this figure that the application of SAT attack on the c7552 locked benchmark provides the “UNSAT Model", which means it is unable to determine the correct key as shown in Figure <ref>(a). Although SAT attack returns a key for the c1908 locked benchmark (Figure <ref>(b)), the returned key is declared wrong or incorrect after verification using SAT solver program as shown in Figure <ref>(c). Apart from c7552 and c1908, we have also applied the SAT attack on other locked ISCAS and ITC benchmark circuits. 
We observed that the SAT attack completely fails to identify the correct key in any benchmark circuit locked using the proposed IDKLL. The SAT attack simulation results for the ISCAS benchmarks locked with 16 and 32 keys are shown in Table <ref>. While performing the SAT attack, we extracted three evaluation metrics, i.e., the number of SAT iterations, the SAT attack time (in seconds), and the status (whether the correct key was found or not), to validate the proposed IDKLL. It can be observed from this table that the SAT attack either fails to determine any key (i.e., “UNSAT Model") or provides a “wrong key" for the implementations of the proposed method. On the other hand, the SAT attack identifies the correct key for all the benchmark circuits locked using the existing Anti-SAT, CAS-Lock and other SAT-resistant methods in an average of 2^k iterations with 2k keys (here k = 8, 16). The SAT attack time and the number of SAT iterations are significantly lower for the proposed sub-circuit replacement based IDKLL because the execution of the SAT attack fails or terminates unsuccessfully. Therefore, brute force is the only option left to the attacker in our method. The comparison of the SAT/brute-force iterations/attempts required by the proposed SubLock and the existing methods is shown in Table <ref>, where m is the number of critical inputs. The SAT attack does not fail for the existing SAT-resistant methods; it runs until the worst-case time and returns the correct key afterwards. In contrast, the SAT attack can never identify the correct key in the proposed IDKLL method, regardless of the number of keys used for locking. Therefore, the proposed IDKLL method completely prevents the SAT attack. This is because the proposed technique uses multiple key sequences as the correct key; thus, the SAT attack either eliminates all the keys or is left with a single key sequence that can never be correct for all input patterns. The existing SAT-resistant methods, on the other hand, use a single key sequence as the correct key for all input patterns; hence, they fail to prevent the SAT attack. Further, we also analyzed the effectiveness of the proposed method against variants of the SAT attack such as AppSAT, removal, and IFS attacks. Since the proposed method is not based on a single-point function and can provide the desired output corruption, applying these attacks to our method is practically infeasible. The proposed sub-circuit replacement based IDKLL method is therefore effective in mitigating all the known attacks. The overhead analysis of SubLock is presented in the next subsection. §.§ Overhead Results and Analysis To analyze the implementation overhead of our technique, we implemented the proposed method by embedding varying numbers of keys in the ISCAS and ITC benchmarks. The original as well as the locked benchmarks were synthesized using the Cadence RTL compiler, and different design metrics such as area, power and delay were extracted. The synthesis shows that the proposed sub-circuit replacement based IDKLL (SubLock) requires a significantly large overhead for small benchmarks (ISCAS-85), even with a small key size. The average percentage overheads for area, power and delay required by the proposed SubLock method for 16 (Prop_16K) and 32 (Prop_32K) keys are presented in Figure <ref>. Here, it can be seen that the proposed SubLock requires 16%, 9.3% and 5% area, power and delay overhead, respectively, for embedding 32 keys. 
However, we have also evaluated the proposed SubLock method on large benchmarks with large key sizes. This evaluation shows that our SubLock method can effectively thwart SAT-based attacks with low overhead. The area, power and delay required for locking the large ISCAS-89 and ITC benchmark circuits using the proposed method while embedding 32, 64, and 128 keys are shown in Table <ref>. It can easily be observed from this table that the design metrics (area, power, delay) do not increase significantly when the key size is changed from 32 to 128. This means that the proposed SubLock method will not require high area, power, and delay while locking a design even with larger key sizes, i.e., K=256 or K=512. In addition, we have also implemented the existing SAT-resistant methods ASB <cit.>, CAS <cit.>, and SAS <cit.> by randomly inserting blocks of different key sizes (K=32, K=64 and K=128) in the ISCAS-89 and ITC benchmarks. These locked benchmarks are synthesized using the Cadence RTL compiler, and the same design metrics, i.e., area, power and delay, are extracted. The comparative analysis of the proposed method is presented in the next subsection. §.§ Comparative Analysis To concisely compare the area, power and delay, we calculated the average of each design metric for the proposed and the existing ASB <cit.>, CAS <cit.>, and SAS <cit.> methods. The comparison of the average area, power and delay required by the proposed and existing methods on the large ISCAS-89 and ITC benchmarks is shown in Table <ref>. It can be observed from this table that ASB requires slightly less overhead than the CAS and SAS blocks. The SAS block requires a large overhead in comparison to the ASB and CAS blocks, but it provides higher security against SAT than the ASB and CAS blocks. On the other hand, the proposed SubLock method requires lower area, power and delay than all of these SAT-resistant methods. Furthermore, we also compute the average percentage overhead required by the proposed and existing methods, as shown in Figure <ref>. It can be seen from this figure that the ASB, CAS and SAS methods require low overhead for area and power, whereas they require a large overhead for delay, specifically for K=128, i.e., more than 25%. This is because we randomly inserted the ASB, CAS and SAS blocks in the design, which causes some of these blocks to fall on critical paths. The delay overhead for these methods can be reduced by judicious insertion, i.e., by avoiding insertion in critical paths. On the other hand, the proposed SubLock method requires low delay overhead (i.e., 3.8%) because we replace small sub-circuits (3-input/4-input) with the corresponding IDKLL based locked circuits. In addition, it can also be observed that the area and power overheads for all the methods are less than 5% even for a 128-bit key size. Though the SAS block insertion-based method provides high security, it requires a larger area and power overhead compared to the ASB and CAS insertion-based methods. For the 128-bit key, the SAS-based method requires 3.5% and 3.8% area and power overhead, and the ASB method requires 2.7% and 3% area and power overhead, respectively. All these existing methods are implemented without any obfuscation; they may therefore be vulnerable to removal attacks <cit.>. Implementing these methods with obfuscation would require significantly higher overhead. 
On the other hand, the proposed SubLock method requires significantly less area and power overhead than the ASB, CAS and SAS block insertion-based methods for all key sizes. For the 128-bit key, the proposed method requires 1% and 0.5% area and power overhead, whereas for the 64-bit key it requires only 0.4% and -0.8% area and power overhead, respectively. It can also be observed from Figure <ref> and Table <ref> that the proposed method requires less power and delay than the original circuit in some cases (negative values in the figure for K=32 and K=64). This mainly occurs for K=32, where both the power and delay overheads are negative in the proposed method. This happens because 1) we intentionally designed and optimized the locked sub-circuits to reduce the overhead of the proposed logic locking method, and 2) the replacement of sub-circuits with the corresponding IDKLL based optimized locked sub-circuits changes the original logic of the replaced sub-circuits, after which the synthesizer optimizes the entire locked circuit to reduce power, area and delay. These optimizations slightly reduce the overhead of a locked circuit with respect to the original circuit. Owing to the low number of replacements, this slight reduction is clearly visible when embedding a small number of keys (i.e., K=32 or K=64) in large benchmarks; with many replacements, as in the case of K=128, it is no longer visible. Hence, for practical key sizes (K=128, K=256 and K=512), the power and delay overhead of the proposed method will always be higher than that of the original circuit. Besides the above, we have also compared our approach with TTLock <cit.>. The comparison of the average percentage overhead required by the proposed and TTLock methods on the large ISCAS-89 benchmarks for key sizes of 32 and 64 is shown in Figure <ref>. It can be observed that the delay overhead of TTLock is approximately the same as that of our approach, but TTLock requires significantly higher area and power overhead than the proposed approach. We have also analyzed the implementation overhead of the SFLL <cit.> method and found that the proposed method requires significantly less overhead than SFLL as well. SFLL is an extension of TTLock that provides higher security than TTLock but requires more overhead. Since the proposed SubLock method requires less overhead than TTLock even for the same key size, the proposed method will also require lower overhead than the SFLL method for the same key size. In addition to these methods, other recently reported methods such as CORALL <cit.>, DisORC <cit.>, G-Anti-SAT <cit.> and SKG-Lock+ <cit.> also require around 10% overhead while only increasing the security against the SAT attack rather than preventing it. In the above analysis, we showed that the implementation of the proposed SubLock method for the same key size requires low implementation overhead compared to the existing SAT-resistant methods. Moreover, it can be observed from Eq. (<ref>) that the proposed method achieves at least 2^k more attack iterations for the same key size than the existing ASB, CAS, and SAS methods. This means the proposed method can achieve with a 64-bit key the same security that the existing methods achieve with a 128-bit key. In this case, the required overhead of the proposed method is reduced even further when the proposed SubLock and the existing methods are compared at the same security level. 
For the 64-bit key, the proposed SubLock method can provide the same or higher security with 0.4%, -0.8% and 2.76% area, power, and delay overhead. In contrast, the existing SAS/CAS method requires around 3.5%/3%, 3.8%/4.2% and 28%/31% area, power and delay overhead, respectively for achieving the same security. Here, it is clear that the proposed method requires significantly less overhead than all the existing methods for the same security. Overall, it can be easily said that the proposed SubLock method outperforms all the well-known SAT-resistant methods in terms of security and design overhead. § CONCLUSION This paper proposes a new SAT-resistant logic locking method based on the idea of input dependent key-based logic locking called IDKLL. The existing SAT-resistant logic locking techniques utilize a single key sequence as a correct key to unlock the design, whereas the proposed IDKLL approach uses multiple key sequences as correct to unlock the design functionality for all inputs. The use of multiple key sequences as the correct key neutralizes the SAT attack completely. In order to lock the whole design functionality using IDKLL, we propose a lightweight sub-circuit replacement based IDKLL method called SubLock that provides effective security against SAT based attacks with low overhead. The proposed SubLock approach replaces the original small sub-circuitries with their corresponding IDKLL based locked circuits. Further, we present the security analysis of the proposed method, which shows that our method tremendously increases the security or brute force attempts over all the existing SAT-resistant methods and effectively prevents SAT based attacks. The experimental evaluation of the proposed SubLock on ISCAS and ITC benchmarks shows that the proposed method effectively prevents the SAT attack while requiring very low overhead compared to the well-known method such as Anti-SAT, CAS-Lock and Strong Anti-SAT. The concept of IDKLL is based on using multiple correct key sequences to unlock the design for all inputs. This paper uses LUT based tamper-proof memory to store multiple valid key sequences and retrieve the correct key sequence corresponding to a given input set. Implementing LUT based tamper-proof memory for storing multiple key sequences may require high implementation costs over implementing normal taper-proof memory for storing a single key sequence. Therefore, our future work will focus on identifying the cost-effective implementation/way to store multiple correct key sequences. unsrtnat < g r a p h i c s > Vijaypal Singh Rathor is currently working as an Assistant Professor in the Department of CSE at PDPM Indian Institute of Information Technology, Design and Manufacturing (IIITDM), Jabalpur, India, since August 2021. His research interest includes Hardware Security, Machine Learning and IoT, and Cloud Computing. He is also a member of IEEE since 2017. < g r a p h i c s > Munesh Singh is currently working as an Assistant Professor in PDPM Indian Institute of Information Technology, Design and Manufacturing (IIITDM), Jabalpur, India. He has published many research articles in IEEE, Springer, and Elsevier journals. His research interests include Sensor Networks, Cooperative Computing, Radar Surveillance, Cyber Security, Machine Learning, and Intelligent Robotics. < g r a p h i c s > Kshira Sagar Sahoo (Senior Member, IEEE) completed his Ph.D. (2019) and M.Tech (2014) from Dept. 
of CSE, National Institute of Technology, Rourkela, India and Indian Institute of Technology, Kharagpur, India respectively. He is currently acting as a postdoctoral Kempe fellow at the dept of computing science, Umeå University, Sweden. He has authored more than 90 research articles in top tier journals and conferences, 2 edited books, and 8 granted and pending patents. He is recognized as top 2% scientists in the world 2023 list by Stanford University. His research interests are SDN, SDIoT, Edge Computing, IoT, Machine Learning and Distibuted Computing. He is a senior member of IEEE and IETE. < g r a p h i c s > Saraju P. Mohanty (Senior Member, IEEE) received the bachelor’s degree (Honors) in electrical engineering from the Orissa University of Agriculture and Technology, Bhubaneswar, in 1995, the master’s degree in Systems Science and Automation from the Indian Institute of Science, Bengaluru, in 1999, and the Ph.D. degree in Computer Science and Engineering from the University of South Florida, Tampa, in 2003. He is a Professor with the University of North Texas. His research is in “Smart Electronic Systems’’ which has been funded by National Science Foundations (NSF), Semiconductor Research Corporation (SRC), U.S. Air Force, IUSSTF, and Mission Innovation. He has authored 500 research articles, 5 books, and 10 granted and pending patents. His Google Scholar h-index is 56 and i10-index is 258 with 14,000 citations. He is regarded as a visionary researcher on Smart Cities technology in which his research deals with security and energy aware, and AI/ML-integrated smart components. He introduced the Secure Digital Camera (SDC) in 2004 with built-in security features designed using Hardware Assisted Security (HAS) or Security by Design (SbD) principle. He is widely credited as the designer for the first digital watermarking chip in 2004 and first the low-power digital watermarking chip in 2006. He is a recipient of 19 best paper awards, Fulbright Specialist Award in 2021, IEEE Consumer Electronics Society Outstanding Service Award in 2020, the IEEE-CS-TCVLSI Distinguished Leadership Award in 2018, and the PROSE Award for Best Textbook in Physical Sciences and Mathematics category in 2016. He has delivered 29 keynotes and served on 15 panels at various International Conferences. He has been serving on the editorial board of several peer-reviewed international transactions/journals, including IEEE Transactions on Big Data (TBD), IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), IEEE Transactions on Consumer Electronics (TCE), and ACM Journal on Emerging Technologies in Computing Systems (JETC). He has been the Editor-in-Chief (EiC) of the IEEE Consumer Electronics Magazine (MCE) during 2016-2021. He served as the Chair of Technical Committee on Very Large Scale Integration (TCVLSI), IEEE Computer Society (IEEE-CS) during 2014-2018 and on the Board of Governors of the IEEE Consumer Electronics Society during 2019-2021. He serves on the steering, organizing, and program committees of several international conferences. He is the steering committee chair/vice-chair for the IEEE International Symposium on Smart Electronic Systems (IEEE-iSES), the IEEE-CS Symposium on VLSI (ISVLSI), and the OITS International Conference on Information Technology (OCIT). He has mentored 3 post-doctoral researchers, and supervised 17 Ph.D. dissertations, 27 M.S. theses, and 27 undergraduate projects.
http://arxiv.org/abs/2406.19037v1
20240627094305
Witnessing mass-energy equivalence with trapped atom interferometers
[ "Jerzy Paczos", "Joshua Foo", "Magdalena Zych" ]
quant-ph
[ "quant-ph", "gr-qc" ]
jerzy.paczos@fysik.su.se Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden jfoo@stevens.edu Centre for Quantum Computation and Communication Technology, School of Mathematics and Physics, University of Queensland, St. Lucia, Queensland 4072, Australia Department of Physics, Stevens Institute of Technology, Castle Point Terrace, Hoboken, New Jersey 07030, USA magdalena.zych@fysik.su.se Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden Centre for Engineered Quantum Systems, School of Mathematics and Physics, The University of Queensland, St. Lucia, Queensland, 4072, Australia § ABSTRACT State-of-the-art atom interferometers can keep atoms in a superposition of heights in Earth's gravitational field for times reaching minute-scale, allowing for precise measurements of the gravitational potential. Yet, the phase shifts measured in such experiments can always be explained with a non-relativistic theory of gravity. There is therefore growing interest in finding feasible ways to use such new experimental capabilities to go beyond the non-relativistic regime. Here we propose modifying the existing experimental setups to probe both the quantum and the general relativistic effects on the atom's dynamics. Our proposal consists of adding two additional laser pulses in a trapped atom interferometer that would set up a quantum clock trapped at a superposition of heights reading a quantum superposition of relativistic proper times. We derive the phases acquired by different trajectories in the interferometer and demonstrate that the effect of superposition of proper times would manifest itself in two ways in the interference pattern: as visibility modulations, and as a shift of the resonant frequency of the atom. We argue that the latter might be observable with current technology. Witnessing mass-energy equivalence with trapped atom interferometers Magdalena Zych July 1, 2024 ==================================================================== —Introduction. A key prediction of general relativity is that time is not a global background parameter but flows at different rates depending on the spacetime geometry, a phenomenon known as time dilation <cit.>. The physical effects produced by time dilation have been tested in numerous paradigmatic experiments. Pound and Rebka were the first to observe an altitude-dependent gravitational redshift in gamma rays emitted between the top and bottom of a tower <cit.>; Hafele and Keating directly tested special and general relativistic time dilation by comparing actual clocks at different heights and moving at different speeds <cit.>; while Shapiro proposed to measure the reduction of the speed of light for electromagnetic waves traveling across regions subject to a gravitational potential <cit.>, which was observed in <cit.>. The above effects are fully explainable within the paradigm of classical (non-quantum) relativistic physics. The first experiment measuring the effect of gravity on the quantum wavefunction of a single particle was performed by Colella, Overhauser, and Werner (COW) <cit.>. The setup considered neutrons traveling in a superposition of heights in the Earth's gravitational field, following the basic geometry of a Mach-Zehnder-type interferometer. The neutrons, traversing arms at differnt heights in the gravitational potential acquire a relative phase. The resulting interference pattern was measured and a phase difference consistent with the Earth's gravitational potential was deduced. 
These and related effects measured with atoms <cit.> are thus tests of the non-relativistic gravitational effects in quantum systems. An idea of how to unambiguously probe the general relativistic effects in a quantum experiment was developed in <cit.>. These works considered interferometric experiments with a particle having an internal degree of freedom that could be treated operationally as a “quantum clock”. The internal evolution would then depend on the proper time along the paths taken through general curved spacetime <cit.>. Specifically, the authors of <cit.> predicted that in the proposed experiment one would observe oscillations of the visibility of the interference pattern, defined as the contrast between the constructive and destructive interference fringes (which in the non-relativistic theory in the COW-type experiments is, in principle, always maximal). The predicted periodic modulations of the visibility would be an effect that arises unambiguously from the interplay between relativistic and quantum physics. Several proposals were made to probe this effect in interferometric experiments with electrons <cit.> and atoms <cit.>. Derivations of the dynamics of relativistic quantum particles with quantized internal mass-energy, necessary to make predictions for the above experiments, have been done in the framework of relativistic quantum mechanics in refs <cit.>, and starting from quantum field theory in curved background in refs <cit.>. In the low energy limit for the center of mass of the composite particles, they coincide. These results were applied to modeling the dynamics of particles in clock interferometers <cit.> and of trapped particles <cit.>, in particular in the context of optical clocks to explore relativistic contributions to the clock's frequency and precision in refs <cit.>, and most recently to explore associated many-body effects <cit.>. In this Letter, we propose a concrete experimental setup for witnessing time dilation effects in atom interference. Our idea is inspired by the rapid development of trapped atom interferometers <cit.> utilizing Bloch oscillations to keep atoms in a coherent superposition of heights in a gravitational field for increasingly long times, reaching minute-scale <cit.>. We propose a modification of the existing interferometers in the form of two additional “clock pulses” at the beginning and at the end of the Bloch oscillations. This procedure effectively sets up a clock at a superposition of heights in the gravitational field, which means that the clock experiences a superposition of proper times. We calculate all the phases acquired by different trajectories within the interferometer and analyze the interference pattern. We recover the visibility modulations predicted in <cit.>, where fixed classical trajectories of the clocks within the interferometer were assumed, but crucially, we predict frequency shift of the clock which makes this proposal feasible. Indeed, the proposed setup implements Ramsey spectroscopy for spatially superposed atoms, allowing for measurements of their resonant frequencies. We show that relativistic effects manifest therein as resonant-frequency shifts. We argue that this will be easier to observe than the visibility oscillations due to the enormous precision of frequency measurements <cit.>. In particular, for the height separations reported in ref. <cit.>, the predicted fractional frequency shift is of the order of 10^-22, one order of magnitude below the current measurement precision. —Setup. 
Let us consider a one-dimensional atom interferometry setup, Fig. <ref>, closely resembling configurations implemented in <cit.>. It involves a highly cooled atom [in experiments <cit.> the temperature reaches a few hundreds of nanokelvins] moving along the gravitational field g with velocity modified by Bragg pulses and an optical lattice. We assume a two-level internal structure of the atom with the ground and excited states separated by energy ħω_0. First, atoms are prepared in the ground state and launched upwards into free fall with initial velocity v_0. Next, a pair of π/2 Bragg pulses are applied separated by a time T. Each of them splits the atomic trajectory into two — one with unchanged velocity, and one with velocity modified by Δ v. Therefore, after this sequence, the initial trajectory of each atom is split into four paths with pairwise equal velocities — one pair with velocity v_0-gT, and the other pair with velocity greater by Δ v. We focus on the latter pair and assume that the track of the remaining trajectories is lost as in experiments in refs <cit.>. When the relevant trajectories reach the apex, namely at time T+T'=(v_0 + Δ v)/g, the optical lattice is switched on, and atoms performs Bloch oscillations. At switching on the lattice we introduce our modification — a clock pulse at frequency ω that puts atoms in an equal superposition of the two internal states without affecting its momentum <cit.>. The optical lattice remains on for time T_B, after which the second clock pulse is applied and the lattice is switched off. The atoms fall freely for time T', after which another pair of π/2 Bragg pulses is applied to recombine the trajectory. At this stage, another two trajectories are lost — this is different than in experiments <cit.> where all the output ports are measured but only two interfere (a modification that we introduce for simplicity of the subsequent analysis). In the end, one measures the probabilities P_g^(j) that the atoms are found in the ground state in the output trajectory j∈{0,1}, with 0 and 1 corresponding to the red and blue trajectories in Fig. <ref>, respectively). Apart from the non-relativistic phase Δϕ between the upper and lower ground state trajectories (measured in typical interferometric experiments, e.g. <cit.>), there will be additional phases δ_d and δ_u acquired within the optical lattice by the excited state on the lower and upper trajectory, respectively, which can be measured with the proposed setup. The probability P_g^(j) is given by (see Appendix <ref> for derivation) P_g^(j)=1/16 [1-cos(δ_u+δ_d/2)cos(δ_u-δ_d/2)+(-1)^j× ×2cos(Δϕ+δ_u-δ_d/2)sin(δ_d/2)sin(δ_u/2)]. One can get rid of the Δϕ-dependence by calculating the total probability P_g=P_g^(0)+P_g^(1) of finding the atom in the ground state P_g=1/8[1-cos(δ_u-δ_d/2)cos(δ_u+δ_d/2)]. This probability does not depend on the specific opening and closure of the interferometer, but only on the part confined in the optical lattice. This part is central to our considerations because the atom will effectively behave as a clock at a superposition of heights within the optical lattice. The presence of two clock pulses is the main modification to previous experimental implementations <cit.>. Let us notice, that without those pulses, the probability of measuring the atoms in the ground state would be constant and equal to 1/4 (recall that we lose some of the trajectories). In our setup, they oscillate between 0 and 1/4, depending on the time T_B of the lattice hold. —Dynamics and phases. 
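As a quick numerical sanity check of the two expressions above, the snippet below implements P_g^{(j)} and P_g directly and verifies that the Δϕ-dependence cancels in the sum P_g^{(0)} + P_g^{(1)}; the phase values fed in are arbitrary placeholders chosen only to exercise the formulas.

```python
import numpy as np

def P_g_j(j, dphi, d_d, d_u):
    """Probability of finding the atom in the ground state in output port j (0 or 1)."""
    return (1/16) * (1
                     - np.cos((d_u + d_d) / 2) * np.cos((d_u - d_d) / 2)
                     + (-1)**j * 2 * np.cos(dphi + (d_u - d_d) / 2)
                       * np.sin(d_d / 2) * np.sin(d_u / 2))

def P_g(d_d, d_u):
    """Total ground-state probability, independent of the opening/closing phases."""
    return (1/8) * (1 - np.cos((d_u - d_d) / 2) * np.cos((d_u + d_d) / 2))

# Arbitrary placeholder phases (radians), just to exercise the formulas.
dphi, d_d, d_u = 0.7, 1.3, 1.9
p0, p1 = P_g_j(0, dphi, d_d, d_u), P_g_j(1, dphi, d_d, d_u)
assert np.isclose(p0 + p1, P_g(d_d, d_u))   # the dphi-dependence cancels in the sum
print(p0, p1, p0 + p1)                       # the total stays between 0 and 1/4
```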
Let us analyze the optical lattice stage of the interferometer and the additional phases δ_d and δ_u acquired by the excited state. At this stage, the atom will Bloch oscillate at a superposition of heights separated by Δ z. We adopt a simplified model of Bloch oscillations (see <cit.> and Fig. <ref>), in which the atom interacts with the optical lattice only when it reaches a downward velocity v_B=ħ k/m (here k is the wave vector of the optical-lattice laser and m is the mass of the ground-state atom). Then, it instantaneously exchanges two photons with the lattice and changes its velocity. For the appropriately chosen frequency of the lattice light, the velocity at the interaction point will simply flip its sign, and the atom will oscillate at the fixed height with period τ_B=2v_B/g. We will assume that the laser frequency has been chosen exactly this way. Each bounce starts and ends at the apex of a free-fall trajectory. Within the time T_B the atom performs N oscillations with the last one possibly extended (or shortened) by a time τ∈(-τ_B/2,τ_B/2) (this is because T_B is not necessarily an integer multiple of the Bloch period τ_B; see Fig. <ref>). Crucially, due to the mass difference between the excited and the ground state of the atom, the condition for bouncing at the fixed height can be satisfied only for one of the internal states for a given lattice. For instance, if we choose the lattice laser frequency so that the condition is satisfied for the ground state, the excited state will necessarily fall beyond the starting height with each oscillation. However, it can be shown (see Appendix <ref>) that in viable experimental scenarios, the total fall is much smaller than the thermal wavelength of the atom. Therefore, employing the perturbative approach developed in <cit.> is justified, which here means that we can approximate that both the ground and the excited state follow the same trajectory. We further verify that this approximation holds in Appendix <ref>. The phases δ_d and δ_u consist of two components: laser phase ω T_B acquired by the excited state due to the clock pulses, and a propagation phase correction due to different masses of the ground and the excited state. To calculate the latter we follow <cit.> and express the Lagrangian of the excited state L_e=L_g+δ L in terms of the Lagrangian L_g of the ground state and a small correction δ L resulting from the rest-mass difference between the ground and the excited state. If we denote this mass difference by δ m=ħω_0/c^2, the Lagrangian correction is given by δ L=-δ m(c^2+gz-v^2/2). Then, to calculate the propagation phase correction we integrate δ L over the trajectory followed by the atom and divide by ħ. The full calculation is presented in the Appendix <ref> (followed by a non-perturbative calculation in Appendix <ref>). It leads to the following formulae for the phases δ_d and δ_u: δ_d=[ω-ω_0(1+⟨ v^2⟩/2c^2)]T_B, δ_u=[ω-ω_0(1+⟨ v^2⟩/2c^2+gΔ z/c^2)]T_B, where ⟨ v^2⟩=v_B^2/3 is the mean square speed within a single oscillation. In principle, the mass-energy corrections to the interferometric phases are not the only corrections of the order of 1/c^2 that are present in the proposed experiment — following <cit.> we could also include corrections coming from the higher-order expansion of the free-fall Lagrangian of the relativistic particle, and 1/c^2 corrections to the laser phases coming from elastic interactions (Bragg and Bloch pulses). 
These depend only on the trajectory followed by the atom and thus, in the picture in which ground and excited states follow the same trajectories, affect only the interferometric phase Δϕ leaving δ_k and δ_g unmodified. However, the 1/c^2 corrections to Δϕ are not the main focus of our considerations and we will not deal with them. On the contrary, any corrections to the clock-pulse laser phases would affect δ_k and δ_g if they were present. Let us notice, however, that the corrections to the laser phases calculated in <cit.> resulted from the assumption that the atom-light interactions are always resonant and the atom interacts with photons with laboratory-frame frequencies depending on its position and velocity. For our present purposes, we assume instead that the laser frequency of the clock pulses is fixed and the non-resonant processes are possible. Then, there are no corrections to the clock-pulse laser phases because the atom interacts with photons with the same laboratory-frame frequency regardless of its height and velocity. —Results and discussion. Let us denote ε_k=⟨ v^2⟩/2c^2 for the kinetic correction in Eq. (<ref>) and ε_g=gΔ z/c^2 for the gravitational correction, and denote by ⟨ω_0⟩=ω_0(1+ε_k+ε_g/2) the (quantum mechanical) mean of the atomic internal frequency. Then, the probability of detecting the atom in the ground state (compare with Eq. (<ref>)) can be written as P_g=1/8[1-𝒱(T_B)cos([ω-⟨ω_0⟩]T_B)], where 𝒱(T_B):=cos(ε_gω_0T_B/2) is the interferometric visibility. In a generic situation (away from the resonance), the visibility varies very slowly with time T_B compared to cos([ω-⟨ω_0⟩]T_B). Therefore, Eq. (<ref>) describes an interference pattern with a slowly varying envelope. Apart from the visibility modulations, Eq. (<ref>) looks like a result of a Ramsey interferometry experiment for an atom with internal transition frequency ⟨ω_0⟩. For fixed time T_B one can scan through different clock pulse frequencies ω to find this resonant frequency. Thus, the relativistic effects coming from mass-energy equivalence can be measured in two ways: by observing the visibility oscillations, and by measuring the fractional frequency shift ⟨δω_0/ω_0⟩≡⟨ω_0⟩-ω_0/ω_0=ε_k+ε_g/2. Note that while the visibility oscillations are sensitive only to the gravitational correction ε_g, the frequency shift depends both on ε_g and ε_k. To distinguish between the gravitational and the kinetic effect in fractional frequency shift measurements, one can modulate the separation Δ z between the upper and the lower trajectory — this will affect the gravitational correction, but not the kinetic one. The probability P_g as a function of time is visualized in Fig. <ref> for exemplary values of ω, ω_0, ε_g, and ε_k. Assuming the experimental parameters as in <cit.>, we would have the velocity v_B∼10 mm/s, and the vertical separation of the trajectories Δ z∼10 μm. This would mean that the corrections would be of the order ε_k∼ε_g∼10^-22. The fractional frequency shift of that order is very close to what can be currently detected <cit.>. Indeed, state-of-the-art atomic clocks reach the multiple-measurement precision of the order of 10^-21 <cit.> (the single-measurement precision in these experiments is ∼10^-19), which is the order of magnitude for the fractional frequency shift in the proposed experiment for separations Δ z∼100 μm, which has already been achieved in <cit.>, at the expense of shorter coherence time. 
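The order-of-magnitude estimates quoted above are easy to reproduce; the sketch below evaluates ε_k, ε_g, and the fractional frequency shift for round parameter values of the size reported in the text (v_B ∼ 10 mm/s, Δz ∼ 10–100 μm). The specific numbers are assumptions for illustration, not measured values.

```python
c, g = 2.998e8, 9.81  # speed of light (m/s) and gravitational acceleration (m/s^2)

def corrections(v_B, dz):
    """Kinetic and gravitational corrections entering the frequency shift."""
    eps_k = (v_B**2 / 3) / (2 * c**2)   # <v^2> = v_B^2 / 3 over one Bloch oscillation
    eps_g = g * dz / c**2
    return eps_k, eps_g

v_B = 10e-3                              # ~10 mm/s, assumed round value
for dz in (10e-6, 100e-6):               # trajectory separations of 10 and 100 micrometres
    eps_k, eps_g = corrections(v_B, dz)
    shift = eps_k + eps_g / 2            # fractional frequency shift <delta omega_0 / omega_0>
    print(f"dz = {dz*1e6:.0f} um: eps_k ~ {eps_k:.1e}, eps_g ~ {eps_g:.1e}, shift ~ {shift:.1e}")
```

For Δz = 10 μm the resulting fractional shift comes out at the 10^{-22} level quoted above, and it grows roughly linearly with the separation.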
On the other hand, for the transition frequency ω_0∼10^15 Hz and lattice hold time T_B∼1 s (numbers taken from <cit.>), and ε_g∼10^-22, the visibility drop would be of the order of 10^-15, which is certainly far beyond what can be currently observed in experiment. In principle, the properties of strontium atoms allow for keeping coherent Bloch oscillations for over 100 s <cit.>. While, to the best of our knowledge, the 1 s is the longest that has been achieved for strontium atoms so far, experiments with minute-scale oscillation times have been performed using cesium atoms <cit.> (with these atoms, however, the frequency ω_0 is much smaller, so the effect would be even weaker). Extending the oscillation time to 100 s would enhance the visibility oscillations by 10^4, but the effect would still be tiny. We expect that to make it observable, one would need to maintain a superposition separated by Δ z∼10-100 cm for 100 s — then the visibility drop would be of the order 0.01-1. Such separations are, however, far from what can be currently achieved and most likely require using two separate lasers for the lower and upper trajectories. Note that the observation of the frequency shift (<ref>) itself would not be evidence of the quantum clock reading the superposition of proper times since the frequency shift is the same as for the clock being in a statistical mixture of two heights. To reject the latter possibility one can, apart from the total probability P_g of measuring the atom in the ground state at the output, extract from the measured data also the difference D_g≡ P_g^(0)-P_g^(1) given by (compare with Eq. (<ref>)) D_g=1/8cos(Δϕ+δ_u-δ_d/2)sin(δ_d/2)sin(δ_u/2), where δ_d and δ_u are given in Eq. (<ref>), and Δϕ is given by (see Appendix <ref> for derivation) Δϕ=-mgΔ z/ħ(T+2T'+τ). If the atom is really in a superposition of heights, D_g will oscillate with time T_B. On the other hand, for the atom following a statistical mixture of two heights, the probability difference would be equal to zero. Therefore, extracting from the interferometric output both the fractional frequency shift and the oscillations of D_g would provide evidence that the atom was reading a superposition of proper times. Finally, let us notice that the interpretation of the experiment as a spectroscopy of a clock at the superposition of heights is possible thanks to including the laser phases and the concrete mechanism that sets up the quantum clock. These were missing in the idealized scenarios in <cit.> and in the recent work <cit.> proposing a similar setup using optical tweezers. Therefore, the possibility of measuring the relativistic frequency shift in such interferometric scenarios thus far went unnoticed. We thank Timothy Le Pers, Navdeep Arya, Germain Tobar, and Jun Ye for helpful discussions and useful comments. The authors acknowledge Knut and Alice Wallenberg foundation through a Wallenberg Academy Fellowship No. 2021.0119. J.F. acknowledges funding from the U.S. Department of Energy, Office of Science, ASCR under Award Number DE-SC0023291 and Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (No. CE1701000012). § PROBABILITY CALCULATION Here we derive the formula (<ref>) for the probability P_g^(j) of measuring the atom in the ground state in output j∈{0,1}. 
We denote by Δϕ the phase difference between upper and lower ground state trajectories, and by δ_d and δ_u the additional phase acquired by the excited state on the lower and upper trajectory, respectively. The final state of the atom |ψ_final⟩ can be written in a generic way |ψ_final⟩=1/8|g⟩(|0⟩[1-e^iδ_d+e^iΔϕ(1-e^iδ_u)]-i|1⟩[1-e^iδ_d-e^iΔϕ(1-e^iδ_u)])+…, where |0⟩ and |1⟩ are the states corresponding to two relevant output trajectories, and dots at the end stand for the parts of the state irrelevant to our considerations (lost trajectories and excited state). The factor 1/8=(1/√(2))^6 in front comes from six laser pulses applied to the atom (four Bragg pulses and two clock pulses). The minus signs in the bracket multiplying |0⟩ result from the fact that excited state trajectories interacted with a laser two more times than the ground state trajectories (two clock pulses) — each interaction gives rise to the factor ie^iϕ_l multiplying the corresponding trajectory (here ϕ_l is the so-called laser phase and we absorb it into phases Δϕ, δ_d, and δ_u). The factor -i multiplying |1⟩ reflects the fact that trajectories leaving the interferometer in output 1 interacted with the odd number of Bragg pulses (the lower trajectory interacted once, the upper one three times), while the ones leaving in output 0 interacted with the even number (two) of Bragg pulses. Finally, the additional minus sign in the bracket multiplying |1⟩ comes from the fact that, in the case of trajectories leaving in output 1, the upper trajectory interacted with two more Bragg pulses than the lower one. The amplitude of finding the atom in the ground state in the output j is given by ⟨g,j||ψ_final⟩∝1/8[1-e^iδ_d+(-1)^je^iΔϕ(1-e^iδ_u)], where we omitted the common phase factor -i present in the case of j=1. To calculate the probability P_g^(j) we take the absolute value squared of the above amplitude and get P_g^(j)=|⟨g,j||ψ_final⟩|^2=1/16[1-cos(δ_u+δ_d/2)cos(δ_u-δ_d/2)+(-1)^j2cos(Δϕ+δ_u-δ_d/2)sin(δ_d/2)sin(δ_u/2)], which is exactly Eq. (<ref>). § FALL OF THE EXCITED-STATE TRAJECTORY Let us motivate the use of the perturbative method <cit.> by analyzing the fall of the excited-state trajectory during Bloch oscillations. We assume that the frequencies of the optical lattice lasers are chosen such that the ground state follows a bouncing trajectory at a fixed height (i.e., each interaction with the laser occurs at the same height z_B). Then, the excited-state trajectory necessarily falls with each bounce, due to its greater mass. Indeed, since we assume that the interaction with the laser occurs only when the atom reaches downward velocity v_B=ħ k/m, and the momentum transferred to the atom in each interaction is the same for both the ground and the excited state, the excited state will move after each interaction with the upward velocity v_B-δ v_B, where the velocity difference δ v_B is given by δ v_B=2v_Bδ m/m. Since the velocity of the excited state right before each interaction is larger (in terms of the absolute value) than right after it, each subsequent interaction will occur lower by δ z_B=v_Bδ v_B/g than the previous one meaning that the excited-state trajectory will progressively fall. Also, the excited trajectory's smaller starting velocity means it will reach the downward velocity v_B faster than the ground state. Two subsequent interactions on the excited-state trajectory are separated by a time τ_B-δτ_B, where δτ_B=δ v_B/g. 
The spatial separation between the ground and the excited state trajectories is maximal when the excited state interacts with the laser. Then, it changes the direction of motion and meets the ground state trajectory at its interaction height z_B. Therefore, the trajectories meet repeatedly (at the interaction points of the ground state trajectory), but at each subsequent meeting point the excited-state trajectory has velocity lower by δ v_B than at the previous meeting point (while the ground state moves there at velocity v_B). The largest separation between the nth and (n+1)th interaction on the ground state trajectory occurs at the moment of (n+1)th interaction on the excited-state trajectory and is equal to 2nδ z_B. For the experimental parameters from <cit.> (strontium atoms, optical lattice operating at 532 nm, around 500 oscillations) the maximal separation between the ground and excited state trajectories would be around 10^-13 m. This should be compared with the thermal wavelength of the atoms given by λ_th=√(2πħ^2/mk_BT), where T is the temperature. In the experiment <cit.> the atoms were cooled to ∼400 nK, corresponding to λ_th∼10^-7 m. Therefore, the thermal wavelength is much larger than the separation between the trajectories, and it is justified to use the perturbative approach in phase calculations. § PHASES Let us calculate the phases Δϕ, δ_d, and δ_u using the perturbative method developed in <cit.>. Each consists of two contributions: the propagation phase calculated by integrating the free-fall Lagrangian over the trajectory followed by the atom, and the laser phase resulting from atom-light interactions. The free-fall Lagrangian of a relativistic particle with the internal two-level structure incorporating the mass-energy equivalence can be expanded to the order 1/c^2 as follows: L=-m̂c^2-m̂(gz-1/2v^2)-m̂/c^2(1/2g^2z^2-1/8v^4+3/2gzv^2)+𝒪(1/c^4). Here m̂ is the mass operator returning m for the ground state, and m+δ m (where δ m=ħω_0/c^2) for the excited state of the atom. At this point, we distinguish the 1/c^2 correction to the excited-state Lagrangian coming from the mass-energy equivalence δ L^(1)=-δ m(gz-v^2/2), and the correction to the (ground- and excited-state) Lagrangian coming from the expansion of the relativistic-particle Lagrangian up to 1/c^2 order δ L^(2)=-m/c^2(1/2g^2z^2-1/8v^4+3/2gzv^2). Here we replaced m̂ by m because δ m is of the order 1/c^2 and would contribute in higher order to the above expression. Note that for the scenario considered in this work, based on numbers from <cit.>, δ L^(2)≪δ L^(1). This is because the internal energy of the atomic transition (∼1 eV) is much larger than the kinetic energy of the atom or the gravitational energy difference between the trajectories (both ∼10^-10 eV). Note also that the correction δ L^(2) and the corrections to the laser phases derived in <cit.> depend only on the trajectory, not on the internal state. Therefore, in the picture in which both the ground and the excited state follow the same trajectory, they will contribute only to the phase Δϕ, not to δ_d and δ_u. We are not interested in 1/c^2 corrections to Δϕ, hence we will omit them in the subsequent analysis and focus only on the corrections coming from δ L^(1). We begin with the calculation of Δϕ. 
To calculate the propagation phase contribution we divide the trajectory into freely falling pieces between points of interaction with the lasers and calculate the propagation phase by integrating the freely falling Lagrangian L_g=-m(c^2+gz-v^2/2) along the trajectory. More precisely, for the free fall starting at time t_A and ending at t_B, the propagation phase is given by ϕ_p(t_A→ t_B)= 1/ħ∫_t_A^t_Bdt L_g=-m/ħ∫_t_A^t_Bdt (c^2+gz-v^2/2) = -m/ħ[(c^2+gz_A-1/2v_A^2)(t_B-t_A)+v_Ag(t_B-t_A)^2-1/3g^2(t_B-t_A)^3], where z_A and v_A are the atom's position and velocity, respectively, at time t_A. Denote by t_j with j∈{1,2,3,4} the times at which consecutive Bragg pulses are applied, and by t_i and t_f the times of the initial and final clock pulses, respectively. Notice that to match the notation from the main text, the times of application of particular pulses must satisfy t_2-t_1=t_4-t_3=T, t_i-t_2=t_3-t_f=T', t_f-t_i=T_B. Let us further denote the quantities corresponding to the lower and upper trajectories by superscripts (d) and (u), respectively, and by Δϕ_p(t_A→ t_B)=ϕ_p^(u)(t_A→ t_B)-ϕ_p^(d)(t_A→ t_B) the propagation phase difference between those two trajectories acquired between t_A and t_B. Let us consider one by one the consecutive stages of the interferometer and calculate the corresponding phase difference using (<ref>). * t_1→ t_2:At time t_1 both trajectories are at the same height, but the upper one starts with a velocity greater by Δ v than the lower one. Denote the initial velocity of the lower trajectory by v_1. Then, the relative propagation phase acquired between t_1 and t_2 reads Δϕ_p(t_1→ t_2)=m/ħΔ vT(v_1+1/2Δ v-gT). * t_2→ t_i:Both trajectories start with the same velocity, but the upper one is higher by Δ v T. The relative propagation phase is given by Δϕ_p(t_2→ t_i)=-m/ħgΔ vTT'. * t_i→ t_f:We assume that both trajectories oscillate with the same frequency, and start simultaneously with zero initial velocity and maximal height. Therefore, the trajectories are constantly separated by Δ vT and have the same velocities all the time. The relative propagation phase equals Δϕ_p(t_i→ t_f)=-m/ħgΔ vTT_B. * t_f→ t_3:Similar to the previous two points, the velocity on both trajectories is the same, and they are separated by Δ vT. The propagation phase difference reads Δϕ_p(t_f→ t_3)=-m/ħgΔ vTT'. * t_3→ t_4:The trajectories start separated by Δ vT, and the upper one has a velocity smaller by Δ v. Denote the initial velocity of the lower trajectory by v_3. The relative propagation phase is then given by Δϕ_p(t_3→ t_4)=m/ħΔ vT(-v_3+1/2Δ v). To calculate the total propagation phase difference Δϕ_p we sum up all the contributions, which gives Δϕ_p=m/ħΔ vT[v_1+Δ v-v_3-g(T+2T'+T_B)]. Finally, let us note that, because of the requirement that at t_i the velocity on both trajectories is equal to zero, the following relation holds v_1+Δ v=g(T+T'). On the other hand, v_3 is the velocity that the atom reaches after time T' of a free fall after leaving the optical lattice. Let us notice that the atom does not leave the optical lattice with zero velocity (see Fig. <ref>), but rather with velocity -gτ. Here τ∈{-τ_B/2,τ_B/2} is given by T_B=Nτ_B+τ, where N=⌊ T_B/τ_B+1/2⌋ is the number of interactions between the optical lattice and the atom. Therefore, we have v_3=-g(τ+T') and we can rewrite Eq. (<ref>) (introducing Δ z=Δ vT) as follows: Δϕ_p=m/ħΔ zg(τ-T_B)=-m/ħgΔ zNτ_B. Now, let us focus on the laser phase contribution to the phase Δϕ. 
As described in <cit.>, when the atom absorbs (emits) a photon with frequency ω and wave vector k, it acquires a laser phase ϕ_l(t,z)=∓iω t± kz, where the upper (lower) sign corresponds to photon absorption (emission), and t and z are the time and position of the interaction, respectively. In the case of Bragg pulses, as well as in the optical lattice, the photon absorption is immediately followed by the emission of a photon with the same frequency and opposite wave vector. Therefore, the laser phase acquired at each such interaction point is equal to ϕ_l(t,z)=ϕ_l(z)=±2kz with the plus (minus) sign corresponding to the atom receiving upward (downward) momentum. Denote by z_j with j∈{1,2,3,4} the interaction heights with four consecutive Bragg pulses (z_1 and z_3 correspond to the upper trajectory, while z_2 and z_4 to the lower one), and by z_d and z_u the heights of interactions of the lower and upper trajectories, respectively, within the optical lattice. Note also that the wave vector k_Bragg of the pulses is related to the velocity change Δ v by 2ħ k_Bragg=mΔ v, ⟹ k_Bragg=mΔ v/2ħ, and the wave vector k_Bloch of the optical lattice laser is related to the Bloch period τ_B by 2ħ k_Bloch=mgτ_B, ⟹ k_Bloch=mgτ_B/2ħ. The total laser phase difference (between the upper and lower trajectory) is given by Δϕ_l=2k_Bragg(z_1-z_2-z_3+z_4)+2Nk_Bloch(z_u-z_d) Let us notice the following relations between particular heights: z_2-z_1=v_1T-1/2gT^2, z_4-z_3=(v_3-Δ v)T-1/2gT^2, z_u-z_d=Δ z, where v_1 and v_3 are given by (<ref>) and (<ref>). Hence, we can write the total laser phase difference as Δϕ_l=m/ħ[-gΔ z(T+2T'+τ)+gΔ zNτ_B]. The total phase difference Δϕ between the upper and the lower (ground state) trajectories is calculated by summing up the total propagation phase difference and the total laser phase difference. This gives Δϕ=Δϕ_p+Δϕ_l=-mgΔ z/ħ(T+2T'+τ). Now, let us calculate the additional phases δ_d and δ_u acquired by the excited state on the lower and upper trajectory, respectively. We need to calculate the additional laser phase coming from the clock pulses and the correction to the propagation phase due to the mass-energy equivalence. Starting with the laser phase, we assume that the clock pulses change only the atom's internal state, but do not affect its motion. This can be achieved by driving two-photon transitions with both photons having the same frequency ω/2 and opposite momenta. The corresponding laser phase is equal to ∓ω t with the minus (plus) sign corresponding to the two-photon absorption (emission). Note that this phase is independent of the position z of the interaction (thus, it is the same for the lower and upper trajectory). Since the excited state trajectories absorb the photons at t_i and emit at t_f, they acquire the laser phase ω(t_f-t_i)=ω T_B. To calculate the propagation phase correction, we employ the perturbative method developed in <cit.> and integrate the mass-energy correction to the free-fall Lagrangian δ L=-δ m(c^2+gz-v^2/2) over the ground state trajectory on the corresponding height. For the free-fall trajectory starting at t_A and ending at t_B the correction δ_p(t_A→ t_B) to the propagation phase reads (compare with (<ref>)) δ_p(t_A→ t_B)=-δ m/ħ[(c^2+gz_A-1/2v_A^2)(t_B-t_A)+v_Ag(t_B-t_A)^2-1/3g^2(t_B-t_A)^3]. 
For a full single oscillation in the optical lattice, the propagation phase correction δ_p,1 is given by δ_p,1=δ m/ħ(-c^2-v_B^2/6-gz_B)τ_B, where z_B and v_B are the atom's position and velocity at the interaction point, and τ_B is the period of the oscillation. We assume that the total number N of Bloch oscillations is large, and neglect the correction coming from the fact that the total time T_B of Bloch oscillations is not necessarily equal to the integer multiple of Bloch periods (instead, T_B=Nτ_B+τ with τ∈(-τ_B/2,τ_B/2); however, since τ≪ Nτ_B, we will assume τ≈0). Then, the total propagation phase correction is given by δ_p,tot=Nδ_p,1=δ m/ħ(-c^2-v_B^2/6-gz_B)Nτ_B≈δ m/ħ(-c^2-v_B^2/6-gz_B)T_B. Without loss of generality, we can assume that the interaction height corresponding to the lower trajectory z_B^(d)=0, while the interaction height of the upper one equals z_B^(u)=Δ z. We introduce the mean velocity (within a single oscillation) ⟨ v^2⟩≡ v_B^2/3, and note that δ m=ω_0/c^2. Combining the propagation phase with the laser phase corrections, we get δ_d=[ω-ω_0(1+⟨ v^2⟩/2c^2)]T_B, δ_u=[ω-ω_0(1+⟨ v^2⟩/2c^2+gΔ z/c^2)]T_B. § NON-PERTURBATIVE CALCULATIONS As a consistency check let us compute the phases accumulated during the Bloch oscillations non-perturbatively, namely for the trajectories described in Appendix <ref>, and make sure that they agree with the perturbative calculation from Appendix <ref>. In the following, we restrict all the calculations to the terms up to order 1/c^2 neglecting the higher-order terms. This is consistent with the perturbative approach, where we considered only 1/c^2 corrections. Since the trajectories of the ground and excited states no longer coincide (the excited trajectory falls), the phase difference between them will gain contribution from three different sources: the propagation phase, the laser phase, and the separation phase. The first two are defined in the same way as in the previous (perturbative) considerations, and the third one is given by δ_s(t)≡p̅(t)Δ z(t)/ħ, where p̅(t) is the mean momentum of two trajectories at time t (the oscillation starts at t=0), and Δ z(t) is the separation between them (note that this has nothing to do with the separation Δ z between the upper and lower trajectory). Since the phase acquired due to the interaction with the clock pulses is position-independent, it will not change in the non-perturbative analysis. Therefore, we consider only the phases acquired during the Bloch oscillations within the optical lattice. The trajectory followed by both trajectories has been described above. We analyze only the time between the two first interactions on the ground state trajectory (at some arbitrary height z_B) and show that the phase difference between the ground and the excited state trajectories agrees with the perturbative calculations. It is straightforward that it will agree with the perturbative calculations also at later times. The propagation phases acquired on both trajectories in the first part of the oscillation (before the second interaction of the excited state) can be calculated using the general formula for the propagation phase in free fall: ϕ_p^(g/e)(t)=m^(g/e)/ħ[(-c^2+1/2v_0^2-gz_0)t-v_0gt^2+1/3g^2t^3], (this is just Eq. (<ref>) with t_A→0 and t_B→ t) where m^(g)=m and m^(e)=m+δ m are the masses of the atom in the ground and excited state, respectively, v_0 is the initial velocity, and z_0 is the initial height. For the ground state, we replace v_0 by v_B, while for the excited state by v_B-δ v_B. 
The initial height for both states is the same and equal to z_B. The difference in propagation phases at time t is then given by δ_p(t)=ϕ_p^(e)(t)-ϕ_p^(g)(t)=δ m/ħ[(-c^2+1/2v_B^2-gz_B)t-v_Bgt^2+1/3g^2t^3]-m/ħ(v_B-gt)δ v_Bt (recall that we neglect the terms higher-order than 1/c^2). Let us notice that, since δ v_B t is just the separation Δ z(t) between the trajectories, the second term in the above formula is equal in value but with the opposite sign to the separation phase δ_s(t)=p̅(t)δ v_B t/ħ=m/ħ(v_B-gt)δ v_Bt, where p̅(t)=m(v_B-gt)+𝒪(1/c^2). Therefore, the total phase difference for t<τ_B-δ v_B/g is given by δ_p(t)+δ_s(t)=δ m/ħ[(-c^2+1/2v_B^2-gz_B)t-v_Bgt^2+1/3g^2t^3], in agreement with the perturbative results (indeed, this is just the propagation phase correction calculated in the perturbative approach). At the time τ_B-δ v_B/g the interaction of the atom on the excited trajectory with the laser occurs, and it acquires a laser phase ϕ_l^(e)=m/ħ2v_B(z_B-δ z_B)=ϕ_l^(g)-m/ħδ v_Bτ_B+𝒪(δ m^2), where ϕ_l^(g) is the laser phase acquired by the ground state trajectory at time τ_B. Let us notice that right before the second interaction of the excited state the separation phase is equal to δ_s(τ_B-Δ v/g-ε)=-m/ħv_Bδ v_Bt, while right after the interaction it is ∼𝒪(1/c^4), because the mean momentum is then ∼𝒪(1/c^2). This situation (mean momentum ∼𝒪(1/c^2) and thus negligible value of separation phase) persists until the second interaction on the ground state trajectory, at which time both trajectories are at the same height, but the excited one has the velocity v_B-2δ v_B. At this point, we can repeat the whole analysis changing everywhere δ v_B→2δ v_B. Since the propagation phase difference acquired between the second interaction on the excited and on the ground trajectories is also ∼𝒪(δ m^2), the total phase difference after the first oscillation is equal to δ=δ_p+δ_s+δ_l=δ m/ħ[(-c^2+1/2v_B^2-gz_B)τ_B-v_Bgτ_B^2+1/3g^2τ_B^3]=δ m/ħ(-c^2-v_B^2/6-gz_B)τ_B, which is equal to the propagation phase correction calculated for a single oscillation in the perturbative approach (compare with Eq. (<ref>)).
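The closed form for δ_{p,1} derived above can be checked symbolically by integrating δL over a single parabolic bounce z(t) = z_B + v_B t − g t²/2 for t ∈ [0, 2v_B/g]; the short SymPy sketch below performs this check under exactly those idealized assumptions.

```python
import sympy as sp

t, v_B, g, z_B, c, dm, hbar = sp.symbols('t v_B g z_B c delta_m hbar', positive=True)

# One Bloch "bounce": launched upward at v_B from the interaction height z_B,
# returning to the same height after tau_B = 2 v_B / g.
z = z_B + v_B * t - g * t**2 / 2
v = sp.diff(z, t)

# Mass-energy correction to the Lagrangian, delta L = -dm (c^2 + g z - v^2 / 2).
dL = -dm * (c**2 + g * z - v**2 / 2)

# Propagation-phase correction accumulated over one oscillation.
delta_p1 = sp.integrate(dL, (t, 0, 2 * v_B / g)) / hbar
expected = (dm / hbar) * (-c**2 - v_B**2 / 6 - g * z_B) * (2 * v_B / g)

print(sp.simplify(delta_p1 - expected))   # -> 0, confirming the expression for delta_{p,1}
```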
http://arxiv.org/abs/2406.18509v1
20240626173022
Integral representations for the joint survival functions of the cumulated components of multinomial random vectors
[ "Frédéric Ouimet" ]
math.ST
[ "math.ST", "math.PR", "stat.TH", "62E17, 62H10, 62H12, 62E20" ]
a1]Frédéric Ouimet)frederic.ouimet2@mcgill.ca [a1]Department of Mathematics and Statistics, McGill University, Montréal (Québec) Canada H3A 0B9 § ABSTRACT This paper presents a multivariate normal integral expression for the joint survival function of the cumulated components of any multinomial random vector. This result can be viewed as a multivariate analog of Equation (7) from <cit.>, who improved Tusnády's inequality. Our findings are based on a crucial relationship between the joint survival function of the cumulated components of any multinomial random vector and the joint cumulative distribution function of a corresponding Dirichlet distribution. We offer two distinct proofs: the first expands the logarithm of the Dirichlet density, while the second employs Laplace's method applied to the Dirichlet integral. Dirichlet distribution, Gaussian approximation, Laplace's method, multinomial distribution, multivariate normal distribution, quantile coupling [2020]Primary: 62E17 Secondary: 62H10, 62H12, 62E20 empty § INTRODUCTION For any integer d∈, the d-dimensional simplex and its interior are defined by 𝒮_d = {s∈ [0,1]^d: s_1 ≤ 1}, Int(𝒮_d) = {s∈ (0,1)^d: s_1 < 1}, where s_1 = ∑_i=1^d |s_i| denotes the ℓ_1-norm in ^d. Given a set of probability weights p∈Int(𝒮_d), the Multinomial(n,p) probability mass function is defined, for all k∈_0^d ∩ n 𝒮_d, by p_n(k) = n!/(n - k_1)! ∏_i=1^d k_i! p_d+1^n - k_1∏_i=1^d p_i^k_i, where p_d+1 = 1 - p_1∈ (0,1) and n∈. The covariance matrix of the multinomial distribution is well-known to be n Σ_p, where Σ_p = diag(p) - pp^⊤; see, e.g., <cit.>. From Theorem 1 of <cit.>, it is also well known that (Σ_p) = p_1 … p_d p_d+1. The centered multivariate normal density with the same covariance structure as the above multinomial distribution is defined, for all x∈^d, by ϕ_Σ_p(x) = exp(-x^⊤Σ_p^-1 x / 2)/√((2π)^d (Σ_p)). Let [d] = {1,…,d}. Let I_1 = (0,p_1] and I_j = (p_1 + … + p_j-1, p_1 + … + p_j] for all j∈ [d]\{1}. If U_1, …,U_n iid∼Uniform(0,1), then X = (X_1,…,X_d) ≡(∑_i=1^n _{U_i∈ I_1},…,∑_i=1^n _{U_i∈ I_d})∼Multinomial(n,p). If U_(1)≤…≤ U_(n) denote the order statistics, then (X_1 + … + X_i ≥ k_1 + … + k_i,  ∀ i∈ [d]) = (U_(k_1 + … + k_i)≤ p_1 + … + p_i,  ∀ i∈ [d]) = ∫_0^p_1∫_u_1^p_1+p_2…∫_u_d-1^p_1+…+p_d n! ∏_i=1^d+1(u_i - u_i-1)^k_i-k_i-1-1/(k_i - k_i-1 - 1)! u_1 u_2 … u_d, where one defines u_0 = 0, u_d+1 = 1, k_0 = 0, k_d+1 = n+1 in the multidimensional integral. After applying the change of variables s_i = u_i - u_i-1 and j_i = k_i - k_i-1, for all i∈ [d], together with j_d+1 = k_d+1 - k_d = (n + 1) - ∑_i=1^d j_i, the above can be rewritten as (X_1 + … + X_i ≥ k_1 + … + k_i,  ∀ i∈ [d]) = ∫_0^p_1∫_0^(p_1 - s_1) + p_2…∫_0^∑_k=1^d-1 (p_k - s_k) + p_d n! ∏_i=1^d+1s_i^j_i-1/(j_i - 1)!s. This neat equation provides a direct relationship between the joint survival function of the cumulated components of any multinomial random vector and the joint cumulative distribution function of a corresponding Dirichlet distribution. In turn, letting N = n - d, J_i = j_i - 1 for all i∈ [d+1], and defining the region ℛ_d = {s∈𝒮_d : (s_1,s_2,…,s_i)∈(∑_k=1^i p_k)𝒮_i   ∀ i∈ [d]}, one can write (X_1 + … + X_i ≥ k_1 + … + k_i,  ∀ i∈ [d]) = ∫_ℛ_d(N + d)!/N!×N!/∏_i=1^d+1 J_i!∏_i=1^d+1 s_i^J_is. § QUANTILE COUPLING For any integer m∈, let λ_m denote the error term in Stirling's approximation for ln (m!). It is well known that, for all m∈, ln(m!) = 1/2ln(2π m) + m ln m - m + λ_m, where (12 m + 1)^-1≤λ_m ≤ (12 m)^-1; see, e.g., <cit.>. 
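As a numerical aside, the order-statistics identity from the Introduction, which underpins the integral representation developed below, is easy to validate by simulation, and the Stirling error bounds just quoted can be checked in the same script. The following sketch estimates both sides of the identity by Monte Carlo; the particular values of n, p and k are illustrative and not taken from the text:

```python
import numpy as np
from math import lgamma, log, pi

rng = np.random.default_rng(0)
n, d = 20, 2
p = np.array([0.2, 0.3, 0.5])        # p_1, ..., p_{d+1}; illustrative values
k = np.array([3, 8])                 # k_1, ..., k_d; illustrative values
reps = 200_000

# Left side: P(X_1 + ... + X_i >= k_1 + ... + k_i for all i), with X ~ Multinomial(n, p).
X = rng.multinomial(n, p, size=reps)[:, :d]
lhs = np.all(np.cumsum(X, axis=1) >= np.cumsum(k), axis=1).mean()

# Right side: P(U_(k_1+...+k_i) <= p_1 + ... + p_i for all i) for the order statistics
# of n iid Uniform(0,1) variables, as in the construction of the Introduction.
U = np.sort(rng.random((reps, n)), axis=1)
idx = np.cumsum(k) - 1               # 1-based order-statistic labels to 0-based indices
rhs = np.all(U[:, idx] <= np.cumsum(p[:d]), axis=1).mean()
print(lhs, rhs)                      # the two estimates agree up to Monte Carlo error

# Quick check of the Stirling bounds quoted above: 1/(12m+1) <= lambda_m <= 1/(12m).
for m in (1, 5, 50):
    lam = lgamma(m + 1) - (0.5 * log(2 * pi * m) + m * log(m) - m)
    print(m, 1 / (12 * m + 1) <= lam <= 1 / (12 * m))
```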
Together with the notation introduced in Section <ref>, define, for all i∈ [d+1], _i = J_i / N - p_i/p_i, _i = p_i _i = J_i/N - p_i, and also Λ_N = λ_N - ∑_i=1^d+1λ_i, Δ_N = ln{(N + d)!/N! N^d} + Λ_N - 1/2∑_i=1^d+1ln (1 + _i) - N γ(), γ() = ∑_i=1^d+1 p_i (1 + _i) ln (1 + _i) - 1/2^⊤Σ_p^-1, γ^⋆(s) = ∑_i=1^d+1 p_i (1 + _i) ln(s_i/p_i) - {^⊤Σ_p^-1 (s - p) - 1/2 (s - p)^⊤Σ_p^-1 (s - p)}. Theorem <ref> below expresses the joint survival function of the cumulated components of any multinomial random vector in terms of a multivariate normal integral. The result can be seen as a multivariate analog of Eq. (7) of <cit.>, who improved on Tusnády's inequality. For an overview of the literature on Tusnády's inequality and the most recent improvement in the bulk, see <cit.> and <cit.>, respectively. For all k∈_0^d ∩ n 𝒮_d, one has (X_1 + … + X_i ≥ k_1 + … + k_i,  ∀ i∈ [d]) = e^Δ_N∫_ℛ_dexp{N γ^⋆(s)} N^d/2ϕ_Σ_p{N^1/2 (p - s + )}s. Useful alternative expressions for γ() and γ^⋆(s) can be found in (<ref>) and (<ref>), respectively. Theorem <ref> also complements the local limit theorem for the multinomial distribution developed independently by <cit.> and <cit.>, and the related non-asymptotic approximations for Pearson's chi square statistic by <cit.>. § PROOFS Note that ∑_i=1^d+1 J_i = N. By taking the logarithm of the integrand in (<ref>) and applying Stirling's formula (<ref>), one obtains ln{(N + d)!/N!×N!/∏_i=1^d+1 J_i!∏_i=1^d+1 s_i^J_i} = ln{(N + d)!/N!} + ln (N!) - ∑_i=1^d+1ln (J_i!) + ∑_i=1^d+1 J_i ln s_i = ln{(N + d)!/N! N^d} - 1/2ln{(2π/N)^d ∏_i=1^d+1 (J_i/N)} + ∑_i=1^d+1 J_i ln(s_i/J_i / N) + λ_N - ∑_i=1^d+1λ_i = ln{(N + d)!/N! N^d} - 1/2∑_i=1^d+1ln (1 + _i) - 1/2ln{(2π/N)^d ∏_i=1^d+1 p_i} + N ∑_i=1^d+1 p_i (1 + _i) ln(s_i/p_i) + N ∑_i=1^d+1 (J_i / N) ln(p_i/J_i / N) + Λ_N. Using the Taylor expansion (1 + x) ln(1 + x) = x + x^2/2 - x^3/6 + x^4/12 + _η(x^5), |x| ≤η < 1, and the fact that _d+1 = -∑_i=1^d _i, one can write ∑_i=1^d+1 (J_i / N) ln(p_i/J_i / N) = - 1/2^⊤Σ_p^-1 - γ(), where the quantity γ(), as defined in (<ref>), can be expanded as follows: γ() ≡∑_i=1^d+1 p_i (1 + _i) ln (1 + _i) - 1/2^⊤Σ_p^-1 = 1/2∑_i,j=1^d _i _j {1/p_i(i = j) + 1/p_d+1} - 1/2^⊤Σ_p^-1 - 1/6∑_i,j,k=1^d _i _j _k {1/p_i^2(i = j = k) - 1/p_d+1^2} + 1/12∑_i,j,k,ℓ=1^d _i _j _k _ℓ{1/p_i^3(i = j = k = ℓ) + 1/p_d+1^3} + _d,p(_1^5). Moreover, by <cit.>, it is known that (Σ_p^-1)_ij = p_i^-1_{i = j} + p_d+1^-1 for all i,j∈ [d], so 1/2∑_i,j=1^d _i _j {1/p_i(i = j) + 1/p_d+1} - 1/2^⊤Σ_p^-1 = 0. Therefore, using the last two equations, one can rewrite γ() as follows: γ() =- 1/6∑_i,j,k=1^d _i _j _k {1/p_i^2(i = j = k) - 1/p_d+1^2} + 1/12∑_i,j,k,ℓ=1^d _i _j _k _ℓ{1/p_i^3(i = j = k = ℓ) + 1/p_d+1^3} + _d,p(_1^5). Now, consider s = (s_1,…,s_d)∈ℛ_d as defined in (<ref>), and let δ_i(s) = s_i - p_i/p_i,   i∈ [d], δ_d+1(s) = s_d+1 - p_d+1/p_d+1 = - ∑_i=1^d (s_i - p_i)/p_d+1. In particular, note that this definition yields ∑_i=1^d+1 p_i δ_i(s_i) = 0. 
Upon applying a second-order version of the Taylor expansion ln(1 + x) = x - x^2/2 + x^3/3 + _η(x^4), |x| ≤η < 1, with Lagrange remainder, there exists a vector s^⋆ = (s_1^⋆,…,s_d^⋆) on the line segment joining s and p = (p_1,…,p_d) such that ∑_i=1^d+1 p_i (1 + _i) ln(s_i/p_i) = ∑_i=1^d+1 p_i (1 + _i) ln{1 + δ_i(s_i)} = ∑_i=1^d+1 p_i (1 + _i) δ_i(s_i) - 1/2∑_i=1^d+1 p_i (1 + _i) {δ_i(s_i)}^2 + 1/3∑_i=1^d+1 p_i (1 + _i) {δ_i(s_i)}^3/{1 + δ_i(s_i^⋆)}^3 = ∑_i=1^d+1_i δ_i(s_i) - 1/2∑_i=1^d+1 p_i {δ_i(s_i)}^2 - 1/2∑_i=1^d+1_i {δ_i(s_i)}^2 + 1/3∑_i=1^d+1 p_i (1 + _i) {δ_i(s_i)}^3/{1 + δ_i(s_i^⋆)}^3 = ^⊤Σ_p^-1 (s - p) - 1/2 (s - p)^⊤Σ_p^-1 (s - p) - 1/2∑_i=1^d+1_i {δ_i(s_i)}^2 + 1/3∑_i=1^d+1 p_i (1 + _i) {δ_i(s_i)}^3/{1 + δ_i(s_i^⋆)}^3, where = (_1,…,_d). Using the last equation, the quantity γ^⋆(s), as defined in (<ref>), can be rewritten as γ^⋆(s) ≡∑_i=1^d+1 p_i (1 + _i) ln(s_i/p_i) - {^⊤Σ_p^-1 (s - p) - 1/2 (s - p)^⊤Σ_p^-1 (s - p)} = - 1/2∑_i=1^d+1_i {δ_i(s_i)}^2 + 1/3∑_i=1^d+1 p_i (1 + _i) {δ_i(s_i)}^3/{1 + δ_i(s_i^⋆)}^3. It readily follows that ∑_i=1^d+1 p_i (1 + _i) ln(s_i/p_i) = ^⊤Σ_p^-1 (s - p) - 1/2 (s - p)^⊤Σ_p^-1 (s - p) + γ^⋆(s) = 1/2^⊤Σ_p^-1 - 1/2 (s - J/N)^⊤Σ_p^-1 (s - J/N) + γ^⋆(s). Therefore, by putting (<ref>) and (<ref>) back into (<ref>), and exponentiating, one gets (N + d)!/N!×N!/∏_i=1^d+1 J_i!∏_i=1^d+1 s_i^J_i = exp{Δ_N + N γ^⋆(s)}exp{-N (s - J/N)^⊤Σ_p^-1 (s - J/N) / 2}/√((2π / N)^d (Σ_p)) = exp{Δ_N + N γ^⋆(s)} N^d/2ϕ_Σ_p{N^1/2 (J/N - s)} = exp{Δ_N + N γ^⋆(s)} N^d/2ϕ_Σ_p{N^1/2 (p - s + )}, where recall Δ_N = ln{(N + d)!/(N! N^d)} + Λ_N - (1/2) ∑_i=1^d+1ln (1 + _i) - N γ() from (<ref>). Putting this last equation in (<ref>) yields (X_1 + … + X_i ≥ k_1 + … + k_i,  ∀ i∈ [d]) = e^Δ_N∫_ℛ_dexp[N γ^⋆(s)] N^d/2ϕ_Σ_p{N^1/2 (p - s + )}s. This concludes the proof. For all s∈𝒮_d, define H(s) = ∑_i=1^d+1 p_i (1 + _i) ln s_i = ∑_i=1^d+1J_i/Nln s_i. For all k∈_0^d ∩ n 𝒮_d, the integral representation of the joint survival function of the cumulated components of any multinomial random vector derived in (<ref>) can be rewritten as (X_1 + … + X_i ≥ k_1 + … + k_i,  ∀ i∈ [d]) = (N + d)!/N!×N!/∏_i=1^d+1 J_i!∫_ℛ_dexp{N H(s)}s. By Stirling's formula (<ref>), note that m! = √(2π m) exp(m ln m - m + λ_m), where (12 m + 1)^-1≤λ_m ≤ (12 m)^-1. Since ∑_i=1^d+1 J_i = N and Λ_N = λ_N - ∑_i=1^d+1λ_i, one deduces that N!/∏_i=1^d+1 J_i! = exp(Λ_N)/√((2π N)^d ∏_i=1^d+1{p_i (1 + _i)})exp{- N H(J/N)}, Therefore, one can rewrite (<ref>) as (X_1 + … + X_i ≥ k_1 + … + k_i,  ∀ i∈ [d]) = (N + d)!/N! N^d×N^d/2exp(Λ_N)/√((2π)^d ∏_i=1^d+1{p_i (1 + _i)})∫_ℛ_dexp[N {H(s) - H(J/N)}] s. Decompose the integral above as ∫_ℛ_dexp[N {H(s) - H(p)} + N {H(p) - H(J/N)}] s. Using the quantity γ() as defined in (<ref>), one has H(p) - H(J/N) = - ∑_i=1^d+1 p_i (1 + _i) ln(1 + _i) = - 1/2^⊤Σ_p^-1 - γ(), where recall = (p_1 _1,…, p_d _d)^⊤, _d+1 = p_d+1_d+1, and Σ_p^-1 = {p_i^-1(i = j) + p_d+1^-1}_1 ≤ i,j ≤ d. Recall from (<ref>) that Δ_N = ln{(N + d)!/N! N^d} + Λ_N - 1/2∑_i=1^d+1ln(1 + _i) - N γ(), so one can rewrite (<ref>) as (X_1 + … + X_i ≥ k_1 + … + k_i,  ∀ i∈ [d]) = N^d/2exp(Δ_N)/√((2π)^d (Σ_p))∫_ℛ_dexp[N {H(s) - H(p)} - N/2^⊤Σ_p^-1] s. Using (<ref>) and (<ref>), one has H(s) - H(p) = ∑_i=1^d+1J_i/Nln (s_i) - ∑_i=1^d+1J_i/Nln (p_i) = ∑_i=1^d+1 p_i (1 + _i) ln(s_i/p_i) = ^⊤Σ_p^-1 (s - p) - 1/2 (s - p)^⊤Σ_p^-1 (s - p) - 1/2∑_i=1^d+1_i {δ_i(s_i)}^2 + 1/3∑_i=1^d+1 p_i (1 + _i) {δ_i(s_i)}^3/{1 + δ_i(s_i^⋆)}^3 = ^⊤Σ_p^-1 (s - p) - 1/2 (s - p)^⊤Σ_p^-1 (s - p) + γ^⋆(s) = 1/2^⊤Σ_p^-1 - 1/2 (s - J/N)^⊤Σ_p^-1 (s - J/N) + γ^⋆(s). 
Putting this in the previous equation yields (X_1 + … + X_i ≥ k_1 + … + k_i,  ∀ i∈ [d]) = e^Δ_N∫_ℛ_dexp{N γ^⋆(s)} N^d/2ϕ_Σ_p{N^1/2 (J/N - s)}s = e^Δ_N∫_ℛ_dexp{N γ^⋆(s)} N^d/2ϕ_Σ_p{N^1/2 (p - s + )}s. This concludes the proof. The choice of decomposition in (<ref>) can be explained as follows. The Hessian matrix of H at s is equal to (-J_i/N s_i^2(i = j) - J_d+1/N s_d+1^2)_1 ≤ i,j ≤ d, which is (symmetric) negative definite, so that H is strictly concave on Int(𝒮_d). Given the strict concavity and the fact that s = J/N∈Int(𝒮_d) is the unique point satisfying ∇ H(s) = (J_i/N s_i - J_d+1/N s_d+1)_1 ≤ i ≤ d = 0 in the interior of the simplex, the global maximum of H on Int(𝒮_d) is achieved at J/N. When the point k∈_0 ∩ n Int(𝒮_d) is such that J/N falls outside the range of integration ℛ_d defined in (<ref>), then H maximizes at s = p inside the region of integration ℛ_d. The decomposition is therefore equivalent to an application of Laplace's method provided some restrictions on k are met. § FUNDING tocsectionFunding F. Ouimet's current postdoctoral fellowship is funded through the Canada Research Chairs Program (Grant 950-231937 to Christian Genest) and the Natural Sciences and Engineering Research Council of Canada (Grant RGPIN-2024-04088 to Christian Genest). tocsectionReferences plainnat
http://arxiv.org/abs/2406.18779v1
20240626224407
Flat and tunable moire phonons in twisted transition-metal dichalcogenides
[ "Alejandro Ramos-Alonso", "Benjamin Remez", "Daniel Bennett", "Rafael M. Fernandes", "Hector Ochoa" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
[]rfernand@umn.edu []ho2273@columbia.edu § ABSTRACT Displacement fields are one of the main tuning knobs employed to engineer flat electronic band dispersions in twisted van der Waals multilayers. Here, we show that electric fields can also be used to tune the phonon dispersion of moiré superlattices formed by non-centrosymmetric materials, focusing on twisted transition metal dichalcogenide homobilayers. This effect arises from the intertwining between the local stacking configuration and the formation of polar domains within the moiré supercell. For small twist angles, increasing the electric field leads to a universal moiré phonon spectrum characterized by a substantially softened longitudinal acoustic phason mode and a flat optical phonon mode, both of which cause a significant enhancement in the vibrational density of states. The phasons also acquire a prominent chiral character, displaying a nonzero angular momentum spread across the Brillouin zone. We discuss how the tunability of the moiré phonon spectra may affect electronic properties, focusing on the recently discovered phenomenon of van der Waals ferroelectricity. Flat and tunable moiré phonons in twisted transition-metal dichalcogenides Héctor Ochoa July 1, 2024 ========================================================================== § INTRODUCTION Layered van der Waals (vdW) materials are formed by the assembly of crystalline membranes, combining properties that are characteristic of both hard and soft condensed matter <cit.>. Graphene, for example, is very stiff within the plane of the honeycomb lattice, but extremely soft with respect to vertical deformations such as ripples and corrugation, whose strain fields can dramatically alter the local electronic properties <cit.>. While encapsulation, epitaxy, or stacking effectively quench these out-of-plane deformations, in these cases a new soft degree of freedom often appears associated with the emergence of incommensurate superstructures: stacking order fluctuations. In the case of moiré materials, resulting from a lattice mismatch between vdW layers <cit.>, the incommensurate moiré superlattice is plagued by soft vibrational modes altering the local stacking arrangement <cit.>. The role of these moiré phonons (and, in particular, of their acoustic modes dubbed phasons) on the physics of twisted devices is an area of active research <cit.>. Indeed, many moiré materials display interesting macroscopic quantum phenomena ranging from incompressible Hall states <cit.> to superconductivity <cit.>. One of the reasons why the impact of the moiré phonons on these electronic phenomena remains little understood is because of the difficulty in tuning these soft modes by externally controlled parameters. In contrast, the electronic degrees of freedom can be efficiently tuned by electrostatic gating, displacement fields and the twist angle itself <cit.>. In addition to correlated phenomena, recent experiments <cit.> have revealed ferroelectricity in untwisted and twisted vdW bilayers. These findings have been interpreted in terms of sliding between the layers that changes the global or local stacking order such that inversion symmetry is broken and an out-of-plane polarization emerges <cit.>. In the case of a moiré superlattice formed by twisting identical transition metal dichalcogenide (TMD, formulae MX_2) layers about a non-centrosymmetric configuration (see Fig. 
<ref>), this mechanism results in an out-of-plane polarization texture defining polar domains tied to the local stacking order <cit.>. As shown in Fig. <ref>, the two types of polar domains are associated with the local stackings MX/XM, in which the metal atom of one layer aligns with the chalcogen atom of the other layer, thus breaking mirror symmetry. The different relative stackings also give rise to in-plane polarization textures with topological character <cit.>. Since the polarization couples directly to electric fields, and since the former is tied to the local stacking order, this observation opens a promising path to control the stacking-order fluctuations by an external parameter in moiré devices. In this work, we show how moiré phonons in polar moiré materials can be tuned using an out-of-plane electric field. Via its coupling to the polar domains, the latter causes commensurate stackings to locally grow and shrink through a bending of the moiré domain walls, resulting in a ferroelectric response <cit.>. Importantly, these changes in the stacking domain configuration dramatically alter the low-energy phonon spectra (see Fig. <ref> b): As the symmetry of the moiré domain pattern evolves from triangular to honeycomb, the lowest-energy longitudinal phason mode of the moiré pattern softens and low-energy optical modes become substantially flattened, enhancing the phonon density of states (DOS). These results provide a pathway to engineer the properties of the moiré phonons and thus elucidate their impact on the electronic properties of twisted vdW materials. § MODEL We focus on TMD MX_2 homobilayers, though our conclusions can also be extended to heterobilayers <cit.>. We identify a stacking configuration with a two-dimensional vector ϕ=u_t-u_b corresponding to a lateral displacement of the top (t) layer with respect to the bottom (b) layer. The lattice mismatch introduced by twisting causes the stacking configuration to modulate in space. The twist angle is assumed to be small, so that the modulation is smooth on the atomic scale. The corresponding free-energy functional ℱ[ϕ(𝐫);ℰ_z] depends parametrically on the perpendicular external electric field ℰ_z via <cit.>: ℱ[ϕ;ℰ_z]=ℱ_elas[ϕ]+ℱ_adh[ϕ]+ℱ_elec[ϕ;ℰ_z]. The first term represents the usual elastic energy cost associated with heterostrain (repeated indices are summed) ℱ_elas[ϕ(𝐫)]=∫ d𝐫 [λ/4(∂_iϕ_i)^2 + μ/8(∂_kϕ_i+∂_iϕ_k)^2] , where λ, μ are the monolayer Lamé coefficients. The second term introduces the adhesion energy cost of the different stacking configurations due to the vdW interaction between layers. Because by construction ℱ[ϕ+𝐚]=ℱ[ϕ], where 𝐚 is a monolayer Bravais vector, the adhesion energy density can be expanded in Fourier harmonics, ℱ_adh[ϕ(𝐫)]= ∫ d𝐫 [ V_0 +1/2∑_n∑_j=1^3Re(V_n e^ig_n^(j)·ϕ(𝐫)) ]. Hereafter g_n^(j), j=1,2,3, represent monolayer reciprocal lattice vector triplets related by 120^∘ rotations. Finally, the third term in Eq. (<ref>) represents the coupling of ℰ_z to the modulating local dipole moments spontaneously formed within the moiré supercell <cit.>, ℱ_elec[ϕ(𝐫);ℰ_z] = -∫ d𝐫 p_z[ϕ(𝐫)]ℰ_z. p_z[ϕ(𝐫)] represents the out-of-plane dipole density p=p_zẑ in stackings breaking mirror reflection and has a similar Fourier expansion: p_z[ϕ(𝐫)]=1/2∑_n∑_j=1^3Im{p_n e^ig_n^(j)·ϕ(𝐫)}. The twist produces a moiré pattern containing MM, XM and MX configurations, shown in Fig. <ref>. MM, which is mirror-symmetric and non-polar, is the reference configuration, ϕ=0⃗. 
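To make the role of these Fourier harmonics concrete, the sketch below evaluates the first-star contributions to ℱ_adh and p_z over one unit cell of stacking vectors ϕ. The coefficient values and the truncation to a single star are illustrative assumptions of this sketch; the coefficients actually used are fitted to first-principles data as described in the Supplemental Material, and the lattice constant is obtained from the interatomic distance quoted there.

```python
import numpy as np

d = 1.8267                      # interatomic distance (angstrom), quoted in the Supplemental Material
a = np.sqrt(3) * d              # monolayer lattice constant, roughly 3.16 angstrom for MoS2
V1, p1 = 1.0, 0.1               # first-star coefficients in arbitrary units; illustrative values only

# First star of monolayer reciprocal lattice vectors, related by 120-degree rotations.
g0 = 4 * np.pi / (np.sqrt(3) * a)
angles = np.deg2rad([90.0, 210.0, 330.0])
g = g0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)      # shape (3, 2)

# Grid of stacking vectors phi covering one unit cell, in Cartesian coordinates.
a1 = a * np.array([1.0, 0.0])
a2 = a * np.array([0.5, np.sqrt(3) / 2])
s1, s2 = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
phi = s1[..., None] * a1 + s2[..., None] * a2                    # shape (200, 200, 2)

phase = np.exp(1j * np.tensordot(phi, g.T, axes=1))              # e^{i g_j . phi}
E_adh = 0.5 * np.sum(np.real(V1 * phase), axis=-1)               # adhesion energy density, up to V_0
p_z = 0.5 * np.sum(np.imag(p1 * phase), axis=-1)                 # out-of-plane dipole density

# p_z is odd under the mirror exchanging MX and XM stackings, so it averages to zero over the cell.
print(E_adh.min(), E_adh.max(), p_z.mean())
```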
The MX/XM stackings are related by mirror reflection, and thus ℱ_adh (p_z) must be even (odd). Consequently, the coefficients V_n and p_n must be real. They are obtained from first-principles calculations of commensurate bilayers with a relative shift between the layers <cit.>. p_n are further rescaled to match the experimentally observed field magnitudes (see Supplementary Material SM). § RELAXATION PATTERNS We first determine the stacking texture ϕ_0(𝐫) that minimizes the free energy in Eq. (<ref>) subjected to an imposed global twist angle θ <cit.>; details of the numerical implementations can be found in the Supplementary Material SM (see also Refs. NIST:DLMF,nocedal1999numerical,Optim.jl-2018, bezanson2017julia). Density plots of the adhesion energy landscape of the relaxed structure, E_0(𝐫)=ℱ_adh[ϕ_0(𝐫)]+ℱ_elec[ϕ_0(𝐫);ℰ_z], are shown in Fig. <ref> for twisted bilayer MoS_2 (see model parameter values in the caption). At zero electric field, Fig. <ref>(a), the stacking domain walls (corresponding to solitons of ϕ) form a triangular network of partial dislocations that separates approximately uniform MX and XM configurations (darker areas), intersecting at MM stackings (brighter spots). The degree of relaxation increases as the twist angle decreases, reflecting the ratio between the moiré period, L_m∼ a/θ, and the characteristic domain-wall width, ℓ∼ a √(μ/V_1). In the presence of an electric field, Fig. <ref> (b), the MX and XM domains become energetically inequivalent. Consequently, one domain expands and the other contracts, bending the solitons. For small angles, the domain walls can be envisioned as strings under a linear tension σ imposed by the adhesion potential with curvature radius R given by <cit.> R ∼σ/3√(3)|ℰ_z|(p_1-p_3). As the field increases, R approaches the moiré period L_m and the solitons merge into perfect screw dislocations <cit.>. For strong electric fields, R ≪ L_m, the relaxation pattern evolves into a honeycomb soliton network (see last panel in Fig. <ref> b), fundamentally changing the stacking fluctuation spectrum. § MOIRÉ PHONONS Diagonalization of the dynamical matrix defined by the harmonic expansion of ℱ in the stacking fluctuations δϕ(𝐫,t)=ϕ(𝐫,t)-ϕ_0(𝐫) gives the moiré phonon spectra. For twisted MoS_2, they are shown in Fig. <ref> together with the corresponding relaxation patterns. The characteristic frequency ω_m=√(μ/3ρ)4π/L_m, where ρ=3·10^-6 kg/m^2 is the monolayer mass density, is of the order of 1 meV (THz range) around θ=1^∘. Figure <ref> (a) displays the phonon spectra in the moiré Brillouin zone (mBZ) for different twist angles at zero field. The two linearly dispersing acoustic-like branches emanating from the Γ point correspond to the sliding phasons of the relaxation pattern. Contrary to a crystalline lattice, the acoustic mode with the lowest energy is the longitudinal one <cit.>. Both phason velocities are approximately independent of the twist angle <cit.>, and closely match the corresponding monolayer speeds of sound (see black curve in Fig. <ref> c). The folding of the monolayer acoustic phonons into the mBZ gives rise to new optical phonon modes. The sharpening of the domain walls at small angles is manifested in the gradual flattening of some of the optical branches, see Fig. <ref>(a). The optical modes at Γ are labeled according to the irreducible representations of the D_6 point group of the model (D_3 for ℰ_z ≠ 0), and can be understood as different standing-wave patterns of the domain walls <cit.>. Fig. 
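A rough numerical illustration of this ratio is sketched below; the lattice constant is the approximate MoS_2 value, while the ratio μ/V_1 is a placeholder of this sketch, since the fitted adhesion coefficients are only quoted in the figure caption.

```python
import numpy as np

a = 3.16e-10          # monolayer lattice constant (m), approximate value for MoS2
mu_over_V1 = 1.0e3    # illustrative ratio of the shear modulus to the first adhesion harmonic

for theta_deg in (0.5, 1.0, 2.0):
    theta = np.deg2rad(theta_deg)
    L_m = a / theta                     # moire period, L_m ~ a / theta
    ell = a * np.sqrt(mu_over_V1)       # domain-wall width, ell ~ a * sqrt(mu / V_1)
    print(f"theta = {theta_deg} deg: L_m ~ {L_m * 1e9:.1f} nm, "
          f"ell ~ {ell * 1e9:.1f} nm, L_m/ell ~ {L_m / ell:.1f}")
```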
<ref> (b) shows the evolution of the phonon spectrum with increasing ℰ_z for fixed θ=1^∘. Moderate fields introduce appreciable changes, such as the inversion between singlet (A_2) and doublet (E) modes at the Γ point for ℰ_z between 0.1 and 0.2 V/nm. As the field increases further, the spectrum approaches a universal (i.e. θ-independent) shape that is well-described by the model of strings under tension of Ref. Koshino2023. Figures <ref>(a) and (b) further illustrate this point by comparing the spectra of two different angles and fields with the same value of L_m/R. In the honeycomb regime [panel (b)], the two phonon spectra nearly overlap and display very flat optical branches, which are manifested as sharp peaks in the vibrational DOS. Note also that in the limit ℰ_z →∞ the honeycomb soliton network recovers an emergent C_6 symmetry, leading to the eventual closing of some of the gaps at the K points (e.g. around ω/ω_m≈0.6 in Fig. <ref>b). § PHASON SOFTENING The effect of the electric field on the acoustic phason modes, and the lowest-energy longitudinal mode in particular, is striking. As shown in Figs. <ref>(b) and  <ref>, the longitudinal phason band becomes almost completely flat in the limit of large ℰ_z , leading to a sharp DOS peak close to ω=0. Indeed, Fig. <ref> (c) demonstrates that the longitudinal velocity is strongly softened by the electric field, deviating substantially from the monolayer value. This effect can be partly attributed to the ℰ_z-induced opening of gaps in the phonon spectrum at the mBZ corners, which causes level repulsion between phonon branches. More broadly, the existence of such a nearly zero-velocity acoustic mode in a honeycomb soliton network is not unexpected. In a system of sharp domain walls, the energy is essentially controlled by the wall length. Interestingly, in the honeycomb case, displacements of network vertices that preserve the wall orientations and connectivity leave the total length of the soliton network invariant <cit.>. This makes it possible to change the area of the hexagons with little energy cost, leading to a soft longitudinal phason. § PHONON ANGULAR MOMENTUM Because the electric field breaks the C_2z symmetry of the model, moiré phonon bands may acquire nonzero angular momentum L^q_z= iħ∑_Gδϕ_q+G×δϕ_q+G^* perpendicular to the bilayer <cit.>; here ϕ_q+G are the Fourier components of the fluctuation mode (the band index is omitted) <cit.>. Although these are often called chiral phonons <cit.>, note that modes at opposite points of the mBZ carry opposite angular momentum due to time-reversal symmetry. Figure <ref> displays the distribution of L^q_z for the acoustic modes within the mBZ for fixed θ = 1^∘ and two values of ℰ_z. While for small fields the angular momentum is concentrated around the K points, it spreads across the mBZ as ℰ_z increases. Moreover, its magnitude depends on the energy of the phason modes at the K points. Phason chirality, understood as finite but opposite values of L_z^q at inequivalent K points, can be traced back to complex conjugate eigenvalues of phason modes under C_3 rotations at those points (sometimes called pseudo-angular momentum <cit.>), which gives rise to different selection rules with circularly polarized light. In untwisted 2D materials, chiral phonons have been observed in transient infrared spectroscopy and Raman experiments <cit.>. For twisted moiré systems, the chirality and the softening of the acoustic phasons could in principle be measured using Brillouin-Mandelstam spectroscopy. 
In this technique, the incidence angle of light can be changed to probe phonons with different wavevectors, thus providing a measurement of the velocity of the modes <cit.>. § DISCUSSION Here, we showed that the low-energy moiré phonons of twisted bilayers composed of materials lacking a center of inversion are strongly influenced by the application of an out-of-plane electric field. The electric field couples to the polarization density which in turn is tied to the stacking texture, therefore changing the adhesion energy landscape and subsequently the geometry of the stacking relaxation patterns. These are described by three characteristic length scales: the moiré period L_m imposed by the twist, the domain wall width ℓ generated by the adhesion potential, and the curvature radius R [Eq. (<ref>)] of the walls introduced by the electric field. While the strongest effect of the field is found in the phason (acoustic) modes describing traveling waves of the domain wall system, there are noticeable effects in some of the low-energy optical modes. Gap openings at the mBZ corners due to the reduction of the symmetry contribute to better resolve them in energy, giving rise to sharper features in the vibrational DOS. The electric field also alters the energetic ordering of optical modes with different symmetries, such as the aforementioned crossing of singlet and doublet modes at the Γ point in Fig. <ref> (b). Some of these phonon energy shifts could be observed in optics experiments, since the doublet is both infrared and Raman active due to the lack of an inversion center in the structure. The moiré phonons acquire a universal (θ-independent) spectrum in the limit of large fields, L_m≫ R, when the geometry of the relaxation pattern changes from a triangular to a honeycomb soliton network. This behavior is attributed to the fact that the phonon energetics are completely dominated by the length of the domain walls. In this regime, several phonon modes become flat. The flat optical phonons can be understood as standing waves of the domain walls subjected to the new boundary conditions of the field-deformed domain-wall network <cit.>. On the other hand, the flattening of the longitudinal phason mode is a characteristic feature of the properties of the honeycomb soliton network. Importantly, reaching this regime of large fields is experimentally feasible (ℰ_z=0.5 V/nm for θ=0.5^∘, see Fig. <ref> b). In what concerns the interplay with moiré ferroelectricity, phonons are believed to strongly renormalize the ferroelectric switching field and depolarization temperature in aligned bilayer <cit.>. It will be interesting to elucidate if/how the phonon softening impacts this behavior. More broadly, the softening of the longitudinal phasons suggests that electric fields contribute to making the moiré pattern with a well-defined twist angle more unstable. Nevertheless, at non-zero temperatures and large electric fields, the moiré pattern can still be stable due to the large configurational entropy of the honeycomb soliton network <cit.>. In this regard, the phonon spectra of entropically stablized moiré patterns should acquire a strong dependence on temperature that is not captured by our theory. In conclusion, in polar moiré materials, electric fields can be used to tune the flatness of the phonon dispersion, akin to how the twist angle and the displacement field are used to flatten the electronic dispersion. 
Because the flat optical and phason modes give rise to sharp peaks in the low-energy vibrational DOS, they should be manifested in the specific heat and thermal conductivity. Moreover, the electric field should also affect electronic properties that are directly impacted by the electron-phonon coupling. In this regard, the prospect of experimentally controlling the moiré phonon spectrum provides a unique opportunity to elucidate whether phonons play a prominent role in promoting some of the electronic phenomena observed in twisted moié systems, such as superconductivity <cit.> or linear-in-T resistivity <cit.>. § ACKNOWLEDGEMENTS R.M.F. was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Science and Engineering Division, under Award No. DE-SC0020045. D.B. acknowledges the US Army Research Office (ARO) MURI project under grant No. W911NF-21-0147 and the Simons Foundation award No. 896626. B.R.  was supported by the Yale Postdoctoral Prize fellowship. Supplemental Information: Flat and tunable moiré phonons in twisted transition-metal dichalcogenides toc §.§ Estimation of the model parameters Initial values for V_n and p_n parametrizing ℱ_adh[ϕ(𝐫)] and p_z[ϕ(𝐫)] in the main text were taken from first-principles calculations in Ref. <cit.>. From these numbers we obtain a coercive field for polarization switching in commensurate stackings of ℰ_ c = 13.3 V/nm. It has been put forward that the associated bare energy barrier height is strongly renormalized downwards by vibrational fluctuations <cit.>, which this estimate does not include. In addition to that, first-principles calculations tend to overestimate the coercive electric field values in ferroelectric materials. The typical observed coercive fields in transition metal dichalcogenides are approximately 0.15 V/nm <cit.>. We therefore apply a scaling correction to the dipole coefficients, p_n →χ p_n, in order to align our results with experiment. A simple estimate of the coercive field is the electric field required to overcome the energy barrier. It can be obtained as follows: From the energy of a commensurate, rigid displacement, F_0(ϕ⃗; ℰ_z) = ℱ_adh[ϕ⃗] + ℱ_elec[ϕ⃗; ℰ_z ], we determine the field value ℰ_z at which E_0(ϕ⃗_ MX;ℰ_z) = E_0(ϕ⃗_ DW; ℰ_z), where ϕ⃗_ MX = (a⃗_1 + a⃗_2)/3 (one third of a unit cell diagonal) corresponds to the MX stacking, and ϕ⃗_ DW = (a⃗_1 + a⃗_2)/2 (one half of a unit cell diagonal) corresponds to the high-energy intermediate stacking in a domain wall; here a⃗_1,2 are primitive vectors of the Bravais lattice. In this way, a coercive field of ℰ_ c = 0.15 V/nm is achieved by taking χ = 89. The rescaled parameters we use are provided in the caption of Fig. <ref> in the main text. The qualitative physics in this work are not affected by this correction, but this allows us to make better comparisons with experimental measurements, in particular with the relaxation patterns as a function of electric field <cit.>. §.§ Symmetries of the model The spatial symmetries of a twisted TMD bilayer are contained in a chiral point group symmetry no larger than D_3. However, for ℰ_z=0 our model has a larger 6-fold symmetry owing to the fact that the adhesion energies of stacking configurations related by mirror reflection (e.g., XM and MX stackings) are the same. 
The 3-fold rotational symmetry is restored by ℰ_z≠ 0, however, the model still retains spurious C_2-rotation axes (or, for a planar group, mirror reflection planes) within the plane of the sample, despite the fact that the actual symmetry is the chiral and polar group C_3. For this reason, in the main text we label the phonon modes by irreducible representations (irreps) of D_6 in the absence of electric fields (or C_6v, despite that the actual symmetry of the system is D_3) and in the presence of an electric field, as irresps of D_3 (or C_3v, despite that the actual symmetry of the system is C_3 in that case). The spurious symmetries introduced in the model are ultimate consequence of our coarse-graining in which the two layers are not treated individually: By construction, the displacements on each layer are always of the same magnitude but in opposite directions. §.§ Moiré relaxation In the relaxation problem we look for the stacking field ϕ⃗_0(r⃗) that minimizes the free energy of the bilayers rotated by a twist angle θ. We decompose ϕ⃗_0 as ϕ_0(r⃗)=2sinθ/2 ẑ×r⃗+u_0(r⃗), where the first term encodes a rigid twist and the second represents relaxation. The relaxation displacement field u_0(𝐫) is periodic with the moiré periodicity, and is spanned by u⃗_0(r⃗) = ∑_G⃗u⃗_G⃗ e^iG⃗·r⃗, where G⃗=2sinθ/2 g⃗×ẑ are the moiré reciprocal lattice vectors and g⃗ are the monolayer reciprocal lattice vectors (see Fig. <ref>). We determine u_0(𝐫) by two procedures. In the first method we solve the saddle-point equations by iteration. In the second procedure, suitable for smaller angles and larger fields, we treat the Fourier coefficients u⃗_G⃗ as variational parameters and minimize ℱ directly by gradient descent <cit.>. §.§ Iterating saddle-point equations The Euler–Lagrange equations derived from Eq. (<ref>) are: λ+μ/2∇⃗(∇⃗·u⃗_0) + μ/2∇⃗^2u⃗_0 = .∂ℱ̅_adh/∂ϕ⃗|_u⃗=u⃗_0 + .∂ℱ̅_elec/∂ϕ⃗|_u⃗=u⃗_0. It is convenient to write u⃗_G⃗ in longitudinal and transverse components, u⃗_G⃗ = G⃗/|G⃗| u_G⃗^L + G⃗×ẑ⃗/|G⃗| u_G⃗^T. In this notation the equations read 3/4π d u_G⃗^L = μ/2μ+λ1/L_m^2|G⃗|^2∑_n,j(G⃗/|G⃗|·g⃗_n^(j)/|g⃗_1^(1)|)F_n^(j), 3/4π d u_G⃗^T = 1/L_m^2|G⃗|^2∑_n,j(g⃗_n^(j)/|g⃗_1^(1)|×G⃗/|G⃗|)_zF_n^(j), where n runs over momentum stars and j over elements of the star (see Fig. <ref> for notation), and we have introduced dimensionless forces F_n^(j)=1/A_m∫ dr⃗ [ (L_m/ℓ_n^even)^2sin(G⃗_n^(j)r⃗ + g⃗_n^(j)u⃗_0) - (L_m/ℓ_n^odd)^2cos(G⃗_n^(j)r⃗ + g⃗_n^(j)u⃗_0) ] e^-iG⃗r⃗ , where the integral is on a moiré cell with area A_m. Finally, we have introduced the characteristic lengths ℓ_n^even=d√(μ/2V_n), ℓ_n^odd=d√(μ/2p_nℰ_z), where d=1.8267 Å is the interatomic distance in one layer. We seed our iterative scheme with u⃗_G⃗ = 0⃗ for all G⃗ and F_n^(j) = 0 for all n,j. Equations (<ref>) and (<ref>) are then iterated to compute new values for u⃗_G⃗ and F_n^(j). The process continues until a fixed point is reached. §.§ Optimization by gradient descent Capturing the structure of the domains walls requires retaining Fourier coefficients up to wavevectors |G⃗| ∼ 1 / ℓ, where ℓ is the characteristic domain-wall width (see below). The number of momentum shells required is then ∼ L_m / ℓ, and hence the number of momentum stars involved is (L_m ℓ)^2 ∼ (V_1 + p_1 ℰ_z) / μθ^2. Hence, in the limit of small angles and large electric fields, the number of necessary Fourier coefficients grows rapidly. The foregoing iterative method is ill-suited for such a large number of variables. 
Instead, we compute the relaxation pattern by numerical optimization of the free energy, Eq. (<ref>) of the main text. Truncating at N momentum shells, there are M = N (N+1) / 2  C_6-unique momentum stars. We collect the 2M components u⃗_G⃗ in a length-2M vector 𝖴. The linearity of the Fourier decomposition (<ref>) implies that at fixed r⃗, u⃗_0 (r⃗) = 𝖢(r⃗) ·𝖴, where 𝖢(r⃗) is a 2× 2M matrix of position-dependent coefficients. The free energy can then be expressed as ℱ[𝖴] = ∑_G⃗[μ/4 G^2 |u⃗_G⃗|^2 + μ + λ/4|G⃗·u⃗_G⃗|^2 ] + ∫ℱ_adh+elec(𝖢(r⃗) ·𝖴) d^2r⃗ = 1/2𝖴·𝖥_elas·𝖴 + ∑_iℱ_adh+elec(𝖢(r⃗_i) ·𝖴) w_i. Here the first term gives the elastic energy contribution (in Fourier space), expressed as a bilinear in 𝖴. The second term is the nonlinear adhesion energy integral, which runs over one moiré unit cell. In the last expression the integral is discretized to a grid of positions r⃗_i and integration weights w_i by Gauss–Legendre quadrature <cit.>. Equation (<ref>) makes explicit the free energy gradient with respect to the variational parameters, ∇_𝖴ℱ = 𝖥_elas·𝖴 + ∑_i∇_u⃗_0ℱ_adh+elec(𝖢(r⃗_i) ·𝖴) ·𝖢(r⃗) w_i, making the problem well-suited to gradient-descent methods. We optimize the energy functional using the Limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm <cit.>, implemented in the package of the programming language <cit.>. We begin the descent from the strain-free configuration 𝖴 = 0⃗. The relaxation solution is not unique (evidenced by the gapless acoustic phason modes), and so to pick a unique solution we fix the coefficients corresponding to u⃗_G⃗=0⃗ = 0⃗ identically. We incrementally obtain the relaxation solution 𝖴 for a smaller number N'<N of shells, which is then the starting point for a new gradient descent search in the enlarged phase space corresponding to N'+1 shells. This is repeated until we reach the final desired shell number N. At each iteration, the convergence criterion is specified in terms of the gradient magnitude |∇_𝖴ℱ|. §.§ Numerical implementation We have included 10 momentum shells (55 stars) to produce the relaxation and phonon dispersion figures for θ=1^∘ and ℰ_z≤ 0.2 V/nm, and 17 shells (153 stars) for larger fields or θ = 0.5^∘. To determine the velocity c of the acoustic modes we take the arithmetic mean of the velocities along Γ-M and Γ-K. The reason is that for each phason c should be isotropic around Γ, in a region whose size we do not control, so in this way we get a better estimation of the actual value. For each one, for instance c_Γ-M corresponding to the path Γ-M, we calculate c=Δω/Δ q, with Δω = ω(M/ξ)-ω(Γ) = ω(M/ξ) and Δ q = |M|/ξ. The factor ξ is a numerical parameter that we tune for each electric field, as the linearity of the dispersion within the mBZ survives until different distances away from Γ for different values of ℰ_z. In order to remove noise from the values of the velocities shown in Fig. <ref> we used a greater amount of shells (32 for the smallest angles and greatest fields) and the gradient descent method. §.§ Length scales of the relaxation patterns In the limit of small twist angles, L_m≫ℓ_n^even,ℓ_n^odd, the relaxed moiré pattern can be envisioned as a network of solitons/domain walls/strings under tension. 
In the absence of a field, ℰ_z=0, the solitons are well described by pure-shear sine-Gordon domain walls of characteristic width ℓ=d/2π√(μ/V_1-8V_2+9V_3), which connect energetically equivalent regions of partial commensuration, XM and MX stackings, and intersect at MM configurations forming the triangular network in panel (a) of Fig. 2 in the main text. Note that ℓ is independent of the twist angle, thus for smaller angles the domain walls become sharper on the moiré scale. In this limit the energy of the relaxation pattern is governed by the length of the domain walls, which can be thought as a system of strings under linear tension σ=2ℓ (V_1-8 V_2+9V_3). As shown in Fig. 2 (b) of the main text, for small values of the field ℰ_z the domain walls are bent as result of the forces produced by the energy imbalance between MX and XM stackings. If a domain wall is extended along direction x, we can describe its deformation by a displacement field u(x), so that a point corresponding to the saddle point between the two minima is located at position (x,u(x)) in the plane. The curvature κ of the domain wall is defined as κ= u”/[1 + (u')^2]^3/2. At small fields the shape of the domain wall can be described as the one arising from a force on an isolated string of total length L_m subjected to fixed boundary conditions, u(± L_m/2)=0. The energy functional of the string reads E[u(x)] = σ∫ dx √(1+(u')^2) - (E_MX-E_XM)∫ dx u(x). The first term represents the energy cost of changes in the length of the domain wall. The second term introduces the force due to the energy imbalance between MX and XM stackings; in our model: E_MX-E_XM=3√(3)ℰ_z( p_1-p_3). Minimizing this functional, we arrive at E_MX-E_XM+σu”/[1 + (u')^2]^3/2=0⇒κ=-3√(3)ℰ_z(p_1-p_3)/2ℓ (V_1-8V_2+9V_3). The inverse of |κ| defines the curvature radius R of the domain wall introduced in the main text. For |κ|L_m≪ 1 the deformation is small, domain walls do not overlap and their shape is well approximated by the solution of the previous equation subjected to fixed boundary conditions: u(x)=2√(1-κ^2x^2)-√(4-κ^2L_m^2)/2κ. The relative change in area of MX and XM stackings as a function of field ℰ_z can be estimated as Δ A=6 ∫_-L_m/2^L_m/2 dx u(x)=6 κ^-2arcsin(κ L_m/2)-3L_m/κ√(1-κ^2L_m^2/4)≈κ L_m^3/2. This expression describes well the measurements in Ref. ko2023operando at small fields (with no hysteresis), which offers another justification for our rescaling of p_n parameters. In the opposite regime, |κ|L_m≫1, domain walls overlap and eventually form a honeycomb-like network (Fig. <ref>) connecting energetically disfavored stacking configurations. §.§ Dynamical matrix and phonon bands We consider now dynamical fluctuations of the stacking field in the background of the relaxed solution, ϕ⃗(r⃗,t) = ϕ⃗_0(r⃗)+δϕ⃗(r⃗,t). In the harmonic approximation the fluctuation satisfies the following equations: ρδϕ̈⃗̈ = (λ+μ)∇⃗∇⃗δϕ⃗ + μ∇⃗^2δϕ⃗ - 2δϕ_j.∂ℱ̅_adh/∂ϕ_j∂ϕ⃗|_ϕ⃗=ϕ⃗_0 - 2δϕ_j.∂ℱ̅_elec/∂ϕ_j∂ϕ⃗|_ϕ⃗=ϕ⃗_0. We introduce Fourier series as δϕ⃗(t,r⃗) = 1/√(A)∑_k⃗∫dω/2πδϕ⃗(ω,k⃗)e^ik⃗r⃗-iω t, where A is the total area of the system. The previous equations can be written as ω^2ρδϕ⃗(ω,k⃗) = (λ+μ)k⃗·(k⃗·δϕ⃗(ω,k⃗)) + μk⃗^2δϕ⃗(ω,k⃗) + 2∑_G⃗K̃(G⃗) ·δϕ⃗(ω,k⃗-G⃗), where we have introduced a matrix with elements K̃_ij(G⃗) = 1/A_m∫ dr⃗( .∂^2ℱ̅_adh/∂ϕ_i∂ϕ_j|_ϕ⃗=ϕ⃗_0 + .∂^2ℱ̅_elec/∂ϕ_i∂ϕ_j|_ϕ⃗=ϕ⃗_0) e^-iG⃗r⃗, with the integral taken over the moiré unit cell. 
Equation (<ref>) can be read as an eigenvalue problem in the vector space defined by δΦ⃗(ω,q⃗) := [ δϕ⃗(ω,q⃗); δϕ⃗(ω,q⃗-G⃗_1); δϕ⃗(ω,q⃗-G⃗_2); ⋮ ], where q⃗∈ mBZ and G⃗_1,G⃗_2,... are reciprocal vectors of the moiré lattice that we use in the numerical implementation. We have included the same number of shells as in the relaxation problem detailed above. We show in Fig. <ref> the effect of different singlet modes on the energy landscape in real space, from zero field to the honeycomb regime. §.§ Vibrational density of states For each mode labeled by band index n and momentum q⃗ we introduce a spectral function characteristic of a damped oscillator: 𝒜_n(ω,q⃗) = 2/πω_n^2(q⃗)γ/(ω^2 - ω_n^2(q⃗))^2 +γ^2ω^2, where ω_n(q⃗) is the mode frequency following from the previous diagonalization problem and γ is a phenomenological damping coefficient. The vibrational density of states is the sum to all modes, 𝒟(ω) = ∑_n∑_q⃗𝒜_n(ω,q⃗). In Fig. 3 of the main text we took γ/ω_m=0.01. §.§ Phonon angular momentum To compute the angular momentum of each band of moiré phonons we follow Ref. Koshino2020,Koshino2023. For each q⃗∈mBZ, let δΦ⃗(ω_n,q⃗) be the solution in the from of Eq. (<ref>) of the eigenvalue problem in Eq. (<ref>) corresponding to the n-th phonon band, normalized such that ∑_G⃗|δϕ⃗(ω_n,q⃗+G⃗)|^2 = 1. Then, in reciprocal space the angular momentum of the band is L_z^q⃗,n = iħ∑_G⃗δϕ⃗(ω_n,q⃗+G⃗) ×δϕ⃗^*(ω_n,q⃗+G⃗). A nonzero L_z^q⃗,n is only possible if C_2z symmetry is broken, since δϕ⃗(ω_n,q⃗+G⃗) can be chosen to be real otherwise. In Fig. <ref> we show the results for the first eight modes at twist angle θ=1^∘ and two values of the electric field, corresponding to the first two dispersions included in Fig. <ref>(b) in the main text. The difference in magnitude is explained by the change in energy of the modes at the K points and the relative growth in area of MM and MX/XM stacking regions <cit.>, although the former is the main contributor for the acoustic modes. The field also affects the spreading of the angular momentum from the K points. The most significant example of this is seen in the fourth mode, which shows small regions of nonzero L_z^q⃗,n around the K points when ℰ_z=0.1 V/nm that grow after the field is increased to 0.2 V/nm. Finally, the qualitative details of the seventh and eighth modes are modified as a result of the band inversion (doublet and singlet) that occurs when increasing the field.
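The two post-processing steps used above, the Lorentzian-broadened vibrational density of states and the phonon angular momentum, are straightforward to evaluate once the eigenvalue problem has been solved. A minimal sketch is given below; the mode frequencies and the polarization vector are random placeholders standing in for the actual output of the diagonalization.

```python
import numpy as np

hbar = 1.0545718e-34

def vibrational_dos(omega_nq, omega_grid, gamma):
    """Damped-oscillator spectral function A_n summed over all modes, as defined above.
    omega_nq: array of mode frequencies omega_n(q) over bands and q points."""
    w, wn = np.meshgrid(omega_grid, omega_nq.ravel(), indexing="ij")
    A = (2 / np.pi) * wn**2 * gamma / ((w**2 - wn**2)**2 + gamma**2 * w**2)
    return A.sum(axis=1)

def phonon_angular_momentum(dphi):
    """L_z = i*hbar * sum_G dphi(q+G) x dphi*(q+G) for one normalized mode.
    dphi: complex array of shape (n_G, 2) with the in-plane polarization components."""
    cross = dphi[:, 0] * np.conj(dphi[:, 1]) - dphi[:, 1] * np.conj(dphi[:, 0])
    return np.real(1j * hbar * np.sum(cross))

# Illustrative use with made-up mode data; a real calculation would take omega_n(q) and
# the eigenvectors from diagonalizing the dynamical matrix defined above.
rng = np.random.default_rng(0)
omega_nq = rng.uniform(0.05, 1.5, size=(8, 400))   # 8 bands on 400 q points, in units of omega_m
omega = np.linspace(0.0, 2.0, 1000)
dos = vibrational_dos(omega_nq, omega, gamma=0.01) # gamma/omega_m = 0.01, as in the text

mode = rng.normal(size=(37, 2)) + 1j * rng.normal(size=(37, 2))
mode /= np.linalg.norm(mode)                       # normalization sum_G |dphi|^2 = 1
print(dos[:5], phonon_angular_momentum(mode))
```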
http://arxiv.org/abs/2406.18949v1
20240627072923
Streamlined approach to mitigation of cascading failure in complex networks
[ "Karan Singh", "V. K. Chandrasekar", "D. V. Senthilkumar" ]
nlin.AO
[ "nlin.AO" ]
APS/123-QED School of Physics, Indian Institute of Science Education and Research Thiruvananthapuram, Thiruvananthapuram, 695551, Kerala, India. Department of Physics, Center for Nonlinear Science and Engineering, School of Electrical and Electronics Engineering, SASTRA Deemed University, Thanjavur, 613401, Tamil Nadu, India. skumar@iisertvm.ac.in School of Physics, Indian Institute of Science Education and Research Thiruvananthapuram, Thiruvananthapuram, 695551, Kerala, India. § ABSTRACT Cascading failures represent a fundamental threat to the integrity of complex systems, often precipitating a comprehensive collapse across diverse infrastructures and financial networks. This research articulates a robust and pragmatic approach designed to attenuate the risk of such failures within complex networks, emphasizing the pivotal role of local network topology. The core of our strategy is an innovative algorithm that systematically identifies a subset of critical nodes within the network, a subset whose relative size is substantial in the context of the network's entirety. Enhancing this algorithm, we employ a graph coloring heuristic to precisely isolate nodes of paramount importance, thereby minimizing the subset size while maximizing strategic value. Securing these nodes significantly bolsters network resilience against cascading failures. The method proposed to identify critical nodes and experimental results show that the proposed technique outperforms other typical techniques in identifying critical nodes. We substantiate the superiority of our approach through comparative analyses with existing mitigation strategies and evaluate its performance across various network configurations and failure scenarios. Empirical validation is provided via the application of our method to real-world networks, confirming its potential as a strategic tool in enhancing network robustness. Streamlined approach to mitigation of cascading failure in complex networks D.V. Senthilkumar July 1, 2024 =========================================================================== § INTRODUCTION Our everyday routines rely extensively on the operation of numerous natural and artificial networks, including neural and genetic regulatory networks, as well as communication systems, social networks, transportation networks, and electric power grids <cit.>. However, these networks are susceptible to cascading failures, where the failure of a single node can lead to a domino effect, causing widespread breakdowns and disruptions in the entire network  <cit.>. Understanding the resilience of these networks to random failures and targeted attacks is crucial for avoiding system failures with serious consequences <cit.>. Cascading phenomena can have either beneficial or adverse effects, depending on their practical usefulness. Why do certain works, such as books, movies, and albums, gain widespread popularity despite limited marketing efforts <cit.>, while similar endeavors fail to attract attention? Why do fluctuations occur in the stock market without any apparent significant news driving them <cit.>? How do grassroots social movements originate without centralized coordination or widespread public communication <cit.>? These occurrences exemplify what economists term as information cascades <cit.>, where individuals in a population exhibit herd-like behavior, basing their decisions on the actions of others rather than on their own information about the issue. 
Studies have also looked at how networks of adopters grow, particularly through social contagion  <cit.> processes like adopting opinions, behaviors, emotions  <cit.> or innovations, which can happen quite rapidly under the influence of social pressure. Social influence is significant in this scenario, with individuals being influenced by the viewpoints of their peers <cit.>. This impact is represented in threshold models <cit.>, that propose an individual becomes an adopter when the proportion of their already adopting neighbors reaches a critical level specific to their sensitivity. Such phenomena underscore the complexities of human behavior and decision-making processes, particularly in contexts where social influence plays a significant role. One of the examples of the collapse of the top Online Social Networks(OSNs) is the Hungarian social networking site. The Hungarian social networking site iWiW <cit.> was operational from 2004 to 2013. In late 2010, there was a noticeable increase in the number of users departing from iWiW, resulting in significant loss and ultimately leading to the collapse of the entire network. The main reason cited <cit.> for this downfall was that individuals left when most of their friends had already departed. Understanding the cascade effect is crucial for achieving various applications such as marketing, advertising, and spreading information. Identifying critical nodes where resources can be strategically allocated to effectively disseminate information across the network is key. However, these processes can also have negative implications in areas like finance, healthcare, and infrastructure by causing undesired or detrimental outcomes for the community. Therefore, it is important to mitigate the cascading failure in complex networks to minimize the potential negative consequences and ensure the stability and resilience of these systems. In this work, we introduce a refined strategy for mitigating cascading failures, leveraging local environmental information of nodes, specifically focusing on the node's nearest neighbors <cit.>. Our novel methodology involves the identification and enhancement of fragile nodes, the safeguarding of which ensures near-complete network survivability. To achieve this, we utilize graph coloring to ascertain the critical average degree, employing it as a threshold for refining the set of fragile nodes. Our investigation demonstrates that the proposed strategy is optimized in terms of both the fraction of fragile nodes and the probability of survival. The effectiveness of our approach is first evaluated in theoretical models, namely Erdős-Rényi (ER) networks <cit.> and Scale-free networks <cit.>, which emulate various real-world network structures. Subsequently, we assess its applicability in practical scenarios by demonstrating its efficacy in social networks, collaboration networks, and the US power-grid network. Our method not only adeptly mitigates cascading failures in simple theoretical models but also exhibits robustness in intricate real-life network contexts. We illustrate the versatility of our approach across a wide spectrum of network topologies, emphasizing its adaptability to more complex settings. Our findings underscore the efficacy of the proposed method in averting network collapse across diverse types of networks. Understanding the dynamics of cascading failure in complex networks is crucial for developing effective mitigation strategies. 
As we continue to rely on and integrate complex networks into daily life, understanding and safeguarding against cascading failures becomes increasingly imperative. § THE MODELS OF CASCADING FAILURE Many different failure models have been extensively studied and applied for specific purposes in simulating the propagation of failures. The k-core <cit.> method and the fractional threshold are two common approaches heavily used to represent real-life scenarios. In the k-core method, the threshold is determined by the absolute number of active neighbors, making it particularly applicable in epidemiological settings where an individual's actual number of contacts is crucial. On the other hand, when the fraction of active neighbors holds more significance than their absolute number, the model is known as the fractional threshold <cit.> model. This model finds application in various fields such as opinion formation, social dynamics, infrastructures, economics, and finance. In the context of opinion formation, an individual adopts a certain behavior if m of its k friends adopt it. In this case, the relevant threshold is the fraction m/k rather than the absolute number m. As a result, individuals with many friends need more of their neighbors to become adopters before they change their state. This process is illustrated in Fig. <ref>. Watts <cit.> presented a simple model of global cascades, implementing the fractional threshold model for an infinitesimally small initial impact. Later, Gleeson and Cahalane <cit.> modified it for a finite seed size and studied its effect on the cascades. We use the Gleeson et al. <cit.> framework for analytic calculations of the cascading failure process. Each node, having degree k drawn from the degree distribution p_k with ∑ p_k = 1, is an agent that belongs to the undirected network and can be in a binary state, called active or inactive. All agents are assigned a frozen random threshold r drawn from a distribution whose cumulative distribution F(r) gives the probability that an agent has a threshold less than r. The cascade is initiated by activating a randomly chosen fraction ρ_0 of the N nodes. Nodes then update their state, and the average final fraction ρ of active nodes is given by <cit.> ρ = ρ_0 + (1-ρ_0)∑_k=1^∞ p_k ∑_m=0^k \binom{k}{m} q_∞^m (1-q_∞)^{k-m} F(m/k) where q_∞ is the steady state, or fixed point, of the recursion relation q_n+1=ρ_0+(1-ρ_0)G(q_n), n=0,1,2,… and the generating function G is defined as G(q_n)=∑_k=1^∞ (k/z) p_k ∑_m=0^{k-1} \binom{k-1}{m} q_n^m (1-q_n)^{k-1-m} F(m/k). Here z is the network's average degree, z=∑ k p_k. § METHODS §.§ Mitigation strategy As discussed in a recent study <cit.>, our focus will be on identifying critical nodes that play a key role in propagating cascading effects, rather than measuring a node's importance by the damage it causes when removed. As shown in <cit.>, ranking nodes by the damage they cause is of little help from the mitigation perspective because, at a critical threshold, even an exceedingly small impact <cit.> (as minor as removing a single node) can initiate a cascade leading to network collapse (see Fig. <ref>). We adhere to the concept <cit.> of a fragile node within a failure mechanism, M, wherein the failure of the node occurs upon the removal of a minimal significant number, K, of its links. Under the removal of a single edge, the surviving fraction of links is (k-1)/k, and the node is considered to fail if (k-1)/k < θ.
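Before turning to the selection algorithm, the threshold-model equations introduced above can be made concrete with a short numerical sketch. It iterates the recursion for q_n to its fixed point and evaluates the final active fraction ρ for a Poisson (Erdős–Rényi) degree distribution with a single uniform threshold, for which F reduces to a step function (in the survival language used here, that threshold corresponds roughly to 1-θ, up to the boundary convention). The numerical values of z, the threshold and ρ_0 are illustrative, not those used in the figures.

```python
import numpy as np
from scipy.stats import poisson, binom

z, phi, rho0 = 5.0, 0.18, 0.01      # mean degree, uniform threshold, seed fraction (illustrative)
kmax = 60
ks = np.arange(1, kmax + 1)
pk = poisson.pmf(ks, z)             # Poisson degree distribution of an ER network

def F(x):                           # threshold CDF: point mass at phi
    return (x >= phi).astype(float)

def G(q):                           # generating function G(q) defined above
    total = 0.0
    for k, p in zip(ks, pk):
        m = np.arange(0, k)         # m = 0, ..., k-1
        total += (k / z) * p * np.sum(binom.pmf(m, k - 1, q) * F(m / k))
    return total

q = rho0                            # iterate q_{n+1} = rho_0 + (1 - rho_0) G(q_n) to its fixed point
for _ in range(300):
    q = rho0 + (1 - rho0) * G(q)

rho = rho0                          # expected final active fraction rho
for k, p in zip(ks, pk):
    m = np.arange(0, k + 1)
    rho += (1 - rho0) * p * np.sum(binom.pmf(m, k, q) * F(m / k))
print(q, rho)
```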
We will now present the algorithm that selects the potent nodes based on a basic percolation argument: to facilitate the cascade, a node must be fragile under the fragility condition defined above and must possess at least some fragile neighbours.
Algorithm: Outputs nodes to be protected
* Scan each node.
* Check if it is fragile as per the above condition.
* If yes, check if it has at least two fragile neighbours.
* Color the graph and calculate the average degree of each color subgroup.
* Protect only those fragile nodes (fulfilling the second and third conditions) whose degree is greater than the lowest color-group average degree.
The first two steps identify the fragile nodes. The rationale is that, by the fragility condition defined above, a fragile node fails upon the removal of a single edge, whereas a node that is not fragile in this sense is unlikely to be affected at the initial stage of the cascade. Additionally, the cascade does not propagate further if the node has fewer than two fragile neighbours. The last two steps refine the set of fragile nodes. It is found (see the Results section) that there is no need to protect all the fragile nodes; protecting the fraction obtained after the last step is sufficient to ensure the full survivability of the network. We refer to this refined set as immune nodes (the set of nodes to be protected). We use the graph coloring <cit.> technique to reduce the number of fragile nodes (more discussion in the Results section). Upon examining the algorithm, it is observed that nodes with high degrees are not suitable for inclusion in the immune set because they remain resilient even under very high thresholds. On the other hand, low-degree nodes become fragile relatively quickly, but they do not effectively transmit the shock and, therefore, cannot be considered for the immune set. The ideal candidates for immune nodes lie between these extremes: medium-degree nodes are sufficiently weak to be fragile and well-connected enough to propagate the shock significantly. Selecting this group of nodes offers an effective mitigation strategy due to their limited number. § RESULTS §.§ Application on standard networks The primary objective of the algorithm is to pinpoint the critical nodes in terms of failure propagation. The selection relies only on a node's local surroundings, making it a highly effective mitigation strategy that uses minimal and readily accessible information. Starting from this group of vulnerable nodes, we apply the graph coloring <cit.> technique to minimize their number and obtain a more cost-effective strategy. We refer to this subset as immune nodes (the nodes to be protected; see green nodes in Fig. <ref>). Coloring the graph and dividing it into subgroups (colour groups) allows us to gauge the connectivity of a fragile node with the other colour groups through the average degree of its colour group. For example, if the average degree of a certain colour group is high, each of its nodes is highly connected to other colour groups while, by construction, sharing no edges within its own group. If such a node is fragile, its removal affects nodes in different colour groups, pushing them toward their fractional thresholds.
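A compact sketch of the selection steps listed above is given below. The greedy coloring heuristic of networkx stands in for the graph-coloring step, and the test network and threshold value are assumptions of this illustration rather than specifications from the text.

```python
import networkx as nx

def immune_nodes(G, theta):
    """Sketch of the node-selection steps listed above; networkx's greedy coloring
    heuristic stands in for the graph-coloring step."""
    deg = dict(G.degree())
    # Steps 1-2: a node is fragile if losing a single edge drops it below the
    # fractional threshold, i.e. (k - 1) / k < theta.
    fragile = {v for v, k in deg.items() if k > 0 and (k - 1) / k < theta}
    # Step 3: keep only fragile nodes with at least two fragile neighbours.
    candidates = {v for v in fragile
                  if sum(1 for u in G.neighbors(v) if u in fragile) >= 2}
    # Step 4: color the graph and compute the average degree of each color group.
    coloring = nx.greedy_color(G, strategy="largest_first")
    groups = {}
    for v, c in coloring.items():
        groups.setdefault(c, []).append(deg[v])
    k_c = min(sum(ds) / len(ds) for ds in groups.values())   # lowest color-group average degree
    # Step 5: protect only the candidates whose degree exceeds this critical degree.
    return {v for v in candidates if deg[v] > k_c}

G = nx.erdos_renyi_graph(2000, 5 / 2000, seed=1)             # hypothetical test network
protected = immune_nodes(G, theta=0.85)
print(len(protected), "of", G.number_of_nodes(), "nodes selected for protection")
```

The degree cutoff k_c extracted from the coloring step plays the role of the critical average degree discussed in what follows.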
Therefore, by choosing a threshold above which the node designated for protection possesses the lowest average degree among the color groups, we effectively and automatically safeguard the fragile nodes characterized by high interconnectivity between different color groups. We first apply our algorithm to the standard network models, Erdos-Renyi and Scale-free networks, to demonstrate its effectiveness. Figures 5 (a) and (c) show the survivability of the system against the fractional threshold. The value on the y-axis represents the probability of the system survival, and value on the x-axis denotes the fractional threshold, that means the fraction of functional neighbors required for a node in question for its survival. The blue line represents full protection (1.0), which means protecting all the nodes (immune nodes) selected by our algorithm. The dashed pink line, corresponding to the full protection case, indicates the fraction of protected nodes compared to the entire network. The orange (0.7) and green (0.4) lines indicate partial protection - that is, protecting the immune nodes 70% or 40% of the times respectively. Finally, the red line shows no protection; it depicts how the system evolves without any form of safeguarding in place. The system always survives until the fractional threshold of 0.86, regardless of the protection. Therefore, for the ER case, 0.86 is considered the critical threshold. As for scale-free networks, it's close to 0.835. Near the critical threshold, it just requires a small fraction of nodes to be protected for the complete survival of the network, as shown by the mitigation strategy (blue line) and basic cascading process (yellow line) in Figures 5 (a) and (c). Figures 5 (b) and (d), for ER and Scalefree networks respectively, show the system's survivability for all protection probabilities plotted for four different fractional thresholds depicted by four colours. The blue lines correspond to a fractional threshold that is lower than their respective critical threshold, ensuring the system always survives. As the threshold increases, the probability of survival decreases for the orange, green, and red lines. Increasing the protection probability enhances the survivability of the system. To guarantee that the system always survives, it is necessary to protect the immune nodes with a protection probability of 1.0 (as depicted in the Fig. 5 (b) and (d)). We have demonstrated the efficacy of our algorithm on ER and Scalefree networks. It identifies the fragile nodes and substantially reduces their number, obtaining what we call immune nodes. These immune nodes are designated for the protection and ensuring the full survival of the system. To show that the reduce/refine step of our algorithm is optimised in terms of both survival probability and the protected node size, we scale the critical degree <k>_c (See Fig. <ref>) by lowering and raising its value to test whether this automatic selection of the critical degree threshold is optimised or not. In Figure <ref>, the critical degree threshold for (a) ER and (b) Scale-free networks is scaled. The green solid line in Fig.<ref>(a) represents the survival probability for a fully protected network with the original critical threshold degree, while the corresponding green dashed line denotes the protected set. 
The blue and yellow lines indicate thresholds scaled to 95% and 97% of the original threshold degree, respectively; these allow more nodes (shown by the corresponding dashed lines) to be included in the protected set, resulting in improved survival probability. However, raising the original threshold to 102% or above (solid red or magenta lines) results in poor survivability because of the reduced number of nodes in the protected set (shown by the corresponding dashed lines). Similar scaling was tested for scale-free networks, revealing that ER networks are more sensitive to the scaling factor than scale-free networks. This demonstrates that the refining step of our algorithm effectively establishes the threshold degree above which fragile nodes should be protected to enhance the system's survival probability.

§.§ Comparison with standard mitigation techniques

Fig. <ref> illustrates the effectiveness of our algorithm in comparison with standard intuitive and centrality-based mitigation strategies. We first compare the proposed technique with intuitive baselines. In Fig. <ref> (a), we compare our proposed protection for ER networks with simple intuitive strategies, using the same size of protected set for a fair comparison. We start by randomly selecting and protecting nodes; the point here is to verify whether the proposed strategy is genuinely effective or merely equivalent to a random selection. As the figure clearly shows, the proposed strategy outperforms random selection. The next obvious candidate is to protect the high-degree nodes, the reasoning being that, since such nodes are highly connected, keeping them safe would protect their neighbors. As the figure shows, this is also not a reasonable set to protect. A further baseline is to protect the lowest-degree nodes, the rationale being that saving the most vulnerable nodes in the network ensures its better survival. Fig. <ref> (b) compares our proposed protection for ER networks with centrality-based strategies. A more insightful class of node-selection criteria is given by centrality measures: closeness centrality, Katz centrality, and betweenness centrality. Closeness centrality measures how close a node is to all other nodes in a network, identifying nodes that can quickly interact with others. Katz centrality quantifies the influence of a node by considering its immediate neighbors as well as the influence of their neighbors, recursively throughout the network. Betweenness centrality identifies nodes that act as bridges between different parts of a network by measuring the fraction of shortest paths that pass through a node, indicating its importance in facilitating communication or information flow. Once again, our proposed strategy stands out, providing enhanced survival probability compared with these centrality-based techniques. We obtain the same qualitative results for the scale-free network, as shown in Fig. <ref> (c,d).

§.§ Application to the Real-World networks

§.§.§ 1. LastFM Asia Social Network

We proceed to apply our mitigation framework to assess its efficacy in real-world scenarios. First, we examine its application to the LastFM Asia Social Network—a collection of LastFM users sourced from the public API in March 2020.
In this network, nodes represent LastFM users from Asian countries, while edges denote mutual follower relationships between them. The network contains N=7626 users, with an average degree <k>=7.29. Fig. <ref> (a,b) shows the result of applying our mitigation strategy to this real-world social network. In (a), the system's survivability is plotted against the fractional threshold for four different protection probabilities, and in (b) the same quantity is plotted for all protection probabilities. The fractional threshold model is well suited to describing the dynamics of collapse within Online Social Networks (OSNs). Users tend to disengage from these platforms when a substantial portion of their social connections has already withdrawn from active participation. The waning interest of an individual in a social networking site may stem from its failure to provide compelling features, content, or interactive activities. Consequently, users may curtail or cease their habitual sharing of daily experiences on the platform, or abandon it altogether. This decline in activity can precipitate a ripple effect, causing the user's network connections to lose interest due to the absence of their friend's contributions or the inability to engage in real-time interactions. If a user observes that a significant proportion, say m out of k, of their network connections have exited the platform, it may prompt their own departure. This phenomenon can propagate through the network, culminating in a cascading failure. Notably, the Hungarian social networking site iWiW <cit.> fell victim to such a cascading failure. Operating from 2004 to 2013, iWiW witnessed a surge in user departures towards the end of 2010, resulting in substantial attrition and eventual network collapse. Research <cit.> attributes this decline to individuals departing once their social circles had largely vacated the platform. In response, major OSNs such as Facebook and Instagram have strategically integrated engaging features like platform games, activity sharing, story updates, and enhanced chat functionalities to mitigate similar attrition trends and bolster user engagement.

§.§.§ 2. US Power Grid Network

The power grid network is largely governed by the redistribution of load <cit.>. Here, we model it at a basic level with the fractional threshold model: a grid node fails if more than m out of its k connected nodes fail. We use the US power grid network with N = 4941 nodes and average connectivity <k> = 2.67. Fig. <ref> (c,d) shows the result of applying our mitigation strategy to the real-world data of the US grid network. In (c), the system's survivability is plotted against the fractional threshold for four different protection probabilities, and in (d) the same quantity is plotted for all protection probabilities. In this context, nodes within the network correspond to essential components such as generators, transformers, or substations, while edges symbolize the interconnecting power supply lines. In the event of a transformer or substation malfunction, the repercussions extend to the connected transformers, as they assume the burden of redistributed loads. Employing a fractional threshold to gauge the vulnerability of a transformer based on the proportion of its failed neighboring components could be a pragmatic approach. By leveraging such a strategy, the algorithm can proactively identify susceptible transformers for preemptive upgrades, mitigating the risk of future load fluctuations or overload failures.
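The workflow used for these real-world networks can be sketched end to end: load an edge list, select the immune nodes with the algorithm of the Methods section, and estimate survival probabilities by Monte-Carlo simulation of the fractional-threshold dynamics with probabilistic protection. The sketch below is illustrative only: the file name, threshold, seed size, trial count and the survival criterion (at least 90% of nodes functional) are assumptions, protected nodes are assumed never to fail, and networkx's greedy coloring stands in for the unspecified coloring routine.

```python
import random
import networkx as nx

def fragile(G, v, theta):
    k = G.degree(v)
    return k > 0 and (k - 1) / k < theta

def nodes_to_protect(G, theta):
    """Selection algorithm: fragile nodes with >= 2 fragile neighbours whose
    degree exceeds the lowest average degree among the color groups."""
    frag = {v for v in G if fragile(G, v, theta)}
    candidates = {v for v in frag
                  if sum(1 for u in G.neighbors(v) if u in frag) >= 2}
    coloring = nx.greedy_color(G, strategy="largest_first")
    groups = {}
    for v, c in coloring.items():
        groups.setdefault(c, []).append(G.degree(v))
    k_c = min(sum(d) / len(d) for d in groups.values())  # critical threshold degree
    return {v for v in candidates if G.degree(v) > k_c}

def cascade_survives(G, theta, protected=frozenset(), p_protect=1.0,
                     n_seeds=2, survive_frac=0.9):
    """One cascade realization: a node fails once the fraction of its
    functional neighbours drops below theta; shielded nodes never fail."""
    shielded = {v for v in protected if random.random() < p_protect}
    failed = set(random.sample([v for v in G if v not in shielded], n_seeds))
    changed = True
    while changed:
        changed = False
        for v in G:
            if v in failed or v in shielded:
                continue
            k = G.degree(v)
            if k and sum(u in failed for u in G.neighbors(v)) / k > 1 - theta:
                failed.add(v)
                changed = True
    return len(failed) <= (1 - survive_frac) * G.number_of_nodes()

def survival_probability(G, theta, protected, p_protect=1.0, trials=50):
    return sum(cascade_survives(G, theta, protected, p_protect)
               for _ in range(trials)) / trials

# Placeholder file name; any undirected edge list (e.g. the LastFM Asia or
# US power-grid data) can be substituted here.
G = nx.read_edgelist("network_edges.txt", nodetype=int)
print(G.number_of_nodes(), "nodes, average degree",
      2 * G.number_of_edges() / G.number_of_nodes())

theta = 0.8                                  # illustrative fractional threshold
immune = nodes_to_protect(G, theta)
print("fraction protected:", len(immune) / G.number_of_nodes())
for p in (0.0, 0.4, 0.7, 1.0):
    print("p_protect =", p, "->", survival_probability(G, theta, immune, p))
```

Baseline protected sets of the same size, built for example from nx.betweenness_centrality, nx.closeness_centrality or nx.katz_centrality, can be passed to survival_probability in place of the immune set to reproduce the comparison of the previous subsection.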
§.§.§ 3. Collaboration Network Collaboration networks are primarily Scalefree in nature. The fractional threshold model could be highly useful for analyzing the failure of an institution, industry, or researcher. Here, we utilize the condensed matter physics collaboration data with nodes N = 23133 and average degree <k> = 8.08 that is derived from the e-print arXiv and encompasses scientific collaborations among authors who have submitted papers to the Condensed Matter category. In this network, if author i has co-authored a paper with author j, an undirected edge is included between i and j. Moreover, when a paper is co-authored by k authors, this results in a fully connected subgraph comprising k nodes. The rationale is that an individual researcher needs at least some of their collaborators to be present and participate in the publication. If this does not happen, it could result in a decrease in the rate or quality of publications, or publication may even stop altogether. Collaboration failure can occur due to various reasons such as severed ties, illness or death of a collaborator, non-compliance with mutual agreements, issues related to credit sharing, and more. As depicted in Fig.<ref> (e), the red line shows that randomly removing a few researchers (two in this case, in order to start the cascade) from the network affects their immediate neighbors and leads to a cascade of failures until the network collapses. On the other hand, the blue line demonstrates that protecting a fraction of nodes (pink dotted line), such as collaborators, ensures the survival of the network. The orange and green lines represent probabilistic protection of critical nodes by 70% and 40%, respectively, significantly increasing the probability of survival for partially protected networks. And in Fig. <ref> (b), the system's surviving probability is plotted for all protection probabilities. § DISCUSSION AND CONCLUSION In this work, we introduce an algorithm designed to efficiently mitigate cascading failures in complex networks. This mitigation is achieved by safeguarding critical nodes that are most effective at propagating and intensifying failure. Differently from the existing work on the mitigation of cascading failures in complex networks <cit.>, we have exploited the graph coloring concept to identify the critical degree threshold, designating nodes with a degree greater than it for protection. This approach optimally refines the fragile nodes without compromising the system's survival probability. Our findings demonstrate that effective node selection can be accomplished with only minimal understanding of the local neighborhoods of the nodes and the underlying failure mechanisms. This strategy significantly enhances the likelihood of network resilience, even without precise knowledge of the initial source of impact. We have evaluated our method across various network configurations and failure scenarios, yielding robust mitigation strategies. Furthermore, we have successfully implemented this approach in real-world network settings, including social networks, the US grid network, and collaboration networks. In each case, our algorithm has effectively mitigated cascading failures. However, it leaves some open questions for further research. Supplementary Information Streamlined approach to mitigation of cascading failure in complex networks § ANALYTICAL CALCULATION OF CASCADING FAILURE PROCESSES We apply the Gleeson et al. <cit.> framework for analytic calculations of cascading failure processes. 
Each node, having degree k drawn from the degree distribution p_k with ∑ p_k = 1, is an agent that belongs to the undirected network and can be in one of two states, active or inactive. All agents are assigned a frozen random threshold r (a uniform threshold in our case) chosen from a distribution, with F(r) denoting the probability that an agent has a threshold less than r. The cascade is initiated by activating a randomly chosen fraction ρ_0 of the N nodes. Nodes then update their states, and the average final fraction ρ of active nodes is given by <cit.>

ρ = ρ_0 + (1-ρ_0) ∑_{k=1}^{∞} p_k ∑_{m=0}^{k} \binom{k}{m} q_∞^m (1-q_∞)^{k-m} F(m/k)

where q_∞ is the steady state, or fixed point, of the recursion relation

q_{n+1} = ρ_0 + (1-ρ_0) G(q_n),    n = 0,1,2,…

and the generating function G is defined as

G(q_n) = ∑_{k=1}^{∞} (k/z) p_k ∑_{m=0}^{k-1} \binom{k-1}{m} q_n^m (1-q_n)^{k-1-m} F(m/k)

Here z is the network's average degree, calculated as z = ∑ k p_k. The improved cascade condition developed by Gleeson et al. <cit.> for a finite initial impact reads

(C_1 -1)^2 - 4C_0C_2 + 2ρ_0(C_1-C_1^2-2C_2+4C_0C_2) < 0

where the C_l are the coefficients of the power series G(q) = ∑_{l=0}^{∞} C_l q^l. Fig. <ref> illustrates the process of cascading failure; the simulation results are in good agreement with the analytical result (Eq. <ref>) for an ER network. A collapse of the network occurs at a critical threshold of around 0.86. In Fig. <ref>, the critical threshold for various initial average degrees is plotted, showing good agreement with the analytical result (<ref>). As the average degree of the network increases, so does its critical threshold; consequently, the network collapses at a higher threshold level.

§ CRITICAL THRESHOLD DEGREE AND DEGREE DISTRIBUTION OF DIFFERENT TYPES OF NODES

The critical threshold degree is the cutoff degree above which fragile nodes are designated for protection for effective mitigation. This threshold is determined by coloring the graph <cit.>, calculating the average degree of each color group, and selecting the lowest average degree as the critical threshold degree. Fragile nodes with a degree greater than this critical value are assigned for protection. Figures <ref> and <ref> illustrate the distribution of degrees among fragile, protected, and unprotected nodes in ER and scale-free networks, respectively, at four different fractional thresholds. Fragile nodes (depicted in yellow) are those that fail under the specific failure mechanism defined in step 3 of our algorithm; protected nodes (displayed in green) are fragile nodes with a degree greater than the critical threshold (represented by a red dashed line), making them eligible for protection. Finally, unprotected nodes are the remaining nodes that are not fragile (displayed in sky blue).
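The improved cascade condition above can also be checked directly. The sketch below computes the first three power-series coefficients C_0, C_1, C_2 of G(q) for a Poisson degree distribution by expanding the binomial factors in the definition of G; as in the fixed-point sketch of the main text, a single uniform threshold φ and a truncation at k_max are assumed. Note that φ here applies to the fraction of active (failed) neighbours, whereas the fractional threshold θ quoted in the Results counts functional neighbours, so the two conventions are approximately related by φ ≈ 1 - θ.

```python
import math
from math import comb

def C_coefficient(l, z, phi, k_max=60):
    """Coefficient C_l of G(q) = sum_l C_l q^l for a Poisson degree
    distribution and a uniform threshold phi (F is a step function)."""
    F = lambda m, k: 1.0 if m / k >= phi else 0.0
    p = [math.exp(-z) * z**k / math.factorial(k) for k in range(k_max + 1)]
    C = 0.0
    for k in range(1, k_max + 1):
        for m in range(min(l, k - 1) + 1):
            C += ((k / z) * p[k] * comb(k - 1, m) * comb(k - 1 - m, l - m)
                  * (-1) ** (l - m) * F(m, k))
    return C

def global_cascade_expected(rho0, z, phi):
    """Left-hand side of the cascade condition; the text requires it to be
    negative for global cascades."""
    C0, C1, C2 = (C_coefficient(l, z, phi) for l in range(3))
    lhs = (C1 - 1)**2 - 4*C0*C2 + 2*rho0*(C1 - C1**2 - 2*C2 + 4*C0*C2)
    return lhs < 0, lhs

# Scan the threshold to locate the transition for an ER network with z = 5
for phi in (0.15, 0.18, 0.20, 0.25):
    print(phi, global_cascade_expected(rho0=0.001, z=5.0, phi=phi))
```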
http://arxiv.org/abs/2406.19246v1
20240627151322
An Interpretable and Efficient Sleep Staging Algorithm: DetectsleepNet
[ "Shengwei Guo" ]
eess.SP
[ "eess.SP" ]
An Interpretable and Efficient Sleep Staging Algorithm: DetectsleepNet

Shengwei Guo (2211849@s.hlju.edu.cn)
College of Electronic Engineering, Heilongjiang University, Haxi Street, Harbin, 150080, Heilongjiang, China

Sleep quality directly impacts human health and quality of life, so accurate sleep staging is essential for assessing sleep quality. However, most traditional methods are inefficient and time-consuming because the different sleep cycles are segmented by manual labeling. In contrast, automated sleep staging technology not only assesses sleep quality directly but also helps sleep specialists analyze sleep status, significantly improving efficiency and reducing the cost of sleep monitoring, especially for continuous monitoring. Most existing models, however, are deficient in computational efficiency, lightweight design, and interpretability. In this paper, we propose a neural network architecture based on the prior knowledge of sleep experts. Specifically, we 1) propose an end-to-end model named DetectsleepNet that uses single-channel EEG signals without additional data processing and achieves an impressive 88.0% accuracy on the SHHS dataset and 80.9% accuracy on the Physio2018 dataset; 2) construct an efficient, lightweight sleep staging model named DetectsleepNet-tiny based on DetectsleepNet, which has just 6% of the parameters of existing models while its accuracy exceeds 99% of that of state-of-the-art models; and 3) introduce a specific inference head to assess the attention given to each specific EEG segment in a sleep frame, enhancing the transparency of the models' decisions. Our model comprises fewer parameters than existing ones and further explores model interpretability to facilitate its application in healthcare. The code is available at https://github.com/komdec/DetectSleepNet.git.

§ INTRODUCTION

Sleep consists of non-REM (NREM) and REM sleep, which sleep experts classify into different sleep stages based on electroencephalography (EEG), electrooculography (EOG), and sub-chin (chin) electromyography (chin EMG)<cit.>. Since 1968, sleep staging studies have followed the guidelines developed by Rechtschaffen and Kales in the Handbook of Standardized Terminology, Techniques, and Scoring Systems for Sleep Staging in Human Subjects (R&K Handbook)<cit.>, which divides NREM sleep into stages I, II, III, and IV and refers to REM sleep as stage REM. Subsequently, NREM stages 3 and 4 were merged into a new stage called N3 by the American Academy of Sleep Medicine Manual for the Scoring of Sleep and Associated Events (AASM)<cit.>. Sleep staging and assessment currently depend primarily on the analysis and interpretation of polysomnogram (PSG) data by sleep experts, who usually divide the data into 30-second segments (each referred to as one sleep frame). This manual process is known as sleep stage scoring or sleep stage classification. However, this method is not only time-consuming and prone to human error, it also cannot be applied on a large scale because of the limited availability of experts. Moreover, with the manual method, patients with sleep disorders need to be placed in a specialized environment with expensive equipment and supervised by specialists to monitor their health regularly, a process that is both tiring and financially burdensome. Hence, automated sleep staging is essential.
Researchers have paid significant attention to the advantages brought by deep learning breakthroughs in various fields. For sleep staging, these advantages include the ability to classify sleep stages automatically without manual feature extraction, together with powerful representation learning and adaptability to different types of data. Recent research indicates an increasing interest in using deep learning for sleep staging. A. Supratak et al. proposed DeepSleepNet, which utilizes a convolutional neural network (CNN) to learn local features first and then a recurrent neural network (RNN) to learn temporal transition rules for these features<cit.>. A. Supratak further proposed TinySleepNet, applying a simpler single-pipeline CNN and a unidirectional Long Short-Term Memory (LSTM) network to enhance classification accuracy while reducing parameters<cit.>. H. Seo et al. proposed the IITNet model, which divides each 30-second EEG segment into overlapping subsegments and encodes them into corresponding feature vectors; the model utilizes a modified ResNet-50 for feature extraction and applies a Bi-LSTM for sleep stage classification<cit.>. U-Time utilizes a fully convolutional encoder-decoder network to analyze raw EEG signals of any length for sleep stage classification<cit.>. In SeqSleepNet, the researchers begin by applying a short-time Fourier transform (STFT) to PSG signals to create spectrograms<cit.>; they then use a Hamming window and fast Fourier transform (FFT) with logarithmic scaling, filter out unimportant subbands using a filter module, and finally employ an RNN and an attention module to classify the sleep stages. XSleepNet uses raw EEG signals and power spectra as inputs for sleep staging with a CNN and an RNN<cit.>. SleepTransformer uses power spectra as inputs for sleep staging and analyzes the confidence level of each element throughout the sleep night for interpretability<cit.>. SleePyCo first obtains a pre-trained model through contrastive learning and then fine-tunes with a Transformer to learn sequence information<cit.>. Although deep learning has facilitated significant advances in automatic sleep stage classification and has continuously improved classification performance through CNNs, RNNs, Transformers, and related architectures, these methods still face many challenges in practical applications: (1) Complexity and computational requirements: for example, using the windowed Fourier transform (WFT) to convert one-dimensional physiological signals into two-dimensional images for classification increases the computational resources required for model training and inference, making it difficult to embed models directly into small devices with limited computational power. (2) Model interpretability: deep learning models are usually referred to as "black boxes"; especially in medicine and healthcare, the decision-making process requires transparency for both professionals and patients in order to build trust and enhance usefulness. Hence, research should prioritize developing lightweight models with enhanced interpretability. This involves simplifying deep learning model structures, reducing parameters for small devices, and exploring new interpretability techniques. These improvements can make automated sleep stage classification more effective in clinical settings and provide more accurate tools for advancing sleep medicine.
This research aims to use deep learning to identify key signal features and transformation rules during sleep and propose a highly accurate and lightweight algorithm for automatic sleep stage classification. The algorithm will substitute traditional manual sleep staging methods, allowing their widespread use in human life and contributing to the realization of AI-driven healthcare services. The main contributions of this work are as follows: (1) We develop a concise end-to-end neural network model named DetectsleepNet, employing only single-channel EEG signals as inputs and achieving excellent classification accuracy on two large, publicly available sleep staging datasets. (2) We develop a lightweight model named DetectsleepNet-tiny to address the problem of a large number of parameters and the difficulty of deployment on mobile devices. The model reduces its size to 6% of existing sleep stage models, maintaining over 99% accuracy compared to state-of-the-art (SOTA) models in the meantime. This significantly improves efficiency and deployability. (3) To facilitate visualization of the decision-making process, specific structural inference heads have been introduced to assess the attention of EEG data to specific data segments. This structure enhances transparency and persuasiveness of the "black box" in real-world applications and assistants sleep experts perform sleep staging quickly and effectively. § DETECTSLEEPNET During the process of sleep staging, EEG feature plays a crucial role. As shown in Figure 1, a frame of the EEG signal from the O2-M1 channel demonstrates that during the awake state with eyes open, the EEG predominantly exhibits β waves and α waves (8-13Hz, typically most pronounced in the occipital leads). If α waves are present when the eyes are closed, and the α wave activity in the O2-M1 channel EEG signal exceeds 50%, it is marked as the awake (W) stage. Figure 1 shows that if the α wave activity in a frame is less than 50% and is replaced by low-amplitude mixed-frequency waves (LAMF), it is marked as the N1 stage. §.§ Model Architecture and Mathematical Derivation The model architecture depicted in the Figure 2 is known as DetectSleepNet. DetectSleepNet commences by employing multi-channel convolutional layers to process the one-dimensional raw EEG data (time-series signals), and introduces an additional dimension during the feature extraction process. Subsequently, in the representation learning phase, the model utilizes global average pooling to condense the two-dimensional EEG feature matrix of each time segment into a one-dimensional feature vector, ensuring precise description of the features of each time segment by a single vector. DetectSleepNet further utilizes the global average pooling in the sequence learning part to merge the feature vectors of multiple time segments into a single feature vector representing the entire sleep frame. The process are as follows: { x_i^t|i∈{ 1,2,… ,N }}=split( X^t) f_rep_i^t=Avg1( Rep( x_i^t) ) { f_gs_i^t}=Global_Seq( { f_rep_i^t|i∈{ 1,2,… ,N },t∈{ 1,2,… ,M }}) { f_ls_i^t}=Local_Seq( { f_gs_i^t|i∈{ 1,2,… ,N }}) f_seq^t=Avg2( { f_ls_i^t|i∈{ 1,2,… ,N }}) {c^t}={ classifier( f_seq^t)|t∈{ 1,2,… ,M }} Where, X^t denotes the single-channel EEG data at time step t. Through the function split, it is split into N subsets x_i^t. For each subset x_i^t, the representation learning structure Rep extracts the feature matrix. 
Following this, global average pooling Avg1 is applied to condense this into a one-dimensional feature vector f_rep_i^t. The feature vectors are processed by the global sequence learning structure Global_Seq and the local sequence learning structure Local_Seq to derive the vector f_ls_i^t corresponding to each subset x_i^t. Subsequently, the global average pooling Avg2 is employed to obtain the feature vector f_seq^t for each sleep frame. Finally, the classifier classifier transforms the feature vector of each sleep frame into the category vector c^t, enabling precise analysis and identification of various sleep stages. The collection of category vectors for M sleep frames is represented as {c^t | t ∈{ 1,2,… ,M }}. To effectively extract these features, this paper developed a parameter-shared local feature extractor as the fundamental structure for representation learning. Each local time series is processed through this feature extractor to generate feature vectors that capture the waveform characteristics of the local time. These features are then integrated using a sequence learning architecture to produce the final sleep stage categories. As illustrated in Figure 3, this simplified model architecture outlines the entire process from feature extraction to classification. Given a series of M continuous sleep frames, wherein each frame is subdivided into N time segments, the forward computation steps are determined as follows: { x_i^t|i∈{ 1,2,… ,N }}=split( X^t). f_rep_i^t=Rep( x_i^t). { f_seq^t,t∈{ 1,2,… ,M }}=Seq({ f_rep_i^t|i∈{ 1,2,… ,N }, t∈{ 1,2,… ,M }}). c^t=classifier( f_seq^t). where i∈{ 1,2,… ,N } and t∈{ 1,2,… ,M } represent the indices of the time segments and sleep frames, respectively. The variable X^t represents the single-channel EEG data of the tth sleep frame, and x_i^t represents the EEG data of the ith time segment of the ith sleep frame subsequent to segmentation. Rep is the representation learning structure that converts the data of each time segment x_i^t into the corresponding feature vector f_rep_i^t. Seq is the sequence learning structure that integrates the feature vectors of continuous sleep frames to generate the composite feature vector f_seq^t for each sleep frame. The classifier classifier converts the feature vector of each sleep frame into the corresponding category vector c^t. This process involves segmentation, feature extraction, serialization, and classification, ensuring continuity and efficiency. §.§ Representation Learning To effectively capture the unique frequency features of different EEG rhythms, we have introduced a multi-scale feature extraction module (MCFEM). The module, as illustrated in Figure 4, is designed with multiple convolutional layers and feature extraction paths to capture and integrate features from different receptive fields. In the Multi-Channel Feature Extraction Module (MCFEM), the first convolutional layer (Conv Layer 1) is set with a kernel size of 3 to effectively capture features from smaller receptive fields. Subsequently, two dilated convolutional layers (Dilated Conv 1 and Dilated Conv 2) leverage dilation rates of 3 and 5, respectively, to encompass broader receptive fields and capture a more comprehensive hierarchical feature representation. Furthermore, Conv Layer 2 employs a kernel size of 1 primarily to regulate the number of data channels and feature dimensions, thereby ensuring the efficient fusion of outputs from diverse pathways. 
Each feature extraction pathway comprises two convolutional layers, thereby enhancing the module's fitting capacity. The module incorporates batch normalization and nonlinear activation functions to enhance the model's training efficiency and overall effectiveness. Furthermore, it seamlessly integrates features from diverse receptive fields through summation operations, deftly avoiding an increase in feature dimension. The final pooling layer diminishes feature dimensions while fortifying the model's robustness and its capacity to generalize amidst variations in input data. These design elements work together to make the extracted features more comprehensive and suitable for complex EEG signal analysis tasks. §.§ Sequence Learning This paper utilizes a six-layer bidirectional gated recurrent unit (Bi-GRU) structure for sequence learning. Among them, five layers are responsible for capturing global sequence information and analyzing the connections between consecutive sleep frames, while the remaining layer focuses on processing local sequence information and examining the detailed relationships between time segments within a single sleep frame. The specific computation process is as follows: { f_gs_i^t}=Global_Seq( { f_rep_i^t|i∈{ 1,2,… ,N },t∈{ 1,2,… ,M }}). { f_ls_i^t}=Local_Seq( { f_gs_i^t|i∈{ 1,2,… ,N }}). the set { f_rep_i^t | i ∈{ 1, 2, … , N }, t ∈{ 1, 2, … , M }} denotes the features extracted from an N x M matrix of data points, each containing single-channel EEG data within a specific time segment. The global sequence learning structure, Global_Seq, integrates the sequence features of multiple sleep frames from a global perspective, resulting in global sequence features { f_gs_i^t}. The local sequence learning structure, Local_Seq, further processes these global features to refine the sequence dynamics within each time segment of a single sleep frame, producing local sequence features { f_ls_i^t}. This design allows the model to capture global dynamics and local variations, thereby enhancing the accuracy and reliability of sleep stage determination. § EXPERIMENT §.§ Datasets In this paper, we have opted to work with the SHHS and Physio2018 datasets<cit.>. These datasets offer the advantage of large sample sizes, enabling comprehensive testing and validation of model performance and stability. Moreover, the standardization of the above datasets allows for easy comparison with other studies, thus enhancing the reliability and reproducibility of research. The specific dataset evaluation scheme is exhibited in Table 1. The processing of both datasets followed the methods of existing research. Specifically, the SHHS dataset was refined by combining sleep stage 3 and 4 into a single N3 stage and removing movement periods and unknown stages to maintain data consistency and precision. The comprehensive evaluation framework of the datasets and the categorization distribution will be explicated in subsequent sections. The distribution of labeled categories for sleep epochs is presented in Table 2. §.§ Experimental Setting In this paper, the SHHS dataset is preprocessed to adhere to the AASM scoring manual. The sleep staging model was trained using 20 consecutive single-channel EEG sleep frames, incorporating data from both the Physio2018 and SHHS datasets. The ReLU6 activation function was chosen for its ability to remain linear for positive values and truncate for negative values, effectively mitigating the vanishing gradient problem and improving training efficiency. 
The model utilizes the cross-entropy loss function, as given by the formula:

CE_Loss(y, ŷ) = -[y log ŷ + (1-y) log(1-ŷ)]

This paper employs the AdamW optimizer, which adds a decoupled weight-decay mechanism to the original Adam optimizer<cit.>. This choice addresses overfitting and enhances the generalization ability of the model. AdamW combines adaptive learning rates and momentum, thereby improving the stability and efficiency of the training process. Furthermore, the paper implements the CyclicLR learning-rate schedule, which periodically adjusts the learning rate to help the optimizer escape local optima and to speed up convergence. Such a strategy effectively prevents the model from becoming trapped in local minima by varying the learning rate during training<cit.>. Overall, the combined use of AdamW and CyclicLR, together with the chosen loss and activation functions, optimizes the learning process and improves the model's capacity to handle complex sleep data. This enhances prediction accuracy and generalization, resulting in more precise and stable performance in sleep staging tasks.

§.§ Performance Comparison and Analysis

DetectSleepNet demonstrates outstanding performance on the Physio2018 and SHHS datasets. Table 3 compares it with the leading algorithms for single-channel sleep staging; bold indicates the best performance and underline the second best. Notably, DetectSleepNet achieved superior results across multiple classification tasks. Conversely, SleePyCo, XSleepNet, and SleepTransformer employ more intricate technical methodologies. SleePyCo uses a contrastive learning strategy for pre-training rather than a direct end-to-end architecture. XSleepNet and SleepTransformer transform raw EEG signals into time-frequency representations, extracting features through Fourier transforms. DetectSleepNet employs an end-to-end structure, directly processing raw EEG signals and effectively utilizing waveform information. This approach not only reduces the complexity of preprocessing but also improves the accuracy and generalization capability of the model in sleep staging tasks, demonstrating its potential and advantages in the field of sleep staging. Despite the overall effectiveness of current automatic sleep staging methods, accurately classifying the N1 stage remains challenging. This can be attributed to several factors: 1. Annotation Bias: Sleep experts rely on the proportion of alpha rhythms in the occipital leads (O1-M2 or O2-M1) to determine sleep stages. The AASM scoring manual dictates that more than 50% alpha rhythm is classified as the W stage, while less than 50% is marked as the N1 stage. Subtle variations around the 50% threshold can therefore lead to manual annotation errors. 2. Signal Modality: About 10% of the population does not produce alpha waves and requires alternative signals (e.g., EMG signals) to determine the N1 stage. Interpretation of these signals may vary among sleep experts based on personal experience and preference. 3. EEG Lead Selection: This paper primarily utilizes central leads (C4-M2 or C3-M1) for single-channel sleep staging. While central leads can capture EEG characteristics from multiple regions, the proportion of alpha rhythms may differ from the occipital leads, potentially impacting classification accuracy.
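For concreteness, the following PyTorch sketch summarizes the architecture described above (an MCFEM-style multi-scale block, per-segment pooling Avg1, a five-layer global Bi-GRU, a one-layer local Bi-GRU, pooling Avg2 and a linear classifier) together with the training components of the experimental setting (cross-entropy loss, AdamW, CyclicLR, ReLU6). Channel widths, hidden sizes, the pooling factor, learning-rate bounds, cycle length, segment length and the exact arrangement of the dilated paths are illustrative assumptions and are not taken from the released implementation.

```python
import torch
import torch.nn as nn

class MCFEM(nn.Module):
    """Multi-scale feature extraction: kernel-3 convolutions at dilations 1, 3
    and 5, each followed by a 1x1 conv (one reading of Fig. 4), fused by
    summation and pooled."""
    def __init__(self, in_ch=1, ch=32):
        super().__init__()
        def path(dilation):
            return nn.Sequential(
                nn.Conv1d(in_ch, ch, 3, padding=dilation, dilation=dilation),
                nn.BatchNorm1d(ch), nn.ReLU6(),
                nn.Conv1d(ch, ch, 1), nn.BatchNorm1d(ch), nn.ReLU6())
        self.paths = nn.ModuleList([path(1), path(3), path(5)])
        self.pool = nn.MaxPool1d(4)

    def forward(self, x):                     # x: (B', 1, segment_len)
        return self.pool(sum(p(x) for p in self.paths))

class DetectSleepNetSketch(nn.Module):
    """Per-segment representation, Avg1, global Bi-GRU (5 layers),
    local Bi-GRU (1 layer), Avg2 over segments, linear classifier."""
    def __init__(self, n_classes=5, ch=32, hidden=64):
        super().__init__()
        self.rep = MCFEM(ch=ch)
        self.global_seq = nn.GRU(ch, hidden, num_layers=5,
                                 bidirectional=True, batch_first=True)
        self.local_seq = nn.GRU(2 * hidden, hidden, num_layers=1,
                                bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                     # x: (B, M frames, N segments, L)
        B, M, N, L = x.shape
        f = self.rep(x.reshape(B * M * N, 1, L)).mean(dim=-1)   # Avg1
        g, _ = self.global_seq(f.reshape(B, M * N, -1))         # global sequence
        loc, _ = self.local_seq(g.reshape(B * M, N, -1))        # within one frame
        frame = loc.mean(dim=1)                                 # Avg2
        return self.classifier(frame).reshape(B, M, -1)

model = DetectSleepNetSketch()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-5, max_lr=1e-3, step_size_up=500,
    cycle_momentum=False)                     # required for Adam-type optimizers

def train_step(eeg, labels):
    """eeg: (B, 20 frames, N segments, L samples); labels: (B, 20) stage indices."""
    optimizer.zero_grad()
    logits = model(eeg)
    loss = criterion(logits.flatten(0, 1), labels.flatten())
    loss.backward()
    optimizer.step()
    scheduler.step()
    return loss.item()

x = torch.randn(2, 20, 10, 300)               # 10 segments of 300 samples each
y = torch.randint(0, 5, (2, 20))
print(model(x).shape, train_step(x, y))       # torch.Size([2, 20, 5]) and a loss
```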
The confusion matrix of the DetectSleepNet method is shown in the Figure 5, which illustrates the matching between the actual predictions and the true labels of the DetectSleepNet model on the test set. This visualization comprehensively showcases the model's prediction accuracy across different sleep stages and its misclassification scenarios, encompassing true positives, false positives, true negatives, and false negatives. The confusion matrix serves as a crucial instrument for gaining insights into the real-world performance of models, offering an intuitive portrayal of its strengths and weaknesses within each classification label. With the proposed multi-scale feature extraction module, our DetectSleepNet model can process one-dimensional EEG signals of varying frequencies. The comparison experiments are displayed in the Figure 6. In these experiments, TinySleepNet adjusts the sizes of its convolutional kernels based on the input EEG signal frequencies, resulting in changes to the model parameters. In contrast, the architecture of DetectSleepNet remains consistent. § LIGHTWEIGHT PROCESSING Previous experiments have shown that the number of model parameters in the sequence learning part far exceeds that in the representation learning part. This phenomenon is reminiscent of visual-text multimodal tasks, where the parameter count of the visual feature extractor surpasses that of the text feature extractor. To tackle this issue, a transfer learning strategy was implemented for the DetectSleepNet model. Specifically, the strategy entailed the fixation of the weights within the representation learning component while retraining the sequence learning component. A detailed exposition of this methodology and the model's architecture is provided in Figure 7. The Figure 7 shows the architecture of the DetectSleepNet-tiny model, a lightweight version of DetectSleepNet. Notably, the original 6-layer bidirectional gated recurrent unit (Bi-GRU) has been simplified to a single-layer Bi-GRU, resulting in a substantial reduction in model size and concurrent enhancements in training efficiency. The sequence of computation steps is as follows: { x_i^t|i∈{ 1,2,… ,N }}=split( X^t) f_rep_i^t=Avg1( Rep( x_i^t) ) { f_ts_i^t}=Tiny_Seq( { f_rep_i^t|i∈{ 1,2,… ,N },t∈{ 1,2,… ,M }}) f_avg^t=Avg2( { f_ts_i^t|i∈{ 1,2,… ,N }}) {c^t}={ classifier( f_avg^t)|t∈{ 1,2,… ,M }} {ŷ^t}={ argmax(c^t) } L=1/M∑_t=1^MCELoss( {ŷ^t},{y^t}) θ_Tiny_Seq←θ_Tiny_Seq-η·∂ L/∂θ_Tiny_Seq where Tiny_Seq represents the new sequence learning structure introduced in this chapter. f_ts_i^t represents the feature vector corresponding to each subset x_i^t after being processed by the Tiny_Seq structure. f_avg^t represents the feature vector for each sleep frame. The function argmax is used to obtain the category label with the highest probability. {ŷ^t} is the set of predicted labels for multiple sleep frames, and {y^t} is the set of true labels for multiple sleep frames. CELoss is the cross-entropy loss function, L represents the mean of the loss function, and η represents the learning rate. The model parameters Tiny_Seq of θ_Tiny_Seq are iteratively updated through gradient descent to optimize performance. The comparison results for accuracy between DetectSleepNet and DetectSleepNet-tiny are illustrated in Table 4. It is evident from the table that, despite a slight decrease in performance on the Physio2018 and SHHS1 datasets, DetectSleepNet-tiny still maintains high accuracy at 99.5% and 99.3% of DetectSleepNet's accuracy, respectively. 
This indicates that even with a significant reduction in the number of parameters, DetectSleepNet-tiny remains effective in performing the sleep staging task. (In the table, 80.5% and 80.9% represent the accuracy of DetectSleepNet-tiny and DetectSleepNet on the Physio2018 dataset, respectively.) To further evaluate the lightweight design of DetectSleepNet-tiny, this paper also compares its model parameters with those of other models published in recent years, as outlined in Table 5. The results reveal that the model parameters of DetectSleepNet-tiny are only 0.049M, significantly lower than those of the other comparison models. This validates that while maintaining high accuracy, DetectSleepNet-tiny effectively enhances the model's efficiency and adaptability, especially in scenarios with limited resources or the necessity for mobile deployment. The comparison results demonstrate that DetectSleepNet-tiny achieves model lightweight while maintaining excellent performance, highlighting its potential applications in sleep staging. Through the optimization of the sequence learning component's design, DetectSleepNet-tiny mitigates the computational load, rendering the model better suited for deployment in resource-constrained environments, such as mobile devices or remote medical monitoring systems. In summary, the successful development of DetectSleepNet-tiny exemplifies a harmonious blend of performance and resource efficiency, offering a feasible and effective solution for forthcoming sleep monitoring technologies. § INTERPRETABILITY ANALYSIS The interpretability of neural networks is gaining increasing attention in the field of deep learning, especially in the medical field, where decision transparency and verifiability are of utmost importance. While researchers have developed various techniques to elucidate the decision-making processes of convolutional neural networks in image classification tasks, the application of these methods in the specialized field of sleep staging still requires further exploration<cit.>. In this section, we aim to enhance the interpretability of the DetectSleepNet model introduced in section 2. We will retain its representation learning component and optimize the decision reasoning process by implementing various reasoning heads to augment the explanatory capability of model. These reasoning heads are specifically designed to offer clear decision logic, thereby rendering the model's prediction process more transparent. §.§ Voting based decision model In the vote-based decision model, each individual sleep frame X^t is subdivided into N sample blocks x_i^t. Each sample block x_i^t independently undergoes the representation learning module to extract the feature vector f_rep_i^t. These feature vectors are then separately fed into the classifier, resulting in an independent classification for each time segment. To determine the final sleep frame category, a voting mechanism is implemented by applying global average pooling to these classification results. The forward process is illustrated in Figure 8. The detailed calculation process of this model follows: { x_i^t|i∈{ 1,2,… ,N }}=split( X^t) f_rep_i^t=Rep( x_i^t) c_i^t=classifier( f_rep_i^t) c^t=1/N∑_i=1^Nc_i^t pred^t=argmax(c^t) where f_rep_i^t represents the feature vector of the sample block x_i^t that is extracted by the representation learning module Rep. 
c_i^t denotes the classification vector (i.e., the classification result) for each time segment within the sleep frame after the feature vector f_rep_i^t is input into the classifier. The final category result for the sleep frame c^t is obtained by averaging (i.e., voting) the classification results c_i^t across the N time segments. The final classification vector pred^t is obtained from the final category result c_i of the sleep frame using the maximum value index function argmax. pred^t is a scalar representing the final classification result. The process enables the decision-making of the model to be transparent, allowing for the analysis and verification of each sample block's specific contribution to the final decision. The feature vector f_rep_i^t of each sample block can be considered independent of other blocks, providing a unique opportunity to track and understand the model's decision vectors throughout the entire sleep frame. The specific decision vector expression is as follows: Att_x^t=Att_c^t={ c_i^t[ pred^t]|i∈{ 1,2,… ,N }} where c_i[ pred^t] represents the scalar of the final category, which is also the maximum value of the classification vector c_i. Att_c^t represents the decision vector of the category vector group { c_i^t | i ∈{ 1, 2, … , N }}, and Att_x^t represents the decision vector of the sample set { x_i^t | i ∈{ 1, 2, … , N }}. Similarly, the value of pred^t can be changed to observe the process of the model classifying the sleeping frame into other categories. Specifically, researchers can adjust the observed metrics in order to uncover alternative paths and inference processes in decision-making. By analyzing the features and weights that the model utilizes to classify sleep frames into different categories, researchers can gain a deeper understanding of the decision-making of the model approach and the factors involved in various scenarios. This analysis is essential for revealing the decision behavior of the model, as well as the perception and interpretation of different categories. Additionally, to facilitate the visualization of the decision vector Att_x^t in the form of a heatmap, it is transformed during display as follows: Att_show^t=interp( relu( Att_x^t/max( Att_x^t)) ) where the max function is used to obtain the maximum value in the decision vector Att_x^t (the maximum scalar in a one-dimensional matrix). The relu function sets negative numbers in the one-dimensional vector to zero. The expression relu( Att_x^t/max( Att_x^t)) maps the decision vector Att_x^t to the [0,1] interval. The interp function is an interpolation function used to adjust the vector's size to improve the resolution of the heatmap. Att_show^t is the final one-dimensional decision vector used for plotting the heatmap for observation. §.§ Sleep Staging Decision Model Based on Feature Vectors Expanding on section 5.1, this section develops a feature vector-based decision model aimed at addressing the limitations of the vote-based model's voting (global average pooling) structure. The vote-based model utilizes shorter feature vectors (equivalent in length to the category vector) in each sample block for global average pooling, leading to reduced information during decision-making. Therefore, the feature vector-based decision model employs a simple architecture to integrate the feature vectors of all sample blocks, resulting in a more informative and comprehensive global feature representation. 
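Before moving to the feature-vector-based head, note that the vote-based head above and its heatmap normalization translate almost line for line into code. In the sketch below the per-segment features and the linear segment classifier are placeholders, and the interpolation length is an arbitrary display resolution.

```python
import torch
import torch.nn.functional as F

def vote_based_decision(f_rep, classifier, heatmap_len=3000):
    """f_rep: (N, D) per-segment feature vectors of one sleep frame;
    classifier: maps (N, D) -> (N, C) segment-wise category vectors."""
    c_i = classifier(f_rep)                  # per-segment category vectors c_i^t
    c = c_i.mean(dim=0)                      # voting via global average pooling
    pred = int(c.argmax())                   # final stage for this frame

    att = c_i[:, pred]                       # decision vector Att_x^t
    att = F.relu(att / att.max())            # map to [0, 1] and clip negatives
    heat = F.interpolate(att[None, None, :], size=heatmap_len,
                         mode="linear", align_corners=False)[0, 0]
    return pred, att, heat

N, D, C = 10, 32, 5
classifier = torch.nn.Linear(D, C)
pred, att, heat = vote_based_decision(torch.randn(N, D), classifier)
print(pred, att.shape, heat.shape)
```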
§.§.§ Calculating Decision Vectors Based on Forward Propagation The feature vector-based decision model involves dividing the data of each single sleep frame X^t into N sample blocks x_i^t. Each sample block is then transformed into the corresponding feature vector f_rep_i^t through the representation learning module. Unlike the previous model, rather than classifying each sample block individually, all feature vectors are merged into a global feature vector using global average pooling. The final classification result is then produced through a fully connected layer. The process is depicted in Figure 9. The detailed forward process of the model architecture is as follows: f_global^t=1/N∑_i=1^Nf_rep_i^t c^t=classifier(f_global^t) pred^t=argmax(c^t) where the feature vectors f_rep_i^t corresponding to all sample blocks x_i^t are averaged to obtain f_global^t. f_global^t represents the global feature vector for this sleep frame. The category vector c^t is obtained by passing the global feature vector f_global^t through a single-layer fully connected classifier classifier. The final classification result pred^t is determined by identifying the index of the maximum value in the classification vector c^t. The computation process of the fully connected classifier classifier is as follows: classifier(f_global^t)=f_global^t𝐖_𝐟𝐜^𝐓+𝐛_𝐟𝐜 where 𝐖_𝐟𝐜 and 𝐛_𝐟𝐜, the symbols W and b denote the weight and bias parameters of the fully connected layer of the classifier, respectively. If L represents the length of f_global^t and C signifies the length of c^t, then 𝐖_𝐟𝐜 is a two-dimensional matrix with dimensions C × L, and 𝐛_𝐟𝐜 is a one-dimensional matrix with a length of C. Att_f_global^t=f_global^t·𝐖_𝐟𝐜[pred^t]^T+𝐛_𝐟𝐜[pred^t] ∼ f_global^t·𝐖_𝐟𝐜[pred^t]^T Att_x^t=Att_f^t= 1/N∑_i=1^N{ f_i^t/f_global^t· Att_f_global^t} ∼1/N∑_i=1^N{ f_i^t·𝐖_𝐟𝐜[pred^t]^T} where Att_f_global^t represents the decision vector of the global vector f_global^t for the classification result pred, with a length of L. 𝐖_𝐟𝐜[pred^t] is the pred^t-th row of the weight matrix of the fully connected layer, which is a one-dimensional vector of length L. 𝐛_𝐟𝐜[pred^t] is the pred^t-th row of the bias matrix of the fully connected layer, which is a scalar (a constant during model inference). Att_f^t represents the decision vector of the feature vector group { f_i^t | i ∈{ 1,2,… ,N }}, and Att_x^t represents the decision vector of the sample set { x_i^t | i ∈{ 1,2,… ,N }}. However, we have omitted the bias term 𝐛_𝐟𝐜[pred^t] for simplicity. The decision vector for the feature vector of each sample block can be calculated using the above method and then visualized to illustrate the model's decision basis and focus at different sleep stages. Heatmaps are generated using the same method as previously described to visually display the contribution of each sample block to the final decision. §.§.§ Calculating Decision Vectors Based on Back Propagation When dealing with intricate neural network models such as those featuring multi-layer fully connected structures or recurrent neural network architectures directly deriving decision vectors through forward propagation is often unfeasible. These models exhibit high non-linearity and complex parameter structures, making the decision-making process challenging to parse. Neural networks typically employ the backpropagation algorithm to optimize model parameters. This method guides parameter adjustments by computing the gradient of the loss function with respect to the model parameters to minimize the loss. 
This is valuable during the model training phase, offering insights into the model's decision process. The backpropagation algorithm can also calculate the gradients of the loss function not only with respect to the model parameters but also with respect to the input features, quantifying the influence of each input feature on the model's prediction. The paradigm of the backpropagation-based process is as follows:

f_rep_i^t = encoder(x_i^t)

c^t = decoder({ f_rep_i^t | i ∈ { 1,2,… ,N }})

pred^t = argmax(c^t)

where each sample block x_i^t is processed by the feature extractor encoder to obtain the corresponding feature vector f_rep_i^t. The feature vectors of all sample blocks in a single sleep frame { x_i^t | i ∈ { 1, 2, … , N }} are then processed by the classifier decoder to obtain the classification vector c^t for that sleep frame. The final classification result for the sleep frame is pred^t. The decision vector of the model is derived according to the following paradigm:

Att_f_set^t = -∂ c^t[pred^t] / ∂ f_set^t

Att_x^t = Avg_L( f_set^t · Att_f_set^t )

where f_set^t = { f_rep_i^t | i ∈ { 1,2,… ,N }} represents the feature vectors of all sample blocks. If L is defined as the length of f_global^t, then f_set^t can also be considered a feature matrix of size N × L. Att_f_set^t is the partial derivative of c^t[pred^t] with respect to f_set^t, representing the sensitivity of the classifier decoder to f_set^t, and it has the same size as f_set^t. Avg_L is the average of the decision matrix f_set^t · Att_f_set^t along the L dimension, yielding the decision vector Att_x^t of length N.

§.§ Sleep Staging Decision Model Based on Temporal Information

The sleep staging task involves capturing the temporal information of adjacent sleep frames. When dealing with data that has temporal characteristics, such as time series or other sequential data, the decision models introduced above are unable to capture the temporal correlation and dynamic changes across adjacent sleep frames. Therefore, this section introduces a temporally correlated decision model, specifically designed and optimized for the characteristics of sequential data. The temporally correlated decision model leverages temporal information in sequential data to more accurately infer and predict events or states within the sequence. Unlike traditional feature aggregation and global decision-making, temporal models take the time order of the data into account, enabling them to adapt more effectively to the dynamics and trends of time series data. The structure of this model is illustrated in Figure 10. The forward propagation process is as follows:

f_global^t = Seq_local( { f_rep_i^t | i ∈ { 1,2,… ,N }})

f_seq^t = Seq_global(f_seq^(t-1), f_global^t, f_seq^(t+1))

c^t = classifier(f_seq^t)

pred^t = argmax(c^t)

where the local temporal learning structure Seq_local first integrates the feature vectors of the sample blocks within a single sleep frame X^t, represented as { f_rep_i^t | i ∈ { 1,2,… ,N }}, to obtain a global feature vector f_global^t that contains temporal information. Then, the global temporal learning structure Seq_global processes the global feature vector f_global^t in conjunction with the forward sequence feature of the previous sleep frame f_seq^(t-1) and the backward sequence feature of the next sleep frame f_seq^(t+1), resulting in a sequence feature that comprehensively incorporates the temporal dependencies.
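For decision heads that cannot be unrolled by hand, the gradient-based construction above can be implemented with automatic differentiation. The sketch below uses a placeholder decoder, keeps the sign convention written in the equation above, and treats the feature sizes as arbitrary.

```python
import torch

def backprop_decision_vector(f_set, decoder):
    """f_set: (N, L) per-segment features of one frame (requires_grad=True);
    decoder: maps the (N, L) feature matrix to a vector of C class scores."""
    c = decoder(f_set)
    pred = int(c.argmax())
    grad, = torch.autograd.grad(c[pred], f_set)   # d c[pred] / d f_set, shape (N, L)
    att_f = -grad                                 # sign convention as in the text
    att_x = (f_set * att_f).mean(dim=1)           # Avg_L over the feature dimension
    return pred, att_x.detach()

N, L, C = 10, 64, 5
decoder = torch.nn.Sequential(torch.nn.Flatten(start_dim=0),
                              torch.nn.Linear(N * L, C))
f_set = torch.randn(N, L, requires_grad=True)
pred, att = backprop_decision_vector(f_set, decoder)
print(pred, att.shape)                            # predicted stage, torch.Size([10])
```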
§.§ Analysis of Interpretability Model Results In this section, we will conduct a comprehensive analysis and performance comparison of the previously introduced models, namely the vote-based decision model, the feature vector-based decision model, and the temporally correlated decision model. These models underwent rigorous testing using five-fold cross-validation on the initial fold of the Physio2018 dataset. The specific accuracy results are meticulously presented in Table 6 for detailed evaluation and comparison. In the experiment, it was noted that decision models predicated on voting and feature vectors demonstrated diminished accuracy. This phenomenon primarily emanated from their inadequate capacity to effectively capture both localized and global temporal features. Notwithstanding their disparate structural configurations, both models evinced limitations in assimilating input features during forward propagation, culminating in commensurate accuracy outcomes. The decision vectors abstracted from a segment of the Physio2018 dataset are delineated in Figure 11. These depictions illustrate the decision vectors corresponding to the 6th, 8th, 70th, and 76th sleep frames in sample tr03-0005 from the Physio2018 dataset. (The choice to feature these sleep frames was predicated on their manifestation of prototypical waveform characteristics, rendering them more amenable to scrutiny.) The sleep stages are designated as W (wakefulness), N1 (light sleep), N2 (intermediate sleep), and N3 (deep sleep). A single-channel EEG data from the central lead C4-M1 is employed as the data channel. Heatmaps labelled a, b, c, and d correspond to varying models: the vote-based model, feature vector-based forward propagation model, feature vector-based backpropagation model, and temporally correlated decision model. Within the heatmaps, regions with greater darkness signify the features upon which the models concentrate, in accordance with the sleep staging criteria articulated in the AASM scoring manual. The results depicted in Figure 11 illustrate a notable similarity in outcomes between models employing forward propagation (heatmap b) and backpropagation (heatmap c), affirming the effectiveness and accuracy of backpropagation in elucidating the model's decision-making process. In Figure 11(a), the decision vectors a, b, c, and d are indicative of the alpha rhythm, as emphasized in the heatmap. This finding aligns with the methodology employed by sleep experts for identifying the W stage as per the AASM scoring manual. Figure 11(b) displays decision vectors a, b, c, and d pointing towards the low-amplitude mixed-frequency (LAMF) waves. Nevertheless, decision vector d demonstrates limited coverage of the LAMF waves, indicative of a more focused range. In Figures 11(c) and 11(d), all models successfully identify slow waves. However, the temporally correlated model only partially encompasses all regions of slow waves, potentially attributed to excessive reliance on temporal information. Despite achieving higher accuracy, the temporally correlated decision model integrates an excessive amount of temporal information from adjacent frames, consequently leading to decreased consistency with the decisions outlined in the AASM scoring manual. Conversely, non-temporally correlated decision models demonstrate more robust alignment with the AASM scoring manual. § CONCLUSION Sleep is a crucial physiological phenomenon for humans, and the quality of sleep affects various aspects of daily life. 
Good sleep quality promotes metabolism, maintains cardiovascular health, and improves attention and reaction capabilities. Sleep monitoring is an important means of analyzing sleep quality, with sleep stage classification (sleep staging) being a key component. An efficient automatic sleep staging algorithm is therefore of great significance for health monitoring. This paper proposes a neural network architecture based on the prior knowledge of sleep experts and analyzes the model's lightweight design and interpretability in depth. In addition, an interpretable sleep staging system has been developed to achieve low-cost, efficient automatic sleep stage classification and to describe the sleep cycle, aiming to improve the prevalence and efficiency of sleep monitoring in both home and clinical environments. By adopting recent machine learning techniques, this work not only raises the level of automation in sleep staging but also increases the system's interpretability, enabling medical professionals to better understand and trust the model's decision-making process. The main contributions of this paper are: 1. High-performance sleep staging model: The DetectSleepNet model developed in this study uses single-channel EEG signals and achieves competitive accuracy on large public sleep staging datasets without additional preprocessing, showing that the proposed method simplifies the processing pipeline while maintaining high accuracy and offering a robust tool for sleep research and clinical applications. 2. Lightweight model DetectSleepNet-tiny: To better meet the requirements of home and mobile devices, this study introduces a lightweight model with a minimal parameter count that significantly reduces computational resource requirements while retaining high accuracy. 3. Decision visualization: By integrating a model inference head that evaluates how much attention each sleep-frame prediction pays to specific EEG segments, the system enhances the interpretability of the results and provides medical professionals with an intuitive decision-support tool. Through these efforts, we look forward to extending automatic sleep staging technology from clinical laboratories to homes and mobile devices, more broadly serving public health and the development of sleep science. Acknowledgements We extend our sincere gratitude to Associate Professor Sun Guobing of Heilongjiang University for the invaluable guidance provided. § DECLARATIONS
http://arxiv.org/abs/2406.17704v1
20240625164150
Alignment-Induced Self-Organization of Autonomously Steering Microswimmers: Turbulence, Vortices, and Jets
[ "Segun Goh", "Elmar Westphal", "Roland G. Winkler", "Gerhard Gompper" ]
cond-mat.soft
[ "cond-mat.soft", "physics.bio-ph", "physics.comp-ph" ]
Theoretical Physics of Living Matter, Institute for Advanced Simulation, Forschungszentrum Jülich, 52425 Jülich, Germany Peter Grünberg Institute and Jülich Centre for Neutron Science, Forschungszentrum Jülich, 52425 Jülich, Germany Theoretical Physics of Living Matter, Institute for Advanced Simulation, Forschungszentrum Jülich, 52425 Jülich, Germany g.gompper@fz-juelich.de Theoretical Physics of Living Matter, Institute for Advanced Simulation, Forschungszentrum Jülich, 52425 Jülich, Germany § ABSTRACT Systems of motile microorganisms exhibit a multitude of collective phenomena, including motility-induced phase separation and turbulence. Sensing of the environment and adaptation of movement plays an essential role in the emergent behavior. We study the collective motion of wet self-steering polar microswimmers, which align their propulsion direction hydrodynamically with that of their neighbors, by mesoscale hydrodynamics simulations. The simulations of the employed squirmer model reveal a distinct dependence on the swimmer flow field, i.e., pullers versus pushers. The collective motion of pushers is characterized by active turbulence, with nearly homogeneous density and a Gaussian velocity distribution. Pullers exhibit a strong tendency for clustering and display velocity and vorticity distributions with fat exponential tails; their dynamics is chaotic, with a temporal appearance of vortex rings and fluid jets. Our results show that the collective behavior of intelligent microswimmers is very diverse and still offers many surprises to be discovered. Alignment-Induced Self-Organization of Autonomously Steering Microswimmers: Turbulence, Vortices, and Jets Gerhard Gompper July 1, 2024 ========================================================================================================== § INTRODUCTION Emergence of dynamic structures and patterns is an essential feature of biological active motile systems. Examples include microbial swarms <cit.> on the cellular level, to schools of fish <cit.>, flocks of birds <cit.>, and collective motion in human crowds <cit.> on a macroscopic level. Also, in artificial active systems consisting of synthetic self-propelled particles and microrobots, the collective dynamics of constituent objects is of fundamental importance for their application in engineering and medicine to achieve a large spectrum of functionalities <cit.>. A fundamental aspect in such systems is the active and autonomous motion of the constituting particles. While activity and self-propulsion can give rise to several novel types of collective behaviors, such as motility-induced phase separation (MIPS) <cit.> and active turbulence <cit.>, the fact that biological microswimmers are not only motile, but also gather information about their environment and adapt their motion by self-steering, remains largely unexplored and yet to be elucidated <cit.>. Many living organisms are immersed in a fluid medium, and their collective behavior is strongly affected or even dominated by hydrodynamics <cit.>. The hydrodynamic environment is not just the background medium in which aquatic microorganisms are based, but it is rather essential for locomotion on the individual level as well as inter-organism interactions <cit.>. On the mesoscale, life has adapted to low-Reynolds hydrodynamics, e.g., in the emergence of bacterial turbulence <cit.> and in coordinated cell migration during embryogenesis <cit.>; similar physical laws govern the swarming microrobots <cit.>. 
However, studying large-scale hydrodynamic systems is challenging, as the fluid adds a large number of degrees of freedom, while the consideration of large systems is unavoidable to accurately capture long-range hydrodynamic interactions and to minimize potential finite-size effects. So far, only limited studies have been conducted in this direction, particularly in three dimensions. The goal of our current endeavor is to unravel the emergent collective behavior of systems that combine two essential components of living and artificial active systems, self-steering and hydrodynamics. In the context of active matter, the Vicsek model <cit.> is among the earliest and simplest models for the collective directional motion of self-propelled, self-steering particles due to mutual alignment. While the role of alignment interactions has been extensively investigated for dry active systems <cit.>, hydrodynamic interactions themselves have rather been viewed as a physical alignment mechanism in wet systems <cit.>. However, hydrodynamic propulsion can in fact also destabilize polar order <cit.>, which implies the necessity of an additional stabilization mechanism for alignment. To achieve stable alignment, we consider a hydrodynamic extension of the Vicsek model. Our active agents, modeled as squirmers <cit.>, sense the propulsion direction of neighboring agents and adapt their propulsion direction accordingly by hydrodynamic self-steering <cit.>, with a “slow" temporal response due to limited maneuverability. We perform large-scale simulations in three-dimensional systems, capturing the fluid environment by the multiparticle collision dynamics (MPC) technique <cit.>, a particle-resolved mesoscale hydrodynamic simulation approach. We observe and characterize the emergence of swarming dynamics in polar active fluids consisting of either pusher or puller microswimmers. Our results show that hydrodynamic interactions destabilize polar order, giving rise to rich collective spatio-temporal behavior beyond the simple symmetry breaking of the dry Vicsek model. Pusher systems feature active turbulence with non-universal scaling exponents in the kinetic energy spectrum, revealing a route toward active turbulence via self-steering. Systems of pullers are accompanied by the formation of dense, swarming clusters driven by hydrodynamic interactions. In particular, the formation of toroidal structures is observed in the vorticity field; these structures are characterized by enhanced spatial vorticity-velocity cross correlations at short distances. This demonstrates that the formation of vortex rings is a direct consequence of strong active jets caused by propulsion and alignment. § RESULTS §.§ Hydrodynamic Vicsek model and Polar Order We consider a system of N spherical active microswimmers with radius R_sq and instantaneous orientation e_i, i∈{1, …, N}. For self-propulsion and self-steering, the squirmer model is employed, where the self-propulsion is achieved via an axisymmetric surface-slip boundary condition, characterized by the speed v_0 and the active stress β [see Materials and Methods, Eq. (<ref>)]. Self-steering of microswimmers is modeled via an adaptive non-axisymmetric surface-slip boundary condition [see Materials and Methods, Eqs. (<ref>) and (<ref>), and Fig. <ref>(b),(c)], which enables the squirmer to rotate and consequently reorient toward a desired direction e_aim,i with the limited angular velocity <cit.> ω_i = C_0 e_i × e_aim,i, where C_0 characterizes the strength of adaptation.
Accordingly, we introduce two dimensionless parameters, the Péclet number Pe and the maneuverability Ω in the form Pe = v_0/σ D_R, Ω = C_0/D_R , where σ=2R_ sq and D_R are the diameter and the (thermal) rotational diffusion coefficient of a squirmer, respectively. In Eq. (<ref>), sensed information of each particle about the orientation of other neighboring particles is represented by a vector e_ aim,i({ e_j}). In the interest of the investigation and understanding of the generic collective behavior of an active polar fluid, we focus here on a “minimal” model, where only the swimming and steering mechanism via the adaptive surface flow field is explicitly considered, whereas sensing and information processing is taken into account implicitly by the sensed-information vector e_ aim in Eq. (<ref>), see Ref. <cit.> for a more detailed discussion. As a representative example of information exchanges between intelligent microswimmers, we consider a Vicsek-type alignment interaction, where each microswimmer aims at adapting its orientation and propulsion direction to the average orientation of neighboring particles, see Fig. <ref>(a) for illustration. Specifically, we employ a non-additive rule of orientation adaptation, which results in non-reciprocal interactions between microswimmers, where <cit.> e_ aim,i = 1/N_i∑_j ∈Γ_i e_j. Here, Γ_i is the set of neighbors of the i-th particle in its alignment range R_a, and N_i is the number of neighbors. As apparent from the definition in Eq. (<ref>), e_ aim, which serves as an input signal triggering adaptive surface flows according to Eqs. (<ref>) and (<ref>), is typically not a unit vector. Therefore, the magnitude of the adaptation force depends on the strength of orientational order of the neighboring microswimmers. We emphasize that in our approach the steering is achieved solely via the modification of the surface flow fields, mimicking the autonomous behaviors of microorganisms, in contrast to external driving forces, see, e.g., Ref. <cit.>. This difference leads to very different behaviors in wet and dry systems. For the MPC fluid dynamics simulations, we consider a MPC variant with angular momentum conservation <cit.>, see Supplementary Materials (SM), Sec. S-I for more details. Our highly parallelized, GPU-accelerated implementation employed for the simulations is based on the framework proposed in Ref. <cit.>. For an accurate characterization of emergent behaviors, we consider large system sizes up to L/a = 1024, where L is the length of the cubic simulation box and a is the side length of a MPC collision cell, and up to N=884,736 squirmers. For the squirmers, we choose a sensing range R_a = 4 R_ sq, and a strength of the active stress β = -3 and β =3 for pushers and pullers, respectively. For most simulations, we consider the packing fraction ρ≡ (4π R_ sq^3/3)N/L^3 =0.093 (based on squirmer radius), or ρ_a ≡ (4π R_ a^3/3)N/L^3 ≈ 6.0 (based on sensing range), if not explicitly stated otherwise. For comparison, we also perform simulations of a dry system of aligning self-steering “intelligent" active Brownian particles (iABPs) of the same packing fraction. Results for the global polar order parameter Ψ≡ |∑_i e_i |/N are shown in Fig. <ref>(d). While the systems are disordered for small maneuverability Ω in both dry and wet cases, a global polar order emerges only in dry systems for large Ω, indicating that hydrodynamic interactions destabilize the polar order. 
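To make the adaptation rule concrete, the following minimal sketch (our own illustration, not the simulation code) combines the Vicsek-type aiming vector of Eq. (3) with the limited-maneuverability steering of Eq. (1) in a single explicit orientation update. It deliberately ignores the MPC fluid, thermal noise, and the surface-slip implementation of the steering torque, and all function and variable names are assumptions.

```python
import numpy as np

def steer_orientations(pos, e, box_L, R_a, C0, dt):
    """One explicit Euler update of the squirmer orientations.

    pos : (N, 3) positions in a cubic periodic box of edge length box_L,
    e   : (N, 3) unit propulsion directions,
    R_a : alignment range, C0 : adaptation rate, dt : time step.
    """
    e_new = e.copy()
    for i in range(len(pos)):
        # minimum-image distances; squirmer i is included in its own
        # neighborhood here (a detail the paper leaves implicit)
        d = pos - pos[i]
        d -= box_L * np.round(d / box_L)
        neigh = np.linalg.norm(d, axis=1) < R_a
        e_aim = e[neigh].mean(axis=0)        # Eq. (3): generally not a unit vector
        omega = C0 * np.cross(e[i], e_aim)   # Eq. (1): limited angular velocity
        e_new[i] = e[i] + np.cross(omega, e[i]) * dt   # rotate e_i toward e_aim
        e_new[i] /= np.linalg.norm(e_new[i])           # keep unit length
    return e_new
```

In the full model, this rotation is not imposed directly but is generated hydrodynamically through the non-axisymmetric surface-slip coefficients C_11 and C̃_11 defined in Materials and Methods.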
Instead of simple polar ordering, our squirmer systems show swarming dynamics with and without density modulation, depending on the swimming mechanism (pusher or puller). §.§ Pushers: Active Turbulence via Self-Steering Systems of rear-propelled squirmers (pushers) exhibit active turbulence, see Fig. <ref>(a)-I for a snapshot (also Movie S1), and their collective motion features vortical flows. Figure <ref>(a)-II displays the fluid velocity field, reflecting vortical structures and fluctuations in the magnitude of the vorticity. Accordingly, equal-time spatial velocity correlations become negative at large distances (less than L/2) for large Ω, see Fig. <ref>(b). However, no pronounced density fluctuations are visible, see Fig. <ref>(a)-I. This is confirmed by the analysis of the local density distribution via a Voronoi tessellation <cit.>. As shown in Fig. <ref>(c), the distribution exhibits a peak near the global squirmer density, independent of Ω. To characterize the dynamics, we examine the kinetic energy spectrum as a representative indicator of active turbulence <cit.>. We determine the energy spectra for both the squirmers and the fluid. For the squirmers, we first calculate spatial velocity correlation functions and then perform a Fourier transform to obtain the energy spectrum. For the fluid, we calculate the energy spectrum directly from the (Eulerian) velocity field. Before proceeding to a more detailed analysis of systems of self-steering squirmers, we briefly discuss, as a reference case, the dynamics of pushers without self-steering, i.e., squirmers with Ω = 0. Interestingly, we do not observe any significant collective behavior without self-steering for Pe = 128 as well as 256, with β = -3 and β = -5, in our simulations (see SM, Fig. S1). It seems that thermal fluctuations, and consequently rotational diffusion of the microswimmers, suppress any structure formation at the considered small packing fraction; compare also Ref. <cit.> for a related lattice Boltzmann simulation study of pushers. Moreover, the energy spectrum of the squirmers strongly deviates from the fluid energy spectrum for large and intermediate k (Fig. S1), indicating that the fluid flows only mildly affect the squirmer dynamics. As also pointed out in Ref. <cit.>, presumably even higher densities of force dipoles are necessary to promote the emergence of collective motion engaging both the fluid and the microswimmers for pushers without self-steering, i.e., solely based on hydrodynamic alignment. Indeed, the number density explored in Refs. <cit.> is typically one or two orders of magnitude larger than in our case, due to the different geometry of the active particles and their use of force dipoles. In sharp contrast, systems of self-steering squirmers display pronounced self-organization, as demonstrated in Fig. <ref>. With increasing maneuverability Ω, a scaling regime emerges in the fluid power spectrum for wave numbers k/k_σ ≲ 0.3, see Fig. <ref>(d). Corresponding energy spectra of the squirmer motion exhibit the same scaling behavior with the same exponents – with scaling extending even to lower wave numbers k/k_σ ≈ 0.02 for Ω=8192 – confirming the emergence of active turbulence, where the dynamics of squirmers and fluid are strongly correlated on larger length scales. The exponents ν of the energy spectrum, E(k) ∼ (k/k_σ)^-ν, are found to be non-universal, depending on and increasing with Ω, roughly in the range 2.8 ≲ ν ≲ 4.0 for 4 ≤ Ω/Pe ≤ 64, see Fig. S2.
Moreover, with increasing Ω, not only does ν increase, but the scaling regime also extends to larger length scales (smaller k) as more squirmers participate in the self-organized vortex structures, and the peak height in the energy spectrum increases (up to |E| ≈ 200 v_0^2 for Ω=8192). This indicates that squirmers attain much higher velocities, which is quantitatively confirmed by the mean-square displacement (MSD), see Fig. <ref>. In the ballistic regime, the MSD ⟨ (Δ r)^2 ⟩ ∼ v^2 t^2; hence, the increasing amplitude in Fig. <ref> is a direct measure of the increasing speed with Ω for Ω ≥ 512, i.e., in the regime where a broad scaling region can be identified in the energy spectrum (Fig. <ref>(d)). The Péclet number Pe merely affects the turbulent dynamics on large length scales, but the ratio of Ω and Pe determines the scaling exponent ν, as shown in Fig. S2. We also probe density effects by varying the number of squirmers. We first notice that the fluid energy spectra seem to converge at high densities, as shown in Fig. <ref>(e), in accordance with a previous report <cit.>. As the packing fraction ρ decreases from 0.186 to 0.012, corresponding to ρ_a = 11.9 and 0.768, downward shifts in the fluid energy spectra are observed, indicating that fluid stirring by the squirmers is not strong at low densities. Consequently, the scaling regime in the energy spectra of the squirmers shrinks as the density decreases, e.g., to a narrow range of 0.2 ≲ k/k_σ ≲ 0.4 at ρ = 0.012. For k/k_σ ≳ 0.5, or length scales smaller than about 2σ, differences between the fluid and squirmer energy spectra appear. In this regime, near-field hydrodynamic flows play a significant role. Moreover, the properties of the energy spectra depend on Pe, indicating that noise effects are significant at these small length scales; see Fig. S3 for fluid thermal energy spectra <cit.>. Therefore, in wet systems at small densities, strong self-steering of the active particles is crucial for the emergence of large-scale coherent collective motion. Otherwise, disorder and hydrodynamic instabilities on small length scales may prevail <cit.>. Such an observation should also apply to systems where the alignment is mediated via steric repulsion of elongated body shapes <cit.> or strong hydrodynamic force dipoles <cit.>. In any case, in active turbulence of wet systems, collective fluid flows induced by the microswimmers build up the collective behavior of the microswimmers on large length scales with fast dynamics. In sharp contrast, active turbulence in dry systems <cit.> requires densely packed active particles, because a speedup mechanism like that in wet systems is lacking; steric repulsions can only slow the particles down. Instead, in dry active turbulence, the scaling regime develops at large k or, equivalently, small |E|, due to chaotic interparticle collisions on small length scales. §.§ Pullers: Swarming Dynamics via Self-Steering In systems of self-steering pullers, a rich swarming dynamics develops, as shown in Fig. <ref>. The self-organization is characterized by the formation of morphologically complex clusters of microswimmers, which, on larger length scales, exhibit visually chaotic movements and exchange constituent squirmers with each other, see Fig. <ref>(a)-I (also Movie S2). Still, the puller system exhibits a velocity field with vortical structures (see Fig. <ref>(a)-II), surprisingly similar to pushers. Density Modulation. The local density distribution is calculated by employing a Voronoi tessellation <cit.>; a minimal sketch of such a local-density estimate is given below.
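The following sketch is a rough illustration of such a Voronoi-based local-density estimate. It uses scipy instead of the Voro++ library cited in the paper and, unlike the actual analysis, does not handle periodic boundary conditions (unbounded cells at the box boundary are simply skipped); function and variable names are our own.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def local_packing_fraction(pos, R_sq):
    """Per-squirmer local packing fraction from a Voronoi tessellation.

    pos  : (N, 3) squirmer centers,
    R_sq : squirmer radius.
    Returns an array with NaN for (unbounded) boundary cells.
    """
    vor = Voronoi(pos)
    v_squirmer = 4.0 / 3.0 * np.pi * R_sq**3
    rho_loc = np.full(len(pos), np.nan)
    for i, ireg in enumerate(vor.point_region):
        region = vor.regions[ireg]
        if len(region) == 0 or -1 in region:
            continue                         # open cell at the boundary
        cell_vol = ConvexHull(vor.vertices[region]).volume
        rho_loc[i] = v_squirmer / cell_vol   # squirmer volume / Voronoi cell volume
    return rho_loc
```

A histogram of rho_loc then gives the local density distribution discussed in the following.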
In sharp contrast to pusher systems (Fig. <ref>(c)), puller systems exhibit significant density modulations, see Fig. <ref>(b). For large Ω, Ω ≳ 4096, the density distribution is broad, while for Ω = 2048, a clear tendency toward segregation is observed (Movie S3), with a low-density peak at ρ_loc ≈ 0 and a high-density peak at ρ_loc ≈ 0.5. For small Ω in the range 128 ≤ Ω ≤ 512, non-mobile clusters appear (see Fig. S4). We focus here on the dynamic clusters, as the formation of a dense static cluster may be affected by a depletion of MPC fluid particles inside the cluster – which is related to the (weak) compressibility of the MPC fluid – and may artificially enhance cluster stability <cit.>. For Ω ≤ 32, unimodal distributions are recovered, but with a peak at a density smaller than the global density of ρ = 0.093 and with fatter tails than those in pusher systems, which reflects a clustering tendency reported for pullers <cit.>. Decoupling of Puller Orientation and Velocity. The emergence of a high-density regime for Ω ≥ 2048 needs to be distinguished from motility-induced phase separation (MIPS) in dry systems. First, squirmers in a (small) cluster exhibit local polar order, giving rise to a coherent directional motion of the cluster. Second, the pullers actually swim faster on average than their bare self-propulsion speed v_0, as shown in Fig. <ref>(c) – as in the system of pushers (see Fig. <ref>(e)). In particular, the decoupling of self-propulsion and velocity of the squirmers is far more significant than in MIPS. Even situations where pullers are driven backward occur frequently (Movie S4). Indeed, as shown in Fig. <ref>(d), the probability that the velocity of a squirmer is anti-parallel to its orientation, i.e., v/|v| = -e, is even higher than for the parallel case, i.e., v/|v| = e. Therefore, we conclude that hydrodynamic interactions between self-steering pullers dominate over the self-propulsion forces. The attractive hydrodynamic interactions between aligned pullers in a head-to-tail configuration promote the formation of dense clusters. Yet, as we demonstrate below, dense clusters are not static, but exhibit a highly dynamical morphology. Decoupling of Puller and Fluid Dynamics. The kinetic energy spectra E(k) of the squirmer motion and of the fluid in puller systems are presented in Fig. <ref>. While the fluid E(k) of swarming pullers exhibits a power-law decay in the intermediate wave-number regime (0.04 < k/k_σ < 0.2), as in pusher systems, several features deviate significantly from those of pushers, which again demonstrates the uniqueness of puller swarming. Most of all, a pronounced mismatch between squirmer and fluid energy spectra is observed in the intermediate regime of k/k_σ ≈ 0.1 (see Fig. S5(a) and Fig. S7(a) for the corresponding correlation functions), which indicates that pullers are not simply driven by the fluid flow. The exponents of the three scaling regimes in Fig. <ref>(e) show universal behavior for the investigated range of Ω. Notably, in the case of the squirmer energy spectra for 0.2 < k/k_σ < 0.6, which correspond to the length scales of the high-density intra-cluster regime where steric repulsion and near-field hydrodynamic interactions dominate, the obtained value of the exponent ν ≃ 5/3 is in accordance with the Kolmogorov exponent for classical hydrodynamic turbulence and with active turbulence of dense quasi-two-dimensional pusher systems <cit.>.
For larger length scales, an exponent ν≃ 11/3 is obtained, which persists under variation of global densities, see Fig. S6. A similar value has been found previously in Lattice Boltzmann simulations of extended force dipoles <cit.>, although at higher densities and for pushers. Formation of vortex ring. An even more striking feature is observed in configurations of the vorticity field. Visual observation of the time evolution of the system indicates a typical dynamical behavior in the morphology of clusters, which involves the pulsatile transformation of aggregates from a spherical shape into jellyfish-like arrangements. In terms of fluid mechanics, the jellyfish-like morphology suggests formation of a vortex ring, which is indeed confirmed by emergence of toroidal structures in the vorticity fields extracted from the simulations, see Fig. <ref>(a). As shown in Fig. <ref>(b), the vorticity field exhibits whirling patterns within regions where the magnitude of the vorticity field is large. For a detailed illustration, we consider the small system size L/a = 128, where only a single cluster emerges (Movie S5). As shown in Fig. <ref>(c)-I, the dynamics initiates with formation of a cluster. Then, due to alignment, squirmers rotate, and a polar order emerges within the cluster, see Fig. <ref>(c)-II, and also Fig. S5(b) for the corresponding order parameter. Such an ordered structure gives rise to a strong collective fluid flow, which generates a pronounced jet in front of the cluster, see the yellow surface in Fig. <ref>(e)-I. Notably, the jet flow is self-generated via active stirring of microswimmers in this case, instead of external perturbation as in passive hydrodynamic fluids. Subsequently, a spread-out motion of squirmers is initiated (Fig. <ref>(c)-III). Simultaneously, a vortex ring is formed around the cluster (blue ring in Fig. <ref>(e)-I), while squirmers are moving forward. As shown in Fig. <ref>, the velocity field is indeed wrapping around the vortex ring, in accordance with Fig. <ref>(b). Then, the pullers continue to spread out, rolling about the region where the vortex ring forms, see Fig. <ref>(c)-IV. While swimmers at the cluster center swim forward, they are dragged backward at the periphery, as shown in Fig. <ref>(e)-II, which contributes to the anomalous behavior in the distribution of v· e/| v| (see Fig. <ref>(d)). Eventually, the cluster dissolves, ending its life cycle. §.§ Velocity-Vorticity Coupling The sequential time evolution described so far indicates a strong coupling between the fluid velocity and the vorticity field in a puller cluster. As shown in Fig. <ref>(a), the rotation of the vorticity field for pullers is indeed centered at regions with a strong velocity field. For a more quantitative characterization, we examine the distribution of the Cartesian components of the velocity and vorticity fields. In pusher systems, both the velocity and vorticity fields (Fig. S7(b) and Fig. <ref>(b), respectively) exhibit a Gaussian distribution, which is an indicator of active turbulence <cit.>. For pullers, both deviate from a Gaussian, demonstrating that the swarming dynamics of pullers is not active turbulence. Specifically, the velocity distribution of pullers exhibits “fat" exponential tails, as shown in Fig. S7(b), in line with the emergence of stronger fluid flows than “expected", i.e., the occurrence of jet plumes induced by aligned pullers. Also the vorticity field for pullers shows a broader distribution than that of pushers as displayed in Fig. <ref>(b). 
Furthermore, the vorticity distribution for pullers in Fig. <ref>(b) shows a rather sharp peak at ω̅_α, the average of the three Cartesian components ω_α (α=x,y,z) of ω, indicating a weak separation between regions of strong and weak vorticity. A more fundamental difference in the velocity-vorticity coupling is revealed by the cross correlation between the magnitude of the vorticity and that of the velocity field (|ω(r)| and |v(r)|), as defined in Eq. (<ref>). In Fig. <ref>(c), the cross correlations for pushers exhibit a pronounced peak at r/a=20, 60, and 120 for Ω=512, 2048, and 8192, respectively, while the cross correlation at r=0 is not strong. Hence, for pushers, the velocity field of a vortex is weak at the center but strong in the intermediate region between center and periphery. In sharp contrast, for pullers, the cross correlation between vorticity and velocity fields is already strong at small distances, which demonstrates that a strong velocity field generates a strong vorticity in the immediate vicinity of the jet flow. Moreover, the cross correlation decays faster than for pushers, assuming negative values before approaching zero. § DISCUSSION AND CONCLUSIONS We have studied the self-organization and dynamics of three-dimensional wet systems of self-steering squirmers, which aim for alignment of their orientation with that of their neighbors. We demonstrate that alignment via hydrodynamic self-steering gives rise to a rich collective behavior in such polar active fluids, depending on the type of active stress, i.e., whether the microswimmers are pushers or pullers. In both cases, an essential role of hydrodynamics is the breaking of long-range polar order, which causes the emergence of chaotic behavior. For pushers, the particle distribution is quite homogeneous, the distribution of the Cartesian velocity components is Gaussian, and the kinetic energy spectrum displays a peak and a subsequent power-law decay with increasing wave vector, which indicates active-turbulent behavior. An intriguing feature is that strong self-steering enhances the coherent movement of the microswimmers, which leads to collective particle motion with speeds much faster than the individual swim speed – as can be seen in the increasing magnitude of the peak in the energy spectrum, combined with an extension of the scaling regime toward large length scales. This implies that large-scale flows are induced by the collective motion, which drag the microswimmers along and supersede their individual motion. Thus, the polarity field and the fluid flow field are strongly coupled. For pullers, another type of self-organization emerges, which is strictly distinguished from the motility-induced phase separation (MIPS) of dry ABP systems as well as from the active turbulence of pushers. The particle density is now found to be very inhomogeneous, as the pullers tend to form clusters. However, these clusters are not static, but turn out to be quite unstable. The particle alignment inside a cluster generates a strong fluid jet and a vortex ring, which pulls the cluster apart and leads to its disintegration. These strong jets enhance the probability of fast fluid flows, which is reflected in the emergence of fat tails in the velocity distribution. In contrast, wet systems of self-propelled squirmers without self-steering display no interesting collective behavior in three dimensions, not even at high squirmer volume fractions.
Our numerical observations for ensembles of self-steering pullers challenge current theoretical views on collective behavior in wet active systems. So far, it has typically been assumed that the polarity and velocity fields of active fluids are essentially identical, based on the assumption of a nearly homogeneous distribution of active particles. Heterogeneous densities have been observed recently in models of compressible polar active fluids for bacterial suspensions <cit.>; however, the mechanism is entirely different in that case, as hydrodynamic interactions are not considered, and clustering is driven by a strong dependence of the self-propulsion speed on the local density. Our results demonstrate that in wet systems of self-steering microswimmers in three dimensions, the interplay of the particle density, the polarity, and the fluid velocity field can give rise to a surprisingly rich variety of emergent behaviors – already for a highly simplified model system with only a single particle type. § MATERIALS AND METHODS §.§ Mesoscale Fluid Model: Multi-Particle Collision Dynamics We adopt the multiparticle collision dynamics (MPC) method <cit.>, a particle-based mesoscale simulation approach, as a model for the fluid. Specifically, we employ the stochastic rotation variant of MPC with angular momentum conservation (MPC-SRD+a) <cit.> and the cell-level Maxwell-Boltzmann scaling thermostat <cit.>. The algorithm consists of alternating streaming and collision steps. In the streaming step, the MPC point particles of mass m propagate ballistically over the time interval h, denoted as the collision time. In the collision step, fluid particles are sorted into the cells of a cubic lattice with lattice constant a, which define the collision environment. Then, their relative velocities, with respect to the center-of-mass velocity of the collision cell, are rotated around a randomly oriented axis by a fixed angle α. The algorithm conserves mass, linear momentum, and angular momentum on the collision-cell level, while thermal fluctuations are automatically incorporated. More details are described in Refs. <cit.> and the SM, which additionally refers to Refs. <cit.>. Our GPU-based, highly parallelized implementation employed for the simulations is described in Ref. <cit.>. We use the average MPC fluid density (particles per collision cell) ⟨ N_c ⟩ = 20, the collision time h=0.02 a√(m/(k_BT)), and the rotation angle α = 130^∘, which yield a fluid viscosity of η = 42.6√(mk_BT)/a^2 <cit.>. With the squirmer radius R_sq = 3 a, these MPC parameters yield the rotational diffusion coefficient D_R = 4.1 × 10^-5√(k_BT/m)/a. §.§ Hydrodynamic Self-Steering Hydrodynamic self-steering of squirmers is achieved via adaptive surface flow fields, given as u_ϕ = (3/2) v_0 sinθ (1+β cosθ) - (1/R_sq^2)(C̃_11 cosϕ - C_11 sinϕ), u_θ = (cosθ/R_sq^2)(C_11 cosϕ + C̃_11 sinϕ), where θ and ϕ are the polar and azimuthal angles in a body-fixed reference frame. The parameter v_0 characterizes the swim speed and β the strength of the force dipole, with β > 0 for pullers and β < 0 for pushers <cit.>. C_11 and C̃_11 control the magnitudes of the non-axisymmetric surface-flow components, leading to rotational motion of the body <cit.>. Specifically, the non-axisymmetric flow fields enable the adaptive motion of Eq. (<ref>) via C_11 = C_0 R_sq^3 (e × e_aim) · e_x, C̃_11 = C_0 R_sq^3 (e × e_aim) · e_y, where e_x and e_y are unit vectors along the axes of the body-fixed reference frame.
For the self-propulsion, we use v_0/√(k_BT/m) = 0.007872 and 0.031488, which correspond to Pe = v_0/(σ D_R) = 32 and 128, and Re = 0.022 and 0.089, respectively. The values of the self-steering strength C_0 are varied from 0 to 0.335872 √(k_BT/m)/a, yielding 0 ≤ Ω ≤ 8192. §.§ Steric squirmer interaction Steric repulsion between two squirmers is described by the separation-shifted Lennard-Jones potential U_LJ(d_s) = 4ϵ_0 [ (σ_0/(d_s + σ_0))^12 - (σ_0/(d_s + σ_0))^6 + 1/4 ] for d_s < (2^1/6-1)σ_0, and zero otherwise, where d_s indicates the surface-to-surface distance between the two squirmers. To avoid a loss of hydrodynamic interactions when two squirmers come into contact, we also include a virtual safety distance d_v <cit.>, which leads to the effective distance d_s = r_c - σ - 2d_v, where r_c denotes the center-to-center distance and σ is the squirmer diameter. We choose σ_0 = 2d_v. Numerically, the equations of motion for the rigid-body dynamics of the squirmers are solved by the velocity-Verlet algorithm, see SM for more details. §.§ Spatial correlation and Energy Spectrum Squirmers: Particle-based approach. The spatial velocity correlation function of the squirmers is defined as C_sq(r) = (1/v̅^2) ⟨∑_i≠ j v_i · v_j δ(r - |r_i - r_j|) ⟩ / ⟨∑_i≠ j δ(r - |r_i - r_j|) ⟩, where v̅^2 ≡ ∑_i |v_i|^2/N. Here, we use the velocity averaged over a short time interval Δt, for which we consider Δt = 0.26 σ/v_0. The energy spectrum can be calculated via Fourier transformation. Here we consider the Fourier sine transform <cit.> E_sq(k) = (k/π) ∫ dr r sin(kr) v̅^2 C_sq(r). Fluid: Field-based approach. We first extract the fluid velocity field v_fl from the simulation data by introducing a grid dividing the whole system into N_g^3 cells. The velocities of all MPC particles, averaged over a short time interval Δt as for the squirmers, are additionally averaged over each cell to obtain v_n with n = (n_x, n_y, n_z)^T for n_i = 0, …, N_g-1 and i ∈{x,y,z}. Then the discrete Fourier transform v_fl(k) = (1/N_g^3) ∑_n v_n e^i2π a k·n/N_g is performed. The energy spectrum is calculated straightforwardly via <cit.> E_fl(k) = (1/2) |v_fl(k)|^2, which is then averaged over all directions of k to obtain E_fl(k). Then, the spatial velocity correlation function is obtained via the Fourier transformation C_v(n) = ∑_k ⟨ E_fl(k) ⟩ e^-i2π a k·n/N_g, from which we calculate C_fl(r) by averaging over all directions of n. To reduce noise effects at small length scales, velocity fields are averaged over boxes with side lengths σ, 2σ, or 3σ. The vorticity field is then defined as ω(n) ≡ ∇_n × v_fl(n). Numerically, the vorticity field is calculated from the velocity field by the five-point stencil method. The vorticity spatial correlation C_ω is also obtained from Eqs. (<ref>)-(<ref>). Moreover, a gliding time average is performed for the velocity with a time window Δt, corresponding to v_0 Δt ≈ 0.26σ. Cross correlation. We again utilize Fourier transformation to calculate cross correlations between the velocity and vorticity fields. Specifically, we first calculate the magnitudes of the velocity and vorticity fields, which are then shifted by their average values, i.e., ṽ(n) ≡ v(n) - ∑_n v(n)/N_g^3 and ω̃(n) ≡ ω(n) - ∑_n ω(n)/N_g^3. Then, from the Fourier transforms of the fields ṽ(k) and ω̃(k), the cross correlation is obtained via C_vω(n) = ∑_k ⟨ṽ(k) ω̃^*(k) ⟩ e^-i2π a k·n/N_g, where the superscript * indicates the complex conjugate. C_vω(r) is obtained by averaging over all directions of n. A simplified illustration of the field-based spectrum calculation is given below.
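As an illustration of the field-based analysis, the following sketch computes a shell-averaged fluid kinetic energy spectrum from a gridded velocity field with a fast Fourier transform. The binning and normalization conventions are simplified relative to the procedure described above, and all names are our own assumptions.

```python
import numpy as np

def fluid_energy_spectrum(v_grid, box_L):
    """Shell-averaged kinetic energy spectrum E(k) of a velocity field.

    v_grid : (Ng, Ng, Ng, 3) cell-averaged fluid velocities,
    box_L  : edge length of the cubic simulation box.
    """
    Ng = v_grid.shape[0]
    # discrete Fourier transform of each velocity component, normalized by Ng^3
    v_k = np.fft.fftn(v_grid, axes=(0, 1, 2)) / Ng**3
    e_k = 0.5 * np.sum(np.abs(v_k)**2, axis=-1)     # E(k) = |v(k)|^2 / 2

    # wave-vector magnitudes on the FFT grid
    k1d = 2.0 * np.pi * np.fft.fftfreq(Ng, d=box_L / Ng)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()

    # average over spherical shells of width dk around multiples of dk
    dk = 2.0 * np.pi / box_L
    bins = (np.arange(Ng // 2) + 0.5) * dk
    shell = np.digitize(k_mag, bins)
    E = np.array([e_k.ravel()[shell == b].mean() for b in range(1, len(bins))])
    k_centers = np.arange(1, len(bins)) * dk
    return k_centers, E
```

The squirmer spectrum and the velocity-vorticity cross correlation can be evaluated along the same lines, with the velocity field replaced by the corresponding quantities defined above.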
69 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Pedley and Kessler(1992)]pedl:92 author author T. J. Pedley and author J. O. Kessler, title title Hydrodynamic phenomena in suspensions of swimming microorganisms, https://doi.org/10.1146/annurev.fl.24.010192.001525 journal journal Annu. Rev. Fluid Mech. volume 24, pages 313 (year 1992)NoStop [Be'er and Ariel(2019)]beer:19 author author A. Be'er and author G. Ariel, title title A statistical physics view of swarming bacteria, https://doi.org/10.1186/s40462-019-0153-9 journal journal Mov. Ecol. volume 7, pages 9 (year 2019)NoStop [Katz et al.(2011)Katz, Tunstrøm, Ioannou, Huepe, and Couzin]katz:11 author author Y. Katz, author K. Tunstrøm, author C. C. Ioannou, author C. Huepe, and author I. D. Couzin, title title Inferring the structure and dynamics of interactions in schooling fish, https://doi.org/10.1073/pnas.1107583108 journal journal Proc. Natl. Acad. Sci. USA volume 108, pages 18720 (year 2011)NoStop [Gautrais et al.(2012)Gautrais, Ginelli, Fournier, Blanco, Soria, Chaté, and Theraulaz]guat:12 author author J. Gautrais, author F. Ginelli, author R. Fournier, author S. Blanco, author M. Soria, author H. Chaté, and author G. Theraulaz, title title Deciphering interactions in moving animal groups, https://doi.org/10.1371/journal.pcbi.1002678 journal journal PLOS Comput. Biol. volume 8, pages 1 (year 2012)NoStop [Ballerini et al.(2008)Ballerini, Cabibbo, Candelier, Cavagna, Cisbani, Giardina, Orlandi, Parisi, Procaccini, Viale, and Zdravkovic]ball:08.1 author author M. Ballerini, author N. Cabibbo, author R. Candelier, author A. Cavagna, author E. Cisbani, author I. Giardina, author A. Orlandi, author G. Parisi, author A. Procaccini, author M. Viale, and author V. Zdravkovic, title title Empirical investigation of starling flocks: a benchmark study in collective animal behaviour, https://doi.org/10.1016/j.anbehav.2008.02.004 journal journal Anim. Behav. volume 76, pages 201 (year 2008)NoStop [Bialek et al.(2012)Bialek, Cavagna, Giardina, Mora, Silvestri, Viale, and Walczak]bial:12.1 author author W. Bialek, author A. Cavagna, author I. Giardina, author T. Mora, author E. Silvestri, author M. Viale, and author A. M. Walczak, title title Statistical mechanics for natural flocks of birds, https://doi.org/10.1073/pnas.1118633109 journal journal Proc. Natl. Acad. Sci. USA volume 109, pages 4786 (year 2012)NoStop [Helbing et al.(2000)Helbing, Farkas, and Vicsek]helb:00 author author D. Helbing, author I. Farkas, and author T. Vicsek, title title Simulating dynamical features of escape panic, https://doi.org/10.1038/35035023 journal journal Nature , pages 487 (year 2000)NoStop [Xie et al.(2019)Xie, Sun, Fan, Lin, Chen, Wang, Dong, and He]xie:19 author author H. Xie, author M. Sun, author X. Fan, author Z. Lin, author W. Chen, author L. Wang, author L. Dong, and author Q. He, title title Reconfigurable magnetic microrobot swarm: Multimode transformation, locomotion, and manipulation, https://doi.org/10.1126/scirobotics.aav8006 journal journal Sci. Robot. volume 4, pages eaav8006 (year 2019)NoStop [Wang et al.(2021)Wang, Chan, Schweizer, Du, Jin, Yu, Nelson, and Zhang]qian:21 author author Q. Wang, author K. F. Chan, author K. Schweizer, author X. Du, author D. Jin, author S. C. H. Yu, author B. J. Nelson, and author L. 
Zhang, title title Ultrasound doppler-guided real-time navigation of a magnetic microswarm for active endovascular delivery, https://doi.org/10.1126/sciadv.abe5914 journal journal Sci. Adv. volume 7, pages eabe5914 (year 2021)NoStop [Chen and Bechinger(2022)]chen:22 author author C.-J. Chen and author C. Bechinger, title title Collective response of microrobotic swarms to external threats, https://doi.org/10.1088/1367-2630/ac5374 journal journal N. J. Phys. volume 24, pages 033001 (year 2022)NoStop [Wang et al.(2023)Wang, Chen, Kroy, Holubec, and Cichos]xian:23 author author X. Wang, author P.-C. Chen, author K. Kroy, author V. Holubec, and author F. Cichos, title title Spontaneous vortex formation by microswimmers with retarded attractions, https://doi.org/10.1038/s41467-022-35427-7 journal journal Nat. Commun. volume 14, pages 56 (year 2023)NoStop [Fily and Marchetti(2012)]fily:12 author author Y. Fily and author M. C. Marchetti, title title Athermal phase separation of self-propelled particles with no alignment, https://doi.org/10.1103/PhysRevLett.108.235702 journal journal Phys. Rev. Lett. volume 108, pages 235702 (year 2012)NoStop [Buttinoni et al.(2013)Buttinoni, Bialké, Kümmel, Löwen, Bechinger, and Speck]butt:13 author author I. Buttinoni, author J. Bialké, author F. Kümmel, author H. Löwen, author C. Bechinger, and author T. Speck, title title Dynamical clustering and phase separation in suspensions of self-propelled colloidal particles, https://doi.org/10.1103/PhysRevLett.110.238301 journal journal Phys. Rev. Lett. volume 110, pages 238301 (year 2013)NoStop [Wensink et al.(2012)Wensink, Dunkel, Heidenreich, Drescher, Goldstein, Löwen, and Yeomans]wens:12 author author H. H. Wensink, author J. Dunkel, author S. Heidenreich, author K. Drescher, author R. E. Goldstein, author H. Löwen, and author J. M. Yeomans, title title Meso-scale turbulence in living fluids, https://doi.org/10.1073/pnas.1202032109 journal journal Proc. Natl. Acad. Sci. USA volume 109, pages 14308 (year 2012)NoStop [Qi et al.(2022)Qi, Westphal, Gompper, and Winkler]qi:22 author author K. Qi, author E. Westphal, author G. Gompper, and author R. G. Winkler, title title Emergence of active turbulence in microswimmer suspensions due to active hydrodynamic stress and volume exclusion, https://doi.org/10.1038/s42005-022-00820-7 journal journal Commun. Phys. volume 5, pages 49 (year 2022)NoStop [Aranson(2022)]aran:22 author author I. S. Aranson, title title Bacterial active matter, https://doi.org/10.1088/1361-6633/ac723d journal journal Rep. Prog. Phys. volume 85, pages 076601 (year 2022)NoStop [Ziepke et al.(2022)Ziepke, Maryshev, Aranson, and Frey]ziep:22 author author A. Ziepke, author I. Maryshev, author I. S. Aranson, and author E. Frey, title title Multi-scale organization in communicating active matter, https://doi.org/10.1038/s41467-022-34484-2 journal journal Nat. Commun. volume 13, pages 6727 (year 2022)NoStop [Sawicki et al.(2023)Sawicki, Berner, Loos, Anvari, Bader, Barfuss, Botta, Brede, Franović, Gauthier, Goldt, Hajizadeh, Hövel, Karin, Lorenz-Spreen, Miehl, Mölter, Olmi, Schöll, Seif, Tass, Volpe, Yanchuk, and Kurths]sawi:23 author author J. Sawicki, author R. Berner, author S. A. M. Loos, author M. Anvari, author R. Bader, author W. Barfuss, author N. Botta, author N. Brede, author I. Franović, author D. J. Gauthier, author S. Goldt, author A. Hajizadeh, author P. Hövel, author O. Karin, author P. Lorenz-Spreen, author C. Miehl, author J. Mölter, author S. Olmi, author E. Schöll, author A. 
Seif, author P. A. Tass, author G. Volpe, author S. Yanchuk, and author J. Kurths, title title Perspectives on adaptive dynamical systems, https://doi.org/10.1063/5.0147231 journal journal Chaos volume 33, pages 071501 (year 2023)NoStop [Negi et al.(2024)Negi, Winkler, and Gompper]negi:24 author author R. S. Negi, author R. G. Winkler, and author G. Gompper, title title Collective behavior of self-steering active particles with velocity alignment and visual perception, https://doi.org/10.1103/PhysRevResearch.6.013118 journal journal Phys. Rev. Res. volume 6, pages 013118 (year 2024)NoStop [Koch and Subramanian(2011)]koch:11 author author D. L. Koch and author G. Subramanian, title title Collective hydrodynamics of swimming microorganisms: Living fluids, https://doi.org/https://doi.org/10.1146/annurev-fluid-121108-145434 journal journal Annu. Rev. Fluid Mech. volume 43, pages 637 (year 2011)NoStop [Elgeti et al.(2015)Elgeti, Winkler, and Gompper]elge:15 author author J. Elgeti, author R. G. Winkler, and author G. Gompper, title title Physics of microswimmers—single particle motion and collective behavior: a review, https://doi.org/10.1088/0034-4885/78/5/056601 journal journal Rep. Prog. Phys. volume 78, pages 056601 (year 2015)NoStop [Dombrowski et al.(2004)Dombrowski, Cisneros, Chatkaew, Goldstein, and Kessler]domb:04 author author C. Dombrowski, author L. Cisneros, author S. Chatkaew, author R. E. Goldstein, and author J. O. Kessler, title title Self-concentration and large-scale coherence in bacterial dynamics, https://doi.org/10.1103/PhysRevLett.93.098103 journal journal Phys. Rev. Lett. volume 93, pages 098103 (year 2004)NoStop [Drescher et al.(2011)Drescher, Dunkel, Cisneros, Ganguly, and Goldstein]dres:11 author author K. Drescher, author J. Dunkel, author L. H. Cisneros, author S. Ganguly, and author R. E. Goldstein, title title Fluid dynamics and noise in bacterial cell-cell and cell-surface scattering, https://doi.org/10.1073/pnas.1019079108 journal journal Proc. Natl. Acad. Sci. USA volume 108, pages 10940 (year 2011)NoStop [Deneke et al.(2019)Deneke, Puliafito, Krueger, Narla, De Simone, Primo, Vergassola, De Renzis, and Di Talia]dene:19 author author V. E. Deneke, author A. Puliafito, author D. Krueger, author A. V. Narla, author A. De Simone, author L. Primo, author M. Vergassola, author S. De Renzis, and author S. Di Talia, title title Self-organized nuclear positioning synchronizes the cell cycle in drosophila embryos, https://doi.org/https://doi.org/10.1016/j.cell.2019.03.007 journal journal Cell volume 177, pages 925 (year 2019)NoStop [Hernández-López et al.(2023)Hernández-López, Puliafito, Xu, Lu, Talia, and Vergassola]hern:23 author author C. Hernández-López, author A. Puliafito, author Y. Xu, author Z. Lu, author S. D. Talia, and author M. Vergassola, title title Two-fluid dynamics and micron-thin boundary layers shape cytoplasmic flows in early Drosophila embryos, https://doi.org/10.1073/pnas.2302879120 journal journal Proc. Natl. Acad. Sci. USA volume 120, pages e2302879120 (year 2023)NoStop [Ceron et al.(2023)Ceron, Gardi, Petersen, and Sitti]cero:23 author author S. Ceron, author G. Gardi, author K. Petersen, and author M. Sitti, title title Programmable self-organization of heterogeneous microrobot collectives, https://doi.org/10.1073/pnas.2221913120 journal journal Proc. Natl. Acad. Sci. USA volume 120, pages e2221913120 (year 2023)NoStop [Vicsek et al.(1995)Vicsek, Czirók, Ben-Jacob, Cohen, and Shochet]vics:95 author author T. Vicsek, author A. Czirók, author E. 
Ben-Jacob, author I. Cohen, and author O. Shochet, title title Novel type of phase transition in a system of self-driven particles, https://doi.org/10.1103/PhysRevLett.75.1226 journal journal Phys. Rev. Lett. volume 75, pages 1226 (year 1995)NoStop [Chaté(2020)]chat:20 author author H. Chaté, title title Dry aligning dilute active matter, https://doi.org/10.1146/annurev-conmatphys-031119-050752 journal journal Annu. Rev. Condens. Matter Phys. volume 11, pages 189 (year 2020)NoStop [Reinken et al.(2018)Reinken, Klapp, Bär, and Heidenreich]rein:18 author author H. Reinken, author S. H. L. Klapp, author M. Bär, and author S. Heidenreich, title title Derivation of a hydrodynamic theory for mesoscale dynamics in microswimmer suspensions, https://doi.org/10.1103/PhysRevE.97.022613 journal journal Phys. Rev. E volume 97, pages 022613 (year 2018)NoStop [Liu et al.(2021a)Liu, Shi, Zhao, Chaté, qing Shi, and Zhang]liu:21a author author Z. T. Liu, author Y. Shi, author Y. Zhao, author H. Chaté, author X. qing Shi, and author T. H. Zhang, title title Activity waves and freestanding vortices in populations of subcritical quincke rollers, https://doi.org/10.1073/pnas.2104724118 journal journal Proc. Natl. Acad. Sci. USA volume 118, pages e2104724118 (year 2021a)NoStop [Aditi Simha and Ramaswamy(2002)]simh:02 author author R. Aditi Simha and author S. Ramaswamy, title title Hydrodynamic fluctuations and instabilities in ordered suspensions of self-propelled particles, https://doi.org/10.1103/PhysRevLett.89.058101 journal journal Phys. Rev. Lett. volume 89, pages 058101 (year 2002)NoStop [Llopis and Pagonabarraga(2010)]llop:10 author author I. Llopis and author I. Pagonabarraga, title title Hydrodynamic interactions in squirmer motion: Swimming with a neighbour and close to a wall, https://doi.org/10.1016/j.jnnfm.2010.01.023 journal journal J. Non-Newtonian Fluid Mech. volume 165, pages 946 (year 2010)NoStop [Clopés et al.(2020)Clopés, Gompper, and Winkler]clop:20 author author J. Clopés, author G. Gompper, and author R. G. Winkler, title title Hydrodynamic interactions in squirmer dumbbells: active stress-induced alignment and locomotion, https://doi.org/10.1039/D0SM01569E journal journal Soft Matter volume 16, pages 10676 (year 2020)NoStop [Lighthill(1952)]ligh:52 author author M. J. Lighthill, title title On the squirming motion of nearly spherical deformable bodies through liquids at very small Reynolds numbers, https://doi.org/10.1002/cpa.3160050201 journal journal Comm. Pure Appl. Math. volume 5, pages 109 (year 1952)NoStop [Blake(1971)]blak:71 author author J. R. Blake, title title A spherical envelope approach to ciliary propulsion, https://doi.org/10.1017/S002211207100048X journal journal J. Fluid Mech. volume 46, pages 199 (year 1971)NoStop [Ishikawa et al.(2006)Ishikawa, Simmonds, and Pedley]ishi:06 author author T. Ishikawa, author M. P. Simmonds, and author T. J. Pedley, title title Hydrodynamic interaction of two swimming model micro-organisms, https://doi.org/10.1017/S0022112006002631 journal journal J. Fluid Mech. volume 568, pages 119 (year 2006)NoStop [Götze and Gompper(2010)]goet:10 author author I. O. Götze and author G. Gompper, title title Mesoscale simulations of hydrodynamic squirmer interactions, https://doi.org/10.1103/PhysRevE.82.041921 journal journal Phys. Rev. E volume 82, pages 041921 (year 2010)NoStop [Theers et al.(2016a)Theers, Westphal, Gompper, and Winkler]thee:16.1 author author M. Theers, author E. Westphal, author G. Gompper, and author R. G. 
§ ACKNOWLEDGEMENTS Funding: The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS at Jülich Supercomputing Centre (JSC). Author contributions: G.G. and R.G.W. designed the research. E.W. wrote the simulation code. S.G. performed the simulations and analyzed the data. S.G., R.G.W., and G.G. discussed the results and wrote the manuscript together. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are available from the corresponding author upon reasonable request.
http://arxiv.org/abs/2406.19112v1
20240627114825
A Teacher Is Worth A Million Instructions
[ "Nikhil Kothari", "Ravindra Nayak", "Shreyas Shetty", "Amey Patil", "Nikesh Garera" ]
cs.LG
[ "cs.LG" ]
§ ABSTRACT Large Language Models (LLMs) have shown exceptional abilities, yet training these models can be quite challenging. There is a strong dependence on the quality of data and on finding the best instruction tuning set. Further, the inherent limitations in training methods make it substantially more difficult to train relatively smaller models with 7B and 13B parameters. In our research, we suggest an improved training method for these models by utilising knowledge from larger models, such as mixture-of-experts (8x7B) architectures. The scale of these larger models allows them to capture a wide range of variations from data alone, making them effective teachers for smaller models. Moreover, we implement a novel post-training domain alignment phase that employs domain-specific expert models to boost domain-specific knowledge during training while preserving the model's ability to generalise. Fine-tuning Mistral 7B and 2x7B with our method[Code can be found at <https://github.com/flipkart-incubator/kd-dae>] surpasses the performance of state-of-the-art language models with more than 7B and 13B parameters: achieving up to 7.9 on MT-Bench <cit.> and 93.04% on AlpacaEval <cit.>. § INTRODUCTION The recent advancements in Large Language Models (LLMs) have significantly propelled the field of natural language understanding and generation. Pre-trained language models (PLMs) leveraging extensive training corpora sourced from the web <cit.> have demonstrated impressive capabilities across various natural language processing (NLP) tasks. However, additional training steps are required for PLMs to follow instructions and keep their responses aligned with human preferences. Instruction tuning (IT) <cit.> trains a PLM further for instruction following; by combining the general knowledge imparted in the pre-training phase with the newly acquired instruction-following capability, it enables the model to generalise well to unseen tasks. However, while proficient at following instructions, these models may produce outputs that are potentially toxic or ethically questionable. To enhance alignment with human values, further training is necessary, utilising techniques such as reinforcement learning with human feedback <cit.>, direct preference optimization (DPO) <cit.>, and monolithic preference optimization without reference model (ORPO) <cit.> based on pairwise preference data. Instruction tuning requires meticulous attention to data quality, optimization of instruction sets, and the implementation of effective training methodologies to ensure peak performance. A primary challenge in training these instruction-tuned models in specific domains is the potential reduction in the model's generalization ability, a factor we monitor using public evaluation benchmarks in our research. In this study, we present a method that not only addresses these concerns but also improves public benchmarks while aligning the model within a specific domain, in this instance e-commerce. Drawing from the successful implementation of Knowledge Distillation (KD) <cit.> in miniLLMs <cit.> and in tasks such as classification, we propose it as an alternative to the commonly used supervised fine-tuning (SFT) and alignment process in language model training.
We propose Domain Alignment from Expert (DAE), a unique post-training domain alignment algorithm designed to strengthen domain-specific knowledge within the LLMs. DAE integrates domain-specific expert models into the training process, enhancing the model's understanding of specialized domains while preserving its ability to generalize across broader contexts. This approach surpasses state-of-the-art language models with over 7B and 13B parameters, as evidenced by significant improvements in MT-Bench and AlpacaEval benchmarks. Our main contribution is the successful imparting of knowledge from domain experts while retaining the generalization capabilities of the student model. This study challenges two commonly accepted beliefs: (i) KD with a teacher model smaller than the student model doesn't work, and (ii) KD from larger models cannot yield performance that is equal to or better than SFT and alignment combined. Our work aligns the model within the e-commerce domain, while demonstrating its generalization ability across tasks such as entity extraction, summarization, roleplay, and reasoning using language models with up to 13B parameters. § METHOD In a decoder-only model employing self-attention <cit.>, the task involves generating an output sequence, where variables x_1, ..., x_n represent the input symbols, and z_1, ..., z_m denote the output symbols. The model learns to produce the symbol z_t+1 at time-step t, while only considering preceding symbols ( x_1, ..., x_n, z_1,..., z_t ). During training, Cross-Entropy Loss (CE) is employed as the objective function at every step. CE for a given sequence quantifies the dissimilarity between the predicted probability distribution q_θ(y|x), parameterized by θ, and the true probability distribution p(y|x) at each step in the sequence. CE(p,q_θ) = -1/T-1∑_t=1^T-1 p(y_t+1|x_t' ≤ t) log( q_θ(y_t+1|x_t' ≤ t) ) Here, T denotes the length of the sequence, y_t+1 is the label at t+1 step, x_t' ≤ t is the input sequence till step t, p(y_t+1|x_t' ≤ t) and q_θ(y_i|x_t' ≤ t) represent the respective true and predicted probability of y_t+1 given x_t' ≤ t. In practice since the true probability of y_t+1 is unknown, p(y_t+1|x_t' ≤ t) is set to 1 for the y_t+1 token and 0 for the rest of the tokens in vocabulary. This complete dependence on y_t+1 makes the training data quality and its diversity paramount, and thus also presents a need for something more nuanced. Instead of considering the true probability as a one-hot vector, we could use a teacher for a better estimate of the distribution. §.§ Knowledge Distillation in Transformer The suggested method focuses on the transformers architecture. Our experiments are conducted on Mistral models, namely Mistral 7B v0.1 Base, Mistral Instruct v2, and Mixtral 8x7B Instruct. These models share the same tokenizer, number of decoder layers, attention heads, and hidden dimension, facilitating straightforward distillation with a one-to-one mapping. However, as long as the tokenizer of the teacher and student models is the same, we can still distill knowledge from the teacher with some adjustments. We only focus on the prediction layer and attention based distillation as a part of our study. Even though adding hidden states based distillation also could yield good results, we couldn't extensively try it out due to its high memory demands. Prediction Layer Distillation. To fit the prediction of the teacher model, we use Kullback-Leibler Divergence (KLD) between the student model logits and the teacher model logits. 
KLD loss for a sequence measures the difference between the student probability distribution q_θ(y|x) parameterized by θ and the teacher probability distribution p(y|x) <cit.> at every step. ℒ_pred = 1/T∑_t=1^T∑_i=1^V p(y_i|x_t' ≤ t) log( p(y_i|x_t' ≤ t)/q_θ(y_i|x_t' ≤ t)) where T is the sequence length, V is the vocabulary size, y_i is the i^th token in the vocabulary, x_t' ≤ t is the input sequence till step t, p(y_i|x_t' ≤ t) and q(y_i|x_t' ≤ t) are the respective true (teacher) and the predicted (student) probabilities of y_i given x_t' ≤ t. Here, p(y|x) is a distribution over the entire vocabulary and hence, gives us a smoother "true" distribution at every step while reducing the overt dependence on the training data quality. Attention Based Distillation. In the transformer architecture, attention weights determine the importance of each token in relation to others at each time-step in the sequence. This attention mechanism enables the model to focus on relevant information, allowing for better capture of long-range dependencies. The Transformer model typically comprises multiple layers, each consisting of parallel self-attention heads. These heads allow the model to attend to different parts of the input simultaneously, facilitating efficient learning of contextual relationships. Mistral 7B v0.1 Base, Mistral 7B Instruct v2 and Mixtral 8x7B Instruct, all 3 have 32 decoder layers and 32 attention heads. This architectural similarity simplifies attention-based calculations. For any time-step t, the attention weights satisfy the constraint: ∑_i=1^t a_it = 1 In the context of this equation and referring to Figure <ref>, where a_ij denotes the attention given to token i when referencing token j, each row in the figure sums to 1. This summation indicates that the attention weights assigned to all tokens preceding (and including) token j when referencing j imply a probability distribution over the tokens. We use KLD loss over this probability distribution as depicted in the loss term ℒ_attn, t (<ref>), representing the loss from a single row in Figure <ref>. The causal mask in Figure <ref> zeroes out future positions in the input sequence. This allows each token to attend only to previous tokens, ensuring that the model generates output based only on current information during generation or decoding. This maintains causality and prevents leakage from future tokens. ℒ_attn, t = ∑_i=1^t a_it, teacherlog( a_it, teacher/a_it, student) The loss term ℒ_attn in equation  (<ref>) is obtained by normalizing the sum of ℒ_attn, t across the entire sequence length (representing all rows in Figure <ref>), batch size, number of attention heads, and layers in the architecture. ℒ_KD = ℒ_pred + ℒ_attn In our ablation study, we separately explore prediction layer distillation (<ref>), attention based distillation (<ref>), and combined distillation techniques (<ref>). Our findings indicate that combined distillation emerges as the most effective method for enhancing the instruction-following capability of the student model. Prediction layer distillation plays a pivotal role in achieving this outcome. Additionally, attention based distillation offers guidance in the distillation process, yielding better results than either standalone distillation method. Although we do not delve into distillation for pre-training in this study, our observations suggest that attention based distillation could potentially prove crucial during that phase of training. 
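The two distillation objectives above can be written compactly in code. The following is a minimal PyTorch sketch rather than the authors' implementation: the function names, the masking and epsilon details, and the assumption that attention maps come from Hugging Face-style model outputs obtained with output_attentions=True are illustrative choices of ours; the teacher forward passes are assumed to run under torch.no_grad().

```python
import torch
import torch.nn.functional as F

def prediction_kd_loss(student_logits, teacher_logits, attention_mask=None):
    """L_pred: KL(teacher || student) over the vocabulary at every time step."""
    t_logp = F.log_softmax(teacher_logits, dim=-1)   # (batch, seq_len, vocab)
    s_logp = F.log_softmax(student_logits, dim=-1)
    kl = (t_logp.exp() * (t_logp - s_logp)).sum(-1)  # (batch, seq_len)
    if attention_mask is not None:
        return (kl * attention_mask).sum() / attention_mask.sum()
    return kl.mean()

def attention_kd_loss(student_attns, teacher_attns, eps=1e-8):
    """L_attn: KL between teacher and student attention rows, averaged over
    layers, heads, batch, and query positions. Each argument is a tuple of
    per-layer tensors of shape (batch, heads, seq_len, seq_len); under the
    causal mask every row is a distribution over past (and current) tokens."""
    total = 0.0
    for s_a, t_a in zip(student_attns, teacher_attns):
        # add eps and renormalize so the log stays finite on masked entries
        s = (s_a + eps) / (s_a + eps).sum(-1, keepdim=True)
        t = (t_a + eps) / (t_a + eps).sum(-1, keepdim=True)
        total = total + (t * (t.log() - s.log())).sum(-1).mean()
    return total / len(student_attns)

def kd_loss(student_out, teacher_out, attention_mask):
    """L_KD = L_pred + L_attn, combining both terms as in the text."""
    return (prediction_kd_loss(student_out.logits, teacher_out.logits, attention_mask)
            + attention_kd_loss(student_out.attentions, teacher_out.attentions))
```

In this sketch the combined loss simply sums the two terms, mirroring ℒ_KD = ℒ_pred + ℒ_attn; any relative weighting between them would be an additional design choice not specified here.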
We leverage Mistral 7B v0.1 Base <cit.> as our student model and conduct ablations to validate the aforementioned theory. For comparison, Mistral 7B Instruct v2 is employed as a teacher model of similar size, while Mixtral 8x7B Instruct serves as a larger-scale teacher model. These ablations aim to provide empirical evidence supporting the theoretical framework. Subsequently, upon establishing the foundation, we scale up our student model by merging two Mistral models into a 2x7B (13B) mixture of experts <cit.> while using Mixtral 8x7B Instruct as the teacher model. An intriguing observation emerged post training: while our student model may not fully reach the performance level of the teacher model, it exhibits alignment, stemming from the teacher model's adherence to human preferences. Any further effort to enhance alignment using DPO led to a decrease in evaluation metrics. This could indicate that the initial alignment achieved post-training is robust, and further alignment efforts may require an iterative approach with significantly improved preference data and lowered learning rates. §.§ Domain Alignment from Expert Further training an instruction tuned and aligned model in a given domain causes the model to lose its generalising ability; while the model excels at in-domain tasks, it under performs on unseen and out of domain tasks. Domain Alignment from Expert (DAE) uses the above theory as foundation and presents as a novel approach to impart domain knowledge to the trained and aligned model while controlling its generalisation capability. Though we experimented DAE extensively on the student model we got in the previous training phase of knowledge distillation (KD), we successfully conducted an experiment to align a Mistral 7B Instruct v2 as well, implying that this approach could yield good results for any model. In training, the language model q_θ is initialized from the student knowledge distilled model from the previous training phase. We also employ a base reference model p_ref derived from the same knowledge distilled model. The reference model serves to prevent the model from deviating too far from its generalized state during training. Additionally, the domain expert (p_e) is a model specifically trained and aligned on domain-specific data. It is crucial for this expert to excel within its designated domain without the necessity to generalize to unseen or out-of-domain tasks. Within a training batch, the student model utilizes the domain expert as the teacher for in-domain samples and the reference model for non-domain samples. Our study demonstrates that even with domain data just being 10% of the total training data, the model can effectively learn about the domain while still maintaining generalizability. ℒ_d = 1/T∑_t=1^T∑_i=1^V1_d(x) p_e(y_i|x_t' ≤ t) log( p_e(y_i|x_t' ≤ t)/q_θ(y_i|x_t' ≤ t)) ℒ_nd = 1/T∑_t=1^T∑_i=1^V1_nd(x) p_ref(y_i|x_t' ≤ t) log( p_ref(y_i|x_t' ≤ t)/q_θ(y_i|x_t' ≤ t)) ℒ_DAE = ℒ_d + ℒ_nd where ℒ_d represents the loss for domain samples that use the domain expert for the true distribution and ℒ_nd is the loss for general out-of-domain samples that use the reference model. With DAE already being computationally intensive due to three models in memory, we opt to use only prediction layer distillation, as attention based distillation entails even higher computational requirements. Additionally, since DAE serves as an alignment phase and operates on an already distilled model, it could potentially function with slightly less guidance during training. 
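As a concrete illustration of the per-sample teacher switch in DAE, the sketch below assumes Hugging Face-style causal language models that share a tokenizer, with the domain expert and the reference model kept frozen. The function name dae_loss and the boolean is_domain flag marking in-domain samples are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def dae_loss(student, domain_expert, reference_model, input_ids, attention_mask, is_domain):
    """L_DAE = L_d + L_nd: in-domain samples are distilled from the domain expert,
    all other samples from the frozen reference copy of the student."""
    s_logits = student(input_ids=input_ids, attention_mask=attention_mask).logits
    with torch.no_grad():  # teachers are frozen; only the student receives gradients
        e_logits = domain_expert(input_ids=input_ids, attention_mask=attention_mask).logits
        r_logits = reference_model(input_ids=input_ids, attention_mask=attention_mask).logits

    # choose the teacher distribution sample-by-sample
    use_expert = is_domain.view(-1, 1, 1)            # (batch, 1, 1), bool
    t_logits = torch.where(use_expert, e_logits, r_logits)

    t_logp = F.log_softmax(t_logits, dim=-1)
    s_logp = F.log_softmax(s_logits, dim=-1)
    kl = (t_logp.exp() * (t_logp - s_logp)).sum(-1)  # (batch, seq_len)
    return (kl * attention_mask).sum() / attention_mask.sum()
```

Running both teacher forward passes on every sample, as done here for simplicity, is wasteful; an actual implementation could route each sample to only the teacher it needs.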
§ EXPERIMENTS In this section, we assess the ability of KD and DAE in training models to perform comparably with the standard SFT and alignment approach. We utilize the Transformers library <cit.> for supervised fine-tuning and the TRL library <cit.> for DPO alignment atop SFT models. §.§ Models §.§.§ Ablation Study We commence with several ablations on Mistral 7B v0.1 Base to substantiate our theory. Mistral 7B v0.1 Base is employed as the student model, while Mistral 7B Instruct v2 and Mixtral 8x7B v0.1 Instruct serve as teachers for Knowledge Distillation (KD) experiments. These experiments were benchmarked, alongside the SFT experiment, on 20k samples sourced from the public dataset Ultrachat <cit.>. §.§.§ KD at Scale Following successful ablations, we expand these experiments to include 7B and 2x7B models as students with Mixtral 8x7B v0.1 Instruct as the teacher. We merge two Mistral 7B models using mergekit <cit.> to get a 2x7B MoE base model. §.§.§ Domain Alignment from Expert After successful scaling, we proceed to align the scaled 2x7B model within the e-commerce domain. Additionally, we apply the same approach to Mistral 7B Instruct v0.1 to verify the effectiveness of DAE on any model. To establish an e-commerce domain expert, we train a Mistral 7B v0.2 Base model on internal e-commerce data using SFT and DPO. While this model performs well on our internal e-commerce evaluation, its performance on the public evaluation is average. §.§ Dataset We utilize a different mix of public and e-commerce datasets at each training stage. Our emphasis is not primarily on the quality of the datasets but rather on ensuring that the mix is a fair and diverse representation of all domains. * For <ref>, we utilize 20k different samples sourced from the public dataset Ultrachat <cit.>. * For <ref>, we utilize numerous single-turn and multi-turn public datasets <cit.> across diverse domains such as mathematics, coding, reasoning, and Chain-of-Thought. * For <ref>, we use 45k samples from the <ref> dataset and add 5k samples from our in-house e-commerce dataset. * Our in-house e-commerce dataset is a GPT-4 generated instruction dataset spanning across various e-commerce tasks such as extraction, reasoning, QnA and summarisation. §.§ Training Configuration Deriving from the training configurations of supervised fine-tuning and alignment from Llama 2 <cit.> and Zephyr <cit.>, we adopt similar configurations. For <ref> and <ref>, we adhere to configurations similar to SFT and we adopt the DPO configurations for <ref>. * For <ref>, we use a cosine learning rate scheduler with a peak learning rate of 2e-5 and 10% warmup steps. We train for 2 epochs with a global batch size of 512 and packing with a sequence length of 2048 tokens. * For <ref>, we use a cosine learning rate scheduler with a peak learning rate of 2e-5 and 10% warmup steps. We train for 1 epoch with a global batch size of 6000 and packing with a sequence length of 2048 tokens. * For <ref>, we use a constant learning rate scheduler with a peak learning rate of 5e-7 and 10% warmup steps. We train for 1 epoch with a global batch size of 64 and packing with a sequence length of 4096 tokens. §.§ Leaderboard Evaluation Our main evaluations are on single-turn and multi-turn chat benchmarks that measure a model’s ability to follow instructions and respond to challenging prompts across a diverse range of domains. 
We mainly use the following two benchmarks in our study: * MT-Bench <cit.>: A multi-turn evaluation benchmark comprising 80 multi-turn questions across 8 different domains, namely, Writing, Roleplay, Reasoning, Math, Code, STEM, Humanities, and Extraction. For each of the 80 questions, the model must provide single-turn responses and then respond to a predefined follow-up question. The model results are scored using GPT-4 on a scale of 1 to 10, with the final score being an average over the two turns. * AlpacaEval <cit.>: A single-turn benchmark where a model is tasked with generating responses to 805 questions covering various topics, predominantly focused on helpfulness. Models are scored by GPT-4, and the final metric is the pairwise win-rate against a baseline model, text-davinci-003. * EcommEval: An elaborate single-turn in-house benchmark comprising 2000 questions across 20 e-commerce tasks. The model is scored by GPT4-turbo[https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo] on a scale of 1 to 10 under multiple aspects such as instruction following, factuality, accuracy, and helpfulness. § RESULTS We present the results of our ablation study (<ref>) in Table <ref>. We conducted experiments on 20,000 random (R) and quality assured (QA) samples sourced from the ultrachat <cit.> dataset. Specifically, our experiments encompassed SFT on random samples (row 1), SFT on QA samples (row 2), and Knowledge Distillation employing Mistral Instruct v2 and Mixtral 8x7B Instruct as teachers, with Mistral 7B v0.1 Base acting as the student (rows 3 to 6). For KD, we run separate experiments for attention based distillation [see Eq.  (<ref>)], prediction layer distillation [see Eq. (<ref>)], and combined distillation [see Eq. (<ref>)], as illustrated in Figure <ref>. We used MT Bench scores to benchmark these experiments. Our analysis indicates that SFT yields results characterized by a lower mean score and higher deviation, while experiments employing KD exhibit promising outcomes, generally evidenced by higher mean scores and lower deviations, indicating potential for improved performance. We used MT Bench scores to benchmark these experiments. We introduce four models derived from our final scaled-up experiments in table <ref>. * Flip-7B-Instruct-α: We run KDS (<ref>) with Mistral 7B v0.1 Base as the student and Mixtral 8x7B Instruct as the teacher. We observe an MT-Bench score of 7.58, 90.17% win-rate in AlpacaEval, and 8.36 on EcommEval. * Flip-7B-Instruct-β: We run e-commerce DAE (<ref>) on Flip-7B-Instruct-α as the student, with our e-commerce model serving as the domain expert. This model achieves an MT-Bench score of 7.69, 91.6% win-rate in AlpacaEval, and 8.75 on EcommEval. * Flip-2x7B-Instruct-α: We run KDS (<ref>) with Mistral 2x7B as the student and Mixtral 8x7B Instruct as the teacher. We observe an MT-Bench score of 7.7, 92.67% win-rate in AlpacaEval, and 8.4 on EcommEval. * Flip-2x7B-Instruct-β: We run e-commerce DAE (<ref>) on Flip-2x7B-Instruct-α as the student, with our e-commerce model serving as the domain expert. This model achieves an MT-Bench score of 7.90, 93.04% win-rate in AlpacaEval, and 8.95 on EcommEval. § RELATED WORK Large Language Models (LLMs; <cit.>) have opened new avenues in Natural Language Processing (NLP) by solving various complex tasks in a generative manner. Recent works explore instruction tuning <cit.> and human preference alignment <cit.> to create general-purpose safe AI assistants. 
Instruction tuning enables the model to follow instructions and excel at unseen tasks, while alignment step aligns the model with ethical and societal principles. Moreover, the open-sourcing of LLM architectures such as Mistral <cit.> and Llama <cit.> plays a pivotal role in research efforts, and also serves as the foundation of our work. Knowledge Distillation (KD; <cit.>) aims to teach a smaller student model by transferring knowledge from a larger teacher model. This approach popularly has been looked at as a method to compress large models for the ease of deployment. KD has shown great promise in text classification tasks <cit.> by learning from the output distribution of the teacher model, or by learning from the teacher's self-attention output <cit.>. In text generation tasks the student learns by matching the output distribution of the teacher model at every time-step <cit.> or by a supervised fine-tuning on teacher outputs <cit.>. § CONCLUSION In this work, we investigate the problem of knowledge distillation at scale, leveraging teachers of varying sizes. Our KDS approach employs a larger LLM, Mixtral 8x7B Instruct, to teach smaller models of 7B and 2x7B (13B) scale. Conversely, in DAE, we utilize a smaller (7B) domain expert to align larger 2x7B (13B) mixture of experts. We propose knowledge distillation as a legitimate means of model training rather than a mere way to compress larger LLMs for deployment purposes. We demonstrate that our models are either comparable to or, in some cases, even outperforms open-access and proprietary much larger LLMs trained using the popular SFT and alignment approach. While we acknowledge the importance of SFT and alignment in training LLMs, we advocate for our KDS approach as a viable training method for models that could benefit from superior teachers. Additionally, our DAE approach proves to be highly effective for domain alignment without compromising generalization. Further research in this direction has the potential to significantly enrich the LLM training toolbox. plain
http://arxiv.org/abs/2406.18511v1
20240626173129
High-Speed Combinatorial Self-Assembly through Kinetic-Trap Encoding
[ "Félix Benoist", "Pablo Sartori" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
Instituto Gulbenkian de Ciência, Oeiras, Portugal Instituto Gulbenkian de Ciência, Oeiras, Portugal § ABSTRACT Like the letters in the alphabet forming words, reusing components of a heterogeneous mixture is an efficient strategy for assembling a large number of target structures. Examples range from synthetic DNA origami to proteins self-assembling into complexes. The standard self-assembly paradigm views target structures as free-energy minima of a mixture. While this is an appealing picture, at high speed structures may be kinetically trapped in local minima, reducing self-assembly accuracy. How then can high speed, high accuracy, and combinatorial usage of components coexist? We propose to reconcile these three concepts not by avoiding kinetic traps, but by exploiting them to encode target structures. This can be achieved by sculpting the kinetic pathways of the mixture, instead of its free-energy landscape. We formalize these ideas in a minimal toy model, for which we analytically estimate the encoding capacity and kinetic characteristics, in agreement with simulations. Our results may be generalized to other soft-matter systems capable of computation, such as liquid mixtures or elastic networks, and pave the way for high-dimensional information processing far from equilibrium. High-Speed Combinatorial Self-Assembly through Kinetic-Trap Encoding Pablo Sartori July 1, 2024 ==================================================================== Introduction. The combinatorial usage of different components is a prevalent biological strategy to encode information. For example, in the cytoplasm proteins accurately self-assemble into complexes that share proteins with one another <cit.>. This notion has also permeated nanotechnology <cit.>, where the same set of DNA tiles can be reused to reliably self-assemble multiple structures <cit.>. Besides reusability and high accuracy, a fundamental property of biological self-assembly is high speed, which allows cellular adaptation to quickly changing conditions. This motivates the fundamental question of how self-assembly with reusable components can occur fast and accurately. The standard approach to combinatorial self-assembly encodes target structures as minima of the mixture's free-energy landscape <cit.>. While never explicitly mentioned, this approach is subject to a speed-accuracy trade-off <cit.>: self-assembly of targets is accurate when the mixture can relax to target minima in near-equilibrium conditions. Far from equilibrium, as required for high speed, free-energy encoding results in undesired structures trapping the kinetics. To reconcile self-assembly speed, accuracy, and reusability we propose an alternative encoding approach: tuning the kinetics of the pathways leading to target structures. In this approach, kinetic traps, normally understood as deleterious <cit.>, can be exploited to encode information that is accessible far from equilibrium and at high speed <cit.>. In this article, we model the dynamics of a self-assembling heteropolymer in contact with a reservoir of multiple different component species. We show how a large number of target strings can be encoded kinetically, such that accurate self-assembly of any of them will occur at high speed. Furthermore, we analytically calculate and numerically confirm the scaling of the maximum number of structures that a mixture can kinetically encode, the characteristic lifetime of targets, and the dependence of our results on the heterogeneity of targets and their usage of components. 
We expect that other soft-matter systems could also perform complex information processing by sculpting not their energy landscape, but their dynamic space. Model setup. We consider a mixture of N_ tot different monomer species, see Fig. <ref>A, labeled i=1, …, N_ tot, that are kept at fixed chemical potentials, μ_i=μ, and temperature, T (hereafter k_ BT=1, with k_ B Boltzmann's constant). We are interested in conditions under which the mixture can self-assemble any of S different target strings, labeled α=1,…,S, that are defined through composition vectors c^α={i_1^α, i_2^α, …,i_L^α}, with L the length (equal for all targets). Each target contains N≤ N_ tot different monomer species, and is thus characterized by its usage of the mixture, u=N/N_ tot≤1, and its compositional heterogeneity, h=N/L≤1. The reusability of components, within and across strings, allows for a combinatorial expansion of the mixture. We study the dynamics of a heteropolymer that grows by adding and removing monomers of the mixture at its distal end, see Fig. <ref>B-D. The polymer in question is characterized by its composition vector {i_1, i_2, …,i_ℓ}, with ℓ the time-varying length. The addition and removal of monomers depends on the composition of the polymer tip, t_n={i_ℓ+1-n,…,i_ℓ}, with n≥1 the length of the tip. We denote the addition rate of a monomer from species i as k_i^+( t_n), and the removal rate of monomer i_ℓ as k^-( t_n+1). Note that the case n=1 corresponds to near-neighbor coupling among monomers, n=2 to next near-neighbor, etc. Within this setup, our goal is to propose a choice of rates that allows polymerization of target strings reliably and fast. To ensure fast retrieval of targets, we encode their compositions in the binding kinetics, rather than in the energetics, of the mixture components. Therefore, considering the binding of component i to a tip t_n and its subsequent unbinding (from a tip t'_n+1 corresponding to the previous tip t_n to which has been added i), the detailed balance condition on the rates reduces to k_i^+( t_n)/ k^-( t'_n+1)=exp(μ) , which encodes no information about the targets. The targets are instead kinetically encoded through the choice of forward rates k_i^+( t_n) = exp(r_iδ) , where r_i is the number of monomers in the tip t_n that have i as neighbor in the corresponding location (e.g., with n=2, near-neighbor location for i_ℓ and next near-neighbor location for i_ℓ-1) in any target string c^α, and δ is a kinetic discrimination parameter. Note that the rates are defined up to an irrelevant time unit. As an illustration, for the simple case n=1 this rule implies: r_i=1, if the monomer i to be added is a neighbor of the tip monomer i_ℓ in any of the target strings; and r_i=0 otherwise. Alternatively, in the example of Fig. <ref>B with n=2, we have k_ D^+({ R,U})=exp(2δ), due to the target c^1={ N,E,R,U,D,A}, but k_ Z^+({ R,U})=1. In the following, we study the conditions under which this minimal model allows accurate and fast retrieval of targets. Retrieving a target string as a kinetic trap. As a starting point, we consider that the mixture encodes a single string (S=1) that is fully heterogeneous (h=1) and uses all components (u=1, such that L=N_ tot=N). In this case, errors are not due to combinatorial usage of components (first error in Fig. <ref>C), but instead emerge from thermal fluctuations (second error in Fig. <ref>C). At equilibrium, the chemical potential of the mixture balances the entropic tendency to grow, and so μ_ eq=-ln N<0 <cit.>. 
Equation (<ref>) implies that no information is encoded in the binding energies, and so the equilibrium state of the polymer is fully disordered: for μ≳μ_ eq an initially ordered seed will disassemble in favor of a disordered polymer (Fig. <ref>B). This equilibrium state defines a boundary between a depolymerization regime, where the growth speed v (defined as the net rate of monomer addition) is negative, i.e. v<0 for μ<μ_ eq (Fig. <ref>A); and different growth regimes, for which μ>μ_ eq implies v>0 (Fig. <ref>C and D). Equation (<ref>) establishes that target strings are encoded in the kinetics, instead of the energetics. Therefore, the accuracy of retrieval should be maximal when the dynamics are strongly irreversible <cit.>. Since in accurate and irreversible dynamics there is only one possible assembly pathway, the bound on the driving is raised to μ>0. Furthermore, suppressing errors due to the presence of N-1 confounding monomers at each growth step requires that the kinetic discrimination barrier, δ, be sufficiently large. To bound δ, we estimate the error rate p_err as the ratio of the sum of addition rates for all potential erroneous additions to the addition rate of the correct monomer, i.e. p_err≈ (N-1)/exp(nδ). In the highly irreversible regime, the probability to retrieve the seeded target can be estimated as (1-p_err)^N. For N≫1, a significant retrieval probability thus requires Np_err≪1. This leads to a lower bound on the kinetic discrimination parameter: δ_ min = 2/nln N . We distinguish two fast-growth regimes: kinetic disorder for δ<δ_ min, in which thermal fluctuations result in frequent addition errors, i.e. p_err≈1 (Fig. <ref>C); and kinetic order for δ>δ_ min, in which a target string is accurately retrieved, i.e. p_err≈0, until a fluctuation destabilizes it (Fig. <ref>D). Figure <ref>E shows that increasing the driving μ results in an increase of the growth speed, up to saturation at v≈exp(nδ), as well as an increase in retrieval accuracy (defined as the fraction of string length assembled until the first error). Still, high accuracy is only possible for large discrimination barriers, in agreement with Eq. (<ref>). Kinetic encoding implies that targets are not thermodynamically stable. We can however estimate their kinetic stability. The time it takes to retrieve a target string, τ_ ret, is obtained by dividing the length of the string, N, by its growth speed, v, and so τ_ ret≈ Nexp(-nδ). In contrast, the lifetime of the string, τ_ life, is given by the time it takes to add a few incorrect monomers, and so τ_ life≈1/N. Therefore, the lifetime of a string relative to its retrieval time reads τ_ life/τ_ ret≈exp(nδ)/N^2 , and so a larger discrimination barrier results in more stable strings, see Fig. <ref>F. To conclude, we have shown that the kinetic encoding approach in Eq. (<ref>) allows for fast and accurate retrieval of a single target string for strong discrimination far from equilibrium. Combinatorial encoding far from equilibrium. We now turn to the case in which the mixture encodes multiple targets (S>1) by combinatorially reusing components across targets <cit.>. In this scenario, errors arise when one component has as neighbors two different components in two different targets, making these two targets indistinguishable to the tip of a growing polymer. For example, in Fig. 
<ref>C for the state { M,A,R} of the polymer and n=1 we have that k_ I^+=k_ U^+=exp(δ), due to { R} appearing both in c^1={ N,E,R,U,D,A} and c^S={ M,A,R,I,A,S}, which can result in the error shown. Such types of errors hence cannot be suppressed by increasing δ. Conceptually, if the mixture encodes many kinetic pathways to different targets, such pathways may cross, making it likely to retrieve a chimera (formed by fragments of different target strings) rather than the seeded target string (Fig. <ref>A). In Fig. <ref>B, we show two examples of successful and failed retrieval for a mixture that kinetically encodes multiple target strings. For the same large positive values of μ and δ, if the number of stored targets is below a certain maximal value, S<S_ max, retrieval is successful; instead if S>S_ max, the initial seed nucleated fragments from many other different target strings, yielding a chimeric polymer <cit.>. What determines the maximum number of target strings, S_ max, that can be accurately assembled from a mixture via kinetic encoding? To answer this question, we define the promiscuity of components, π, as the number of specific interactions of a typical component at each neighboring location. A large promiscuity turns components indistinguishable, irrespective of δ, hampering the reliability of assembly. The error rate, p_ err, thus denotes the probability that given a tip of size n there is ambiguity regarding which component can be added. For n=1, we can estimate p_err≈(π-1)/π, whereas for n≥2, the error rate scales as p_err∼(π-1)^n/N^n-1 (see SI for derivation). We can then proceed as in the derivation of Eq. (<ref>), focusing on the case where all components are used once in every target (h=u=1), for which π≈ S. We obtain S_ max=1 for n=1, because the error rate, p_err≈(S-1)/S, prevents retrieval for as little as two targets; whereas for n≥2, S_ max∼ N^1-2/n . The predicted size scaling goes from 1 for n=2, and thus no combinatorial usage is possible, to N for the fully-connected case n=N, akin to neural network capacity <cit.>. Therefore, increasing the tip size improves discrimination, which allows to encode more targets. Figure <ref>C shows the results of numerical simulations relating the capacity, S_ max, to the number of component species, N, for different tip sizes, n. As one can see, for n=1,2 no combinatorial usage of components is possible, whereas for n=3,4 the numerical results are in good agreement with the predictions of Eq. (<ref>). Figure <ref>D shows how the time of retrieval, τ_ret, and lifetime, τ_ life depend on the kinetic discrimination parameter δ for different numbers of target strings, S. While τ_ret follows the behavior derived earlier, τ_life is now bounded by exp(-δ), because errors are dominated by component reusability. As S increases, τ_life decreases, making structures more unstable as S approaches the capacity limit, S_ max. Overall, we conclude that kinetic encoding of a combinatorially large number of components is possible, with a capacity and stability that agree with our analytical estimates. The roles of heterogeneity and usage. We now study the effect of target heterogeneity, h, and usage of components, u, on the capacity of the system, S_ max. In this general case, the promiscuity of components is given by π≈ Su/h for large heterogeneous targets [SI]. Following an argument analogous to the previous section, i.e. Lp_err≪1 with p_err∼(π-1)^n/N_ tot^n-1, the scaling in Eq. (<ref>) generalizes to S_ max∼(h/u)^2-1/nL^1-2/n , for n≥2. 
This expression highlights the role of the target string length L as a key extensive quantity regulating the scaling. Eq. (<ref>) also shows that increasing heterogeneity and reducing usage both result in an increase of the capacity. The intuition behind this is simple. Increasing heterogeneity will reduce the reusability of components within each target. Likewise, reducing the usage of components made by each target string reduces the reusability across targets. Both aspects reduce the promiscuity of components, thus facilitating that polymerization pathways of different targets stay separate from each other. Figure <ref> shows that our numerical simulations recover the capacity scaling in Eq. (<ref>). In particular the capacity diverges as the usage becomes low (u→0), and reaches a maximum for fully heterogeneous structures (h=1). Therefore, high heterogeneity and low usage of available components results in increased capacity for kinetic encoding, as dictated by Eq. (<ref>). Discussion. Classical self-assembly relies on the stability of target structures, which results in a trade-off between self-assembly speed and accuracy. In contrast, combining ideas of information thermodynamics <cit.> and neuroscience <cit.>, we have shown how self-assembly of heterogeneous target structures can be performed kinetically. In this approach, higher speed implies higher accuracy, breaking the aforementioned trade-off <cit.>. Since assemblies in this scenario do not correspond to deep energy minima, but to long-lived kinetic traps, they are only stable for a finite amount of time. Our encoding approach enables combinatorial usage of components, and therefore constitutes a clear example of how a complex neural-like computation (in this case target retrieval) can be encoded far from equilibrium in a soft-matter system. It remains open how to incorporate other ideas of information processing, such as error correction <cit.>. It is also open whether kinetic approaches facilitate implementation of other information processing tasks routinely performed by neural systems <cit.>. While we have illustrated the concept of kinetic-trap encoding in a toy model, this idea is adaptable to systems in other branches of soft-matter physics, which at present all use energetic interactions to encode target functions. Examples include self-assembly of structures with more complex geometries <cit.>, programmable liquid phases <cit.>, colloidal self-assembly <cit.>, guided self-folding <cit.>, or elastic network models of proteins <cit.>. Since the biophysical systems that these models aim to describe often operate far from equilibrium, we expect kinetic encoding will play a central role in explaining how biological matter is capable of complex high-dimensional information processing. This work was partly supported by a laCaixa Foundation grant (LCF/BQ/PI21/11830032) and core funding from the Gulbenkian Foundation. [Gavin et al.(2006)Gavin et al.]Gavin06 author author A.-C. Gavin et al., 10.1038/nature04532 journal journal Nature volume 440, pages 631 (year 2006)NoStop [Kühner et al.(2009)Kühner et al.]Kuhner09 author author S.
Kühner et al., 10.1126/science.1176343 journal journal Science volume 326, pages 1235 (year 2009)NoStop [Dunn et al.(2015)Dunn, Dannenberg, Ouldridge, Kwiatkowska, Turberfield, and Bath]Dunn15 author author K. E. Dunn, author F. Dannenberg, author T. E. Ouldridge, author M. Kwiatkowska, author A. J. Turberfield, and author J. Bath, 10.1038/nature14860 journal journal Nature volume 525, pages 82 (year 2015)NoStop [Meng et al.(2016)Meng, Muscat, McKee, Milnes, El-Sagheer, Bath, Davis, Brown, O'Reilly, and Turberfield]Meng16 author author W. Meng, author R. A. Muscat, author M. L. McKee, author P. J. Milnes, author A. H. El-Sagheer, author J. Bath, author B. G. Davis, author T. Brown, author R. K. O'Reilly, and author A. J. Turberfield, 10.1038/nchem.2495 journal journal Nature Chemistry volume 8, pages 542 (year 2016)NoStop [Evans et al.(2024)Evans, O'Brien, Winfree, and Murugan]Evans24 author author C. G. Evans, author J. O'Brien, author E. Winfree, and author A. Murugan, 10.1038/s41586-023-06890-z journal journal Nature volume 625, pages 500 (year 2024)NoStop [Murugan et al.(2015a)Murugan, Zeravcic, Brenner, and Leibler]Murugan_PNAS15 author author A. Murugan, author Z. Zeravcic, author M. P. Brenner, and author S. Leibler, 10.1073/pnas.1413941112 journal journal Proceedings of the National Academy of Sciences volume 112, pages 54 (year 2015a)NoStop [Bisker and England(2018)]Bisker18 author author G. Bisker and author J. L. England, 10.1073/pnas.1805769115 journal journal Proceedings of the National Academy of Sciences volume 115, pages E10531 (year 2018)NoStop [Sartori and Leibler(2020)]Sartori20 author author P. Sartori and author S. Leibler, 10.1073/pnas.1911028117 journal journal Proceedings of the National Academy of Sciences volume 117, pages 114 (year 2020)NoStop [Bennett(1979)]Bennett79 author author C. H. Bennett, 10.1016/0303-2647(79)90003-0 journal journal Biosystems volume 11, pages 85 (year 1979)NoStop [Sartori and Pigolotti(2013)]Sartori13 author author P. Sartori and author S. Pigolotti, 10.1103/PhysRevLett.110.188101 journal journal Phys. Rev. Lett. volume 110, pages 188101 (year 2013)NoStop [Gartner et al.(2022)Gartner, Graf, and Frey]Gartner22 author author F. M. Gartner, author I. R. Graf, and author E. Frey, 10.1073/pnas.2116373119 journal journal Proceedings of the National Academy of Sciences volume 119, pages e2116373119 (year 2022)NoStop [Jia et al.(2020)Jia, Schmit, and Chen]Jia20 author author Z. Jia, author J. D. Schmit, and author J. Chen, 10.1073/pnas.1911153117 journal journal Proceedings of the National Academy of Sciences volume 117, pages 10322 (year 2020)NoStop [Sanz et al.(2007)Sanz, Valeriani, Frenkel, and Dijkstra]Sanz07 author author E. Sanz, author C. Valeriani, author D. Frenkel, and author M. Dijkstra, 10.1103/PhysRevLett.99.055501 journal journal Phys. Rev. Lett. volume 99, pages 055501 (year 2007)NoStop [Osat et al.(2023)Osat, Metson, Kardar, and Golestanian]Osat_ArXiv23 author author S. Osat, author J. Metson, author M. Kardar, and author R. Golestanian, @noop title Escaping kinetic traps using non-reciprocal interactions, (year 2023), http://arxiv.org/abs/2309.00562 arXiv:2309.00562 [cond-mat.stat-mech] NoStop [Dill and Chan(1997)]Dill97 author author K. A. Dill and author H. S. Chan, 10.1038/nsb0197-10 journal journal Nature Structural Biology volume 4, pages 10 (year 1997)NoStop [Murugan et al.(2015b)Murugan, Zou, and Brenner]Murugan_Nat15 author author A. Murugan, author J. Zou, and author M. P. 
Brenner, 10.1038/ncomms7203 journal journal Nature Communications volume 6, pages 6203 (year 2015b)NoStop [Lefebvre and Maes(2023)]Lefebvre23 author author B. Lefebvre and author C. Maes, 10.1007/s10955-023-03110-w journal journal Journal of Statistical Physics volume 190, pages 90 (year 2023)NoStop [Andrieux and Gaspard(2008)]Andrieux08 author author D. Andrieux and author P. Gaspard, 10.1073/pnas.0802049105 journal journal Proceedings of the National Academy of Sciences volume 105, pages 9516 (year 2008)NoStop [Hopfield(1982)]Hopfield82 author author J. J. Hopfield, 10.1073/pnas.79.8.2554 journal journal Proceedings of the National Academy of Sciences volume 79, pages 2554 (year 1982)NoStop [Bennett(1982)]Bennett82 author author C. H. Bennett, 10.1007/BF02084158 journal journal International Journal of Theoretical Physics volume 21, pages 905 (year 1982)NoStop [Hertz et al.(1991)Hertz, Krogh, and Palmer]Hertz91 author author J. Hertz, author A. Krogh, and author R. G. Palmer, @noop title Introduction to the Theory of Neural Computation, Santa Fe Institute Series (publisher CRC Press, year 1991)NoStop [Pigolotti and Sartori(2016)]Pigolotti16 author author S. Pigolotti and author P. Sartori, 10.1007/s10955-015-1399-2 journal journal Journal of Statistical Physics volume 162, pages 1167 (year 2016)NoStop [Banerjee et al.(2017)Banerjee, Kolomeisky, and Igoshin]Banerjee17 author author K. Banerjee, author A. B. Kolomeisky, and author O. A. Igoshin, 10.1073/pnas.1614838114 journal journal Proceedings of the National Academy of Sciences volume 114, pages 5183 (year 2017)NoStop [Hopfield(1974)]Hopfield74 author author J. J. Hopfield, 10.1073/pnas.71.10.4135 journal journal Proceedings of the National Academy of Sciences volume 71, pages 4135 (year 1974)NoStop [Poulton et al.(2019)Poulton, ten Wolde, and Ouldridge]Poulton19 author author J. Poulton, author P. R. ten Wolde, and author T. Ouldridge, 10.1073/pnas.1808775116 journal journal Proceedings of the National Academy of Sciences volume 116, pages 201808775 (year 2019)NoStop [Takahashi et al.(2024)Takahashi, Aoyanagi, Pigolotti, and Toyabe]Takahashi24 author author T. Takahashi, author H. Aoyanagi, author S. Pigolotti, and author S. Toyabe, 10.1101/2024.04.19.590219 journal journal bioRxiv (year 2024), 10.1101/2024.04.19.590219NoStop [Zhu et al.(2023)Zhu, Du, King, and Brenner]Zhu23 author author Q.-Z. Zhu, author C. X. Du, author E. M. King, and author M. P. Brenner, @noop title Proofreading mechanism for colloidal self-assembly, (year 2023), http://arxiv.org/abs/2312.08619 arXiv:2312.08619 [cond-mat.soft] NoStop [Krotov(2023)]Krotov23 author author D. Krotov, 10.1038/s42254-023-00595-y journal journal Nature Reviews Physics volume 5, pages 366 (year 2023)NoStop [Lenz and Witten(2017)]Lenz_Nat17 author author M. Lenz and author T. Witten, 10.1038/nphys4184 journal journal Nature Physics volume 13, pages 1100 (year 2017)NoStop [Tyukodi et al.(2022)Tyukodi, Mohajerani, Hall, Grason, and Hagan]Tyukodi22 author author B. Tyukodi, author F. Mohajerani, author D. M. Hall, author G. M. Grason, and author M. F. Hagan, 10.1021/acsnano.2c00865 journal journal ACS Nano volume 16, pages 9077 (year 2022)NoStop [Teixeira et al.(2023)Teixeira, Carugno, Neri, and Sartori]Teixeira23 author author R. B. Teixeira, author G. Carugno, author I. Neri, and author P. 
Sartori, @noop title Liquid hopfield model: retrieval and localization in heterogeneous liquid mixtures, (year 2023), http://arxiv.org/abs/2310.18853 arXiv:2310.18853 [physics.bio-ph] NoStop [Zwicker and Laan(2022)]Zwicker22 author author D. Zwicker and author L. Laan, 10.1073/pnas.2201250119 journal journal Proceedings of the National Academy of Sciences volume 119, pages e2201250119 (year 2022)NoStop [McMullen et al.(2022)McMullen, Muñoz Basagoiti, Zeravcic, and Brujic]McMullen22 author author A. McMullen, author M. Muñoz Basagoiti, author Z. Zeravcic, and author J. Brujic, 10.1038/s41586-022-05198-8 journal journal Nature volume 610, pages 502 (year 2022)NoStop [Romano et al.(2020)Romano, Russo, Kroc, and ŠŠulc]Romano20 author author F. Romano, author J. Russo, author L. c. š. Kroc, and author P. ŠŠulc, 10.1103/PhysRevLett.125.118003 journal journal Phys. Rev. Lett. volume 125, pages 118003 (year 2020)NoStop [Pinto et al.(2024)Pinto, Araújo, ŠŠulc, and Russo]Pinto24 author author D. E. P. Pinto, author N. A. M. Araújo, author P. ŠŠulc, and author J. Russo, 10.1103/PhysRevLett.132.118201 journal journal Phys. Rev. Lett. volume 132, pages 118201 (year 2024)NoStop [Yan et al.(2017)Yan, Ravasio, Brito, and Wyart]Yan17 author author L. Yan, author R. Ravasio, author C. Brito, and author M. Wyart, 10.1073/pnas.1615536114 journal journal Proceedings of the National Academy of Sciences volume 114, pages 2526 (year 2017)NoStop [Rocks et al.(2017)Rocks, Pashine, Bischofberger, Goodrich, Liu, and Nagel]Rocks17 author author J. W. Rocks, author N. Pashine, author I. Bischofberger, author C. P. Goodrich, author A. J. Liu, and author S. R. Nagel, 10.1073/pnas.1612139114 journal journal Proceedings of the National Academy of Sciences volume 114, pages 2520 (year 2017)NoStop Supporting information for “High-Speed Combinatorial Self-Assembly through Kinetic-Trap Encoding” Ansatz for the error rate. In the regime where the promiscuity π is large, components become indistinguishable, irrespective of δ, which prevents reliable assembly. The error rate, p_ err, defined as the probability that there is ambiguity regarding which component can be added, is estimated as follows in the general case u,h≤1. For any given tip size n, an error requires that two components share the same n neighbors either in different targets or in different locations in the same target. Due to component reusage, the error rate for n=1 is simply proportional to the number of specific neighbors, except the one continuing the string in formation: (π-1). Then, sharing an additional neighbor should be decreased by a factor (π-1)/N_tot, due to the large number of species. For large δ, we thus expect that the average error rate is p_err∼(π-1)(π-1/N_tot)^n-1. This estimate is numerically validated in Fig. <ref> for n≥2 and N_ tot large, where it is below 1. The agreement is particularly good at low error rate, where the estimate yields a large capacity S_ max. For n=1, a better estimate is p_err≈(π-1)/π simply counting the fraction of specific neighbors that leads to a retrieval error. Ansatz for the promiscuity. We define the promiscuity of components, π, as the number of specific interactions of a typical component at each neighboring location. In the general case u,h≤1, we estimate π≈ Su/h for large heterogeneous targets <cit.>. This prediction agrees well with numerical calculations, especially for large heterogeneous targets (brown circles), see Fig. <ref>.
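To make the kinetic rules of the model setup concrete (addition rates k_i^+(t_n) = exp(r_i δ) and removal rates fixed by detailed balance with chemical potential μ), the following is a minimal, illustrative Python sketch of the heteropolymer growth dynamics; it is not the authors' code. Events are drawn Gillespie-style, time increments are ignored since only the event sequence matters for retrieval, the seed is simply kept intact, and the parameter values and the choice n = 2 are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)

def addition_rates(polymer, targets, n, n_species, delta):
    """k_i^+ = exp(r_i * delta): r_i counts how many of the last n polymer monomers
    have species i at the matching neighbouring location in any stored target."""
    rates = np.empty(n_species)
    tip = polymer[-n:]
    for i in range(n_species):
        r_i = 0
        for offset, m in enumerate(reversed(tip), start=1):
            if any(t[j] == m and t[j + offset] == i
                   for t in targets for j in range(len(t) - offset)):
                r_i += 1
        rates[i] = np.exp(r_i * delta)
    return rates

def grow(seed, targets, n=2, n_species=20, delta=8.0, mu=1.0, steps=400):
    """Gillespie-style growth/shrinkage of the heteropolymer tip."""
    poly = list(seed)
    for _ in range(steps):
        k_plus = addition_rates(poly, targets, n, n_species, delta)
        # detailed balance: removing the tip monomer balances the rate with which
        # it was added, times exp(-mu)
        if len(poly) > len(seed):
            k_minus = addition_rates(poly[:-1], targets, n, n_species,
                                     delta)[poly[-1]] * np.exp(-mu)
        else:
            k_minus = 0.0  # simplification: never disassemble the initial seed
        if rng.random() < k_minus / (k_plus.sum() + k_minus):
            poly.pop()
        else:
            poly.append(int(rng.choice(n_species, p=k_plus / k_plus.sum())))
    return poly

# single fully heterogeneous target that uses every species once (h = u = 1)
n_species = 20
target = [int(s) for s in rng.permutation(n_species)]
retrieved = grow(seed=target[:3], targets=[target])
print("target   :", target)
print("retrieved:", retrieved[:n_species])
```

With δ well above δ_min = (2/n) ln N and μ > 0, the printed polymer should reproduce the seeded target; lowering δ or storing many overlapping targets should instead produce the disordered or chimeric sequences discussed in the text.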
http://arxiv.org/abs/2406.19101v1
20240627112836
DocKylin: A Large Multimodal Model for Visual Document Understanding with Efficient Visual Slimming
[ "Jiaxin Zhang", "Wentao Yang", "Songxuan Lai", "Zecheng Xie", "Lianwen Jin" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT Current multimodal large language models (MLLMs) face significant challenges in visual document understanding (VDU) tasks due to the high resolution, dense text, and complex layouts typical of document images. These characteristics demand a high level of detail perception ability from MLLMs. While increasing input resolution improves detail perception, it also leads to longer sequences of visual tokens, increasing computational costs and straining the models' ability to handle long contexts. To address these challenges, we introduce DocKylin, a document-centric MLLM that performs visual content slimming at both the pixel and token levels, thereby reducing token sequence length in VDU scenarios. DocKylin utilizes an Adaptive Pixel Slimming (APS) preprocessing module to perform pixel-level slimming, increasing the proportion of informative pixels. Moreover, DocKylin incorporates a novel Dynamic Token Slimming (DTS) module to conduct token-level slimming, filtering essential tokens and removing others to create a compressed, adaptive visual sequence. Experiments demonstrate DocKylin's promising performance across various VDU benchmarks. Notably, both the proposed APS and DTS are parameter-free, facilitating easy integration into existing MLLMs, and our experiments indicate their potential for broader applications. § INTRODUCTION With the substantial growth in data and model sizes, as well as alignment with human preferences, current Large Language Models (LLMs) <cit.> have demonstrated remarkable capabilities in reasoning, common sense understanding, expertise in specialized knowledge, zero/few-shot learning, and instruction following, shedding light on the potential for Artificial General Intelligence. Multimodal Large Language Models (MLLMs) aim to enhance these powerful LLMs by endowing them with multimodal capabilities, enabling the integration of diverse modalities beyond text alone. Although there has been significant progress in the development of MLLMs, their performance in visual document understanding (VDU) tasks remains suboptimal <cit.>. A primary reason for this is their support for input resolutions of only up to 448x448, which is inadequate for high-resolution, fine-grained document images. Numerous efforts <cit.> have recently been made to increase the input resolution to address this limitation and achieved notable improvements. However, these efforts still face several challenges: 1) Increasing the resolution also escalates the redundant regions within documents, which results in an increased number of redundant visual tokens, complicating the task for LLMs to identify the correct answers. 2) Some methods <cit.> employ a fixed rectangular resolution, which can lead to text distortion given the typically large aspect ratios of document images. Moreover, scaling smaller images to larger ones results in inefficiencies. 3) To manage the extended sequences of visual tokens resulting from high resolution, most methods <cit.> employ a fixed compression ratio or extract a predetermined number of tokens. While this approach may be suitable for natural scene images, it is less effective for document scenarios where content density varies significantly. For example, images of scientific papers are information-dense, whereas images of identification cards are comparatively sparse.
To tackle these challenges, we propose a series of solutions. Firstly, we propose a parameter-free Adaptive Pixel Slimming (APS) preprocessing technique that utilizes gradient information to identify and eliminate redundant regions within document images, thereby reducing the proportion of redundant pixels and enhancing computational efficiency. Secondly, we adopt a more flexible image encoding strategy by capping the maximum number of pixels instead of fixing resolutions. This allows for variable resolutions and aspect ratios. Our lightweight Swin encoder <cit.> processes high-resolution document images holistically, ensuring feature coherence. Thirdly, we propose a Dynamic Token Slimming (DTS) method based on k-means clustering <cit.>, which is also parameter-free. DTS filters useful tokens from a vast pool of visual tokens, obtaining visual sequences with reduced and dynamic lengths. Incorporating these innovations, we developed our DocKylin, which demonstrates superior performance across multiple VDU benchmarks. Ablation studies further confirm the effectiveness of some of these designs and highlight their potential applicability to other MLLMs or scenarios. § RELATED WORKS §.§ Text-centric MLLMs Text is crucial for understanding images, but the ability of existing MLLMs in this regard is generally subpar <cit.>. Therefore, there has been a surge of recent work studying how to enhance the perception, recognition, and understanding abilities of text images in MLLMs. Some approaches <cit.> leverage additional Optical Character Recognition (OCR) engines to recognize text in images and input it into MLLMs. Others <cit.> enhance the fine-grained perceptual capabilities of MLLMs by maintaining the high resolution of the input images. This includes strategies such as splitting the image into small pieces to accommodate the VIT's requirement for fixed and small-size inputs and utilizing visual encoders that are more suitable for text images. Authors of <cit.> explore learning text grounding for MLLMs, while <cit.> investigate the use of more superior and larger scale instruction data to boost performance. Although these efforts have significantly improved the performance of MLLMs on text-centric tasks, there is still substantial room for improvement. §.§ Visual Token Compression in MLLMs Maintaining high-resolution input images is a straightforward and effective method to enhance the performance of MLLMs on text images. However, increased resolutions inevitably lead to a longer sequence of visual tokens. There are various methods for compressing the length of visual sequences in current MLLMs. One of the earlier and commonly used approaches <cit.> involves utilizing a group of learnable query tokens to extract information in a cross-attention manner. Another method <cit.> involves concatenating adjacent tokens along the channel dimension to reduce sequence length. Additionally, convolutional neural networks (CNNs) have also been proven effective for visual token compression <cit.>. However, all these methods either extract tokens of a fixed length or compress them at a fixed ratio. This approach is unsuited for document images, where the content density can vary significantly. A token sequence that is too short or a compression ratio that is too high can result in an inadequate representation of some document images. Conversely, longer token sequences or lower compression rates can reduce computational efficiency. 
Recent studies <cit.> have shown that the optimal length of visual token sequences varies for each sample. Consequently, some recent approaches have explored the adaptive compression of visual sequences. Most of these methods <cit.> dynamically discard nonessential tokens using attention scores between visual tokens or between visual tokens and class tokens. However, considering the differences between text and natural objects with complete semantics, this paradigm may not be suitable for text images. For text-centric images, HRVDA<cit.> employs text position annotations to train a separate visual token filtering network, which requires introducing additional parameters training procedures. Compared to the methods above, we explore alternative dynamic compression schemes specifically for document images, which include pixel-level redundancy removal and unsupervised clustering for filtering out nonessential tokens. § METHODOLOGY As illustrated in Fig. <ref>, similar to many multimodal large language models (MLLMs), our architecture also incorporates an image encoder, an MLP layer that aligns the visual modality to the language modality, and a LLM. Building on this framework, DocKylin includes the following enhancements: Images are preprocessed by the Adaptive Pixel Slimming module before being fed to the image encoder, which removes redundant regions and reduces resolution while increasing the proportion of informative pixels. The visual encoder takes the resulting image as input and maintains the aspect ratio. The visual tokens obtained by the visual encoder and MLP layer are handled by our Dynamic Token Slimming (DTS) module to remove nonessential tokens to enhance both performance and efficiency. We provide a detailed description of these improvements in the following. §.§ Adaptive Pixel Slimming Perceiving textual information in images typically requires maintaining a high resolution of input, especially for text-rich document images. This poses significant challenges for current resource-intensive MLLMs. To address this issue, existing methods employ strategies such as dividing the images into patches for processing <cit.>, employing visual encoders that support high resolution <cit.>, leveraging OCR results as additional inputs <cit.>, or applying compression transformations to the images beforehand <cit.>. Here, we propose a novel approach called Adaptive Pixel Slimming (APS) to efficiently reduce the resolution of document images, which is orthogonal to the approaches mentioned above. APS is based on the observation that, as illustrated in Fig. <ref>, while document images usually exhibit high resolution, they commonly contain a significant proportion of redundant regions. These regions not only increase the computational burden on the visual encoder but may also pose challenges <cit.> to the reasoning capabilities of the LLMs by the increased redundant visual tokens. APS is thus designed to identify and remove these regions. Specifically, as shown in Fig. <ref>, during the gradient extraction stage, we utilize the Sobel operator to extract the gradient map I_G from the original image I_S. Regions with low gradient values are considered smooth and devoid of significant information. Therefore, in the redundant region determination stage, we identify the redundant regions in the horizontal and vertical directions by locating areas with consistently low gradient values. 
For instance, to identify redundant regions in the image's horizontal direction: if the sum of gradient values for a specific row is below a predefined threshold, we classify that row as redundant. A sequence of contiguous redundant rows constitutes a redundant region in the horizontal direction. The computation for the vertical direction follows a similar procedure. The final image can be obtained by removing the identified redundant rows and columns. For more details on the algorithm and parameter settings, please refer to the Appendix. Table <ref> presents the reduction in image resolution after applying our APS module, which demonstrates that APS significantly decreases the resolution of images. Some visual results are available in the Appendix. It is worth noting that APS is primarily designed for document images, where redundant regions are commonly distinctly defined. In contrast, elements in natural scene images tend to be more complex, making it challenging to identify extensive redundant regions. Consequently, APS generally has a limited impact on natural scene tasks. In the subsequent experimental section of our paper, we demonstrate that the application of APS consistently improves performance on multiple document image benchmarks without adversely affecting tasks involving natural scenes. §.§ More Flexible Image Encoding Most existing MLLMs <cit.> utilize a fixed rectangular resolution, such as 224×224 or 336×336. This approach distorts the original aspect ratio of the images, leading to text deformation and inefficiencies when upscaling images smaller than the preset resolution. While some methods <cit.> allow for more flexible aspect ratios and resolutions, they often employ a patch-based processing strategy. Such a strategy may be suitable for natural scene images where the continuity and context of visual elements are less critical. However, it is less effective for document images <cit.> that contain dense text. Segmenting such images can disrupt the textual content, severing lines of text or splitting words and sentences across different patches. This discontinuity can impair the ability to accurately recognize and interpret textual information essential for effective document image understanding. We employ a more flexible approach that can accommodate varying resolutions and aspect ratios without the need for image patching. Specifically, for an image obtained by APS with the size of H×W, we set a maximum allowable number of pixels, MAX_SIZE. If the product of H and W is less than MAX_SIZE, the image will not be resized. Otherwise, a scaling factor r=Int(√(MAX_SIZE/H× W)) is calculated, and then the image is resized to rH×rW. Here Int() denotes the floor function to ensure that the resulting image does not exceed the maximum allowable pixels. One advantage of setting a maximum number of pixels rather than fixed dimensions is the ability to support more varied resolutions and preserve the original aspect ratio. Additionally, this manner avoids the loss of image details due to resolution reduction as much as possible without increasing computational resource consumption. For example, if we set the maximum pixels number MAX_SIZE as 960×960, we can accommodate an input resolution of 1280×720 without alteration. In contrast, a fixed-dimension strategy would require resizing the image to 960×960, resulting in a 25% compression of the long side. Both strategies consume comparable computational resources since the number of visual tokens remains consistent. 
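To make the resize rule above concrete, the following is a minimal sketch (our own illustration, not the authors' code). Note that we floor the resized dimensions rather than the ratio itself, since flooring a ratio smaller than one would collapse the image; we assume this is the intended reading of Int() in the text. MAX_SIZE follows the 1728×1728 setting reported later in the implementation details.

```python
import math
from PIL import Image

MAX_SIZE = 1728 * 1728  # maximum allowed number of pixels (value reported by the paper)

def resize_max_pixels(img: Image.Image, max_size: int = MAX_SIZE) -> Image.Image:
    """Shrink the image only if H*W exceeds max_size, preserving the aspect ratio."""
    w, h = img.size
    if h * w <= max_size:
        return img                      # small images are left untouched
    r = math.sqrt(max_size / (h * w))   # scaling factor r < 1
    return img.resize((int(r * w), int(r * h)))  # floor the resized dimensions
```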
Although ViTs pre-trained with CLIP <cit.> demonstrate impressive generalization capabilities for natural scene images, they perform suboptimally on dense document images <cit.>. Furthermore, their self-attention mechanism, which has a computational complexity that scales quadratically with image resolution, makes it challenging to support high-resolution inputs. Consequently, we opt to use the more efficient Swin Transformer <cit.> as our visual encoder, which has been pre-trained on full-text recognition tasks <cit.>. It enables us to process entire images as single inputs. §.§ Dynamic Token Slimming Although the APS module has been effective in removing some redundant regions, it is limited to processing entire rows or columns and is ineffective for more complex scenarios. In contrast, our Dynamic Token Slimming (DTS) method offers a more flexible approach to eliminate redundant visual content at the token level. The design of DTS is based on the premise that a well-trained visual encoder should be capable of effectively distinguishing between essential and nonessential regions within an image, with the features corresponding to different regions exhibiting sufficient discriminative properties. We propose to separate these two types of tokens directly through clustering and then aggregate nonessential tokens into essential tokens. We next detail the proposed DTS, which comprises two main processes: Dual-center K-Means Clustering and Similarity Weighted Aggregation. Dual-center K-Means Clustering. To distinguish tokens into essential and nonessential types without introducing additional parameters, we apply K-Means clustering <cit.> with two centers to the output sequence from the visual encoder, as illustrated in Fig. <ref>. However, after obtaining the two clusters, it is unclear which cluster contains the essential tokens and which contains the nonessential tokens. According to findings from <cit.>, nonessential tokens typically lack uniqueness and are often similar to other tokens. Therefore, we identify potential nonessential tokens by evaluating the similarity of each visual token with all other ones. The cluster with fewer of these nonessential tokens is determined to be the one containing the essential tokens. For more implementation details, please refer to Algorithm <ref>. Similarity Weighted Aggregation. After obtaining essential and nonessential tokens, how to handle the nonessential tokens becomes a crucial issue. Inspired by <cit.>, to mitigate potential clustering errors and prevent information loss, we do not discard the nonessential tokens outright. Instead, we aggregate them into the essential tokens to minimize the potential loss of effective information while enhancing the model's processing efficiency. As illustrated in Fig. <ref>, for each nonessential token from the clustering step, we find the most similar essential token. Each essential token may correspond to zero or multiple nonessential tokens. If an essential token has corresponding nonessential tokens, we perform a similarity-weighted summation of these tokens to obtain an aggregated token. Specifically, after determining the set of essential tokens ℰ={e_1, e_2, …, e_N} and the set of nonessential tokens 𝒩={ē_1, ē_2, …, ē_M}, we aggregate the nonessential tokens ē_j into the essential tokens e_i by similarity-weighted summation: ê_i = w_i e_i + ∑_ē_j∈𝒩 w_j ē_j, where w_i and w_j are the weights for the i-th essential and j-th nonessential token, and ê_i is the aggregated token. 
We use the similarity between e_i and ē_j to determine w_i and w_j. Specifically, we first define the similarity matrix between essential and nonessential tokens: c_i,j = e_i^T ē_j / (‖e_i‖‖ē_j‖). Next, we obtain a mask matrix to ensure that each nonessential token is aggregated only with the most similar essential token: m_i,j = 1 if i = argmax_i' c_i',j, and m_i,j = 0 otherwise. Finally, we obtain the aggregation weights w_j and w_i by w_j = exp(c_i,j) m_i,j / (∑_ē_j'∈𝒩 exp(c_i,j') m_i,j' + e) and w_i = e / (∑_ē_j'∈𝒩 exp(c_i,j') m_i,j' + e), where e = exp(1) stems from the essential token's unit self-similarity, so that w_i and the weights w_j assigned to e_i sum to one. The pseudo-code for the specific implementation can be found in Algorithm <ref> in the appendix. Datasets used in the pre-training and instruction tuning phases. Pre-training: CASIA-HWDB <cit.>, CCpdf <cit.>, ChartQA <cit.>, DeepForm <cit.>, DocVQA <cit.>, DVQA <cit.>, FigureQA <cit.>, InfoVQA <cit.>, KLC <cit.>, OCRVQA <cit.>, PlotQA <cit.>, PubTabNet <cit.>, RVL-CDIP <cit.>, SCUT-HCCDoc <cit.>, SynthDog <cit.>, TAT-QA <cit.>, VisualMRC <cit.>, WildReceipt <cit.>. Instruction tuning: ChartQA <cit.>, DeepForm <cit.>, DocILE <cit.>, DocVQA <cit.>, DVQA <cit.>, InfoVQA <cit.>, InsuranceQA <cit.>, KLC <cit.>, OCR-TROSD <cit.>, TAT-QA <cit.>, VisualMRC <cit.>, WildReceipt <cit.>, WTQ <cit.>. § EXPERIMENTS §.§ Implementation Details Our model underwent two training stages: pre-training and instruction tuning. The training data used in each phase are listed in Table <ref>. During the pre-training stage, we trained only the visual encoder and the MLP, keeping the LLM parameters fixed. In this phase, the model learns to perform full-text recognition on document images and to convert charts and tables into markdown formats <cit.>. The goal is to enable the visual encoder to comprehend document content and align the visual features with the language space through the MLP. During the pre-training phase, the model was trained for 450,000 iterations with a batch size of 8. The learning rate decayed from 1e-4 to 1e-5 using cosine annealing, which took approximately 40 A800 GPU-days. In the instruction tuning phase, the model underwent 300,000 iterations with the same batch size and learning rate as in the pre-training phase. This phase further consumed approximately 24 A800 GPU-days. Our Adaptive Pixel Slimming method is employed in both phases, while Dynamic Token Slimming (DTS) is used only during the instruction tuning phase. The maximum allowed number of pixels, MAX_SIZE, is set to 1728×1728. We initialize our visual encoder with Donut-Swin (0.07B) <cit.>, pre-trained on the reading-texts task, and use Qwen-7B-Chat <cit.> as our LLM. §.§ Comparisons on VDU Benchmarks We first compare our model with several existing general-purpose multimodal large language models (MLLMs). As these models sometimes tend to provide detailed responses, following <cit.>, we adopt accuracy as our metric: a response is considered correct if it contains the complete answer. The results, as shown in Table <ref>, demonstrate that DocKylin achieves a significant advantage over the competing models. This highlights that VDU tasks remain challenging for current general-purpose MLLMs. Note that descriptions of all these benchmarks are included in the Appendix. We also compare our model with several current text-centric MLLMs. The results, presented in Table <ref>, use the same metrics as those employed in the original benchmarks. Detailed descriptions of the specific metrics used for each benchmark can be found in the Appendix. 
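Before turning to the detailed comparisons, the DTS computation defined in the previous section can be summarized in code. The following is our own minimal NumPy/scikit-learn sketch, not the authors' implementation: tokens are assumed to be given as an (L, D) array, and the median-based cutoff used to flag "similar-to-everything" tokens when choosing the essential cluster is an illustrative heuristic rather than a setting from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def dynamic_token_slimming(tokens: np.ndarray) -> np.ndarray:
    """tokens: (L, D) visual token features; returns the slimmed (N, D) sequence."""
    # 1) Dual-center k-means over the token features.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tokens)

    # 2) Pick the essential cluster: tokens similar to all other tokens are treated as
    #    likely nonessential; the cluster containing fewer of them is kept as essential.
    unit = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
    mean_sim = (unit @ unit.T).mean(axis=1)
    likely_nonessential = mean_sim > np.median(mean_sim)      # illustrative cutoff
    ess_label = min((0, 1), key=lambda c: likely_nonessential[labels == c].sum())

    ess, non = tokens[labels == ess_label], tokens[labels != ess_label]
    if len(non) == 0:
        return ess

    # 3) Similarity-weighted aggregation: each nonessential token is merged into its
    #    most similar essential token, following the weights c_{i,j}, m_{i,j} above.
    ess_u = ess / (np.linalg.norm(ess, axis=1, keepdims=True) + 1e-8)
    non_u = non / (np.linalg.norm(non, axis=1, keepdims=True) + 1e-8)
    c = ess_u @ non_u.T                                       # (N, M) cosine similarities
    m = np.zeros_like(c)
    m[c.argmax(axis=0), np.arange(c.shape[1])] = 1.0          # best essential per nonessential
    denom = (np.exp(c) * m).sum(axis=1) + np.e                # + e from unit self-similarity
    w_i = np.e / denom                                        # weight of the essential token
    W = (np.exp(c) * m) / denom[:, None]                      # weights of its nonessential tokens
    return w_i[:, None] * ess + W @ non                       # aggregated tokens
```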
Our method demonstrates advantages over these text-centric MLLMs, achieving leading performance across multiple benchmarks. Notably, our visual encoder is more lightweight than those used by the compared methods. Some visual results from DocKylin on these benchmarks are presented in Fig. <ref>. The first row shows the results of redundant regions detected by APS, with corresponding text explaining the resulting reduction in resolution. The second row displays the nonessential tokens removed by DTS, with accompanying text illustrating its effect on compressing the visual sequence. These results demonstrate their effectiveness in accurately retaining important regions while significantly reducing the resolution and the length of the visual sequence. To further demonstrate the superiority of our method, we compared it with other parameter-free visual token compression methods, using LLaVA1.5 <cit.> as our baseline model (without retraining). The results in Table <ref> indicate that our APS and DTS achieve performance improvements with shorter sequence lengths compared to the baseline model, significantly outperforming existing SOTA compression methods. It is worth noting that these results are achieved without retraining. Typically, after retraining, models can better adapt to the new mode of compressed token sequences, leading to further performance improvements <cit.>. §.§ Ablation Studies and Potential Broader Applications We validate the effectiveness of APS and DTS on DocKylin and give the results in Table <ref>. Note that, settings without DTS in Table <ref> refer to retraining a new model from scratch without employing DTS in the instruction tuning phase, rather than simply removing DTS from the already trained DocKylin model. The results demonstrate that both APS and DTS effectively reduce token sequence length and enhance computational efficiency, while also improving performance. This aligns with findings from previous studies <cit.>, which suggest that excessively long sequences due to high resolution may degrade performance. This degradation likely occurs because the visual sequences contain too many redundant tokens, placing greater demands on the model's ability to handle long contexts and making the learning more challenging. Notably, the APS and DTS methods can also be used online during the training phase to accelerate training. Specifically, applying the DTS method during the instruction tuning phase can reduce training time by approximately 15%. Since APS is a parameter-free method that requires no additional training or modification of model architecture, it is easily applicable to other MLLMs to enhance their performance on high-resolution document images. To demonstrate its effectiveness, we apply APS to various existing MLLM models (employing it before the image processing module of these models). As shown in Table <ref> of our experimental results, APS consistently improves document image tasks across a range of MLLMs with different resolutions. Additionally, results on TextVQA demonstrate that APS does not lead to performance degradation in tasks involving scene images. The redundant regions in document images are often simple, such as plain backgrounds, distinguishing these regions is relatively easy. This raises the question of whether DTS's effectiveness is due to this simplicity. Although this paper primarily explores VDU tasks, we are also interested in the generalizability of DTS. Therefore, we conducted some experiments on natural scene images. 
Specifically, we retrained a scene text image full-text recognition model using some open-source scene text data <cit.>, performing only the first pre-training phase. We then applied DTS to the trained model and tested it on the TextVQA dataset. Some visual results are shown in Fig. <ref>. It can be seen that DTS is effective in distinguishing nonessential regions even in the complex backgrounds of natural scene images. § LIMITATIONS While DocKylin exhibits outstanding performance in visual document understanding (VDU) tasks, it currently lacks versatility for more general tasks, such as question answering and captioning in natural scenes. This limitation primarily stems from our current image encoder, which has not undergone extensive image-text pair pre-training and also possesses insufficient parameter capacity. Future work could explore scaling up the Swin Transformer and conducting pre-training similar to that of the current CLIP ViT. Alternatively, a dual-encoder approach <cit.> could be employed, where a flexible, lightweight Swin Transformer is specifically used for extracting fine textual details within images, while a larger-capacity ViT model is utilized for capturing more general content. The combined image features from both encoders could then be integrated into the LLM. § CONCLUSIONS To address the issues of high input resolution and excessively long token sequences that current MLLMs face in VDU tasks, we propose the DocKylin model. It includes a parameter-free Adaptive Pixel Slimming (APS) preprocessing module that removes redundant regions from document images, effectively reducing their resolution. The model employs a flexible image encoding approach to adapt to the varying resolutions and aspect ratios of document image scenarios. Moreover, we propose a parameter-free token compression method, Dynamic Token Slimming (DTS), which clusters and selects essential tokens from a long visual sequence and employs aggregation to avoid potential information loss. Experiments demonstrate that DocKylin achieves leading performance across multiple VDU benchmarks. APS and DTS effectively shorten the visual token sequences, enhancing both computational efficiency and model performance. Additionally, we have validated the potential of applying these methods to other MLLMs or other scenarios beyond document images. § APPENDIX § BENCHMARK DATASETS AND EVALUATION METRICS DocVQA <cit.>, InfoVQA <cit.>, ChartQA <cit.>, WTQ <cit.>, and VisualMRC <cit.> are all datasets designed for visual document image question answering. DocVQA encompasses a variety of document types, featuring a rich diversity of questions and data types. InfoVQA focuses on infographics that contain an abundance of visual elements. ChartQA and WTQ specifically target question answering for chart data and tabular data, respectively. VisualMRC, on the other hand, places a greater emphasis on developing natural language understanding and generation capabilities. DeepForm <cit.>, FUNSD <cit.>, SROIE <cit.>, and POIE <cit.> are key information extraction datasets where the model is required to identify the value corresponding to a given key. Widely used text-centric benchmarks employ diverse metrics to better capture the distinct characteristics among various datasets. As depicted in Table <ref> within the main text, both DocVQA <cit.> and InfoVQA <cit.> utilize ANLS <cit.> as their metric. The results reported are evaluated on their respective official websites. 
ChartQA <cit.> is evaluated by relaxed accuracy <cit.>, while VisualMRC <cit.> relies on CIDEr <cit.>. DeepForm <cit.> is measured by F1 score, and WTQ <cit.> is evaluated by accuracy. § MORE DETAILS AND RESULTS OF ADAPTIVE PIXEL SLIMMING The specific computation method and parameter settings for Adaptive Pixel Slimming are detailed in Algorithm <ref>, which provides additional details beyond the main text: 1. The 2D gradient map is obtained by taking the maximum value from the gradient maps in the x and y directions. This approach aims to ensure the accuracy of redundancy detection while minimizing false positives. Using the maximum value rather than the average prevents the removal of regions with features in a single direction, such as table lines. 2. Noise removal is performed by setting gradient values below a threshold t_n to zero, thereby reducing interference from background texture noise. 3. Before identifying redundant regions based on the value threshold t_v, we normalize the gradient map to a fixed resolution. This normalization ensures that the threshold is robust to varying resolutions. In Fig. <ref>, we present some results of Adaptive Pixel Slimming, demonstrating its effectiveness in removing redundant regions from document images and reducing image resolution. The impact on natural scene images is minimal. § MORE DETAILS OF DTS § DECLARATION OF POTENTIAL HARMFUL CONTENT While our model performs well on many tasks, we acknowledge that, like all MLLMs, due to the breadth of its training data and the diversity of its content, our model may occasionally generate inaccurate, misleading, or harmful content. We hereby declare that, despite our efforts to mitigate this risk by selecting appropriate training data and adjusting model parameters, we cannot entirely eliminate the possibility of the MLLM producing harmful content. Therefore, when referring to this paper or using the model proposed herein, please be aware that the generated text may contain uncertainties and risks. We encourage readers to exercise caution and independently assess the accuracy and credibility of the content generated by the MLLM.
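For reference, the APS steps summarized in the appendix above (elementwise maximum of the x/y Sobel gradients, zeroing values below the noise threshold t_n, normalizing the gradient map to a fixed size before applying the value threshold t_v, and pruning low-gradient rows and columns) can be sketched as follows. This is our own minimal OpenCV/NumPy illustration; the threshold values and the normalization size are placeholders, not the paper's settings, and the grouping of contiguous flagged lines into regions is omitted for brevity.

```python
import cv2
import numpy as np

def adaptive_pixel_slimming(img: np.ndarray, t_n: float = 30.0, t_v: float = 3000.0,
                            norm_size: int = 1024) -> np.ndarray:
    """img: HxW or HxWx3 uint8 document image; returns the image with redundant
    rows and columns removed. All parameter values are illustrative placeholders."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
    gx = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))
    gy = np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3))
    grad = np.maximum(gx, gy)                        # step 1: max keeps single-direction features
    grad[grad < t_n] = 0                             # step 2: suppress background texture noise
    norm = cv2.resize(grad, (norm_size, norm_size))  # step 3: fixed size makes t_v robust

    red_rows = norm.sum(axis=1) < t_v                # redundant rows of the normalized map
    red_cols = norm.sum(axis=0) < t_v                # redundant columns of the normalized map

    # Map the flags back to the original resolution and drop the flagged lines.
    row_idx = np.arange(gray.shape[0]) * norm_size // gray.shape[0]
    col_idx = np.arange(gray.shape[1]) * norm_size // gray.shape[1]
    return img[~red_rows[row_idx]][:, ~red_cols[col_idx]]
```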
http://arxiv.org/abs/2406.17667v1
20240625155702
This Paper Had the Smartest Reviewers -- Flattery Detection Utilising an Audio-Textual Transformer-Based Approach
[ "Lukas Christ", "Shahin Amiriparian", "Friederike Hawighorst", "Ann-Kathrin Schill", "Angelo Boutalikakis", "Lorenz Graf-Vlachy", "Andreas König", "Björn W. Schuller" ]
cs.SD
[ "cs.SD", "cs.CL", "eess.AS" ]
§ ABSTRACT Flattery is an important aspect of human communication that facilitates social bonding, shapes perceptions, and influences behavior through strategic compliments and praise, leveraging the power of speech to build rapport effectively. Its automatic detection can thus enhance the naturalness of human-AI interactions. To meet this need, we present a novel audio-textual dataset comprising 20 hours of speech and train machine learning models for automatic flattery detection. In particular, we employ pretrained AST, Wav2Vec2, and Whisper models for the speech modality, and Whisper ASR transcripts combined with a RoBERTa text classifier for the textual modality. Subsequently, we build a multimodal classifier by combining text and audio representations. Evaluation on unseen test data demonstrates promising results, with Unweighted Average Recall scores reaching 82.46% in audio-only experiments, 85.97% in text-only experiments, and 87.16% using a multimodal approach. § INTRODUCTION Flattery is a pervasive social influencing behavior in which one individual (i.e., the flatterer) provides deliberate appreciation toward another individual or group (i.e., the flattered) <cit.>. Specifically, flattery accentuates the positive qualities of the flattered with the objective of interpersonal attractiveness and ultimately winning favour <cit.>. While flattery is rather difficult to detect for the flattered, it is more easily detected by bystanders <cit.>. As such, flattery is one of the most commonplace social influencing behaviors that many individuals employ, receive, or witness on a daily basis <cit.>. Research across various disciplines has extensively explored the potency of flattery, particularly within organizational contexts. Kumar and Beyerlein <cit.>, for instance, devised the Measure of Ingratiatory Behaviors in Organizational Settings (MIBOS) scale to unveil how employees adeptly employ flattery as an upward influencing technique towards immediate superiors <cit.>. Likewise, <cit.> demonstrated that job interviewees may employ flattery toward their job interviewers to enhance the interviewers' evaluations. In the broader management context, it was shown that journalists exhibit more positive coverage of a firm and its CEO when flattered by the CEO, and capital markets analysts also respond to CEO flattery by issuing more positive firm ratings <cit.>. Motivated by the relevance of flattery in interpersonal communication, we investigate its automatic detection via ML methods, utilizing speech data. Such a methodology may be applicable in human-computer interaction, communication training <cit.>, and computational psychometrics <cit.>. Automatic analysis of speech for understanding different facets of human communication is an active research field, with SER arguably being the most prominent task (<cit.>). Other examples include the detection of humour (<cit.>), sarcasm <cit.>, interpersonal adaptation <cit.>, and defensive communication <cit.>. Oftentimes, it is beneficial to also consider linguistic information, i.e., transcripts of the speech samples, when addressing such tasks <cit.>. To the best of our knowledge, our work is the first to consider the task of detecting flattery from speech. 
Our contributions are as follows: (i) we introduce a novel dataset for flattery detection from speech; (ii) we provide ML approaches for automatic flattery detection. § DATASET Considering the large body of literature on flattery in the management context, we analyze flattery as applied by business analysts. We obtain a dataset of 2159 dyads of analyst questions and CEO answers within earnings calls of large, publicly listed U. S. hard- and software as well as pharmaceutical firms from 2013 to 2018, which were transcribed and published by Seeking Alpha. Drawing from prior research <cit.>, we systematically develop a reliable, context-sensitive, content-analytical measure for detecting and assessing analyst flattery. A detailed account of our annotation guidelines is provided in <cit.>. A team of three expert human annotators label the dataset for instances of flattery on a span-level, flattery is identified in subsentential word sequences. An example is considered flattery if and only if all three annotators agree on it. In our machine learning experiments, the problem of flattery detection is framed as a sentence-level binary classification task here. The transcripts are split into sentences via the pySBD <cit.> tool. We then project the subsentential annotations to the sentence level by treating a sentence as a positive example of flattery if it contains a passage identified as flattery by the annotators. To build the audio dataset, we first reconstruct word-level timestamps from the speech data using the MFA <cit.> library. Based on these timestamps, the speech recordings are cut such that each resulting audio sample corresponds to one sentence. In total, 10,903 such samples uttered by 255 different speakers are obtained, of which 752 (6.90 %) are considered flattery. Overall, the dataset consists of almost 20 hours of speech, with an average sample length of 6.59 seconds. A speaker-independent splitting into a training, development, and test set is created, where the training partition comprises 70 % of the speakers (178/255) while 15 % of the speakers are assigned to the development and test partition each. We ensure that the fraction of positive examples as well as the mean duration of samples in each partition is comparable. A detailed overview of the resulting dataset is provided in <Ref>. § METHODS With both audio samples and their transcripts at our disposal, we conduct three types of experiments. First, we train text-based classifiers, for which we not only use the manual gold standard transcripts but also explore the outputs of various ASR systems (<Ref>). Second, we aim to predict flattery based on the speech samples only, utilizing a range of pretrained audio foundation models (<Ref>). Third, we seek to combine the merits of text-based and audio-based approaches in one model (<Ref>). Considering the class imbalance (cf. <Ref>), we choose UAR, also known as balanced accuracy as our evaluation metric in all experiments. All models are taken from the hub. §.§ Text-based Classification We build a text classifier utilizing a pretrained RoBERTa <cit.> model in its base variant, a 12-layer Transformer encoder with about 110M parameters. We add a classification head after the final layer's encoding and fine-tune all weights of the model utilizing the training partition. The training process runs for at most 7 epochs but is aborted earlier if no improvement on the development set is observed for two epochs. 
The learning rate is set to 10^-5 after initial experiments with the values {10^-4, 5×10^-5, 10^-5, 5×10^-6}. As the loss function, binary cross-entropy with positive samples weighted inversely to their frequency is utilized. We repeat the training process five times with different fixed random seeds. Since in practice manual “gold standard” transcripts are typically not available, we also explore automatically generated transcripts obtained from different ASR models. We consider 6 different pretrained models from the Whisper <cit.> family, ranging from the tiny variant with 39M parameters to large models comprising 1.5B parameters. We generate automatic transcripts using the models without any further adaptation. For each of the 6 ASR systems' outputs, we train instances of the RoBERTa classifier described above, applying the same procedure and hyperparameters as for the “gold standard” texts. Moreover, we compute the WER on the dataset for every ASR system. §.§ Audio-based Classification We consider three different types of pretrained audio Transformers, namely variants of AST <cit.>, W2V <cit.>, and Whisper <cit.>. AST <cit.> is a Transformer model with 12 layers that takes spectrograms as input. Specifically, we utilize AST trained on the Speech Commands V2 dataset <cit.>, as this is the only speech-related model provided in <cit.>. W2V <cit.> is pretrained for reconstructing masked parts of speech signals. We employ both the base (12 Transformer layers) and large (24 Transformer layers) variant of W2V which were both pretrained and, subsequently, finetuned on the Librispeech <cit.> dataset containing 960 hours of speech. Moreover, as the task of SER is arguably related to our problem of flattery detection, we experiment with a W2V model finetuned on MSP-Podcast <cit.> for SER <cit.>, denoted as W2V-MSP. As for Whisper <cit.>, we make use of the base, medium, and large pretrained models. Our choice of model families is motivated by the finding of <cit.> that models in the fashion of W2V acquire linguistic information when fine-tuned. Hence, we opt for a selection of W2V models fine-tuned for both ASR and SER, variants of Whisper, as a more recent ASR-based approach and AST that is solely trained on spectrograms and should thus not be equipped with linguistic knowledge. This, in combination with the text-based classification, allows us to reason whether flattery can mainly be recognized via prosody, text or both. §.§.§ Layer-Wise Encodings It has been shown that different layers of pretrained Transformer models for speech encode different acoustic and linguistic properties of an input signal <cit.>. Hence, for each model mentioned above, we investigate the aptitude of each of its layers for the flattery detection task. First, for every model, we extract representations of the speech signal from each layer. For AST, we take the layer's embedding of the special token as the representation. For Whisper and W2V models, representations are obtained by averaging over all of the respective layer's token representations. We then determine the most promising layer per model by training linear binary classifiers on each layer's representations. Given the large amount of trials, we choose SVM classifiers as a lightweight and deterministic option here. We optimize the regularization constant C and the weighting of minority class examples in every experiment. 
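A minimal sketch of this first-round, layer-wise probing (our own illustration, not the authors' code): it assumes the per-layer utterance representations have already been extracted and stored as arrays, uses balanced accuracy as the UAR metric, and the C grid shown is a placeholder.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import balanced_accuracy_score  # identical to UAR for this task

def probe_layers(train_feats, y_train, dev_feats, y_dev):
    """train_feats/dev_feats: dict mapping layer index -> (n_samples, dim) array."""
    results = {}
    for layer, X_tr in train_feats.items():
        best_uar = 0.0
        for C in (1e-3, 1e-2, 1e-1, 1.0):                 # regularization constant grid
            for cw in (None, "balanced"):                 # minority-class weighting
                clf = LinearSVC(C=C, class_weight=cw, max_iter=20000)
                clf.fit(X_tr, y_train)
                uar = balanced_accuracy_score(y_dev, clf.predict(dev_feats[layer]))
                best_uar = max(best_uar, uar)
        results[layer] = best_uar
    return results  # e.g. max(results, key=results.get) gives the most promising layer
```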
In the second row of experiments, for every model, we only consider the layer with the best results in the first step and, in addition, the final layer. For both layers' representations, more extensive SVM hyperparameter searches are conducted that also optimize the kernel type (RBF, linear, Sigmoid, polynomial), the kernel coefficient γ, and, if applicable, the degree of the kernel function. §.§.§ Audio Model Tuning From each model family (AST, W2V, Whisper), we first select the variant performing best in the initial SVM experiments (cf. <Ref>) and then finetune the pretrained model by training (i) the full model, and (ii) a version pruned to the layer that performed best among all layers in the SVM experiments. To do so, we add a linear classification head on top of each pretrained model, similar to the fine-tuning of RoBERTa (cf. <Ref>). Analogously to the feature extraction process, we feed the final layers' encoding for AST and the mean over the final layers' token representations for W2V and Whisper into the classification head. We determine a suitable learning rate for each model by training its pruned version for one epoch with different learning rates ({10^-3, 10^-4, 10^-5, 10^-6}) and three fixed random seeds. Binary Cross Entropy is employed as the loss function. We apply random oversampling to tackle the imbalanced label distribution here. Experiments with a weighted loss function did not yield promising results. The final models are trained with five different fixed random seeds. §.§ Text + Audio Fusion The fusion of the speech and text modality makes use of the models trained on speech only and text only, respectively. We consider the text-based models trained on i) the gold standard transcripts, ii) the weakest ASR system's (Whisper-tiny) outputs, and iii) the best ASR system's (Whisper-large) outputs (cf. <Ref>). For the sake of uniformity, we utilize the best fine-tuned audio model, W2V-MSP (cf. <Ref>) for the speech modality. We apply a weighted late fusion on the respective predictions, where the weights for both models are chosen according to their respective performance on the development set. Furthermore, we experiment with early fusion. Specifically, for each pair of audio and text models, we extract their final layers' representations and concatenate them. Then, SVM classifiers are trained on these features, analogously to the process described in <Ref>. Both the late and early fusion methods are deterministic, however, the models to be fused are all trained for the same five fixed random seeds. Thus, we can report means and standard deviations across these seeds by always fusing the models trained with the same seed. § RESULTS In the following, we report the results of the text-based (<Ref>), speech-based (<Ref>) and the fusion of text and speech (<Ref>). §.§ Text-based Classification Results <Ref> presents the results of training the RoBERTa classifier on different transcripts. In addition, the WER of the different ASR models are given. It can be observed that the larger Whisper ASR models perform better than the smaller ones regarding their WER, with the tiny model producing texts with a WER of 26.60 % while the WER of the medium, and large variants are around 15.00 %. Regarding the flattery classification, the best average result of 82.67 % UAR on the development and 85.97 % UAR on the test set is achieved when training with the gold standard transcripts. All results prove to be stable across seeds, as no standard deviation exceeds 2 % on the test set. 
While the gold standard transcriptions model outperforms the best ASR transcriptions model, Whisper-large, by more than 2 percentage points on the test data, all ASR transcript-based models still achieve over 80 % mean UAR on the test set. This indicates that for the task of textual flattery detection, even relatively high WER such as 26.60 % for Whisper-tiny are not too detrimental to the text classifier's performance. One explanation for this is that high WER in this particular data set are mainly due to highly domain-specific terms that carry no information related to flattery and are thus less relevant for the classification. Nevertheless, there is a connection between WER and the corresponding classification results, with Whisper-tiny being responsible for the worst result on the development set (78.79 % UAR) while the best ASR-based classification result on the development set (81.68 % UAR) is achieved with the transcripts of Whisper-large that have the lowest WER among all the ASR models (14.68 %). §.§ Audio-based Classification Results The results for the audio-based flattery detection with both SVM and finetuning are given in <Ref>. The AST experiments yield considerably worse results than those based on the different W2V and Whisper variants. While most results of W2V and Whisper exceed 70 % UAR, all AST-based experiments only slightly surpass the chance level of 50 % UAR. Considering the UAR values of over 80 % observed in the text-based experiments, we assume that this performance gap is partially due to W2V and Whisper encoding linguistic information, which is not the case for AST. Consequently, the low UAR values for AST suggest that flattery can rarely be detected via prosodic information only. Another aspect that may contribute to AST's rather poor performance is that the SpeechCommand data it is initially trained on differs from our data in that all its speech samples are only one second long. Lastly, as our data is obtained from calls, the audio quality may be impaired, suppressing prosodic attributes of the speech samples that might prove beneficial for audio-based classification. The layer-wise results confirm that different layers of pretrained models are of different suitability for the flattery detection task. This is particularly prominent for the W2V variants, where layer 7 clearly outperforms the final layer (12) in the base model and layer 11 leads to a considerably better result (75.60 %) on the test set than layer 24 (62.63 %) in the large model. As for the Whisper models, the best layers are always close, but never identical, to the ultimate layer. All W2V and Whisper variants yield results better than 75 % UAR on the development set in their best layer in the SVM results, with W2V-MSP achieving the best UAR values overall on both the development (79.71 %) and test (82.46 %) set. Finetuning generally does not improve upon the SVM results. The standard deviation of 6.44 for Whisper-medium, however, shows that, depending on the random seed, results over 80 % UAR on the test set are possible. Overall, the best audio-based classifiers perform slightly worse than the best text-based classifiers that achieve over 83 % mean UAR on the test set, cf. <Ref>. §.§ Text + Audio Fusion Results We report the multimodal results in <Ref>. It is evident that for all transcripts considered, a combination with the speech modality improves upon the text-only approach. 
Hence, it can be assumed that our speech-based models, though arguably also making use of linguistic information, encode information that complements the text-only representations to a degree. A comparison of late and early fusion shows that in all cases, the early fusion approach outperforms the late fusion method. The comparably weak performance of the latter may be attributed to the poor calibration we observe in our fine-tuned Transformers' predictions. Among the different transcripts, the largest improvement over the purely textual model can be observed for those generated with Whisper-tiny, the worst performing ASR system (cf. <Ref>). Specifically, its mean early fusion UAR value on the development set (81.85 %) exceeds its mean text-only UAR result (78.79 %) by 3.88 %. The relative improvement is lower for the transcripts obtained via Whisper-large and the gold standard, namely 2.38 % and 2.58 %, respectively. This suggests that the audio modality can complement text-only approaches to flattery detection, especially when the ASR system's WER is relatively high. A closer manual inspection of data points for which audio and text classifiers disagree reveals another class of instances that benefit from speech-based classification: certain phrases, such as variants of Great! and Good morning, are sometimes labeled as flattery and sometimes not, depending on their context. Thus, as our simple text-based models do not consider the surrounding sentences, audio-based classifiers prove helpful in correctly predicting flattery in such utterances. §.§ Generalization to Female Speakers Given that women are considerably underrepresented in our dataset (cf. <Ref>), we investigate our models' generalizability for female speakers. In <Ref>, the results of the best finetuned Transformer models are broken down into female and male speakers. While for both the text and the audio approach, the UAR values for women are lower than those for men, they are typically close. The largest gap is observed for the text-based predictions on the development set, with the UAR for females (75.99) being about 7 percentage points lower than that for males (83.27) on average. It is also evident that the results for the comparatively few female data points tend to vary more depending on the random seed. An explanation for the rather small gap in performance for female and male speakers may be that, at least in the business context, there may not be many gender-based differences when it comes to using flattering phrases. As the W2V models arguably also draw heavily on linguistic information, this reasoning would apply to them as well. § DISCUSSION We observe that the textual modality, what is said, is crucial for predicting flattery. Second, the speech signal, while yielding less promising results on its own, still encodes valuable information that complements and thus improves text-based classification – especially in cases where the automatic transcription of utterances performs comparably poorly. Besides, the speech-based experiments with AST, Whisper, and W2V again demonstrate that fine-tuned ASR-based audio foundation models encode both linguistic and prosodic information. Potential limitations to the generalisability of our models are induced by the nature of the data set. As the data is sourced from business analyst calls in US companies, it is arguably highly context-specific and not representative of the general population with respect to demographic aspects such as educational background or age. 
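For concreteness, the decision-level (late) fusion discussed above can be sketched as follows. This is our own minimal illustration, not the authors' code: it assumes per-sample flattery probabilities from the text and audio classifiers are available as arrays, and weighting the two models by their development-set UAR is one simple way to realize "weights chosen according to performance on the development set".

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score  # UAR

def weighted_late_fusion(p_text, p_audio, p_text_dev, p_audio_dev, y_dev, thr=0.5):
    """p_*: arrays of positive-class (flattery) probabilities; y_dev: dev labels."""
    uar_text = balanced_accuracy_score(y_dev, p_text_dev >= thr)
    uar_audio = balanced_accuracy_score(y_dev, p_audio_dev >= thr)
    w_text = uar_text / (uar_text + uar_audio)      # modality weights from dev-set UAR
    p_fused = w_text * p_text + (1.0 - w_text) * p_audio
    return (p_fused >= thr).astype(int)             # fused binary flattery predictions
```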
§ CONCLUSION We introduced the problem of flattery detection from speech alongside a novel data set. Furthermore, we trained an extensive set of ML approaches based on speech, text, and the combination of both modalities, thus providing insights into the nature of this novel task. Future work may include extending the database to cover broader demographics. Regarding the methodology, considering larger textual units in order to capture the sentences' contexts better is a promising avenue. Moreover, more refined fusion methods than those utilized in <Ref> can be devised. While we cannot publish our raw data due to copyright restrictions, we make our code, extracted features, and the best-performing models available[<https://github.com/lc0197/flattery_from_speech>]. § ACKNOWLEDGEMENTS This work was supported by MDSI – Munich Data Science Institute as well as MCML – Munich Center of Machine Learning. Björn W. Schuller is also with the Konrad Zuse School of Excellence in Reliable AI (relAI), Munich, Germany.
http://arxiv.org/abs/2406.18649v1
20240626180002
Dynamically localized but deconfined excitations in strained classical spin ice
[ "Zhongling Lu", "Robin Schäfer", "Jonathan N. Hallén", "Chris R. Laumann" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.stat-mech" ]
Department of Physics, Boston University, Boston, Massachusetts 02215, USA; Zhiyuan College, Shanghai Jiao Tong University, Shanghai 200240, China; Department of Physics, Boston University, Boston, Massachusetts 02215, USA; Department of Physics, Boston University, Boston, Massachusetts 02215, USA; Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA; Department of Physics, Boston University, Boston, Massachusetts 02215, USA; Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA; Max-Planck-Institut für Physik komplexer Systeme, 01187 Dresden, Germany § ABSTRACT We study classical spin ice under uniaxial strain along the [111] crystallographic axis. Remarkably, such strain preserves the extensive ice degeneracy and the corresponding classical Coulomb phase. The emergent monopole excitations remain thermodynamically deconfined exactly as in the isotropic case. However, their motion under local heat bath dynamics depends qualitatively on the sign of the strain. In the low-temperature limit for negative strain, the monopoles diffuse, while for positive strain, they localize. Introducing additional ring exchange dynamics into the ice background transforms the localized monopoles into sub-dimensional excitations whose motion is restricted to diffusion in the (111)-plane. The phenomena we identify are experimentally accessible in rare-earth pyrochlores under uniaxial pressure as well as in tripod kagome materials. The diffusive versus localized nature of the monopoles manifests in characteristic magnetic noise spectra, which we compute. § INTRODUCTION The nearest-neighbor Ising model on the pyrochlore lattice, called spin ice, is the quintessential frustrated magnetic system <cit.>. Its unusual properties include extensive zero-point entropy <cit.> and fractionalized excitations in the form of thermodynamically deconfined magnetic monopoles <cit.>. This work focuses on spin ice under uniaxial [111]-strain, see <ref>. Such strain preserves the extensive entropy corresponding to the thermodynamic Coulomb phase. Nonetheless, the strain leads to dramatic changes in the dynamics of the monopoles. In the minimal, "standard model" of spin ice dynamics, single spins flip stochastically according to a Markovian heat-bath <cit.>. This leads to monopole diffusion in the isotropic case. Remarkably, the introduction of [111]-strain leads to qualitatively distinct monopole dynamics depending on the sign of the strain. If the strain increases the interactions between spins separated in the [111]-direction, the monopoles completely localize. In the opposite scenario, monopoles remain diffusive with a weak anisotropic correction to the diffusion tensor. Both the dynamical localization and the anisotropic diffusion manifest in macroscopic dynamical properties such as the magnetic noise spectrum <cit.>, which has been of much recent experimental interest. The zero-temperature thermodynamics are completely insensitive to (moderate amounts of) strain; the monopoles are thus excitations that are thermodynamically deconfined but dynamically localized. If one additionally includes stochastic ring exchanges of the ice background on top of the standard model dynamics, the localized monopoles become diffusive in the (111)-plane, leading to sub-dimensional diffusion. 
In this sense, they are classical "sub-dimensional excitations" <cit.>. Our model is experimentally motivated, and moderate values of strain can be induced by pressure along the [111] crystallographic axis, see <ref>. Experiments on the classical spin ice compounds Dy_2Ti_2O_7 and Ho_2Ti_2O_7 have been carried out with uniaxial pressure applied along the [100], [110], and [111] axes <cit.>. These experiments probed static quantities, which are insensitive to the monopole localization we predict. Atomic substitution is an alternate route to exploring spin ice with the same reduced symmetry as the [111]-strained pyrochlore. By systematically substituting atoms on a particular sublattice, one obtains larger effective strain parameters, see <ref>. This could even flip the sign of the interactions between different species relative to the usual spin ice. A special case of such materials is provided by kagome Ising magnets such as Dy_3Mg_2Sb_2O_14 <cit.>. These realize the extreme case where one sublattice spin is non-magnetic, leaving effectively decoupled kagome layers. Motivated by this possibility, we compute the full phase diagram of the symmetry-reduced model, shown in <ref>. Using exact zero-temperature analysis combined with a lattice-scale duality transformation, we identify two distinct Coulomb phases and two distinct symmetry-breaking ordered phases. We further confirm the finite-temperature extent of these phases using Monte Carlo techniques. Thermodynamic properties of spin ice under uniaxial strain have been investigated theoretically in the past <cit.>. However, the main focus of previous works was the effect of pressure applied along the [100] direction, where any amount of strain breaks the extensive degeneracy of the ice states, resulting in qualitatively different behavior compared to the [111] case we consider. The remainder of the paper is structured as follows: <ref> introduces the model and the exact ground states in the four thermodynamic phases, as well as the relevant excitations. <ref> then explores the finite-temperature thermodynamics of our model using Monte Carlo simulations, and <ref> is devoted to the low-temperature equilibrium dynamics of the monopoles. Possible experimental realizations, including estimates of realistic values of the distortion, are addressed in <ref>. § GROUND STATES AND EXCITATIONS Our starting point is the only symmetry-allowed classical nearest-neighbor Hamiltonian describing spin ice under strain: H_nn = J(1+δ) ∑_⟨i,j⟩, i∈𝒜, j∉𝒜 σ_i σ_j + J ∑_⟨i,j⟩, i,j∉𝒜 σ_i σ_j. The first sum runs over pairs with one spin on sublattice 𝒜 – the sublattice on which spins align with the [111] direction – and the second sum runs over pairs with no spin on sublattice 𝒜. We denote the other three sublattices collectively as 𝒦, as these spins form kagome planes. We take J>0 as the interaction strength. In the σ=±1 pseudospin Ising convention used in <ref>, J>0 corresponds to anti-ferromagnetic interactions. In the vector spin language used to describe spin ice materials like Dy_2Ti_2O_7 and Ho_2Ti_2O_7, which we use pictorially throughout the manuscript, the spin-spin interaction is, in fact, ferromagnetic <cit.>. δ is a dimensionless scalar parameter that modulates the interaction strength and can take both positive and negative values. A non-zero δ reduces the space group of the lattice from the usual Fd3m of the pyrochlore lattice to R3m <cit.>. 
As in conventional spin ice, we can re-express H_nn up to a constant term as a sum over all tetrahedra, H_S = J/2 ∑_t ((1+δ)σ_t,1+σ_t,2+σ_t,3+σ_t,4)^2, where the spins σ_t,1 are members of 𝒜. Much of the model can be understood by examining the 2^4 = 16 spin configurations on a single tetrahedron, see <ref>. Six of these have two spins pointing in and two spins pointing out of the tetrahedron; we refer to these states as Type 0 configurations. Pointing "in" and "out" corresponds to σ = +1 and σ=-1 in the notation of <ref>. Any configuration where every tetrahedron follows this "two-in-two-out" rule, also known as the "ice rule" <cit.>, is a ground state of isotropic nearest-neighbor spin ice. As pointed out by Anderson <cit.>, there is an extensive entropy of such ground states, originally estimated by Pauling <cit.> in the context of water ice <cit.>. Flipping a single spin out of a two-in-two-out ground state creates a pair of excitations that take the form of three-in-one-out and one-in-three-out tetrahedra. These are the famous emergent magnetic monopoles <cit.>, which we call Type 1 configurations. Once created, monopoles can move independently through further spin flips and only annihilate if they meet a monopole of the opposite sign. Eight of the sixteen spin configurations on a tetrahedron correspond to a single monopole. The remaining two "double monopole" configurations have all four spins pointing in the same direction, and we call these Type 2 configurations. While type 0 and type 2 configurations remain degenerate, the [111]-strain lifts the degeneracy of the eight type 1 configurations. The single monopoles are type 1A with energy J(2 + δ)^2/2 if the minority spin is not on 𝒜, and type 1B with energy J(2 - δ)^2/2 if the minority spin is on 𝒜. The minority spin is the out spin in a three-in-one-out tetrahedron and the in spin in a three-out-one-in tetrahedron. The two type 1B configurations with minority spin on sublattice 𝒜 are energetically "cheaper" for δ > 0, and monopoles with the minority spin on any of the other sublattices are favored for δ < 0. Monopoles can change type when they move, but this carries an associated energy cost. Dynamical localization of monopoles occurs when the temperature is low compared to the cost of converting a type 1B to a type 1A monopole, as further explained in <ref>. Duality. Before discussing the different ground states, we focus on the point δ=-1, which reveals a duality of the Hamiltonian H_S. This can be seen by defining η = 1 + δ and rewriting the Hamiltonian as H_S(η) = J/2 ∑_t (ησ_t,1+σ_t,2+σ_t,3+σ_t,4)^2. At η = 0 (δ = -1), spins on sublattice 𝒜 are decoupled from their neighbors such that the system is decomposed into a set of kagome planes and a set of completely free spins. This phenomenon can be observed in so-called "tripod kagome" materials such as Dy_3Mg_2Sb_2O_14 <cit.>. We can define a duality transformation, H_S(η) → H_S(-η), by σ_i → -σ_i for i ∈ 𝒜 and σ_j → σ_j for j ∈ 𝒦, which is an inversion of all spins on sublattice 𝒜. Utilizing this duality reduces our focus to only half of the phase diagram at δ ≥ -1. The model is thus symmetric around η=0, as reflected in the phase diagram, <ref>. We shall now explain the four phases from the right (positive δ) to the left (negative δ). Ferromagnetic regime. For δ > 1, the energy of the type 1B monopoles E_1B = J(2-δ)^2/2 is lower than the energy of any other group. Ground states are formed by a complete tiling of type 1B monopoles on every tetrahedron with all minority spins on sublattice 𝒜. 
There are only two configurations that minimize the energy on all tetrahedra simultaneously, corresponding to polarizing all spins along or against the [111] direction. The system undergoes a phase transition into this spontaneous symmetry-broken ferromagnetic state, where the transition temperature T_c grows with increasing δ. Phrased differently, this is a pressure-induced crystallization of a cooperative paramagnet <cit.>. The ferromagnetic phase persists up to arbitrarily large δ. The lowest-energy excitations in this regime are two-in-two-out (type 0) states. A single spin flip out of the ground state can create a pair of such defects, but they are not mobile – there are no deconfined excitations in the ferromagnetic phase. Spin ice regime. For |δ|<1, the lowest energy is E_0=Jδ^2/2 and the ground states are given by the conventional spin ice configurations, realizing a finite residual entropy at T=0 <cit.>. Either the type 1A or type 1B monopoles are the lowest-energy excitations, depending on whether δ is smaller or greater than zero. At δ=0, conventional spin ice, the two types become degenerate, and monopoles diffuse isotropically. High symmetry point. At the point δ=-1, spins on sublattice are completely decoupled from all other spins. At this special point, the spins are free to take any value they want, and the remaining spins form decoupled antiferromagnetic kagome planes. In terms of the single tetrahedron, E_0=E_1A = J/2, making this the most degenerate point, with 12 of the 16 configurations sharing the same energy. Inverted spin ice regime. For -3 < δ < -1, the energy is minimized by placing a single type 1A monopole on each tetrahedron. Type 1A configurations have a majority spin on sublattice , resulting in much more freedom in how to form ground states compared to the type 1B tiled ferromagnetic phase. In fact, there is a direct one-to-one mapping between the monopole-favoring ground states for -3 < δ < -1 and the two-in-two-out ground states for -1 < δ < 1 due to the duality around δ=-1. Any ice configuration can be mapped uniquely by the duality transformation, <ref>, to a ground state for -3 < δ < -1, and we dub this phase inverted spin ice. All-in-all-out regime. For δ < -3, the double monopole (type 2) energy is the lowest of the four groups and the system orders into an all-in-all-out state. There are two such states with a broken tetrahedral sublattice symmetry, where all spins pointing either from down to up tetrahedra or from up to down tetrahedra. In the language of Ising spin, this refers to all spins being 1 or -1. This ordered phase persists for arbitrarily negative δ, and again leads to a symmetry-breaking phase transition at a critical temperature T_c that grows with decreasing δ<-3. As before, the duality transformation maps the all-in-all-out states to the two ferromagnetically ordered ground states at δ > 1. § THERMODYNAMICS We now consider the thermodynamics of our model using classical Monte Carlo simulations. Similar to the case of isotropic spin ice, we find a cross-over from the disordered Coulomb phases to the paramagnet at finite temperature. In contrast, for the symmetry-breaking phases, we identify a critical temperature marking the transition from the paramagnet into the ordered regime. We only conduct simulations at δ≥ -1 (η≥ 0) due to the duality in <ref>. Our results are summarized in <ref>, which contains the specific heat (a-c), monopole density (d-f), and entropy (g,h) for the different regimes discussed previously. 
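As a quick numerical cross-check of these regimes and of the boundaries δ = 1, -1, -3, it suffices to compare the four single-tetrahedron energies as functions of δ. In the sketch below (J = 1), the type-2 energy J(4+δ)^2/2 follows from the same single-tetrahedron expression as the other three:

import numpy as np

def ground_state_group(delta, J=1.0):
    energies = {
        "type 0  -> spin ice":          0.5 * J * delta**2,
        "type 1A -> inverted spin ice": 0.5 * J * (2.0 + delta)**2,
        "type 1B -> ferromagnet":       0.5 * J * (2.0 - delta)**2,
        "type 2  -> all-in-all-out":    0.5 * J * (4.0 + delta)**2,
    }
    return min(energies, key=energies.get)

for delta in (-4.0, -2.0, -0.5, 0.5, 2.0):
    print(f"delta = {delta:+.1f}: lowest-energy tetrahedra are {ground_state_group(delta)}")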
Panel (i) shows the magnetic susceptibility measured in the [111] direction. The first column (a,d,g) covers the parameter regime -1≤δ≤ 0 – the region of the spin ice phase where type 1A monopoles are the lowest-energy excitations. Similarly, the second column (b,e,h) covers 0≤δ≤ 1 – the spin ice phase where type 1B monopoles are lowest in energy. Lastly, the third column (c,f,i) focuses on the ferromagnetic phase, with clear signs of a phase transition occurring at T_c. Specific heat. The specific heat, C_V, contains direct information on the distribution of tetrahedra between the four groups in <ref>. Let us start by examining the spin ice phase shown in panels (a,b) of <ref>. At infinite temperature, all configurations are equally probable. Upon cooling, higher energy configurations are frozen out. At the level of individual tetrahedra, this means that double monopoles are avoided first, followed by the two types of monopoles. Freezing out these configurations induces a Schottky anomaly, which is visible in the specific heat. When type 1A and 1B monopoles are well separated in energy, each group gives rise to an individual Schottky peak, and as |δ| increases, we observe the single specific heat peak of δ=0 splitting into two distinct peaks. Note that the entropy released by freezing out these excitations is constant and the heights of the peaks are related to this entropy. As there are more type 1A than type 1B monopoles, the freezing out of type 1A monopoles leads to a larger peak in the specific heat. As E_1A<E_1B for δ < 0, the low-temperature peak in the specific heat is higher in this regime, whereas the opposite is true for δ > 0. At δ=-1 (δ=1), type 1A (1B) monopoles are degenerate with the type 0 configurations, and one of the two monopole types is not frozen out, leaving a single peak in the specific heat. When δ > 1 we enter the ferromagnetic ground state at low temperatures and observe a sharp peak in the specific heat at the critical temperature T_c, consistent with a conventional symmetry-breaking phase transition. The critical temperature as a function of δ can be extracted from the position of the specific heat peak. Monopole density. The single monopole number density, n, is a clear indicator of the phase realized at a certain temperature and distortion δ. Here, we only show the total number of single monopoles, combining types 1A and 1B but not counting the double monopoles. At high temperatures, within the paramagnetic phase, all states are equally likely, yielding n=8/16=1/2. In the spin ice phase -1 < δ < 1, shown in panels (d,e) of <ref>, the density drops to zero as ground states are ice configurations without any monopoles. At the phase boundaries, δ=± 1, type 0 and type 1B/1A configurations are degenerate, and the density can be well understood by simple state counting. Remarkably, for δ=-1, the darkest red curve in (d), n, remains constant at 1/2. This is due to the degeneracy of the six type 1A and the six type 0 configurations and, in addition, the degeneracy of type 2 and the type 1B configurations. The freezing out of excitations, therefore, leaves no trace in n despite clearly manifesting in the specific heat and entropy. At δ =1, the lightest blue curve in (f), n drops to one quarter as the two type 1B and the six type 0 configurations are degenerate: n=2/(2+6)=1/4. In the ferromagnetic regime, for δ >1, the density approaches n=1. 
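These limiting densities follow from state counting and can be reproduced with an independent-tetrahedron estimate in which each tetrahedron is Boltzmann-weighted on its own. The sketch below (k_B = 1) neglects correlations between tetrahedra and is therefore only qualitative for C_V, but it recovers the Schottky structure and the limiting values n = 1/2 at δ = -1 and n = 1/4 at δ = 1:

import numpy as np
from itertools import product

def tetra_levels(delta, J=1.0):
    levels = []
    for s in product((1, -1), repeat=4):
        e = 0.5 * J * ((1.0 + delta) * s[0] + s[1] + s[2] + s[3]) ** 2
        n_in = sum(x > 0 for x in s)
        levels.append((e, 1.0 if n_in in (1, 3) else 0.0))   # flag single monopoles
    return np.array(levels).T                                # energies, monopole flags

def single_tetra_thermo(delta, T):
    e, mono = tetra_levels(delta)
    w = np.exp(-(e[:, None] - e.min()) / T[None, :])         # Boltzmann weights
    Z = w.sum(axis=0)
    e_avg = (e[:, None] * w).sum(axis=0) / Z
    e2_avg = (e[:, None] ** 2 * w).sum(axis=0) / Z
    c_v = (e2_avg - e_avg**2) / T**2                         # per tetrahedron; shows 1 or 2 Schottky peaks
    n = (mono[:, None] * w).sum(axis=0) / Z                  # single-monopole density
    return c_v, n

T = np.logspace(-1.5, 1.0, 300)
for delta in (0.0, 0.6, 1.0, -1.0):
    c_v, n = single_tetra_thermo(delta, T)
    print(f"delta = {delta:+.1f}: n(T = {T[0]:.2f}) = {n[0]:.3f}, n(T = {T[-1]:.0f}) = {n[-1]:.3f}")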
The monopole density initially drops as the type 1A monopoles are frozen out and then begins to increase at temperatures slightly above T_c as the system arranges itself into a type 1B monopole crystal. Residual entropy. Spin ice materials are famously known for their residual entropy. Panels (g,h) in <ref> show the entropy obtained by integrating the specific heat from the high-temperature limit according to S(T) = S_∞ - ∫_T^∞ dT' C_V(T')/T' , where C_V(T) is the specific heat and S_∞ = N log(2) is the infinite temperature entropy for a system of Ising spins. The number of spin ice ground states grows exponentially with the volume, and the system is said to have an extensively degenerate set of ground states for -3≤δ≤ 1. As pointed out in <ref>, the total number is known to be accurately estimated using a method originally devised by Linus Pauling to compute the residual zero-temperature entropy of water ice <cit.>. A generalized Pauling entropy estimate can be structured as follows: For every tetrahedron, there are sixteen possible spin configurations, of which k minimize the energy on that tetrahedron. If we treat the state of each tetrahedron as an independent constraint, a system with N spins (N/2 tetrahedra) has approximately 2^N (k/16 )^N/2 = (k / 4 )^N / 2 ground states. This results in a residual entropy of S = k_B N /2ln(k/4) . For |δ| < 1, k=6 and the residual entropy per spin is s = S / (k_bN) = 1/2ln(3 / 2 ). This is also true for -3 < δ < -1 since each ice ground state can be mapped to a valid monopole configuration using the transformation in <ref>. At δ=1, the six two-in-two-out configurations have the same energy as the two type 1B monopole configurations. Similarly, at δ=-3, the six type 1A monopole configurations have the same energy as the all-in and all-out (type 2) configurations. Thus, for these two points k=8 and s = k_B/2ln(2 ) as illustrated by the lightest blue curve in (h). The residual entropy is even higher at the duality point, δ=-1, with k=12 as the six type 0 and six type 1A configurations are all degenerate. This gives the estimate s = k_B/2ln(3 ), consistent with the Monte Carlo result highlighted in dark red in (g). The ordered phases at δ < -3 and δ > 1 have s = 0. Here, it is sufficient to pick the direction of a single spin to fix all other spins in the system. The constraints from different tetrahedra can no longer be treated as independent, and the Pauling estimate does not apply. Magnetic Susceptibility. The susceptibility becomes anisotropic since the distortion in our model breaks the lattice symmetry. Here, we only show the magnetic susceptibility in the [111] direction for δ > 1, panel (i) of <ref>. It develops sharp peaks at the same temperatures as the specific heat and is a clear sign of a symmetry-breaking phase transition. An analysis of the critical exponents of the susceptibility and specific heat close to T_c is provided in app:ordering. § DYNAMICS The preservation of the extensive ice degeneracy for a broad range of strain in our model, in contrast with previous studies <cit.>, makes it possible to apply the standard model of spin ice dynamics <cit.>. There, the dynamics of the low-energy excitations is modeled under the assumption that the system evolves through random, independent single-spin flips occurring at a characteristic time scale τ_0, app:numericalmethods for details. More precisely, we perform heat-bath Monte Carlo simulations with uniform single-spin flip attempt rates. 
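For reference, a compact numerical rendering of the two entropy estimates discussed above (the generalized Pauling count and the thermodynamic integration of a sampled specific-heat curve) could look as follows; k_B = 1, and the integration simply neglects the contribution from above the highest sampled temperature, as in the Monte Carlo analysis:

import numpy as np

def pauling_residual_entropy(k):
    """Generalized Pauling estimate s = (1/2) ln(k/4) per spin for k degenerate tetrahedron states."""
    return 0.5 * np.log(k / 4.0)

for k, regime in [(6, "|delta| < 1 and -3 < delta < -1"),
                  (8, "delta = 1 or delta = -3"),
                  (12, "delta = -1")]:
    print(f"k = {k:2d}  ({regime}):  s = {pauling_residual_entropy(k):.4f}")

def entropy_from_specific_heat(T, c_v, s_inf=np.log(2.0)):
    """S(T) per spin: S_inf minus the trapezoidal integral of C_V/T' between T and max(T)."""
    order = np.argsort(T)
    T, c_v = np.asarray(T)[order], np.asarray(c_v)[order]
    g = c_v / T
    segments = 0.5 * (g[1:] + g[:-1]) * np.diff(T)
    tail = np.concatenate((np.cumsum(segments[::-1])[::-1], [0.0]))
    return s_inf - tail        # array of S(T_i), ordered by increasing T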
This allows us to focus solely on the impact of energetic constraints imposed by strain. Qualitatively similar constraints on monopole motion arise in the “beyond the standard model” for spin ice dynamics <cit.>, albeit from a different origin as dynamical rules on spin flip rates are coming from the quantum mechanical description rather than energetic barriers imposed by strain. §.§ Phenomenology We focus on the magnetic monopole excitations in the spin ice phase for |δ| < 1. The lowest-energy excitations are single monopoles with an energy cost J(2-2|δ|). A monopole moves when one of the three majority spins in its tetrahedron flips, causing the monopole to hop to the neighboring tetrahedra, see <ref>. Flipping the minority spin instead generates a new monopole pair on top of the existing monopole. The energy penalty associated with this process is large, and such events are effectively forbidden. Depending on the sign of δ, either type 1A or type 1B monopoles are lower in energy – resulting in either type A or type B dynamics as indicated in <ref>. The energy difference between the two monopole types is 4 J |δ|. Hence, at high temperatures T ≫ 4J|δ|, the monopoles can readily change types, and the monopoles are isotropically diffusive as in the isotropic case without strain <cit.>. Here, we emphasize a temperature regime T/J ≪ 4|δ|, 2-2|δ|, where the dynamics are dominated by the motion of a dilute set of “cheap” monopoles, with rare promotions to “expensive” ones. Cheap and expensive refer to the energies, J(2-2|δ|) and J(2+2|δ|), of either type of monopole configuration compared to a two-in-two-out configuration. The promotion occurs on a time scale κ_0 = exp( J 4|δ|/T) τ_0 , coming from the Boltzmann weight of the heat bath. Moves that conserve or reduce the energy occur on a time scale τ_0. In the following, we focus on time scales smaller than κ_0 and discuss the possible zero-energy moves. Monopole motion is diffusive for δ=0 <cit.>, since the three majority spins are randomly distributed. This situation is in stark contrast to our anisotropic model, where energetic constraints are encoded in the rules governing if a monopole belongs to type 1A or 1B, leading to different levels of restrictions on monopole motion for the two types. Diffusive behavior is recovered in all cases at time scales larger than κ_0, where the distinction between monopole types becomes effectively meaningless. Type 1B dynamics Type 1B monopoles are the lowest-energy excitations for δ > 0 and have their minority spin on sublattice . They cannot move with zero energy moves through sublattice spins, leaving them energetically constrained to move in planes normal to [111]. Furthermore, moves in the plane at zero energy cost are only possible if the monopole remains type 1B after the move, which requires the new minority spin to also be a sublattice spin. This is only true for one of the three possible configurations of the surrounding tetrahedra in the (111) plane, <ref>. Therefore, type 1B monopoles can only move through, on average, one out of the three spins at each tetrahedron. The centers of tetrahedra form honeycomb lattices in the (111) planes. Hence, we can understand the zero-energy motion of type 1B monopoles as a random walk on these honeycomb lattices with 2/3 of the bonds removed at random. This can be viewed as a bond percolation model with an effective filling fraction p_(111)=1/3. 
However, the bond percolation threshold for the honeycomb lattice is p_c ≈ 0.65 <cit.>, and we thus expect monopoles to be localized to small clusters for δ > 0 and T ≪ 4J δ. One effect of the localization of type 1B monopoles is that the energetically more expensive type 1A monopoles play a significant role in the system relaxation. This introduces another characteristic time scale in the problem κ_1 = exp( J (2+2|δ|)/T) τ_0 , proportional to the density of type 1A monopoles in the δ > 0 regime. Type 1A dynamics For δ < 0, type 1A monopoles are lower in energy. The minority spin of type 1A monopoles is not on sublattice , making their motion less constrained than that of type 1B monopoles. A majority spin can be flipped without energy cost if the sublattice spin in the new tetrahedron hosting the monopole is a majority spin after the flip. In this case, motion in the [111] direction, corresponding to flipping a sublattice spin, can always happen at zero energy when there is not already a monopole in the adjacent tetrahedron. Motion through each of the other two majority spins costs zero energy in 2/3 of cases, <ref>. As 2/3 of the spins are majority spins, the probability that the monopole can move through an arbitrary spin on sublattice is thus p_(111)=4/9. This is still much below the percolation threshold of the honeycomb lattice. However, monopoles always have the option to propagate perpendicular to the honeycomb layer along [111]. This allows them to jump between finite clusters in the (111) plane and leaves the monopoles diffusive with an anisotropic diffusion tensor favoring motion in the [111] direction. In the language of (anisotropic) percolation, we can view their motion as a random walk on a diamond lattice with two filling fractions: p_[111]=1 and p_(111)=4/9. Inverted spin ice The same arguments apply to the inverted spin ice phase. For -3 < δ < -1, the ground states have type 1A configurations everywhere, red part in <ref>. The picture explained above is, however, directly applicable if one invokes the duality around δ=-1. The lowest-energy excitations in the regime -3 < δ < -2 are type 2 tetrahedra (double monopoles), that move on top of a background of single monopoles with the same rules underpinning their motion as that of the type 1B monopoles. In the regime -2 < δ < -1, the lowest-energy excitations are instead type 0 tetrahedra that move on top of the background of single monopoles following the rules governing type 1A monopole moves. Ordered phases The long-range ordered phases for δ < -3 and δ > 1 do not have point-like excitations. Instead, excitations take the form of domain boundaries that do not move without an energy penalty. §.§ Mean-squared displacement The impact of strain on the monopole motion can be observed by tracking the trajectories of individual monopoles in Monte Carlo simulations. More details are provided in <ref>. <ref> shows the mean-squared displacement (MSD) of a single, isolated monopole moving without annihilation or creation events. This mimics the natural motion of monopoles at low monopole densities. To capture the anisotropic response of our model, the monopole displacement is projected along the [111] direction, as well as along two orthogonal directions. The choice of these two orthogonal directions is arbitrary and has no impact on the observed MSD. Since creation and annihilation events are banned, the energetics are completely set by the ratio J δ T. 
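The localization argument can be illustrated with a throwaway bond-percolation experiment on a periodic honeycomb lattice. Randomly opening a fraction p of the bonds is only a proxy for the correlated constraints set by the actual ice background, but it shows that p = 1/3, and even p = 4/9, lies deep in the non-percolating regime, whereas a filling above p_c ≈ 0.65 produces a spanning cluster:

import numpy as np

rng = np.random.default_rng(1)

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]      # path halving
        i = parent[i]
    return i

def union(parent, size, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[ra] = rb
        size[rb] += size[ra]

def honeycomb_largest_cluster(L, p):
    n = 2 * L * L                          # two sites (A, B) per unit cell
    parent, size = np.arange(n), np.ones(n, dtype=int)
    def site_a(i, j):
        return 2 * ((i % L) * L + (j % L))
    def site_b(i, j):
        return site_a(i, j) + 1
    for i in range(L):
        for j in range(L):
            for nb in (site_b(i, j), site_b(i - 1, j), site_b(i, j - 1)):  # 3 bonds per A site
                if rng.random() < p:
                    union(parent, size, site_a(i, j), nb)
    return max(size[find(parent, s)] for s in range(n)), n

for p in (1.0 / 3.0, 4.0 / 9.0, 0.70):
    largest, n = honeycomb_largest_cluster(60, p)
    print(f"p = {p:.2f}: largest cluster contains {largest} of {n} sites")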
Consistent with our phenomenological arguments, anisotropic diffusive monopole motion in the MSD is observed in the type A regime (δ≤ 0; panel (a) of <ref>). Motion in the [111] direction is favored as expected. The response in the type B regime (δ > 0; panel (c) of <ref>) is quite distinct. Here, the MSD initially appears to plateau, indicating that monopoles are localized to small clusters. This is consistent with type 1B monopoles never moving through spins, effectively restricting their motion to a small number of tetrahedra within the (111) plane. Indeed, the localization is stronger in the [111] direction[Note that even in the limit of J |δ| /T→∞, the MSD projected along [111] will not be zero, as monopole moves through any spins in have a non-zero component along [111].]. While type 1B monopoles are initially localized, promotion events occur on a time scale κ_0 and this process restores diffusive monopole motion. In contrast to the type A regime where movement along [111] is favored, the diffusion is isotropic on time scales of κ_0. Once an expensive type 1A monopole is created, its motion is not restricted in any direction because its propagation never enhances the energy. Therefore, as long as no double monopole is created, every move is accepted. Sub-dimensional diffusion Within a spin ice state, any closed loop of aligned spins can be flipped at zero energy cost. The smallest such loop consists of six spins forming a hexagonal plaquette. These “ring exchange” moves naturally arise within the context of quantum spin ice as the lowest-order relevant perturbation to the model, leading to quantum fluctuations mixing the classically degenerate spin ice states <cit.>. The classical analog of the ring exchange moves consists of stochastically finding a hexagonal plaquette with aligned spins and flipping the six spins. In our model this lifts the localization of type 1B monopoles, as shown in <ref>. The localization is only lifted in the (111) plane as monopoles remain localized in the [111] direction. This is because any ring exchange update, including a spin neighboring a monopole, converts the monopole from type 1B to type 1A at an energy cost of 4 J δ. This process is thermally suppressed for T ≪ 4 J δ. Thus, when ring exchange updates are included, the monopoles exhibit sub-dimensional diffusion at zero temperature – reminiscent of fractonic motion <cit.>. We also note that flipping an open string of four spins attached to a type 1B monopole can move it along the [111] direction while preserving the energy. The sub-dimensional diffusion of the monopole can be understood in terms of a random walk on a honeycomb lattice where the traversability of a bond changes in time. In a completely unconstrained random walk on a honeycomb lattice, meaning that monopoles can move freely in the (111) plane, diffusion can be observed with a constant D: ⟨ r^2⟩ = D t. However, for localized 1B monopoles, moves across any bond are only possible in one of three cases. The additional ring exchange updates scramble the ice background and allow new moves across previously disallowed bonds. The typical time scale, , it takes to unlock such a bond scales inversely proportional to the ring exchange rate F. Given the single-spin flipping rate τ_0, there are two important limits: (i) τ_0≫ and (ii) τ_0≪. For (i), where the ice background is fully scrambled between two attempted hops, the probability of any attempted hop being accepted is 1/3, and this effectively reduces the diffusion constant to D/3. 
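For completeness, a sketch of the projection analysis behind such MSD curves: given stored monopole trajectories of shape (walkers, time, 3) in Cartesian coordinates (the trajectory array itself would come from the Monte Carlo runs; an isotropic random walk serves as a stand-in below), displacements are projected onto [111] and two orthogonal directions before squaring and averaging:

import numpy as np

n111 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
e_perp1 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
e_perp2 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6.0)

def projected_msd(positions, direction, lags):
    """MSD along `direction`, averaged over walkers and time origins."""
    proj = positions @ direction                       # shape (walkers, time)
    return np.array([np.mean((proj[:, lag:] - proj[:, :-lag]) ** 2) for lag in lags])

rng = np.random.default_rng(0)                         # stand-in trajectories
positions = np.cumsum(rng.choice([-0.25, 0.25], size=(100, 2000, 3)), axis=1)
lags = np.unique(np.logspace(0, 3, 15).astype(int))
for name, direction in [("[111]", n111), ("[1-10]", e_perp1), ("[11-2]", e_perp2)]:
    print(name, projected_msd(positions, direction, lags)[-1])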
In contrast, for (ii), the time a monopole is localized to a specific cluster is governed by . One can model this as monopoles performing a random walk with a new hopping time , leading to a reduced diffusion constant D (τ_0 / ) ∝ D F. In this limit, the diffusion constant thus scales linearly with the ring exchange rate F. §.§ Magnetic noise The stochastic thermal motion of monopoles generates magnetic noise, which is experimentally observable and has been measured in spin ice materials <cit.>. Within the spin ice regime -1 < δ < 1, we characterize the thermal magnetic noise of our model by extracting the power spectral density (PSD) for the different spin subspecies. The subspecies PSD is defined as PSD_S(ω) = 1/N_S⟨|M̃_S(ω)|^2⟩ , where N_S = N/4 is the number of spins that belong to subspecies S, and M̃_S(ω) is the Fourier transform of the temporal trace of their total magnetization. For diffusive monopole motion, the PSD is a Lorentzian with functional form ∝ (1 + ω^2 τ^2 )^-1, where τ is the relaxation time <cit.>. Characteristic of a Lorentzian are the low-frequency plateau for ω≲ 1/τ and the ω^-2 decay for ω≳ 1/τ. <ref> shows the PSD for subspecies in the top row and the PSD of one of the other subspecies, , in the bottom row. We fix the temperature such that (2 - 2|δ|) J/ T = 4, ensuring that the average density of the low-energy monopoles is the same for different values of δ. These simulations are initialized from thermal equilibrium, and we now allow for monopole creation and annihilation. Type 1A dynamics Focusing on type 1A dynamics first (-1 < δ≤ 0; left column of <ref>), we find that the PSDs of all spin subspecies are essentially Lorentzian, consistent with the predicted anisotropic diffusive monopole motion observed in panel (a) of <ref>. The PSD of subspecies depends only weakly on δ, which emphasizes the fact that type 1A monopoles are always free to move along [111]. The PSD of sublattice displays a stronger δ-dependence, and for increasing δ, the low-frequency plateau appears to shift to lower frequency and larger amplitude, consistent with an increase in the relaxation time τ. This is a manifestation of the increased energy cost associated with converting a type 1A to a type 1B monopole, which reduces the monopole mobility through subspecies . Type 1B dynamics The monopole motion is strongly constrained for δ > 0, and this manifests itself in the PSD (0≤δ<1; right column of <ref>). The fluctuations of spins belonging to , panel (b), come purely from the motion, creation, or annihilation of type 1A monopoles, which are now energetically more expensive than type 1B. The resulting PSD is Lorentzian, as in the δ < 0 case, but with significantly suppressed magnitude and an increased relaxation time owing to the lower density of type 1A monopoles in this regime. Rescaling the PSD and frequency by κ_1 from <ref>, the time scale set by the density of type 1A monopoles, we find a perfect scaling collapse as illustrated in the inset of panel (b) in <ref>; see also <ref> for further details. The PSD for spins belonging to , panel (d) of <ref>, behaves quite differently in the δ>0 regime. Here, the high-frequency noise is primarily generated by the localized motion of type 1B monopoles. For ω≳ 1/τ_0 the PSD displays an ω^-2 decay, which is typical for PSD curves measured in Monte Carlo simulations <cit.>. For larger values of δ, there is a regime of intermediate frequencies where the PSD decays significantly slower with frequency – a manifestation of the monopole localization.
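In practice, PSD curves of this kind are estimated from recorded magnetization time traces. A minimal sketch using Welch's method from scipy, applied to a synthetic exponentially correlated signal so that the Lorentzian shape (plateau below 1/τ, ω^-2 decay above) is visible, could read:

import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
tau, n_steps, dt = 50.0, 200_000, 1.0            # correlation time in units of tau_0
phi = np.exp(-dt / tau)
noise = rng.normal(size=n_steps)
m = np.empty(n_steps)                            # AR(1) stand-in for a magnetization trace
m[0] = noise[0]
for t in range(1, n_steps):
    m[t] = phi * m[t - 1] + np.sqrt(1.0 - phi**2) * noise[t]

freq, psd = welch(m, fs=1.0 / dt, nperseg=8192)
# the spectrum flattens below f ~ 1/(2 pi tau) and falls off as f^-2 above it
print("knee frequency ~", 1.0 / (2.0 * np.pi * tau))
print("PSD(low) / PSD(high) ratio:", psd[1] / psd[-1])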
This slower decay would turn into a plateau persisting to arbitrarily small frequencies as T → 0 if the only relaxation mechanism in the system was the zero-energy motion of type 1B monopoles. However, both the presence of a small but non-zero density of type 1A monopoles and the spontaneous creation of new type 1B monopoles offer alternate avenues for the system to relax. This manifests as a faster decay of the noise spectrum at low frequencies. Below some frequency, the noise must plateau in all cases (unless T→ 0) as all spins can flip at a finite energy cost, and beyond some time scale, the system is therefore completely uncorrelated from its previous configurations. This frequency is, however, not accessible in our simulations for δ > 0. Experimentally, there has recently been renewed interest in the dynamics of magnetic monopoles <cit.>, largely driven by the ability to probe magnetic noise spectra using SQUIDs. The magnetic noise plots in <ref> took advantage of the microscopic nature of the Monte Carlo simulation to present sublattice-resolved PSDs. For completeness, we present data for the (experimentally accessible) directional PSD curves in <ref>. These can be obtained from the sublattice resolved data in <ref>. We choose moderate values of δ/T that might be realized in future experiments using the sublattice doping discussed in the next section. As expected, we find stark differences between the [111] direction and the orthogonal direction [110]. <ref> also demonstrates the transition between short-time anisotropic and long-time isotropic behavior. This crossover occurs on the time scale ∼κ_0, beyond which the distinction between the monopole types loses its meaning. § EXPERIMENTAL REALIZATIONS Famously, spin ice is approximately realized in nature by the rare-earth pyrochlore compounds Dy_2Ti_2O_7 <cit.> and Ho_2Ti_2O_7 <cit.>. Experiments at thermodynamic equilibrium are possible down to temperatures of approximately 0.55 K <cit.>, revealing characteristic properties such as pinch-points <cit.> as well as signatures of the magnetic monopoles <cit.>. 0.55 K corresponds to roughly T/J≈ 0.25 in our nearest-neighbor model <cit.>. There are different approaches to engineer the anisotropic model, <ref> with δ≠ 0. The simplest way to achieve this is the application of uniaxial pressure along the [111]-axis (δ >0) or “hydrostatic” pressure from the sides (δ<0). Naively, this reduces the distances between atoms in the direction of the applied pressure. A rough estimate, neglecting the influence of the oxygen environments or spin alignment <cit.>, is based on the dipolar interaction that predicts an increase of the coupling constant scaling with r^-3. For rare-earth pyrochlores, hydrostatic pressure reaches values up to ∼50 GPa <cit.>, which reduces, for example, the lattice constant in In_2Mn_2O_7 <cit.> from 9.7113 at ambient pressure to 9.3764 at a pressure of 29GPa. In this naive approximation, the nearest-neighbor dipolar term is increased to an extent equivalent to δ∼ 0.11. Uniaxial pressure is more limited than hydrostatic pressure, with previous experiments achieving pressures of 2.2GPa (1.3GPa) for Ho_2Ti_2O_7 (Dy_2Ti_2O_7) <cit.>. By modeling the magnetization with respect to the applied pressure and an external field, Refs. <cit.> estimate the increase in the nearest-neighbor coupling parallel to the pressure axis and the lattice compression. 
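The quoted value of δ ∼ 0.11 follows directly from this r^-3 scaling, as a two-line check shows (with the caveat, stated above, that changes in the oxygen environment and in the exchange contribution are ignored):

# nearest-neighbor dipolar coupling scales as r^-3, so compressing the lattice
# constant from a0 to a_p rescales it by (a0/a_p)^3; the excess is the effective delta
a0, a_p = 9.7113, 9.3764        # lattice constants quoted above (ambient vs. 29 GPa)
print(f"effective delta ~ {(a0 / a_p) ** 3 - 1.0:.2f}")   # prints ~0.11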
Taking both, dipolar and exchange interactions with the dipolar spin ice Hamiltonian <cit.>, into account, we expect this to correspond to an increase of the effective nearest-neighbor coupling by δ∼ 0.06 and δ∼ 0.02 for Ho_2Ti_2O_7 and Dy_2Ti_2O_7, respectively. The novel monopole behaviors described in <ref> are largely governed by the energy difference between type 1A and type 1B monopoles: 4|δ| J in our model. It is the ratio 4|δ| J/T that determines the time scales on which anisotropic diffusion and monopole localization occur. Combining the predicted values of δ from the experiments in Refs. <cit.> and the lowest accessible temperature, 0.55 K <cit.>, places us in a region where anisotropic effects are barely visible in simulations. However, due to the cubic scaling of the dipolar interaction with distance and the exponential dependence of monopole motion on the energy-temperature ratio, small increases in the achieved strain have large effects on the dynamics. We are, therefore, optimistic that future high-pressure experiments will be able to observe parts of the phenomenology we have identified. Another mechanism to achieve larger magnitudes of |δ| involves substituting spins located on the sublattice with a species of spins characterized by a different magnetic moment. This approach might be capable of exploring the full phase diagram in <ref>, which contains ferro- and antiferromagnetic bonds. For example, introducing spins with a larger or smaller magnetic moment, like doping Ho atoms into the positions within Dy_2Ti_2O_7, naturally induces variations in δ. Extending this idea, atoms that interact antiferromagnetically with the other species open up the possibility to observe phases predicted for the δ < -1. A notable example of this approach are tripod kagome systems where atoms on the sublattice are substituted with non-magnetic ions <cit.>. They realize the extremal case δ=-1, where we have disconnected kagome planes. Note, however, that doping atoms at the right position is extremely challenging, and it is much easier to place them randomly within the crystalline structure. Hence, understanding and modeling the Hamiltonian where δ is randomly assigned to a subset of spins is potentially important future work. Lastly, a growing effort in artificial spin ice systems <cit.> might provide an avenue to realize our model and confirm the relation between Monte Carlo simulation and real-time dynamics. § CONCLUSIONS Our simple model Hamiltonian H_S in <ref>, nominally describing spin ice under uniaxial [111]-strain, has turned out to support a rich and varied physical landscape. Using the grouping of the sixteen single-tetrahedron configurations into four degenerate types, we have successfully described the full phase diagram of H_S. Remarkably, the extensive degeneracy of the ice ground states is preserved for a wide range of distortion values, which is in stark contrast to the behavior of uniaxial pressure along other crystallographic axes <cit.>. In contrast, the thermodynamic properties contain information about the distortion, for example, in the form of two peaks in the specific heat. The distortion further manifests itself in the dynamical properties, with two dynamical regimes appearing in the low-temperature limit. Using single-spin flip dynamics, we find that the magnetic monopoles split into a diffusive and a localized type, which dominate the dynamics at negative and positive δ, respectively. 
For δ>0, the localization of the low-energy monopoles suppresses the magnetic noise in a characteristic, anisotropic fashion. The time scale on which the monopoles remain localized grows exponentially with inverse temperature, making their localization a relevant property at low temperatures regardless of the strength of the distortion. This, presumably, also leads to an exponential slow-down of thermalization at low temperatures. Including additional dynamic processes, such as the ring exchange updates discussed here, partly restores the diffusive motion of the localized monopoles. In this case, the monopoles remain localized in the strain direction – effectively making them sub-dimensional excitations. Several indicators of novel monopole dynamics we identify are experimentally accessible <cit.>. This includes not only the magnetic noise spectra but also measurements of thermodynamic properties such as the specific heat. Furthermore, the impact of uniaxial strain on transport properties and out-of-equilibrium monopole dynamics, which are beyond the scope of this paper, is another interesting avenue for further research. Although additional work is required to fully model the impact of strain on real spin ice materials, with the inclusion of long-range dipolar interactions as an obvious route for improvement, we believe the phenomenology identified here to be largely applicable. We hope our prediction will stimulate further experimental studies of spin ice under strain, focusing on the two distinct monopole types. While we only considered global changes to the spin-spin interactions along the [111] direction, other intriguing possibilities exist. One of these is the application of a strong uniaxial electric field. Another is the partial substitution of atoms on sublattice , leading to a Hamiltonian where the distortion only appears on a fraction of the tetrahedra. Finally, it has recently been proposed that monopoles in Dy_2Ti_2O_7 move on emergent dynamical fractals <cit.>. This opens up another interesting avenue for future work on the interplay of these fractals and energetic barriers induced by perturbations such as strain. § ACKNOWLEDGEMENT We are thankful for stimulating discussions with Evan Smith, Benedikt Placke, Owen Benton, Roderich Moessner, and Claudio Castelnovo. RS acknowledges the AFOSR Grant No. FA 9550-20-1-0235. CRL thanks the Max Planck Institute for the Physics of Complex Systems for its hospitality. This work was in part supported by the Deutsche Forschungsgemeinschaft under grants SFB 1143 (project-id 247310070) and the cluster of excellence ct.qmat (EXC 2147, project-id 390858490). § NUMERICAL METHODS All numerical results presented in this work were acquired through Monte Carlo simulations using the Metropolis algorithm and its variants. The majority of simulations were carried out on a sample of linear size L=10 with L× L × L =1000 fundamental unit cells, corresponding to a total of N=4000 spins and 2000 tetrahedra. Periodic boundary conditions were used in all cases. Worm updates were utilized to speed up equilibration at low temperatures in the spin ice phase. These updates involve identifying a chain of spins aligned head to tail, forming either a closed loop or terminating at a monopole at one end. Subsequently, an attempt is made to simultaneously flip the chain of spins. The acceptance of any update follows the Metropolis probability, denoted as P = min[1, exp (-Δ E / T) ], where Δ E is the overall change in energy. 
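For orientation, a sketch of this update rule for a generic Ising Hamiltonian H = Σ_⟨ij⟩ J_ij σ_i σ_j; the pyrochlore neighbor lists and the per-bond couplings (J(1+δ) on bonds touching the [111] sublattice and J otherwise) are assumed to have been assembled beforehand and are passed in as arrays:

import numpy as np

def metropolis_sweep(spins, neighbors, couplings, T, rng):
    """One sweep = len(spins) attempted single-spin flips (the unit of time tau_0).

    spins:     1-D array of +-1
    neighbors: neighbors[i] holds the 6 pyrochlore neighbors of spin i
    couplings: couplings[i][k] is J_ik for the k-th neighbor of spin i
    """
    n = len(spins)
    for _ in range(n):
        i = rng.integers(n)
        # local field; flipping spin i changes the energy by dE = -2 sigma_i h_i
        h = np.dot(couplings[i], spins[neighbors[i]])
        dE = -2.0 * spins[i] * h
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            spins[i] = -spins[i]
    return spins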
Note that accepted worm updates neither create nor annihilate any monopoles nor do they change the number of double monopoles. The only contribution to Δ E for worm updates is thus the conversion of type 1A to type 1B monopoles or vice versa. The ring exchange updates discussed in <ref> are equivalent to closed loop worm updates of length six. For thermodynamic measurements, we simulated the system by annealing from a high temperature, applying both worm updates and single-spin flip updates at each temperature step. As mentioned above, we only performed simulations for δ≥ -1 and then invoke the duality given in <ref> to deduce the behavior for δ < -1. The equilibrium system dynamics were simulated using the standard model of spin ice dynamics <cit.>: Metropolis algorithm Monte Carlo using only single spin flip updates and defining the flipping attempt time τ_0=1 MC-sweep (1 MC-sweep corresponds to N attempted spin flips). A combination of single-spin flip and worm updates were used to initialize the system at the desired temperature, and the dynamical measurements were only begun once the system was in thermal equilibrium. The PSD was computed from time traces of the sublattice magnetization using Welch's method <cit.>. In an infinite system, the monopole density only goes to zero as temperature goes to zero. However, in a finite system, if the temperature is chosen too low, all monopoles can vanish, preventing any magnetization dynamics. Thus, when measuring the magnetization noise, we ensured that we were always at temperatures where the actual monopole number did not drop to zero at any point during the simulation. Finally, the MSDs of individual monopoles were measured by first cooling a system of 54,000 spins until an ice state without any monopoles was reached. A single pair of low-energy monopoles is then created at an arbitrary position. One of these monopoles was kept fixed at its original position, whereas the other monopole was allowed to move. Each Monte Carlo update now consists of randomly selecting one of the four spins surrounding the mobile monopole and flipping this with probability min [1, exp(-Δ E / T] if the selected spin is a majority spin. As creation and annihilation are not allowed, the monopole dynamics only depend on the ratio J δ/T. We define four such updates to be equivalent to time τ_0. This mimics the motion of a monopole under standard model dynamics in the limit of very low monopole density. To minimize the impact of the static monopole on the measurement results, the mobile monopole was allowed to move for approximately 10^6 τ_0 before its trajectory was measured. The MSD curves shown in <ref> and <ref> are computed by performing a bootstrap average of the trajectories of 200 independent monopoles. § FURTHER NUMERICAL RESULTS §.§ Phase transition at δ > 1 We have identified a symmetry-breaking phase transition upon cooling when δ > 1 (and δ < -3), see <ref> and <ref>. This is a conventional paramagnet to ferromagnet phase transition in an Ising spin system, and we expect it to fall within the 3d Ising universality class <cit.>. In the thermodynamic limit, one would then expect the susceptibility to diverge as χ∼|T - T_c |^-γ and the specific heat to diverge as C_v ∼|T- T_c |^-α as T approaches T_c from above. In <ref>, we plot the specific heat and the susceptibility for all spin subspecies for temperatures close to the critical temperature, approaching T_c from above. 
The critical temperature is taken to be the temperature at which we observe the maximal specific heat and susceptibility values. We display results for two different linear system sizes, L = 6 and 10, and find that the growth of both the susceptibility and the specific heat approximately agrees with the predicted critical exponents: γ=1.23 and α=0.11 <cit.>. This is consistent with the thermal phase transitions being 3d Ising transitions, as expected. §.§ Scaling collapse for type 1A monopoles In <ref>, we discussed the dynamical behavior of type 1A monopoles and argued that the magnetic noise of the spins is completely dominated by their motion. Remember that type 1B monopoles cannot move through spins on . We also argued that type 1A monopoles move diffusively in the entire spin ice regime, and we therefore expect them to generate a Lorentzian magnetic noise spectrum <cit.>, described by f(ω)= A τ/(1+(ωτ)^2) , where τ is the correlation time. The fact that the diffusion constant is anisotropic in the -1 < δ < 0 regime does not change the frequency dependence of the noise but impacts the magnitude of the noise measured along different crystallographic axes. The correlation time of the noise is expected to be inversely proportional to the density of the dynamically active monopoles, in this case, the type 1A monopoles. We have implicitly made a scaling collapse in panel (a) of <ref> for -1 < δ≤ 0, since we chose values of T and δ which keep (2-2|δ|)J/T constant. The density of type 1A monopoles goes as exp(-(2-2|δ|)J/T) for negative δ and is therefore the same for all of these curves. In the regime 0 < δ < 1 we thus expect the PSD of spins on to go as PSD_[111]≈ A κ_1/(1 + (ωκ_1)^2) , with a global constant A that is the same for all T and δ (A=2 for our parametrization) and κ_1 = exp((2 + 2|δ|) J / T) τ_0 from <ref>. Indeed, we find a perfect scaling collapse for PSD_[111] curves at different T and δ by plotting PSD_[111] / κ_1 versus κ_1 ω. This is shown in <ref> for the regime 0 ≤δ < 1. §.§ Further MSD and PSD results Here, we present further numerical results on the MSD of monopoles and the directional magnetic noise under various settings of δ and T, complementary to the main text. <ref> shows how the behavior of the MSD depends on different values of Jδ/T. The monopole movement in the positive δ regime is much more constrained, as explained in the main text. The PSDs shown in <ref> are measured along two orthogonal directions, providing a different perspective from the sublattice-resolved PSD curves presented in the main text. The PSD along [111] has a mixed contribution from monopole movement through and S_K as spins belonging to S_K have non-zero components along the [111] direction, while the PSD along [11̅0] has contributions from S_K spins only. The anisotropic diffusion of type 1A monopoles for δ < 0 manifests itself through the dynamical constraints imposed in directions other than [111], whereas signals of dynamical localization appear in the suppression of the [111] directional PSDs for type 1B monopoles at δ > 0. The PSD measured along individual lattice directions is the experimentally relevant quantity, and it also demonstrates a clearer connection with the monopole MSD results we present. We note the similarity of the anisotropic-to-isotropic transition in time observed in both <ref> and <ref> for δ > 0.
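A sketch of the collapse itself: given PSD curves sampled at frequencies freq for several (T, δ) pairs (the dictionary below is an assumed data layout, not the analysis scripts used for the figures), each curve is rescaled by κ_1 before overplotting, together with the Lorentzian reference A/(1+x^2):

import numpy as np
import matplotlib.pyplot as plt

def kappa_1(delta, T, J=1.0, tau0=1.0):
    """Time scale of the expensive (type 1A) monopoles for 0 < delta < 1."""
    return tau0 * np.exp((2.0 + 2.0 * abs(delta)) * J / T)

def plot_collapse(curves):
    """curves: dict mapping (T, delta) -> (freq, psd) arrays from the noise analysis."""
    for (T, delta), (freq, psd) in curves.items():
        k1 = kappa_1(delta, T)
        plt.loglog(freq * k1, psd / k1, label=f"T={T}, delta={delta}")
    x = np.logspace(-2, 2, 200)
    plt.loglog(x, 2.0 / (1.0 + x**2), "k--", label="A/(1+x^2), A=2")   # Lorentzian reference
    plt.xlabel("kappa_1 * omega"); plt.ylabel("PSD / kappa_1"); plt.legend(); plt.show()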
http://arxiv.org/abs/2406.18632v1
20240626180000
Revising the quantum work fluctuation framework to encompass energy conservation
[ "Giulia Rubino", "Karen V. Hovhannisyan", "Paul Skrzypczyk" ]
quant-ph
[ "quant-ph" ]
http://arxiv.org/abs/2406.18803v1
20240627003252
Non-modal growth analysis of high-speed flows over an inclined cone
[ "Xi Chen", "Bingbing Wan", "Guohua Tu", "Maochang Duan", "Xiaohu Li", "Jianqiang Chen" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
Divide, Ensemble and Conquer: The Last Mile on Unsupervised Domain Adaptation for On-Board Semantic Segmentation Tao Lian, Jose L. Gómez, and Antonio M. López, Member, IEEE Tao Lian, Jose L. Gómez and Antonio M. López are with the Computer Vision Center (CVC) at UAB, 08193 Bellaterra (Barcelona), Spain. Antonio M. López are also with the Dpt. of Computer Science, Universitat Autònoma de Barcelona (UAB), 08193 Bellaterra (Barcelona), Spain. The authors acknowledge the support received from the Spanish grant Ref. PID2020-115734RB-C21 funded by MCIN/AEI/10.13039/501100011033. Antonio M. López acknowledges the financial support to his general research activities given by ICREA under the ICREA Academia Program. Tao Lian acknowledges the financial support from the China Scholarship Council (CSC). The authors acknowledge the support of the Generalitat de Catalunya CERCA Program and its ACCIO agency to CVC’s general activities. =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Spatial optimal responses to both inlet disturbances and harmonic external forcing for hypersonic flows over a blunt cone at nonzero angles of attack are obtained by efficiently solving the direct-adjoint equations with a parabolic approach. In either case, the most amplified disturbances initially take the form of localized streamwise vortices on the windward side and will undergo a two-stage evolution process when propagating downstream: they first experience a substantial algebraic growth by exploiting the Orr and lift-up mechanisms, and then smoothly transition to a quasi exponential-growth stage driven by the crossflow-instability mechanism, accompanied by an azimuthal advection of the disturbance structure towards the leeward side. The algebraic-growth phase is most receptive to the external forcing, whereas the exponential-growth stage relies on the disturbance frequency and can be significantly strengthened by increasing the angle of attack. The wavemaker delineating the structural sensitivity region for the optimal gain is shown to lie on the windward side immediately downstream of the inlet, implying a potent control strategy. Additionally, considerable non-modal growth is also observed for broadband high-frequency disturbances residing in the entropy layer. boundary layer stability, compressible boundary layers, high-speed flow § INTRODUCTION Hypersonic vehicles are faced by three-dimensional (3-D) boundary layer transition problems that yield an undesirable increase in heat flux and skin friction. 
In contrast to the two-dimensional (2-D) boundary layer over a flat plate, cone at zero degrees angle of attack, etc, the 3-D counterpart features nonuniform pressure distribution in the azimuthal direction, and as such exhibits three distinct regions across the vehicle surfaces, i.e., the attachment-line region (high-pressure region), the vortex region (low-pressure region) and the crossflow region in between. Each region sustains certain modal instabilities likely responsible for the boundary-layer breakdown. Of particular interest is the crossflow region where boundary-layer transition is generally dominated by crossflow instabilities comprising stationary and traveling modes <cit.> and occurs over the largest part of the vehicle surface. Base flow in the crossflow region varies mildly in the azimuthal direction, which appears amenable to one-dimensional (1-D) stability analyses. To obtain the evolution of crossflow vortices, it is common practice to first model a vortex path and the azimuthal wavenumber variation along the path, and then integrate the (nonlinear) parabolized stability equations (PSE) assuming azimuthal periodicity <cit.>. However, the 1-D tools find it difficult to model the typical wavepacket structures of the crossflow mode, and also could not deal with the nonlinear interactions between multiple modes as local modes may have different trajectories. Therefore, one attempts to establish a global stability analysis framework to delineate the crossflow instability behaviour in a 3-D boundary layer. <cit.> and later <cit.> adopted BiGlobal approach solving 2-D eigenvalue equations to uncover the global modal structure of the traveling crossflow instability over an elliptic cone. They found that the shape function of the crossflow mode exhibits rapid oscillations in the azimuthal direction and covers almost the entire surface of the vehicle, thus requiring a large number of grid points to resolve. In a similar study on a lifting body, <cit.> documented that the global crossflow modal eigenvalues are too sensitive to azimuthal resolution to be grid converged. This extreme sensitivity of eigenvalues reflects, on the one hand, the highly non-orthogonal nature of the linear operator and, on the other hand, is physically due to the weakly non-parallel effects (i.e., the azimuthal wavelength of the crossflow mode is much smaller compared to the azimuthal variation scale of the base flow)<cit.>, analogous to previous studies on streamwise BiGlobal problem <cit.>. They also showed that the global spectrum for a single frequency contains many unstable modes without a predominant one, which was reported by <cit.> as well in studying an inclined cone configuration. The poor grid convergence of eigenvalues and the multiplicity of crossflow modes make the interpretation of global modal instability results challenging. As a result, the global modal stability analysis may be unable to properly describe the crossflow instability in a complex 3-D boundary layer. Furthermore, it has become increasingly evident that modal instability analysis alone can not explain all transition phenomena, as exemplified by the famous blunt-cone paradox concerning subcritical transition on a large-bluntness cone <cit.>. Recently, <cit.> and <cit.> who investigated the leeward modal instabilities of hypersonic flow over yawed blunt cones, both identified strikingly low N-factors (below 2) correlated with the observed transition location, pointing to unknown transition mechanisms. 
The alternative approach is based on the non-modal stability analysis framework which considers the interferences of multiple modes simultaneously <cit.>. Instead of focusing on the modal instability, the non-modal stability analysis aims to identify the most dangerous case from all potential routes of disturbance evolution, which is important since inlet conditions are typically unavailable in realistic situations. Moreover, the optimal solution proves to be robust to small perturbations even if the eigenvalues may not <cit.>. The non-modal stability analysis comprises two main branches, i.e., the optimal growth analysis and the input-output/resolvent analysis (abbreviated as I-O analysis). The optimal growth analysis finds the intrinsic disturbances in the flow field that result in the maximal energy growth over a certain time interval (temporal approach) or distance downstream (spatial approach). On the other hand, the I-O analysis <cit.> focuses on determining the optimal external forcing (input) that leads to the largest response (output, measured by disturbance energy) in the flow field. Albeit with different physical meanings and focuses, both branches share a similar mathematical framework <cit.> and even yield the same results when restricting the input and output at the inlet and outlet, respectively <cit.>. Hereafter we will concentrate our attention on the optimal growth analysis. The early non-modal stability analyses were mostly conducted for simple shear flows (Poiseuille flow, Couette flow, Blasius boundary layer, etc) in the low-speed regime, showing that disturbances can be considerably (linearly) amplified through the Orr mechanism and lift-up effect even if they are modally stable <cit.>. The Orr mechanism is predominant for 2-D waves and relies on the deformation of vorticity patches under the action of the base flow. The deformation reduces the support area of the vorticity such that the vorticity extrema increase owing to the Kelvin circulation preservation theorem, thereby causing a transient energy growth <cit.>. The lift-up effect corresponds to the emergence of streaks induced by streamwise vortices. In general, 3-D optimal perturbation combines these mechanisms in a synergistic manner in which the cross-stream velocity produced by the Orr mechanism enhances the streamwise rolls associated with streak production <cit.>. The optimal perturbation can sometimes initiate the modal instability, especially in 3-D boundary layers where modal and non-modal growths are observed for similar disturbance structures. <cit.> who studied temporal optimal disturbances in the Falkner-Skan-Cooke flow modeling the boundary layer over an infinite swept wing found that these initially take the form of vortices which are almost aligned with the external streamline and evolve into crossflow modes downstream. They argued that non-modal growth may provide proper initial conditions for modal growth and thus constitutes a preferential receptivity path for the selection of exponential instabilities in 3-D boundary layers. On a similar configuration, <cit.> utilized a spatial approach based on the method of Lagrange multipliers and PSE to reveal that the optimal disturbance manifests as streamwise vortices aligned against the crossflow, extracts energy by leveraging both the Orr and lift-up mechanisms, and develops smoothly into crossflow modes downstream. 
They <cit.> later extended the study to compressible flow (up to Mach 1.5), showing that optimal growth increases for higher Mach numbers and wall cooling destabilizes disturbances of non-modal nature. Nonetheless, the physical mechanisms of optimal growth in compressible boundary layers are found to be similar to the incompressible counterpart. The first non-modal stability analysis in compressible flow comes from <cit.> who, adopting the temporal approach, examined the effects of different parts of spectra on the optimal growth, and obtained similar optimal structures as the incompressible flow. Their work was later complemented by a spatial one performed by <cit.>. <cit.> identified a constantly non-modal growth mechanism in the streamwise development of the streak mode in oblique breakdown, by which the streak mode retains a higher amplitude level than other nonlinearly generated modes. <cit.> observed substantial non-modal growth of disturbances peaking within the entropy layer around the nose of a blunt cone, and furthermore showed that such disturbances could trigger the boundary layer transition through oblique breakdown. The aforementioned studies considered essentially 2-D compressible boundary layers where the flow field is treated as homogeneous in at least one direction (usually the azimuthal/spanwise direction). Non-modal analyses on truly 3-D compressible boundary layers are rare. <cit.> obtained the optimal response of a hypersonic finned-cone boundary layer to wall roughnesses by partitioning the entire domain into smaller segments and progressively solving the I-O equations of each segment. <cit.> calculated the optimal disturbances for flow field around an isolated 3-D roughness with harmonic linearized Navier-Stokes (HLNS) equations, showing that the optimal disturbances amplify most rapidly in the backward-flow region of the roughness. The above two studies both entail iteratively solving large-scale linear systems with unknowns at order of one million, which are prohibitively expensive especially for a parametric study. <cit.> performed temporal non-modal stability analysis on an elliptic-cone boundary layer under hypersonic flight conditions. They observed a strong transient growth in the crossflow region, which is strengthened with increasing the flight altitude and as such is likely the important mechanism responsible for the high-altitude boundary-layer transition. In their study, the optimal disturbance is constructed by a linear combination of a finite number of global modes whose amplitude coefficients are optimized to achieve the maximum amplification at a given time horizon. Their results may, however, be compromised by the aforementioned high sensitivity of global crossflow modes, as well as the partial consideration of the eigenvalue spectrum <cit.> since obtaining the entire spectrum is infeasible. Moreover, the spatial optimal growth pertaining to crossflow disturbances in a truly 3-D boundary layer remains largely unknown. In the present study, we aim at establishing a robust description of the crossflow instabilities from a global perspective with help of the non-modal analysis, and furthermore, uncovering the optimal growth mechanisms for a prototypical 3-D boundary layer. 
To this end, we attempt to constitute an efficient yet accurate spatial optimal growth calculation framework based on a parabolic set of equations, and employ it to systematically study the optimal responses over a hypersonic inclined cone to both inlet perturbation and external forcings, with a special focus on the influence of the angle of attack (AoA). The related methodologies are summarized in <ref>. Then, the linear non-modal characteristics of the selected configuration are presented in <ref>, followed by discussions on the effects of AoA in <ref>. Finally, concluding remarks and discussions are given in <ref>. § NUMERICAL SET-UP AND LINEAR STABILITY THEORIES §.§ Model The model currently being studied is a 7^∘-half-angle blunt cone at angles of attack of 2^∘, 4^∘ and 6^∘, respectively. The model is 900 mm long and has a nose radius of 5 mm. The freestream Mach number Ma = 6, unit Reynolds number Re_∞^* = 4×10^7/m, freestream static temperature T_∞^* = 61.1 K, the wall temperature T_w^* = 300 K. The detailed flow configuration can be found in figure <ref>. The dimensional variables are denoted with the upscript ()^*. All flow quantities are non-dimensionalized using the reference length L^* = 0.0387 mm (the boundary layer scale at X^* = 10 mm), the freestream velocity, density and temperature. §.§ Base flow calculations A well-validated in-house code <cit.> was used to solve the unsteady compressible Navier-Stokes equations. The grid resolution is N_ξ× N_η× N_ζ = 501×301×301 for the first two cases and N_ξ× N_η× N_ζ = 751×301×361 for the last case (6 deg AoA), where N_ξ,N_η,N_ζ is the grid number in the streamwise, wall-normal and azimuthal direction, respectively. The grid is stretched in the wall-normal and streamwise directions so that sufficient (at least 100) points are in the boundary layer and in the vicinity of the nose part, and is furthermore aligned with the leading-edge shock to diminish the numerical oscillations from the shock layer. Only half part of the flow field is simulated by exploiting the azimuthal symmetry. The inviscid fluxes are computed by using a third-order WENO scheme, while the viscous fluxes are discretized using a fourth-order central-difference scheme. The time integration is performed using a third-order Runge-Kutta scheme. The grid convergence has been tested by increasing the grid points along the azimuthal and wall-normal directions. The flow fields are illustrated by the surface pressure distribution and the axial velocity distribution at several cross sections in figure <ref>. As expected, the azimuthal pressure gradient is present, driving the fluid towards the leeward region. Increasing the angle of attack strengthens the pressure gradient, leading to a more prominent vortical structure in the vicinity of the leeward ray. §.§ Governing equations of disturbances We concentrate on the linearized compressible Navier-Stokes equations describing the evolution of infinitesimal perturbations in a 3-D geometry with two rapidly varying (wall-normal and azimuthal) directions and one slowly varying (streamwise) direction. Decompose the flow state Q≡ (ρ,u,v,w,T)^tr into the base flow q̅≡ (ρ̅,u̅,v̅,w̅,T̅)^tr plus the infinitesimal disturbances q' = (ρ', u', v', w', T')^tr, where tr stands for the transpose and u,v,w describe the velocity components in the streamwise, wall-normal and azimuthal direction, respectively. 
The set of governing equations is then conveniently written in the form ∂q'/∂ t = ℒ q' + f, where ℒ is the linearized Navier-Stokes operator; f denotes the external forcing. Our intention is to find a set of linear governing equations that is parabolic in nature. Such a system can be solved efficiently by employing marching techniques, and allows for extensive parametric studies. To this end, we can take the PSE ansatz since the base flow varies slowly in the streamwise direction q'(ξ,η,ζ,t) = q̂(ξ,η,ζ)exp(i∫_ξ_0^ξα(s)ds-iω t) + c.c., where c.c. designates the complex conjugate part. The real, streamwise varying wavenumber α(ξ) is determined iteratively by α_new = α_old + Im(1/E∂ E/∂ξ), with E denoting the total disturbance energy defined by <cit.>, so that the streamwise oscillation of the disturbance is mainly absorbed in the phase part; Im(·) is the imaginary part. Consequently, the second order, viscous derivative terms of the amplitude function in ξ can be neglected. Introduction of the ansatz (<ref>) in (<ref>) yields the plane-marching parabolized stability equations (PSE3D) 𝒫q̂≡(𝒜 + ℬ∂/∂ξ)q̂ = f̂, where 𝒫 denotes the parabolized linearized Navier-Stokes operator (as detailed in <cit.>) and f̂ is the Fourier component of the forcing at frequency ω. It is well known that some streamwise ellipticity remains in the streamwise pressure gradient term ∂p̂/∂ξ and may cause the numerical instability. To eliminate the residual ellipticity, the streamwise pressure gradient term is either dropped from the equations as justified in <cit.> and <cit.> for stationary disturbances or replaced by Ω_PNS∂p̂/∂ξ, where Ω_PNS is the Vigneron parameter <cit.>, for non-stationary disturbances. As criticized by <cit.>, equation (<ref>) constrains the disturbances to have a unique, local wavenumber at any streamwise location such that PSE are likely to encounter difficulties when the perturbation field includes multiple modes with disparate streamwise wavenumbers or wavelengths. <cit.> have shown that the errors introduced by the regularization techniques of PSE are proportional to the differences of the wavenmubers (or equivalently the distance between modes in the spectrum). Nevertheless, our local transient analysis results in section <ref> indicate that the optimal growth perturbations primarily comprise modes concentrating in a small portion of the entire spectrum as is also observed by <cit.>. This is reasonable since disturbances with close wavenumbers or phase velocities could efficiently interfere to achieve the optimal non-modal growth. Therefore, the absolute differences of the wavenumbers of these significant modes, and the errors pertaining to PSE shall be small. More detailed verifications can be found in Appendix <ref>. §.§ Optimality system We seek the optimal response to the inlet disturbance or the external forcing. This problem can be readily formulated in an input-output framework below, 𝒫q̂ = 𝒞f̂, ψ̂ = 𝒟q̂, where f̂ denotes the input; ψ̂ designates the output; 𝒞 and 𝒟 are projectors (i.e., 𝒞𝒞=𝒞, 𝒟𝒟=𝒟) specifying how the input enters into the state equation and how the output extracts from the state q̂. Define, respectively, the inner product and the energy-weight inner product (q̂,q̂) ≡∫q̂^†q̂dΩ, (q̂,q̂)_E ≡∫q̂^† Mq̂dΩ, (f̂,f̂)_F ≡∫f̂^† Qf̂dΩ, where M ≡diag(T̅/ρ̅γ Ma^2,ρ̅,ρ̅,ρ̅,ρ̅/γ(γ-1)Ma^2T̅) represents the response energy weight, Q the forcing energy weight, Ω the domain where the shape function resides, † the transpose conjugate quantities. 
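For concreteness, the sketch below shows how the energy-weighted inner product defined above can be evaluated once the state is discretized on an (η,ζ) cross-section with quadrature weights. The grid size, the array layout and the use of NumPy are illustrative assumptions and not part of the solver described here.

```python
# Minimal sketch (not the authors' code): discrete energy inner product
# (q, q)_E = ∫ q^H M q dΩ with M = diag(Tbar/(rhobar*gamma*Ma^2), rhobar, rhobar,
# rhobar, rhobar/(gamma*(gamma-1)*Ma^2*Tbar)) evaluated on an (eta, zeta) grid.
import numpy as np

gamma, Ma = 1.4, 6.0   # ratio of specific heats (assumed) and freestream Mach number

def energy_inner_product(qa, qb, rho_bar, T_bar, w_eta, w_zeta):
    """qa, qb: complex disturbance arrays of shape (5, Neta, Nzeta) ordered
    (rho', u', v', w', T'); w_eta, w_zeta: 1-D quadrature weights."""
    M = np.stack([T_bar / (rho_bar * gamma * Ma**2),            # weight on rho'
                  rho_bar, rho_bar, rho_bar,                     # weights on u', v', w'
                  rho_bar / (gamma * (gamma - 1.0) * Ma**2 * T_bar)])  # weight on T'
    integrand = np.sum(np.conj(qa) * M * qb, axis=0)             # pointwise q^H M q
    dOmega = np.outer(w_eta, w_zeta)                             # quadrature area weights
    return np.sum(integrand * dOmega)

# tiny usage example on a coarse grid with uniform weights
Neta, Nzeta = 4, 3
rho_bar = np.ones((Neta, Nzeta)); T_bar = np.ones((Neta, Nzeta))
q = np.random.randn(5, Neta, Nzeta) + 1j*np.random.randn(5, Neta, Nzeta)
E = energy_inner_product(q, q, rho_bar, T_bar,
                         np.full(Neta, 1.0/Neta), np.full(Nzeta, 1.0/Nzeta))
print(E.real)  # disturbance energy: real and non-negative because M is positive diagonal
```

The forcing norm (f̂,f̂)_F is evaluated in the same way, with the forcing weight Q in place of M.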
Two specific cases of 𝒞 and 𝒟 are considered in this paper. For the first case, we restrict the input as an impulse forcing at the inlet and the output as the state variables at the outlet such that the input and output behave like a Dirac function 𝒞f̂ = 0 except at the inlet, ∫_x_0^x_1𝒞f̂𝐝x = f̂(x_0), 𝒟q̂ = 0 except at the outlet, ∫_x_0^x_1𝒟q̂𝐝x = q̂(x_1) This case corresponds to the optimal response to the inlet perturbation. In the second case, we prescribe 𝒞 = 𝒟 = ℐ (the identity operator) to retrieve the classical input-output problem where the forcing and response are distributed in the entire domain. The input and output can be related by the transfer (resolvent) operator ℋ as ψ̂ = 𝒟𝒫^-1𝒞f̂≡ℋf̂. For the sake of simplicity, we assume q̂ to be a discretized vector such that we can neglect the integral symbol and assume that the forcing energy weight is the same as the response energy weight. As a result, the energy gain between the output and input reads G=(ψ̂,ψ̂)_E/(f̂,f̂)_F = f̂^†ℋ^†Mℋf̂/f̂^† Mf̂ = f̂^† F^† F^† -1ℋ^† F^† Fℋ F^-1 Ff̂/f̂^† F^† Ff̂, where M≡ F^† F. The maximum energy gain reads G_max = (f̂^† F^† F^† -1ℋ^† F^† Fℋ F^-1 Ff̂/f̂^† F^† Ff̂)_max = || Fℋ F^-1||_2^2 = σ_1^2( Fℋ F^-1_ A). σ_1^2( A) denotes the square of the maximum singular value of matrix A, which equals the largest eigenvalue of the matrix A^† A, and is readily solved by the power iteration with an arbitrarily given initial guess. §.§.§ Variational formulation In practical computation, the above compact formulation (<ref>) for optimal gain is usually adopted in local or two-dimensional temporal transient growth analysis when the matrix representation for ℋ is available. For global non-modal analysis, the matrix-free method is preferred and it is customary to reformulate the optimal problem in a variational framework to facilitate calculation. The Lagrangian functional for this problem reads J = (𝒟q̂,𝒟q̂)_E - Re(p̂,𝒫q̂-𝒞f̂) - λ((𝒞f̂,𝒞f̂)_F-1), where p̂, the Lagrange multiplier, is referred to as the adjoint variable; λ is another Lagrange multiplier implementing the restriction on the input magnitude; Re(·) is the real part. At the stationary point of the Lagrangian functional, we have, aside from the state equation (<ref>) and the input energy constraint (𝒞f̂,𝒞f̂)_F = 1, the adjoint equation 𝒫^†p̂ = 𝒟 M𝒟q̂, and the optimal condition 𝒞p̂ = λ Q𝒞f̂, which implies that the optimal forcing should be aligned with the adjoint state. 𝒫^† is the adjoint operator such that (p̂, 𝒫q̂) = (𝒫^†p̂,q̂). Substitute (<ref>) into (<ref>) to eliminate the forcing term f̂, we obtain 𝒫q̂ = 1/λ Q^-1𝒞p̂. Eqs. (<ref>) and (<ref>) form an eigenvalue problem which can be written as a single eigenvalue equation 𝒫^†^-1𝒟M𝒟𝒫^-1 Q𝒞p̂ = λp̂, where the leading eigenvalue λ is exactly the optimal gain to the optimal forcing, and can be readily obtained by iteratively marching the direct and adjoint equations up to a converged state. For the optimal response to the inlet perturbation, we can derive an equivalent formulation that is more computationally tractable. Since 𝒞, 𝒟 satisfy (<ref>), we can take integral with respect to ξ in (<ref>) and let the outlet location ξ_1 approach the inlet location ξ_0, such that the forcing only enters into the inlet profile of the state variable, q̂_0, and is null downstream. As such, Lagrangian functional (<ref>) can be reformulated as J = (q̂_1,q̂_1)_ES - Re(p̂,𝒫q̂) - λ((q̂_0,q̂_0)_ES-1), where q̂_1 denotes the state profile at the outlet, ()_ES the energy-weight inner product over either the inlet or outlet. 
At the stationary point of J, we have the direct and adjoint equations 𝒫q̂ = 0, 𝒫^†p̂ = 0, wherein 𝒫^†≡𝒜^† - ℬ^†∂/∂ξ is the adjoint PSE3D (APSE3D) operator, and the optimal condition ℬ^†p̂_1 = - Mq̂_1, ℬ^†p̂_0 = λ Mq̂_0. Again, the optimal growth can be readily obtained by iteratively solving the direct and adjoint equations with the optimal condition providing the inlet profiles (given an arbitrary initial guess). The optimal N-factor quantifying the integrated growth rate of the optimal disturbance is then defined as N = 1/2ln(λ). The boundary conditions for the adjoint variables (p̂ = (ρ̂_p,û_p,v̂_p,ŵ_p,T̂_p)^tr) are chosen so that the irrelevant boundary terms arising from the integration by parts vanish. At the wall, we have û = v̂ = ŵ = T̂ = 0, ρ̂_p = û_p = v̂_p = ŵ_p = T̂_p = 0. At the upper boundary, we set û = v̂ = ŵ = T̂ = 0, û_p = v̂_p = ŵ_p = T̂_p = 0. Symmetry boundary conditions are implemented at the leeward and windward ray. Fourth-order finite difference scheme is used to discretize the azimuthal and wall-normal directions, while second-order Euler backward scheme is applied in the streamwise direction. The grid resolution of 200×120 for the azimuthal and wall-normal directions is used for the cases with 2 deg and 4 deg angles of attack, while the azimuthal points are increased to 300 for the largest AoA case. The solution was tested by changing the number of grid points and it was found to be grid-independent. Generally, four iterations suffice to get the converged results with the relative error regarding the optimal gain smaller than 10^-3. §.§.§ Wavemaker One concept that we use in our subsequent analysis is that of structural sensitivity and the wavemaker <cit.>. The wavemaker has its origin in global stability analysis and provides a way to highlight regions in which the global modes are most sensitive to the flow field changes (specifically, a localised flow feedback). Analogously, the resolvent wavemaker <cit.> suggests the regions where flow field variations affectively change the optimal gain. Following the methodology of <cit.>, we consider base-flow perturbation of operators in (<ref>) and (<ref>), which reads δ𝒫^†p̂ + 𝒫^†δp̂ = 𝒟δ M𝒟q̂ + 𝒟 M𝒟δq̂, δ𝒫q̂ + 𝒫δq̂ = -δλ/λ^2 Q^-1𝒞p̂ + 1/λ Q^-1𝒞δp̂+ 1/λδ( Q^-1)𝒞p̂. Left multiplying (<ref>) by q̂^† and (<ref>) by p̂^†, respectively, yields (q̂,δ𝒫^†p̂) + (q̂,𝒫^†δp̂) = (q̂,𝒟δ M𝒟q̂) + (q̂,𝒟 M𝒟δq̂), (p̂,δ𝒫q̂) + (p̂,𝒫δq̂) = -(p̂,δλ/λ^2 Q^-1𝒞p̂) + (p̂,1/λ Q^-1𝒞δp̂) + (p̂,1/λδ Q^-1𝒞p̂). Summing (<ref>) and (<ref>) to eliminate the terms associated with the perturbed shape functions, i.e., δq̂ and δp̂, by employing the direct and adjoint equations (<ref>) and (<ref>), leads to (q̂,δ𝒫^†p̂) + (p̂,δ𝒫q̂) = (q̂,𝒟δ M𝒟q̂) + (p̂,1/λδ Q^-1𝒞p̂) -(p̂,δλ/λ^2 Q^-1𝒞p̂). By substituting (<ref>) into (<ref>), and remembering the projector feature and the forcing amplitude constraint, (<ref>) can be further simplified to 2Re(p̂, δ𝒫q̂) = -δλ + (q̂,δ𝒟 M𝒟q̂) + (p̂,1/λδ Q^-1𝒞p̂), or equivalently 2Re(f̂, δ𝒫q̂) = -δλ/λ + 1/λ(q̂,δ𝒟 M𝒟q̂) + (p̂,1/λ^2δ Q^-1𝒞p̂). Therefore, we obtain the changes of the optimal gain to perturbation of base flow as δλ = -2 Re(p̂, δ𝒫q̂) + (q̂,δ𝒟 M𝒟q̂) + (p̂,1/λδ Q^-1𝒞p̂) = -2 Re(f̂, δ𝒫q̂)λ + (q̂,𝒟δ M𝒟q̂)+ (p̂,1/λδ Q^-1𝒞p̂). The terms (q̂,𝒟δ M𝒟q̂) and (p̂,1/λδ Q^-1𝒞p̂) simply reflect the changes related to the energy weight, while the first term of the right hand side indicates that the important structural-sensitivity region corresponds to the overlap regions of the adjoint variable (forcing) and the response. 
By neglecting the trivial terms pertaining to the energy weight, we can derive a bound for the gain drift under a unit-norm perturbation δ𝒫 = 1 as |δλ| ≤ 2∫q̂p̂dΩ = 2λ∫q̂f̂dΩ, where · should be understood as the pointwise norm of the mode. The wavemaker Λ is then defined as Λ≡q̂p̂ = λq̂f̂. In this way, the wavemaker Λ(ξ,η,ζ) can be thought of as a field quantifying what changes to the optimal gain occur from localised feedback at each location. It can also be interpreted as a mechanism for self-sustained oscillations in the flow, since the optimal forcing (adjoint state) shows regions receptive to external perturbations and the optimal response reveals the structures being excited due to the forcing. § RESULTS FOR THE CASE OF 2 DEGREES ANGLE OF ATTACK In this section, we will present results on the case of 2 degrees AoA, scrutinizing the salient features of the non-modal growth over a truly 3-D boundary layer, whereupon the results on the other AoA cases would be briefly discussed in the next section. As mentioned above, the crossflow modes are highly sensitive, which can be best illustrated by the pseudospectrum σ_ϵ( A) <cit.>. The pseudospectrum is defined by the set of eigenvalues (z∈ℂ) of a perturbed matrix (A) such that z∈σ( A+ẽ), for some ẽ with ẽ<ϵ. The pseudospectrum can also be characterized alternatively by the resolvent norm as (z- A)^-1>ϵ^-1, which quantifies physically the maximum response of the flow to any forcing. When A is non-normal, the resolvent norm may retain extremely large values even far from the spectrum of A. A representative pseudospectrum visualized by contours of the resolvent norm in the logarithmic scale for frequency of 20 kHz at X^* = 100 mm is shown in figure <ref>(a). Many unstable modes can be observed, as expected, and they almost all lie within the contour line of (z- A)^-1 = 10^7, meaning that one can not exactly distinguish any of these unstable modes under perturbations as small as 10^-7. Such a high sensitivity of the eigenvalues obviously stems from the non-normality of the operator which can in turn cause a substantial transient growth. The non-normality of the operator can be further inferred by inspecting figure <ref>(b,c), showing the spatial distribution of a typical direct-adjoint mode pair. The direct mode manifests as a tilted wavepacket structure lying on the leeward region, whereas the adjoint mode leaning in the opposite direction resides on the windward side. Such a clear separation in space between direct and adjoint modes indicates high sensitivities of the eigenvalues and a strong azimuthal non-normality induced by crossflow <cit.>. §.§ Optimal response to inlet perturbation As a first step we perform a parametric study on the optimal growth regarding the inlet/outlet positions and the disturbance frequency. The results are displayed in figure <ref> from which we can make several observations. First, the optimal disturbances undergo two distinct evolution stages: they initially experience a short-term, yet substantial transient growth (achieving growth of δ N≈4 within about 20 mm)(stage I) followed by a relatively long-term moderate growth (stage II). The disturbance frequency seemingly affects the second stage only. The most amplified frequency is around 20 kHz whose amplitude evolution of the direct and adjoint variables are shown in figure <ref>(b). It can be seen that the temperature fluctuation amplitude rapidly grows and soon becomes dominant, followed by the streamwise velocity component (û). 
In contrast, the adjoint variables exhibit an opposite trend, suggesting a strong receptivity of the response to streamwise vortices (v̂, ŵ) in the vicinity of the inlet. Second, there exists an optimal initial position given a frequency and an outlet position where the seeded disturbance can amplify maximally. For frequency of 20 kHz, the optimal inlet location is at around X^* = 40 mm. At last, extending the outlet position with a given initial position does not modify the path but only raises the final energy gain reached by the disturbance. This can be explained by the wavemaker position. By definition (<ref>), the wavemaker should lie in the overlap region of the direct and adjoint states. Since the adjoint variable amplitude decays at a smaller rate in the vicinity of the inlet than the growth rate of the direct mode near the outlet, the wavemaker concentrates in a small region downstream the inlet. Therefore, the optimal disturbance characteristics would be substantially preserved as long as the instability core (wavemaker) is included inside the spatial interval. The parametric effects on the optimal disturbance profiles at the inlet and outlet locations are illustrated in figure <ref>. One can observe that the optimal disturbances all initially manifest as a tilted wavepacket lying preferentially on the windward side, very reminiscent of the shape function of a typical adjoint mode as displayed in figure <ref>. At the outlet position, they all tilt in the opposite direction and reside on the leeward side like a crossflow mode. With increasing disturbance frequency, the wavepacket is initially more tilted and ultimately resides closer to the side ray (ϕ = 90 deg). On the other hand, the initial wavepacket progressively approaches the windward ray for a more upstream inlet, yet appears to be insensitive to the outlet location. This observation is important because one could obtain the optimal initial profile for even the entire model through performing optimal calculation over a relatively small domain where the wavemaker is strong. Figure <ref> shows the spatial structures of direct and adjoint states for three frequencies. We observe that the direct disturbance starts on the windward side and moves towards the leeward side, exhibiting a triangle-like amplitude envelope embodying several bent streaks inside; with increasing frequency, the core part of the spatial structure moves progressively towards the windward side, and the streaks are more prominently tilted. On the other hand, the adjoint fluctuation, quantifying the high-receptivity region, lies preferentially on the windward side close to the inlet, exhibiting wave angles with respect to the streamwise direction opposite to those of the direct disturbances. Such remarkable spatial separation of the direct and adjoint states is indicative of the streamwise as well as azimuthal non-normality <cit.>. Interestingly, the wavemaker of whatever frequency, marking the structural sensitivity region, manifests as a longitudinal structure slightly downstream from the inlet on the windward side. Consequently, one may expect to employ some control strategy in the wavemaker region for the crossflow instabilities with any frequency. The area occupied by the amplitude envelope is obviously proportional to the N-factor of the disturbance. 
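Before quantifying the amplitude envelopes in terms of N-factors, we note that the wavemaker fields discussed above are simply the pointwise product of the direct- and adjoint-state moduli and can therefore be assembled directly from the two converged solutions. The sketch below is illustrative only; the array shapes and the 0.5 threshold used to delimit the sensitivity core are assumptions made for demonstration.

```python
# Minimal sketch (illustrative array layout): wavemaker field Lambda = |q_hat| * |p_hat|
# built from converged direct and adjoint states of shape (5, Nxi, Neta, Nzeta).
import numpy as np

def wavemaker(q_hat, p_hat):
    q_mag = np.linalg.norm(q_hat, axis=0)     # pointwise modulus over the five components
    p_mag = np.linalg.norm(p_hat, axis=0)
    return q_mag * p_mag                       # large values mark high structural sensitivity

def sensitivity_footprint(Lam, threshold=0.5):
    """Surface footprint of the wavemaker core: collapse the wall-normal axis and keep
    the (xi, zeta) locations exceeding a fraction of the global maximum (the 0.5 level
    is an arbitrary illustrative choice)."""
    surface = Lam.max(axis=1)                  # shape (Nxi, Nzeta)
    return surface >= threshold * Lam.max()    # boolean mask of the candidate control region

# tiny usage example with random fields
q = np.random.randn(5, 8, 6, 10) + 1j*np.random.randn(5, 8, 6, 10)
p = np.random.randn(5, 8, 6, 10) + 1j*np.random.randn(5, 8, 6, 10)
mask = sensitivity_footprint(wavemaker(q, p))
```

A footprint of this kind is what motivates placing a control device slightly downstream of the inlet on the windward side, as suggested above.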
By analogy with the classical eN method built on 1-D stability analysis (LST), we define the local N-factor distribution on the cone surface based on the temperature disturbance amplitude as N_T(ξ,ζ) = ln(max_η |T̂(ξ,η,ζ)|/max_η,ζ|T̂(ξ_0,η,ζ)|). Note that a similar formulation was adopted by <cit.> with wall pressure as the alternative measure of disturbance amplitude. As such, the amplitude envelope of the optimal disturbance provides an upper bound on the transition front pertaining to a given N-factor in the eN framework. Figure <ref> compares the transition fronts corresponding to N_T = 5, 8 and 11 (as long as the targeted N_T can be reached) predicted by the non-modal analysis and by modal analyses based on LST (LST-eN) and on PSE3D tracing every unstable global mode of 20 kHz at X^* = 100 mm downstream (PSE3D-eN). The first N-factor (N_T = 5) is typically associated with laminar-turbulent transition of stationary crossflow vortices in the incompressible regime <cit.>, while the second one (N_T = 8) is close to the transition N-factor of traveling crossflow waves on the Mach 6 HyTRV model <cit.>. Evidently, the optimal disturbance amplitude envelopes extend significantly upstream of the LST-eN envelope at the same N-factor, as well as of the PSE3D-eN envelope even at a much smaller N-factor (N = 2.3), highlighting the appreciable non-modal growth. Nevertheless, the modal analyses also predict the predominance of the 20 kHz waves, suggesting that non-modal growth does not change the peak frequency. Another interesting observation is that the core part of the downstream optimal structure matches well with the most unstable region predicted by the LST-eN method, but lies remarkably far from the leeward ray compared to the global modal amplitude envelope. This implies that the optimal disturbance does exploit the crossflow instability, yet not in the same manner as a global crossflow mode does, which will be discussed in detail later. §.§.§ Growth mechanism The non-modal growth mechanisms can be inferred from the downstream evolution of the velocity profiles shown in figure <ref>. First, we observe an azimuthal shift of the disturbance structure from the windward side to the leeward side as the optimal disturbance travels downstream. Such a spatial movement originates in the azimuthal non-normality <cit.>, and is well known for the temporal optimal disturbance in 2-D boundary layers, which, constructed as a superposition of streamwise global modes, is convected from upstream to downstream <cit.>. As opposed to the 2-D cases, where the optimal disturbance gains energy through the T-S instability mechanism during the upstream-downstream propagation, the crossflow instability mechanism is mainly responsible for the disturbance energy growth during the present azimuthal movement. Moreover, the tilted direction of the disturbance structures changes rapidly at the first three stations, suggesting the Orr mechanism <cit.>. Following the analysis of <cit.>, the Orr mechanism in the present case is rooted in the production term, -Re(ŵ^†v̂)∂w̅/∂η, stemming from the azimuthal momentum equation. When this term is positive, or equivalently when the disturbance structure is tilted against the shear of the crossflow, the disturbance is able to gain energy from the mean flow; otherwise, the disturbance loses energy. Figure <ref> reveals that the crossflow profile exhibits two distinct regions across the boundary layer with opposite signs of the shear term (∂w̅/∂η).
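To make the sign bookkeeping concrete, the sketch below evaluates this production term on a wall-normal profile and splits its integral into the contributions of the two shear regions just mentioned. The array layout and the use of NumPy are assumptions for illustration, not part of the authors' solver.

```python
# Minimal sketch (assumed 1-D wall-normal profiles): azimuthal-momentum production term
# P_Orr(eta) = -Re( conj(w_hat) * v_hat ) * d(w_bar)/d(eta), split by the sign of the
# crossflow shear to see which of the two regions feeds energy into the disturbance.
import numpy as np

def orr_production(v_hat, w_hat, w_bar, eta):
    dwbar_deta = np.gradient(w_bar, eta)                   # crossflow shear across the layer
    P = -np.real(np.conj(w_hat) * v_hat) * dwbar_deta      # local energy production
    # integrated contributions from the two regions of opposite crossflow shear
    pos_shear = np.trapz(np.where(dwbar_deta >= 0, P, 0.0), eta)
    neg_shear = np.trapz(np.where(dwbar_deta < 0, P, 0.0), eta)
    return P, pos_shear, neg_shear
```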
Consequently, for a given tilt of the disturbance structure, energy gained in the upper shear region is accompanied by energy returned to the base flow in the lower region. At the initial position, the disturbance structure extends well into the free stream and is tilted against the upper shear, favoring the energy gain. The slanted structure soon turns upright at the next station and tilts in the opposite direction further downstream. Beyond the second station, the disturbances penetrate more deeply into the boundary layer, occupying the two shear regions of the crossflow almost equally, and the total energy transfer pertaining to the Orr mechanism should thus be smaller thereafter. Figure <ref> clearly illustrates how the optimal profile initiates a crossflow-instability wavepacket, from which an important insight into the physical mechanisms in stage I can be gained. It can be seen that the optimal profile gives rise, immediately downstream, to alternating positive and negative longitudinal vortical rolls with axes aligned along the axial direction, which are well known in shear-layer flows to leverage the lift-up effect optimally. The lift-up effect can also be inferred from the observation that the initially dominant vortex components (v̂,ŵ) are soon overtaken in amplitude by the streak component (û), as displayed in figure <ref>(b). Despite weakening rapidly due to viscous dissipation, these longitudinal vortices induce counter-rotating vortices below, forming two-cell vorticity structures with axes tilted with respect to the axial direction. Such vorticity structures are no longer optimal with regard to the lift-up effect because of the counteracting forces from the two opposite-orientation vorticity layers, yet they are typical of crossflow instabilities <cit.>, of which they can take advantage to sustain the downstream growth. At the same time, the apparent change of the vorticity inclination in the cross section, as shown in figure <ref>(b-d), is indicative of the Orr mechanism, which can enhance the lift-up effect as well as the near-wall vorticity induction by strengthening the primary vorticity. This vortex evolution process is representative of optimal disturbances, the quantitative difference being that the inclination angle of the two-cell vorticity structures increases with the frequency. From the observations made so far we may thus conclude that the non-modal growth of the optimal disturbance in stage I primarily stems from a synergistic combination of the lift-up effect and the Orr mechanism, consistent with the optimal growth results for the infinite swept-wing boundary layer (SWBL) <cit.>. In the following, we consider whether the crossflow-instability wavepacket initiated by the optimal profile is governed by a dominant unstable mode, as in the SWBL <cit.>. Figure <ref>(a) compares the N-factor of the optimal disturbance of 20 kHz with those calculated by PSE3D with initial profiles provided by different crossflow modes (see figure <ref>(a) for the spectrum) at X^* = 100 mm (deemed close to the neutral location). Obviously, the optimal disturbance in stage II amplifies at a substantially larger rate than the modal disturbances do, indicating that it cannot be described properly by a single mode. This is not surprising because the global crossflow instability spectrum shown in figure <ref>(b) contains multiple unstable modes without a predominant one, in stark contrast to the SWBL case where generally only one unstable crossflow mode exists for a given frequency and wavenumber.
The (linear) interference among these similarly-looking modes may yield a much more rapid energy growth than the most unstable mode, which could have been exploited by the optimal disturbance. To justify such speculation, we have conducted the local transient growth analysis, seeking an optimal combination of modes that reaches the maximum energy growth at the specific distance downstream, for two representative stations. In the framework of the local transient analysis, the disturbance is expanded into n eigenmodes q'(t,ξ,η,ζ) = exp(-iω t)∑_k=1^nκ_kq̂_k (η,ζ)exp(iα_kξ). The vector-function q̂_k corresponds to the kth eigenfunction. The coefficients κ_k are optimized to achieve the maximum energy growth at the specific downstream coordinate. The maximum energy gain can be cast as the same form as (<ref>). Thanks to the ansatz (<ref>), the resolvent or transfer matrix ℋ can be put explicitly as follows <cit.>: ℋ = diag(exp(iα_1ξ),exp(iα_2ξ),…,exp(iα_nξ)), and accordingly, the entries of the weighting matrix M_k,l = (q̂_k,q̂_l)_E. The resulting maximum energy gain (<ref>) can be determined by using the singular value decomposition. The spectrum used in calculating the local optimal energy gain for X^* = 134 mm is displayed in figure <ref>(b). It contains a total of 208 modes, but only a small portion of the spectrum primarily comprising significantly unstable modes plays an important role. The corresponding optimal energy gain, shown in figure <ref>(a) with an artificial upward shift to facilitate the comparison, clearly reveals the initial algebraic-growth stage and the ensuing quasi exponential-growth stage. It should be noted that the local transient growth analysis fails to capture initial energy gain due to the Orr and lift-up mechanisms which might need much more modes than presently used <cit.>. Nevertheless, the growth rate pertaining to the quasi exponential-growth stage is substantially larger than that of the most unstable mode, and bears a striking resemblance to that of the global optimal disturbance for an appreciably long distance. The optimal energy gain based on few most unstable modes (delimited by the dashed box) exhibits a diminishing algebraic growth, yet quantitatively the same exponential growth as the fuller spectrum. This indicates that the quasi exponential-growth stage is governed by the few most unstable modes while the less unstable modes would affect the initial algebraic-growth stage only. Such observations seem to hold at other stations as well, justifying the speculation that the optimal disturbance may have exploited the interference among few most unstable modes locally to achieve the global optimal growth. Indeed, the global transient analysis on 2-D flow configurations has shown that superposition of non-normal global (even decay) modes recovers the convective (exponential) instability <cit.>. The difference is that the disturbances are allowed to amplify only in the streamwise direction in the 2-D flows, whereas they can gain energy along both the streamwise and azimuthal directions in the present 3-D configuration. The above results reveal three active mechanisms beneath the optimal growth, i.e., the Orr mechanism, the lift-up effect, and the crossflow instability mechanism manifesting both in the streamwise and azimuthal directions. A significant insight regarding the role played by these mechanisms can be gained from the typical convergence process of the optimal growth, as displayed in figure <ref>. 
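Before turning to that convergence history, we note for completeness that the local transient-growth gain used above reduces to a small dense-matrix computation once the local eigenmodes are available. The sketch below assumes that the complex streamwise wavenumbers α_k and the Hermitian energy Gram matrix M_kl = (q̂_k,q̂_l)_E have already been assembled; it is illustrative rather than the actual implementation.

```python
# Minimal sketch (illustrative): local transient growth from a set of eigenmodes.
# Inputs: alphas (complex streamwise wavenumbers of k modes) and M (k x k Hermitian
# positive-definite matrix of energy inner products M_kl = (q_k, q_l)_E).
# G(xi) = sigma_1^2( F H(xi) F^{-1} ) with M = F^H F from a Cholesky factorisation.
import numpy as np

def local_transient_gain(alphas, M, xi):
    F = np.linalg.cholesky(M).conj().T          # upper-triangular factor with M = F^H F
    Finv = np.linalg.inv(F)
    gains = []
    for x in xi:
        H = np.diag(np.exp(1j * alphas * x))    # streamwise propagator of each mode
        s = np.linalg.svd(F @ H @ Finv, compute_uv=False)
        gains.append(s[0]**2)                   # optimal energy gain at distance x
    return np.array(gains)
```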
In this convergence history, we observe that a regular disturbance concentrated near the leeward ray already emerges from the random initial profile after just one forward-backward iteration, recovering all the aforementioned mechanisms and yielding nearly 80 percent of the final optimal energy gain (figure <ref>(b)): its streamwise velocity component is much smaller in amplitude than the vortex components, an indicator of the lift-up effect; the slanted structures in the η-ζ plane suggest the Orr mechanism; finally, the outlet profile, exhibiting typical crossflow-instability features, points to the crossflow-instability mechanism. After one more iteration, the inlet disturbance structure leans entirely against the crossflow shear such that it can take full advantage of the Orr mechanism; moreover, the spatial separation is more pronounced, enhancing the disturbance energy gain along the azimuthal direction through the crossflow instability. Further iterations drive the inlet disturbance progressively towards the windward side, yet barely improve the optimal gain, indicating that the azimuthal crossflow-instability growth is relatively weak compared with the other mechanisms. In summary, the iteration process tends first to single out the lift-up effect and the streamwise crossflow instability mechanism, which can thus be viewed as most important for the optimal growth, and then fully exploits the Orr and azimuthal crossflow instability mechanisms to achieve the maximum amplification. §.§.§ High-frequency optimal growth In addition to the low-frequency optimal disturbances, we also calculated optimal disturbances at high frequencies, which exhibit totally different features from the former. Figure <ref> depicts the evolution of the N-factors for frequencies ranging from 100 kHz to 800 kHz over a specified streamwise interval, together with the temperature contour of one representative frequency. Moderate transient growth, albeit much weaker than the low-frequency optimal growth, is experienced by broadband disturbances far from the modal instability frequencies (around 1 MHz for Mack-mode instabilities). The relative differences of the peak N-factors among the various frequencies are less than 1, indicating a relatively flat amplification across frequencies. The optimal shape function, exemplified by the frequency of 200 kHz, concentrates in the region between the boundary-layer edge and the entropy-layer edge on the windward side throughout the entire streamwise interval. Following the work of <cit.>, the entropy-layer edge, δ_S, is defined as the location where the local entropy increment is 0.25 times the entropy increment at the wall (ΔS̅(η=δ_S) = 0.25ΔS̅_wall). The entropy increment, ΔS̅, is defined with respect to the freestream value, i.e., ΔS̅ = 1004 ln (T̅)-287ln (P̅). The fact that the disturbance remains in the vicinity of the windward ray indicates that the aforementioned azimuthal shift of the low-frequency optimal disturbances is solely due to the crossflow instability, which is absent here. The mode shapes at the inlet and outlet positions exhibit oppositely slanted structures in both the azimuthal and streamwise directions, suggesting an Orr-like mechanism responsible for the transient growth. As opposed to the classical Orr mechanism, which mainly amplifies the velocity components, the temperature component of the high-frequency perturbation is the prime beneficiary of the algebraic amplification (see figure <ref>(b,c)), with the temperature gradient within the entropy layer acting as the driving factor.
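The entropy-layer edge used here can be extracted from the mean profiles with a few lines of post-processing. The sketch below is illustrative only; it assumes wall-normal profiles stored with the wall at the first index and a monotonically decaying entropy increment away from the wall.

```python
# Minimal sketch (assumed inputs): entropy-layer edge delta_S, defined as the wall-normal
# location where the entropy increment drops to 0.25 of its wall value, with
# Delta_S = 1004*ln(T/T_inf) - 287*ln(P/P_inf) (dimensional cp and R of air).
import numpy as np

def entropy_layer_edge(eta, T_bar, P_bar, T_inf, P_inf):
    dS = 1004.0*np.log(T_bar/T_inf) - 287.0*np.log(P_bar/P_inf)  # entropy increment profile
    target = 0.25*dS[0]                       # 0.25 times the wall value (eta[0] is the wall)
    i = int(np.argmax(dS <= target))          # first station below the threshold
    # linear interpolation between the two bracketing points
    return eta[i-1] + (eta[i]-eta[i-1])*(target-dS[i-1])/(dS[i]-dS[i-1])
```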
Specifically, the optimal disturbance extracts energy from the mean temperature shear in the wall-normal and azimuthal directions through the terms -Re(T̂^†v̂)∂T̅/∂η and -Re(T̂^†ŵ)∂T̅/∂ζ, respectively. Finally, the spatial evolution of the optimal disturbance at 200 kHz is illustrated in figure <ref>(e), featuring a wave-train structure in the vicinity of the windward ray with a preferential wave angle on either side. Such a wave structure is reminiscent of that of the Mack-mode instability on a similar configuration <cit.>, and is thus presumed to trigger the boundary-layer breakdown via the same mechanism, i.e., the (generalized) fundamental resonance. It is noteworthy that <cit.> have also discovered entropy-layer traveling disturbances that experience appreciable non-modal growth on a hypersonic blunt cone at zero AoA. They <cit.> later showed that a pair of oblique non-modal waves could induce the onset of transition via oblique breakdown. Our work extends their results to a 3-D configuration, showing that the optimal entropy-layer disturbances concentrate in the vicinity of the windward ray. Because the optimal disturbance on either side of the windward ray comprises exclusively components with wave angles of the same sign, prohibiting interactions between components with opposite wave angles, the classical oblique breakdown mechanism is not active here. It is interesting to assess whether high-frequency disturbances can undergo transient growth in regions other than the windward ray as well. A straightforward way is to calculate the optimal disturbance within a restricted azimuthal domain. As inferred from figure <ref>, the optimal disturbance restricted to the leeward region, exhibiting an essentially 2-D structure, turns out to enjoy a comparable energy amplification by exploiting an Orr-like mechanism in the streamwise direction (not shown here). §.§ Optimal response to external forcing In this subsection, we present results on the optimal response to external forcing distributed over the entire domain. Without loss of generality, we set Q = I in measuring the forcing energy. Because the inlet response is prescribed to be null in this case, the conventional N-factor would yield an infinitely large value. To circumvent this difficulty, we consider the integrated growth rate between a position slightly downstream of the inlet and the outlet. Figure <ref> shows the results for a representative spatial interval X^*∈(44,215) mm. The N-factors starting from X^* = 47 mm in figure <ref>(a) feature an algebraic-growth stage followed by a long-term (quasi exponential-growth) phase, and peak at 20 kHz, closely resembling the optimal growth results. A direct comparison between the dominant N-factor evolution for the optimal-forcing case and the optimal inlet-profile case indicates that their differences lie primarily in the algebraic-growth stage, wherein the former is remarkably enhanced. Interestingly, the algebraic growth becomes more pronounced with increasing frequency, in stark contrast to the optimal inlet-profile case, where it nearly coincides for different frequencies. The streamwise distribution of the component amplitudes of the optimal forcing and the response for 20 kHz is displayed in figure <ref>(b). As expected, the forcing amplitudes exhibit a trend almost opposite to that of the response; they peak within the algebraic-growth stage and rapidly diminish downstream.
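In practice, the optimal forcing is obtained with essentially the same forward-backward loop as in the inlet-perturbation case, except that the optimal condition now realigns a volumetric forcing with the adjoint state at every iteration. The solver-agnostic sketch below illustrates the loop for 𝒞 = 𝒟 = I and Q = I; the callables solve_direct, solve_adjoint and energy stand for the PSE3D march, the APSE3D march and the discrete energy norm of the surrounding framework, and are assumptions rather than calls to an existing library.

```python
# Minimal sketch (solver-agnostic): power iteration for the optimal volumetric forcing.
# solve_direct(f) returns the response q of the forced direct march, solve_adjoint(q)
# the adjoint state p driven by M q, and energy(v) the discrete energy norm.
import numpy as np

def optimal_forcing(solve_direct, solve_adjoint, energy, f0, tol=1e-3, max_iter=10):
    f = f0 / np.sqrt(energy(f0))             # unit-energy initial guess
    gain_old = 0.0
    for _ in range(max_iter):
        q = solve_direct(f)                   # forward march of the forced equations
        gain = energy(q)                      # response energy = current gain estimate
        p = solve_adjoint(q)                  # backward march of the adjoint equations
        f = p / np.sqrt(energy(p))            # optimal condition: forcing aligned with adjoint
        if abs(gain - gain_old) <= tol * gain:
            break
        gain_old = gain
    return gain, f, q
```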
In the converged solution, the forcing terms in the ζ- and η-momentum equations are dominant, implying that the optimal forcing tends to leverage the lift-up effect to promote the energy growth by enhancing the streamwise vorticity. Moreover, the optimal forcing shape function shown in figure <ref>(c) is always tilted against the crossflow shear, suggesting that the optimal forcing exploits the Orr mechanism as well. The resulting response, shown in figure <ref>(d), is very reminiscent of that in the optimal inlet-profile case. To conclude, the optimal disturbance under external forcing undergoes essentially the same evolution process, with the same growth mechanisms, as in the optimal inlet-profile case; the optimal forcing mainly acts on the algebraic-growth stage (stage I) by constantly enhancing the lift-up and Orr mechanisms, yet barely affects the ensuing stage where the crossflow instability prevails. As inferred from figures <ref> and <ref>, the optimal forcing is exerted at the same position as the adjoint state except in the vicinity of the inlet, where it manifests as longitudinal structures aligned with the axial direction. Since axially aligned vortices (with nearly zero streamwise wavenumber) are most favorable for the lift-up effect <cit.>, the optimal forcing presumably takes greater advantage of the lift-up effect, thus enhancing the algebraic growth. Moreover, the wavemaker almost coincides with that of the optimal growth, implying the same localized feedback mechanisms in both cases. § INFLUENCE OF THE ANGLE OF ATTACK In this section, we assess the influence of the angle of attack on the non-modal growth. The N-factors of the optimal disturbances to inlet perturbation for different frequencies in the cases of 4 degrees AoA and 6 degrees AoA are displayed in figure <ref>(a) and (b), respectively. For all AoAs, the optimal inlet position remains at around X^* = 44 mm, where the optimal disturbance achieves the largest energy growth, and an outlet position beyond the strong wavemaker region does not modify the evolution path except for raising the final energy gain reached by the disturbance (not shown here). Therefore, we only present the optimal growth results for the spatial interval X^*∈(44,250) mm here. Notably, the optimal disturbances in all three cases share qualitatively the same evolution path, consisting of the algebraic-growth stage (stage I), insensitive to the disturbance frequency, and the following quasi exponential-growth stage (stage II), with growth rates depending on the disturbance frequency. Moreover, the most amplified frequency is always around 20 kHz. As shown in figure <ref>(c), increasing the angle of attack does not change the growth rate in the algebraic-growth stage, but considerably enhances the disturbance amplification in stage II, in accord with the strengthening of the crossflow velocity and thus of the crossflow instability. Moreover, a larger AoA yields a more pronounced separation of the inlet and outlet disturbance profiles in the azimuthal direction (figure <ref>(d-f)), with the inlet disturbance closer to the windward ray and the outlet disturbance closer to the leeward ray. This obviously stems from the stronger crossflow velocity at larger AoA and implies an increasingly important role played by the azimuthal crossflow instability mechanism in the optimal growth.
The spatial structures of the most amplified optimal disturbances for three cases are compared in figure <ref>, featuring the tilted streaks on the cone surface with larger inclined angles for increasing AoA. Intriguingly, the optimal disturbance at 6 deg AoA has penetrated partly into the leeward vortex region, which likely interferes or even precipitates the transition process therein as documented by <cit.> in a numerical study on a similar flow configuration. <cit.> reported that the temporal optimal crossflow disturbances on a hypersonic elliptic cone ultimately reside in the vortex region, manifesting as intrinsic vortex modes. In this study, however, we find that the optimal disturbance preserves the crossflow-instability feature towards the outlet rather than develops into an intrinsic vortex instability. It is instructive to compare the optimal amplitude envelopes with the modal counterpart calculated respectively by PSE3D and LST. All methods predict stronger disturbance energy growth with increasing AoA, as expected. Again, the optimal disturbance is remarkably more amplified than the global modal counterpart, but such superiority quickly diminishes as the AoA increases. This is because increasing AoA greatly promotes the modal instability yet barely affects the algebraic growth, and as such, the modal instability becomes increasingly important. The N-factors based on LST are significantly larger than those of global modes and are even commensurate with the optimal ones at high AoAs. Aside from the N-factor, the amplitude envelopes from the three also differ in terms of the peak position in the azimuthal direction: the global modal amplitude envelope lies closest to the leeward ray, followed by the local one and the global optimal one. In summary, increasing the AoA simply enhances the modal instability, resulting in a more pronounced azimuthal shift of the disturbance, yet exerts relatively small influence on the initial algebraic growth. This observation also holds true for the optimal response to external forcing. For brevity, we only present the spatial structures of the optimal forcing/response and the wavemaker for the two higher AoA cases in figure <ref>, showing essentially the same pattern as in the case of 2 deg AoA in figure <ref>. § CONCLUSIONS AND DISCUSSIONS In this work, the optimal responses to the inlet disturbances and the external forcing in the entire domain are investigated within a uniform input-output framework for hypersonic boundary layers on a blunt cone at nonzero angles of attack (AoAs). Well-established parabolized governing equations (PSE3D and APSE3D) are employed to efficiently obtain the spatial evolution of direct as well as adjoint states, which allows for a systematic parameter study. Comparisons with relevant literature data and other elaborate numerical tools including HLNS and DNS demonstrate the reliability of the present solver in calculating the non-modal disturbances. The optimal growth analysis seeking the optimal response to inlet perturbation is first conducted for the case with 2 degrees AoA, which lays the foundation for finding the optimal response to external forcing and subsequent study on the effects of AoA. The optimal disturbances, peaking at 20 kHz, initially take the form of azimuthally compact tilted streamwise vortices on the windward side. While convected downstream, they quickly rise to an upright position and give rise to bent streaks, accompanied by a substantial energy growth. 
The initial phase of energy growth is algebraic, attributed to a combination of the lift-up effect and the Orr mechanism, and is later smoothly replaced by an exponential-like growth primarily driven by the crossflow instability mechanism. Such a two-stage growth process bears a strong resemblance to what has been observed by <cit.> in an infinite swept-wing like boundary layer (SWBL). However, we note two important differences caused by the azimuthal inhomogeneity in the present configuration. Firstly, instead of following a single crossflow mode as in the SWBL case, the optimal disturbance in the second stage behaves more like a non-modal one comprised of several leading global crossflow modes, and thus amplifies much faster than the most unstable crossflow mode does. Secondly, the optimal disturbances are localised in the azimuthal direction rather than uniformly distributed in the SWBL case, and shift azimuthally from the windward side towards the leeward ray while traveling downstream such that they can exploit the crossflow instability mechanism not only along the streamwise direction but also along the azimuthal direction. This spatial movement of the disturbance structure prevails in the 2-D streamwise global non-modal analyses and is known to stem from the combination of multiple global modes <cit.>. Increasing the AoA barely affects the algebraic-growth stage, yet substantially promotes the exponential growth through strengthening the crossflow instability, leading to a more prominent azimuthal shift of the disturbance structure. The structural sensitivity (wavemaker) of the optimal gain is shown to be quantified by the pointwise product between the direct and adjoint (or forcing) states. Our results indicate that the wavemakers of low-frequency optimal disturbances always reside on the windward side immediately downstream of the inlet position where efficient transition control might be implemented. The optimal external forcing concentrates in the vicinity of the inlet with amplitude rapidly decaying downstream. The azimuthal and wall-normal forces are dominant components that are expected to enhance the Orr and lift-up mechanisms. The resulting optimal response, also peaking at 20 kHz, is essentially the same as the optimal growth disturbance mentioned above except that the initial algebraic growth becomes more pronounced due to external forces. We have also identified relatively weaker, yet broadband high-frequency optimal growth disturbances that preferentially reside between the boundary layer and the entropy layer on the windward side. They are initially tilted against both the azimuthal and streamwise flow directions, but their orientation with respect to the surface gradually rotates to the opposite direction, suggesting an Orr-like mechanism that mainly amplifies the temperature component of the disturbance. Finally, we have drawn a comparison between the optimal growth initiated by the inlet profile and the modal growth obtained separately from local (LST) and multi-dimensional (global) stability analyses (PSE3D). The results from various approaches turn out to differ significantly in terms of the N-factor (optimal disturbance reaches the maximum followed by the local and global modal disturbances) as well as the amplitude envelope shape (transition front), highlighting the caution of choosing the stability analysis tool in employing the eN transition prediction method. 
While local analysis is deemed unable to retrieve the disturbance structures and to tackle the nonlinear interactions of disturbances in a truly 3-D boundary layer, tracing a single global mode is of limited value as well, given the high sensitivity and multiplicity of crossflow eigenvalues <cit.>, especially when characterizing the transition mechanism under noisy freestream conditions (e.g. in a conventional wind tunnel). Alternatively, the optimal solution serves as a robust and physically meaningful representation of the crossflow instability, offering a starting point for investigating the nonlinear development and the ensuing breakdown process of crossflow instability disturbances from a global perspective, and, moreover, provides an upper bound on the linear growth experienced by disturbances. However, the link between the real freestream disturbances and the optimal profiles is not addressed herein. The recent DNS performed by <cit.> and <cit.> for an axisymmetric blunt cone showed that the "tunnel-like" acoustic disturbance field, which faithfully models the real wind-tunnel noise, could trigger prominent entropy-layer disturbances resembling the predictions of optimal growth <cit.>. Their results give confidence that the optimal disturbances are physically realizable. Another important issue that warrants further investigation is to determine whether the present 3-D boundary layer is a low-rank system (i.e., one with a large gain separation between optimal and suboptimal disturbances) similar to its 2-D counterpart <cit.>. If so, even if the real disturbances have only a small fraction of their energy in the optimal perturbation, one could still expect to observe a response resembling the optimal one, which would greatly benefit transition prediction. § VERIFICATION OF OPTIMAL GROWTH CALCULATION The PSE method specifies a single streamwise wavenumber in order to absorb the main streamwise oscillations of the disturbances into the exponential term, thereby keeping the shape function slowly varying. This casts doubt on whether the PSE method is able to accurately trace the evolution of non-modal disturbances inherently consisting of many modes with varying streamwise wavenumbers <cit.>, although PSE and its variants have been widely employed in non-modal studies <cit.>. In order to assess the effects exerted by the PSE method on the optimal growth, we have examined seven cases in three groups, as shown in table <ref>. The cases of the first group come from <cit.>, who considered the optimal growth of stationary disturbances over a flat-plate boundary layer at several Mach numbers. The second group tests the ability of the PSE methodology to accurately capture non-stationary optimal disturbances by comparing with results of the harmonic linearized Navier-Stokes equations (HLNS), which retain all the elliptic effects. In particular, case B3 lies in the second Mack-mode regime and is the most challenging of the first two groups. Finally, we used DNS to calculate the spatial evolution of the optimal disturbance for two frequencies over the inclined cone at 2 degrees AoA studied in the present work, to directly verify the ability of PSE3D to trace a non-modal disturbance in a 3-D boundary layer. The optimal energy gains for the first two groups are displayed in figure <ref>. It can be seen that the agreement between the present PSE results and the literature results in figure <ref>(a), as well as the HLNS results in figure <ref>(b), is satisfactory.
The fluctuation contours in case B3 are further compared in figure <ref>. Again, good agreement between the HLNS and PSE results is observed for the temperature disturbance, except for marginal differences in the freestream region. This is reasonable because the freestream disturbances (mainly consisting of acoustic waves) and the fluctuations within the boundary layer have substantially different streamwise wavenumbers (or phase velocities), and might thus not be captured by PSE simultaneously. Compared with the temperature fluctuations, the pressure fluctuations exhibit slightly larger differences, as was also documented by <cit.>. This is because the pressure disturbance amplitude is about three orders of magnitude smaller than the temperature disturbance amplitude, and small discrepancies in temperature therefore appear more pronounced in pressure. Since the pressure and freestream disturbances play a secondary role in the non-modal growth process, as shown above (see also <cit.>, who noted that the acoustic continuous branches do not contribute to the transient growth of the energy), such discrepancies do not affect the main results of the non-modal analyses. Finally, we present in figure <ref> the direct comparison between results from DNS and PSE3D for the last group of test cases. The DNS computational domain is the same as that of PSE3D. The DNS grid for the lower frequency (the most amplified one) is N_ξ× N_η× N_ζ = 1001×201×401 and for the higher one is N_ξ× N_η× N_ζ = 1201×201×501, with slightly more grid points distributed in the leeward vortex region. Evidently, results from the two approaches match well in terms of the amplitude evolution as well as the spatial structures, thus verifying the PSE3D results.
http://arxiv.org/abs/2406.18285v1
20240626121237
LLCoach: Generating Robot Soccer Plans using Multi-Role Large Language Models
[ "Michele Brienza", "Emanuele Musumeci", "Vincenzo Suriani", "Daniele Affinita", "Andrea Pennisi", "Daniele Nardi", "Domenico Daniele Bloisi" ]
cs.RO
[ "cs.RO" ]
Brienza, Musumeci et al. LLCoach: Generating Robot Soccer Plans using Multi-Role LLMs Dept. of Computer, Control, and Management Engineering Sapienza University of Rome, Rome (Italy), {lastname}@diag.uniroma1.it School of Engineering, University of Basilicata, Potenza (Italy), vincenzo.suriani@unibas.it Dept. of International Humanities and Social Sciences, International University of Rome, Rome (Italy), domenico.bloisi@unint.eu LLCoach: Generating Robot Soccer Plans using Multi-Role Large Language Models M. Brienza 1 0009-0000-1549-9500 E. Musumeci 0009-0004-2359-5032 1 V. Suriani 2 0000-0003-1199-8358 D. Affinita 1 A. Pennisi 30000-0002-9081-0765 D. Nardi10000-0001-6606-200X D. D. Bloisi 30000-0003-0339-8651 July 1, 2024 =========================================================================================================================================================================================================================== § ABSTRACT The deployment of robots into human scenarios necessitates advanced planning strategies, particularly when we ask robots to operate in dynamic, unstructured environments. RoboCup offers the chance to deploy robots in one of those scenarios, a human-shaped game represented by a soccer match. In such scenarios, robots must operate using predefined behaviors that can fail in unpredictable conditions. This paper introduces a novel application of Large Language Models (LLMs) to address the challenge of generating actionable plans in such settings, specifically within the context of the RoboCup Standard Platform League (SPL) competitions where robots are required to autonomously execute soccer strategies that emerge from the interactions of individual agents. In particular, we propose a multi-role approach leveraging the capabilities of LLMs to generate and refine plans for a robotic soccer team. The potential of the proposed method is demonstrated through an experimental evaluation, carried out simulating multiple matches where robots with AI-generated plans play against robots running human-built code. § INTRODUCTION Recent advancements in Language Modeling, particularly the proliferation of technologies associated with Large Language Models (LLMs), have unlocked numerous ways to enhance embodied AI systems with common sense knowledge <cit.>. The expansive scope of LLMs serves as a vast repository of information, facilitating multi-modal interactions with textual and visual data. When appropriately queried, LLMs provide access to semantic cues that are crucial for bridging the conceptual gap between abstract planning domains and intricate and unpredictable real-world environments <cit.>. Consequently, they are increasingly integrated into embodied AI systems <cit.>. In this work, we present a first attempt to use generative AI in the context of robot soccer for creating successful game plans using an approach to leverage the LLMs' chain of thought and generate game tactics by impersonating a coach. In particular, we present a multi-role pipeline, featuring four stages in which the capabilities of a foundation model are exploited to generate and refine a plan for a soccer robot (see Fig. <ref>). Robot soccer offers an ideal environment for planning within a multi-robot, highly dynamic environment <cit.>. Here, we focus on the RoboCup Standard Platform League (SPL), which involves teams of Aldebaran NAO V6 humanoid robots competing autonomously in soccer matches. 
Participating teams are tasked with executing multi-agent strategies aimed at securing victory in the matches. It is worth noting that, in SPL matches, the applicability of commonsense knowledge is constrained, given that a detailed understanding of game rules and the capabilities of robots during soccer matches is often specialized knowledge. Access to such information, including the actions available to each agent, is typically facilitated through a Retrieval-Augmented Generation (RAG) system <cit.>. The contribution of this work is two-fold: * A methodology for extracting high-level strategies from videos of RoboCup soccer matches. * A pipeline for creating detailed, multi-agent plans that can be directly executed by a NAO robot using its available set of basic actions. Experimental results are obtained in a simulated environment by performing multiple matches where the robots with the AI-generated plans play against the robots running the code[<https://github.com/SPQRTeam/spqr2023>] used by SPQR Team in RoboCup 2023. The code and the additional material mentioned in this paper are publicly available at <https://sites.google.com/diag.uniroma1.it/llcoach/>. The rest of the paper is organized as follows. Section <ref> presents a brief overview of related work, while Section <ref> showcases the methodology and the proposed architecture. Experimental results are shown in Section <ref>. Finally, Section <ref> draws the conclusions and future directions. § RELATED WORK Before the advent of the LLMs, few approaches have been proposed for coaching teams of robots. The first attempt is represented by a language to coach a RoboCup team<cit.>, where COACH UNILANG is presented for Simulated 3D League. More recently, in SPL, <cit.> presented an approach to conditioning the robot's behavior before or during the time of the match. More recently, LLMs have been used as planners for both structured <cit.> and unstructured environments <cit.>. However, when using LLMs as planners, especially for embodied AI (where visual cues must be conveyed through text), there is the possibility of incurring hallucinations, causing skewed or imprecise results. Hallucination is a phenomenon typically observed when the provided textual inputs are too structured and long. For such a reason, it proves beneficial to provide examples of desired output and to define roles for the LLM in a request, by instructing the model on its role and providing constraints on how to perform its task. Demanding to each role a very specific task in a pipeline composed of multiple steps allows to obtain a better final result that would be too complex for the model with a single, lengthy, and complex query. Thanks to the introduction of VLMs <cit.>, LLMs can get both textual and visual inputs reducing the length of the context that must be provided textually in scenarios where visual cues are essential to understand the requested task. Such an approach alleviates the issue of hallucination in applications of embodied AI. However, due to their considerable size in terms of parameters and the consequent computational demands for a single training or inference step, LLMs are less likely to be employed in embedded systems. Moreover, fine-tuning them is an exceedingly resource-intensive endeavor, seldom justifiable for enhancing the accuracy of tasks reliant on specialized commonsense knowledge. To better fit unknown domains, Retrieval-Augmented Generation (RAG) is implemented within intricate pipelines. 
Here, segments of potentially lengthy documents are indexed based on their semantics in an embedding space and stored within a Vector Index. This approach facilitates enhanced and expedited customization of LLM query results compared to traditional fine-tuning methods. Moreover, additional pieces of information can be seamlessly integrated into the supplementary knowledge base by appending them to the Semantic Database. This capability enables, for instance, the generation of documents from semantically akin templates <cit.>. RAG-based systems offer interesting applications to robotics. In <cit.>, RAG-Driver is presented as a novel multimodal RAG LLM for high-performance, explainable, and generalizable autonomous driving. RAGs in robotics offer the possibility to enrich the available commonsense knowledge with more specialized knowledge about the environment or the robotic agent's capabilities. § PROPOSED APPROACH Fig. <ref> shows our multi-role LLM-based planning sequential pipeline, featuring four steps, namely Action Retrieval, Coach VLM, Plan Refinement, and Plan Synchronizer. Input. Although commonsense knowledge is usually enough for most generic tasks, in the case of robot soccer, specific knowledge must be provided. Thus, the prompts for the LLM-based generation steps along the pipeline are constructed from case-specific information. In particular, case-specific knowledge includes: * The domain, containing a natural language description of the application (a soccer match between autonomous robots), the definition of tokens representing the locations of waypoints, described in natural language as a discrete distribution of several relevant locations. This allows managing the complexity of planning in a continuous coordinate space by delegating the grounding of waypoint locations (into actual spatial coordinates) to the robot control framework, where a world model will be available. * A planning goal: "The own team should score a goal in the opponent's goal." * A description of robot team formations and agent capabilities in natural language, containing tokens associated with the actions they can perform. Actions are described using natural language, following a structure similar to STRIPS actions (similar to Planning Domain Definition Language <cit.>) where tokens used from the domain and agent descriptions are used to give the LLM a better understanding of their pre-conditions and effects. To prevent hallucination, the generation process is subdivided into several subsequent generation steps, centered on the interaction with an LLM or a VLM, each with a properly engineered prompt to minimize its size and increase the quality of the information provided to the model. §.§ Action Retrieval To minimize the size of the prompts sent to the LLM or VLM modules requiring an understanding of agent actions, in an attempt to prevent hallucination, actions are stored in a Semantic Database, where each data node corresponds to the vector embedding of their natural language description. The semantic database leverages a Vector Index for efficient storage and retrieval of action definitions. The Action Retrieval module retrieves actions from the database based on their semantic similarity with the input plan. A Sentence Transformer, specifically the MPNet model <cit.>, is employed to compute this semantic similarity. This embedding space is crucial for both populating the database and performing retrieval operations. 
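As an illustration of this retrieval step, the sketch below embeds natural-language action descriptions with an MPNet sentence transformer and retrieves the k most semantically similar ones via cosine similarity. It is not the authors' implementation: the checkpoint name, the in-memory index, and the example action descriptions are assumptions made purely for the example.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed MPNet checkpoint; the paper only states that an MPNet model is used.
encoder = SentenceTransformer("all-mpnet-base-v2")

# Hypothetical natural-language action descriptions (STRIPS-like summaries).
action_descriptions = [
    "move_to(robot, waypoint): the robot walks to the given waypoint",
    "kick(robot, target): the robot kicks the ball towards the target",
    "pass_ball(robot, teammate): the robot passes the ball to a teammate",
    "search_ball(robot): the robot turns in place looking for the ball",
]

# Populate the in-memory vector index with one embedding per action.
action_index = encoder.encode(action_descriptions, convert_to_tensor=True)

def retrieve_actions(domain_and_goal: str, k: int = 3):
    """Return the k action descriptions most similar to the planning request."""
    query = encoder.encode(domain_and_goal, convert_to_tensor=True)
    scores = util.cos_sim(query, action_index)[0]          # cosine similarities
    top = scores.topk(k=min(k, len(action_descriptions)))
    return [(action_descriptions[i], float(s)) for s, i in zip(top.values, top.indices)]

# Example query built from the domain description and the planning goal.
print(retrieve_actions("The own team should score a goal in the opponent's goal."))
```

Restricting the prompt to the few retrieved actions, rather than the whole action database, is what keeps the prompts of the later stages short.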
Actions stored in the database follow a STRIPS-like template: During retrieval, the planning domain and goal are also embedded within this same space, so that only relevant actions may be extracted using a k-nearest neighbor search based on cosine similarity, measuring the alignment between the required task and the actions stored in the database. This enables the extraction of pertinent actions only, minimizing the prompt length for subsequent pipeline steps and improving its quality. §.§ Coach VLM §.§.§ Visual Analysis. To acquire the necessary frame information for the coach's utilization of the VLM to generate the desired output, we leverage MARIO <cit.>, a tool specialized in soccer match video analysis. MARIO extracts a representation of the soccer game using its tracking capabilities and converts it into a 2D plan view applying a homography geometrical transformation. This representation is useful to characterize video frames in a way that a VLM can easily understand. In particular, each robot is marked with a dot representing the color of the robot's jersey, which indicates the team color (see the bottom-left of Fig. <ref>). §.§.§ VLM module. The coach module exploits the capabilities of GPT4-Vision <cit.>, a recent Visual Language Model (VLM). A video frame resulting from the visual analysis of a relevant action, like the one in Fig. <ref>, is provided to the Visual Language Model, along with a textual prompt. While the video frame provides a detailed view of the current configuration of agents and obstacles on the field, the textual prompt is used to instruct the VLM on its task and constraints. In particular, the coach is instructed to perform two tasks: * Generating a schematic description of the video frame. * Generating a high-level plan in natural language. The prompt for the first task is based on the following template. Given the proven capabilities of LLMs as few-shot learners <cit.>, the prompt is enriched by an example of an ideal output structure. Then, a description of the robot roles in the multi-agent team is added, followed by a list of the available movement waypoints, with a semantic description for each one. Upper-case words are used in an attempt to increase attention over specific tokens. The second task consists of generating a high-level plan in natural language, describing the optimal strategy to achieve the planning goal, given the same inputs as the first task. The prompt follows the template: Here: * [ACTIONS] is replaced with actions extracted at the retrieval step. * [PLANNING_GOAL] is the original planning goal for the current task, which consists simply in "The own team should score a goal in the opponent's goal." * [TACTICS] allow additional customization of the planning goal (which is instead fixed). In addition to the task, several constraints over the desired output are added: Then an example of ideal output is provided, to enforce a specific structure: The output of the VLM is the concatenation of the responses to the two tasks: * A structured description of the scenario represented in the input video frame. * A multi-agent high-level plan, featuring all the relevant agents detected in the provided snapshots, that represents the suggested strategy for the task, for the given domain, extracted actions, described agents, and planning goal. Therefore, the VLM returns the initial configuration of the agents and obstacles in the video frame, and an ideal strategy to pursue the planning goal. §.§.§ Role Retrieval. 
A description of the relevant roles in the multi-agent team is provided to the Coach VLM in the input prompt. In this way, the Coach module implicitly identifies the roles of the robots in the multi-agent strategy, by mapping the robot in the video frame to the role assigned in the high-level plan, based on its relevance in the strategy. To enable the coach to do so, relevant roles, such as the goalie (the goalkeeper), the jolly (tasked with moving around to receive passes), or the striker (tasked with managing the ball) are defined in natural language in the input prompt. For example, the striker role is described as follows: Similarly, it is possible to describe waypoints (locations reachable by robots on the field) in natural language: Having discretized the field in a distribution of waypoints, it is possible to identify robot roles by comparing their position on the field with the assignment to waypoints that the VLM returns in its first task. §.§ Plan Refinement The plan obtained up until now is a high-level abstract plan, containing tokens representing real-world entities, such as agents, the ball, waypoint locations on the field, and actions. Still, to be executed, two problems have to be solved: * The plan must be translated into a more structured version, so that a parser may be used to build a Finite-State Machine that is readily executable by the robotic agents. * Given that several agents concur with the execution of this plan, a hint on the synchronization between the actions of different agents must be provided. §.§.§ Plan Grounding. The first issue can be solved by transforming the original high-level abstract plan, into a more structured version of the plan. To this aim, the LLM is instructed to satisfy the following task: This simple task prompt is then followed by: * The same planning domain provided to the VLM. * The actions retrieved by the action database. * Constraints about the structure of the plan, to make sure that actions are written as (where is the agent acting), followed by a list of possible values for the token (the retrieved actions). * A list of possible values for the field, containing all the relevant roles for the plan, and descriptions of the agents, in natural language. * Several more fine-grained constraints to ensure that the result is desirable. * The "scenario", is the configuration of elements on the field extracted from the output of the VLM. * Finally, the high-level plan to be refined, presented as "COACH RECOMMENDATIONS" The result is a more structured version of the high-level plan returned by the "coach" VLM, with action names and arguments correctly grounded in the domain tokens. §.§.§ Plan Synchronizer. At this stage, actions in the generated plan feature multiple agents but still follow a sequential order. To obtain a multi-agent plan, there should be some indication of how actions performed by different agents at the same time should be synchronized. For this reason, the component of Plan Parallelization instructs the LLM to return a plan where actions that are executed by different agents at the same time are put together in a "" block. The prompt provided to the LLM is then further enriched with a positive example of a valid result and several negative examples of results to avoid, with a brief explanation of the reason that makes them invalid. The resulting plan will then look like: §.§ Plan Execution The offline portion of the pipeline is used to generate a collection of plans from a given set of video frames, as in Fig. <ref>. 
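The text does not reproduce the exact prompt templates or the plan syntax, so the following sketch only illustrates the shape of one offline generation step: a plan-view frame and a textual prompt are sent to the vision model, and the returned scenario description and high-level plan are collected for later refinement. The prompt wording, the action and waypoint names, and the file handling are assumptions made for the example; the use of the OpenAI GPT-4 Turbo vision model reflects the setup described below.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def coach_step(frame_path: str, actions: list[str], tactics: str = "offensive") -> str:
    """One offline generation step: send the 2D plan-view frame plus a textual
    prompt to the vision model and return the scenario description and the
    high-level plan. The prompt below is a placeholder, not the authors' template."""
    with open(frame_path, "rb") as f:
        frame_b64 = base64.b64encode(f.read()).decode()

    prompt = (
        "You are the coach of a RoboCup SPL team. "
        "First, describe the configuration of the robots in the image using the "
        "waypoint tokens. Then, produce a high-level multi-agent plan so that "
        "the own team scores a goal in the opponent's goal. "
        f"Tactics: {tactics}. Available actions: {', '.join(actions)}."
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{frame_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# After refinement and synchronization, a grounded plan could be parsed into
# blocks such as (names below are hypothetical):
# sync_block = [("striker", "kick(striker, GOAL_CENTER)"),
#               ("jolly", "move_to(jolly, KICKING_POSITION)")]
# where actions inside the same block are started at the same time.
```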
Collected plans are then selected based on their similarity to the real-world scenario. The similarity between the two scenarios can be performed using a clustering technique where the current game scenario is obtained by the global model perceived by the robot. An accurate generation of plans based on various game situations allows for the handling of almost all game situations during matches and allows the robots to apply when needed a suitable behavior that maximizes the likelihood of scoring a goal. § EXPERIMENTAL RESULTS This section illustrates the experimental results obtained from the evaluation in a simulated environment of the results of the current work, with the primary objective of assessing its effectiveness in generating high-quality plans collected by processing video frames from real RoboCup SPL matches. §.§.§ Prompt Engineering. To obtain the desired results, an initial phase of prompt refinement was required. The prompt of the VLM "coach" module could be more information-rich with a smaller risk of hallucination, given the multi-modal input. Providing both a video frame and textual input allows the coach to give better insight into the subsequent stages along the pipeline. The accuracy of the output at this stage is key, as any error at this stage could mislead the entire generation pipeline. The application domain constitutes a critical component of the prompt for this module, particularly in a use case where the objective is to describe the location of the players based on both an image and a text description. The main obstacle is the representation of 2D coordinates within the provided frame, which cannot be easily achieved using a VLM due to its difficulty in working on cartesian coordinates. Instead of coordinates, tokens representing waypoints are used as a way of discretizing the field, by marking significant field locations. The description of these tokens should not mislead the VLM and the significance of waypoints must be explained in natural language. For example, an earlier iteration of the prompt contained the waypoint "KICKING_POSITION": initially described as a "vantage point", whose semantics are far from having a geometric value. The final version of the prompt avoids such bias-inducing mistakes by describing it as a "favorable position". This improved dramatically the results of the scenario description task. The second task assigned to the coach consisted of generating a plan given the situation depicted in the provided input frame. Ideally, the plan should be structured according to an easily parsable template. A first attempt at generating a structured plan directly using the VLM resulted in heavy hallucination. To overcome this problem, the VLM was instead tasked with generating a high-level plan, as if it were a soccer coach. To exert more control on the overall strategy, the possibility to customize the overall "tactics" was added, for example by specifying whether passes were desirable or if the strategy should be offensive or defensive. A subsequent plan refinement module was added, to refine the high-level plan into a parsable plan. This module used only a textual input, therefore it was key to minimize the input token count, to avoid hallucination. §.§.§ Setup. The VLM Coach module relies on the OpenAI GPT 4 Turbo model with vision while the text-only modules feature the latest OpenAI GPT 3.5 Turbo model gpt-3.5-turbo-0125. Tests were conducted in a simulated environment based on SimRobot <cit.>. 
Pertinent frames were extracted from historical SPL matches, publicly accessible on the web. For every frame, the robot configuration was replicated within the simulation, reflecting each robot's pose p_i = (x,y,θ) as extracted from the MARIO, ensuring a faithful recreation of real-world scenarios within the simulated environment. Starting from the same initial configuration, the evolution of plans was considered, comparing the pre-existing behaviors and the plans generated by the presented pipeline. For simplicity, only offensive scenarios were taken into account due to their complexity and larger action space compared to defensive scenarios. §.§.§ Evaluation. The proposed approach is evaluated using the success rate metric, measuring the team's ability to score goals in offensive situations during gameplay. To assess the effectiveness of our method, we compared two approaches: the baseline approach, relying on the predefined behaviors of the SPQR Team 2023 code release, and the dynamic plans generated by the Large Language Model (LLCoach). In addition to the success rate, we considered the average number of passes and average scoring time in seconds as supplementary metrics to illustrate the evolution of gameplay dynamics over different settings. Considering two different situations from the final match BHuman - HTWK we recreated the robot configurations running eight simulations in total. It is worth noting that, since several API calls are required to obtain and refine the plan throughout the multi-role pipeline, the overall execution time is affected mainly by the inference times of LLM/VLM agents. § CONCLUSIONS AND FUTURE DIRECTIONS In this paper, we presented a novel approach to plan generation, exploiting the common knowledge in Large Language Models. The proposed approach represents a first attempt at using generative AI in the context of robot soccer. In particular, we described a complete pipeline, in four stages, where a high-level plan, is progressively refined into a multi-agent plan. The first stage, a Visual Language Model acting as a coach, generates plans from video frames, extracted from videos RoboCup SPL matches. Collected plans can then be run by real NAO robots, as shown by the promising results conducted in the context of RoboCup SPL matches. As future work, we intend to use videos from human soccer matches to create policies for soccer robot players. Our aim is to reduce the gap between robot behaviors and human moves in order to achieve the 2050 RoboCup goal. § ACKNOWLEDGEMENT This work has been carried out while Michele Brienza was enrolled in the Italian National Doctorate on Artificial Intelligence run by Sapienza University of Rome. This work has been supported by PNRR MUR project PE0000013-FAIR. splncs04
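As a side note on how the supplementary metrics above can be aggregated, the minimal sketch below assumes a hypothetical per-simulation log format (whether a goal was scored, number of passes, time to score in seconds); it is not the evaluation code used by the authors.

```python
from statistics import mean

# Hypothetical per-simulation records mirroring the metrics described above.
episodes = [
    {"scored": True,  "passes": 1, "time_to_goal_s": 38.0},
    {"scored": True,  "passes": 2, "time_to_goal_s": 52.5},
    {"scored": False, "passes": 0, "time_to_goal_s": None},
]

def summarize(episodes):
    """Success rate, average passes, and average scoring time over the scored runs."""
    scored = [e for e in episodes if e["scored"]]
    return {
        "success_rate": len(scored) / len(episodes),
        "avg_passes": mean(e["passes"] for e in episodes),
        "avg_scoring_time_s": mean(e["time_to_goal_s"] for e in scored) if scored else None,
    }

print(summarize(episodes))
```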
http://arxiv.org/abs/2406.18473v2
20240626163336
Unveiling the connection between the Lyndon factorization and the Canonical Inverse Lyndon factorization via a border property
[ "Paola Bonizzoni", "Clelia De Felice", "Brian Riccardi", "Rocco Zaccagnino", "Rosalba Zizza" ]
cs.FL
[ "cs.FL" ]
§ ABSTRACT The notion of Lyndon word and Lyndon factorization has been shown to have unexpected applications in theory as well as in the development of novel algorithms on words. Counterparts to these notions are those of inverse Lyndon word and inverse Lyndon factorization. Differently from Lyndon words, inverse Lyndon words may be bordered. The relationship between the two factorizations is related to the inverse lexicographic ordering and has only recently been explored. More precisely, a main open question is how to get an inverse Lyndon factorization from a classical Lyndon factorization under the inverse lexicographic ordering, named _in. In this paper we reveal a strong connection between these two factorizations, in which the border plays a relevant role. More precisely, we show two main results. We say that a factorization has the border property if a nonempty border of a factor cannot be a prefix of the next factor. First we show that there exists a unique inverse Lyndon factorization having the border property. Then we show that this unique factorization with the border property is the so-called canonical inverse Lyndon factorization, named . By showing that is obtained by compacting factors of the Lyndon factorization over the inverse lexicographic ordering, we provide a linear time algorithm for computing from _in. § INTRODUCTION The theoretical investigation of combinatorial properties of well-known word factorizations is a research topic that has recently witnessed special interest, especially for improving the efficiency of algorithms. Among these, the Lyndon factorization introduced by Chen, Fox, and Lyndon in <cit.>, named , undoubtedly stands out. Any word w admits a unique factorization (w), that is, a lexicographically non-increasing sequence of factors which are Lyndon words. A Lyndon word w is strictly lexicographically smaller than each of its proper cyclic shifts, or, equivalently, than each of its nonempty proper suffixes <cit.>. Interesting applications of the Lyndon factorization and Lyndon words include the development of bijective Burrows-Wheeler transforms <cit.> and a novel algorithm for sorting suffixes <cit.>. In particular, the notion of a Lyndon word has been re-discovered various times as a theoretical tool to locate short motifs <cit.> and relevant k-mers in bioinformatics applications <cit.>. In this line of research, Lyndon-based word factorizations have been explored to define a novel feature representation for biological sequences, based on theoretical combinatorial properties proved to capture sequence similarities <cit.>. The notion of a Lyndon word has a counterpart, namely the notion of an inverse Lyndon word, i.e., a word lexicographically greater than each of its nonempty proper suffixes. Inverting the relation between a word and its suffixes, as between Lyndon words and inverse Lyndon words, leads to different properties. Indeed, although a word may admit more than one inverse Lyndon factorization, that is, a factorization into a non-increasing product of inverse Lyndon words, the Canonical Inverse Lyndon Factorization, named , was introduced in <cit.>. maintains the main properties of : it is unique and can be computed in linear time.
In addition, it maintains a similar Compatibility Property, used for obtaining the sorting of the suffixes of w (“global suffixes”) by using the sorting of the suffixes of each factor of (w) (“local suffixes”) <cit.>. Most notably, (w) has another interesting property <cit.>: we can provide an upper bound on the length of the longest common prefix of two substrings of a word w starting from different positions. A relationship between (w) and (w) has been proved by using the notion of grouping <cit.>. First, let _in(w) be the Lyndon factorization of w with respect to the inverse lexicographic order, it is proved that (w) is obtained by concatenating the factors of a non-increasing maximal chain with respect to the prefix order, denoted by , in _in(w) (see Section <ref>). Despite this result, the connection between _in(w) and the inverse Lyndon factorization still remained obscure, mainly by the fact that a word may have multiple inverse Lyndon factorizations. In this paper, we explore this connection between _in and the inverse Lyndon factorizations. Our first main contribution consists in showing that there is a unique inverse Lyndon factorization of a word that has border property. The border property states that any nonempty border of a factor cannot be a prefix of the next factor. We further highlight the aforementioned connection by proving that the inverse Lyndon factorization with the border property is a compact factorization (Definition <ref>), i.e., each inverse Lyndon factor is the concatenation of compact factors, each obtained by concatenating the longest sequence of identical words in a . We then show the second contribution of this paper: this unique factorization is itself and then provide a simpler linear time algorithm for computing . Our algorithm is based on a new property that characterizes (w): the last factor in an inverse Lyndon factorization with the border property of w is the longest suffix of w that is an inverse Lyndon word. Recall that the Lyndon factorization of w has a similar property: the last factor is the longest suffix of w that is a Lyndon word. § WORDS Throughout this paper we follow <cit.> for the notations. We denote by Σ^* the free monoid generated by a finite alphabet Σ and we set Σ^+=Σ^*∖ 1, where 1 is the empty word. For a word w ∈Σ^*, we denote by |w| its length. A word x ∈Σ^* is a factor of w ∈Σ^* if there are u_1,u_2 ∈Σ^* such that w=u_1xu_2. If u_1 = 1 (resp. u_2 = 1), then x is a prefix (resp. suffix) of w. A factor (resp. prefix, suffix) x of w is proper if x ≠ w. Two words x,y are incomparable for the prefix order, denoted as x y, if neither x is a prefix of y nor y is a prefix of x. Otherwise, x,y are comparable for the prefix order. We write x ≤_p y if x is a prefix of y and x ≥_p y if y is a prefix of x. The notion of a pair of words comparable (or incomparable) for the suffix order is defined symmetrically. We recall that, given a nonempty word w, a border of w is a word which is both a proper prefix and a suffix of w <cit.>. The longest proper prefix of w which is a suffix of w is also called the border of w <cit.>. A word w ∈Σ^+ is bordered if it has a nonempty border. Otherwise, w is unbordered. A nonempty word w is primitive if w = x^k implies k = 1. An unbordered word is primitive. A sesquipower of a word x is a word w = x^np where p is a proper prefix of x and n ≥ 1. Two words x,y are called conjugate if there exist words u,v such that x=uv, y=vu. The conjugacy relation is an equivalence relation. 
A conjugacy class is a class of this equivalence relation. Let (Σ, <) be a totally ordered alphabet. The lexicographic (or alphabetic order) ≺ on (Σ^*, <) is defined by setting x ≺ y if * x is a proper prefix of y, or * x = ras, y =rbt, a < b, for a,b ∈Σ and r,s,t ∈Σ^*. In the next part of the paper we will implicitly refer to totally ordered alphabets. For two nonempty words x,y, we write x ≪ y if x ≺ y and x is not a proper prefix of y <cit.>. We also write y ≻ x if x ≺ y. Basic properties of the lexicographic order are recalled below. For x,y ∈Σ^+, the following properties hold. (1) x ≺ y if and only if zx ≺ zy, for every word z. (2) If x ≪ y, then xu ≪ yv for all words u,v. (3) If x ≺ y ≺ xz for a word z, then y = xy' for some word y' such that y' ≺ z. (4) If x ≪ y and y ≪ z, then x ≪ z. Let 𝒮_1, … , 𝒮_t be sequences, with 𝒮_j = (s_j,1, … , s_j,r_j). For abbreviation, we let (𝒮_1, … , 𝒮_t) stand for the sequence (s_1,1, … , s_1,r_1, … , s_t,1, … , s_t,r_t). § LYNDON WORDS A Lyndon word w ∈Σ^+ is a word which is primitive and the smallest one in its conjugacy class for the lexicographic order. Let Σ = {a,b} with a < b. The words a, b, aaab, abbb, aabab and aababaabb are Lyndon words. On the contrary, abab, aba and abaab are not Lyndon words. Each Lyndon word w is unbordered. A class of conjugacy is also called a necklace and often identified with the minimal word for the lexicographic order in it. We will adopt this terminology. Then a word is a necklace if and only if it is a power of a Lyndon word. A prenecklace is a prefix of a necklace. Then clearly any nonempty prenecklace w has the form w = (uv)^ku, where uv is a Lyndon word, u ∈Σ^*, v ∈Σ^+, k ≥ 1, that is, w is a sesquipower of a Lyndon word uv. The following result has been proved in <cit.>. It shows that the nonempty prefixes of Lyndon words are exactly the nonempty prefixes of the powers of Lyndon words with the exclusion of c^k, where c is the maximal letter and k ≥ 2. A word is a nonempty prefix of a Lyndon word if and only if it is a sesquipower of a Lyndon word distinct of c^k, where c is the maximal letter and k ≥ 2. In the following L = L_(Σ^*, <) will be the set of Lyndon words, totally ordered by the relation ≺ on (Σ^*, <). Any word w ∈Σ^+ can be written in a unique way as a nonincreasing product w=ℓ_1 ℓ_2 ⋯ℓ_h of Lyndon words, i.e., in the form w = ℓ_1 ℓ_2 ⋯ℓ_h, ℓ_j ∈ L ℓ_1 ≽ℓ_2 ≽…≽ℓ_h The sequence (w) = (ℓ_1, …, ℓ_h) in Eq. (<ref>) is called the Lyndon decomposition (or Lyndon factorization) of w. It is denoted by (w) because Theorem <ref> is usually credited to Chen, Fox and Lyndon <cit.>. The following result, proved in <cit.>, is necessary for our aims. Let w ∈Σ^+, let ℓ_1 be its longest prefix which is a Lyndon word and let w' be such that w= ℓ_1 w'. If w' ≠ 1, then (w) = (ℓ_1, (w')). Sometimes we need to emphasize consecutive equal factors in . We write (w) = (ℓ_1^n_1, …, ℓ_r^n_r) to denote a tuple of n_1 + … + n_r Lyndon words, where r > 0, n_1, … , n_r ≥ 1. Precisely ℓ_1 ≻…≻ℓ_r are Lyndon words, also named Lyndon factors of w. There is a linear time algorithm to compute the pair (ℓ_1, n_1) and thus, by iteration, the Lyndon factorization of w <cit.>. Linear time algorithms may also be found in <cit.> and in the more recent paper <cit.>. § INVERSE LYNDON WORDS For the material in this section see <cit.>. Inverse Lyndon words are related to the inverse alphabetic order. Its definition is recalled below. Let (Σ, <) be a totally ordered alphabet. 
The inverse <_in of < is defined by ∀ a, b ∈Σ b <_in a ⇔ a < b The inverse lexicographic or inverse alphabetic order on (Σ^*, <), denoted ≺_in, is the lexicographic order on (Σ^*, <_in). Let Σ = {a,b,c,d } with a < b < c < d. Then dab ≺ dabd and dabda ≺ dac. We have d <_in c <_in b <_in a. Therefore dab ≺_in dabd and dac ≺_in dabda. Of course for all x, y ∈Σ^* such that x y, y ≺_in x ⇔ x ≺ y . Moreover, in this case x ≪ y. This justifies the adopted terminology. From now on, L_in = L_(Σ^*, <_in) denotes the set of the Lyndon words on Σ^* with respect to the inverse lexicographic order. Following <cit.>, a word w ∈ L_in will be named an anti-Lyndon word. Correspondingly, an anti-prenecklace will be a prefix of an anti-necklace, which in turn will be a necklace with respect to the inverse lexicographic order. In the following, we denote by _in(w) the Lyndon factorization of w with respect to the inverse order <_in. A word w ∈Σ^+ is an inverse Lyndon word if s ≺ w, for each nonempty proper suffix s of w. The words a, b, aaaaa, bbba, baaab, bbaba and bbababbaa are inverse Lyndon words on {a,b}, with a < b. On the contrary, aaba is not an inverse Lyndon word since aaba ≺ ba. Analogously, aabba ≺ ba and thus aabba is not an inverse Lyndon word. The following result, proved in <cit.>, and also in <cit.>, summarizes some properties of the inverse Lyndon words. Let w ∈Σ^+. Then we have * The word w is an anti-Lyndon word if and only if it is an unbordered inverse Lyndon word. * The word w is an inverse Lyndon word if and only if w is a nonempty anti-prenecklace. * If w is an inverse Lyndon word, then any nonempty prefix of w is an inverse Lyndon word. An inverse Lyndon factorization of a word w ∈Σ^+ is a sequence (m_1, …, m_k) of inverse Lyndon words such that m_1 ⋯ m_k = w and m_i ≪ m_i+1, 1 ≤ i ≤ k-1. As the following example in <cit.> shows, a word may have different inverse Lyndon factorizations. Let Σ = {a,b,c,d } with a < b < c < d, z = dabdadacddbdc. It is easy to see that (dab,dadacd,db,dc), (dabda,dac,ddbdc), (dab, dadac, ddbdc) are all inverse Lyndon factorizations of z. § THE BORDER PROPERTY In this section we prove the main result of this paper, namely, for any nonempty word w, there exists a unique inverse Lyndon factorization of w which has a special property, named the border property. Let w ∈Σ^+. A factorization (m_1, … , m_k) of w has the border property if each nonempty border z of m_i is not a prefix of m_i+1, 1 ≤ i ≤ k-1. We first prove a fundamental property of the inverse Lyndon factorizations of w which have the border property. Let w ∈Σ^+, let (m_1, … , m_k) be an inverse Lyndon factorization of w having the border property. If α is a nonempty border of m_j, 1 ≤ j ≤ k-1, then there exists a nonempty prefix β of m_j + 1 such that |β| ≤ |α| and α≪β. Let w ∈Σ^+, let (m_1, … , m_k) be an inverse Lyndon factorization of w having the border property, let α be a nonempty border of m_j, 1 ≤ j ≤ k-1. We distinguish two cases: either |m_j + 1| < |α| or |m_j + 1| ≥ |α|. Assume |m_j + 1| < |α|. By hypothesis (m_1, … , m_k) is an inverse Lyndon factorization, hence m_j ≪ m_j + 1, that is, there are r,s,t ∈Σ^*, a,b ∈Σ, such that a < b and m_j = ras, m_j + 1 = rbt. Obviously |ra| ≤ |m_j + 1| < |α|, thus there is s' ∈Σ^* such that α = ra s'. Consequently, α = ra s' ≪ rbt = m_j + 1 and our claim holds with β = m_j + 1. Assume |m_j + 1| ≥ |α|. Let β be the nonempty prefix of m_j + 1 such that |β| = |α|. Clearly β≠α because (m_1, … , m_k) has the border property. 
Since α and β are two different nonempty words of the same length, either β≪α or α≪β. The first case leads to a contradiction because if β≪α then m_j + 1≪ m_j by Lemma <ref> and this contradicts the fact that (m_1, … , m_k) is an inverse Lyndon factorization. Thus, α≪β and the proof is complete. For each w ∈Σ^+, there exists a unique inverse Lyndon factorization of w having the border property. The proof is by induction on |w|. If |w| = 1, then F_1(w) = F_2(w) = (w) and statement clearly holds. Thus assume |w| > 1. Let F_1(w) = (f_1, …, f_k) and F_2(w) = (f'_1, …, f'_v) two inverse Lyndon factorizations of w having the border property. Thus f_1 ⋯ f_k = f'_1 ⋯ f'_v = w If |f_k| = |f'_v| and v = 1 or k = 1, clearly f_k = f'_v and F_1(w) = F_2(w). Analogously, if |f_k| = |f'_v|, v > 1 and k > 1, then f_k = f'_v and F'_1(w') = (f_1, …, f_k - 1), F'_2(w') = (f'_1, …, f'_v - 1) would be two inverse Lyndon factorizations of w' having the border property, where w' is such that w = w' f_k. Of course, |w'| < |w|. By induction hypothesis, F'_1(w') = F'_2(w'), hence F_1(w) = F_2(w). By contradiction, let |f_k| ≠ |f'_v|. Assume |f_k| < |f'_v| (similar arguments apply if |f_k| > |f'_v|). The word f_k is a proper suffix of f'_v. Clearly k > 1. Let g be the smallest integer such that f_g+1⋯ f_k is a proper suffix of f'_v, 1 ≤ g ≤ k - 1, that is, f'_v = α f_g+1⋯ f_k where α∈Σ^+ is a suffix of f_g. Notice that α≪̸f_g+1 Indeed, if α≪ f_g+1, then, by Eq. (<ref>), we would have f'_v = α f_g+1⋯ f_k ≪ f_g+1⋯ f_k, which is impossible because f'_v is an inverse Lyndon word. The word α is a nonempty proper suffix of f_g since otherwise we would have α = f_g ≪ f_g+1, contrary to Eq. (<ref>). Since f_g is an inverse Lyndon word and α is a nonempty proper suffix of f_g, either α≤_p f_g or α≪ f_g. If α≤_p f_g, then α is a nonempty border of f_g, then, by Lemma <ref>, there exists a nonempty prefix β of f_g+1 such that |β| ≤ |α| and α≪β. Thus, α≪ f_g+1 which contradicts Eq. (<ref>). Assume α≪ f_g. Since f_g≪ f_g+1, by Lemma <ref> we have α≪ f_g+1 which contradicts once again Eq. (<ref>). This finishes the proof. § GROUPINGS AND COMPACT FACTORIZATIONS In this section we prove a structural property of an inverse Lyndon factorization having the border property, namely it is a compact factorization. This result is crucial to characterize the relationship between _in(w) and the factorization into inverse Lyndon words of w. First we report the notion of grouping given in <cit.>. We refer to <cit.> for a detailed and complete discussion on this topic. Let _in(w) = (ℓ_1, … , ℓ_h), where ℓ_1 ≽_inℓ_2 ≽_in…≽_inℓ_h. Consider the partial order ≥_p, where x ≥_p y if y is a prefix of x. Recall that a chain is a set of a pairwise comparable elements. We say that a chain is maximal if it is not strictly contained in any other chain. A non-increasing (maximal) chain in _in(w) is the sequence corresponding to a (maximal) chain in the multiset {ℓ_1, … , ℓ_h } with respect to ≥_p. We denote by a non-increasing maximal chain in _in(w). Looking at the definition of the (inverse) lexicographic order, it is easy to see that a is a sequence of consecutive factors in _in(w). Moreover _in(w) is the concatenation of its . The formal definitions are given below. Let w ∈Σ^+, let _in(w) = (ℓ_1, … , ℓ_h) and let 1 ≤ r < s ≤ h. We say that ℓ_r, ℓ_r+1, … , ℓ_s is a non-increasing maximal chain for the prefix order in _in(w), abbreviated , if ℓ_r≥_p ℓ_r+1≥_p …≥_p ℓ_s. Moreover, if r > 1, then ℓ_r - 1≱_p ℓ_r, if s < h, then ℓ_s≱_p ℓ_s +1. 
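As an aside, the decomposition of _in(w) into such chains can be computed with a single left-to-right scan over its factor list; because the prefix order is transitive, checking consecutive factors suffices. The sketch below is not part of the paper and only illustrates the definition (the example word is the one discussed just below).

```python
def nmc_decomposition(factors):
    """Split the factor list of the Lyndon factorization w.r.t. the inverse
    order into its non-increasing maximal chains for the prefix order:
    inside a chain, every factor is a prefix of the one before it."""
    chains = []
    for f in factors:
        if chains and chains[-1][-1].startswith(f):   # f <=_p previous factor
            chains[-1].append(f)                      # extend the current chain
        else:
            chains.append([f])                        # start a new maximal chain
    return chains

# Alphabet a < b < c < d, w = dabadabdabdadac, CFL_in-style factor list:
print(nmc_decomposition(["daba", "dab", "dab", "dadac"]))
# [['daba', 'dab', 'dab'], ['dadac']]
```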
Two 𝒞_1 = ℓ_r, ℓ_r+1, … , ℓ_s, 𝒞_2 = ℓ_r', ℓ_r'+1, … , ℓ_s' are consecutive if r' = s+1 (or r = s' +1). Let w ∈Σ^+, let _in(w) = (ℓ_1, … , ℓ_h). We say that (𝒞_1, 𝒞_2, … , 𝒞_s) is the decomposition of _in(w) into its non-increasing maximal chains for the prefix order if the following holds (1) Each 𝒞_j is a non-increasing maximal chain in _in(w). (2) 𝒞_j and 𝒞_j + 1 are consecutive, 1 ≤ j ≤ s-1. (3) _in(w) is the concatenation of the sequences 𝒞_1, 𝒞_2, … , 𝒞_s. Let w ∈Σ^+. We say that (m_1, … , m_k) is a grouping of _in(w) if the following holds (1) (m_1, … , m_k) is an inverse Lyndon factorization of w (2) Each factor m_j, is the product of consecutive factors in a in _in(w). Let Σ = {a,b,c,d }, a < b < c < d, and w = dabadabdabdadac. We have _in(w) = (daba, dab, dab, dadac). The decomposition of _in(w) into its is ((daba, dab, dab), (dadac)). Moreover, (daba, dabdab, dadac) is a grouping of _in(w) but for the inverse Lyndon factorization (dabadab, dabda, dac) this is no longer true. Next, let y = dabadabdabdabdadac. We have _in(y) = (daba, dab, dab, dab, dadac). The decomposition of _in(w) into its is ((daba, dab, dab, dab), (dadac)). Moreover, (daba, (dab)^3, dadac) and (dabadab, (dab)^2, dadac) are two groupings of _in(y). For our aims, we need to consider the words that are concatenations of equal factors in _in. This approach leads to a refinement of the partition of _in into non-increasing maximal chains for the prefix order, as defined below. Let C=(ℓ_1, … , ℓ_h) be a non-increasing maximal chain for the prefix order in _in(w). The decomposition of C into maximal compact sequences is the sequence ( G_1, …, G_n) such that (1) C =( G_1, …, G_n) (2) For every i, 1 ≤ i ≤ n, G_i consists of the longest sequence of consecutive identical elements in C Let (𝒞_1, 𝒞_2, … , 𝒞_s) be the decomposition of _in(w) into its non-increasing maximal chains for the prefix order. The decomposition of _in(w) into its maximal compact sequences is obtained by replacing each 𝒞_j in (𝒞_1, 𝒞_2, … , 𝒞_s) with its decomposition into maximal compact sequences. Let ( G_1, …, G_n) be the decomposition of _in(w) into its maximal compact sequences. For every i, 1 ≤ i ≤ n, the concatenation g_i of the elements in G_i is a compact factor in _in(w). Let w ∈Σ^+. We say that (m_1, … , m_k) is a compact factorization of w if (m_1, … , m_k) is an inverse Lyndon factorization of w and each m_j, 1 ≤ j ≤ k, is a concatenation of compact factors in _in(w). Consider again y = dabadabdabdabdadac over Σ = {a,b,c,d }, a < b < c < d, as in Example <ref>. The decomposition of _in(y) = (daba, dab, dab, dab, dadac) into its maximal compact sequences is ((daba), (dab, dab, dab), (dadac)). The compact factors in _in(w) are daba, (dab)^3, dadac. Moreover, (daba, (dab)^3, dadac) is a compact factorization whereas (dabadab, (dab)^2, dadac) is a grouping of _in(y) which is not a compact factorization. Let w ∈Σ^+. If (m_1, … , m_k) is an inverse Lyndon factorization of w having the border property, then (m_1, … , m_k) is a compact factorization of w. Let w ∈Σ^+, let (m_1, … , m_k) be an inverse Lyndon factorization of w having the border property. Let _in(w) = (ℓ_1, … , ℓ_h), where ℓ_1 ≽_inℓ_2 ≽_in…≽_inℓ_h and ℓ_1, … , ℓ_h are anti-Lyndon words. First we prove that (m_1, … , m_k) is a grouping of _in(w) by induction on |w|. If |w| = 1 the statement clearly holds, thus assume |w| > 1. The words m_1 and ℓ_1 are comparable for the prefix order, hence either m_1 is a proper prefix of ℓ_1 or ℓ_1 is a prefix of m_1. 
Suppose that m_1 is a proper prefix of ℓ_1. Thus, there are j, 1 < j ≤ k, and x, y ∈Σ^*, x ≠ 1, such that m_j = xy and ℓ_1 = m_1 ⋯ m_j-1 x. Necessarily it turns out j = 2 because otherwise m_1 ≪ m_j-1, hence, by Lemma <ref>, ℓ_1 ≪ m_j-1x and this contradicts the fact that ℓ_1 is an anti-Lyndon word. In conclusion ℓ_1 = m_1x and m_2 = xy. We know that m_1 ≪ m_2, that is, there are r,s,t ∈Σ^*, a,b ∈Σ, such that a < b and m_1 = ras, m_2 = rbt = xy. If |x| ≤ |r|, then r is a nonempty border of ℓ_1 and if |x| > |r|, then there is a word t' such that x = rbt' which implies ℓ_1 ≪ x. Both cases again contradict the fact that ℓ_1 is an anti-Lyndon word. Therefore, ℓ_1 is a prefix of m_1. Let i be the largest integer such that m_1 = ℓ_1 ⋯ℓ_i-1 x, x, y ∈Σ^*, ℓ_i = xy, 1 < i ≤ h, y ≠ 1. Let (𝒞_1, 𝒞_2, … , 𝒞_s) be the decomposition of _in(w) into its non-increasing maximal chains for the prefix order. We claim that ℓ_1 ⋯ℓ_i - 1 is a prefix of the concatenation of the elements of 𝒞_1, thus (ℓ_1, … , ℓ_i - 1) is a chain for the prefix order. If i = 1 we are done. Let i > 1. By contradiction, assume that there is j, 1 < j < i, such that ℓ_j ∉𝒞_1. Therefore, ℓ_1 ≪ℓ_j which implies m_1 ≪ℓ_j ⋯ℓ_i-1 x and this contradicts the fact that m_1 is an inverse Lyndon word. We now prove that x = 1. Assume x ≠ 1. As a preliminary step, we prove that there is no nonempty prefix β of m_2 such that |β| ≤ |x| and x ≪β. In fact, if such a prefix existed, there would be r,s,t ∈Σ^*, a,b ∈Σ, such that a < b and x = ras, β = rbt. If |ℓ_i| ≤ |x r| then ℓ_i = x r' = rasr', where r' would be a nonempty prefix of r, thus a nonempty border of ℓ_i (recall that ℓ_i = xy with y ≠ 1). If |ℓ_i| > |x r|, then there would be a word t' such that ℓ_i = rasrbt' which would imply ℓ_i ≪ rbt'. Both cases contradict the fact that ℓ_i is an anti-Lyndon word. If x ≠ 1, then either ℓ_i is a prefix of ℓ_1 or ℓ_1 ≪ℓ_i. If ℓ_i were a prefix of ℓ_1, then x would be a nonempty border of m_1. By Lemma <ref> there would exist a nonempty prefix β of m_2 such that |β| ≤ |x| and x ≪β which contradicts our preliminary step. If it were true that ℓ_1 ≪ℓ_i then there would be r,s,t ∈Σ^*, a,b ∈Σ, such that a < b and ℓ_1 = ras, ℓ_i = rbt = xy. If |x| > |r|, then there would be a word t' such that x = rbt' which would imply m_1 ≪ x and this contradicts the fact that m_1 is an inverse Lyndon word. If |x| ≤ |r|, then x is a prefix of r and is a nonempty border of m_1. By Lemma <ref> again, there would exist a nonempty prefix β of m_2 such that |β| ≤ |x| and x ≪β which contradicts again our preliminary step. Let w' ∈Σ^* be such that w = m_1 w'. If w' = 1 we are done. Assume w' ≠ 1. Clearly |w'| < |w|. Of course (m_2, … , m_k) is an inverse Lyndon factorization of w having the border property. Moreover, by Corollary <ref>, _in(w') = (ℓ_i, … , ℓ_h) and (𝒞'_1, 𝒞_2, … , 𝒞_s) is the decomposition of _in(w') into its non-increasing maximal chains for the prefix order, where 𝒞'_1 is defined by 𝒞_1 = (ℓ_1, … , ℓ_i - 1, 𝒞'_1). By induction hypothesis, (m_2, … , m_k) is a grouping of _in(w') and consequently (m_1, … , m_k) is a grouping of _in(w). Finally, to obtain a contradiction, suppose that (m_1, … , m_k) is a grouping of _in(w) having the border property such that (m_1, … , m_k) is not a compact factorization of w. To adapt the notation to the proof, set _in(w) = (ℓ_1^n_1, …, ℓ_r^n_r), where r > 0, n_1, … , n_r ≥ 1 and ℓ_1, … , ℓ_r are anti-Lyndon words. 
By Definitions <ref> and <ref>, there exist integers j, h, p_h, q_h, 1 ≤ j ≤ k - 1, 1 ≤ h ≤ r, p_h ≥ 1, q_h ≥ 1, p_h + q_h ≤ n_h, such that m_j ends with ℓ_h^p_h and m_j + 1 starts with ℓ_h^q_h. Thus, by Definition <ref>, ℓ_h is a prefix of m_j. Moreover, ℓ_h is a proper prefix of m_j. Indeed otherwise ℓ_h = m_j ≤_p m_j + 1 which is impossible because m_j ≪ m_j + 1 ((m_1, … , m_k) is an inverse Lyndon factorization). Thus ℓ_h is a nonempty border of m_j. The word ℓ_h is also a prefix of m_j + 1 and this contradicts the fact that (m_1, … , m_k) has the border property. § THE CANONICAL INVERSE LYNDON FACTORIZATION: THE ALGORITHM In this section we state another relevant result of the paper related to the main one stated in Section <ref>. We have shown that a nonempty word w can have more than one inverse Lyndon factorization but w has a unique inverse Lyndon factorization with the border property (Example <ref>, Proposition <ref>). Below we highlight that this unique factorization is the canonical one defined in <cit.>. This special inverse Lyndon factorization is denoted by because it is the counterpart of the Lyndon factorization of w, when we use (I)inverse words as factors. Indeed, in <cit.> it has been proved that (w) can be computed in linear time and it is uniquely determined for a word w. See Section <ref> for definitions of and all related notions. Since (w) is the unique inverse Lyndon factorization with the border property, from now on these two notions will be synonymous. Below we show another interesting property of : the last factor of the factorization is the longest suffix that is an inverse Lyndon word. Based on this result we provide a new simpler linear algorithm for computing . We begin by recalling previously proved results on , namely Proposition 7.7 in <cit.> and Proposition 9.5 in <cit.>. They are merged into Proposition <ref>. For any w ∈Σ^+, (w) is a grouping of _in(w). Moreover, (w) has the border property. Corollary <ref> is a direct consequence of Propositions <ref>, <ref> and <ref>. For each w ∈Σ^+, (w) is a compact factorization and it is is the unique inverse Lyndon factorization of w having the border property. We end the section with a result which has been proved in <cit.> and which will be used in the next section. Let w ∈Σ^+, let _in(w) = (ℓ_1, … , ℓ_h) and let (𝒞_1, 𝒞_2, … , 𝒞_s) be the decomposition of _in(w) into its non-increasing maximal chains for the prefix order. Let w_1, … , w_s be words such that _in(w_j) = 𝒞_j, 1 ≤ j ≤ s. Then (w) is the concatenation of the sequences (w_1), … , (w_s), that is, (w) = ((w_1), … , (w_s)) We can now state some results useful to prove the correctness of our algorithm. First we observe that, thanks to Corollary <ref> and Proposition <ref>, to compute we can limit ourselves to the case in which _in is a chain with respect to the prefix order. Let ℓ_1, … , ℓ_h be anti-Lyndon words over Σ that form a non-increasing chain for the prefix order, that is, ℓ_1≥_p ℓ_2≥_p …≥_p ℓ_h. If ℓ_1 ≠ℓ_2, then ℓ_1≮_p ℓ_2 ⋯ℓ_h. By contradiction, assume that ℓ_1 is a prefix of ℓ_2 ⋯ℓ_h. Then, ℓ_1 = ℓ_2 ⋯ℓ_tz where either z = 1 and 2 < t ≤ h or z is a nonempty prefix of ℓ_t + 1, 2 ≤ t < h. Thus either ℓ_t or z is a nonempty border of ℓ_1, a contradiction in both cases. <cit.> Let x, y two different borders of a same word w ∈Σ^+. If x is shorter than y, then x is a border of y. Let w ∈Σ^+ and assume that _in(w) form a non-increasing chain for the prefix order. 
If (m_1, … , m_k) is a factorization of w such that each m_j, 1 ≤ j ≤ k, is a concatenation of compact factors in _in(w), then (m_1, … , m_k) has the border property. Let w ∈Σ^+ and assume that _in(w) form a non-increasing chain for the prefix order. Let (m_1, … , m_k) be a factorization of w such that each m_j, 1 ≤ j ≤ k, is a concatenation of compact factors in _in(w). The proof is by induction on k. If k = 1, then the conclusion follows immediately. Assume k > 1. Let w' ∈Σ^+ be such that w = m_1 w'. It is clear that (m_2, … , m_k) is a factorization of w' such that each m_j, 2 ≤ j ≤ k, is a concatenation of compact factors in _in(w'). Thus, by induction hypothesis, (m_2, … , m_k) has the border property. It remains to prove that each nonempty border of m_1 is not a prefix of m_2. The proof is straightforward if m_1 is unbordered, thus assume that m_1 is bordered. Let (w) = (ℓ_1^n_1, …, ℓ_r^n_r), where ℓ_1^n_1, …, ℓ_r^n_r are the compact factors in (w), that is ℓ_1, … , ℓ_r are anti-Lyndon words such that ℓ_1 ≥_p …≥_p ℓ_h. Since m_i is a concatenation of compact factors in _in(w), there is h, 1 ≤ h < r such that m_1 = ℓ_1^n_1⋯ℓ_h^n_h Notice that ℓ_h is a nonempty border of m_1. Furthermore, since ℓ_h is unbordered, ℓ_h is the shortest nonempty border of m_1. If there were a word z which is a nonempty border of m_1 and also a prefix of m_2, by Remark <ref>, ℓ_h would be a prefix of m_2. Therefore, ℓ_h would be a prefix of the word ℓ_h + 1^n_h + 1⋯ℓ_r^n_r which contradicts Lemma <ref>. Let w ∈Σ^+ and let (w) = (m_1, … , m_k) be the unique inverse Lyndon factorization of w having the border property. Then m_k is the longest suffix of w which is an inverse Lyndon word. Let w ∈Σ^+ and let (m_1, …, m_k) be the unique inverse Lyndon factorization of w having the border property. If k = 1 we are done. Thus suppose k > 1. By contradiction, suppose that m_k is not the longest suffix of w that is an inverse Lyndon word. Let s be such longest suffix. Thus, there exist a nonempty suffix x of m_j, 1 ≤ j < k such that s = x m_j + 1⋯ m_k. Furthermore x must be a proper suffix of m_j or we would have s = m_j ⋯ m_k ≪ m_j + 1⋯ m_k contradicting the hypothesis that s is inverse Lyndon. We claim that x ≪ m_j+1. Indeed, since m_j is an inverse Lyndon word, it holds x ≼ m_j. Thus, if x ≪ m_j or x = m_j, it immediately follows that x ≪ m_j+1. Otherwise, x ≤_p m_j and x is a nonempty border of m_j. By Lemma <ref> applied to (m_1, …, m_k), with x = α, there must exist a prefix β of m_j+1 such that x ≪β, hence x ≪ m_j+1. Since x ≪ m_j+1, we have s = x m_j+1⋯ m_k ≪ m_j+1⋯ m_k, contradicting the hypothesis that s is an inverse Lyndon word. Let w ∈Σ^+ be an inverse Lyndon word, and let ℓ∈Σ^+ be an anti-Lyndon word. Then: * If ℓ≪ w, then for every k ≥ 1, ℓ^k w is not an inverse Lyndon word. * If ℓ w is not an inverse Lyndon word, then ℓ≪ w. Furthermore, for every k ≥ 1, w is the longest suffix of ℓ^k w that is an inverse Lyndon word. By Lemma <ref>, the proof of item 1 is immediate. Suppose ℓ w is not inverse Lyndon. Then, there exists a proper suffix s of ℓ w such that ℓ w ≼ s, hence ℓ w ≪ s. Since ℓ is anti-Lyndon, for every proper suffix x of ℓ it follows x ≪ℓ and consequently x w ≪ℓ w. Thus, s must be a suffix of w. Since w is an inverse Lyndon word, one of the following three cases holds: (1) w = s; (2) s <_p w; (3) s ≪ w. By ℓ w ≪ s, in each of the three cases it is evident that ℓ w ≪ w. Thus there are r, t, t' ∈Σ^* and a, b ∈Σ with a < b such that ℓ w = r a t, w = r b t'. If |ℓ| ≥ |r a|, then clearly ℓ≪ w. 
Otherwise, |ℓ| ≤ |r | and there is r' ∈Σ^* such that r = ℓ r'. Consequently, w starts with r' a. On the other hand, r' is a border of r, hence w = ℓ r' b t' and r' b t' is a suffix of w. This contradicts the fact that w is an inverse Lyndon word. For every k ≥ 1, w is a suffix of ℓ^k w that is an inverse Lyndon word. Let x be a proper nonempty suffix of ℓ. Of course x ≪ℓ. The word x w is not an inverse Lyndon word, otherwise we would have ℓ≪ w ≼ xw ≪ℓ w, a contradiction. Moreover, by Lemma <ref>, for any j, 1 ≤ j < k, we have x ℓ^j w ≪ℓ^j w and x ℓ^j w is not an inverse Lyndon word. Finally, by item 1, ℓ^k w is not an inverse Lyndon word. We now describe Algorithm <ref>. Function Factorizew will compute the unique compact factorization of w having the border property. First, at line <ref>, it is computed the decomposition of w into its compact factors. Then, the factorization of w is carried out from right to left. Specifically, in accordance with Proposition <ref>, the for-loop at lines <ref>–<ref> will search for the longest suffix m' of w that is an inverse Lyndon word. The update of m' is managed by iteratively applying Proposition <ref> at line <ref>. Once such longest suffix is found (that is, when the condition at line <ref> is true) it is added to the growing factorization ℱ and it is initiated a new search for the longest suffix for the remaining portion of the string. Otherwise, line <ref>, the suffix is extended. In the end, the complete factorization is returned. §.§ Correctness and complexity We now prove that Algorithm <ref> is correct, that is that it will compute the unique inverse Lyndon factorization of w having the border property, namely (w). Formally: Let w ∈Σ^+, and let ℱ be the result of Factorizew. Then, ℱ = (w). Let (ℓ_1^e_1, …, ℓ_n^e_n) be the decomposition of w into its compact factors, and let L_t = ℓ_t^e_t⋯ℓ_n^e_n. We will denote by m'_t (resp. ℱ_t) the value of m' (resp. ℱ) at the end of iteration t. We will prove the following loop invariant: at the end of iteration t, sequence (m'_t, ℱ_t) is a compact factorization of L_t having the border property. The claimed result will follow by Corollary <ref>. Initialization. Prior to entering the loop, (m'_n, ℱ_n) = (ℓ_n^e_n) , where the last equality follows from Proposition <ref>. Maintenance. Let t ≤ n-1. By induction hypothesis, (L_t+1) = (m'_t+1, ℱ_t+1). Suppose ℓ_t ≪ m'_t+1. Then, by item 1 of Proposition <ref> ℓ_t · m'_t+1 is not inverse Lyndon and m'_t+1 is the longest suffix of ℓ_t^e_t· m'_t+1 that is an inverse Lyndon word. Thus, by Proposition <ref> m'_t+1 is the last factor of any compact factorization of ℓ_t^e_t· m'_t+1. Hence, (m'_t, ℱ_t) = (ℓ_t^e_t, m'_t+1, ℱ_t+1) is a compact factorization of F_t having the border property. Now, consider the case where ℓ_t ≪̸m'_t+1. Then, by the contrapositive of item 2 of Proposition <ref>, ℓ_t · m'_t+1 is inverse Lyndon and thus, again by item 2 of Proposition <ref>, ℓ_t^e_t· m'_t+1 is inverse Lyndon. Therefore, (m'_t, ℱ_t) = (ℓ_t^e_t· m'_t+1, ℱ_t+1) is a compact factorization having the border property. Termination. After iteration t=1, sequence (m'_1, ℱ_1) = (L_1) = (w). Finally, line <ref> sets ℱ = (m'_1, ℱ_1) = (w). Function Factorizew has time complexity that is linear in the length of w. Indeed, the sequence of compact factors obtained at line <ref> can be computed in linear time in the length of w by a simple modification of Duval's algorithm (see <cit.>). After that, each iteration t of loop <ref>–<ref> can be implemented to run in time 𝒪(|ℓ_t|). 
Indeed, condition ℓ_t ≪ m' can be checked by naively comparing ℓ_t against m'. Furthermore, the update of m' and ℱ can be done in constant time: in fact, ℓ_t, ℓ_t^e_t, m' and ℱ can all be implemented as pairs of indexes (in case of the former three) or as a list of indexes (in case of the latter) of w. § CONCLUSIONS We discover the special connection between the Lyndon factorization under the inverse lexicographic ordering, named _in and the canonical inverse Lyndon factorization, named : there exists a unique inverse Lyndon factorization having the border property and this unique factorization is . Moreover each inverse factor of is obtained by concatenating compact factors of _in. These properties give a constrained structure to that deserve to be further explored to characterize properties of words. In particular, we believe the characterization of as a compact factorization, proved in the paper, could highlight novel properties related the compression of a word, as investigated in <cit.>. In particular, the number of compact factors seems to be a measure of repetitiveness of the word to be also used in speeding up suffix sorting of a word. Finally, we believe that the characterization of in terms of _in may be used to extend to the conservation property proved in <cit.> for . This property shows that the Lyndon factorization of a word w preserves common factors with the factorization of a superstring of w. This extends the conservation of Lyndon factors explored for the product u· v of two words u and v <cit.>. § ACKNOWLEDGMENTS This research was supported by the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement PANGAIA No. 872539, by MUR 2022YRB97K, PINC, Pangenome Informatics: from Theory to Applications, and by INdAM-GNCS Project 2023 unsrt § THE CANONICAL INVERSE LYNDON FACTORIZATION In this section we summarize the relevant material on the canonical inverse Lyndon factorization and we refer to <cit.> for a thorough discussion on this topic. If w is an inverse Lyndon word, then (w) = w. Otherwise, (w) is recursively defined. The first factor of (w) is obtained by a special pair (p , p) of words, named the canonical pair associated with w, which in turn is obtained by the shortest nonempty prefix z of w such that z is not an inverse Lyndon word. Proposition 6.2 in <cit.> provides the following characterization of the pair (p , p). Let w ∈Σ^+ be a word which is not an inverse Lyndon word. A pair of words (p , p) is the canonical pair associated with w if and only the following conditions are satisfied. (1) z = p p is the shortest nonempty prefix of w which is not an inverse Lyndon word. (2) p = ras and p = rb, where r,s ∈Σ^*, a,b ∈Σ and r is the shortest prefix of p p such that p p = rasrb, with a < b. (3) p is an inverse Lyndon word. Given a word w which is not an inverse Lyndon word, Proposition <ref> suggests a method to identify the canonical pair (p , p) associated with w: just find the shortest nonempty prefix z of w which is not an inverse Lyndon word and then a factorization z = p p such that conditions (2) and (3) in Proposition <ref> are satisfied. The canonical inverse Lyndon factorization has been also recursively defined. Let w ∈Σ^+. (Basis Step) If w is an inverse Lyndon word, then (w) = (w). (Recursive Step) If w is not an inverse Lyndon word, let (p,p) be the canonical pair associated with w and let v ∈Σ^* such that w = pv. 
Let (v) = (m'_1, …, m'_k) and let r, s ∈Σ^*, a, b ∈Σ be such that p = ras, p̄ = rb with a < b. Then
(w) = (p, (v)) if p̄ = rb ≤_p m'_1, and
(w) = (p m'_1, m'_2, …, m'_k) if m'_1 ≤_p r.
The following example is in <cit.>. Let Σ = {a,b,c,d } with a < b < c < d, w = dabadabdabdadac. We have _in(w) = (daba, dab, dab, dadac) and (w) = (daba, dabdab, dadac). Consider z = dabdadacddbdc. We have (z) = _in(z) = (dab,dadac,ddbdc).
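To make the algorithmic description in the previous sections concrete, the sketch below follows the two-phase procedure proved correct above: it first computes the compact factors of the factorization under the inverse order with a Duval-style scan, and then groups them from right to left, starting a new factor exactly when ℓ_t ≪ m'. It is an illustrative reimplementation, not the authors' code, and the plain string representation ignores the index-based bookkeeping needed for a strictly linear-time implementation.

```python
def anti_lyndon_factorization(w: str):
    """Duval's algorithm run under the inverse alphabetic order:
    the Lyndon factorization of w w.r.t. <_in (anti-Lyndon factors)."""
    factors, i, n = [], 0, len(w)
    greater = lambda x, y: x > y          # x <_in y  iff  y < x in the usual order
    while i < n:
        j, k = i + 1, i
        while j < n and not greater(w[j], w[k]):
            k = i if greater(w[k], w[j]) else k + 1
            j += 1
        while i <= k:
            factors.append(w[i:i + j - k])
            i += j - k
    return factors

def compact_factors(w: str):
    """Group maximal runs of identical consecutive anti-Lyndon factors."""
    runs = []
    for f in anti_lyndon_factorization(w):
        if runs and runs[-1][0] == f:
            runs[-1][1] += 1
        else:
            runs.append([f, 1])
    return [(f, e) for f, e in runs]

def much_less(x: str, y: str) -> bool:
    """x << y: x precedes y lexicographically and is not one of its prefixes."""
    return (not y.startswith(x)) and x < y

def canonical_inverse_factorization(w: str):
    """Right-to-left grouping of the compact factors, as in Function Factorize."""
    cf = compact_factors(w)
    last, exp = cf[-1]
    m, result = last * exp, []
    for f, e in reversed(cf[:-1]):
        if much_less(f, m):               # m cannot be extended to the left
            result.insert(0, m)
            m = f * e
        else:                             # f^e m is still an inverse Lyndon word
            m = f * e + m
    return [m] + result

# Examples from the paper (alphabet a < b < c < d):
print(canonical_inverse_factorization("dabadabdabdadac"))  # ['daba', 'dabdab', 'dadac']
print(canonical_inverse_factorization("dabdadacddbdc"))    # ['dab', 'dadac', 'ddbdc']
```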
http://arxiv.org/abs/2406.19333v1
20240627170540
Accelerating Multiphase Flow Simulations with Denoising Diffusion Model Driven Initializations
[ "Jaehong Chung", "Agnese Marcato", "Eric J. Guiltinan", "Tapan Mukerji", "Hari Viswanathan", "Yen Ting Lin", "Javier E. Santos" ]
physics.geo-ph
[ "physics.geo-ph", "physics.comp-ph", "physics.flu-dyn" ]
§ ABSTRACT This study introduces a hybrid fluid simulation approach that integrates generative diffusion models with physics-based simulations, aiming to reduce the computational cost of flow simulations while still honoring all the physical properties of interest. These simulations enhance our understanding of applications such as assessing hydrogen and CO_2 storage efficiency in underground reservoirs. Nevertheless, they are computationally expensive, and the presence of nonunique solutions can require multiple simulations within a single geometry. To overcome the computational cost hurdle, we propose a hybrid method that couples generative diffusion models and physics-based modeling. We introduce a system to condition the diffusion model with a geometry of interest, allowing it to produce variable fluid saturations in the same geometry. While training the model, we simultaneously generate initial conditions and perform physics-based simulations using these conditions. This integrated approach enables us to receive real-time feedback on a single compute node equipped with both CPUs and GPUs. By efficiently managing these processes within one compute node, we can continuously evaluate performance and stop training when the desired criteria are met. To test our model, we generate realizations in a real Berea sandstone fracture, which shows that our technique is up to 4.4 times faster than commonly used flow simulation initializations. § INTRODUCTION CO_2 and H_2 subsurface storage are seen as important methods to address climate change <cit.>. The injection of CO_2 and H_2 is a multiphase process whereby these fluids displace existing groundwater. Predicting the behavior of these two-phase fluid systems is challenging, as their dynamics are governed by diverse factors including flow path geometries <cit.>, fluid saturations <cit.>, and the affinity between the fluids and the host rocks <cit.>. In addition, the presence of fractures, which act as preferential pathways for flow, further complicates the picture <cit.>. These fractures pose potential risks of fluid leakage <cit.>, underlining the need for a comprehensive understanding to ensure the integrity and efficiency of underground storage systems <cit.>. Pore-scale simulations, ranging from 10 nm to 10 cm, provide accurate models of how fluids travel through porous media in underground reservoirs <cit.>. In particular, these micro-scale simulations can capture the preferred traveling paths for fluids by identifying local minimum-energy states <cit.>. Such configurations arise from the energy balance between fluid displacements and the interfacial tensions between fluids and solids <cit.>. While pore-scale simulations can accurately and precisely characterize fluid configurations in the pore space, scaling these simulations to the reservoir scale, ranging from 10 cm to 100 m, is impractical due to the computational cost associated with large modeling domains and intricate flow paths (complicated geometries). In addition, the need to run multiple simulations for different saturation levels in the same geometries (for example, to compute relative permeability curves) further escalates the computational challenges. Many studies employ machine learning techniques for predicting fluid flow at the macroscale <cit.>. However, applications at the pore scale remain limited.
Recent advancements in machine learning have demonstrated significant potential in reducing the computational burden of multiphase flow simulations. Machine learning can help these simulations by offering efficient algorithms that can learn complex patterns and relationships within the data. This capability allows for faster and more accurate predictions without the need for solving extensive and computationally expensive equations traditionally required in multiphase flow modeling. Additionally, machine learning models can be trained on large datasets to generalize well to new, unseen data, thereby enhancing the overall efficiency and scalability of the simulations. Building on these advancements, several recent studies have explored the application of machine learning techniques to multiphase flow problems with encouraging results. <cit.> employed a U-Net architecture to predict fluid displacement in micromodels showing high accuracy in the predicted displacement configurations. <cit.> trained a residual U-Net to predict two-phase configurations in fractures, mapping the 3D geometry into a 2D feature space with a lossless algorithm. Their trained model was able to predict the invasion dynamic of an unsteady state flow of a non-wetting front. <cit.> trained a network to predict the fluid distribution within fractures at steady state using data from lattice Boltzmann simulations. They demonstrated that a trained network can accurately predict fluid residual saturation and distribution based solely on the dry fracture characteristics. <cit.> trained a conditional Generative Adversarial Network to predict invasion percolation configurations in 2D spherepacks. While the model demonstrated a reduction in computational time compared to traditional algorithms, its use cases are still limited as these simulations do not capture the full complexity of two-phase fluid interactions. These studies provide an initial demonstration of successful applications of machine learning for multiphase flow problems. Hybrid approaches, which combine the computational efficiency of deep learning with the physical robustness of numerical simulations, have been proposed as a solution to simulate complex physical problems efficiently. <cit.> introduced an integrated framework that employed a Convolutional Neural Network (CNN) to provide a data-driven initialization, subsequently utilized in single-phase Lattice Boltzmann Method (LBM) simulations. This approach achieved a tenfold acceleration of the simulation process while maintaining physical accuracy. In addition, <cit.> demonstrated that machine learning-based initializations can greatly accelerate simulations of electrical conductivity in porous media. These studies show the potential of a hybrid approach to combine the strengths of deep learning and numerical solvers. Our study aims to extend this hybrid approach to multiphase flow simulations, which are orders of magnitude more expensive compared to single phase and electrical conductivity simulations. Hybrid approaches tackle the challenge of achieving computational efficiency while ensuring compliance with fundamental governing equations, such as continuity and momentum balance. In deep learning, generative models are designed to approximate high-dimensional data distributions <cit.>. Recently, diffusion models have gained prominence for their unique ability to progressively convert noise into highly detailed and structured outputs that approximate the distribution of the training data. 
These models employ a reverse diffusion process to gradually build coherent structures from noise, and a forward diffusion process to incrementally corrupt the data. This bidirectional approach makes diffusion models particularly effective for complex computer vision tasks, such as generating high-resolution, realistic images <cit.>, establishing a new state of the art for generative modeling. In this study, we demonstrate that diffusion models can learn to accurately predict steady-state multiphase fluid configurations in fracture geometries. We introduce a geometric conditioning approach to obtain fluid configurations for specific fracture geometries. The predicted fluid configurations are highly accurate; we used them as initial conditions for physics-based simulations, significantly reducing computational time. We quantify accuracy both with statistical metrics, such as mean squared error, and by measuring the reduction in simulation time, which indicates how closely the predicted configuration matches the real one. Our results demonstrate that diffusion models can provide accurate configurations for multiphase flow, significantly enhancing the efficiency of physics-based simulations. This opens the door to hybrid approaches where machine learning and physics-based methods can coexist, leveraging the strengths of both to solve complex problems more effectively. The remainder of this paper is organized as follows: Section <ref> provides detailed information about the fracture dataset and multiphase simulations used in this study. Section <ref> describes the diffusion model used and details the geometric conditioning process for specific fracture geometries. Finally, Section <ref> presents and discusses the results. § DATASET §.§ Fracture geometries To train our model, we generated a dataset of 1,000 fractures using pySimFrac, an open-source Python-based fracture geometry generator <cit.>. We used the spectral method to create fractures by varying the following geometric parameters: Hurst exponent, spatial correlation, and surface roughness. The impact of these parameters is illustrated in Figure <ref>. Although these fractures are synthetic, the implementation and values used align with those measured in profilometry studies of real fractures <cit.>. For readers interested in the underlying methods and their implementations, further details can be found in the literature <cit.>. The generated geometries are projected onto a 128-by-128 grid, enabling us to run numerous lattice Boltzmann simulations within a computationally feasible time frame. This approach enables us to create a sufficiently diverse dataset for training machine learning models. To visualize the diversity within our dataset, we employed t-SNE (t-Distributed Stochastic Neighbor Embedding), a technique for reducing dimensionality and visualizing high-dimensional data structures in a lower-dimensional space <cit.>. Figure <ref> shows that while differences exist within our dataset, the extremes of the t-SNE plot are relatively limited. The goal of this work is to demonstrate the ability of the machine learning model to generalize to a variety of fractures despite the simplicity of its training set. §.§ Multiphase flow simulations We conducted two-phase flow simulations on the fracture geometries using the MP-LBM library <cit.>, which uses the lattice Boltzmann method (LBM) to simulate two competing immiscible (wetting and non-wetting) fluids in complex geometries. 
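The fracture surfaces above are generated with a spectral method controlled by a Hurst exponent, a spatial correlation scale, and a surface roughness amplitude. Before turning to the LBM model described next, the following minimal numpy sketch illustrates one common way such a self-affine aperture field can be synthesized on a 128-by-128 grid; it is an illustrative sketch under stated assumptions (function names, parameter names, and default values are ours), not the pySimFrac implementation.

```python
import numpy as np

def rough_surface(n=128, hurst=0.7, corr_cut=4.0, roughness=0.01, seed=0):
    """Self-affine surface via spectral (Fourier-filtering) synthesis.
    hurst     : Hurst exponent controlling small-scale roughness scaling
    corr_cut  : roll-off wavenumber below which the spectrum is flat
                (acts as a spatial-correlation control)
    roughness : target RMS of the surface heights
    """
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n) * n                      # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kr = np.sqrt(kx**2 + ky**2)
    kr[0, 0] = 1.0                                 # avoid division by zero at DC
    # isotropic self-affine PSD: flat below corr_cut, power-law decay above it
    psd = np.where(kr < corr_cut, corr_cut, kr) ** (-2.0 * (1.0 + hurst))
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    h = np.real(np.fft.ifft2(np.sqrt(psd) * phase))
    h -= h.mean()
    return h * (roughness / h.std())               # rescale to requested RMS

def fracture_aperture(n=128, mean_aperture=0.05, **kw):
    """Aperture field = mean opening + difference of two rough walls."""
    top = rough_surface(n, seed=1, **kw)
    bottom = rough_surface(n, seed=2, **kw)
    return np.clip(mean_aperture + top - bottom, 0.0, None)  # contact where <= 0

aperture = fracture_aperture()
print(aperture.shape, float(aperture.min()), float(aperture.max()))
```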
In our study, the density distribution function for each phase is represented by the Boltzmann equation with the Bhatnagar-Gross-Krook (BGK) collision term <cit.> as follows: f_a^α(x+e_aΔ t,t+Δ t) = f_a^α(x, t) - 1/τ[f_a^α(x, t)-f_a^α, eq(x, t)], where f_a^α(x, t) is the density distribution function along the a-th lattice direction, α indexes the two phases (α = 1 or -1), e_a is the set of discrete velocity vectors of a node, and τ is the characteristic relaxation time, related to the fluid viscosity. The equilibrium distribution function, f_a^α, eq(x, t), describes the distribution function in a state of local equilibrium <cit.>. In addition, we consider the acting force on each fluid phase due to the fluid-fluid interaction (interfacial tension) and fluid-solid interaction (wettability) based on the Shan-Chen model <cit.>. Readers interested in the underlying model for specifying a specific contact angle can refer to the literature <cit.>. The simulations were conducted using a 1:1 density ratio and a wetting angle of 24 degrees, representing a system with brine and supercritical CO_2. It is worth noting that the proposed workflow can learn systems with different fluid properties or wetting angles; it is not limited to our specific system. For each fracture, we ran thirteen evenly distributed simulations with fluid ratios varying from 20% to 60% to ensure percolation of both fluids and to exclude isolated blobs that would not converge in the simulation. This resulted in a total of 13,000 cases (1,000 synthetic fractures × 13 saturation cases) for the fluid configuration dataset in this study. During the simulation, both fluids are driven through the fracture by an external force. Convergence is monitored by tracking the relative kinetic energy (E_k), which is represented as follows <cit.>: E_k = 1/2( ∂v⃗/∂ t)^2, where ∂v⃗/∂ t denotes the time rate of change of the velocity vector. The simulation is considered converged once the relative energy change across the simulation domain falls below a threshold of 5 × 10^-5 between consecutive intervals of 1,000 time steps. The simulations take on average around 100 thousand time steps to achieve convergence. Once the simulation converges, the fracture's fluid configurations can be considered stable, implying a minimum free energy state. In other words, the converged simulation depicts the most conductive pathways for each sample given the solid geometry and the initial ratios of fluid. It is noteworthy that these stable fluid configurations exhibit intricate variability, even within the same fracture geometry. In this work, the LBM simulations act as the ground truth for training our model. Nevertheless, we would like to emphasize that the workflow described above is not limited to LBM simulations; it can also be applied using other methods such as level-set simulations <cit.>, molecular dynamics <cit.>, and computational fluid dynamics <cit.>, among others. We chose LBM simulations because they are practical for us, given that we already have a robust open-source solver in place. § DIFFUSION MODELS CONDITIONED ON FRACTURE GEOMETRIES We employ Denoising Diffusion Probabilistic Models <cit.> to learn the data distribution of stable fluid configurations in fractured media. These models operate by constructing two Markov chains. A forward Markov chain is defined to perturb samples drawn from the data distribution, 𝐱_0∼ q(𝐱_0), towards a limiting isotropic Gaussian distribution. 
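To make the convergence criterion above concrete, the following is a minimal sketch of a monitor that checks the relative change in kinetic energy between consecutive 1,000-step checkpoints against the 5 × 10^-5 threshold. It illustrates the criterion as described here, not the MP-LBM internals, and the `lbm_step` call in the usage comment is a hypothetical placeholder for the solver update.

```python
import numpy as np

class ConvergenceMonitor:
    """Stop a two-phase LBM run once the relative kinetic-energy change
    between consecutive 1,000-step checkpoints drops below a threshold."""

    def __init__(self, check_every=1000, tol=5e-5):
        self.check_every = check_every
        self.tol = tol
        self._previous_energy = None

    def kinetic_energy(self, velocity):
        # velocity: array of shape (2, ny, nx) holding (vx, vy) on the grid
        return 0.5 * np.sum(velocity ** 2)

    def converged(self, step, velocity):
        if step % self.check_every != 0:
            return False
        energy = self.kinetic_energy(velocity)
        done = False
        if self._previous_energy is not None and self._previous_energy > 0:
            rel_change = abs(energy - self._previous_energy) / self._previous_energy
            done = rel_change < self.tol
        self._previous_energy = energy
        return done

# usage inside a (hypothetical) time-stepping loop:
# monitor = ConvergenceMonitor()
# for step in range(1, max_steps + 1):
#     velocity = lbm_step(...)          # advance the solver by one time step
#     if monitor.converged(step, velocity):
#         break
```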
The conditional mean of the reverse-time process was derived <cit.> as the training target of a U-Net <cit.>, which is used to propagate samples drawn from the limiting distribution back to the data distribution after training, generating novel samples. In short, the reverse diffusion process progressively transforms noise into coherent structures, while the forward diffusion process destroys the structure in the data distribution incrementally. The forward and the reverse processes are defined as follows <cit.>: q(𝐱_𝐭|𝐱_𝐭-1) = 𝒩(𝐱_𝐭; √(1-β_t)𝐱_𝐭-1, β_t 𝐈), p_θ(𝐱_𝐭-1|𝐱_𝐭) = 𝒩(𝐱_𝐭-1; μ_θ (𝐱_𝐭, t), Σ_θ(𝐱_𝐭, t)), where x_t is the sample at time t, β_t is a variance schedule controlling the Gaussian noise addition, μ is a mean, and Σ is a covariance matrix. Variational inference (VI) is used to train the diffusion model via minimizing the evidence lower bound (ELBO), a lower bound of the Bayes evidence of the model <cit.>. <cit.> showed that VI can be effectively performed by training the model to predict the noise, ϵ, that is added to x_t, given a noisy image x_t, via the loss function: ℒ(θ) = 𝔼_t ∼ [1,T], x_0, ϵ [ϵ - ϵ_θ (x_t, t) ^2], §.§ Fracture geometry conditioning Since the geometry of flow paths has a first order influence on the configurations of fluids <cit.>, a useful model should be able to provide fluid configurations for user-specified geometries. This task is similar to inpainting in computer vision <cit.>, which requires the model to synthesize images in unknown areas that are both realistic and consistent with the surrounding background. One unique challenge in our inpainting task lies in the fracture geometries which are characterized by sharp and irregular shapes. To ensure that our model generates fluid configurations that adhere to given geometries (i.e., not solely based on random Gaussian noise), we introduce an auxiliary binary image G∈{0,1}^H × W encoding the fracture geometry. Here, H is the height and W is the width of the image: G(i,j) = {[ 1 (i,j) ,; 0 . ]. Figure <ref> shows a schematic diagram of the diffusion process in our model, which generates denoised images (x_t-1) given x_t and G. By including this additional channel in every denoising step during the reverse process, the model is provided with essential information to synthesize images based on a given geometry. Once trained, the model can generate fluid realizations for unseen geometries by concatenating the desired fracture geometry, represented as a binary image (G), with a random isotropic Gaussian field (x_T), and running the reverse process depicted in Figure <ref>. During the model's development, we experimented with using only one channel, where the solid regions were depicted by zeros, and the noise and denoising processes occurred within the fracture space. This approach resulted in very poor outcomes. We hypothesize that in highly rough fractures with very small pore spaces, the limited amount of Gaussian noise added is insufficient to enable the necessary expressiveness for the diffusion process to accurately create fluid configurations. § RESULTS AND DISCUSSION §.§ Training and model performance We train the diffusion model with 100 denoising steps over 4,000 training iterations, a process that takes approximately one hour on a single A100 GPU. During training, we employ two evaluation metrics to assess the model's performance. 
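The geometry conditioning above reduces, in code, to concatenating the binary mask G with the noisy image at every step and training the noise predictor with the loss in Eq. (<ref>). The following PyTorch sketch illustrates one training step under stated assumptions: `eps_model` is an assumed U-Net accepting a two-channel input [x_t, G] and a timestep, the β_t schedule is taken to be linear, and the forward noising uses the standard DDPM closed form x_t = √(ᾱ_t) x_0 + √(1-ᾱ_t) ε with ᾱ_t = ∏_s≤ t(1-β_s); the actual codebase used in this work is the one listed in the Open Research section.

```python
import torch
import torch.nn.functional as F

T = 100                                        # denoising steps used in this work
betas = torch.linspace(1e-4, 0.02, T)          # variance schedule (assumed linear)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t = prod_s (1 - beta_s)

def training_step(eps_model, x0, geometry):
    """One geometry-conditioned DDPM training step.
    x0       : (B, 1, H, W) converged fluid configuration from the LBM solver
    geometry : (B, 1, H, W) binary mask G (1 = open fracture space, 0 = solid)
    eps_model: U-Net taking a 2-channel input [x_t, G] and the timestep t (assumed).
    """
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    eps = torch.randn_like(x0)
    ab = alpha_bar.to(x0.device)[t].view(b, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps        # closed-form forward noising
    eps_pred = eps_model(torch.cat([x_t, geometry], dim=1), t)
    return F.mse_loss(eps_pred, eps)                      # noise-prediction loss
```

At sampling time, the same mask is concatenated to the iterate at every reverse step, which is what ties the generated configuration to the requested fracture geometry.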
The first metric is the Mean Squared Error (MSE), commonly used to train machine learning models, which quantifies the discrepancy between predictions and ground truth. While a low MSE value indicates that the model is effectively approximating the training data, determining the optimal model for the physical domain remains challenging. To address this challenge, we introduce a second metric that focuses on the number of iterations required for the LBM simulation to converge, starting from the initial configuration provided by the diffusion model. The rationale behind this metric is based on the functionality of numerical simulators, which approximate solutions to partial differential equations through iterative methods. Therefore, if the fluid configuration generated by our diffusion model closely approximates the numerical solution, the number of iterations needed for convergence should be minimal, as the starting point is already near the converged configuration. Throughout the training, we generate fluid configurations for unseen fracture geometries in intervals of 800 training iterations (during which the model has seen 800 fractures with varying noise levels). These configurations serve as initial conditions for multiphase flow simulations, and we record the number of iterations required by our LBM solver to achieve convergence. Figure <ref> illustrates the evolution of the MSE loss and the convergence iterations over the training period. Notably, the MSE loss starts high and plateaus around 2,000 training iterations, indicating effective denoising for the training dataset. Additionally, we observe a significant reduction in the number of iterations required for convergence. Specifically, the average number of iterations decreased from 2.6 million to 140 thousand, representing a reduction of approximately 95%. The standard deviation also decreased by 95%, further highlighting the model's increasing consistency. These results suggest that our diffusion model not only captures the visual similarity of the training dataset but also learns to generate physical configurations conditioned on the fracture geometry. It is important to note that computing this metric (and running the multiphase lattice Boltzmann simulations) does not incur additional computational overhead, as the simulations are run on the fly using the idle CPUs of our compute node. This approach maximizes the utilization of our computing infrastructure and allows for continuous monitoring of the model's performance without additional resource demands. §.§ Assessing the Trained Model's Performance Against Different Initialization Methods We observed that as the training progressed, the model was able to provide solutions very close to the stable configuration of the LBM solver for the training data, as shown in Figure <ref>. Upon completing the training, we explore the computational advantages of our hybrid approach in data not present in the training set. We generate initial configurations with the trained diffusion model for geometries unseen during training and then assess the number of iterations needed for convergence. To validate our strategy, we benchmark our diffusion-based initialization against two commonly used fluid initialization methods: Euclidean distance-based initialization and random initialization. Additionally, we include a benchmark using the simulation solution as the starting point for a new simulation. Figure <ref> presents examples of the different initializations within the simulation domain. 
First, the Diffusion-Based Initialization method uses a realization from our trained model within a given geometry. Using the same wetting/non-wetting fluid ratio, the Euclidean Distance-Based Initialization identifies the non-wetting region in the flow path based on provided saturations and positions the non-wetting fluid away from the solid boundary. This is based on the understanding that the invading fluid tends to occupy pores distant from solid walls <cit.>. The Random Initialization method randomly places fluid particles throughout the connected domain until the desired saturation is reached. Finally, The Simulation Solution-Based Initialization uses converged fluid configurations from the LBM simulation as its baseline. It is important to understand that this method sets a benchmark that other initializations are not expected to exceed. We purposely do not use the velocity field saved from the previous simulation since our diffusion model is only trained to provide the fluid configuration, not the velocity field. Therefore, initializing a simulation with only the previously converged fluid flow and not the velocity field may still require several iterations to reconverge. This ensures fairness when comparing it with other methods. A notable advantage of the Diffusion-Based Initialization is the capability to initiate continuous density values. Although we are mainly interested in defining the two phases distinctly, slight variations in their density values inform the LBM simulation about local capillary pressure gradients, among other physical features. In contrast, the Euclidean Distance-Based and Random Initialization methods can only produce discrete phases (see Figure <ref>). Thus, the simulation solution-based initialization method and the random-based initialization method represent the lower and upper bounds of convergence time, respectively, while the Euclidean-based initialization is a reasonable approach, grounded in the domain knowledge that invading fluid tends to occupy pores distant from solid walls <cit.>. Table <ref> provides a detailed comparison of 100 simulation results on geometries not seen during training. We compare each of the initialization methods described above. The simulation solution initialization method, although it may seem somewhat impractical, as it essentially uses the solution of a previous simulation, represents the best possible scenario for initialization. By doing so, it sets a clear benchmark for the fastest possible convergence. This approach allows us to understand the upper limits of efficiency and provides a standard against which the performance of other initialization methods can be measured. On average the simulation took 4.38 × 10^4 iterations to convergence, an average simulation time of 8.5 minutes. These values represent the potential lower bound of computational demand for the diffusion model under optimal conditions. Once this benchmark is established, the diffusion-based initialization method shows an improvement in efficiency compared to other common multiphase simulation initialization methods. On average, the simulations using diffusion-based initialization take 9.61 × 10^4 iterations, roughly 20.8 minutes. The diffusion-based initialization reduces the number of iterations required by 1.3 and 2.4 times compared to the Euclidean and Random initializations, respectively over the entire test set. 
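For reference, the two baseline initializations described above can be sketched as follows. This is a simplified illustration rather than the exact implementations: for example, both baselines in the text operate on the connected flow path, and connectivity is not checked here. The Euclidean variant ranks open cells by their distance to the nearest solid and assigns the farthest cells to the non-wetting phase until the target saturation is reached; the random variant scatters the non-wetting phase uniformly over the open cells.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def euclidean_init(geometry, s_non_wetting):
    """Place the non-wetting phase in the open cells farthest from the solid walls.
    geometry      : binary array, 1 = open fracture space, 0 = solid
    s_non_wetting : target non-wetting saturation (fraction of open cells)
    Returns an int array: 0 = solid, 1 = wetting fluid, 2 = non-wetting fluid.
    """
    dist = distance_transform_edt(geometry)          # distance to nearest solid cell
    open_idx = np.flatnonzero(geometry)
    n_nw = int(round(s_non_wetting * open_idx.size))
    # open cells sorted from farthest to closest to the walls
    order = open_idx[np.argsort(dist.ravel()[open_idx])[::-1]]
    phases = geometry.astype(int).ravel().copy()     # 1 = wetting by default
    phases[order[:n_nw]] = 2                         # farthest cells -> non-wetting
    return phases.reshape(geometry.shape)

def random_init(geometry, s_non_wetting, seed=0):
    """Scatter the non-wetting phase uniformly over the open cells."""
    rng = np.random.default_rng(seed)
    open_idx = np.flatnonzero(geometry)
    n_nw = int(round(s_non_wetting * open_idx.size))
    chosen = rng.choice(open_idx, size=n_nw, replace=False)
    phases = geometry.astype(int).ravel().copy()
    phases[chosen] = 2
    return phases.reshape(geometry.shape)
```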
Even after including the diffusion generation time, our proposed hybrid model remains much faster than the commonly used initialization methods. These results offer compelling evidence of the efficacy of the approach. We hypothesize that the computational cost improvements could be even more pronounced in more complex scenarios, such as in 3D porous media. §.§ Testing on Real Fracture Dataset We tested our trained model on a publicly available real fracture in Berea sandstone <cit.> with a voxel size of 27.344 × 27.344 × 32.548 μm. We employed 2D fractures, which are slices of the 3D fracture taken perpendicular to the longest axis of the image. Despite training our model using simple synthetic fractures, we were able to accelerate the LBM simulations on the Berea fracture by factors of 2.7 and 4.4 compared to the Euclidean and Random initializations, respectively. Figure <ref> shows the procedure to extract the fracture cross-sections from the 3D CT-scan <cit.>. We then generated fluid realizations on these cross-sections, some of which are shown in Figure <ref>. These configurations are then used as the starting points of the LBM simulations. In Figure <ref> we show the number of iterations required for convergence and the simulation time for each initialization method, assessed on a dataset of 32 fractures; the average values for each initialization are reported in Table <ref>. Consistent with the synthetic case, the computational costs in terms of iterations to convergence and simulation times still exhibit a clear advantage of our diffusion-powered model over commonly used initializations for multiphase flow. § CONCLUSION In this study, we developed a hybrid approach that combines generative diffusion models with multiphase flow simulations. This method efficiently creates fluid configurations specific to fracture geometries, effectively managing varying saturation levels and significantly reducing the computational cost of pore-scale simulations. Our main contributions are the following. Fracture geometry conditioned training: Our model can generate fluid configurations within specific fracture geometries by incorporating geometric conditioning. This allows the model to accurately represent complex fracture surfaces and consider phenomena that occur at or near these interfaces, as well as in more distant regions. Simulation feedback during training: We introduced a new metric to evaluate our model's performance: the number of iterations needed for simulation convergence. This metric improved as the model trained, demonstrating its ability to understand both the visual and physical aspects of fluid configurations. It also provided a more meaningful stopping criterion for training compared to just using the loss function. Thanks to these advancements and the ability of diffusion models to parameterize complex distributions, our model, although trained with relatively simple synthetic data, demonstrated its effectiveness on a real Berea sandstone fracture. The results showed that our model could generalize well to complex fractures, capturing the main factors affecting multiphase flow. Our hybrid method significantly reduced the iterations needed for simulation convergence, resulting in substantial computational savings. In the case of the Berea sandstone fracture, our model accelerated simulations by up to a factor of 4.4 compared to commonly used initialization methods. 
Overall, our study provides evidence that a diffusion model-based approach is effective for pore-scale simulations. Looking ahead, exploring three-dimensional (3D) simulations could be a valuable avenue for future research. However, creating a dataset for 3D simulations would be very computationally expensive, and adopting different neural networks would be necessary to manage the increased computational demands. Despite these challenges, the promising results from our study motivate further exploration in this direction. Models like this could provide researchers with a powerful tool to explore the sources of complex behaviors observed in nature. Furthermore, advancements in these algorithms could significantly transform reservoir modeling in the earth sciences. The future lies in hybrid approaches that combine machine learning with physics-based models, leveraging the strengths of both to achieve the best results. By integrating these methods, we can harness the computational efficiency of machine learning and the robustness of physics, demonstrating that both can coexist and complement each other effectively. § OPEN RESEARCH SECTION The code for generating synthetic fractures, multiphase flow simulations, and denoising diffusion models is available at the following repositories: * pySimFrac: https://github.com/lanl/pySimFrachttps://github.com/lanl/pySimFrac * MP-LBM: https://github.com/je-santos/MPLBM-UThttps://github.com/je-santos/MPLBM-UT * Diffusion Model Codebase: https://github.com/lucidrains/denoising-diffusion-pytorchhttps://github.com/lucidrains/denoising-diffusion-pytorch § ACKNOWLEDGMENTS J.C. thanks the Applied Machine Learning (AML) summer program and Center for Nonlinear Studies (CNLS) at Los Alamos National Laboratory (LANL) for the mentoring and the fellowship received. AM gratefully acknowledge the support of the Center for Non-Linear Studies (CNLS) for this work. J.E.S and Y.T. acknowledge the support by the Laboratory Directed Research & Development project “Diffusion Modeling with Physical Constraints for Scientific Data (20240074ER)”. plainnat.bst figuresection tablesection
http://arxiv.org/abs/2406.18618v1
20240626064731
Markov Decision Process and Approximate Dynamic Programming for a Patient Assignment Scheduling problem
[ "Malgorzata M. O'Reilly", "Sebastian Krasnicki", "James Montgomery", "Mojtaba Heydar", "Richard Turner", "Pieter Van Dam", "Peter Maree" ]
math.OC
[ "math.OC", "cs.SY", "math.PR" ]
§ ABSTRACT We study the Patient Assignment Scheduling (PAS) problem in a random environment that arises in the management of patient flow in hospital systems, due to the stochastic nature of the arrivals as well as the Length of Stay distribution. We develop a Markov Decision Process (MDP) which aims to assign the newly arrived patients in an optimal way so as to minimise the total expected long-run cost per unit time over an infinite horizon. We assume Poisson arrival rates that depend on patient types, and Length of Stay distributions that depend on whether patients stay in their primary wards or not. Since instances of realistic size of this problem are not easy to solve, we develop numerical methods based on Approximate Dynamic Programming. We illustrate the theory with numerical examples with parameters obtained by fitting to data from a tertiary referral hospital in Australia, and demonstrate the application potential of our methodology under practical considerations. Keywords: Patient Assignment Scheduling problem, Poisson arrivals, Length of Stay distribution, Markov chains, Markov Decision Process, Approximate Dynamic Programming. Mathematics Subject Classification: 60J80 – 60J22 – 92D25 – 65H10 Funding: This research was supported by funding through the Australian Research Council Linkage Project LP140100152. § INTRODUCTION In this paper we consider the Patient Assignment Scheduling (PAS) problem in which, at the start of each day, newly arrived patients are assigned to beds in different wards, considering their needs, priority, and available resources, in a way so as to optimise the total expected daily cost over an infinite time horizon. We use the term `bed' in the sense of the resource that consists of the staff (nurses and clinicians) available to attend to a patient in a physical bed, rather than the physical bed itself. We assume that patients are assigned to the beds at the start of time period d=0,1,2,…, see Figure <ref>. The allocation involves a group of patients, waiting to be admitted to a suitable bed. Upon completing their treatments, patients are discharged. The duration of time between arrival and discharge is a random variable referred to as the Length of Stay (LoS). We assume an infinite horizon problem and develop a Markov Decision Process (MDP) to solve this stochastic problem so as to minimise the total expected long-run cost per unit time. Here, without loss of generality we assume that the index d=0,1,2,… corresponds to day d, but note that this could denote some other time interval of interest such as an 8-hour time block. Patient assignment is a key process in the management of hospitals, whereby bed managers allocate inbound patients to appropriate wards, rooms and beds. These decisions are made by teams of ward and bed managers at intermittent times throughout the day, using their experience and understanding of the natural dynamics of the hospital. From 2018 to 2019, the Australian hospital system recorded 11.5 million hospitalisations, and 8.4 million emergency department presentations, resulting in 365,000 emergency admissions <cit.>. 
This corresponded to 31,500 patient assignment decisions made daily across Australia, with 1000 of these being unplanned <cit.>. These decisions are not trivial, since complications can arise due to variations in hospital policy and the highly stochastic and complex nature of healthcare. Potential patients can arrive at any time of day, with a variety of conditions and specific needs. Hospital policy may affect where patients can be assigned, on the basis of a patient's gender or to comply with prescribed healthcare standards. Managers may also evaluate whether it is best to transfer patients between wards. Because of these complicating factors it can be unclear if the assignments made by bed managers are optimal. The importance of maintaining good patient flow cannot be understated. Carter, Puch and Larson in their literature review of emergency department (ED) crowding found that ED overcrowding has a significant positive correlation with patient mortality and with patients leaving the hospital untreated <cit.>. Morley et al. in their literature review also found an increase in patient mortality, as well as a higher exposure to error, poorer patient outcomes and increased patient length of stay, both in the ED, and in the ward to which a patient is eventually assigned <cit.>. They also noted that the inability to quickly assign patients from the ED to an inpatient bed, was a significant contributor to overcrowding. Regarding overcrowding in non-emergency wards, there is a long standing debate on what level of hospital occupancy is too high. Bagust et al. <cit.> found in their simulation-based approach that hospital occupancy above 85% created discernible risk, while occupancy above 90% caused regular bed crises. This figure has since been adopted by many as a standard benchmark for hospital occupancy <cit.>. However, multiple other authors <cit.> have emphasized that adopting 85% as the figure at which hospitals operate most efficiently, misinterprets Bagust et al. <cit.>, as this figure only applies to the specific example in <cit.>. Further, Bain et al. <cit.> state that using any single figure as a target for average occupancy is overly simplistic. Cummings et al. <cit.> note that while the results of <cit.> have been misinterpreted, there is a positive correlation between hospital occupancy and emergency department delays. Solutions to these problems require improvements in the management of patient flow. Howell et al. <cit.> found that by assigning additional resources to the tasks of bed management and patient assignment, ED length of stay was reduced by an average of 98 minutes for admitted patients. Clearly, tangible benefits are made to patients if they are assigned correctly. We call the mathematical formulation of patient assignment, the Patient Assignment Scheduling (PAS) problem, where we attempt to find optimal assignments of all patients as they arrive to the hospital. This problem was described by Demeester et al in <cit.>, Bilgin et al. in <cit.>, and Vancroonenburg et al. in <cit.>. Demeester et al. <cit.> define a useful approach of hard and soft constraints. Patient assignments that violate hard constraints are infeasible, while assignments that violate soft constraints incur some cost, which contribute to the objective function. An example of a hard constraint is that two patients cannot be assigned to the same bed, while an example of a soft constraint is that a patient should be assigned to the ward that corresponds to the their condition. 
In their model, the variable for patient length of stay (LoS) used is deterministic, and so too is the cost of assigning a patient. This approach ignores the inherent stochasticity of a hospital system, but provides a useful framework for building upon. Bilgin et al. <cit.> and Vancroonenburg et al. in <cit.> use a very similar approach, defining a set of constraints to form an objective function, but each adding their own novel idea. Bilgin et al. <cit.> combine the PAS problem with a nurse scheduling problem, while Vancroonenburg et al. in <cit.> introduce parameters to study the effect of uncertainty and so their model is no longer fully deterministic. However, to truly capture the stochasticity of the system, consideration must be given to how the system evolves in time. The model must incorporate current assignment decisions having an impact on future assignments. Using a similar integer programming approach to <cit.>, Abera et al. <cit.> demonstrate how the PAS problem can be solved in a stochastic scenario. They consider first that a patient's LoS is a random variable that may take any value on some distribution, and is unknown at the time of decision making. They then consider that patient arrivals to the system are also random, meaning that the cost of making any given assignment is stochastic. That is, the cost of an assignment depends on how long a patient stays, and what types of patients will arrive next. To evaluate the assignment cost, and hence the optimal assignment, Abera et al. <cit.> simulate the system several times to determine the expected cost over some planning horizon. The planning horizons used as examples in <cit.> are 14, 28 and 56 days. One shared result that is found in <cit.>, is that the integer linear program of any non-trivial size problem, cannot be solved explicitly. That is, it was not achievable to find the exact optimal solution. Demeester et al. <cit.> found in their first example of a hospital with .17ex∼120 beds, an optimal solution was not found after a whole week of computation, which is an unacceptable amount of time for assigning patients. To overcome this, various heuristics were required in all of the above work. The authors of the papers listed above all begin by using a neighbourhood search technique. That is, starting with some initial solution, and searching `neighbouring' solutions to see if there is an improvement. The exact methodology of how the neighbourhood is defined and searched, differs between authors. Demeester et al. <cit.> use a Tabu search, Abera et al. <cit.> use simulated annealing, while Bilgin et al. <cit.> use a hyper-heuristic approach (heuristics for selecting heuristics). Regardless of the approach, it is clear that heuristics are a necessity for solving problems of any realistic size. While the formulation of the PAS problem as an integer program is well researched, alternative formulations exist. Hulshof et al. <cit.> and Dai and Shi <cit.> build models related to the PAS problem, based on Markovian Decision Processes (MDP). Such an alternative formulation allows for an exploration of possible decisions that a hospital manager can make. Hulshof et al. <cit.> explore resource assignment in hospitals and decision making on how many patients should be assigned to different queues. Dai and Shi <cit.> explore the use of patient overflow (allowing wards to `overflow' into one another when full) and take a `higher-level' approach to decision making. 
Rather than making assignment decisions for each patient, decisions are made on whether to allow patient overflow for each time period. Similarly to the integer program approaches, the exact solutions of these MDPs cannot be obtained for large-size problems. Approximate Dynamic Programming (ADP) approaches are applied in <cit.> to overcome what is called the curse of dimensionality. The ADP approach, described by Powell in <cit.>, allows the cost in each state to be approximated by a set of features, which record some partial information about states, and corresponding weights. In addition to ADP, Dai and Shi <cit.> utilize a least-squares temporal difference (LSTD) learning algorithm to solve the problem. We now discuss the work of Heydar et al. <cit.>, which is the basis of the model proposed here. Heydar et al. <cit.> models the PAS as a continuous-time MDP, where the system is observed at every individual patient arrival or departure. The goal of the model is to identify the best possible patient assignment given information about the arriving patient and the occupancy of the hospital. Heterogeneity is included in the model by defining different patient classes and wards, and assuming that each patient class has an ideal ward, while other wards are unsuitable. In addition to assigning patients directly, Heydar et al. <cit.> also consider transfers of patients between wards to best fit patients, so as to minimise their appropriately defined objective function. However, Heydar et al. <cit.> note that transfers are positively correlated with patient mortality and LoS, as shown in <cit.>, meaning that transfers are given careful consideration and an additional cost in <cit.>. We adopt this feature as a key component in our model. Similarly to Abera et al. <cit.>, Heydar et al. <cit.> evaluate total expected cost over a finite horizon, meaning limited consideration is given to the state of the system in the long run. In order to address this gap, we generalise the model used in Heydar et al. <cit.> and extend it to an infinite horizon problem. In order to address the curse of dimensionality, we propose methodology based on ADP and the approximate policy iteration algorithm used by Dai and Shi in <cit.>. Furthermore, we assume that patient assignments are made at discrete time points such as at the start of each day. This is arguably a more realistic approach involving a group of patients to be assigned, which also means that assignment decisions are far more complex, as multiple assignments must be made, and an order of assignments must be decided. Finally, we demonstrate that the algorithm used by Dai and Shi in <cit.> can be adapted to our model to find near-optimal solutions to the PAS problem. The rest of this paper is structured as follows. In Section <ref> we describe the key components of our model, such as state space, transition probabilities, decision variables, constraints, and cost variables. In Section <ref> we give the details of the Approximate Dynamic Programming approach, and illustrate the application of the theory through our numerical examples in Section <ref>. This is followed by concluding remarks in Section <ref>. § MARKOV DECISION PROCESS Let ℐ = {1, 2, …, I} be the set of all patient types, where type i∈ℐ may correspond to the medical needs of the patients, their age, gender, and other aspects of inclusive care, see e.g. <cit.>. We assume that type-i patients arrive at a rate λ_i per day on day d. 
Further, we assume that there are K wards in the hospital, each with capacity m_k, k∈𝒦={1,… ,K}. The waiting room labelled (K+1) has capacity m_K+1. The set of wards that are suitable for type-i patients is denoted 𝒦(i), for some 𝒦(i)⊂𝒦. We denote by w(1,i)∈𝒦(i) the best ward for a type-i patient, by w(2,i)∈𝒦(i) the second-best ward for a type-i patient, and so on, and let 𝒲(i)=(w(1,i),…, w(K,i)) be the ordered sequence of wards in 𝒦(i) corresponding to type-i patients. For example, if 𝒦(i)={1,2,3}⊂𝒦={1,2,3,4,5} and 𝒲(i)=(3,1,2), then this means that ward 3 is the best ward for a type-i patient, ward 1 is the second best, ward 2 is the third best, and wards 4 and 5 are not suitable for type-i patients. In Section <ref> below, we construct a model based on a suitable Markov Decision Process to find the optimal policy π^* from the set of all policies π=(a^(s))_s∈𝒮 consisting of decisions a^(s) taken whenever state s in state space 𝒮 is observed, so as to minimise the long-run expected cost per unit time, E^* =min_π E^π =min_πlim_D→∞(1/D 𝔼( ∑_d=0^D-1 C(S_d,a^(S_d)) ) ), π^* = arg min_π E^π, where S_d∈𝒮 is a state of the system observed at the start of day d, and C(S_d,a^(S_d)) is the cost of a decision then taken assuming policy π is in place. In practice, we find π^* by solving Bellman's optimality equation for all states s∈𝒮, E^*+V(s) = min_a ∈𝒜(s){ C(s,a) + 𝔼( V(s' ) | (s,a) ) } = min_a ∈𝒜(s){ C(s,a) + ∑_s' ∈𝒮ℙ( s^' | (s,a) ) V(s') }, where V(s) is the minimum long-run average cost given current state s, and 𝒜(s) is the set of all decisions that are possible when state s is observed. We use Approximate Dynamic Programming methods <cit.> to solve Equation (<ref>) in real-sized problems, in which the numbers of states and decisions are intractable, as noted in <cit.>. §.§ Model Consider a discrete-time Markov chain {S_d:d=0,1,…}, where S_d= ( [N_k,i]_𝒦×ℐ, [Q_i]_1 ×ℐ)∈𝒮 is the state of the system at the start of day d=0,1,2,…, recording N_k,i, the number of type-i patients in ward k, and Q_i, the number of newly arrived type-i patients who are yet to be assigned to the wards. The state space 𝒮 of the process is given by, 𝒮 = {( [n_k,i]_𝒦×ℐ, [q_i]_1 ×ℐ): n_k,i≥ 0,q_i≥ 0, ∑_i=1^I n_k,i≤ m_k, ∑_i=1^Iq_i ≤ m_K+1}, with possibly additional constraints on the total number of accepted arrivals, as discussed below. §.§ Decisions Suppose that we observe state s=( [n_k,i]_𝒦×ℐ, [q_i]_1 ×ℐ)∈𝒮 at the start of a day and choose a suitable decision a∈𝒜(s) from the set of available decisions 𝒜(s), such that a=(x_k,i,y_k,ℓ,i)_i∈ℐ;k,ℓ=1,… K, where * x_k,i is the number of type-i newly arrived patients to assign to ward k, and, * y_k,ℓ,i is the number of type-i patients to be transferred from ward k to ward ℓ. Each decision should satisfy the following sets of constraints. First, all newly arrived type-i patients have to be assigned to some ward, and so ∑_k=1^K x_k,i= q_i, ∀ i ∈ℐ. Next, the total number of patients in ward k after the decision, denoted n^(a)_k,∙ and given by, n^(a)_k,∙ = ∑_i=1^In^(a)_k,i, n^(a)_k,i = n_k,i + x_k,i + ∑_ℓ = 1 ℓ k^K( y_ℓ,k,i - y_k,ℓ,i), where n^(a)_k,i is the total number of type-i patients in ward k after the decision, cannot exceed the capacity of the ward, and so, n^(a)_k,∙≤ m_k, ∀ k ∈𝒦. 
Further, we may only transfer the available patients, which gives, ∑_ℓ = 1 ℓ k^K y_k,ℓ,i≤ n_k,i, ∀ k ∈𝒦, ∀ i ∈ℐ, and if a transfer occurs, it should be in one direction, that is, I{y_k,ℓ,i≠ 0}+I{y_ℓ,k,i≠ 0} ≤ 1,∀ k,ℓ∈𝒦, ∀ i ∈ℐ, where I{·} is an indicator function taking value 1 if the statement in the brackets is true, and 0 otherwise, which ensures that only one of y_k,ℓ,i and y_ℓ,k,i takes a nonzero value. §.§ Transition probabilities Now, given decision a and current state s ∈𝒮, we define the following key random variables: * Z_k,i, the number of type-i patients who depart from ward k during one day; * Z=∑_k=1^K∑_i=1^I Z_k,i, the total number of departures; * B=∑_k=1^Km_k-Z , the total number of available beds after departures; * Q= ∑_i=1^I Q_i, the total number of arrivals; which we use below to determine the transition probabilities of the process {S_d:d=0,1,…}. The distribution of these variables depends on (s,a), as we discuss below. Assuming state s=( [n_k,i]_𝒦×ℐ, [q_i]_1 ×ℐ) and decision a = (x_k,i,y_k,l,i)_i∈ℐ;kℓ=1,… K, the resulting post-decision state is s^(a) = ( [n^(a)_k,i]_𝒦×ℐ) ≡( [n^(a)_k,i]_𝒦×ℐ,[0]_1 ×ℐ) such that, n^(a)_k,i= n_k,i + x_k,i+ ∑_ℓ = 1 ℓ k^K(y_ℓ,k,i- y_k,ℓ,i) , ∀ k ∈𝒦, ∀ i ∈ℐ. Next, the process transitions from state s^(a) on day d to some state s' = ( [n'_k,i]_𝒦×ℐ, [q'_i]_1 ×ℐ) on day (d+1), for d=0,1,2,…, with probability given by ℙ( s^' | (s,a) ) = ℙ( (Z_k,i=z_k,i)_k=1,…,K;i=1,…,I (Q_i=q'_i)_i=1,…,I) = ℙ( (Z_k,i=z_k,i)_k=1,…,K;i=1,…,I) ×ℙ( (Q_i=q'_i)_i=1,…,I | (Z_k,i=z_k,i)_k=1,…,K;i=1,…,I) = ( ∏_k=1^K∏_i=1^Iℙ(Z_k,i=z_k,i)) ×ℙ( (Q_i=q'_i)_i=1,…,I | (Z_k,i=z_k,i)_k=1,…,K;i=1,…,I) when the condition n'_k,i=n^(a)_k,i-z_k,i is met for all k∈𝒦, i∈ℐ; and ℙ( s^' | (s,a) )=0 otherwise. We apply conditional probabilities ℙ( (Q_i=q'_i)_i=1,…,I | (Z_k,i=z_k,i)_k=1,…,K;i=1,…,I) in (<ref>) since the number of accepted type-i arrivals may depend on the number of available beds after departures, and consider the following alternative modelling approaches: * Suppose that there is no restriction on the total number of arrivals, as in Dai and Shi <cit.>. Then Q_i follows Poisson distribution, Q_i∼ Poi(λ_i), and so ℙ( (Q_i=q'_i)_i=1,…,I | (Z_k,i=z_k,i)_k=1,…,K;i=1,…,I) = ∏_i=1^I ℙ(Q_i = q_i) = ∏_i=1^I (λ_i)^q_i e^-λ_i(q_i)!. In this approach, arriving patients are not lost to the system, however the number of patients still waiting to be assigned and their total waiting times could be very large (and exceed many days). This may not be a realistic assumption since in practice the capacity of the system is limited (due to the availability of beds and staffing) and there are limits on the accepted maximum waiting times for various patient types (e.g. 24-hour limit for some types of emergency patients). * Suppose that the total number of arrivals Q may not exceed the total number of available beds. Then, with b=∑_k=1^K m_k- ∑_k=1^K ∑_i=1^I z_k,i recording the total number of available beds after the departures, we have, ℙ(Q = q | B=b) = (λ)^qe^-λ(q)!, q=0,1,2… ,b-1, ℙ(Q = b | B=b) = 1- ∑_q=0^b-1(λ)^qe^-λ(q)!, and, with q=∑_i=1^I q_i, ℙ( (Q_i=q'_i)_i=1,…,I | (Z_k,i=z_k,i)_k=1,…,K;i=1,…,I) = ℙ( (Q_i=q'_i)_i=1,…,I | B=b ) = ℙ( (Q_i=q'_i)_i=1,…,I | Q=q,B=b ) ×ℙ(Q = q | B=b) = ℙ( (Q_i=q'_i)_i=1,…,I | Q=q ) ×ℙ(Q = q | B=b) = q!/∏_i=1^I q_i!×∏_i=1^I ( λ_i/λ)^q_i×ℙ(Q = q | B=b) , since the conditional distribution of (Q_i=q'_i)_i=1,…,I given Q=q is multinomial, where λ_i/λ is the probability that an arriving patient is of type i. 
In this approach, patients arriving to a system with no free beds, are redirected to other health systems. Such approach may be more realistic and can be used to determine the rate of patients that are redirected to help decide a suitable size of the hospital system. To model random variables Z_k,i, we assume that departures are independent of one another and so Z_k,i follows Binomial distribution, Z_k,i∼ Bin(n_k,i,p_k,i), with ℙ(Z_k,i=z) = n_k,iz (p_k,i)^z (1-p_k,i)^n_k,i-z, z=0,1,…,n_k,i, p_k,i = ℙ(RLOS_k,i≤ 1 ), where RLOS_k,i is a random variable recording the remaining length of stay of type-i patient that is in ward k at the start of the day. That is, for each type-i patient in ward k, we perform a Bernoulli trial with probability of success p_k,i to determine if the patient leaves the system on the following day or not. §.§ Costs There is an immediate cost C(s,a) associated with decision a given current state s ∈𝒮, which may include costs of assignment, transfer, and patients being in nonprimary wards, defined as follows. * Assignment cost: ∑_k=1^K∑_i=1^I x_k,i× c^(σ)_k,i. * Transfer cost: ∑_k=1^K∑_ℓ=1^K∑_i=1^I y_k,ℓ,i× c^(t)_k,ℓ,i. * Penalty cost for being in a nonprimary ward: ∑_k=1^K∑_i=1^I n_k,i× c^(p)_k,i. The total immediate cost of decision a=(x_k,i,y_k,l,i)_i∈ℐ;k,ℓ=1,… K given state s=( [n_k,i]_𝒦×ℐ, [q_i]_1 ×ℐ) is then, C(s,a) = ∑_k=1^K∑_i=1^I{ x_k,i× c^(σ)_k,i + ∑_ℓ=1 ℓ k^K y_k,ℓ,i× c^(t)_k,ℓ,i + n^(a)_k,i× c^(p)_k,i} = ∑_k=1^K∑_i=1^I{ x_k,i× c^(σ)_k,i + ∑_ℓ=1 ℓ k^K y_k,ℓ,i× c^(t)_k,ℓ,i + ( n_k,i + x_k,i + ∑_ℓ=1 ℓ k^K (y_ℓ,k,i - y_k,ℓ,i) ) × c^(p)_k,i} . By above, for all s=( [n_k,i]_𝒦×ℐ, [q_i]_1 ×ℐ), Eq. (<ref>) can be written as, E^*+V( s ) = min_a ∈𝒜(s){ C(s,a) + ∑_s'=( [n^'_k,i]_𝒦×ℐ, [q'_i]_1 ×ℐ) ∈𝒮ℙ( s^' | (s,a) ) V( s' ) }, where ℙ( s^' | (s,a) ) and C( s, a ) are given by Eqs. (<ref>) and (<ref>), respectively. § APPROXIMATE DYNAMIC PROGRAMMING APPROACH Policy Iteration is a standard method of solving Equation (<ref>) when the size of the state space 𝒮 is not too large, see Algorithm <ref> in Appendix <ref>, also refer to Puterman <cit.> for further details. However, Policy Iteration is not suitable here, and so we apply Approximate Dynamic Programming methods introduced by Powell in <cit.> to address the curses of dimensionality in Eq. (<ref>), as follows. * First, similar to Heydar et al. <cit.>, we apply basis functions ϕ_f(s), f∈ℱ, to record some suitable information about states s, referred to as state features, and then the approximation, V(s)≈∑_ f ∈ℱϕ_f(s) θ^(f)_n = (s)^T _n, where (s)=[ϕ_f(s)]_f∈ℱ is the vector of features, and _n=[θ^(f)_n]_f∈ℱ is the vector of corresponding weights θ^(f)_n, evaluated at the n-th iteration of an algorithm. * Next, we apply the Approximate Policy Iteration presented by Dai and Shi in <cit.>, in which the weights θ^(f)_n are recursively updated in an iteration that is repeated N times. Each iteration n involves a simulation of M states for some large M. We summarise this approach in Algorithms <ref>–<ref> in Appendix <ref>. We note that the Markov chains applied in our models are irreducible and positive recurrent, and so the algorithms are guaranteed to converge as M→∞  <cit.>. Consider our Model in Section <ref>. Suppose that the vector of features of state s=( [n_k,i]_𝒦×ℐ, [q_i]_1 ×ℐ) is the vector s⃗, and so a vector containing all information about state s. 
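To fix ideas, the post-decision occupancy n^(a)_k,i, the immediate cost C(s,a), and the linear value-function approximation above translate directly into code. The following is a minimal numpy sketch written under the conventions of this section; the array shapes and argument names are our own, and the transfer-cost array is assumed to have a zero diagonal so that the (unused) entries y[k,k,i] contribute nothing.

```python
import numpy as np

def post_decision_counts(n, x, y):
    """n[k, i]    : type-i patients currently in ward k
       x[k, i]    : new type-i arrivals assigned to ward k
       y[k, l, i] : type-i patients transferred from ward k to ward l
       Returns n_a[k, i], the post-decision occupancy n^(a)."""
    inflow = y.sum(axis=0)       # transfers into each ward:  sum_l y[l, k, i]
    outflow = y.sum(axis=1)      # transfers out of each ward: sum_l y[k, l, i]
    return n + x + inflow - outflow

def immediate_cost(n, x, y, c_assign, c_transfer, c_penalty):
    """One-day cost C(s, a): assignment + transfer + nonprimary-ward penalty."""
    n_a = post_decision_counts(n, x, y)
    return (np.sum(x * c_assign)
            + np.sum(y * c_transfer)
            + np.sum(n_a * c_penalty))

def approximate_value(features, theta):
    """Linear value-function approximation V(s) ~ phi(s)^T theta."""
    return float(np.dot(features, theta))
```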
We note that to compute the expression in Line 6 of Algorithms <ref> in Appendix <ref> and in Line 4 in Algorithm <ref> in Appendix <ref>, it is then convenient to apply the following equivalence. Assuming s=( [n_k,i]_𝒦×ℐ, [q_i]_1 ×ℐ), and with b=∑_k=1^K m_k-∑_k=1^K∑_i=1^I n^(a)_k,i, we have ∑_s^'∈𝒮ℙ( s^' | (s,a) ) (s^') = 𝔼((s^') | (s,a)) = ∑_k=1^K∑_i=1^I𝔼(N_k,i^')θ_k,i + ∑_i=1^I𝔼(Q_i^')θ_i = ∑_k=1^K∑_i=1^I (n^(a)_k,i-𝔼(Z_k,i))θ_k,i + ∑_i=1^I𝔼(𝔼(Q_i^' | Q ))θ_i = ∑_k=1^K∑_i=1^I n^(a)_k,i(1-p_k,i)θ_k,i + ∑_i=1^Iλ_i/λ ×𝔼(Q)θ_i , where n^(a)_k,i is given by Eq. (<ref>), 𝔼(Z_k,i)=n^(a)_k,ip_k,i (binomial mean), N_k,i^' is a random variable recording the number of type-i patients in ward k and Q_i^' a random variable recording the number of type-i patients waiting to be assigned when state s^' is observed, and Q=∑_i=1^I Q_i^' is the total number of arrivals. If the number of arrivals is unrestricted with m_K+1=∞, then 𝔼(Q)=λ. Alternatively, if the number of arrivals is restricted so that we only allow arrivals that can be assigned, which can be written as, ∑_k=1^K∑_i=1^I n_k,i^'+∑_i=1^I q_i^'≤∑_k=1^K m_k =m, then we have the following approach for evaluating 𝔼(Q). Let Z be the random variable recording the total number of departures, Z = ∑_k=1^K∑_i=1^I Z_k,i, taking values z=0,…,z_max, with z_max=∑_k=1^K ∑_i=1^I n_k,i^(a). Let N be the number of available beds for the new arrivals, given by N = m-∑_k=1^K∑_i=1^I N_k,i^'=m-z_max+Z, taking values n=m-z_max,…,m. Therefore, 𝔼( Q | Z=z ) = 𝔼( Q | N=m-z_max+z ) = ∑_k=0^m-z_max+z k×λ^k/k!e^-λ + ∑_k=m-z_max+z+1^∞ (m-z_max+z) ×λ^k/k!e^-λ = ∑_k=0^m-z_max+z k×λ^k/k!e^-λ + (m-z_max+z) ( 1- ∑_k=0^m-z_max+zλ^k/k!e^-λ) = ∑_k=0^m-z_max+z-1λ×λ^k/k!e^-λ + (m-z_max+z) ( 1- ∑_k=0^m-z_max+zλ^k/k!e^-λ) and so 𝔼(Q) = ∑_z=0^z_max𝔼( Q | Z=z ) ℙ(Z=z) = ∑_z=0^z_max( ∑_k=0^m-z_max+z k ×λ^k/k!e^-λ + (m-z_max+z) ( 1- ∑_k=0^m-z_max+zλ^k/k!e^-λ) ) ℙ(Z=z) = ∑_z=0^z_maxℙ(Z=z) (m-z_max+z) + ∑_z=0^z_max( ∑_k=0^m-z_max+z-1λ×λ^k/k!e^-λ - (m-z_max+z) ( ∑_k=0^m-z_max+zλ^k/k!e^-λ) ) ℙ(Z=z) = m-z_max+𝔼(Z) + ∑_z=0^z_max( λ -(m-z_max+z) ) ( ∑_k=0^m-z_max+z-1λ^k/k! e^-λ) ℙ(Z=z) - ∑_z=0^z_max (m-z_max+z) ( λ^m-z_max+z/(m-z_max+z)! e^-λ) ℙ(Z=z) , where 𝔼(Z)=∑_k=1^K∑_i=1^I n^(a)_k,ip_k,i. Further, in order to compute ℙ(Z=z) in (<ref>), we note that Z=∑_k=1^K ∑_i=1^I Z_k,i I(n_k,i^(a)≠0) is a sum of independent binomial random variables Z_k,i∼ Bin(n_k,i^(a),p_k,i) such that n_k,i^(a)≠0. A method for computing the distribution of a sum of independent binomial random variables was discussed by Butler and Stephens in <cit.>. Below, we suggest an alternative method, which relies on simple matrix multiplications and standard theory of Markov chains. Without loss of generality, suppose that Z=∑_i=1^M Z_i is a random variable such that Z_i∼ Bin(n_i,p_i) are independent Binomial random variables with some parameters n_i>0 and 0<p_i<1, for i=1,… M. To compute ℙ(Z=z), it is convenient to construct a discrete-time Markov chain corresponding to the Bernoulli trials n_1,n_2,…,n_M such that a success at each trial results in the chain moving one step to the right, and failure results in the chain remaining at the original state. Then, the distribution of the chain after all n_1+…+n_M trials will give the distribution of Z. So, we consider a discrete-time Markov chain {(J(t)):t= 0,1,2,…,z_max} terminating at time z_max=∑_i=1^M n_i, with state space 𝒮={0,…,z_max} and an initial state J(0)=0, which evolves as follows. 
We perform the first n_1 Bernoulli trials at times t=1,…,n_1 and let ℙ(J(t)=J(t-1)+1)=p_1, ℙ(J(t)=J(t-1))=1-p_1. Then, the distribution of the chain after the first n_1 trials, and so at time t=n_1, is given by (n_1) = [ (0) 0_1× n_1] P^(1), where (0)=[α(0)_j]_j=0 with α(0)_0=1, and P^[1]=[P_j,j'^[1]]_j,j'=0,1,…,n_1 such that P_j,j'^[1]=ℙ(Z_1=j'-j). Next, we repeat this and perform n_i Bernoulli trials at times t=n_1+…+n_i-1+1,…,n_1+…+n_i and let ℙ(J(t)=J(t-1)+1)=p_i, ℙ(J(t)=J(t-1))=1-p_i, for each i=2,…,M. Then, the distribution after additional n_i trials is given by the recursion (n_1+…+n_i) = [ (n_1+…+n_i-1) 0_1× n_i] P^(i), where P^[i]=[P_j,j'^[i]]_j,j'=0,1,…,n_1+…+n_i, P_j,j'^[i]=ℙ(Z_1=j'-j). To compute P^(i) we apply the formula P^(i) = ( A^(i))^n_i, where A^(i) = [A^(i)]_j,j'=0,1,…,n_1+…+n_i = [ [ 1-p_i p_i 0 … … 0; 0 1-p_i p_i … … 0; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; 0 … 0 … 1-p_i p_i ]]. It follows that, for j=0,1,…,z_max, ℙ(Z=j) = [(n_1+…+n_M)]_j = [ (n_1+…+n_M-1) 0_1× n_M] P^(M). This method for computing the probabilities ℙ(Z=z) for a sum Z=∑_i=1^M Z_i of independent nonnegative random variables Z_i that take values in finite sets, can be applied for any desired discrete distributions of Z_i, by replacing (<ref>) with a suitable formula for P^(i). Alternatively, ℙ(Z=z) can be computed by numerically inverting the probability generating function G_Z(s) of the random variable Z, which here is given by G_Z(s) = ∏_i=1^M G_Z_i(s) = ∏_i=1^M (1-p_i+sp_i)^n_i, using a suitable inversion algorithm, for example see Abate and Whitt <cit.>. § NUMERICAL EXAMPLES Here, we construct examples to illustrate the theory and the application of algorithms discussed above. First, in Example <ref>, we construct a Markov model with a small state space, and demonstrate that the approximate solution converges to the exact optimal solution (which we obtained by applying standard dynamic programming methods). Next, in Example <ref>, we construct a realistically-sized Markov model with assumptions driven by practical considerations driven by the conditions of real-world hospitals, and demonstrate the application potential of our methodology. §.§ Small-sized example We consider the following simple example to illustrate the application of our Model in Section <ref>. As the size of the state space 𝒮 in this example is small enough to allow for the application of standard dynamic programming methods, we apply Algorithm <ref> in Appendix <ref> to find the exact solution and then compare it with the approximation obtained using Algorithms <ref>–<ref>. Consider a healthcare facility with K = 2 wards, with capacity m_k = 1 for each ward k, and I = 2 patient types. 
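Before continuing with this example, we note that the matrix-based recursion for ℙ(Z=z) described above translates directly into a few lines of numpy. The sketch below (variable names are ours) follows the block-by-block construction and checks the result against a direct convolution of binomial probability mass functions.

```python
import numpy as np
from scipy.stats import binom

def pmf_sum_of_binomials(ns, ps):
    """P(Z = z) for Z = sum_i Z_i with independent Z_i ~ Bin(ns[i], ps[i]),
    via the Markov-chain / matrix-multiplication scheme: each Bernoulli trial
    moves the chain one state to the right on success."""
    z_max = int(sum(ns))
    alpha = np.zeros(z_max + 1)
    alpha[0] = 1.0                       # chain starts in state 0
    offset = 0                           # trials performed so far
    for n_i, p_i in zip(ns, ps):
        size = offset + n_i + 1
        # one-step matrix A^(i): stay with prob 1-p_i, move right with prob p_i
        A = np.diag(np.full(size, 1.0 - p_i)) + np.diag(np.full(size - 1, p_i), k=1)
        A[-1, -1] = 1.0                  # top state is absorbing within the block
        P_i = np.linalg.matrix_power(A, n_i)          # n_i Bernoulli trials
        alpha[:size] = np.concatenate([alpha[:offset + 1], np.zeros(n_i)]) @ P_i
        offset += n_i
    return alpha

# sanity check against a direct convolution of binomial pmfs
ns, ps = [3, 2, 4], [0.2, 0.5, 0.1]
direct = np.array([1.0])
for n_i, p_i in zip(ns, ps):
    direct = np.convolve(direct, binom.pmf(np.arange(n_i + 1), n_i, p_i))
assert np.allclose(pmf_sum_of_binomials(ns, ps), direct)
```

With this in hand, we return to the small example introduced above.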
Assuming that the maximum capacity of the waiting area is m_3=2, that is ∑_i=1^I q_i≤ m_3, and that the arrivals are only accepted if there is available capacity in the system according to ∑_k=1^K∑_i=1^I n_k,i+∑_i=1^I q_i ≤∑_k=1^K m_k =m, we define the state space of the system as follows: 𝒮 = {( [ n_1,1 n_1,2; n_2,1 n_2,2 ], [ q_1 q_2 ]) : ∑_j=1^2 n_1,j≤ 1, ∑_j=1^2 n_2,j≤ 1, ∑_i=1^2 + ∑_j=1^2 n_i,j+∑_i=1^2 q_i≤ 2 } = { 1,2,3,… ,22 }, with 1≡( [ 0 0; 0 0 ], [ 0 0 ]), 2≡( [ 0 0; 0 0 ], [ 1 0 ]), 3≡( [ 0 0; 0 0 ], [ 0 1 ]), 4≡( [ 0 0; 0 0 ], [ 2 0 ]), 5≡( [ 0 0; 0 0 ], [ 1 1 ]), 6≡( [ 0 0; 0 0 ], [ 0 2 ]), 7≡( [ 1 0; 0 0 ], [ 0 0 ]), 8≡( [ 1 0; 0 0 ], [ 1 0 ]), 9≡( [ 1 0; 0 0 ], [ 0 1 ]), 10≡( [ 0 1; 0 0 ], [ 0 0 ]), 11≡( [ 0 1; 0 0 ], [ 1 0 ]), 12≡( [ 0 1; 0 0 ], [ 0 1 ]), 13≡( [ 0 0; 1 0 ], [ 0 0 ]), 14≡( [ 0 0; 1 0 ], [ 1 0 ]), 15≡( [ 0 0; 1 0 ], [ 0 1 ]), 16≡( [ 0 0; 0 1 ], [ 0 0 ]), 17≡( [ 0 0; 0 1 ], [ 1 0 ]), 18≡( [ 0 0; 0 1 ], [ 0 1 ]), 19≡( [ 1 0; 1 0 ], [ 0 0 ]), 20≡( [ 0 1; 1 0 ], [ 0 0 ]), 21≡( [ 1 0; 0 1 ], [ 0 0 ]), 22≡( [ 0 1; 0 1 ], [ 0 0 ]). Further, assume that ward i is the preferred ward for patient type i, for i=1,2, which is reflected by the expected values of the LoS, which are lower when patient type i stays in ward i for the whole duration of their stay. To model this, we assume that probabilities p_k,i=ℙ(RLOS_k,i≤ 1 ) that patient type i that is in ward k today, leaves the hospital the next day, are given by, p=[p_k,i]_𝒦×ℐ = [ [ 1/5 1/4; 1/10 1/3 ]], and so the expected LoS for patient type i=1 is E(LoS_k,i)=5 days if they stay in ward k=1, or E(LoS_k,i)=10 if they stay in ward k=2, and some number between these two if the patient is transferred between the wards during their stay at the hospital. For patient type i=2 we have E(LoS_k,i)=3 days if they stay in ward k=2, or E(LoS_k,i)=4 if they stay in ward k=1, and some number between these two if the patient is transferred between the wards during their stay at the hospital. We consider two decisions, * a=1, assign arrived patients to their best available wards without transferring the patients between the wards, and * a=2, assign arrived patients to their best available wards and allow transferring patients if required. As example, given state s=15≡( [ 0 0; 1 0 ], [ 0 1 ]), decision a=1 with transform it into post decision state s^(a)= ( [ 0 1; 1 0 ]), and then the process will move to state s=10≡( [ 0 1; 0 0 ], [ 0 0 ]) if patient type 1 departs from ward 2, or to state s=13≡( [ 0 0; 1 0 ], [ 0 0 ]) if patient type 2 departs from ward 1. Alternatively, decision a=2 will transform s=15 into post-decision state s^(a)= ( [ 1 0; 0 1 ]), and then the process will move to state s=16≡( [ 0 0; 0 1 ], [ 0 0 ]), if patient type 1 departs from ward 1, or to state s=7≡( [ 1 0; 0 0 ], [ 0 0 ]), if patient type 2 departs from ward 2. No arrivals to waiting area are permitted if they cannot be assigned, that is, when the system is already full. We assume the following cost parameters: * Assignment cost to ward k per type-i patient: c^(σ)_k,i=c^(σ)=1 for all k, i. * Transfer cost from ward k to ward ℓ per type-i patient: c^(t)_k,ℓ,i=c^(t)=1.1 for all k,ℓ, i. * Penalty cost for being in a nonprimary ward k per type-i patient: c^(p)_k,i=c^(p)=0.2 for all k, i. We apply policy iteration summarised in Algorithm <ref> in Appendix <ref> to find the exact solution. To do so, we first write explicit expressions for the probability matrices P^(a) and cost vectors C^(a) for a=1,2. The details of this derivation are given in Appendix <ref>. 
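As a complement to the exact derivation in the Appendix, the computations of ℙ(Z=z) and 𝔼(Q) described in the previous section can be carried out numerically. The following Python sketch is an illustration under our own naming, with a placeholder arrival rate, and is not the code used for the reported results; it implements the Markov-chain construction for the distribution of the total number of departures Z, and the evaluation of 𝔼(Q) when admissions are capped by the number of free beds.

import numpy as np
from math import exp, factorial

def sum_of_binomials_pmf(ns, ps):
    # P(Z=z) for Z = sum of independent Bin(n_i, p_i): each Bernoulli trial moves
    # a Markov chain one step to the right on success, so the distribution of the
    # chain after all trials is the distribution of Z.
    alpha = np.array([1.0])                                   # chain starts in state 0
    for n_i, p_i in zip(ns, ps):
        size = len(alpha) + n_i
        A = np.diag(np.full(size, 1.0 - p_i)) + np.diag(np.full(size - 1, p_i), k=1)
        alpha = np.concatenate([alpha, np.zeros(n_i)]) @ np.linalg.matrix_power(A, n_i)
    return alpha                                              # alpha[z] = P(Z = z)

def expected_admissions(lam, z_pmf, m, z_max):
    # E(Q) when the number of admitted arrivals is capped at the number of free
    # beds c = m - z_max + z, i.e. Q | Z=z is distributed as min(Poisson(lam), c).
    def e_min_poisson(c):
        pmf = [exp(-lam) * lam**k / factorial(k) for k in range(c + 1)]
        return sum(k * pk for k, pk in enumerate(pmf)) + c * (1.0 - sum(pmf))
    return sum(e_min_poisson(m - z_max + z) * pz for z, pz in enumerate(z_pmf))

# Example usage with the p_{k,i} of this example and one occupied bed per ward,
# e.g. n^(a)_{1,1} = n^(a)_{2,2} = 1; lam below is a placeholder total arrival rate.
z_pmf = sum_of_binomials_pmf([1, 1], [1/5, 1/3])
print(expected_admissions(lam=1.0, z_pmf=z_pmf, m=2, z_max=2))

The same probabilities ℙ(Z=z) could alternatively be obtained by numerically inverting the probability generating function ∏_i (1-p_i+sp_i)^n_i, as noted above.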
From policy iteration we identify the optimal policy in our states of interest to be a=2, `assign the waiting patients to their most preferred ward by transferring patients currently in the ward'. We determine that this optimal policy incurs a long-run average cost per stage of E^*=0.4098 units. We now present the results of the approximate policy iteration algorithm, summarised in Algorithms <ref> and <ref>. For simulations, we assumed the starting value 1× 10^-4 for each weight in θ. We find that the estimated value of E converges to the optimal value E^*=0.4098 for large values of M, after a small number of iterations. As expected, the values of θ and E converge more tightly as M, the number of states simulated, increases, as shown in Figure <ref>. Second, we observe that in this example, each component of θ converges to a distinct value, as shown in Figure <ref>. This is because the parameters p and λ are asymmetric, so we do not expect patients of different types i to be weighted equally. §.§ Realistically-sized example We consider a realistically-sized example of our Model in Section <ref>. The structure of the model here is influenced by practical considerations within real-world hospitals, in which patients are allocated to their primary (preferred) wards based on their types (medical needs), and when these wards are full, suitable policies are applied to decide which nonprimary ward a patient should be allocated to instead. Furthermore, we assume that patients arriving at a hospital when the system is full (all wards are full) are redirected to another facility. Therefore, the number of daily arrivals is unrestricted; however, the number of arrivals admitted to the hospital is bounded by the system capacity. We fitted the parameters of the arrival and length-of-stay distributions to data, and focused on the application of our model to the analysis of how decision making affects the number of patients in nonprimary wards. Similarly to Dai and Shi <cit.>, assume five types of patients, i=1,…,5 (I=5): Orthopedic (Orth), Cardio (Card), Surgery (Surg), General Medicine (GenMed), and Other Medicine (OthMed); and corresponding wards, k=i=1,…,5 (K=5), most suitable to these patient types, respectively. To estimate the key parameters of our model, we applied Matlab and performed the analysis of five-year patient flow data obtained from an Australian tertiary referral hospital, see Table <ref>. In order to focus on the impact of decision making on the number of patients observed in nonprimary wards, we applied the following cost parameters: * Transfer cost from ward k to ward ℓ per type-i patient: c^(t)_k,ℓ,i=c^(t)=1.1 for all k, ℓ≠ k, i, and we assume a large value at k=ℓ (to avoid such decisions). * Penalty cost for being in a nonprimary ward k per type-i patient: c^(p)_k,i=c^(p)=0.2 for all k, i. (Here, we assumed the assignment costs c^(σ)=1 for each admitted or redirected patient, but have not included these costs in the optimisation, since all patients need to be admitted to some facility, and our focus is on the number of patients in nonprimary wards.) We assume that the capacity of the waiting area is unlimited with m_K+1=∞, and so the total number of arrivals ∑_i=1^I q_i≤ m_K+1 is unrestricted. However, the arrivals are only accepted if there is available capacity in the system according to ∑_i=1^I n'_k,i≤ m_k, where ( [n'_k,i]_𝒦×ℐ) is the post-decision state. Patients that cannot be admitted are redirected to another facility.
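To make the assumed dynamics concrete, the sketch below simulates one daily transition of this model in Python: a placeholder assignment rule produces the post-decision occupancy, departures are drawn from the binomial discharge model, and the next day's arrivals are Poisson. All names, the random seed and the placeholder decision rule are ours; the actual assignment rules a=1, 2, 3 are described below.

import numpy as np

rng = np.random.default_rng(0)

def simulate_day(n, q, assign, p, lam):
    # n: current occupancy matrix (K x I); q: waiting arrivals by type (length I)
    # assign: placeholder for a decision rule (e.g. a=1,2,3), returning the
    #         post-decision occupancy n^(a) with its row sums bounded by m_k
    # p: discharge probabilities p_{k,i} (K x I); lam: arrival rates lambda_i (length I)
    n_post = assign(n, q)                     # post-decision occupancy n^(a)
    departures = rng.binomial(n_post, p)      # Z_{k,i} ~ Bin(n^(a)_{k,i}, p_{k,i})
    n_next = n_post - departures              # patients remaining overnight
    q_next = rng.poisson(lam)                 # tomorrow's type-i arrivals
    return n_next, q_next

Repeating such transitions over many simulated days under each candidate decision rule yields the simulation output discussed below.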
The state space of the system is then 𝒮 = {( [n_k,i]_𝒦×ℐ, [q_i]_1 ×ℐ): n_k,i≥ 0, q_i≥ 0, ∑_i=1^I n_k,i≤ m_k }, while the set of post-decision states is 𝒮' = {( [n'_k,i]_𝒦×ℐ) : ∑_i=1^I n'_k,i≤ m_k } , with |𝒮'|=∏_k=1^K∑_g=0^m_k\binom{g+I-1}{I-1} =∏_k=1^K\binom{m_k+I}{I} by Heydar et al. <cit.>. Here, |𝒮'|=2.9544e+28. Next, we assume the following order of assignment, represented as a matrix O=[O_i,k], where O_i,k=1 means that ward k is the first choice for type i, O_i,k=2 is the second choice for type i, and so on, with O = [ [ k=1 k=2 k=3 k=4 k=5; i=1 1 5 2 3 4; i=2 3 1 2 4 5; i=3 2 3 1 5 4; i=4 3 5 4 1 2; i=5 3 5 4 2 1 ]]. For example, for the assignment of `Orth' patients, the considered order of wards is `Orth', then `Surg', then `GenMed', then `OthMed', and `Card' is the last choice (used when there are no other choices). This order is similar to the order of assignment in Dai and Shi <cit.>. Suppose that patient type represents the order of assignment, and so we assign patients in the order of their type, assigning type-1 patients first, then type-2 patients, and so on. We consider the following decisions, * a=1, assign arrived patients to their best available wards, in the order of their types, without transferring patients between the wards, and * a=2, 3, assign arrived patients to their best available wards, in the order of their types, and allow transferring patients if required, with the constraint of no more than y=4, 10 transfers in total, respectively. For example, if * q_1=7, q_2=3, q_i=0 for i=3,4,5 (there are 7 type-1 patients and 3 type-2 patients in the queue); * and n_1,1=11, n_1,2=4, n_1,i=0 for i=3,4,5 (there are 15 patients in ward 1, so it is full), then applying a=2 means that * 4 type-1 patients will be assigned to ward 1, while 3 type-1 patients will be assigned to their best available ward other than ward 1, * while 4 type-2 patients will need to be transferred from ward 1 to their best available ward (to make space for type-1 patients), * and finally, 3 type-2 patients will be assigned from the queue to their best available ward. More precisely, the assignments of patients corresponding to decisions a=1 and a=2, 3, are described in Algorithms <ref>-<ref> in Appendix <ref>, respectively. Next, we present the simulation output in Figures <ref>-<ref>, focusing on the impact of these decisions on the number of patients observed in nonprimary wards (and ignoring the costs of these decisions for now). The simulation output suggests that, under the model parameters assumed in Table <ref>, a policy in which we apply decision a=3 all the time results in a better performance of the system, in the sense that it reduces the number of patients in nonprimary wards when compared with applying decision a=1 or a=2 all the time. The performance of the system under decision a=3 is, however, only slightly better than under decision a=2. In order to evaluate the performance of the system under decisions a=1,2,3, we also need to consider the costs. Since the size of the state space is very large, we apply the Approximate Policy Iteration summarised in Algorithms <ref>–<ref> in Appendix <ref>.
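The size of the post-decision state space quoted above follows directly from the combinatorial formula. The short Python sketch below reproduces the calculation for a hypothetical capacity vector (the capacities are placeholders, not the fitted values in Table <ref>) and checks the identity ∑_g=0^m_k\binom{g+I-1}{I-1}=\binom{m_k+I}{I} used in the formula.

from math import comb, prod

I = 5
m = [30, 40, 35, 45, 38]    # hypothetical ward capacities m_1,...,m_5 (placeholders)

# |S'| = prod_k sum_{g=0}^{m_k} C(g+I-1, I-1) = prod_k C(m_k+I, I)
assert all(sum(comb(g + I - 1, I - 1) for g in range(m_k + 1)) == comb(m_k + I, I) for m_k in m)
size = prod(comb(m_k + I, I) for m_k in m)
print(f"|S'| = {float(size):.4e}")   # the paper reports 2.9544e+28 for its fitted capacities

With capacities of this order, |𝒮'| is around 10^28, which is why the exact policy iteration of the small example is not feasible here and the approximate algorithms are applied instead.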
We consider the vector of features ϕ(s) of state s=( [n_k,i]_𝒦×ℐ, [q_i]_1 ×ℐ) defined as ϕ(s) = [ [ n_1,1, ∑_i≠ 1n_1,i, … , n_5,5, ∑_i≠ 5n_5,i, q_1, … , q_5 ]], that is, for each ward k=1,…,5, n_k,k is the number of type-k patients in their primary ward k, and ∑_i≠ kn_k,i is the total number of other-type patients in that ward; and for each patient type i=1,…,5, q_i is the number of type-i patients in the queue waiting to be assigned. Then, by (<ref>), 𝔼(Q)=λ, and given state s and decision a, we have ∑_s^'∈𝒮ℙ( s^' | (s,a) ) ϕ(s^')^⊤θ = 𝔼(ϕ(s^')^⊤θ | (s,a)) = ∑_k=1^K∑_i=1^I n^(a)_k,i(1-p_k,i)θ_k,i + ∑_i=1^Iλ_iθ_i , which is the expression we use in our computations in Algorithms <ref>–<ref>. The output presented in Figures <ref>-<ref> demonstrates that the algorithm converged as expected. To analyse the impact of the algorithm on the performance of the system, we have then also simulated the evolution of a system in which the decision a(s) given current state s is chosen according to a(s) = arg min_a∈𝒜(s){ C(s,a) + ∑_k=1^K∑_i=1^I n^(a)_k,i(1-p_k,i)θ_k,i + ∑_i=1^Iλ_iθ_i }, using the parameters obtained from the output of the algorithm, and compared the output with the performance of the system under decisions a=1,2,3. Each of the simulations M=1,…,1000 was executed over a 5-year period. The output is presented in Figures <ref>-<ref> and Table <ref> below. We observe (Figure <ref>, Table <ref>) that under the near-optimal policy obtained by the application of this approach, the costs are minimised (costs are the lowest under the near-optimal policy, and are the highest under a=1), with mean costs: 6.8177 (a=1), 5.7643 (a=2), 5.6983 (a=3), and 5.6449 (our near-optimal solution). Simultaneously, the performance of the system, in the sense of reducing the number of patients observed in the nonprimary wards and the number of redirected patients per day, is also improved. The number of patients in nonprimary wards is significantly lower than under a=1, and similar to a=2, but with lower costs than under a=2,3. Next (Figure <ref>), under the near-optimal solution, 82.1918% of the time no additional transfers are required. Also, under the near-optimal solution, decisions a=1,2,3 are chosen approximately 50%, 40% and 10% of the time, respectively. § CONCLUSION We proposed a Markov Decision Process for a Patient Assignment Scheduling problem where, at the start of each time period (e.g. day, 8-hour block, etc.), patients are allocated to suitable wards, following relevant hospital policy. Due to the large size of the state space required to record the information about the hospital system, we developed methodology based on Approximate Dynamic Programming for the computation of near-optimal solutions. We demonstrated the application potential of our model through examples with parameters fitted to data from a tertiary referral hospital in Australia. Our model can be applied to a range of scenarios that are of interest in practical contexts in hospitals. We will analyse scheduling problems in data-driven examples, and consider additional costs, such as waiting costs or redirecting costs, and decisions such as transfers of inpatients within the hospitals that are not triggered by arrivals, and take into account the duration of stay at the time of decision making. We will also consider scenarios with time-varying parameters, in particular with seasonal components. These analyses will be reported in our forthcoming and future work.
§ STATEMENTS AND DECLARATIONS Data Data used in the paper was obtained following ethical approval from the Tasmanian Health and Medical Human Research Ethics Committee (HREC No 23633) and site-specific approval from the Research Governance Office of the Tasmanian Health Service. Authorship contribution statement This research has contributed to the Honours thesis by Krasnicki <cit.>. The following are the contributions of the authors, Małgorzata M. O'Reilly (MMO), Sebastian Krasnicki (SK), James Montgomery (JM), Mojtaba Heydar (MH), Richard Turner (RT), Pieter Van Dam (PVD), Peter Maree (PM): * Conceptualisation, Mathematical background: MMO, SK, MH, and JM; * Conceptualisation, Clinical background: RT, PVD, PM; * Sections <ref>-<ref>, Appendix <ref>-<ref>, Methodology development: MMO, SK, and JM; * Sections <ref>-<ref>, Derivation of the probability formulae: MMO; * Section <ref>, Example <ref>, Coding and analysis: SK, MMO, and MH; * Section <ref>, Example <ref>, Coding and analysis: MMO and JM. Declaration of competing interests The authors have no competing interests to declare that are relevant to the content of this article. abbrv § APPENDIX: ALGORITHMS § APPENDIX: EXAMPLE 1 – MATRICES P^(A) AND VECTORS C^(A) We note that the set of all possible post-decision states s^(a) is {1,7,10,13,16,19,20,21,22} for both a=1,2, with s^(1)={[ 1 s=1; 7 s=2,7; 10 s=10; 13 s=13; 16 s=3,16; 19 s=4,8,14,19; 20 s= 11,15,20; 21 s=5,9,17,21; 22 s=6,12,18,22 ]. , s^(2)= {[ s^(1) s≠ 11,15; 21 s=11,15 ]. , where the difference between s^(1) and s^(2) is in places corresponding to s=11,15, which are the only two states where choosing between a=1 or a=2 can make a difference. The probability matrices P^(a) of the discrete-time Markov chains corresponding to decisions a=1,2 are evaluated as follows. First, denote q_k,i=1-p_k,i, the probability that type-i patient does not depart from ward k. Next, denote p(k)=λ^k/k!e^-λ, k=0,1,2,…, the probability of observing k arrivals in a Poisson process with rate λ. Let p(n) = 1-∑_k=0^n-1 p(k) =1-∑_k=0^n-1λ^k/k!e^-λ, n=1,2,…, with p(0)=1, interpreted as the probability that at least n arrivals were observed; and p(ℓ_1,…,ℓ_I | n) = n!/∏_i=1^I ℓ_i!×∏_i=1^I ( λ_i/λ)^ℓ_i =n!/λ^n×∏_i=1^I λ_i^ℓ_i/ℓ_i! n=1,2,…, with ℓ_1+…+ℓ_I=n, interpreted as the conditional probability that, given n patients have been admitted, there were ℓ_1,…,ℓ_I patients of type 1,…,I, respectively. Further, let p^(n)_(ℓ_1,…,ℓ_I) be the probability that n patients have been admitted and there were ℓ_1,…,ℓ_I patients of type 1,…,I, respectively, with ℓ_1+…+ℓ_I=n. Then, if the number of admissions is restricted by ℓ_1+…+ℓ_I≤ N, where N is the number of available beds, we have, p^(n)_(ℓ_1,…,ℓ_I) = p(ℓ_1,…,ℓ_I | n)p(n) = e^-λ×∏_i=1^I λ_i^ℓ_i/ℓ_i! n< N, p^(N)_(ℓ_1,…,ℓ_I) = p(ℓ_1,…,ℓ_I | N)p(N) = ( 1-∑_k=0^N-1λ^k/k!e^-λ) ×N!/λ^N×∏_i=1^I λ_i^ℓ_i/ℓ_i!, and p^(n)_(ℓ_1,…,ℓ_I)=0 otherwise, with p^(0)_(0,0)=1. Also, denote, p = [ [ p^(0)_(0,0) p^(1)_(1,0) p^(1)_(0,1) p^(2)_(2,0) p^(2)_(1,1) p^(2)_(0,2) ]], w = [ [ p^(0)_(0,0) p^(1)_(1,0) p^(1)_(0,1) ]] . 
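The admission probabilities defined above can be evaluated directly. The following Python sketch (our own naming, with placeholder arrival rates λ_1, λ_2) computes p^(n)_(ℓ_1,ℓ_2) for the two-type example, distinguishing the Poisson probability of exactly n<N arrivals from the probability of at least N arrivals when only N beds are free, and then assembles the vectors p (N=2) and w (N=1).

from math import exp, factorial

def admission_probs(lam1, lam2, N):
    # p^(n)_(l1,l2): probability that l1 type-1 and l2 type-2 patients are admitted
    # when N beds are free; for n = l1+l2 < N this is the Poisson probability of
    # exactly that arrival pattern, for n = N it is the probability of at least N
    # arrivals multiplied by the multinomial split over the two types.
    lam = lam1 + lam2
    tail_at_least_N = 1.0 - sum(exp(-lam) * lam**k / factorial(k) for k in range(N))
    probs = {}
    for l1 in range(N + 1):
        for l2 in range(N + 1 - l1):
            n = l1 + l2
            pattern = lam1**l1 / factorial(l1) * lam2**l2 / factorial(l2)
            if n < N:
                probs[(l1, l2)] = exp(-lam) * pattern
            else:                                  # n == N, arrivals beyond N are lost
                probs[(l1, l2)] = tail_at_least_N * factorial(N) / lam**N * pattern
    return probs

# Vectors p and w used in the construction of P^(a), for placeholder rates:
lam1, lam2 = 0.5, 0.7                              # hypothetical lambda_1, lambda_2
p2 = admission_probs(lam1, lam2, N=2)
p_vec = [p2[(0, 0)], p2[(1, 0)], p2[(0, 1)], p2[(2, 0)], p2[(1, 1)], p2[(0, 2)]]
w1 = admission_probs(lam1, lam2, N=1)
w_vec = [w1[(0, 0)], w1[(1, 0)], w1[(0, 1)]]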
Then, for a=1, the nonzero entries of P^(1) are, [ [ P^(1)_1,1 … P^(1)_1,6 ]] = p, for s=2,7, [ [ P^(1)_s,1 … P^(1)_s,6 ]] = p_1,1× p, [ [ P^(1)_s,7 … P^(1)_s,9 ]] = q_1,1× w, for s=3,16, [ [ P^(1)_s,1 … P^(1)_s,6 ]] = p_2,2× p, [ [ P^(1)_s,16 … P^(1)_s,18 ]] = q_2,2× w , for s=4,8,14,19, [ [ P^(1)_s,1 … P^(1)_s,6 ]] = p_1,1p_2,1× p, [ [ P^(1)_s,7 … P^(1)_s,9 ]] = q_1,1p_2,1× w [ [ P^(1)_s,13 … P^(1)_s,15 ]] = p_1,1q_2,1× w, P^(1)_s,19 = q_1,1q_2,1, for s=5,9,17,21, [ [ P^(1)_s,1 … P^(1)_s,6 ]] = p_1,1p_2,2× p, [ [ P^(1)_s,7 … P^(1)_s,9 ]] = q_1,1p_2,2× w, [ [ P^(1)_s,16 … P^(1)_s,18 ]] = p_1,1q_2,2× w, P^(1)_s,21 = q_1,1q_2,2, for s=6,12,18,22, [ [ P^(1)_s,1 … P^(1)_s,6 ]] = p_1,2p_2,2× p, [ [ P^(1)_s,10 … P^(1)_s,12 ]] = q_1,2p_2,2× w, [ [ P^(1)_s,16 … P^(1)_s,18 ]] = p_1,2q_2,2× w, P^(1)_s,22 = q_1,2q_2,2, for s=10, [ [ P^(1)_10,1 … P^(1)_10,6 ]] = p_1,2× p, [ [ P^(1)_10,10 … P^(1)_10,12 ]] = q_1,2× w, for s=11,15,20, [ [ P^(1)_s,1 … P^(1)_s,6 ]] = p_1,2p_2,1× p, [ [ P^(1)_s,10 … P^(1)_s,12 ]] = q_1,2p_2,1× w, [ [ P^(1)_s,13 … P^(1)_s,15 ]] = p_1,2q_2,1× w, P^(1)_s,20 = q_1,2q_2,1, and for s=13, [ [ P^(1)_13,1 … P^(1)_13,6 ]] = p_2,1× p, [ [ P^(1)_13,13 … P^(1)_13,15 ]] = q_2,1× w. For a=2, rows s=11,15 are the only rows in P^(2) that differ from the rows of P^(1), with their nonzero entries given by, [ [ P^(1)_s,1 … P^(1)_s,6 ]] = p_1,1p_2,2× p, [ [ P^(1)_s,7 … P^(1)_s,9 ]] = q_1,1p_2,2× w, [ [ P^(1)_s,16 … P^(1)_s,18 ]] = p_1,1q_2,2× w, P^(1)_s,21 = q_1,1q_2,2, and the remaining rows are the same as in P^(1). Finally, costs C(s,a) are recorded as vectors C^(a)=[C(s,a)]_s=1,…,22 corresponding to decisions a=1,2, such that, C(s,1) = 2c^(σ)I{s= 4,5,6 } +c^(σ)I{s=2,3,8,9,11,12,14,15,17,18} + c^(p)I{s=4,6,8,10,12, 13,14,18,19,22} + 2 c^(p)I{s=11,15,20} C(s,2) = 2c^(σ)I{s= 4,5,6 } +c^(σ)I{s=2,3,8,9,11,12,14,15,17,18} + c^(p)I{s=4,6,8,10,12, 13,14,18,19,22} + 2 c^(p)I{s=20}+ c^(t)I{s=11,15}, since the costs do not depend on the next observed state.
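The cost vectors C^(1) and C^(2) can be assembled mechanically from these indicator sets. A minimal Python sketch, using 1-based state indices as in the enumeration of the 22 states and our own naming:

import numpy as np

c_sigma, c_t, c_p = 1.0, 1.1, 0.2      # assignment, transfer and penalty costs

def cost_vector(a):
    # C^(a)[s-1] = C(s,a) for s = 1,...,22, built from the indicator sets above
    C = np.zeros(22)
    idx = lambda states: [s - 1 for s in states]
    C[idx([4, 5, 6])] += 2 * c_sigma
    C[idx([2, 3, 8, 9, 11, 12, 14, 15, 17, 18])] += c_sigma
    C[idx([4, 6, 8, 10, 12, 13, 14, 18, 19, 22])] += c_p
    if a == 1:
        C[idx([11, 15, 20])] += 2 * c_p
    else:  # a == 2
        C[idx([20])] += 2 * c_p
        C[idx([11, 15])] += c_t
    return C

C1, C2 = cost_vector(1), cost_vector(2)   # the two vectors differ only in states 11 and 15

Together with the transition matrices P^(1) and P^(2), these vectors are the inputs to the policy iteration used to obtain the exact solution.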
http://arxiv.org/abs/2406.19240v1
20240627150449
Data Preparation for Deep Learning based Code Smell Detection: A Systematic Literature Review
[ "Fengji Zhang", "Zexian Zhang", "Jacky Wai Keung", "Xiangru Tang", "Zhen Yang", "Xiao Yu", "Wenhua Hu" ]
cs.SE
[ "cs.SE" ]
1 .001 Data Preparation for Deep Learning based Code Smell Detection: A Systematic Literature Review Fengji Zhang et al. mode = title]Data Preparation for Deep Learning based Code Smell Detection: A Systematic Literature Review fengji.zhang@my.cityu.edu.hk a Department of Computer Science, City University of Hong Kong, Hong Kong, China zexianzhang@whut.edu.cn b School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan, China c Sanya Science and Education Innovation Park of Wuhan University of Technology, Sanya, China jacky.keung@cityu.edu.hk xiangru.tang@yale.edu d School of Engineering & Applied Science, Yale University, New Haven, United States zhenyang@sdu.edu.cn e School of Computer Science and Technology, Shandong University, Tsingtao, China xiaoyu@whut.edu.cn f Wuhan University of Technology Chongqing Research Institute, Chongqing, China whu10@whut.edu.cn § ABSTRACT Code Smell Detection (CSD) plays a crucial role in improving software quality and maintainability. And Deep Learning (DL) techniques have emerged as a promising approach for CSD due to their superior performance. However, the effectiveness of DL-based CSD methods heavily relies on the quality of the training data. Despite its importance, little attention has been paid to analyzing the data preparation process. This systematic literature review analyzes the data preparation techniques used in DL-based CSD methods. We identify 36 relevant papers published by December 2023 and provide a thorough analysis of the critical considerations in constructing CSD datasets, including data requirements, collection, labeling, and cleaning. We also summarize seven primary challenges and corresponding solutions in the literature. Finally, we offer actionable recommendations for preparing and accessing high-quality CSD data, emphasizing the importance of data diversity, standardization, and accessibility. This survey provides valuable insights for researchers and practitioners to harness the full potential of DL techniques in CSD. Code Smell Detection, Deep Learning, Data Preparation, Systematic Literature Review [ Wenhua Hub ============== § INTRODUCTION Code smell refers to certain symptoms or indications in the source code that suggest there may be underlying problems or potential design flaws  <cit.>. It does not necessarily indicate a functional error or bug in the code but rather highlights programming practices that can impair the maintainability, readability, and extensibility of the software  <cit.>. Code Smell Detection (CSD) aims to automatically identify code smells in software source code to ensure code quality, improve software maintainability, and promote good programming practices. Recently, Deep Learning (DL) techniques are gaining popularity in the CSD task <cit.>. The main advantage of DL models is their ability to automatically encode and learn from raw data, eliminating the need for handcrafted rules and feature engineering presented in previous heuristic-based and machine learning-based CSD methods <cit.>. Despite the outstanding performance of the DL-based CSD methods, they require a substantial amount of training data to model the complexity of code smells <cit.>. High-quality training data plays a key role in the validity of model results. Noisy datasets can hinder effective model training and affect result reliability <cit.>. Data quality could also impact the scalability of models <cit.>. 
Early decisions when constructing CSD datasets, such as the choice of language <cit.> and application scenarios <cit.>, can affect the scaling and generalization ability of models. Consequently, the availability of high-quality code smell datasets is crucial for building effective DL-based CSD models. Despite the critical importance of data to CSD models, little attention has been paid to systematically analyzing the CSD data preparation process. Recent literature surveys <cit.> comprehensively analyzed DL-based CSD methods. They covered many facets, including code smell types, deep learning techniques, model features, evaluation methods, and the datasets used. They indicated insights on programming language preferences, model effectiveness, and dataset characteristics. However, they overlooked crucial aspects of data preparation, such as requirements, collection, cleaning, and labeling techniques. Consequently, they lack a comprehensive view of data preparation challenges and strategies, limiting researchers and practitioners in harnessing the complete potential of DL techniques in CSD. To address these gaps, this survey systematically analyzes the existing data preparation processes for DL-based CSD. We identify relevant papers through a Systematic Literature Review (SLR) process, collecting 36 papers on DL-based CSD studies published until December 2023. We then carefully analyze the collected papers concerning data preparation considerations, encountered challenges, and proposed solutions. Additionally, we provide recommendations for preparing and accessing high-quality CSD data. This survey is organized around three main Research Questions (RQs): RQ1 What are the critical considerations in constructing CSD datasets? This research question aims to understand the critical factors researchers consider when building DL-based code smell detection datasets. We analyze papers regarding the four main phases of the established machine learning workflow <cit.>: data requirements, collection, labeling, and cleaning. For data requirements, we examine the programming language, code smell types, and detection scenarios addressed. For data collection, we analyze the data sources and types. For data labeling, we summarize the costs and efficiency of automatic, manual, and semi-automatic approaches. Finally, for data cleaning, we identify issues of code noise and redundancy. RQ2 What are the challenges in existing CSD datasets? This research question identifies challenges that may hinder the performance and reliability of DL-based CSD methods due to data issues. We analyze seven primary challenges: data scarcity, limited generalization, inaccessibility, heavy expert dependency, difficulty in labeling, data imbalance, and redundancy. RQ3 What are the solutions presented in the literature? Given the challenges identified in RQ2, this research question summarizes solutions proposed in the literature. To the challenges addressed, we map five approaches - cross-project datasets, two-phase data utilization, resampling, semi-automatic labeling, and data cleaning methods. Finally, we provide recommendations based on this survey. Future work should focus on creating more diverse, publicly available datasets that address current limitations. Researchers could leverage multiple programming languages, data sources, and domains to improve generalizability. Semi-automatic labeling and automated real-world data collection may help scale datasets while maintaining quality. 
Adopting best practices for data governance, including documenting the data collection and pre-processing details, would enhance transparency and reproducibility. Establishing standard criteria to evaluate datasets could help standardize their construction and quality assessment. These efforts aim to generate larger, higher-quality datasets allowing DL-based models to better learn complex code smell patterns across different application scenarios. The main contributions of this research are: * Introduce the first systematic review of data preparation processes for DL-based CSD methods (RQ1 in Section 4). * Provide thorough solutions mapped to identified data challenges to guide future dataset preparation (RQ2 in Section 5 and RQ3 in Section 6). * Propose recommendations on diversifying and standardizing datasets through multi-language modeling, semi-automatic labeling, and best practices for data governance and accessibility (Section 7). The rest of the paper is organized as follows. Section <ref> describes background and related work. Section <ref> details our SLR methodology. Sections <ref> to <ref> present results addressing each RQ. Section <ref> provides recommendations based on findings. Section <ref> discusses threats to validity. Section <ref> concludes the paper. § RELATED WORK This section provides context on code smells, code smell detection techniques, and related CSD SLRs. The background informs the goals and context of our study. §.§ Code Smell Detection Code smells represent undesirable design or implementation constructs that can degrade code maintainability and quality, indicating structural patterns correlated with increased defect risk <cit.>. One example is the Feature Envy shown in Figure <ref>. In this case, the crop method belongs to the ImageArguments class but is coupled to the CropRegion class. The CropRegion class has a toString() method that converts the CropRegion object to a string, and the crop method calls value.toString(), focusing too much on the internal details of the CropRegion class and its string conversion logic. Although crop is defined in ImageArguments, it reads and manipulates the data of CropRegion rather than that of its owning class. The crop method would therefore be better defined as a method in CropRegion, since it primarily operates on that class's data rather than on the data of ImageArguments. This excessive dependency on another class's implementation indicates the presence of Feature Envy, which can negatively impact code understandability, modification, and testing. Early code smell detection approaches are generally heuristic-based or machine learning-based models <cit.>. However, heuristic-based methods are criticized for their subjectivity due to manually adjusted heuristic rules <cit.>. Machine learning-based methods require careful construction and selection of code smell features <cit.>, which also relies heavily on human expertise. With advancements in deep learning technology in the field of both artificial intelligence and software engineering  <cit.>, researchers have recently introduced various deep learning techniques for automatically extracting code smell features from code <cit.>. Model architectures like convolutional neural networks <cit.>, long short-term memory networks <cit.>, and recurrent neural networks <cit.> have achieved state-of-the-art CSD accuracy <cit.>. §.§ Related CSD SLRs Despite data preparation playing a pivotal role in CSD, comprehensive investigations have been lacking.
Several surveys in CSD methods explored machine learning and deep learning techniques. While these surveys discussed CSD datasets to varying extents, most of them lacked detailed examination of the data preparation process and systematic identification of challenges or solutions. Specifically, <cit.> delivered a comprehensive review of CSD studies from 1999 to 2016, stressing the pivotal role of code smells in software maintenance. <cit.> and <cit.> explored machine learning-based CSD studies until 2018, focusing on the types and performance of machine learning techniques. They both found that the random forest emerged as the most effective technique for detecting various code smells and emphasized the significance of manually validating datasets, noting a scarcity of available datasets. <cit.> reviewed CSD studies up to 2020, concentrating on simple and hybrid machine learning techniques and their evaluation methods. They revealed that support vector machine and decision tree algorithms were frequently used by the researchers, and much of the research focused on open-source software. Additionally, they noted that most of the researchers used small and medium-sized datasets and lacked valid industrial datasets. <cit.> assessed the reproducibility of CSD research from 1999 to 2020, focusing on machine learning-based studies. <cit.>, <cit.>, and <cit.> are all devoted to systematically reviewing the research progress in the field of DL-based CSD. They provided a comprehensive survey and summary of the developments in the field from various perspectives such as code smell types, deep learning techniques, datasets, and model performance evaluation. They highlighted supervised learning as the most commonly used learning method and pointed out the importance of models such as convolutional neural networks, recurrent neural networks, and long and short-term memory networks in CSD. In addition, they generally observed the prevalence of Java datasets and that method-level code smell is most often detected. Although these reviews discussed the datasets, there is a certain lack of detailed exploration and systematic analysis of the data preparation stages. They focused more on what type of code smell dataset was used, the programming language of the dataset, and the size of the dataset, while the impact of the dataset preparation process was not examined in depth. Compared to general CSD surveys, we specifically target the understudied domain of data preparation to advance understanding and inform future practices. There is one particular survey studying CSD datasets <cit.>. They meticulously compared CSD datasets across properties like size, supported smells, programming languages, and construction methods. Their findings highlighted several limitations within existing datasets, notably imbalances in samples, absence of severity levels for smells, and constraints related to Java-based datasets. However, their analysis predominantly focused on machine learning datasets and did not explore the challenges identified in our survey, i.e., Data Scarcity, Limited Generalization Ability, Limited Data Accessibility, Heavy Expert Dependency, Difficulty of Data Labeling, and Redundancy. In addition, they lacked a detailed exploration of solutions or recommendations to address the challenges found, particularly in the context of deep learning applications within CSD. Instead, we address these challenges by proposing diverse solutions. Furthermore, we provide a set of recommendations to advance the field. 
These recommendations aim to foster progress by addressing critical issues and promoting standardized practices within the domain of DL-based CSD. § RESEARCH METHODOLOGY Our SLR process strictly adheres to established guidelines <cit.> to ensure an objective review. We also adopt a snowballing approach <cit.> to include additional literature and enhance the completeness of our review. Figure <ref> outlines our SLR process. The first two authors conduct the work closely with review from the other authors. This process occurred in 2023 and identified 36 relevant papers, as detailed in Table <ref>. §.§ Search Strategy To design the search string for our SLR, we utilize the PICO (Population, Intervention, Comparison, Outcomes) framework <cit.>. This framework is widely adopted in systematic reviews to formulate research questions and develop search strategies. PICO helps in breaking down the research topic into four key components: * Population (P): The population of interest in our study is represented by “Code Smell”. * Intervention (I): The intervention refers to the “DL Technique” (Deep Learning Technique). * Comparison (C): Comparison is not applicable in our study, hence it is omitted. * Outcomes (O): The expected outcomes are related to “Code Smell Detection”. Table <ref> details the key terms and synonyms associated with each PICO component used in our search strategy. We constructed the query by combining these PICO components using the Boolean operator “AND”. This approach ensures a comprehensive search by encompassing a broad spectrum of research related to our topic. We include variants of key terms facilitated through the use of wildcard matching. For instance, the term “detect*” covers “detect”, “detection”, “detecting”, etc. We combine the key terms using the “OR” operator. The detailed search string in the SCOPUS format is as follows: TITLE-ABS-KEY((“Code smell” OR “Bad smell*” OR “Design smell*” OR “Design flaw*” OR “Antipattern*” OR “Model smell*”) AND (“DL Technique” OR “Deep learning” OR “Transfer learning” OR “CNN” OR “RNN” OR “Auto-encoder*” OR “Deep neural network*”) AND (“Code Smell Detection” OR “Detect*” OR “Predict*” OR “Identif*”) We adapt the search string as necessary to match each database. The databases queried include Google Scholar, SCOPUS, ACM Digital Library, IEEE Xplore, Springer, and Wiley. These databases are chosen based on recommendations from previous SLRs <cit.>, which highlighted them as sources containing high-quality, peer-reviewed research in software engineering. Furthermore, we apply additional filters to the retrieved papers, including the language (i.e., English-only) and publication status (i.e., The paper should be a peer-reviewed full research paper published in a conference proceeding or a journal). The initial search identifies 1270 papers. We then remove the duplicate records, resulting in 975 papers for further screening. §.§ Paper Selection We aim to identify high-quality studies that could provide valuable insights into data preparation for DL-based CSD. Additional criteria and quality assessment are applied to screen eligible papers. §.§.§ Inclusion/Exclusion Criteria We propose three inclusion and four exclusion criteria, as shown in Table <ref>. A paper is only included if it meets all the inclusion criteria and does not conform to any exclusion criteria. The inclusion criteria require that papers utilize deep learning techniques for CSD and propose novel deep learning-based models or solutions for the task. 
Papers also need to describe the datasets used clearly. For exclusion, papers that use heuristic or machine learning-based detection techniques, review existing models, or only perform statistical/correlational analyses are removed. Papers with unclear descriptions of the datasets are also excluded. To help validate the consistent application of the criteria, the first two authors also conduct an initial screening of 50 randomly selected papers. They independently assess whether each paper meets or does not meet the inclusion and exclusion criteria. The absence of discrepancies between the authors' assessments lends additional confidence in the reliability and validity of the criteria used in this study. The criteria are applied in two phases. First, the titles and abstracts of retrieved papers are screened according to the criteria. This process leaves 78 papers for potential inclusion. Then, the full texts of the remaining 78 papers are thoroughly reviewed against the criteria. After a full assessment, we have 35 papers that suit the inclusion and exclusion criteria. §.§.§ Quality Assessment We conduct a quality assessment on the remaining papers using the checklist in Table <ref>. Quality criteria are essential for assessing the reliability of extracted information, though there is no standardized approach <cit.>. Following the <cit.>, and <cit.>, our checklist mainly examines independent/dependent variables, validation methods, datasets, and experimental complexity. One author initially performs the quality assessment, with two additional authors conducting another round of results validation. This process excludes four papers for failing to meet one or more quality criteria. Specifically, <cit.>, <cit.>, <cit.>, and <cit.> lack sufficient descriptions of independent/dependent variables, validation approaches, or datasets used. The details of these papers that do not meet our quality standards are omitted for brevity. This assessment aims to screen papers with incomplete reporting that could limit the extraction of meaningful insights. §.§ Snowballing To ensure comprehensive coverage of all relevant literature, we perform manual snowballing as per the guidelines by Wohlin et al. <cit.>. This involves both forward and backward snowballing techniques. Forward snowballing involves examining the citation lists of all papers that meet our inclusion criteria to locate additional relevant papers. Backward snowballing reviews the reference lists to uncover any pertinent studies not previously identified. During the initial round of snowballing, we successfully identify five new relevant papers. Subsequent rounds of snowballing are conducted following the same rigorous screening process, adhering to our predefined inclusion and exclusion criteria and maintaining our quality assessment standards. Despite the additional rounds, no new papers are found that meet our criteria, leading us to conclude that we have reached a saturation point. This is due to the limited scope of current research on DL-based code smell detection, which is a relatively nascent field. In total, our search and snowballing processes yield 36 papers. Of these, 31 papers are initially retrieved through keyword searches across various databases. The remaining five papers are found through manual snowballing of references and citations. This dual-phase approach helps provide a more comprehensive examination of the literature. 
§.§ Data Extraction We have designed a data extraction form to systematically analyze the identified papers, shown in Table <ref>. The form design is adapted from prior SLR guidelines <cit.> and pilot-tested before finalizing. The form captures qualitative and quantitative attributes across four aspects: metadata, datasets, methods, and other information. One author performs the initial data extraction to organize information collected from each paper. Then, two additional authors verify the extracted data through independent examination. Any disagreements are resolved through group discussion to reach a consensus. Most data extracted is qualitative, such as deep learning techniques applied, data sources, pre-processing approaches, and code smells addressed. Some quantitative data is also collected, like imbalance ratios within datasets. This formal extraction process aims to investigate the research questions proposed in our study comprehensively. The publication details of the 36 primary studies are listed in Table <ref>. The number of these 36 papers over the years is shown in Figure <ref>. § RQ1 - CRITICAL CONSIDERATIONS IN CSD DATA PREPARATION Through a comprehensive literature review, we extract and analyze the various considerations taken to construct datasets for code smell detection. Following the machine learning workflow introduced by  <cit.>, our data preparation analysis centers around four main phases in Figure <ref>, including data requirements, collection, labeling, and cleaning. By thoroughly reviewing these preparation aspects, we clarify current practices and guide practitioners and researchers on effectively addressing critical factors when building datasets. This will help standardize the construction of high-quality datasets for code smell detection. §.§ Data Requirements When constructing high-quality datasets to train and evaluate DL-based code smell detection models, it is crucial to determine which programming language code will undergo smell detection and the specific types of code smells to be detected. Moreover, we should also consider the code smell detection scenario, i.e., whether to use within-project or cross-project data to build the datasets. Therefore, three key factors should be considered when preparing datasets for code smell detection research: programming language, code smell type, and code smell detection scenario. Programming Language: The choice of programming language is an essential early decision in dataset construction. Several aspects influence this choice, including the availability of openly accessible code samples and the types of code smells to be studied for that particular language. As depicted in Figure <ref>, our analysis shows that the vast majority of papers [S1-12, S14-18, S20-31, S33-36] utilize Java datasets due to the widespread use of the Qualitas Corpus — an open-source collection of Java projects. The higher availability of Java datasets helps accelerate research in this area. Besides, two papers [S14, S34] focus on C# to investigate the feasibility of transfer learning across languages. There is one paper [S32] studying Python datasets and one paper [S19] studying UML datasets, where [S32] specifically examines Python security smells, and [S19] studies the presence of functional decomposition in UML models of object-oriented software. S13 does not provide details on the programming language studied or the dataset used, which is not categorized within this section. 
Code Smell Type: Another critical consideration is the types of code smells. In our study, we categorize the code smells into the Class level, the Method level, and the code smells that are relevant to Both levels. The Class level code smells typically pertain to the design and structure of entire classes, which concerns class-level refactoring or redesign. The Method level code smells are usually related to the internal implementation and behavior of the methods or functions, which concerns refactoring or decomposition of the methods. The Both level represents code smells that may result in both class- and function-level refactoring. We list the details of the Method level code smells, the Class level, and the Both level code smells in Table <ref> and <ref>. As can be seen, certain code smell types have garnered considerable attention from researchers, with four prevalent types covered by ten or more papers. Among them, Feature Envy is the most investigated code smell, which denotes methods that access the data of another object rather than its data. Another prevalent code smell God Class [Also known as the Blob Class, which is studied in [S7, S9, S13, S33].] means classes that have many members and implement different behaviors. The Long Method code smell refers to the methods that are too long and contain too much code logic. And Data Class means a class containing only data fields and methods. The prevalence of these code smell types can be attributed to their distinctiveness, ease of identification, and relatively large number of samples in real-world codebases. However, some code smell types have not received enough attention. One of them is Type checking, which means frequently using type checking to determine the type of an object. Another example is Dummy handler, an exception handler that only logs an error message without taking any meaningful corrective actions. Notably, there has been a recent effort <cit.> studying these code smells to broaden the scope of experimental research and enhance empirical investigations in these domains. Code Smell Detection Scenario: There are three main CSD scenarios. First is within-project detection, splitting a project into the training and testing data with no intersection. The second is cross-project detection. Contrary to the previous, the training and testing data come from different projects. This approach solves the problem of lacking enough training data from a single project. The third is mixed-project detection, which utilizes mixed data from multiple projects to get the training and testing data partitions. In this way, it can create enough data for training and evaluation. We categorize the reviewed papers based on their CSD scenario and draw a pie chart in Figure <ref>. We can find that 23 papers [S1-4, S7, S9, S12-13, S15, S17-23, S25-27, S29-30, S32-33] belong to the within-project scenario; Ten papers [S5-6, S8, S11, S14, S16, S28, S34-36] belong to the cross-project scenario; And three papers [S10, S24, S31] belong to the mixed-project scenario. Within-project detection is the most popular practice for training and testing CSD models. There is a scarcity of papers using mixed-project datasets because unifying the feature extraction from different projects is still an open challenge. §.§ Data Collection The primary considerations during data collection vary based on the data source, which we categorize as real-world, synthetic, or mixed. 
Real-World Data: Most studies [S1-5, S7-9, S12-15, S18-22, S24-25, S27-28, S30, S32-36] utilize real-world data by collecting open-source projects/repositories or using existing datasets. Real-world data is the best testbed for validating CSD techniques in practical applications. The most commonly used dataset is Qualitas <cit.>, which contains many Java open-source projects. It is used by five papers [S2, S4, S21, S25, S27]. Other well-processed corpora are constructed using multi-lingual source code from Github, Bitbucket, Apache, etc. Examples include LandFill <cit.> [S7, S9], MUSE <cit.> [S12], MLCQ <cit.> [S30], CodeXGlue <cit.> [S32], and the Benchmark <cit.> [S14, S34]. Furthermore, there is also a study [S19] investigating alternative data type, using the Img2UML <cit.> corpus, which consists of XMI file of the UML class models parsed from images. Utilizing these real-world datasets is crucial in CSD research as they provide valuable insights into the complexities and challenges faced in practical software development. Synthetic Data: In the context of CSD, researchers need to generate synthetic data to tackle challenges like insufficient real-world code smell samples or severe data imbalance. We identify six papers [S6, S11, S17, S23, S26, S29] using synthetic methods to overcome these challenges. They usually follow a unified process for synthesizing the needed data. The first is to collect usable code snippets. The second is to assess whether each code snippet can be transformed into a code smell. The final step is to generate positive and negative samples using the identified code snippets in the second step. Negative samples are the unchanged code snippets. Positive samples are artificially altered fragments of the original code to make it smelly. For example, to create feature envy smells, they can perform unnecessary move refactoring, moving methods from one class to another <cit.>. Mixed Data: Another way to create datasets is to mix real-world and synthetic data. This is typically employed to address the challenge of having limited samples while preserving real-world data distribution <cit.>. Our survey identifies three papers [S10, S16, S31] that use mixed data. Specifically, [S10] mixes the real-world data from the Qualitas and synthetic data. [S16] mixes synthetic data created by <cit.> with real-world data from the LandFill <cit.>. [S31] mixes real-world <cit.> and synthetic datasets <cit.> from previous references used. These mixed datasets provide researchers with a valuable resource for conducting experiments that balance real-world complexity's benefits with synthetic data's controlled environment. §.§ Data labeling The scale and quality of data labeling significantly influence the results and reliability of empirical studies. We summarize three labeling methods for constructing CSD datasets: automatic, manual, and semi-automatic ways. Automatic: A common practice of labeling CSD datasets is using automatic tools. Such practice can bring several benefits, including convenience and time efficiency <cit.>. 17 papers [S4-6, S8, S10-13, S15, S21-24, S26, S29-30] use automatic tools to label the datasets. One of the frequently used tools is JDeodorant <cit.>. It is an Eclipse plug-in that detects code smells in Java software and recommends appropriate refactorings to resolve them. For the moment, the tool supports five code smells, namely Feature Envy, Type/State Checking, Long Method, God Class, and Duplicated Code. 
Another example is Checkstyle <cit.>, which is a development tool for Java, which checks many aspects of the source code. It can find class and method design problems. It also has the ability to check code layout and formatting issues. We provide a summary of all identified automatic tools in Table <ref>. Manual: Manual labeling is time-consuming and labor-intensive, demanding substantial human resources and specialized knowledge. However, manual effort is sometimes necessary because humans can easily generalize to different domains and achieve higher reliability. We identify six papers [S1, S7, S9, S17, S19-20] that manually label the datasets. The manual labeling process first requires experts to manually analyze the source code based on various code smells. Secondly, additional experts are required to validate the accuracy of the previously identified smelly samples. Any disputed samples should be reviewed again with respect to code smell definitions, source code, and change history information to reach a conclusion. Such protocol enhances the reliability and reduces the subjectivity of the labeling process  <cit.>. Semi-automatic: There are 13 papers [S2-3, S14, S16, S18, S25, S27, S31-36] that explore a hybrid approach that combines both methods to address the reliability issue associated with automatic labeling and the labor-intensive nature of manual labeling. Generally, they use automatic tools to label all samples and verify the labeled results by experts. Specifically, a subset of samples regarding code smells is chosen for individual analysis by multiple experts. The experts perform individual analyses without discussing them with others. Then, Cohen's Kappa is calculated to measure inter-expert agreement. Disagreements are discussed to reach a consensus on a final labeled set. Finally, the manually labeled results are compared to the automatic tools' labeled results. If the differences are negligible, the dataset labeled by the automatic tools for all samples is ultimately adopted. §.§ Data Cleaning The final stage of data preparation is data cleaning. Though not all studies comprehensively address this stage, we identify two prevalent cleaning steps involving code noise and data redundancy. Code Noise: Six papers [S3, S18, S21, S27-28, S33] indicate that code noise may introduce irrelevant or erroneous information that can mislead models. [S21, S27-28] identifies several noise types, including outliers, missing data, and mismatching feature types. Textual noise like blank lines and non-ASCII characters are also found in [S3]. [S18, S27] find that incomplete or erroneous data sessions and non-normalized features can introduce additional noise. Object data types instead of expected numerical types are also identified in one dataset [S33]. Such noise can be removed through pre-processing to improve dataset usability for CSD models, which will be detailed in the subsequent sections of this survey. Redundancy: Data redundancy refers to identical or highly similar code samples and redundancy features. This adversely impacts analysis and model performance. To mitigate these effects, two papers [S14-15] remove duplicate code samples from identical code files or fragments. 
Nine papers [S6, S9-10, S13, S18-21, S25] use various feature selection methods to remove redundant features, including: * Convolutional Neural Networks (CNN): Applied in [S6, S10, S18, S21], CNNs are leveraged for their ability to automatically identify and discard redundant features through generalizing from relevant data. * Gain Ratio: Utilized in [S9, S21], the gain ratio is an extension of the information gain criterion, which normalizes the information gain by the intrinsic information of a split, making it effective in choosing features that provide the most significant discrimination. * Cross-Correlation Analysis: As described in <cit.> and used in [S13], this method involves analyzing the cross-correlation function to identify and eliminate features that exhibit high redundancy with other features. * Chi-Square Test: Employed in [S19, S25], the chi-square test evaluates the independence of features with respect to the target variable, enabling the selection of features that have a statistically significant association with the outcome, thus removing irrelevant or redundant features. * Information Gain: Applied in [S20], information gain measures the reduction in entropy or uncertainty by partitioning the data according to different features, helping to identify and retain the most informative features. § RQ2 - CHALLENGES IN CSD DATA PREPARATION This section presents seven prominent challenges encountered by researchers during the creation of CSD datasets, including Data Scarcity, Limited Generalization Ability, Limited Data Accessibility, Heavy Expert Dependency, Difficulty of Data Labeling, Data Imbalance, and Data Redundancy. Each challenge presents a unique hurdle in the quest for effective DL-based CSD. We briefly summarize these challenges in Table <ref>. §.§ Data Scarcity Data scarcity in DL-based CSD refers to the inadequacy of available real-world data for training and testing DL models. Seven papers [S3, S5-6, S14, S16, S22, S36] point out this issue. In cases where the datasets lack sufficient samples, the model may struggle to acquire essential features and patterns, ultimately leading to a decline in performance  <cit.>. For example, [S5] states that their sample size may limit the generalizability of the results and hope further to evaluate the approach on a larger set of systems. In addition, several researchers have opted to use synthetic datasets due to the scarcity of real-world data. These synthetically generated datasets are large in size and low in labor effort. However, four papers [S6, S11, S16-17] argue that these synthesized datasets can threaten the validity of the proposed methods. It is underscored that the generated data could be significantly different from real-world code smells <cit.>. The generated smelly samples are essentially different from real-world ones that are often more challenging to identify <cit.>. §.§ Limited Generalization Ability The limited generalization ability of CSD models is related to the single programming language of the dataset, the limited number of code smell types, and the choice of data source. Several papers [S1, S3, S7, S12, S15, S22, S24-25, S28, S30, S32, S36] find that the programming language of training data affects model effectiveness. <cit.> also find that Python models have lower performance in transfer learning. This performance degradation becomes even more pronounced when evaluated on datasets containing switch statements. 
In addition, several papers [S1, S5-6, S9, S12, S15-18, S22, S25, S30, S32, S35] have focused on the different code smell types in the datasets. For example, three papers [S12, S22, S35] recognize performance degradation when applying their approach to other code smells. Six papers [S2, S7, S15, S18, S28, S35] indicate that the choice of dataset sources can also limit the generalization ability of models. [S2] points out that models trained only on datasets collected from open-source projects cannot generalize to closed-source industrial projects. The narrowed scope of datasets and classification scenarios may lead to model performance degradation in unseen contexts. §.§ Limited Data Accessibility Limited data accessibility refers to unreproducible or unavailable datasets used in CSD. Many papers [S3, S8, S13-15, S22, S24, S29-30] do not provide access to their source code or the constructed datasets. Some do not even reveal the dataset construction details. For example, [S6] uses private libraries to build datasets, so other researchers cannot access the same datasets to validate or extend the study. Lack of reproducibility in scientific research means reduced impact of the results <cit.>. For example, it is difficult for the industry to trust, invest in, or apply ideas or findings that cannot be replicated in practice. We suggest that future studies should select publicly available and representative data sources when constructing their datasets to ensure that the study is replicable, scalable, and widely applicable. §.§ Heavy Expert Dependency Heavy expert dependency refers to the manual labeling of code smell datasets described in the previous subsection, which places substantial demands on the experts' domain knowledge <cit.>. Many papers [S3, S6, S14, S17-18, S26, S34, S36] have mentioned that data experts should deeply understand the distinct characteristics and intricate concepts underpinning various code smell types. For example, [S17] proposes to train inspectors, enhance their conceptual and cognitive grasp of the code smell domain, and evaluate their aptitude in order to select excellent graduate students for the manual evaluation. Moreover, it is worth noting that the identification of certain complex code smells can pose formidable challenges even to researchers, making the task exceedingly difficult. Even experts can find it hard to agree on the presence of a smell sample <cit.>. For example, <cit.> invite 105 experts to evaluate all refactoring suggestions generated by their models and identify whether they agree with the smelly samples. Results show that 44% of the experts disagree with each other. §.§ Difficulty of Data Labeling Data labeling poses unique challenges for code smell detection, including the time cost of manual labeling, accuracy issues with automatic approaches, and potential label noise. Five papers [S5-6, S18, S24, S36] emphasize difficulties with manual and automatic labeling. Manually building labeled datasets is time-consuming due to the extensive effort required [S5]. Though automatic labeling could help with scale, tools have limitations related to predefined heuristics and lower accuracy than human judgments [S6]. Label noise originating during dataset construction also impacts quality. Manual labeling relies on subjective developer perspectives and inconsistent interpretations of smell definitions, which can lead to ambiguous or conflicting labels for the same code [S7, S33, S35].
To experiment on manually validated datasets, [S5] observes significant performance decreases compared to generated data. This highlights the risk of models overfitting to potentially noisy human-generated labels. In summary, both manual and automatic approaches present difficulties that hinder effective labeling at scale. The subjective nature of code smells also makes datasets susceptible to label noise. These challenges point to the need for labeling methods that balance accuracy, efficiency, and consistency in dataset construction. §.§ Data Imbalance Data imbalance refers to an uneven distribution of code smell samples within datasets, where smelly samples are generally outnumbered by non-smelly code. Such imbalance poses challenges for model training and performance  <cit.>. For example, the widely-used dataset Qualitas [S2, S10, S21, S25, S27] exhibits a significant imbalance, containing only 33% smelly samples. Some datasets are even more skewed, with four studies [S5, S11, S16-17] utilizing data comprising just 2% smelly samples. [S28] indicates severe performance degradation caused by data imbalance. And [S6] highlights imbalance slows down model convergence during training due to the overabundance of non-smelly samples, prolonging the training time. Moreover, models trained on such imbalanced data tend to over-predict samples as non-smelly, which hinders the detection of the underrepresented yet important smelly samples. §.§ Data Redundancy Data redundancy refers to duplicate or overlapping samples and features within datasets used for model training  <cit.>. Sample redundancy means multiple identical code samples exist in the dataset. Several papers [S2, S14-15] observe this occurrence. Redundant samples unnecessarily consume storage and computational resources during training, as they provide no additional value <cit.>. Redundant features refer to excessive or overlapping code smell features within the dataset. Overlapping features can complicate models and hurt performance, as highlighted by [S2]. Such redundancy can arise when feature extraction tools derive similar code smell features from source code. We identify four papers [S2, S13, S21, S27] that initially use such tools to characterize code smells and construct deep learning models based on these extracted features. §.§ The Interplay of Challenges In previous discussion, we have addressed each identified challenge in isolation to explain their specific impacts and solutions. However, it is crucial to recognize that these challenges often do not occur independently but interplay in ways that can compound their effects. For example, data scarcity often leads to a disproportionately smaller number of smelly instances than non-smelly instances within datasets, thereby exacerbating data imbalance (referenced in [S14, S16]). Similarly, heavy reliance on expert knowledge not only hinders the efficiency of data labeling but also contributes to overall data scarcity, further complicating the challenges (as noted in [S6, S18, S36]). Furthermore, issues such as limited data accessibility combined with high data redundancy can severely limit the generalization ability of the solutions (discussed in [S3, S6, S15, S22, S24, S28, S30]). To effectively tackle these multifaceted problems, it is apparent that solutions need to be designed with an understanding of these dynamics. 
Therefore, we propose that a more systematic approach, possibly incorporating integrated solutions, is essential to address the interrelated challenges comprehensively. This holistic perspective is crucial for developing robust and effective strategies to address the complexities. § RQ3 - SOLUTIONS PRESENTED IN THE LITERATURE The previous sections have outlined several key challenges in constructing datasets for DL-based CSD. This section summarizes solutions presented in the literature for mitigating these issues. Figure <ref> summarises the solutions we have identified and marks the challenges in RQ2 that they address. The following analysis examines these proposed approaches, assessing their potential, as well as their limitations. §.§ Utilizing Cross-project Data (Target Challenges in RQ2: Data Scarcity, Limited Generalization Ability, and Heavy Expert Dependency) Leveraging data from multiple software projects is an approach for constructing more high-quality datasets  <cit.>. We identify 13 papers [S5-6, S8, S10-11, S14, S16, S24, S28, S31, S34-36] that apply this approach to address challenges related to manual dataset creation efforts. Key benefits of cross-project datasets include improved efficiency, diversity, and generalizability. Specifically, drawing from various codebases significantly reduces time and resources spent on manual labeling compared to single-project datasets <cit.>. This approach also seamlessly accommodates differing languages and smell types to address dataset homogeneity issues <cit.>. Moreover, validating models across projects further enhances their assessed generalizability. Rather than overfitting to narrow contexts, cross-project datasets support identifying smells in new codebases. This confirms a model's versatility versus the limitations of project-specific evaluations. In summary, utilizing multi-project sources presents an effective strategy for constructing more high-quality, diverse, and representative datasets. The enhanced scale, heterogeneity, and generalizability provided by cross-project data help advance research by enabling the development of detection models applicable to broad codebase populations. This solution directly addresses key challenges around dataset construction efforts. §.§ Two-phase Data Utilization (Target Challenges in RQ2: Data Scarcity, Heavy Expert Dependency, and Difficulty of Data Labeling) Aiming to enhance model performance, there are ten papers [S2-3, S14, S18, S25, S27, S32, S33-35] employing a two-phase pre-training and fine-tuning approach. During pre-training, models learn patterns from synthetically generated data. This provides a foundation of code smell characteristics despite potential differences from real data distributions. The primary goal is exposure rather than perfect replication. Subsequently, fine-tuning involves further refining the model using real-world datasets. Adjusting the learnable parameters of deep learning models helps specialize them to real-world code smells. Research has shown deep learning can cope with noise in training data <cit.>. Thus, synthetic data facilitates dataset expansion while fine-tuning addresses challenges of low reliability and scarce real-world examples. Pre-training establishes a general understanding before fine-tuning customizes performance for real-world accuracy <cit.>. Overall, this staged process maximizes the value of generated data. The two-phase approach cultivates models capable of detecting smells across diverse codebases addressing practical scenarios. 
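A schematic sketch of this two-phase strategy is shown below in PyTorch. The tiny classifier, the feature dimension, and the randomly generated tensors are placeholders standing in for synthetic and real code smell data; they are not the models or datasets of the surveyed studies.

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()

def run_phase(features, labels, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features), labels)
        loss.backward()
        opt.step()

# Phase 1: pre-train on plentiful, synthetically generated samples.
x_syn, y_syn = torch.randn(1000, 20), torch.randint(0, 2, (1000,))
run_phase(x_syn, y_syn, lr=1e-3, epochs=20)

# Phase 2: fine-tune on the smaller, manually validated real-world set,
# typically with a lower learning rate.
x_real, y_real = torch.randn(100, 20), torch.randint(0, 2, (100,))
run_phase(x_real, y_real, lr=1e-4, epochs=10)
```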
By integrating synthetic and real data synergistically, this strategy helps advance the field. §.§ Resampling Imbalanced Data (Target Challenges in RQ2: Data Imbalance) Data resampling aims to address the imbalance by rebalancing class distributions. The appropriate technique depends on the imbalance severity and requirements. We identify 14 papers [S4, S12-16, S21-23, S25, S27, S31, S33, S36] exploring resampling. The approach used by most studies [S4, S13, S21, S25, S27, S33] is Synthetic Minority Over-sampling TechniquE (SMOTE) <cit.>. This oversampling technique generates synthetic smelly samples to help balance the datasets. Apart from SMOTE, one study [S23] uses fuzzy sampling, and [S22] develops an automatic refactoring tool to transform non-smelly samples into smelly ones. Five papers [S12, S14-16, S36] apply undersampling to reduce the non-smelly samples. §.§ Semi-automatic Labeling (Target Challenges in RQ2: Difficulty of Data Labeling) To address labeling challenges, 14 papers [S2-3, S14, S16, S18, S25, S27-28, S31-36] apply a semi-automatic methodology combining automatic tools with manual validation for high-quality labeled data. Generally, this approach uses automatic tools to label all samples and verifies the results that are labeled by experts. Specifically, multiple experts chose a subset of samples for individual analysis regarding code smells. The experts perform individual analyses without discussing them with others. Then, Cohen's Kappa is calculated to measure inter-expert agreement. Disagreements are discussed to reach a consensus on a final labeled set. Finally, the manually labeled results are compared to the automatic tools' labeled results. If the differences are negligible, the dataset labeled by the automatic tools for all samples is ultimately adopted. This approach not only facilitates large-scale labeling but also ensures quality through expert verification. For instance, study S[2] achieves balanced datasets by integrating tool-based advice with expert validation, demonstrating improved robustness in model performance. Similarly, S[16] and S[36] underscore the method's efficacy in refining datasets for enhanced model learning outcomes. Such empirical evidence highlights the semi-automatic approach as both effective and efficient. §.§ Data Cleaning (Target Challenges in RQ2: Data Redundancy) The goal of cleaning is to derive datasets optimized for accurate detection. Data cleaning involves removing redundancy, inconsistencies, and irrelevant content. Our survey identifies several common cleaning methods, including: * Comment/blank line removal to filter non-code contextual data [S3, S21, S27]. * Missing value imputation or sample removal to handle gaps [S21, S27-28]. * Outlier detection/replacement to manage abnormal distributions [S21, S27]. * Data type conversion to fix incorrect format [S33]. * Feature scaling/normalization to standardize attribute ranges [S18, S27-28]. * Feature selection techniques to remove redundancy features [S6, S9-10, S13, S18-21, S25]. * De-duplication to remove replicate samples [S27]. These techniques facilitate downstream smell feature capture by pruning problematic samples and preparing clean, consistent data. Models can then better learn from focused, high-quality inputs. Notably, S[9] and S[13] have shown that careful feature selection is crucial for model accuracy. Further, S[20] and S[25] demonstrate that reducing feature redundancy not only improves model performance but also reduces computational demands. 
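Two of the solution steps discussed in this section—rebalancing with SMOTE and pruning redundant features with a chi-square test—can be sketched as follows. The synthetic metric data and the scikit-learn/imbalanced-learn tooling are assumptions made for illustration, not choices prescribed by the surveyed papers.

```python
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, chi2
from imblearn.over_sampling import SMOTE

# Imbalanced stand-in for a code metric dataset: roughly 10% "smelly" samples.
X, y = make_classification(n_samples=1000, n_features=30, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X = MinMaxScaler().fit_transform(X)          # chi2 requires non-negative features

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_bal))      # minority class oversampled to parity

X_sel = SelectKBest(chi2, k=10).fit_transform(X_bal, y_bal)
print(X_sel.shape)                           # 10 retained features
```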
Additionally, S[33] highlights the significant performance enhancements achievable through data standardization, affirming the critical role of these techniques in preparing high-quality datasets for deep learning. Further refinement of these cleaning practices remains an active area of research to develop high-quality CSD datasets. § RECOMMENDATION Figure <ref> summarizes our findings on key data preparation considerations, challenges, and potential solutions based on the literature review. This section aims to provide recommendations for researchers and practitioners guided by these results. Develop Datasets Across Languages and Sources. Most papers [S1-12, S15-33, S35] focus on a single programming language, limiting external generalizability and cross-language applicability. Only two papers [S14, S34] examined transfer learning across languages. Manual dataset creation also poses expertise and labor challenges. Automated generation relies on subjective tools restricting new smell detection. To address these, we recommend utilizing cross-project datasets leveraging multiple codebases. This reduces manual effort while enhancing the diversity of languages and smells represented. Researchers should also focus on expanding language support beyond the dominant Java studies to enable transfer learning assessments. We further suggest applying a two-phase pre-training and fine-tuning strategy. This marries the benefits of plentiful synthetic data for pre-training with refinement on real-world examples, improving generalizability. Standardize Cross-study Dataset. Lack of consistency hinders reproducibility and progress. Researchers should consider standardizing aspects like identifier naming, feature representations, and metadata tracking across publically available datasets. This will facilitate continued research efforts on cross-dataset challenges. Integrating dataset construction into end-to-end pipelines also helps. Many studies optimized individual preparation phases in isolation. Developing consolidated pipelines covering data sourcing, labeling, cleaning, and modeling could promote co-optimization of these interrelated tasks. Automating pipelines would further decrease manual overhead. In addition, we also recommend setting up centralized data repositories. Finding, preparing, and reusing existing datasets requires significant effort. Researchers should consider developing open centralized repositories and standardized metadata to overcome these barriers. Adopt Semi-automated Labeling Approaches. Dataset labeling plays a critical role in the preparation process. While manual labeling guarantees high accuracy, it requires significant time and expertise that risks scalability issues (i.e., expertise requirements, labor intensity). Meanwhile, fully automated labeling raises accuracy concerns, especially for complex smells. To address these challenges, we recommend increased utilization of semi-automated labeling approaches. Combining automated prior generation with expert validation, these hybrid methods capture the benefits of both worlds. They considerably reduce manual effort compared to pure manual labeling while maintaining relatively high-quality labels superior to fully automated techniques alone. This makes semi-automated labeling particularly suitable for training deep-learning models targeting complicated smell types. 
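The agreement check at the heart of this semi-automatic workflow can be illustrated with Cohen's kappa; the label vectors below are invented for the example.

```python
from sklearn.metrics import cohen_kappa_score

tool_labels   = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]   # 1 = smelly, 0 = clean
expert_labels = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]

kappa = cohen_kappa_score(tool_labels, expert_labels)
print(f"Cohen's kappa = {kappa:.2f}")  # values near 1 indicate strong inter-rater agreement
```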
Additionally, datasets constructed from publicly available sources may not precisely represent industrial contexts due to project-specific differences (i.e., external generalizability challenge). We thus suggest practitioners leverage semi-automated strategies to build customized, organization-focused datasets, improving application scenario reflection. Standardizing such hybrid workflows could advance research reproducibility and industrial adoption of detection solutions. Enhance Data Transparency and Sharing. Current studies face data privacy and replicability challenges. Some datasets solely rely on open-source repositories, introducing subjectivity issues. The inability to fully access or reproduce original private databases also limits validation. We recommend researchers clearly document their full data preparation process. This transparency allows others to comprehensively understand and replicate methodologies. Researchers should also utilize open data platforms to publicly share datasets while preserving privacy. Key metadata around sources and quality assurances enhances usability and trust for the research community. Establishing centralized repositories incentivizing data contributions could help amass more comprehensive benchmark resources. Engaging the industry collaboratively in curating realistic problem snapshots likewise benefits the field. Standardizing metadata schemas and licensing models promotes long-term data maintenance. Proper governance balances privacy, reproducibility, and continued community-driven progress in solving critical problems. Overall, data availability remains paramount for advancing this impactful research domain. Establish Data Quality Evaluation Standards. Dataset quality impacts the credibility and reproducibility of findings. As our review shows, some papers exhibit class imbalance, limited generalizability, noise, and redundancy. We recommend the research community develop a standardized set of data quality criteria. Metrics should assess the key attributes mentioned above and more. For example, balance criteria could specify acceptable sample size ratios between smells/non-smells. Diversity standards may require code drawn from different projects/domains to ensure representativeness. Noise and redundancy checks aim to flag and remove problematic samples. Researchers should systematically apply the proposed criteria when curating and publishing datasets. Studies could also quantitatively benchmark resources pre/post quality improvements to validate enhancement approaches. Establishing clear quality baselines empowers comparative assessment and continued refinement. It promotes replication by formalizing important methodological aspects. Overall, endorsed evaluation practices can help judiciously manage preparation trade-offs and advance the field through high-confidence shared resources. Potentials of Large Language Models. Large Language Models (LLMs) exhibit excellent zero-shot and in-context learning capabilities, can automatically leverage large-scale data, and are applicable across multiple domains <cit.>. In software engineering, LLMs are increasingly recognized for their utility in various applications. For instance, in vulnerability detection, LLMs assist in identifying potential security risks <cit.>, while in code completion, they facilitate the automatic completion of code fragments based on contextual cues <cit.>. 
Specifically, in the realm of code smell detection, LLMs have the potential to identify common code smell patterns by learning from extensive code samples. This capability can aid programmers in detecting and rectifying issues more efficiently. LLMs offer adaptability across different programming languages and project scales compared to existing techniques. Despite the current absence of research directly combining LLMs with CSD, the potential for LLMs to reduce data labeling workloads and tackle data scarcity challenges significantly is compelling. This makes them a promising technology for addressing the needs of high-quality CSD datasets. Advancing CSD with Proven Data Techniques. Various studies within the software engineering field highlight the importance of robust data preparation. <cit.> discuss the challenges of data imbalance and labeling noise, common to our own findings in Code Smell Detection (CSD). They advocate for using class rebalancing and specific data cleaning techniques, such as removing blank lines, non-ASCII characters, comments from code, and duplicate code instances. A unique approach they propose, which differs from our current methodology, is the replacement of user-defined variables and function names with generic tags to reduce code noise further. This strategy could potentially enhance the generalizability of our CSD models by minimizing overfitting to specific code styles or developer idiosyncrasies. According to <cit.>, a classification of the dataset based on data types—such as code data and metric data—is essential for maintaining relevance and accuracy in data analysis. For code data, it is necessary to filter out irrelevant elements while preserving valuable source code and to remove duplicated instances that can skew the analysis. For metric data, normalization is crucial when values span different orders of magnitude, to ensure that no single metric disproportionately influences the model. As explored by <cit.>, the development of automatic data cleaning tools represents a significant advancement in handling noisy data within software engineering projects. These tools utilize heuristic rules to automatically identify and remove common issues such as empty functions and duplicated code. Incorporating such tools into our data preparation process could streamline our workflows and improve the quality of our datasets, ultimately leading to more reliable CSD detection models. We believe that integrating cross-disciplinary techniques could improve the overall effectiveness and robustness of CSD models. § THREATS TO VALIDITY As with any systematic literature review, this study faces potential threats to validity that could influence results and conclusions. We classify threats based on construct, internal, external, and conclusion validity <cit.>. While not exhaustive, reporting these threats promotes transparency. The study's context is DL-based code smell detection techniques to establish a foundation for methodological discussions over the specified period - general conclusions require corroborating evidence. We aim to contextualize results by openly reporting on these threats and mitigation efforts. Construct Validity is about the connection between the research hypothesis and the findings associated with the RQs. Threats about this category are usually related to the RQs, search strategy, and paper selection process. To mitigate this threat, we create a comprehensive paper selection strategy. 
Specifically, first, we use the search strings and their alternative spellings and synonyms to ensure that most of the key papers are retrieved. For example, some papers do not necessarily include the term deep learning in the title, abstract, or keywords, so we may use the names of the techniques (e.g., RNN or CNN) to ensure that the search string is comprehensive. Second, we create a paper quality assessment strategy to ensure retrieved papers satisfy the RQs. Finally, we employ the snowballing <cit.> approach to further obtain any relevant research that may have been missed. Internal Validity is related to the consistency of research findings. In this survey, it mainly affects the results of the paper quality assessment and data extraction. To mitigate this threat, we collaborate with multiple authors to reduce subjectivity in the quality assessment and data extraction processes. Specifically, during the quality assessment phase, one author assesses the quality of all papers, and two other authors validate the results; any disagreements are discussed and resolved. Also, one author extracts the data during data extraction, and the other authors validate the extracted data for all the papers. The data are compared, and any conflicts are discussed and resolved. External Validity is related to the generalizability of the reported results. This survey focuses on only one area of software engineering - the data preparation process for DL-based CSD. Therefore, the results cannot be generalized to other areas. Furthermore, deep learning is a rapidly evolving field with new techniques being introduced every day <cit.>. Our survey covers papers published up to December 2023, and the results may not apply outside this time frame. Conclusion Validity is related to the likelihood of reproducing the research and obtaining the same results. To mitigate this threat, we describe the entire research process in detail, including the RQs, the search string, the inclusion/exclusion criteria, the quality assessment form, and the data extraction form. In addition, our findings, i.e., considerations, challenges, and solutions, are based on data extracted from the original papers. We ensure the integrity of our survey and the reliability of our findings through rigorous paper selection. § CONCLUSION Through a systematic analysis of 36 relevant studies on data preparation approaches for deep learning-based code smell detection up to December 2023, our survey illuminates key aspects of the data preparation process. We explore considerations across the data requirements, collection, labeling, and cleaning phases, as well as prevalent challenges and proposed solutions from the literature. Key challenges identified include data scarcity, limited generalizability, difficulties in labeling, data imbalance, and redundancy issues. The proposed solutions focus on leveraging cross-project data, two-phase data utilization, semi-automated labeling, resampling, and data cleaning techniques. Based on these results, our primary recommendations are for researchers to establish standardized practices around dataset quality assessment, transparency, and centralized resources. We also recommend techniques for practitioners to construct industrial-strength datasets representative of real-world codebases. This systematic review provides a foundation for rigorously evaluating CSD data preparation efforts.
Adopting its recommendations should foster continued optimization towards better detection capabilities with real-world application potential. Limitations and Future Work To address the limitations inherent in a systematic literature review, we acknowledge the absence of empirical validation of the recommendations made in this paper. While our study provides a comprehensive synthesis of the available literature on data preparation processes for deep learning-based code smell detection, the conclusions drawn remain hypothetical. Recognizing this, we commit to staying updated on the latest developments in DL-based CSD. We plan to conduct practical experiments and detailed case studies for empirical validation in future studies. Such empirical evidence will be crucial to substantiate the effectiveness of the proposed solutions and further advance the field.
http://arxiv.org/abs/2406.17981v1
20240625233600
A Split Fast Fourier Transform Algorithm for Block Toeplitz Matrix-Vector Multiplication
[ "Alexandre Siron", "Sean Molesky" ]
math.NA
[ "math.NA", "cs.NA", "65-02, 65-04" ]
Department of Engineering Physics, Polytechnique Montréal, Montréal, Québec H3T 1J4, CAN § ABSTRACT Numeric modeling of electromagnetics and acoustics frequently entails matrix-vector multiplication with block Toeplitz structure. When the corresponding block Toeplitz matrix is not highly sparse, e.g. when considering the electromagnetic Green function in a spatial basis, such calculations are often carried out by performing a multilevel embedding that gives the matrix a fully circulant form. While this transformation allows the associated matrix-vector multiplication to be computed via Fast Fourier Transforms (FFTs) and diagonal multiplication, generally leading to dramatic performance improvements compared to naive multiplication, it also adds unnecessary information that increases memory consumption and reduces computational efficiency. As an improvement, we propose a lazy-embedding, eager-projection algorithm that, for dimensionality d, asymptotically reduces the number of needed computations ∝ d/(2 - 2^{-d+1}) and peak memory usage ∝ 2/((d+1)2^{-d} + 1), generally, and ∝ (2^d + 1)/(d + 2) for fully symmetric or skew-symmetric systems. The structure of the algorithm suggests several simple approaches for parallelization of large block Toeplitz matrix-vector products across multiple devices and adds flexibility in memory and task management. A Split Fast Fourier Transform Algorithm for Block Toeplitz Matrix-Vector Multiplication Sean Molesky July 1, 2024 ======================================================================================== Translationally invariant physics, when considered in a spatial basis, leads to matrix representations with (multi-level) block Toeplitz matrix structure, Fig. <ref>. Through scattering theory—wherein a complex physical system is described in terms of its free space properties and “scattering” effects caused by previously unaccounted for interactions <cit.>—the need to efficiently compute block Toeplitz matrix-vector products accordingly arises in a variety of physical contexts <cit.>. For example, block Toeplitz matrices appear frequently in simulation and device design problems for electromagnetic <cit.> and acoustic waves <cit.>, as well as the analysis of multi-dimensional discrete random processes <cit.>. More directly, in many of these settings (e.g. the simulation of nanophotonic devices utilizing Green functions <cit.>) a sort of scattering phenomenon is efficiently described by a highly sparse operation (e.g. the added interaction is local in space) and the primary bottleneck of iterative methods is the speed of block Toeplitz matrix-vector multiplication <cit.>. The standard method of computing block Toeplitz matrix-vector products—motivated by a number of perspectives <cit.>—is to perform a circulant embedding for each level of the Toeplitz structure. That is, the Toeplitz matrix-vector method of “extending the diagonal bands”,
$$
\begin{pmatrix}
t_{11} & t_{12}\\
t_{21} & t_{11}
\end{pmatrix}_{n \times n\ \mathrm{Toeplitz}}
\;\longrightarrow\;
\left(\begin{array}{cc|cc}
t_{11} & t_{12} & s_0 & t_{21}\\
t_{21} & t_{11} & t_{12} & s_0\\ \hline
s_0 & t_{21} & t_{11} & t_{12}\\
t_{12} & s_0 & t_{21} & t_{11}
\end{array}\right)_{2n \times 2n\ \mathrm{circulant}},
$$
is carried out iteratively. Through this procedure, the full matrix acquires a circulant form, and multiplication with the resulting embedded vector is carried out via the FFT approach to convolution <cit.>. Notably, each embedding doubles the size of the corresponding Toeplitz level, and hence the overall size of the system, by the inclusion of additional zero coefficients (denoted s_0 above): taking d as the number of levels, the embedding results in a vector space that is 2^d times as large.
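For reference, the d = 1 case of this standard embedded product takes only a few lines; the NumPy sketch below is an illustration of the classical approach (the implementations benchmarked later in the paper are written in Julia).

```python
import numpy as np

def toeplitz_matvec_embed(col, row, x):
    """Multiply an n x n Toeplitz matrix (first column `col`, first row `row`)
    by `x` using the standard 2n circulant embedding and FFTs."""
    n = len(x)
    # First column of the 2n x 2n circulant: [col, 0, reversed trailing row entries].
    c = np.concatenate([col, [0.0], row[:0:-1]])
    x_pad = np.concatenate([x, np.zeros(n)])           # embedded (zero-padded) vector
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x_pad))
    return y[:n]                                       # projection back to length n

# Check against dense multiplication.
rng = np.random.default_rng(0)
n = 6
col, x = rng.normal(size=n), rng.normal(size=n)
row = np.concatenate([[col[0]], rng.normal(size=n - 1)])
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)] for i in range(n)])
assert np.allclose(T @ x, toeplitz_matvec_embed(col, row, x).real)
```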
These padding coefficients occupy an increasingly large fraction of the total system information as the dimension (number of levels) of the Toeplitz structure increases [Fig. <ref>], leading to inefficient memory utilization and computation. Two major categories of alternatives to this procedure have been previously suggested. The first class focuses on how block Toeplitz matrix-vector multiplication happens mechanistically, examining how successive coefficient reorderings may be used to minimize the number of arithmetic operations <cit.>. These methods, to the best of our understanding, have yet to offer substantial speed improvements over the usual FFT approach for large systems, and cannot be simply implemented with optimized functions from standard libraries. The second class focuses on the acceleration of block Toeplitz (or Toeplitz) matrix-vector multiplications through the use of matrix or tensor product decompositions, e.g. via Vandermonde matrices <cit.>, QTT tensor formatting <cit.>, or trigonometric transformation <cit.>. While extremely powerful in certain settings <cit.>, these approaches generally require some degree of approximation, are more computationally expensive than circulant embedding for arbitrary vectors, and rely on specialized code bases. Similarly, proposals to improve contemporary FFT algorithms, either under specific conditions <cit.> or by providing a functional equivalent <cit.>, are, at present, only beneficial for sparse vectors. However, if progress is made in this area, it should be directly applicable to the algorithm presented below. Here, recognizing that the data layout of a block Toeplitz structure allows for the ordering of embedding, Fourier transformation, and projection to be partially interchanged, we present an algorithm that performs block Toeplitz matrix-vector multiplication by dividing each dimension into a pair of branches. In this “divide and conquer” approach, circulant embeddings are replaced by the creation of phase-modified vector copies containing even and odd Fourier coefficients (II.A). This change enables a lazy evaluation strategy that can substantially reduce memory pressure, ∝ (2^d + 1)/(d + 2) in the case of electromagnetics, and operational complexity. Moreover, the partitioned nature of the algorithm presents several possibilities for tunable parallelization to increase computational speed, or handle large block Toeplitz systems. An accompanying implementation used for all presented results, written in Julia, is available from https://github.com/alsirc/SplitFFT_lazyEmbed. As the procedure is straightforward (II.B), and FFT dominated, similar performance should be simply achievable in any major scientific computing language. § ALGORITHM Conceptually, the proposed algorithm is based on an observation about how the padding elements of a given level of embedding are incorporated into subsequent FFTs. Working from the finest level of Toeplitz structure (numbers) to the coarsest (outer matrix blocks), the added zeros are only mixed with the input vector information once the FFT of the particular level has been completed. Analogously, as soon as the inverse FFT of a particular level has been carried out, half of the elements no longer influence the final matrix-vector product: they are obviated by subsequent projection, Fig. <ref> right. As such, by postponing each embedding step until it is necessary, and performing each projection as soon as possible, required memory and computation can be reduced.
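The key identity exploited by this reordering (formalized in Sec. II.A below) is easy to check numerically. The sketch uses NumPy's FFT sign convention, so the phase factor appears as exp(-iπk/n); the paper's convention may differ by a conjugate.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
v = rng.normal(size=n) + 1j * rng.normal(size=n)

full = np.fft.fft(v, 2 * n)                       # FFT of the embedded (zero-padded) vector
phase = np.exp(-1j * np.pi * np.arange(n) / n)    # diagonal phase-shift operator P

assert np.allclose(full[0::2], np.fft.fft(v))          # even coefficients need no padding
assert np.allclose(full[1::2], np.fft.fft(phase * v))  # odd coefficients come from the phase-shifted copy
```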
The properties described under subsection A indicate how lazy (postponed) embedding and eager (as soon as possible) projection are achieved by a division of the input vector at each level into “original” and phase-modified “child” copies that carry even and odd Fourier coefficients. Via this branching structure, the overall matrix-vector operation is partitioned into smaller calculations, which can be leveraged to reduce memory pressure—information is only calculated once it is needed to continue evaluation of the topmost (all-even) branch, Fig. <ref>. The structure also offers possibilities for additional parallelization. For instance, in a multi-device setting, the linearity of the Fourier transform could be used to perform parallel matrix-vector multiplication in a number of ways: a branch may either communicate and merge, decreasing overall computation, or continue to some other predefined termination point, decreasing communication overhead. To handle larger systems, the decomposition structure of the Fourier transform could be used to further divide the number of coefficients treated at any one time. That is, $\mathbf{v}$ could be pre-partitioned into nearly independent even and odd parts. §.§ Splitting and merging of vector coefficients At each level, splitting into even and odd indices is accomplished using the following property. Let $\mathbf{v}$ be a vector of size $n$. Define $\tilde{\mathbf{v}} = [\mathbf{v}, 0]$, a vector of size $2n$, and $P$ as the diagonal phase shift operator, $P = \mathrm{diag}[1,\dots,e^{i\pi k/n},\dots,-1]$. $\mathcal{F}(\mathbf{v})$ and $\mathcal{F}(P\mathbf{v})$ are then, respectively, the even and odd coefficients of $\mathcal{F}(\tilde{\mathbf{v}})$, where $\mathcal{F}$ denotes the Fourier transform. Let $v_k$ denote the coefficients of $\mathbf{v}$, and $\tilde{v}_k$ the coefficients of $\tilde{\mathbf{v}}$. Then
$$\mathcal{F}(\tilde{\mathbf{v}}) = \left(\frac{1}{2n}\sum_{k=0}^{2n-2}\tilde{v}_k\,\tilde{\omega}^{lk}\right)_{l=0,\dots,n-1}, \quad \text{where } \tilde{\omega} = \exp(-i\pi/n),$$
and
$$\mathcal{F}(\mathbf{v}) = \left(\frac{1}{n}\sum_{k=0}^{n-1} v_k\,\omega^{lk}\right)_{l=0,\dots,n-1} = \left(\frac{1}{n}\sum_{k=0}^{n-1} v_k\,\tilde{\omega}^{2lk}\right)_{l=0,\dots,n-1}, \quad \text{where } \omega = \exp(-2i\pi/n).$$
Similarly,
$$\mathcal{F}(P\mathbf{v}) = \frac{1}{\sqrt{n}}\left(\frac{1}{\sqrt{n}}\sum_{k=0}^{n-1}\omega^{lk}\,e^{i\pi k/n}\,v_k\right)_{l=0,\dots,n-1} = \frac{1}{n}\left(\sum_{k=0}^{n-1}\tilde{\omega}^{(2l+1)k}\,v_k\right)_{l=0,\dots,n-1}.$$
Therefore,
$$\mathcal{F}(\tilde{\mathbf{v}}) = \mathcal{F}(\tilde{\mathbf{v}})_e + \mathcal{F}(\tilde{\mathbf{v}})_o = \mathcal{F}(\mathbf{v}) + \mathcal{F}(P\mathbf{v}).$$
Through this separation, the circulant embedding operation at a given level of the block Toeplitz structure is transformed into a branching operation, with each branch retaining the size of the initial vector. After the diagonal multiplication step, branches are merged in reverse order—the projection counterpart to embedding. Using the equivalence presented above, accounting for the difference in relative normalization factors, merging is accomplished via the formula
$$\mathbf{v} = \mathcal{F}^{-1}\!\left(\tfrac{1}{2}\left(\mathcal{F}(\mathbf{v}_{\mathrm{even}}) + P\,\mathcal{F}(\mathbf{v}_{\mathrm{odd}})\right)\right).$$
§.§ Procedure Making use of the above properties, the proposed algorithm follows the pseudo-code presented under Algorithm 1. Additional details on the Julia implementation available via https://github.com/alsirc/SplitFFT_lazyEmbed are given in the appendix. The protocol proceeds recursively by nested calls to the toeMulBrn function, which organizes the splitting, merging, and possible diagonal multiplication operations that occur on a specific level, or dimension, d. As visualized in Fig. <ref>, the first task completed by toeMulBrn is to compute the FFT along the current level. toeMulBrn then creates a phase-shifted copy of the transformed vector by applying the sptBrn function, and launches two progeny branches, respectively containing even and odd Fourier coefficients. Overall coordination is ensured by passing each of these calls a branch identifier, bId: the even branch inherits the identifier of its caller, while the odd branch is assigned a new identifier by nxtId.
The function then waits for these calls to return, merges their results using mrgBrn, and performs an inverse FFT, IFFT, before returning. If the maximal level of block Toeplitz structure is reached, d = d_max, then toeMulBrn launches no further calls, and instead computes the diagonal multiplication of its given vector with the appropriate Toeplitz data, T[bId], via mulBrn. toeMulBrn then computes the inverse FFT, IFFT, and returns. §.§ Operational complexity and memory usage Consider a d-dimensional vector v of total size s = n^d where d > 0. As before, let ṽ be the fully embedded image of v (along all dimensions)—the total length of ṽ is 2^d s. The complexity of an FFT as used in a level of Alg. <ref> is m_2 log_2(m_1), where m_1 is the total length of the vector to be transformed and m_2 is the product of its sizes along the dimensions to be transformed <cit.>. The complexity of the standard circulant embedding method is therefore
$$C_{\mathrm{embed}} = \underbrace{2^{d+1} s \log_2(2^d s)}_{\text{FFTs}} + \underbrace{2^d s}_{\text{multiplication}}.$$
At the lth level, 1 < l < d, of Alg. <ref> there are 2^l forward FFTs and 2^l inverse FFTs, each along one dimension. Including the additional diagonal multiplications involved with the phase shifting and multiplication with the Toeplitz data, the operational complexity of Alg. <ref> is thus
$$C_{\mathrm{split}} = \underbrace{2\sum_{l=1}^{d} 2^l s \log_2(n)}_{\text{FFTs}} + \underbrace{2^d s}_{\text{multiplication}} + \underbrace{2\sum_{l=0}^{d-1} 2^l s}_{\text{phase shift}} = 2(2^d - 1)s\left(2\log_2(n) + 1\right) + 2^d s.$$
Take R_c = C_embed/C_split to be the operational complexity ratio between the two methods. Combining the above,
$$R_c = \frac{2^{d+1} s \log_2(2^d s) + 2^d s}{2(2^d - 1)s\left(2\log_2(n) + 1\right) + 2^d s} = \frac{d\log_2(2n) + 1}{(1 - 2^{-d})\left(2\log_2(n) + 1\right) + 1},$$
so that R_c → d/(2 - 2^{-d+1}) as n → ∞. In terms of memory usage, the standard circulant method requires an overall doubling in vector size for each level (dimension) of block Toeplitz structure. The memory required for both the Toeplitz data and input vector is, consequently, M_{v,embed} = 2^d s = M_{T,embed}, with s as above. For Alg. <ref>, a lazy evaluation strategy leads peak memory usage to occur when the first branch (top of Fig. <ref>) is processed up to multiplication with its corresponding Toeplitz data, leading to a peak allocation of M_{v,split} = (d+1)s elements. For the Toeplitz information, the full embedded vector of Fourier transformed data is split into 2^d parts of s coefficients each, so M_{T,split} = 2^d s. Comparing these results, the peak memory ratio R_m of the two approaches, in general, is
$$R_m = \frac{M_{v,\mathrm{embed}} + M_{T,\mathrm{embed}}}{M_{v,\mathrm{split}} + M_{T,\mathrm{split}}} = \frac{2^{d+1} s}{(d+1)s + 2^d s} = \frac{2}{(d+1)2^{-d} + 1}.$$
Because electromagnetics, acoustics, and quantum mechanics typically exhibit reciprocity between sources and receivers <cit.>, there are a number of settings of physical interest wherein system matrices are both block Toeplitz and block symmetric (resp. block skew-symmetric). In such cases, the Fourier transform of the vector containing the generating elements of the associated Toeplitz matrix is mirror symmetric (resp. skew-symmetric) about the middle index of each dimension, and thus can be fully stored in a vector of size s. In such cases, the peak memory ratio between the two methods becomes
$$R_{m,\mathrm{sym}} = \frac{2^d + 1}{d + 2}.$$
More precisely, if 𝐯 is a vector with coefficients (v_k)_{k=0,…,n-1} such that v_i = v_{n-i} for i = 1,…,n-1, as happens in the case of a symmetric block Toeplitz matrix, then its Fourier coefficients 𝐟 exhibit the symmetry f_i = f_{n/2+i}. Hence, only half the coefficients must be stored.
Inductively, an equivalent reduction occurs at each level, and a single vector's worth of Toeplitz data is required. The antisymmetric case is analogous, as v_i = -v_{n-i} implies that f_{i+n/2} = -f_i - 2f_0. Example of coefficient symmetry for two levels:
$$\begin{pmatrix}
f^{00} & f^{01} & f^{02} & f^{03} & 0 & f^{03} & f^{02} & f^{01}\\
f^{10} & f^{11} & f^{12} & f^{13} & 0 & f^{13} & f^{12} & f^{11}\\
f^{20} & f^{21} & f^{22} & f^{23} & 0 & f^{23} & f^{22} & f^{21}\\
f^{30} & f^{31} & f^{32} & f^{33} & 0 & f^{33} & f^{32} & f^{31}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
f^{30} & f^{31} & f^{32} & f^{33} & 0 & f^{33} & f^{32} & f^{31}\\
f^{20} & f^{21} & f^{22} & f^{23} & 0 & f^{23} & f^{22} & f^{21}\\
f^{10} & f^{11} & f^{12} & f^{13} & 0 & f^{13} & f^{12} & f^{11}
\end{pmatrix}$$
§ RESULTS As analytic complexity calculations must necessarily overlook many, potentially influential, practical implementation details, we directly measure the relative performance of the split and standard circulant embedding algorithms by performing block Toeplitz matrix-vector multiplication for various dimension and vector sizes. For convenience, the length of each dimension is assumed to be consistent, but this is not a limitation of either method (or code). Our presented results focus on lengths corresponding to powers of 2, as FFTs are most efficient for these lengths. Plots of cases where the sizes are products of primes and plots of consecutive natural numbers are available in the appendix. Versions of the presented plots with uncertainty bars corresponding to the statistical variance of the time computation are also in the appendix. All results were obtained through local computation on a laptop with 16 GiB of RAM, and an AMD® Ryzen 7 5800hs processor. Although the Julia implementation can be run on GPU, for simplicity, only parallelized, multi-threaded, CPU computations are compared. The coefficients used for the Toeplitz data vectors and the input vector are complex numbers with random real and imaginary parts with mean value equal to 0 and a parametrizable variance. The number of points for each plot is limited by the memory capacity of the machine used for the computation. The results obtained do not clearly show the tendencies predicted by operational complexity calculations. Namely, Alg. <ref> is generally faster than expected, and only converges to the predictions of Eq. (<ref>) as the memory limit of the test machine is reached. The broad uncertainty observed (see appendix) suggests that the statistical data captured by BenchmarkTools.jl is independent and uncorrelated. These differences in measured performance, compared to Eq. (<ref>), and the large variance of results, may be caused by several factors. First, all FFTs are computed using the FFTW package <cit.>, which utilizes different methods depending on the structure of the vector to be transformed, and has an internal thread management system that is not controlled by Julia. For a fixed input vector, the two methods perform FFTs of different sizes, and smaller sizes are known to be comparatively faster than expected based on theoretical complexity <cit.>. Second, the standard circulant embedding method employs full thread parallelization on fully embedded vectors at all steps. Conversely, Alg. <ref> consistently applies thread parallelization at the size of the input vector and makes greater use of the Julia task scheduler. For small numbers of tasks, and small loops, launch overhead becomes a meaningful percentage of the benchmarks. Parallelization of small loops can also lead to highly variable memory writes and reads, which could explain the spread of measured ratios.
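Putting the pieces together for a single level, the following NumPy sketch performs the Toeplitz product with lazy embedding and eager projection: no length-2n vector is ever formed, and only the first n entries of the inverse transform are assembled. This is only an illustration of the d = 1 case under NumPy's FFT sign convention, not the authors' multi-threaded Julia implementation.

```python
import numpy as np

def toeplitz_matvec_split(col, row, x):
    """One-level split product T @ x without forming the embedded length-2n vector."""
    n = len(x)
    lam = np.fft.fft(np.concatenate([col, [0.0], row[:0:-1]]))  # circulant eigenvalues (Toeplitz data)
    phase = np.exp(-1j * np.pi * np.arange(n) / n)              # NumPy sign convention
    f_even = np.fft.fft(x)           # even Fourier coefficients of the (implicit) padded vector
    f_odd = np.fft.fft(phase * x)    # odd Fourier coefficients
    y_even = np.fft.ifft(lam[0::2] * f_even)
    y_odd = np.fft.ifft(lam[1::2] * f_odd)
    # Eager projection: merge only the first n entries of the length-2n inverse.
    return 0.5 * (y_even + np.conj(phase) * y_odd)

rng = np.random.default_rng(2)
n = 7
col, x = rng.normal(size=n), rng.normal(size=n)
row = np.concatenate([[col[0]], rng.normal(size=n - 1)])
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)] for i in range(n)])
assert np.allclose(T @ x, toeplitz_matvec_split(col, row, x).real)
```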
§ OUTLOOK In this article, we have described how the standard circulant embedding method for multi-level block Toeplitz matrix-vector multiplication can be improved by reordering embeddings and projections with respect to their associated Fourier transformation level. Concretely, by utilizing a lazy embedding / eager projection strategy, for d levels of Toeplitz structure, we have shown that operational complexity can be reduced by a factor of d/(2 - 2^{-d+1}), and peak memory usage by a factor of 2/((d+1)2^{-d} + 1). As the size of the matrix-vector product tends to infinity, for 3 levels of structure, these ratios tend to 12/7 and 4/3 respectively. For a fully symmetric (resp. skew-symmetric) matrix, peak memory reduction improves to a factor of (2^d+1)/(d+2)—9/5 for d=3. In simulation, the wall clock time ratio between the two methods is better than theoretical expectations in almost all cases. The proposed algorithm also suggests several ways in which additional parallelization could be introduced to handle the large scattering problems that occur in acoustics and electromagnetics. In fact, a slightly modified version of the (GPU enabled) Julia implementation is already being used in the GilaElectromagnetics package (https://github.com/moleskySean/GilaElectromagnetics.jl), currently under development by the authors. The algorithm relies only on (highly optimized) standard library functions, and should be easily adaptable to almost any relevant context. § ACKNOWLEDGMENTS The authors thank Justin Cardona and Paul Virally for their careful reading and feedback on this article, and Paul Virally for his assistance in configuring various packages. This work was supported by the Québec Ministry of Economy and Innovation through the Québec Quantum Photonics Program, the Natural Sciences and Engineering Research Council of Canada via Discovery Grant RGPIN-2023-05818, and the Canada First Research Excellence Fund through its collaboration with the Institut de Valorisation des Données (IVADO). ref1 C. Jelich, M. Karimi, N. Kessissoglou, S. Marburg, "Efficient solution of block Toeplitz systems with multiple right-hand sides arising from a periodic boundary element formulation", Engineering Analysis with Boundary Elements, vol. 130, pp. 135-144, 2015. ref2 S. A. Goreinov, D. V. Savostyanov, E. E. Tyrtyshnikov, "Tensor and Toeplitz Structures Applied to Direct and Inverse 3D Electromagnetic Problems", PIERS Proceedings, Institute of Numerical Mathematics, Russian Academy of Sciences, 2009. ref3 B. E. Barrowes, F. L. Teixeira and J. A. Kong, Fast Algorithm for matrix-vector multiply of asymmetric multilevel block Toeplitz matrices in 3D scattering. Massachusetts, U.S., Massachusetts Institute of Technology, 2001. ref4 M. Karimi, P. Croacker, N. Kessissoglou, "Boundary element solution for periodic acoustic problems", Journal of Sound and Vibration, School of Mechanical and Manufacturing Engineering, UNSW Australia, Sydney, Australia, 2015. ref5 P. A. Voois, "A Theorem on the Asymptotic Eigenvalue Distribution of Toeplitz-Block-Toeplitz Matrices", IEEE Transactions on Signal Processing, vol. 44, no. 7, 1996. ref6 P.J.S.G. Ferreira, M. E. Domínguez, "Trading-off matrix size and matrix structure: Handling Toeplitz equations by embedding on a larger circulant set", Digital Signal Processing, vol. 20, pp. 1711-1722, 2010. ref7 D. Lee, "Fast Multiplication of a Recursive Block Toeplitz Matrix by a Vector and its Application", Journal of Complexity, vol. 2, pp. 295-305, 1986. ref8 A. Carriow and M.
Gliszczyński, The Fast algorithms to compute matrix-vector products for Toeplitz and Hankel matrices. Szczecin, PL, West Pomeranian University of Technology, 2012. ref9 I. Gohberg, V. Olshevsky, "Fast Algorithms with Preprocessing for Matrix-Vector Mutliplication Problems", Ramat Aviv, Israel, Tel Aviv University, Journal of Complexity, Vol.10, pp. 411-427, 1993. ref10 V. A. Kazeev, B. N. Khoromskij, E. E. Tyrtyshnikov, "Multilevel Toeplitz matrices generated by tensor-structured vectors and convolution with logarithmic complexity", Max-Plank-Institute für Mathematik, Naturwissenshaften Leipzig, 2011. ref11 G. Heinig, R. Rost, "Representations of Toeplitz-plus-Hankel matrices using trigonometric transformations with application to fast matrix-vector multiplication", Linear Algebra and its Applications, Vol. 275-276, pp. 225-248, 1998. ref12 X. Wen, M. Sandler, "Calculation of radix-2 discrete multiresolution Fourier transform", Signal Processing, vol. 87, no. 10, pp. 2455-2460, 2007. ref13 Z. Hu, H. Wan, "A Novel Generic Fast Fourier Transform Pruning Technique and Complexity Analysis", IEEE Transaction on signal processing, vol. 53, no. 1, 2005. ref14 J.D. Jackson, Electrodynamics, The Optics Encyclopedia (eds T.G. Brown, K. Creath, H. Kogelnik, M.A. Kriss, J. Schmit and M.J. Weber), 2007. https://doi.org/10.1002/9783527600441.oe014 ref15 P. D. Lax, R. S. Phillips, Scattering Theory, Academic Press, Inc., 1990. ref16 T. Søndergaard, "Modeling of plasmonic nanostructures: Green's function integral equation methods", Physica Status Solidi, vol. 244, Issue 10, 2007. ref17 M. Frigo, S. G. Johnson, "The Design and Implementation of FFTW3", Proc. IEEE, vol. 93, no. 2, pp. 216–231, 2005. ref18 Y. Dotsenko, S. S. Baghsorkhi, B. Lloyd, N. K. Govindaraju, "Auto-tuning of fast fourier transform on graphics processors", ACM SIGPLAN Notices, vol. 46, no. 8, pp. 257-266, 2011. ref19 A. Chai, M. Moscoso, G. Papanicolaou "Imaging strong localized scatterers with sparsity promoting optimization", SIAM Journal on Imaging Sciences, vol. 7, no. 2, pp. 1358-1387, 2014.
http://arxiv.org/abs/2406.18780v1
20240626224705
Investigation on centrality measures and opinion dynamics in two-layer networks with replica nodes
[ "Chi Zhao", "Elena Parilina" ]
physics.soc-ph
[ "physics.soc-ph", "cs.DS", "cs.SI", "90B15, 90B18, 90C40, 05C90, 68R10" ]
Investigation on centrality measures and opinion dynamics in two-layer networks with replica nodes Chi Zhao, Elena Parilina July 1, 2024 ================================================================================================================================================ § ABSTRACT We examine two-layer networks and centrality measures defined on them. We propose two fast and accurate algorithms to approximate the game-theoretic centrality measures and examine the connection between centrality measures and characteristics of opinion dynamic processes on such networks. As an example, we consider Zachary's karate club social network and extend it by adding a second (internal) layer of communication. The internal layer represents the idea that individuals can share their real opinions with their close friends. The structures of the external and internal layers may be different. By characteristics of opinion dynamic processes we mean the consensus time and the winning rate of a particular opinion. We find a significantly strong positive correlation between internal graph density and consensus time, and a significantly strong negative correlation between the centrality of authoritative nodes and consensus time. § INTRODUCTION There are macroscopic and microscopic opinion dynamics models. Macroscopic models, including the Ising model <cit.>, the Sznajd model <cit.>, the voter model <cit.>, the concealed voter model (CVM) <cit.>, and the macroscopic version of the general concealed voter model (GCVM) <cit.>, examine social networks using statistical-physical and probability-theoretic methods to analyze the distribution of opinions. Within GCVM <cit.>, it is supposed that the individuals communicate in two layers (internal and external) and can interact in the internal, or private, layer. The latter assumption is different from CVM, where individuals do not express their true opinions in the internal layer. Examples of microscopic models of opinion dynamics are the DeGroot model <cit.>, the Friedkin-Johnsen (F-J) model <cit.>, and bounded confidence models <cit.>. In the F-J model, actors can also factor their initial prejudices into every iteration of opinion <cit.>. Models of opinion dynamics based on the DeGroot and F-J models and Markov chains, with the possibility to control agents' opinions, are proposed in <cit.>. The levels of influence and opinion dynamics in the presence of agents with different levels of influence are examined in <cit.>. A bounded confidence model (BCM) is a model in which agents ignore opinions that are very far from their own <cit.>. The BCM includes two essential models: the Deffuant-Weisbuch (D-W) model proposed in <cit.>, and the Hegselmann-Krause (H-K) model introduced in <cit.>. In the D-W model, two individuals are randomly chosen, and they determine whether to interact according to the bounded confidence <cit.>. The micro version of GCVM is introduced in <cit.>, and the difference between the macro and micro versions is that in the micro version we do not need to adjust the simulation program according to different network structures. As long as the network structure is given, the program automatically produces simulations. Therefore, we can use this program to simulate real networks. For the macro version, however, we should adjust the corresponding state transition formulae for different network structures. Since the network structure in GCVM is two-layer, it is interesting to examine how not only this structure in general, but also network characteristics, e.g.
different centrality measures <cit.>, affect opinion dynamics and the resulting opinion at consensus, if it is reached. We consider two key performance indicators of opinion dynamics, namely, the winning rate and the consensus time. Social power (influence centrality) is a concept for ranking the importance of nodes in a network. Centrality measures are used to identify the most powerful nodes in a network. Understanding which nodes are powerful is very important for opinion dynamics, since it helps us to know which nodes occupy crucial positions for spreading opinions. The most common centrality measures are betweenness centrality <cit.>, closeness centrality <cit.>, and degree centrality <cit.>. There are also centrality measures based on random walks, such as random walk occupation centrality <cit.>, which is the frequency with which a node in the network is visited during a random walk; random walk betweenness centrality <cit.>, the proportion of random-walk paths through a node among all paths—since it does not depend on shortest paths, it is more general than betweenness centrality; and random walk closeness centrality, a variant of closeness centrality <cit.> whose computation is based on the mean first-passage time (MFPT). Analytical expressions for random-walk-based centrality measures can be found in <cit.>. Game-theoretic network centrality is a flexible and sophisticated approach to identifying the most powerful nodes in a network; its roots are in cooperative game theory, and M. K. Tarkowski provides a good review of game-theoretic network centrality in <cit.>. The Shapley value <cit.> and the Myerson value <cit.> are both concepts from cooperative game theory used to fairly distribute the total payoff among players based on their marginal contributions. In <cit.>, the authors introduce how to use the Shapley value to determine the top-k nodes in a social network. Mazalov et al. proposed a game-theoretic centrality measure for weighted graphs based on the Myerson value in <cit.>. In <cit.>, Mazalov and Khitraya propose a modified Myerson value for unweighted undirected graphs. The characteristic function of this modification accounts not only for simple paths but also for cycles. A continuation of <cit.> is <cit.>, where the authors introduce the concept of integral centrality for unweighted directed graphs and provide a rigorous mathematical proof that this centrality measure satisfies the Boldi-Vigna axioms <cit.>. In this paper, we first extend our previous work in <cit.> by adding more centrality measures and examining the connection between centrality measures and opinion dynamics, and second, we propose two fast and accurate algorithms to approximate the game-theoretic centrality measures and examine them on a randomly generated network and on Zachary's karate club network. The primary conclusion drawn from this research is that there is a significantly strong positive correlation between internal graph density and consensus time, and a significantly strong negative correlation between the centrality of authoritative nodes and consensus time. Our proposed algorithms effectively approximate the Shapley and Myerson values in randomly generated networks with high accuracy. Additionally, our algorithms successfully determine the top-2 influential nodes in Zachary's karate club network. The rest of this paper is organized as follows. Section 2 introduces multi-layer networks with replica nodes.
Network properties are discussed in Section 3. Section 4 presents simulation experiments and results. We briefly conclude and discuss our future work in Section 5. § MULTI-LAYER NETWORK WITH REPLICA NODES §.§ Multi-layer network A multilayer network is a network formed by several networks that evolve and interact with each other <cit.>. §.§ Replica nodes In a multilayer network with replica nodes there is a one-to-one mapping of the nodes in different layers, and the corresponding nodes are called replica nodes. Since there is a one-to-one mapping between the nodes in different layers, every layer is formed by the same number of nodes <cit.>. §.§ Two-layer network with replica nodes We use the following notation to define a two-layer (external and internal) network with replica nodes: 1. N: number of nodes in each layer; 2. a_i=(a_i^E,a_i^I): one-to-one mapping of node i in the external and internal layer, where a_i^E (a_i^I) is the representation of node i in the external (internal) layer; 3. G_E(𝒱_E, ℰ_E): predefined external network, where 𝒱_E={a_i^E} and ℰ_E represent the set of individuals and the set of edges in the external layer; 4. G_I(𝒱_I, ℰ_I): predefined internal network, where 𝒱_I={a_i^I} and ℰ_I represent the set of individuals and the set of edges in the internal layer; 5. ℰ_C={(a_i^E,a_i^I)|i=1,…,N}: edges between replica nodes. A two-layer network with N individuals/agents can be defined as[This definition and the corresponding opinion dynamics models were first introduced in <cit.> and further discussed in <cit.> and <cit.>.]: G(𝒱,ℰ), where 𝒱=𝒱_E∪𝒱_I, |𝒱_E|=|𝒱_I|=N, and ℰ=ℰ_E∪ℰ_I∪ℰ_C. §.§ Two-layer network simplification G(𝒱,ℰ) is composed of G_E(𝒱_E, ℰ_E) and G_I(𝒱_I, ℰ_I) connected by ℰ_C. A two-layer network can also be represented by an adjacency matrix. The adjacency matrix of a two-layer network is a block matrix, where the diagonal blocks are the adjacency matrices of the external and internal layers, and the off-diagonal blocks are the adjacency matrices of the connections between the external and internal layers. The adjacency matrix of G(𝒱,ℰ) is shown in Equation <ref>: A = [ A_EE A_EI; A_IE A_II ], where A_EE is the adjacency matrix of the external layer, A_EI is the adjacency matrix of the connections from the external to the internal layer, A_IE is the adjacency matrix of the connections from the internal to the external layer, and A_II is the adjacency matrix of the internal layer. For an undirected graph, the adjacency matrix A is symmetric. It is possible to reduce a two-layer network G(𝒱,ℰ) to a one-layer weighted network. We define the fusion rule as follows: W'= π_c_e· A_EE + π_c_i· A_II + π_i·Λ_E + π_e·Λ_I, where Λ_E and Λ_I are diagonal matrices whose diagonal elements are the degrees of each node (in the external and internal layer, respectively). In this way we obtain a new weighted network G'(𝒱',ℰ',W'), where 𝒱'={1,2,⋯,N} is the set of nodes of the new network, ℰ' is the set of edges of the new network, and W' is its weight matrix. Based on the new weighted network G'(𝒱',ℰ',W'), we propose two game-theoretic centrality measures, which we discuss in Section <ref>. §.§ Zachary's karate club network in two-layer setting As an example of a social network, we consider Zachary's karate club network, representing friendship relations among 34 members of a karate club at a US university in the 1970s <cit.>.
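To make the simplification above concrete, the following Python sketch (using NetworkX and NumPy) builds a two-layer Zachary network — the karate club as external layer, a star as internal layer, replica edges between the two copies of each node — and collapses it into a one-layer weighted network with the fusion rule W' = π_c_e·A_EE + π_c_i·A_II + π_i·Λ_E + π_e·Λ_I. The numerical values of the mixing coefficients π and the helper names are illustrative assumptions; the text does not fix them here.

import networkx as nx
import numpy as np

def two_layer_karate(internal="star"):
    """Build the external (karate) and internal adjacency blocks with replica nodes."""
    G_ext = nx.karate_club_graph()
    n = G_ext.number_of_nodes()
    A_EE = nx.to_numpy_array(G_ext)
    G_int = nx.star_graph(n - 1) if internal == "star" else nx.empty_graph(n)
    A_II = nx.to_numpy_array(G_int)
    # Replica edges: node i in the external layer is linked to its copy i in the internal layer.
    A_EI = np.eye(n)
    # Full 2N x 2N block adjacency matrix of G(V, E).
    A = np.block([[A_EE, A_EI], [A_EI.T, A_II]])
    return A_EE, A_II, A

def fuse(A_EE, A_II, pi_ce=1.0, pi_ci=1.0, pi_i=0.5, pi_e=0.5):
    """Collapse the two-layer network to one weighted layer:
    W' = pi_ce*A_EE + pi_ci*A_II + pi_i*Lambda_E + pi_e*Lambda_I,
    where Lambda_E, Lambda_I hold the layer degrees on the diagonal.
    The pi values here are placeholders, not values from the text."""
    Lam_E = np.diag(A_EE.sum(axis=1))
    Lam_I = np.diag(A_II.sum(axis=1))
    return pi_ce * A_EE + pi_ci * A_II + pi_i * Lam_E + pi_e * Lam_I

A_EE, A_II, A = two_layer_karate("star")
W = fuse(A_EE, A_II)
G_w = nx.from_numpy_array(W)   # weighted one-layer graph G'(V', E', W')
print(W.shape, A.shape)        # (34, 34) (68, 68)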
The study became famous in data and network analytical literature since it highlighted a conflict between manager (Node 0) and director (Node 33), which eventually led to the split of the club into two groups. One-layer Zachary's karate club network is represented in Fig. <ref>. Fig. <ref> shows how one-layer Zachary's Karate Club network can be extended into two-layer network if we add an internal layer of communication between agents. As discussed above, in CVM the nodes in the internal layer are not connected, i.e. internal layer is represented by an empty graph (Fig. <ref>), while in GCVM there may be nonempty network representing internal communication of agents. In Fig. <ref> we represent a star internal structure. The colors in Fig <ref> represent individuals' opinions. The color, blue or red, is randomly initialized according to the given parameters (ρ=0.75 for BVM and ρ_r_e=0.75,ρ_r_i=0.25,ρ_r=0.2 for CVM and GCVM [ρ represent the ratio of red opinion in the BVM network, ρ_r_e,ρ_r_i and ρ_r stand for ratio of red opinion in external, internal and both layer network respectively.]). § CENTRALITY MEASURES IN ONE- AND TWO-LAYER NETWORKS In this section, we represent several centrality measures. Some of them are defined for one-layer networks and can be applied for two layers separately, some of them take into account the multi-layer structure of a network. We also introduce game-theoretical centrality measures and provide an algorithm to calculate their approximation when the network contains large number of nodes. We define the pairwise average shortest path for an external layer d_E in two-layer network as follows: d_E=∑_s,t∈𝒱_Ed_E(s,t)/n_E(n_E-1), where d_E(s,t) is a length of the shortest path between nodes s and t in external layer, 𝒱_E is a set of nodes in external layer, n=|𝒱_E| is a number of nodes in external layer. Similarly, we can define d_I as a pairwise average shortest path for an internal layer. A graph density is a ratio of the number of edges |ℰ| with respect to the maximal number of edges. Since internal layer is represented by an undirected graph, we define an internal graph density as in <cit.>: D_I=2|ℰ_I|/|𝒱_I|(|𝒱_I|-1). §.§ Classical Centrality measures In this section we briefly introduce some (most well-known) centrality measures defined for one-layer networks. §.§.§ Betweenness centrality Betweenness centrality is a basic concept in a network analysis, which was suggested in <cit.>. Betweenness centrality of a node gives the number of geodesics between all nodes that contain this node. It reflects the level of node participation in the dissemination of information between other nodes in a graph. It is calculated by the formula: C_b(v) = 1/n_b∑_s,t∈ Vσ_s,t(v)/σ_s,t, where σ_s,t indicates the number of shortest paths between nodes s and t, and σ_s,t(v) is the number of shortest paths between nodes s and t containing node v. Normalization coefficient is n_b=(|V|-1)(|V|-2) for v∉{s,t}, otherwise n_b=|V|(|V|-1), where |V| is the number of nodes in a one-layer network <cit.>. If s=t, σ_s,t=1 and if v∈{s,t}, then σ_s,t(v)=0. §.§.§ Group betweenness centrality Group betweenness centrality measure indicates a proportion of shortest paths connecting pairs of nongroup members that pass through the group <cit.>, and it is defined by formula: C_gb(X) = 1/n_gb∑_s,t∈ V∖ X σ_s,t(X)/σ_s,t, where σ_s,t(X) is the number of shortest paths between nodes s and t passing through some nodes in group X. 
Normalization coefficient is n_gb=(|V|-|X|)(|V|-|X|-1), where |X| is the number of nodes in group X. §.§.§ Closeness centrality In a connected graph, closeness centrality of node u is the reciprocal of a sum of lengths of the shortest paths between u and all other nodes in the graph <cit.>. When calculating closeness centrality, its normalized form is usually referred to as the one representing the average length of the shortest path instead of their sum, and it is calculated like this: C_c(u) = n_c/∑_v∈ V∖{u}d(v,u), where normalization coefficient is n_c=|V|-1. §.§.§ Group closeness centrality Group closeness centrality is the reciprocal of the sum of the shortest distances from the group to all nodes outside the group <cit.>. It is defined by the formula: C_gc(X) = n_gc/∑_v∈ V∖ Xd(v,X), where d(v,X) is the shortest distance between group X and v. Normalization coefficient is n_gc=|V-X|. §.§.§ Degree centrality Degree centrality of node v <cit.> is defined as C_d(v) = v_d/n_d, where v_d is a degree of node v, and normalization coefficient is n_d=|V|-1. §.§.§ Group degree centrality Group degree centrality is the number of nodes outside the group connected with the nodes from this group <cit.>. Normalized group degree centrality for group X is given by the formula: C_gd(X) = |{v_i∈ V ∖ X|v_i is connected to v_j ∈ X}|/n_gd, where normalization coefficient is n_gd=|V|-|X|. §.§ Random walk based centralities The second group of centrality measures is based on random walks, simple dynamical process that can occur on a network. Random walks can be also used to approximate other types of diffusion processes <cit.>. §.§.§ Random walk occupation centrality The random walk occupation centrality <cit.> of node v is the probability of that node v being visited by a random walker during an infinitely long walk, and it is defined as: C_rwoc(v) = lim_t→∞n_v(t)/t, where n_v(t) is the number of times node v is visited by a random walker during time interval t. Different exploration strategies can be used to calculate the occupation centrality, we use the uniform exploration strategy in this paper (i.e. each node jumps to its neighbor with the equal probability). In the weighted networks, jumping probabilities are proportional to the weights of the edges. The analytical expressions of random walk occupation centrality with a uniform exploration strategy in interconnected multilayer networks are presented in <cit.>. §.§.§ Random walk betweenness centrality The most common betweenness is the shortest path betweenness <cit.>, where the centrality of a node v is relative to the number of shortest paths between all pairs of nodes passing through v. However, in real networks, entities (rumors, messages, or internet packets) that travel the network do not always follow the shortest path <cit.>. Therefore, the random walk betweenness centrality of node v is defined as the amount of random walks between any pair (s,d) of nodes that pass through node v <cit.>. C_rwbc(v) = 1/n_rwbc∑_s, t ∈ V s ≠ t v ≠ s, v ≠ t1_v ∈Path_s → t Where n_rwbc=2N (N-1) is the normalization coefficient. The indicator function 1_v ∈Path_s → t is equal to 1 if node v is in the path between nodes s and t, and 0 otherwise. The Path_s → t is the random path between nodes s and t in the network. Repeat the random walk process few times to get the different random path. And then we will get the average random walk betweenness centrality. 
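Since the random walk betweenness centrality above is defined through sampled walks rather than shortest paths, it can be estimated by direct Monte Carlo simulation. The sketch below uses uniform exploration and the normalization n_rwbc = 2N(N-1) from the text; the helper names, the number of repeats, and the safety cap on the walk length are illustrative assumptions.

import random
import networkx as nx

def random_walk_path(G, s, t, rng, max_steps=100_000):
    """Nodes visited by a uniform random walk started at s until it first hits t."""
    visited = {s}
    u = s
    for _ in range(max_steps):
        if u == t:
            return visited
        u = rng.choice(list(G.neighbors(u)))
        visited.add(u)
    return visited  # safety cap; on a connected graph t is reached with probability 1

def random_walk_betweenness(G, repeats=5, seed=0):
    """Monte Carlo estimate of C_rwbc: fraction of (s, t) walks that visit v,
    averaged over several repeats and normalized by 2N(N-1)."""
    rng = random.Random(seed)
    nodes = list(G.nodes())
    N = len(nodes)
    counts = {v: 0 for v in nodes}
    for _ in range(repeats):
        for s in nodes:
            for t in nodes:
                if s == t:
                    continue
                path = random_walk_path(G, s, t, rng)
                for v in path:
                    if v not in (s, t):
                        counts[v] += 1
    norm = repeats * 2 * N * (N - 1)
    return {v: c / norm for v, c in counts.items()}

G = nx.karate_club_graph()
c = random_walk_betweenness(G)
print(sorted(c, key=c.get, reverse=True)[:3])  # high-degree hubs such as 0 and 33 tend to rank near the top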
It is convenient to obtain the analytical expression of the random walk betweenness centrality of nodes by means of an absorbing random walk, where the absorbing state is selected to be the destination node d <cit.>. The extended analytical expression of random walk betweenness centrality for interconnected multilayer networks can be found in <cit.>. §.§.§ Random walk closeness centrality A variant of closeness centrality is random walk closeness centrality, the computation of which is based on the mean first-passage time (MFPT). The MFPT is defined as the average number of steps to reach a node d, starting from a given node s. A lower average MFPT indicates that a node is, on average, more quickly accessible from the other nodes. Therefore, a node with a lower average MFPT to all other nodes is considered more “central” in the network. Random walk closeness centrality is defined as the reciprocal of the average MFPT and is calculated by the formula: C_rwcc(v) = (n-1)/∑_u∈ V∖{v}τ_uv, where τ_uv is the MFPT from node u to node v. The MFPT matrix can be calculated analytically by means of the Kemeny-Snell fundamental matrix Z <cit.> or by means of absorbing random walks <cit.>. Analytical expressions of random walk closeness centrality in interconnected multilayer networks can be found in <cit.>. §.§ Game-theoretic centrality measures §.§.§ Shapley value based centrality The Shapley value is a solution concept in cooperative game theory introduced by Lloyd Shapley in 1953 <cit.>. It measures the average marginal contribution of a player over all possible coalitions and is defined by the formula: ϕ(i) = ∑_S⊆ N∖{i}|S|!(n-|S|-1)!/n!(v(S∪{i})-v(S)), where S ⊆ V represents a coalition and v(S) denotes the value of coalition S. We define the characteristic function v(S) as half the sum of the weighted degrees of all nodes in the subgraph induced by S, as shown in Equation <ref>: v(S) = 1/2∑_{i, j}⊆ S W(i, j), where W(i, j) is the weight of the edge between nodes i and j within the subgraph induced by S. The factor 1/2 corrects for the fact that in an undirected graph each edge is counted twice when summing over all node pairs. Algorithm <ref> describes how to calculate Shapley values based on the weighted degree. However, the Shapley value is computationally expensive, especially for large networks, because of the number of coalitions[for a network with n nodes, the total number of coalitions is equal to 2^n]. Therefore, we propose a new approach for calculating the Shapley value, which we discuss in the following part. §.§.§ Shapley value based centrality approximation Since the influence of other nodes decreases as the path length to them increases, we use the following ideas to speed up the calculation: 1. Depth limitation: by limiting the depth of the neighborhood considered, the number of subsets that need to be considered is reduced, lowering computational complexity. 2. Local subset iteration: iterating over subsets only within the local neighborhood, rather than over the entire graph, decreases the number of iterations. 3. Neighborhood size sampling: for larger neighborhoods, computational complexity is further reduced by random sampling, thereby decreasing the number of subsets iterated over. Define ψ(i,d_max) as the set of neighbors of node i up to depth d_max, not including node i itself.
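Before turning to the approximation, the exact computation described above — the weighted-degree characteristic function of Equation <ref> and the full enumeration of coalitions as in Algorithm <ref> — can be sketched as follows, together with the depth-limited neighborhood ψ(i,d_max). The function names and the small random test graph are illustrative assumptions.

from itertools import combinations
from math import factorial
import networkx as nx

def v_weighted_degree(G, S):
    """Characteristic function of the text: half the sum of weighted degrees in the
    subgraph induced by S, i.e. the total weight of edges with both ends in S."""
    S = set(S)
    return sum(d.get("weight", 1.0) for i, j, d in G.edges(data=True) if i in S and j in S)

def shapley_exact(G):
    """Exact Shapley value phi(i), enumerating all coalitions S of N \\ {i}.
    Only feasible for small graphs (2^(n-1) subsets per node)."""
    nodes = list(G.nodes())
    n = len(nodes)
    phi = {}
    for i in nodes:
        others = [u for u in nodes if u != i]
        total = 0.0
        for k in range(len(others) + 1):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                total += w * (v_weighted_degree(G, S + (i,)) - v_weighted_degree(G, S))
        phi[i] = total
    return phi

def psi(G, i, d_max):
    """psi(i, d_max): all nodes within distance d_max of node i, excluding i itself."""
    dist = nx.single_source_shortest_path_length(G, i, cutoff=d_max)
    return set(dist) - {i}

G = nx.gnp_random_graph(10, 0.4, seed=1)   # small test graph; exact enumeration is exponential in n
phi = shapley_exact(G)
print(sorted(phi, key=phi.get, reverse=True)[:3])   # three most central nodes
print(len(psi(G, 0, 2)))                            # size of the depth-2 neighborhood of node 0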
We can calculate the Shapley value of node i based on its local neighborhood as shown in Equation <ref>, where β=(|ψ(i,d_max)|+1)/(m+1) is a scaling factor and m is the maximal neighborhood size. The algorithm for the optimized calculation of Shapley values in weighted graphs is shown in Algorithm <ref>. We use the same characteristic function as in the original Shapley value calculation, but we limit the depth of the neighborhood considered and sample subsets from the neighborhood to approximate the Shapley value. ϕ_a(i) = ∑_S⊆ψ(i,d_max)(v(S∪{i})-v(S))/2^|ψ(i,d_max)| if |ψ(i,d_max)| < m, β∑_S⊆ψ(i,d_max)(v(S∪{i})-v(S))/2^|ψ(i,d_max)| if |ψ(i,d_max)| ≥ m. For |ψ(i,d_max)| ≥ m, we randomly sample m neighbors from ψ(i,d_max) several times and calculate the Shapley value based on the samples. The number of sampling rounds H_|ψ(i,d_max)|,m is given by Formula (<ref>) <cit.>, where γ≈ 0.5772156649 is the Euler-Mascheroni constant. This formula is the mathematical expectation of the number of samples of size m drawn from ψ(i,d_max) until all neighbors have been collected.[We can consider this problem as a generalized coupon collector's problem.] H_|ψ(i,d_max)|,m = ((|ψ(i,d_max)|+1/2)/m - 1/2)(ln|ψ(i,d_max)|+γ)+1/2. Equation (<ref>) yields an approximation of the Shapley value that is very close to the exact value when the density of the graph is not too high (less than 0.7). In any case, the ratio of each node's approximated Shapley value to the sum of all nodes' approximated Shapley values is very close to the corresponding ratio of the exact values. Additionally, it is very easy to calculate the value of the grand coalition v(N), which makes a more accurate approximation possible. We define the factor ξ as follows: ξ = v(N)/∑_i∈ Vϕ_a(i). We obtain a more accurate approximated Shapley value of each node by multiplying the approximated Shapley value ϕ_a(i) by the factor ξ. We show benchmarks of this algorithm in Section <ref>. §.§.§ Myerson value based centrality The Myerson value was introduced by Roger Myerson in 1977 <cit.>. It is a measure of the contribution of participants in a cooperative game that takes the effects of network structure into account. By modifying the calculation of the Shapley value, Myerson takes the connections in the network into consideration, thereby reflecting the influence of the network structure on the cooperative game. Consider a game where the graph G is a tree consisting of N nodes and the characteristic function is determined by the scheme proposed by Jackson <cit.>: every direct connection gives coalition S the impact r, where 0≤ r≤ 1. Players also obtain an impact from indirect connections, and this impact decreases as the path length increases. The characteristic function is defined as follows <cit.>: v(S) = a_1r + a_2r^2 + … + a_kr^k + … + a_Lr^L = ∑_k=1^L a_k r^k, where L is the maximal distance between two nodes in the coalition, a_k is the number of paths of length k in this coalition, and v(i) = 0, ∀ i ∈ N. Mazalov, Trukhina et al. prove that Equation <ref> gives the Myerson value for unweighted graphs <cit.>, where σ_k(i) is the number of paths of length k that include node i. It can be applied to a weighted graph by converting the weight of each edge into a number of parallel paths between the two nodes (i.e.
converting the weighted graph into a multigraph) <cit.>: Y_i(v,g) = σ_1(i)/2 r + σ_2(i)/3 r^2 + ⋯ + σ_L(i)/(L+1) r^L = ∑_k=1^L σ_k(i)/(k+1) r^k. §.§.§ Myerson value based centrality approximation However, calculating the Myerson value is also computationally expensive, especially for large networks. Consider the six degrees of separation <cit.>: any two people in the world who do not know each other need only a few intermediaries to establish contact. This means we can reduce the computational expense by limiting the maximal depth L. For a social network, the higher the density, the fewer intermediate nodes are needed to connect two nodes. We redefine L by Formula <ref>, where D is the density of the network: L= 6 if D≤ 0.2, 2 if 0.2<D≤ 0.3, 1 if D>0.3. The algorithm for the calculation of Myerson values in weighted graphs is shown in Algorithm <ref>. We use the same characteristic function as in Equation <ref>, but we cut off the maximal depth to approximate the Myerson value. Similarly, we can define a scaling factor ξ for the Myerson value based centrality approximation: ξ = v(N)/∑_i∈ VY_i(v,g). For v(N) of the Myerson value, we need to count the number of paths of every length, i.e. a_1, a_2, ⋯, a_L, which is more computationally expensive than for the Shapley value; after the ξ scaling, however, we obtain a more accurate approximation. We show the comparison in Section <ref>. § EXPERIMENTS Network structure has a huge impact on the KPIs of opinion dynamics realized on this network. Therefore, we define several characteristics of a network which, in our opinion, have the most significant correlation with the KPIs of opinion dynamics. §.§ Centrality experiments description Due to the computational complexity of the Shapley and Myerson values, we fix the number of nodes in the network to 20; for a given target density, we randomly and repeatedly pick two different nodes of the graph and add an edge between them until the density reaches the desired value. We designed the following experiments to evaluate the performance of the proposed centrality measures: 1. Shapley value based centrality: the density is fixed to 0.1,0.2,⋯,1; for each weighted and unweighted graph, we calculate the Shapley value and the approximated Shapley value. We compare the results and the calculation times of the two methods. 2. Myerson value based centrality: the density is fixed to 0.1,0.11,⋯,0.2 (even a very slight increase in density leads to a significant increase in computation time); for each weighted and unweighted graph, we calculate the Myerson value and the approximated Myerson value. We compare the results and the calculation times of the two methods. We then reduce the number of nodes but increase the density up to 1 to show the effectiveness of our Myerson value based algorithm. 3. Comparison with classical centrality measures: based on the real social network dataset “Zachary's karate club”, we generate two-layer networks by adding different internal structures and simplify each two-layer network to a one-layer weighted network by the method described in Section <ref>. The most important nodes in the network are node 0 (instructor — Mr Hi) and node 33 (manager — John A). We define the accuracy of a centrality measure as shown in Equation <ref> and compare the accuracy of the proposed centrality measures with the classical centrality measures (betweenness and closeness centralities): Accuracy = |Top-2 nodes of the proposed centrality measure ∩{0, 33}|/2.
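For reference, the depth-limited, path-counting Myerson-type centrality of Equation <ref> with the density-dependent cutoff L used in these experiments can be sketched as follows. The brute-force simple-path enumeration, the helper names, and the choice r = 0.5 are illustrative assumptions; the enumeration cost grows quickly with L, so the sketch is only meant for small graphs such as the karate club.

import networkx as nx

def depth_cutoff(D):
    """Maximal path length L as a function of graph density D (Formula in the text)."""
    if D <= 0.2:
        return 6
    if D <= 0.3:
        return 2
    return 1

def count_paths_through(G, L):
    """sigma_k(i): number of simple paths of length k (1 <= k <= L) that contain node i.
    Paths are enumerated by DFS from every start node and counted with weight 1/2,
    since each path is found once from each of its two endpoints."""
    sigma = {i: [0.0] * (L + 1) for i in G.nodes()}

    def extend(path):
        k = len(path) - 1
        if k >= 1:
            for v in path:
                sigma[v][k] += 0.5
        if k == L:
            return
        for w in G.neighbors(path[-1]):
            if w not in path:
                extend(path + [w])

    for s in G.nodes():
        extend([s])
    return sigma

def myerson_approx(G, r=0.5):
    """Depth-limited Myerson-type centrality Y_i = sum_k sigma_k(i) r^k / (k+1)."""
    L = depth_cutoff(nx.density(G))
    sigma = count_paths_through(G, L)
    return {i: sum(s[k] * r**k / (k + 1) for k in range(1, L + 1)) for i, s in sigma.items()}

G = nx.karate_club_graph()         # density ~0.14, so L = 6; the enumeration may take a while
Y = myerson_approx(G, r=0.5)
print(sorted(Y, key=Y.get, reverse=True)[:2])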
§.§ Centrality experiments results Table <ref> shows the comparison results for the Shapley value based approach, where “SV” refers to the Shapley value, “ASV” to the approximated Shapley value, and “RMSE” to the root mean square error; a lower RMSE indicates a more accurate approximation. The “Ratio” is the ratio of the (approximated) Shapley value of each node to the sum of all nodes' (approximated) Shapley values, and the “RMSE Ratio” is the RMSE computed on these ratios. The “Weighted” column indicates whether edge weights are taken into account in the calculation. The approximated calculation is much faster than the original Shapley value calculation (compare the “SV Time” and “ASV Time” columns), and the RMSE of the approximated Shapley value is very low; the RMSE of the ratios is also very low for both weighted and unweighted graphs. Table <ref> shows the comparison results for the Myerson value based approach with the scaling factor ξ. From the “MV Time” and “AMV Time” columns we can see that the computational cost of the Myerson value based centrality grows rapidly with the density of the network. The RMSE of the approximated Myerson value grows with the density of the network, but the RMSE of the ratios stays very low. Therefore, we can use the approximated Myerson value as a centrality for the nodes of the network. Table <ref> shows the comparison results for the Myerson value based approach without the scaling factor ξ. By comparing the “AMV Time” columns in Table <ref> and Table <ref>, we can see that the calculation is much faster without ξ. To show the effectiveness of our Myerson value algorithm even for high-density graphs, we reduce the number of nodes to 10 and increase the density from 0.05 to 1 with step 0.05. The results are shown in Table <ref> and Table <ref>. By comparing the pairs Table <ref>, Table <ref> and Table <ref>, Table <ref>, we conclude that the scaling factor ξ has a very limited effect on reducing the error of the approximated Myerson value while increasing the computation time considerably, whereas the error of the ratios is very small in either case. Therefore, we recommend using the Myerson value based centrality approximation without the scaling factor ξ and using the ratios as the approximation of the Myerson value. From Table <ref> we see that the weighted graph with density 1.0 has the highest RMSE, and the weighted graph with density 0.2 has the highest RMSE in Table <ref>. Table <ref> and Table <ref> compare the exact and approximated values for each node together with the relative error. From Table <ref> we can say that, even for the largest RMSE, our algorithm provides a very accurate approximation of the Shapley value. From Table <ref> we can see that some nodes have a high relative error while others do not. The reason is that in the approximation we do not count paths of all lengths, only those of length ≤ L; we can increase L to obtain a more accurate result, but this increases the computational complexity. Table <ref> shows the comparison with the classical centrality measures. Both proposed centrality measures have a higher accuracy than the classical centrality measures. In particular, the Shapley value based centrality identifies the most important nodes with 100% accuracy. The Myerson value based centrality has 50% accuracy in the special case (karate-twoStar-34), because node 17 is another central node in the internal layer.
Betweenness centrality does not work well for high-density networks, and closeness centrality has the worst accuracy in this case. §.§ Network properties with opinion dynamics experiments description We performed simulations of opinion dynamics on the one-layer Zachary's karate club network and on two-layer networks with Zachary's karate club network as the external layer and different internal structures. The following internal layers are used in our analysis: 1. karate: Zachary's karate club network; 2. star: a star structure with node 0 as the center; 3. two-star: two central nodes, 0 and 17; nodes 1–16 are linked with node 0, nodes 18–33 are linked with node 17, and nodes 0 and 17 are linked with each other; 4. cycle: node 0 is linked with node 1, node 1 is linked with node 2, and so on; finally, node 33 is linked with node 0; 5. two-clique: nodes 0–16 belong to the first clique, nodes 17–33 belong to the second clique, and the two cliques are connected through a link between nodes 0 and 17; 6. complete: all nodes are linked with each other. We start with simulations of opinion dynamics for BVM, and then for CVM and GCVM, implementing different internal structures and studying their effect on consensus time and winning rate. Network properties (in this paper, centrality measures) also change with changes in the network structure. §.§ Network properties with opinion dynamics experiments results Fig. <ref> shows how the internal average shortest path d_I varies with the network structure.[“Karate-34” and “karate-empty-34” refer to the one-layer Zachary's karate club network and to a two-layer network with Zachary's karate club network in the external layer and an empty internal layer, respectively; d_I does not exist for these two structures (formally it is infinite), so in Fig. <ref> we use a value of 99 instead of infinity.] Fig. <ref> shows how the internal density varies with the network structure. Fig. <ref> shows the different centralities for the different network structures. In the left panel of Fig. <ref>, we show the betweenness centrality, closeness centrality, approximated Shapley value, and approximated Myerson value of Node 0 and Node 33 on our simplified one-layer weighted network; in the right panel, we show the group degree centrality, group closeness centrality, group betweenness centrality, and two different random walk centralities for the two-layer network with different internal structures. Fig. <ref> shows how the KPIs vary with the different network structures. Looking at Fig. <ref>, we may notice that the trends of some centralities of Node 33 are very similar to the trend in Fig. <ref>. Fig. <ref> also demonstrates exactly the same trend as Fig. <ref>. Fig. <ref> shows that the structure has a great impact on the winning rate, but we have not yet found a reasonable explanation for this. We conducted correlation tests on the above observations using SciPy <cit.>. Correlation coefficients are presented in Table <ref> (∗, ∗∗, ∗∗∗ after the correlation coefficients represent the significance levels 0.05, 0.01, 0.001, respectively).[We choose node 33 as the input for centrality; if we choose another node as input, the conclusion remains valid.] From Table <ref> we can see that: * High positive correlations: (i) The internal density D_I exhibits a very strong positive correlation with consensus time across all correlation methods (Pearson, Kendall, and Spearman), with coefficients above 0.95, highly significant (∗∗∗). This indicates that as D_I increases, T_cons significantly increases.
(ii) d_I is significantly positively correlated with most centrality measures under the Kendall and Spearman methods, and the correlations are generally strong. * Negative correlations: T_cons shows strong negative correlations with network centrality measures such as betweenness and closeness, especially with closeness (-0.928∗∗), suggesting that higher centrality scores are associated with shorter T_cons. This is reasonable in real life: very authoritative nodes can lead to a faster consensus. * Variability in correlations: different metrics show varying levels of correlation strength across the Pearson, Kendall, and Spearman methods. This variability indicates that the strength and significance of the correlations can depend on the correlation method used, likely influenced by the underlying data distributions. In summary, Table <ref> highlights significant relationships between specific network properties and metrics such as consensus time. The centrality of authoritative nodes and the density of the network play a crucial role in influencing the consensus time. § CONCLUSIONS AND FUTURE WORK We examined opinion dynamics models, including BVM, CVM, and GCVM, on Zachary's karate club network. In CVM and GCVM, we assumed different structures of the internal layer and conducted simulations for all of them. We examined how the internal network structure affects consensus time and winning rate, and whether these KPIs correlate with network centrality measures. We proposed two fast and accurate centrality measurement algorithms based on a game-theoretic approach. Our algorithms effectively approximate the theoretical values and operate at high speed, and both of them can identify the most important nodes in the network. The sampling concept in our algorithms can easily be transferred to other fields, such as explainable artificial intelligence. We consider the following developments of our work to be interesting: (i) incorporating stubbornness and redefining consensus conditions to observe the impact of stubbornness on the KPIs, (ii) formalizing consensus time and winning rate, (iii) exploring the reasons for the variation of the winning rate when the network structure changes, and (iv) deriving analytical expressions for the approximate Shapley/Myerson value based centralities. § ACKNOWLEDGMENTS The work of the second author was supported by the Russian Science Foundation, grant no. 22-11-00051. <https://rscf.ru/en/project/22-11-00051/>
http://arxiv.org/abs/2406.17750v1
20240625173619
Fast Ground State to Ground State Separation of Small Ion Crystals
[ "Tyler H. Guglielmo", "Dietrich Leibfried", "Stephen B. Libby", "Daniel H. Slichter" ]
quant-ph
[ "quant-ph" ]
LLNL-JRNL-859047 Lawrence Livermore National Laboratory, Livermore CA National Institute of Standards and Technology, Boulder CO Lawrence Livermore National Laboratory, Livermore CA National Institute of Standards and Technology, Boulder CO § ABSTRACT Rapid separation of linear crystals of trapped ions into different subsets is critical for realizing trapped ion quantum computing architectures where ions are rearranged in trap arrays to achieve all-to-all connectivity between qubits. We introduce a general theoretical framework that can be used to describe the separation of same-species and mixed-species crystals into smaller subsets. The framework relies on an efficient description of the evolution of Gaussian motional states under quadratic Hamiltonians that only requires a special solution of the classical equations of motion of the ions to describe their quantum evolution under the influence of a time-dependent applied potential and the ions' mutual Coulomb repulsion. We provide time-dependent applied potentials suitable for separation of a mixed species three-ion crystal on timescales similar to that of free expansion driven by Coulomb repulsion, with all modes along the crystal axis starting and ending close to their ground states. Three separately-confined mixed species ions can be combined into a crystal held in a single well without energy gain by time-reversal of this separation process. Fast Ground State to Ground State Separation of Small Ion Crystals Daniel H. Slichter July 1, 2024 ================================================================== § INTRODUCTION As trapped ion quantum information processors continue to evolve and scale up, efficient all-to-all connectivity becomes increasingly valuable. The quantum charge-coupled device architecture (QCCD) <cit.> laid out a path for scaling ion trap computers beyond single chains to universal quantum information processors with all-to-all connectivity <cit.>. In recent cutting-edge experimental implementations of the QCCD architecture, the separation, transport, recombination, and re-cooling of ions consumes 98 % or more of the runtime of representative algorithms <cit.>. Separation, transport, and recombination can be classified by whether they occur on timescales lasting many periods of the ion motional modes (adiabatic) or on timescales on the order of the motional period (faster-than-adiabatic or diabatic). Faster-than-adiabatic ion transport with motional modes starting and ending near the ground state has been demonstrated experimentally for crystals of one or two ions of the same species <cit.>. However, separation of multi-ion, multi-species crystals into two subsets with motional modes starting and ending in the ground state (ground state to ground state separation, or GGS) has only been demonstrated in the adiabatic regime, which is inherently slow <cit.>. Furthermore, to achieve many key metrics required of efficient quantum computers, one must be able to achieve efficient GGS. For example, high gate fidelities and short recooling times are difficult or impossible without GGS. Hence, GGS is necessary for essentially all ion connectivity reconfigurations in the QCCD architecture. As such, finding and implementing faster-than-adiabatic GGS protocols is crucial for removing a major speed limitation in current ion trap quantum information processors. A method for fast GGS was introduced in Ref. 
<cit.>, wherein two same species ions initially in a single potential well of a linear rf ion trap were allowed to fly apart through their mutual Coulomb repulsion by rapid removal of the applied confining potential, and then were re-trapped in diabatically-applied separate potential wells. GGS was achieved in this scenario through special initial state preparation of the center-of-mass (COM) and stretch normal modes along the crystal axis (axial modes). In particular, the states were squeezed prior to the release of the ions in the initial trap. Since the ion motional states evolve under approximately quadratic Hamiltonians during separation, the unitary time evolution of each mode, from release to recapture, can be represented by a combination of one squeeze operator and one rotation operator, i.e. the Euler decomposition of SU(1,1) ≅SP(2,ℝ). This decomposition allows one to prepare each mode in advance of separation with a squeezing operation that will be exactly undone by the effects of separation, thus enabling capture of the ions in their axial motional ground states after diabatic separation. Our work extends Ref. <cit.> to the case of more general linear ion crystal configurations that may contain more than two ions and more than one species. This is accomplished through a combination of squeezing and beam-splitting, i.e. the Bloch-Messiah <cit.> decomposition of SP(4,ℝ) <cit.>. In addition, rather than pre-squeezing the axial modes of an ion crystal before separation, we identify suitable time-dependent applied potentials that continuously transform all modes during separation such that they end up close to their ground states in their final potential wells. This “on the fly” transformation should result in further reduction of GGS durations and manifests a method for rapid squeezing that is different from squeezing protocols previously implemented for ion motional modes <cit.>. We present a practically important example, the GGS of a same-species or mixed-species data-helper-data (DHD) three-ion crystal into three separate wells, and lay out the principles for GGS of larger same- or mixed-species crystals with additional axial modes. The method is general to any choice of data and helper qubit ions, but we specialize to the case of Be^+-Mg^+-Be^+ (BMB) for concreteness when discussing a practically relevant example. In Section <ref> we introduce a formalism that is convenient for analyzing Gaussian states and their evolution under quadratic Hamiltonians and will be used throughout the paper. Section <ref> applies this formalism to characterize the squeezing and beam-splitting operations acting on the axial normal modes of a DHD crystal during a swift ramp down of the external potential to zero as well as an equally swift ramp-up of three separate “catching wells”—arresting the three separated ions in analogy to the protocol discussed in Ref. <cit.>. We then outline the approach for even larger crystals in <ref>. Previously, all state preparation had been assumed to have occurred by an unknown mechanism prior to separation <cit.> or that the modes are returned to their ground states after separation is complete. In Sec. <ref> we overcome this limitation by showing how the states of all three modes in a DHD crystal can be transformed through modulation of the trapping potential while the ions separate to allow for GGS in a three ion crystal without prior or posterior operations. 
We find that realistically feasible potential modulation is sufficient to end near the ground states of all three modes and that the protocol is robust against realistic levels of imperfection of the modulation. Concluding remarks in Sec. <ref> summarize the topics introduced in this paper, compare this work to other proposals, and discuss opportunities for future work. § THEORETICAL FORMALISM Before getting into the specifics of any particular ion chain or transport protocol, we will review a theoretical formalism which describes the quantum dynamics of systems undergoing time evolution through quadratic Hamiltonians. The formalism itself is general, however we will only apply it to Gaussian states. More details can be found in <cit.>. §.§ Evolution under quadratic Hamiltonians We denote the generalized time-independent momentum and position operators of N particles in one dimension as ξ = (p_1, ⋯, p_N, x_1, ⋯, x_N)^T. In our context these are the momentum and position operators for normal modes of coupled ion motion of N ions along the axis of a linear crystal. Components ξ_a and ξ_b satisfy the following commutation relations (note that from here on ħ has been set to 1): ξ_aξ_b = i C_ab, C = [ 0 -𝕀; 𝕀 0 ], with 𝕀 the N-dimensional identity matrix. We further assume that the system evolves under a time dependent purely quadratic Hamiltonian that can be written as H(t) = 1/2ξ^T h(t) ξ, where h(t) is a Hermitian matrix with dimension 2N×2N. Generalization to include sub-quadratic terms is straightforward and not important for this work. We can now write down how ξ evolves in the Heisenberg picture. Going forward, ξ = (p_1,⋯,p_N, x_1, ⋯, x_N)^T, will represent time-dependent Heisenberg picture operators. Formally, we can write the time evolution as the time-ordered exponential U(t) = 𝒯exp(-i _0^t H(t') dt') ξ̇̃̇_a(t) = i UH(t)ξ_a U. Hence, ξ̇̃̇(t) = C·h(t) ξ̃(t), which is equivalent to the classical equations of motion for the momentum and position coordinates, Ξ = (P_1, ⋯, X_1, ⋯). This equation can be rewritten by defining the 2N×2N transfer matrix M(t), ξ̃(t) = M(t) ξ̃_0, which takes ξ̃(t=0) = ξ̃_0 to some later time t. Inserting into (<ref>) yields Ṁ(t) = C·h(t)·M(t). With a solution to (<ref>) in hand, the dynamics of ξ̃(t) are fully determined. From those dynamics, the first and second moments of ξ̃ are also determined. When restricted to Gaussian states, the covariance matrix of ξ̃, V, evolves as <cit.> V(t) = M(t)·V_0 ·M^T(t). This can be understood by realizing that the Wigner function of a Gaussian state evolving under a quadratic potential is constant along classical trajectories <cit.>: W(Ξ, t) = W_0(Ξ_0(Ξ, t), t = 0). Hence, the quadratic form appearing in the initial Gaussian Wigner function evolves as Ξ_0^T V_0^-1Ξ_0 →Ξ^T [M^-1(t)]^T ·V_0^-1·M^-1(t)Ξ , which then implies (<ref>). As such, the state is fully specified for the particular case of Gaussian states <cit.>. This will be important later on when we evolve the ground state of a linear ion crystal. Another important factor to consider is that M·C·M^T = C, which implies M is symplectic, and parameterizes Sp(2N,ℝ), the group under which dynamical time evolution occurs <cit.>. 
The matrix M(t) can be Bloch-Messiah decomposed into a combination of multi-mode interferometers B(θ_k) and single-mode squeezers S(r_k,ϕ_k) as <cit.> M(t) = B(θ_2)·[⊕_k=1^N S(r_k,ϕ_k) ] ·B(θ_1), where θ_k, ϕ_k and r_k will in general be time dependent, and θ_k are Hermitian matrices composed of rotation angles and mixing angles that determine couplings between different modes. The interferometers and squeezers have the functional form B(θ) = e^i a·θ·a S(r_k, ϕ_k) = exp[r_k/2((a_k^†)^2 e^iϕ_k - a_k^2 e^-iϕ_k)], where a = (a_1,…,a_N) and the ladder operators of mode k are denoted as a_k and a_k^†. For two modes, a two-port interferometer is equivalent to the application of a beam-splitter and a rotation <cit.> B(θ) = B_BS(θ_BS, ϕ_BS) ·[R_1(θ_1) ⊕R_2(θ_2) ] ≡B_BS(θ_BS, ϕ_BS) ·R_12(θ_1, θ_2) with B_BS(θ_BS, ϕ_BS) = exp[θ_BS(a_a a_b e^iϕ_BS - a_a a_b e^-iϕ_BS)], and R_k(θ_k) = exp[-iθ_k a_k a_k] R_lm(θ_l, θ_m) = R_l(θ_l) ⊕R_m(θ_m) A detailed explanation on how this decomposition in (<ref>) is performed can be found in <cit.>. Crucially, this tells us that the dynamics of any time-dependent quadratic Hamiltonian can be decomposed as a combination of single-mode squeezers and beam-splitting operations. This enables efficient computation of suitable further squeezing and beam-splitting operations, either before or after H(t) acts, that compensate for the effects of M(t) on the initial ground states and will result in final ground states, generally in a basis that can be different from the initial phase space basis. Formally, if Q is the orthogonal matrix that transforms the initial operators ξ̃_0 that diagonalize H(0) into a set of operators ξ̃_t that diagonalize H(t), then for GGS T·M(t) = M(t) ·T' =Q, where T (T') are suitable compensation operations composed of single mode squeezers, multi-port interferometers and rotations applied after (before) H(t) has acted. Special cases of this general principle for a single normal mode of ion motion, and for two decoupled normal modes were explored in <cit.>. We will discuss the more general separation of a DHD crystal in more detail in section <ref>. Alternatively, a specific h(t) can be identified such that no initial or final operation is necessary for GGS, M(t) = Q, which removes the necessity to compensate for the effects of H(t) and potentially reduces the total duration of all operations necessary for GGS. This approach to separation of a DHD crystal is discussed in section <ref>. In both cases, since M(t) represents a unitary time evolution of the system, the equations of motion are invariant under time reversal. As long as this is the case, the operators M(t), T and T' can be inverted to run the process in reverse. This implies that a good solution for ion separation also yields a good solution for the time reversed process, ion recombination. §.§ Occupation Numbers from the Covariance Matrix Given a quadratic Hamiltonian, we would like to track various mode occupation numbers over time. For example, if the curvature of a potential well is changing in time, we may be interested in tracking the occupation numbers of the normal modes of ions confined in the well. In the previous section, we explained how the covariance matrix of a Gaussian state evolves in time under a quadratic Hamiltonian. The occupation number for a mode k with frequency ω_k that remains uncoupled from all other modes at all times has expectation value n_k = H_k(t)/ω_k - 1/2, where H_k(t) is the decoupled subsystem's Hamiltonian. 
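As a minimal numerical illustration of this formalism (using the covariance-block expression for the expectation value of H_k given in the next paragraph), the following Python sketch integrates (<ref>) for a single mode whose frequency is smoothly ramped from ω_i to ω_f, propagates the ground-state covariance via V(t) = M(t)·V_0·M^T(t), and evaluates the final occupation with respect to ω_f. A slow ramp returns n ≈ 0, while a near-sudden change approaches the sudden-quench value (ω_i-ω_f)^2/(4 ω_i ω_f). The ramp shape and parameter values are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

def transfer_matrix(omega_of_t, t_final, n_steps=2000):
    """Integrate dM/dt = C.h(t).M for one mode, with xi = (p, x) ordering as in the text."""
    C = np.array([[0.0, -1.0], [1.0, 0.0]])

    def rhs(t, m_flat):
        M = m_flat.reshape(2, 2)
        h = np.array([[1.0, 0.0], [0.0, omega_of_t(t) ** 2]])
        return (C @ h @ M).ravel()

    sol = solve_ivp(rhs, (0.0, t_final), np.eye(2).ravel(),
                    rtol=1e-10, atol=1e-12, max_step=t_final / n_steps)
    return sol.y[:, -1].reshape(2, 2)

def final_occupation(omega_i, omega_f, tau):
    """Start in the ground state of omega_i, ramp to omega_f over tau,
    return the occupation with respect to omega_f."""
    ramp = lambda t: omega_i + (omega_f - omega_i) * 0.5 * (1 - np.cos(np.pi * t / tau))
    M = transfer_matrix(ramp, tau)
    V0 = np.diag([omega_i / 2.0, 1.0 / (2.0 * omega_i)])   # <p^2>, <x^2> of the ground state (hbar = 1)
    V = M @ V0 @ M.T
    E = 0.5 * V[0, 0] + 0.5 * omega_f**2 * V[1, 1]          # <H_f> from the covariance block
    return E / omega_f - 0.5

wi, wf = 2 * np.pi * 3.0, 2 * np.pi * 1.0                   # angular frequencies for 3 MHz and 1 MHz, time in us
print(final_occupation(wi, wf, tau=20.0))   # slow ramp: ~0 (adiabatic)
print(final_occupation(wi, wf, tau=0.01))   # near-sudden ramp: ~(wi-wf)^2/(4*wi*wf) ~ 0.33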
The Hamiltonian's expectation value can be computed from the isolated block in the covariance matrix V of the whole system corresponding to mode k as H_k(t) = 1/2 V_11(t) + 1/2ω_k^2 V_22(t), where the matrix subscripts indicate the row and column of the covariance matrix element. In general, when modes are coupled they do not have a well defined occupation number. However, if a mode is decoupled at any time (for example at late times in a separation protocol, when the ions are far apart from each other), we can extend the definition at that time to all times to define a quantity of interest. A natural choice of modes are the final modes of a separated crystal. After separating an ion crystal into individual potential wells, the final modes will be completely decoupled as the Coulomb repulsion between ions will be negligible, and each ion can be considered individually. Throughout this paper, we will make the choice to compute occupation number with respect to the final modes of a DHD crystal. For example, if two modes, j and k, are coupled initially, but decoupled at late times with potential strengths ω_j,k, we can define the following quantities derived from their covariance matrix V^jk at all times, n_j = 1/2 V^jk_11(t) + 1/2ω_j^2 V^jk_33(t)/ω_j - 1/2 n_k = 1/2 V^jk_22(t) + 1/2ω_k^2 V^jk_44(t)/ω_k - 1/2. This can be generalized to M coupled modes that are uncoupled at some time, but the dimension of the block of correlations in the covariance matrix of the full system that needs to be considered increases to 2M × 2M. § TWO-SPECIES THREE-ION (DHD) CRYSTAL We will now show how GGS can be performed in a linear DHD crystal, which we treat in one spatial dimension x along the crystal axis. The specific ion species under consideration are unimportant for this work, however for specificity when performing numerical calculations we choose the mass m_D of D equal to that of Be9 and m_H of H equal to that of Mg25. In the separation considered here, we assume that a linear DHD crystal is aligned along the weakest axis of the trapping potential and its axial normal modes are cooled to near their ground states, with the expectation values of the axial ion positions equal to the classical equilibrium positions of the ions denoted by c_D1, c_D2, and c_H, which will become time-dependent during the separation protocol. The H species ion is held in between the two D species ions. The applied harmonic confining potential at the ion positions is defined in terms of a local spring constant, k_H(t) at the H ion and k_D(t) at the D ions. When this potential is removed, the two D ions are pushed apart by the three ions' mutual Coulomb repulsion, while the H ion ideally remains stationary at the origin at all times. After some amount of time, the D ions have reached a sufficient distance such that approximately harmonic potentials local to each ion can be turned back on with negligible interference between the three wells, thus trapping each ion individually. When neglecting the residual Coulomb repulsion at the final ion distances, the separate potential well minimum locations w_D1 and w_D2 coincide with the classical positions c_D1 and c_D2 of the D ions, symmetrically displaced from the origin where the H ion remained during the separation. This can be described by the classical Hamiltonian ℋ(t) = p_D1^2 + p_D2^2/2m_D + p_H^2/2m_H + 1/2 k_D(t) [ (c_D1 - w_D1)^2 + (c_D2 - w_D2)^2] + 1/2 k_H(t) c_H^2 + k_e/c_H - c_D1 + k_e/c_D2 - c_D1 + k_e/c_D2 - c_H. 
where p_D1, p_D2, and p_H are the momenta of the three ions. We note that the c and w values are time dependent although we have not written this explicitly in  <ref>. The DHD crystal is arranged such that c_D1<c_H=0<c_D2 so all denominators in the Coulomb interaction terms are positive throughout the separation. The constant k_e = q^2/(4 πϵ_0) with q the elementary charge and ϵ_0 the vacuum permittivity scales the Coulomb interaction. We make the approximation that the classical position of the H ion remains fixed, c_H(t) = 0, and do not consider perturbations of its position at this point. This approximation also implies c_D2(t) = -c_D1(t) ≡ c(t) and w_2(t) =-w_1(t)=w(t), with w_H(t)=0. To model the small oscillations of the ions around their classical motion quantum mechanically, we can transform into a frame of reference moving along the classical trajectory c_D1(t), c_D2(t), and c_H(t) for each ion (determined by solving <ref>) by applying appropriate displacement operators <cit.>. We can then introduce mass weighted quantum mechanical operators p_j →√(m_j) p_j, x_j →x_j/√(m_j). to describe small displacements relative to the classical frame of reference and accommodate ions of different mass. Under the assumption that the real space (non mass weighted) displacements are much smaller than the relative distances between ions, we can also expand the Coulomb term to quadratic order. Reinterpreting the classical position and momentum variables as quantum mechanical operators, we arrive at a quantum mechanical Hamiltonian corresponding to three coupled harmonic oscillators: H(t) ≈1/2[ p_D1^2 + p_D2^2+p_H^2] + 1/2k_D(t)/m_D[ x_D1^2 + x_D2^2 ] + 1/2k_H(t)/m_Hx_H^2 + k_e/c^3(t)[x_D1^2+x_D2^2/m_D + 2 x_H^2/m_H + (x_D2 - x_D1)^2/8 m_D - 2 x_H(x_D1+x_D2)/√(m_D m_H)]. As long as the separating ions are sufficiently closely approximated by (<ref>), the general formalism described in section <ref> can be applied to describe the ensuing dynamics. We can partially decouple the oscillators by introducing in-phase and out-of-phase coordinates for the D ions, x_op = x_D2 - x_D1/√(2), x_ip = x_D2 + x_D1/√(2) p_op = p_D2 - p_D1/√(2), p_ip = p_D2 + p_D1/√(2), along with corresponding time-dependent oscillator frequencies ω_op^2(t) = k_D(t)/m_D + 5 k_e/2m_D c^3(t), ω_ip^2(t) = k_D(t)/m_D + 2k_e/m_D c^3(t), ω_H^2(t) = k_H(t)/m_H + 4k_e/m_H c^3(t), and mode coupling strength Ω_Hip^2(t) = 4√(2) k_e/√(m_Dm_H) c^3(t). The Hamiltonian can then be written as H(t) = 1/2∑_k=ip,op,H[p_k^2 + ω_k^2(t) x^2_k] - Ω_Hip^2(t) x_Hx_ip, which has the form of the Hamiltonian in (<ref>) with an out-of-phase mode that is decoupled from the other two modes and has no participation of the H ion. The other two modes are coupled initially with a term that is linear in their position operators and therefore acts in analogy to a beam-splitter. Inspection of (<ref>) reveals that the terms proportional to the Coulomb interaction fall off as 1/c^3(t) and rapidly become negligible compared to the confinement k_D(t) from the external potential with increasing c(t). This justifies the approximation of three decoupled oscillators at the end of separation with motional frequencies determined entirely by the local curvature of the external potential and the mass of the ions. Applying a suitable time-dependent basis rotation between the two coupled modes will allow us to write the Hamiltonian in a way that is fully decoupled whenever the oscillator frequencies are not changing in time (in particular, before and after separation). 
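These frequencies are easy to cross-check numerically: balancing the applied force k_D c against the Coulomb repulsion (5/4)k_e/c^2 gives the equilibrium half-spacing c of the outer ions, and inserting it into the expressions above reproduces, e.g., ω_op(0) = √3 ω_0, the value quoted later in the text. In the sketch below, the choice ω_0/2π = 1 MHz and the assumption k_H = k_D (equal electrostatic curvature for equal charges) are illustrative.

import numpy as np
from scipy.constants import e, epsilon_0, atomic_mass
from scipy.optimize import brentq

ke = e**2 / (4 * np.pi * epsilon_0)          # Coulomb constant times q^2
m_D = 9.012 * atomic_mass                     # 9Be+ (outer "data" ions)
m_H = 24.986 * atomic_mass                    # 25Mg+ (central "helper" ion)

omega_0 = 2 * np.pi * 1.0e6                   # assumed omega_0 = sqrt(k_D / m_D), here 1 MHz
k_D = m_D * omega_0**2                        # local spring constant at the D ions
k_H = k_D                                     # assumption: same electrostatic curvature at the H ion

# Equilibrium half-spacing c: applied force k_D*c balances the Coulomb push (5/4) ke / c^2.
c = brentq(lambda x: k_D * x - 1.25 * ke / x**2, 1e-7, 1e-3)

# Axial mode frequencies and coupling of the DHD crystal, evaluated at t = 0.
w_op  = np.sqrt(k_D / m_D + 2.5 * ke / (m_D * c**3))
w_ip  = np.sqrt(k_D / m_D + 2.0 * ke / (m_D * c**3))
w_H   = np.sqrt(k_H / m_H + 4.0 * ke / (m_H * c**3))
W_Hip = np.sqrt(4 * np.sqrt(2) * ke / (np.sqrt(m_D * m_H) * c**3))

print(f"c = {c*1e6:.2f} um")
print(f"omega_op/omega_0 = {w_op/omega_0:.4f} (expected sqrt(3) = {np.sqrt(3):.4f})")
print(f"omega_ip/omega_0 = {w_ip/omega_0:.4f}, omega_H/omega_0 = {w_H/omega_0:.4f}")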
However, the modes can still be coupled at intermediate times. To this end, we can define two new normal modes, a and b, as well as their operators, x_a and x_b that are connected to the H and ip modes by the unitary transformation U_θ = exp[-iθ(p_H x_ip - p_ip x_H)] where θ is implicitly defined as tan 2 θ = 2 Ω_Hip^2/ω_H^2 - ω_ip^2. The position operators in the rotated basis are, x_a = U_θ x_ip U_θ = x_ipcosθ + x_Hsinθ, x_b = U_θ x_H U_θ = x_H cosθ - x_ipsinθ. Since θ changes in time, this basis is constantly adjusting as the separation proceeds. In the adjusted basis, the Hamiltonian transforms to H_θ = U_θ H(t) U_θ + i U̇_θ U_θ = p_ip^2 + p_a^2 + p_b^2/2 + 1/2ω_ip^2(t) x_ip^2 + 1/2ω_a^2(t) x_a^2 + 1/2ω_b^2(t) x_b^2 + θ̇(p_bx_a - p_ax_b), where we have defined the following quantities, ω_a^2 = 1/2 (ω_ip^2 + ω_H^2 + Γ) ω_b^2 = 1/2 (ω_ip^2 + ω_H^2 - Γ) Γ = √(4Ω_Hip^4 + (ω_ip^2 - ω_H^2)^2). A full exploration of this type of transformation in two dimensions can be found in <cit.>. In the chosen frame of reference, all modes are decoupled whenever θ̇ = 0. Additionally, the position of one oscillator is coupled to the momentum of the other in this frame, which can be seen from the last term in (<ref>). §.§ Classical Dynamics For separation of the DHD crystal in a physical system, the applied confining potential cannot be turned on and off instantaneously due to experimental constraints. For concreteness and ease of calculation in the following examples, we use sinusoidal ramping of the potential in time, but other ramp shapes could also be used. We ramp the external axial potential to zero starting at t=0 over a duration τ, followed by expansion of the ion crystal for τ_0 and another sinusoidal ramp to re-trap the ions in separate wells with duration τ, in analogy to the separation considered in <cit.>. Writing out the time dependence explicitly and denoting ω_0 =√(k_D(0)/m_D), ω(t) = ω_0 t ≤ 0 ω_0/2[ 1+cos(π/τt)] 0 < t ≤τ 0 τ < t ≤τ + τ_0 ω_0/2[ 1-cos(π (t-τ-τ_0)/τ)] τ + τ_0 < t ≤ 2τ +τ_0 ω_0 t > 2τ +τ_0. It is necessary to ensure that the classical motion of all three ions leaves them at rest at the end of the separation. While the initial potential confines the ions, t∈ (-∞, τ], the potential minimum is located at the origin, w_i(t < τ) = 0. However, during the re-trapping period, we will allow the potential to move and apply forces to the D ions to slow them down. To this end, the potential minima near the D ions will follow w_1(t) = x_D1(t) - ηẋ_D1(t) and analogously for w_2 with x_D2. This introduces decelerating forces proportional to the ion velocities in the classical dynamics which slow the ions as they are recaptured and cease once the ions are at rest. §.§ State Preparation To achieve GGS, we can cool the motional modes close to their ground states and pre-compensate for the quantum mechanical effects of separation during t<0, then separate the crystal starting at t=0 to end in the ground states of all three final modes at t_f. We can separate (<ref>) into the time evolution of two states. The ip mode is completely decoupled from the rest of the Hamiltonian and can be evolved independently. The a and b modes are decoupled at t=0, couple during separation (implying that their motion is entangled) and decouple at t=t_f=2 τ+τ_0. They must be evolved together. 
The time evolution during separation can be further decomposed into suitable squeezing operations S(r_k,ϕ_k) on the k-th initial normal mode with (k={op,a,b}), rotation operations R(θ_k) on the k-th initial normal mode and a beam-splitting operation B_BS(θ_BS, ϕ_BS) between modes a and b as defined in (<ref>)-(<ref>). In the phase space coordinates introduced in <ref>, these operations take the form [ p_k; x_k ]_S = S(r_k, ϕ_k) [ p_k; x_k ]_0 = [ cosh(r_k) - sinh(r_k)cos(ϕ_k) ω_k sinh(r_k)sin (ϕ_k); 1/ω_ksinh(r_k)sin(ϕ_k) cosh(r_k) + sinh(r_k)cos(ϕ_k) ][ p_k; x_k ]_0 [ p_k; x_k ]_R = R(θ_k) [ p_k; x_k ]_0 = [ cosθ_k -ω_ksinθ_k; 1/ω_ksinθ_k cosθ_k ][ p_k; x_k ]_0 [ p_a; p_b; x_a; x_b ]_BS = [ cosθ_BS √(ω_a/ω_b)cos(ϕ_BS)sin(θ_BS) 0 √(ω_a ω_b)sin(ϕ_BS)sin(θ_BS); - √(ω_b/ω_a)cos(ϕ_BS)sin(θ_BS) cosθ_BS √(ω_a ω_b)sin(ϕ_BS)sin(θ_BS) 0; 0 -1/√(ω_aω_b)sin(ϕ_BS)sin(θ_BS) cosθ_BS √(ω_b/ω_a)cos(ϕ_BS)sin(θ_BS); -1/√(ω_aω_b)sin(ϕ_BS)sin(θ_BS) 0 - √(ω_a/ω_b)cos(ϕ_BS)sin(θ_BS) cosθ_BS ][ p_a; p_b; x_a; x_b ]_0 where ω_k =ω_k(0), and ω_a,b =ω_a,b(0). Writing the operators in this form requires application of the Baker-Campbell-Hausdorff theorem, and is covered in detail in <cit.>. The arguments for these operations are determined by the quantum dynamics of the system during separation. In the frame moving with the classical ion positions, these are governed by (<ref>), which we will consider in detail in the next section. To reasonably compensate for the beam-splitting dynamics and squeezing during separation, all three ions need to be in close proximity with substantial Coulomb coupling and resolvable differences in the mode frequencies. For this reason, we will only consider compensation before or during separation here. §.§ Time Evolution and Occupation Numbers §.§.§ The out-of-phase mode To map an arbitrary state in the initial well at frequency ω_op(0)=√(3)ω_0 to its equivalent state in the final well at frequency ω_op(t_f)=ω_0 the net effect of all operations must be equal to the scaling operator Q_op = [ √(ω_op(t_f)/ω_op(0)) 0; 0 √(ω_op(0)/ω_op(t_f)) ], which can also be thought of as undoing the effect of the squeezing incurred by the out-of-phase mode due to an instantaneous change in its frequency from ω_op(0) to ω_op(t_f). In terms of the covariance, the out-of-phase mode's initial ground state at the initial oscillator frequency, √(3)ω_0, is described by V_op(0) ≡V^(i)_op = [ √(3)ω_0/2 0; 0 1/2 √(3)ω_0 ], while the covariance matrix of the out-of-phase modes's final state after successful GGS is given by V_op(t_f) ≡V^(f)_op = [ ω_0/2 0; 0 1/2ω_0 ], and Q_op·V^(i)_op·Q^T_op=V^(f)_op as expected. The effect of separation on the out-of-phase mode can be calculated from h(t) in (<ref>) h_op(t) = [ 1 0; 0 ω_op^2(t) ], which can be inserted into (<ref>) to solve for M_op(t) numerically Ṁ_op(t) = - C_op·h_op(t) ·M_op(t) M_op(0) = I, to yield M_op(t_f)≡M^(f)_op. The Bloch-Messiah theorem states that M^(f)_op can be decomposed into squeezing operations and rotations of the form given in (<ref>) and (<ref>), each with unit determinant. Therefore, the inverse is (M^(f)_op)^-1 = [ m_22 -m_12; -m_21 m_11 ], where m_jk are the matrix elements of M^(f)_op. The covariance V^(p)_op at t=0, after pre-compensation but before separation is V^(p)_op≡(M^(f)_op)^-1·V_f ·[(M^(f)_op)^-1]^T. 
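In practice, the rotation–squeeze–rotation (Euler) factors of such a 2×2 symplectic transfer matrix can be extracted with a singular value decomposition. The sketch below works in dimensionless quadratures, which is an assumption: the S_p(r,ϕ) and R_p(θ) conventions used here carry explicit factors of the mode frequency, so p and x would first have to be rescaled by √ω before applying it.

import numpy as np

def euler_decompose(M):
    """Factor a 2x2 symplectic matrix (dimensionless quadratures, det M = 1) as
    M = R(theta2) @ S(r) @ R(theta1), with S(r) = diag(e^r, e^-r) a squeezer and
    R(theta) a rotation. The SVD's orthogonal factors are made proper rotations
    by a compensating sign flip if necessary."""
    U, s, Vt = np.linalg.svd(M)
    if np.linalg.det(U) < 0:          # absorb a reflection without changing the product
        U[:, 1] *= -1
        Vt[1, :] *= -1
    r = np.log(s[0])                  # singular values of a symplectic matrix come as e^r, e^-r
    theta2 = np.arctan2(U[1, 0], U[0, 0])
    theta1 = np.arctan2(Vt[1, 0], Vt[0, 0])
    return theta2, r, theta1

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

# round-trip check on an arbitrary symplectic matrix
M = rot(0.3) @ np.diag([np.exp(0.8), np.exp(-0.8)]) @ rot(-1.1)
t2, r, t1 = euler_decompose(M)
print(np.allclose(M, rot(t2) @ np.diag([np.exp(r), np.exp(-r)]) @ rot(t1)))  # True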
The general form of a pre-compensation operation is M^(p)_op =R_p(θ_2)·S_p(r,ϕ)·R_p(θ_1) =R_p(θ_2)·S_p(r,ϕ)·R_p(-θ_2)·R_p(θ_2)·R_p(θ_1) = S_p(r_p,ϕ_p)·R_p(θ), where θ_p = θ_1+θ_2, and r_p and ϕ_p = ϕ + 2 θ_2 are parameters that are not yet determined. Since V_0 is the covariance of a Gaussian state, R_p (θ) ·V_p(0) ·R^T_p (θ)=V_op(0) for any θ. After pre-compensation, the covariance matrix of the initial state takes the form V^(p)_op = S_p(r_p, ϕ_p) ·V^(i)_op·S^T_p(r_p, ϕ_p). By inserting (<ref>) into (<ref>) and comparing to (<ref>) element by element we find cosh(2r_p) = (m_12/ω_0)^2 + 3 m_11^2 + (m_22^2 + 3 m_21^2 ω_0^2)/2√(3) sin(ϕ_p) = - m_11m_12 + m_22m_21ω_0^2/ω_0 sinh2r_p. For a concrete example, we assume a BMB crystal with initial axial in-phase frequency ω_ip/2π = 1 MHz, where ω^2(t) is ramped down over τ = 0.365 μs. The ions are allowed to fly apart unimpeded for τ_0 = 1.1  μs, and the potentials are ramped back up over 2 τ = 0.73  μs. The required pre-squeezing parameters are r_p ≈ 1.597, and ϕ_p≈ -0.671. The total duration of separation is similar to the two-ion separation discussed in <cit.>, which is reasonable given that similar Coulomb repulsion forces, ion masses, and final relative distances are used. The correct parameters for pre-compensation and the numerical solution for M_op(t) allows us to compute the occupation number of the out-of-phase mode as a function of time with respect to the final well frequency ω_0 from t=0 to t_f, n_op_t = 1/2 V^op_11(t) + 1/2ω_0^2 V^op_22(t)/ω_0 - 1/2. Fig <ref> shows (a) the position and strength of the potential wells and the classical time evolution of the position of the three ions and (b) the out-of-phase mode's occupation number over time. §.§.§ The a and b Modes As with the out-of-phase mode, the initial state must be pre-compensated before separation starts at t=0, but now with a suitable combination of single mode squeezing operations for a and b individually as well as a two-port interferometer. The initial and final normal mode frequencies in (<ref>) are ω_a(0) = ω_0/√(10)√(13 + 21 m_D/m_H + √(512 m_D/m_H + (13 - 21 m_D/m_H)^2)) ≡α_a ω_0, ω_a(t_f) = ω_0, ω_b(0) = ω_0/√(10)√(13 + 21 m_D/m_H - √(512 m_D/m_H + (13 - 21 m_D/m_H)^2)) ≡α_b ω_0, ω_b(t_f) = √(m_D/m_H)ω_0 ≡β_b ω_0. When applied to modes in their ground states initially, the interferometer B(θ_1) in the decomposition (<ref>) of the pre-compensation operation has no effect. It is sufficient to squeeze the two modes followed by an interferometer, M^(p)_ab = B_ab(θ) [S_a(r_a,ϕ_a) ⊕S_b(r_b, ϕ_b)]. This can be further reduced by making use of (<ref>) and the insertion of rotation operators, remembering that rotations leave the ground state invariant, M^(p)_ab = B_BS(θ_BS, ϕ_BS) ·R_ab(θ_a, θ_b) [S_a(r_a,ϕ_a) ⊕S_b(r_b, ϕ_b)] R_ab(-θ_a, -θ_b) ·R_ab(θ_a, θ_b) = B_BS(θ_BS, ϕ_BS) [S_a(r_a,ϕ_a') ⊕S_b(r_b, ϕ_b')] R_ab(θ_a, θ_b) →B_BS(θ_BS, ϕ_BS) [S_a(r_a,ϕ_a') ⊕S_b(r_b, ϕ_b')] The covariance matrix of the initial state is V^(i)_ab = [ α_aω_0/2 0 0 0; 0 α_bω_0/2 0 0; 0 0 1/2α_aω_0 0; 0 0 0 1/2α_bω_0 ], and that of the final state is V^(f)_ab = [ ω_0/2 0 0 0; 0 β_b ω_0/2 0 0; 0 0 1/2 ω_0 0; 0 0 0 1/2 β_b ω_0 ]. For the the a and b modes, h(t) in (<ref>) takes the form h_ab(t) = [ 1 0 0 -θ̇(t); 0 1 θ̇(t) 0; 0 θ̇(t) ω_a^2(t) 0; -θ̇(t) 0 0 ω_b^2(t) ]. The equation of motion for M_ab is Ṁ_ab(t) = - C·h_ab(t) ·M_ab(t) M_ab(0) =I. 
and can be solved numerically with M_ab(t_f)≡M^(f)_ab and inverted to yield V^(p)_ab≡ (M^(f)_ab)^-1·V^(f)_ab·[(M^(f)_ab)^-1]^T, which can be compared to V^(p)_ab = M^(p)_ab·V^(i)_ab·[M^(p)_ab]^T, to find the correct parameters for pre-compensation numerically. The squeezing parameters required for GGS on the a and b modes in this BMB crystal are r_a ≈ 1.938 and r_b ≈ 1.483, respectively, with phases ϕ_a≈ -1.846 and ϕ_b≈ -2.902. The required beam-splitting parameters are θ_B ≈ 1.714, and ϕ_B ≈ -1.470. In analogy to the out-of-phase mode, the occupation numbers for both modes can be written as n_a_t = 1/2 V^ab_11(t) + 1/2ω_0^2 V^ab_33(t)/ω_0 - 1/2, n_b_t = 1/2 V^ab_22(t) + 1/2 (β_bω_0)^2 V^ab_44(t)/β_bω_0 - 1/2, and are shown in Fig. <ref>(c). §.§ Generalization to Larger Crystals The generalization of this procedure to larger crystals is not difficult but rather tedious. For example, a crystal such as DHHD will have four modes that decouple into two groups of coupled oscillators. Each group will have its own rotation angle θ_1,2 and appear in the same way as the rotation appears in (<ref>). In this way, state preparation can be done on the two groups individually through two groups of single mode squeezers and beam-splitters. The generalization to crystals such as D...DHD...D with N ions, for odd N, is again straightforward, but requires a slightly more complicated decoupling procedure where we end up with two groups. The first group contains X=(N+1)/2 coupled modes described by in-phase modes with participation from the N-1 data qubits as well as the helper ion. The second group contains Y=(N-1)/2 coupled out-of-phase modes for the N-1 data qubits in which the helper ion does not participate. The state preparation for such a crystal will require X single mode squeezers and an X-port interferometer on the first mode group as well as Y single mode squeezers and a Y-port interferometer on the second mode group. §.§ Time-scales for Pre-compensation Operations to pre-compensate the motional modes for GGS require a finite amount of time. Resonant squeezing of a motional mode at frequency ω_k can be accomplished by modulating the potential curvature at 2 ω_k. To our knowledge, the strongest experimentally demonstrated squeezing rate for a single motional mode is r=t/(3.2 μs) <cit.>. This rate can potentially be increased, but to keep the effects on spectator modes negligible, the duration of each squeezing operation must be substantially above 1 /(2 min[Δω_kl]) where Δω_kl is the frequency difference between modes k and l ≠ k. For mode frequencies between 1 MHz ≤ω_k ≤ 3 MHz we estimate a total duration not shorter than approximately 6 μs for all three squeezing operations. A balanced beam-splitter between two axial modes in a three ion BMB crystal was implemented in 17 μs in <cit.> by driving a suitable coupling potential at Δω_kl. Similar considerations about the spectator mode as for resonant squeezing apply, and we can estimate a duration no shorter than 2 μs for the beam-splitter operation. Under these assumptions, the total duration of pre-compensation and separation is on the order of 10 μs, substantially larger than the 2 μs required for the three ions to separate by 80 μm as shown in Fig. <ref>. 
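For readers who wish to reproduce the covariance propagation numerically, the sketch below integrates Ṁ_op(t) = -C_op h_op(t) M_op(t) for a single mode and evaluates the occupation number defined above along the trajectory. Two ingredients are assumptions of ours: C_op is taken to be the symplectic form [[0,1],[-1,0]] in the (p,x) ordering, which reproduces Hamilton's equations for h_op = diag(1, ω_op^2), and the frequency profile ω_op(t) is a smooth stand-in interpolating between √3 ω_0 and ω_0. The true profile and the pre-compensation step are omitted, so the final occupation is not expected to vanish here.

import numpy as np
from scipy.integrate import solve_ivp

w0 = 2 * np.pi * 1.0e6          # final op-mode frequency (rad/s), placeholder
wi = np.sqrt(3.0) * w0          # initial op-mode frequency
tf = 2.2e-6                     # total duration (s), placeholder

C = np.array([[0.0, 1.0], [-1.0, 0.0]])   # assumed symplectic form, (p, x) ordering

def w_op(t):
    # illustrative smooth ramp from wi down to w0 (not the physical profile)
    s = 0.5 * (1.0 - np.cos(np.pi * t / tf))
    return wi + (w0 - wi) * s

def rhs(t, m):
    M = m.reshape(2, 2)
    h = np.array([[1.0, 0.0], [0.0, w_op(t) ** 2]])
    return (-C @ h @ M).ravel()

sol = solve_ivp(rhs, (0.0, tf), np.eye(2).ravel(), rtol=1e-10, atol=1e-12,
                dense_output=True)

V_init = np.diag([wi / 2.0, 1.0 / (2.0 * wi)])   # ground state of the initial mode

def n_op(t):
    M = sol.sol(t).reshape(2, 2)
    V = M @ V_init @ M.T                          # covariance at time t
    return 0.5 * (V[0, 0] + w0 ** 2 * V[1, 1]) / w0 - 0.5

for t in np.linspace(0.0, tf, 5):
    print(f"t = {t:.2e} s,  n_op = {n_op(t):.4f}")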
§ THREE ION GGS THROUGH ON-THE-FLY COMPENSATION Since the pre-compensation removes effects from diabatic changes of the external potential, it seems reasonable that a more elaborate choice of dynamical external potential than just ramping down and back up can separate the ions and have all three modes start and end in their ground states in a duration that is on the same time-scale as the Coulomb expansion. This “on-the-fly compensation” will be explored next. Our starting point is the partially decoupled Hamiltonian (<ref>), assuming a BMB crystal. Separation will be driven by lowering the external potential to zero, similar to the separation in <ref>. In contrast to what is described in <ref>, the curvature of the trapping potential proportional to k_D(t) is increased starting at t=0 to squeeze and couple the modes while the ions experience substantial Coulomb interaction. Additionally, this modulation decreases the distance between the ions and effectively “tensions the springs” before separation. The curvature is ramped up and down quickly over τ_1 = 0.85 μs to near zero confinement such that the ions fly apart driven by their mutual Coulomb repulsion. As the trap is of small but nonzero strength with frequency ω_op = ω_0/30, the B ions reach a distance of 50  μm from their starting equilibrium position at t_catch∼ 1.4 μs, far enough apart to create individual potential minima close to the position of all three ions. The individual potential curvatures are increased and further modulated, and the positions of the B minima move further apart to near ± 80 μm ion distance while also providing a decelerating force that is proportional to the velocity of the B ions analogous to (<ref>) to bring them to rest over τ_2 = 1.4 μs. The modulation of the curvature transforms the final modes back to near their ground states at t = t_f = t_catch + τ_2 ≈ 2.8  μs after which the external potential is held constant. All put together, the time dependence of the well frequency that the B and M ions experience is ω_B,M(t) = ω_0 t < 0 ω_down(t) 0 ≤ t < τ_1 ω_op = ω_0/30 τ_1≤ t < t_catch ω_catchB,M(t) t_catch≤ t < t_catch + τ_2 ω_0 t_catch + τ_2 ≤ t =t_f where ω^2_B(t) = k_B(t)/m_B and m_B/m_Mω^2_M(t) = k_M(t)/m_M. The scaling of ω^2_M(t) by the ratio of masses is so that both ω_M(0) = ω_0 and k_M(0) = k_B(0) ≡ k_0 hold at t=0. We represent the time dependent parts of the profile, ω_m(t) (m ∈{down, catchB, catchM}), by a truncated Fourier series, which can provide a description of arbitrary time dependence with relatively few parameters (compared to other approaches such as splines) ω_m(t)/ω_0 = a_0 + ∑_ℓ=1^4(a_ℓcosπℓ t/2 τ_m + b_ℓsinπℓ t/2 τ_m) with a_ℓ and b_ℓ being the Fourier components, τ_m the amount of time the potential is modulated for, and ω_0 the initial well frequency. For the first modulation, ω_down, several of the Fourier components are constrained by the following boundary conditions, ω_down(0) = ω_0, ω'_down(0) = 0 ω_down(τ_1) = ω_op, ω_down'(τ_1) = 0, The minimum of the catching potential for the B moving in the positive direction moves along c_c(t) = c_B(t) - ηċ_B(t), where c_b(t) is the classical position of this B. For the B ion moving in the negative direction, the catching potential position is moving equal and opposite. Hence, the catching potentials implement a force to slow the ions that is proportional to their classical velocity. 
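The boundary conditions on ω_down pin down four of the nine Fourier coefficients once the others are chosen. The following Python sketch performs this bookkeeping; the choice of which coefficients are treated as free, and their numerical values, is arbitrary and only meant to illustrate the construction.

import numpy as np

def basis(t, tau):
    # [1, cos(pi l t / 2 tau), sin(pi l t / 2 tau)] for l = 1..4, and its t-derivative
    ls = np.arange(1, 5)
    f = np.concatenate(([1.0], np.cos(np.pi * ls * t / (2 * tau)),
                        np.sin(np.pi * ls * t / (2 * tau))))
    df = np.concatenate(([0.0],
                         -np.pi * ls / (2 * tau) * np.sin(np.pi * ls * t / (2 * tau)),
                         np.pi * ls / (2 * tau) * np.cos(np.pi * ls * t / (2 * tau))))
    return f, df

def constrained_ramp(free_vals, tau, w_start, w_end, w0,
                     free_idx=(1, 3, 4, 7, 8), fixed_idx=(0, 2, 5, 6)):
    # coefficient vector c = (a_0..a_4, b_1..b_4), in units of w0;
    # impose w(0)=w_start, w'(0)=0, w(tau)=w_end, w'(tau)=0 by solving for
    # the "fixed" coefficients (here a_0, a_2, b_1, b_2) given the free ones
    rows, rhs = [], []
    for (t, val, use_deriv) in [(0.0, w_start / w0, False), (0.0, 0.0, True),
                                (tau, w_end / w0, False), (tau, 0.0, True)]:
        f, df = basis(t, tau)
        rows.append(df if use_deriv else f)
        rhs.append(val)
    A = np.array(rows)
    rhs = np.array(rhs) - A[:, list(free_idx)] @ np.asarray(free_vals, dtype=float)
    c = np.zeros(9)
    c[list(free_idx)] = free_vals
    c[list(fixed_idx)] = np.linalg.solve(A[:, list(fixed_idx)], rhs)
    return c

w0 = 2 * np.pi * 1.0e6
tau1 = 0.85e-6
# arbitrary values for the free coefficients (a_1, a_3, a_4, b_3, b_4)
c = constrained_ramp([0.3, -0.5, -0.5, 0.0, 0.5], tau1, w0, w0 / 30.0, w0)
f0, d0 = basis(0.0, tau1)
f1, d1 = basis(tau1, tau1)
print(c)
print(f0 @ c, d0 @ c, f1 @ c, d1 @ c)   # should read 1, 0, 1/30, 0 (in units of w0)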
The time dependence of the catching potential frequencies will be described by another truncated Fourier series (<ref>) with altered boundary conditions: initial frequency ω_op and final frequency ω_0. If the unconstrained Fourier components of both the initial well modulation and catching potentials are chosen correctly, the modes will be left in their approximate ground states at t_f. Fig. <ref> illustrates one example of this procedure. The Fourier components are
(a_0, a_1, a_2, a_3, a_4)_down = (2.217, 0.3, -0.517, -0.5, -0.5), (b_1, b_2, b_3, b_4)_down = (-2.2, 0.1, 0.0, 0.5),
(a_0, a_1, a_2, a_3, a_4)_catchB = (24.2, 172.7, -206.5, -0.5, 11.1), (b_1, b_2, b_3, b_4)_catchB = (-202.3, -0.1, 9.5, 43.5),
(a_0, a_1, a_2, a_3, a_4)_catchM = (27.2, 229.5, -264.5, -0.5, 9.3), (b_1, b_2, b_3, b_4)_catchM = (-260.5, -0.5, 10.5, 57.5).
The parameters for this modulated separation were found using a Nelder-Mead numerical optimization scheme such that the total occupation number (n̅_op and n̅_a + n̅_b separately) was minimized. A perfect implementation of the waveform described by (<ref>) will give final occupation numbers of n̅_op ∼ 0.006, n̅_a ∼ 0.034, and n̅_b ∼ 0.11. As squeezed states in general can have long tails of occupation in the number basis, we also consider the number-basis populations P_n = |⟨n|ψ_f⟩|^2 for the single op-mode case and P_n,m = |⟨n_a m_b|ψ_f⟩|^2 in the coupled two-mode (ab) case. We state the first few non-zero probabilities here for completeness:

Mode      P_0      P_1      P_2       P_4          P_6
|ψ_op⟩   0.997    0        0.00291   1.28×10^-5   6.22×10^-8

Mode      P_0,0    P_1,1    P_0,2     P_2,0        P_2,2
|ψ_ab⟩   0.937    0.0219   0.0326    0.00297      0.00103

Here we see that states with odd total quantum number are disallowed, as would be expected for squeezed states. The squeezing parameters for each of the three modes can readily be determined from a Bloch-Messiah decomposition of the final states and are given by r_op ∼ 0.0788, r_a ∼ 0.0289 and r_b ∼ 0.365. This example shows that modulation during separation is feasible and GGS separation can be executed on the same time scale that describes the Coulomb expansion of a BMB crystal when the potential frequency is dropped from ω_0 to zero (the B ions need approximately 1.3 μs to reach ± 80 μm distance from the M ion in this case). The most time consuming portion in our example is catching the ions, and while we have not attempted this, it is likely that the entire separation could be sped up further with clever optimization, for example by increasing η from the value of 0.4 μs used in our example or by compressing the crystal even further before release. §.§ Occupation Number Response to Error in Fourier Components An important consideration for any transport or separation protocol is its robustness to error in its implementation. Here, we address this by introducing random deviations of the Fourier series coefficients. These deviations are supposed to represent random uncontrolled noise, as opposed to static errors that can be compensated for by better calibration. We will introduce random deviations from the intended value in each component in (<ref>) sampled from a uniform distribution with maximum deviation equal to a fraction of the largest Fourier components in each waveform. Results are presented for maximum fractional deviations of 10^-5 and 5×10^-5 of the largest Fourier components. We Monte-Carlo sample from these distributions to produce average final state populations to characterize the robustness to error.
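The sampling scheme can be summarized in code as follows. Each coefficient is shifted by a uniform draw whose half-width is a fixed fraction of the largest coefficient of its waveform; the function final_populations is only a placeholder for the full separation simulation described above and is not implemented here.

import numpy as np

rng = np.random.default_rng(0)

# nominal coefficients (a_0..a_4, b_1..b_4) for each waveform, as listed above
coeffs = {
    "down":   np.array([2.217, 0.3, -0.517, -0.5, -0.5, -2.2, 0.1, 0.0, 0.5]),
    "catchB": np.array([24.2, 172.7, -206.5, -0.5, 11.1, -202.3, -0.1, 9.5, 43.5]),
    "catchM": np.array([27.2, 229.5, -264.5, -0.5, 9.3, -260.5, -0.5, 10.5, 57.5]),
}

def perturb(coeffs, frac, rng):
    # uniform deviations with half-width frac * (largest coefficient of each waveform)
    out = {}
    for name, c in coeffs.items():
        half_width = frac * np.max(np.abs(c))
        out[name] = c + rng.uniform(-half_width, half_width, size=c.shape)
    return out

def final_populations(perturbed):
    # placeholder: propagate the classical trajectories and covariances with
    # these waveforms and return the final number-basis populations
    raise NotImplementedError

samples = [perturb(coeffs, 1e-5, rng) for _ in range(1000)]
# averaging final_populations over such samples yields the reported mean populations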
Recall that during the catching period of this algorithm, the catching potential minima followed the Be^+ ions through (<ref>). However, in a realistic scenario, this trajectory would be pre-computed based on an ideal implementation of the Fourier components in (<ref>) which are now disturbed by random error. This random error causes our catching potentials to imperfectly slow down the ions, which will in turn oscillate in the final well. The final state can be computed by performing a coherent displacement back into the lab frame through a displacement operator, D_f = exp{i[m_B(c_B1'(t_f) - w_B1'(t_f))x_B1 + m_B(c_B2'(t_f) - w_B2'(t_f))x_B2 - (c_B1(t_f) - w_B1(t_f))p_B1 - (c_B2(t_f) - w_B2(t_f))p_B2]}. where x_Bi are the original unscaled operators. For simplicity, we do not allow for any asymmetry between the two catching potentials during the slowdown period for the two Be^+ ions, i.e. c_B1 = -c_B2. This has the effect of canceling any classical motion effects at the end of transport on the M ion's classical position as well as the a and b modes, which are essentially COM modes, while still providing a realistic assessment of the final state of the op mode with noisy potentials. Finally, if the classical trajectory is measured in units of meters and seconds, the displacement of the scaled op-mode operators is given by, D_f x_op D_f = x_op + √(2m_B/ħ) (c_B1(t_f) - w_B1(t_f)) D_f p_op D_f = p_op + √(2/ħ m_B) m_B (c_B1'(t_f) - w_B1'(t_f)). Table <ref> shows the average final transition probability of the op mode while Fig. <ref> and Table <ref> show the average final transition probabilities of the a and b modes when the error is 1×10^-5 and 5× 10^-5 of the largest Fourier components. Notably, there is non-negligible occupation in the n=1 state of the op mode, which implies that classical motion cannot be neglected even for such a small amount of noise in the waveforms. If the final classical motion were to be neglected, the final occupations of the op mode would be more akin to the a and b modes where the only residual occupation is due to imperfect squeezing and mode-mixing. The second row in Table <ref> makes this even more apparent when the error is allowed to increase to 5×10^-5 and we see non-negligible occupation of the n=3 state. Here we again note that no asymmetry has been introduced into the B ion catching potentials which would disturb the classical motion of the M ion. This has been done for simplicity, so any real implementation of this protocol must take asymmetric perturbations into account as states like P_0,1 and P_1,0 will have non-zero and non-negligible occupation. We also note that the same precision in realizing the external potentials is required in implementations with pre-compensation and simple ramps, such as the one discussed in section <ref> and <cit.>. This analysis highlights the exquisite control over the Fourier components of the waveform necessary to reliably achieve fast GGS. Slower implementations will improve the robustness, with one limit represented by the essentially adiabatic separation methods that have already been demonstrated. Initial experimental implementations will likely be only slightly faster than adiabatic and durations will drop as control improves. Small deviations from the ground state can be removed by re-cooling the DHD crystal on the H ion <cit.> but recooling must be rapid to not defeat the purpose of fast separation, and therefore the modes should have as low as possible occupations after separation. 
§ CONCLUSIONS Rapid separation of a DHD three ion crystal leads to squeezing and mode-mixing that needs to be carefully managed to end in the quantum mechanical ground states of all three axial normal modes of the separated ions. To achieve GGS, the effects of separation can either be pre-compensated by suitable squeezing and mode-mixing operations before separation, or more expediently be mitigated by precisely controlled modulation of the potential during separation. Importantly, we find that there is not a significant time penalty for the latter method when compared to Coulomb expansion after the external trapping potential is instantaneously set to zero. An example implementation uses an efficient Fourier representation of the trapping potential and ideally terminates with all three axial modes close to the ground state. The time dependence of the external potential is smooth, but requires very precise control that will take effort to implement with realistic driving electronics. Uncontrolled stray potential fluctuations will also need to be carefully minimized. Utilizing the Coulomb repulsion to drive the ions apart rather than narrow separation electrodes that create a “wedge” between ions <cit.> may help with reducing the complexity and electrode count of traps that are suitable for GGS separation. Separating D ions out of a DHD crystal trapped in a single well allows for the D ions to travel through a larger trap array on their own so they can be paired with other D ions in subsequent gate operations, while the H ions stay in place. This approach could mitigate issues from transporting groups of ions of different mass through junctions in a large ion trap array <cit.> and simplifies transport in general, since all transport primitives besides separation only need to be implemented for the D species. In this context, it is worth noting that recombination of two D ions that approach a stationary H from opposite directions is the time reversal of DHD separation. Since the equations of motion are invariant under time reversal, ground state to ground state recombination can be accomplished by running the separation protocol backwards in time. Performing imperfect separation and recombination multiple times will lead to error accumulation from imprecision in the electrode control, higher order than quadratic terms in the potential energy and fluctuating stray potentials, but small amounts of excess energy can be removed by cooling the H ion when the DHD crystal is confined in a single external well <cit.>. Further refinement of GGS can reduce the duration of subsequent cooling to a minimum. Although we do not explicitly show how the procedure in <ref> generalizes to crystals with more than three ions, we describe its general implementation and how the effects of separation can be decomposed into single mode squeezing and multi-port interferometers. In general, external potential modulations can accomplish diabatic squeezing and mode-mixing operations not just for ground states but could be used for implementing complex Gaussian operations on any initial motional state of groups of ions, with the potential for further generalization to all 3N motional modes of N ions. 
This could be of interest in realizing error correction codes and quantum logical operations on bosonic qubits realized in the motion of ion crystals <cit.> and for realizing entangled states of ion internal degrees of freedom generated by coupling them to non-classical states of the motion with Jaynes-Cummings type interactions <cit.>. The authors would like to thank Tyler Sutherland for useful discussions. The work of DL and DS was supported by the NIST Quantum Information Program. The work of TG and SL was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.
http://arxiv.org/abs/2406.17708v1
20240625164655
Forecast Relative Error Decomposition
[ "Christian Gourieroux", "Quinlan Lee" ]
econ.EM
[ "econ.EM" ]
§ ABSTRACT We introduce a class of relative error decomposition measures that are well-suited for the analysis of shocks in nonlinear dynamic models. They include the Forecast Relative Error Decomposition (FRED), Forecast Error Kullback Decomposition (FEKD) and Forecast Error Laplace Decomposition (FELD). These measures improve on the traditional Forecast Error Variance Decomposition (FEVD) because they account for nonlinear dependence in both a serial and cross-sectional sense. This is illustrated by applications to dynamic models for qualitative data, count data, stochastic volatility and cyberrisk. Keywords: Nonlinear Forecast, Predictive Distribution, Learning, FEVD, FRED, Kullback Measure, Laplace Transform, Count Data, Stochastic Volatility, Cyberrisk. JEL Codes: C01, C32, C53 § INTRODUCTION Since its introduction in the time series literature [see e.g. Doob (1953), Whittle (1963)], the Forecast Error Variance Decomposition (FEVD) has been largely used in macroeconomics to analyse the effects of shocks and their propagation at different horizons in time [see e.g. Lanne, Nyberg (2016), Isakin, Ngo (2020)]. This approach is easy to implement at the cost of serial and cross-sectional linearity assumptions. It is typically applied to linear dynamic models such as Structural Vector Autoregressions (SVAR) with independent and identically distributed innovations. Moreover, the variance is just a local measure of uncertainty when this uncertainty is small [see Pratt (1963), Arrow (1965) for the definition of risk aversion], and this local interpretation is the basis of the short term mean-variance portfolio management [Markowitz (1952), (2000)]. The goal of our paper is to extend the standard FEVD in the two following directions: [1] First, to account for nonlinear dynamic models (nonlinear serial dependence), which involve asymmetric effects, conditional heteroscedasticity, switching regimes, or extreme risks. [2] Second, to allow not only for the prediction of the future value of the process, but also for the prediction of its nonlinear transforms (nonlinear cross-sectional dependence), in the short, medium and long run. Section 2 provides the background on the variance decomposition formula, its role in defining the standard FEVD, and the local interpretation of this decomposition. In Section 3, we extend the FEVD measure in a nonlinear dynamic framework.
We first introduce a general decomposition for the relative forecast errors on a positive transformation of the process of interest, which leads to the so-called Forecast Relative Error Decomposition (FRED). We then apply this decomposition to the updating (i.e. learning) of the forecasts and propose two decompositions based on selected characterizations of the predictive distribution: (i) The Forecast Error Kullback Decomposition (FEKD) is a measure based on the transition density. (ii) The Forecast Error Laplace Decomposition (FELD) is a measure based on the conditional Laplace transform. We demonstrate the suitability of these new decompositions for nonlinear dynamic models in Section 4. We begin with the standard linear (Gaussian) Vector Autoregression (VAR) and compare our measures with the traditional FEVD. Then, we consider different examples in which the nonlinear features are due to the structural interpretation of the process of interest, which can be qualitative, a sequence of count variables, be positive valued, or even in the case of observed volatility-covolatility features be valued as a symmetric semi-positive definite matrix. Numerical illustrations are provided in Section 5, and statistical inference is discussed in Section 6. An application to count observations on cyberattacks is provided in Section 7 and we conclude in Section 8. Technical details are provided in the appendices and online appendices. § VARIANCE ANALYSIS AS A LOCAL APPROXIMATION This section introduces two preliminary lemmas that are useful in understanding the Forecast Error Variance Decomposition and its interpretation. §.§ Decomposition of Variance Let us consider a random vector Y of dimension n and an information set (σ-algebra) I such that Z_t is measurable with respect to I_t. If Y is square integrable[that is, 𝔼|| Y ||^2<∞, where ||·|| denotes the Euclidean norm.]: 𝕍[Y] = 𝕍[𝔼(Y|I)] + 𝔼[𝕍(Y|I)], where 𝔼(Y|I) (resp. 𝕍(Y|I)) denotes the expectation (resp. variance-covariance matrix) of Y conditional on the information set I. This decomposition can be written as: 𝔼[(𝔼(Y|I)-𝔼(Y))(𝔼(Y|I)-𝔼(Y))']=𝕍[Y]-𝕍[𝔼(Y|I)], where ' denotes a transpose. It relates the multivariate risk of prediction updating (left hand side) to the updating of the variances of the error forecasts (right hand side). The matrix equations (<ref>)-(<ref>) can be decomposed by components. They lead to the one dimensional variance decompositions as: 𝕍(Y_i)=𝕍[𝔼(Y_i|I)] + 𝔼[𝕍(Y_i|I)], i=1,...,n. as well as one dimensional covariance decompositions: Cov(Y_i,Y_j) = Cov[𝔼(Y_i|I),𝔼(Y_j|I)] + 𝔼[Cov(Y_i,Y_j|I)], i,j=1,...,n, i≠ j, in which the information set is not specific to a given component. It is easily checked that these additive decompositions of the variances and covariances do not imply a simple decomposition of the associated correlations ρ(Y_i,Y_j)=Cov(Y_i,Y_j)/√(𝕍(Y_i)𝕍(Y_j)) in terms of the conditional correlations. §.§ Forecast Error Variance Decomposition We can extend the decomposition of variance to any multivariate stochastic process (Y_t) and its increasing sequence of information sets (or filtration) (I_t), where I_t=(Y_t) is the σ-algebra generated by the present and lagged values of the process. Indeed, the forecast errors at horizon h can be written as a sum of multivariate forecast updates: Y_t+h-𝔼(Y_t+h|I_t) = [Y_t+h-𝔼(Y_t+h|I_t+h-1)]+[𝔼(Y_t+h|I_t+h-1)-𝔼(Y_t+h|I_t+h-2)] + ... + [𝔼(Y_t+h|I_t+1)-𝔼(Y_t+h|I_t)] = ∑_k=0^h-1 [𝔼(Y_t+h|I_t+k+1)-𝔼(Y_t+h|I_t+k)]. 
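The decomposition of variance above lends itself to a quick simulation check. The following Python sketch, based on an arbitrary bivariate design of our own, compares the two sides of the decomposition.

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# conditioning information: a scalar factor F; Y given F is bivariate Gaussian
F = rng.normal(size=n)
mean_Y_given_F = np.column_stack([0.5 * F, -0.3 * F])            # E(Y | I)
sd_Y_given_F = np.column_stack([1.0 + 0.2 * np.abs(F), 0.7 * np.ones(n)])
Y = mean_Y_given_F + sd_Y_given_F * rng.normal(size=(n, 2))

lhs = np.cov(Y, rowvar=False)                                    # V(Y)
term1 = np.cov(mean_Y_given_F, rowvar=False)                     # V(E(Y|I))
term2 = np.diag((sd_Y_given_F ** 2).mean(axis=0))                # E(V(Y|I)), diagonal by construction
print(lhs)
print(term1 + term2)                                             # matches up to Monte Carlo error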
By the optimality of the conditional expectations, the forecast updatings are uncorrelated conditional on I_t. Then, we get the FEVD: 𝕍[Y_t+h|I_t] = ∑_k=0^h-1𝕍{[𝔼(Y_t+h|I_t+k+1)-𝔼(Y_t+h|I_t+k)]|I_t} = ∑_k=0^h-1𝔼{𝕍[𝔼(Y_t+h| I_t+k+1)| I_t+k]| I_t}. This decomposition is often written for a strictly stationary process (Y_t), which admits an infinite strong moving average[Since our objective is an extension of the FEVD to a nonlinear dynamic framework, it is important to consider strong moving average models, that are defined from strong white noise, instead of weak moving average models where the noise is just assumed to be zero mean, with fixed variance and no serial correlation.] representation of the form: Y_t=∑_j=0^∞ A_j ε_t-j, A_0 = Id, where ε_t is a strong white noise, that is a sequence of i.i.d. random vectors, with zero mean and variance-covariance Σ, and the moving average coefficients satisfy the square integrability condition ∑_j=0^∞|| A_j ||^2 < ∞. When this moving average representation is invertible, the process also admits an (infinite) autoregressive representation: ∑_j=0^∞Φ_j Y_t-j = ε_t, Φ_0 = Id, and the information generated by the current and lagged values of the process (Y_t) is equal to the information generated by the current and lagged values of the noise (ε_t), that is: I_t = (Y_t)=(ε_t). Then, the FEVD becomes: 𝕍[Y_t+h|Y_t]=𝕍[Y_t+h|ε_t]=∑_k=0^h-1 A_kΣ A_k', since: 𝔼[𝕍(Y_t+h|I_t+k)|I_t] = 𝕍[𝔼(Y_t+h|I_t+k+1)-𝔼(Y_t+h|I_t+k)|I_t] = 𝕍[A_h-k-1ε_t+k+1|I_t]=A_h-k-1Σ A'_h-k-1. In this special moving average case with iid noise, the decomposition is path independent, that is, independent of the values of process (Y_t) before time t. These decompositions in the strong moving average case, that is in the strong linear dynamic case, were initially analyzed in a systematic way by Whittle (1963) [see also Doob (1953)]. §.§ Local Analysis of Risk The comparison of multivariate quantitative risks (i.e. the losses) X and Y is based on the notion of stochastic dominance at order 2. Y is riskier than X if and only if 𝔼[υ(Y)] ≥𝔼[υ(X)], for any increasing convex function[This stochastic dominance can be equivalently defined in terms of preference, with utility functions u that are increasing and concave.] υ (such that the expectations exist) [see Rothschild, Stiglitz (1970), Vickson (1975), Fishburn, Vickson (1978)]. In the univariate case, it is well known that the quadratic function y → y^2 that underlies the definition of variance is convex, but not increasing (except if X and Y are nonnegative). Then the variance is not directly appropriate for measuring risk. However, we have the following local expansion. Let us assume that υ is increasing and convex such that υ(0)=0, and the variable Y is close to 0, with 0 mean. We have: (i) 𝔼[υ(Y)] ≥ 0. (ii) 𝔼[υ(Y)] ≈ 1/2 Tr[∂^2 υ(0)/∂ y ∂ y' 𝕍(Y)], where Tr denotes the trace operator. Proof: (i) This is a consequence of Jensen's inequality. (ii) Indeed we have: υ(Y) ≈ dυ(0)/dy' Y + 1/2 Y'∂^2 υ(0)/∂ y ∂ y' Y, and by taking the expectation of both sides: 𝔼[υ(Y)] ≈ 𝔼[1/2 Y'∂^2 υ(0)/∂ y ∂ y' Y] = 1/2𝔼[Tr(Y'∂^2 υ(0)/∂ y ∂ y' Y)] = 1/2𝔼[Tr(∂^2 υ(0)/∂ y ∂ y' YY')] (by commuting within the trace) = 1/2 Tr[∂^2 υ(0)/∂ y ∂ y' 𝔼(YY')] = 1/2 Tr[∂^2 υ(0)/∂ y ∂ y' 𝕍(Y)]. Since Y is assumed close to zero, the variance-covariance matrix is seen as a local risk measure for small risks and by construction does not account for asymmetric risks (which would require an expansion up to order 3), or for extreme risks (which would require an expansion up to order 4).
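For a concrete case of the FEVD above, consider a strong VAR(1) model Y_t = Φ Y_t-1 + ε_t, whose moving average coefficients are A_j = Φ^j. The sketch below computes 𝕍[Y_t+h|I_t] = ∑_k=0^h-1 A_k Σ A_k' together with the individual terms; the numerical values of Φ and Σ are those used in the illustration of Section 5.1.

import numpy as np

Phi = np.array([[0.5, 0.1],
                [0.2, 0.6]])          # autoregressive matrix of the illustration
Sigma = np.array([[1.0, 0.2],
                  [0.2, 1.0]])        # innovation covariance of the illustration

def fevd_terms(Phi, Sigma, h):
    # A_k = Phi^k for a strong VAR(1); one term A_k Sigma A_k' per forecast update
    terms = []
    A = np.eye(Phi.shape[0])
    for _ in range(h):
        terms.append(A @ Sigma @ A.T)
        A = A @ Phi
    return terms

h = 5
terms = fevd_terms(Phi, Sigma, h)
total = sum(terms)                    # = V(Y_{t+h} | I_t)
for k, T in enumerate(terms):
    print(f"k = {k}: diagonal of A_k Sigma A_k' = {np.diag(T)}")
print("total forecast error variance (diagonal):", np.diag(total))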
This second–order expansion has been the basis for defining the local version of absolute risk aversion[In the multidimensional framework, the risk aversion is a matrix directly linked to the Hessian at 0 of function v, that is, ∂^2v(0)/∂ y ∂ y'.] [Arrow (1965)], or for justifying the mean-variance portfolio management in finance [Markowitz (1952), (2000)][In the standard financial application with Y as a vector of asset returns, the interest is in the risk of the portfolio returns α'Y, where α is the vector of portfolio allocations in values. Then the risk becomes scalar, that is we have υ(Y)=u(α'Y), where u is defined on ℝ. We deduce that the matrix of risk aversion ∂^2υ(0)/∂ y ∂ y'=d^2u(0)/dw^2αα' is of rank 1. It involves the effect of portfolio allocation and a scalar risk aversion.]. Since the risk on a price evolution y=p_t+h-p_t increases generally with the term, the mean-variance management is appropriate in the short run, that is with frequent predictive updating, and frequent learning. § FORECAST RELATIVE ERROR DECOMPOSITION (FRED) As mentioned in section 2.3, the standard FEVD has two drawbacks: (i) While it is appropriate for a local analysis of risk, it is not appropriate for asymmetric risks, or extreme risks. (ii) Moreover it is usually applied to pointwise predictions of Y, not to predictions of nonlinear transformations of Y. The aim of this section is to provide decompositions that are more appropriate for prediction and learning in a nonlinear dynamic framework. We first provide a general FRED decomposition. This decomposition is applied to one dimensional positive transformations of the process, that are either transition densities, or conditional Laplace transforms of process (Y_t), leading to the so-called Forecast Error Kullback Decomposition (FEKD) and Forecast Error Laplace Decomposition (FELD), respectively. §.§ Relative Forecast Updating Let us consider a univariate positive process (Z_t) and the sequence (I_t) of increasing information sets such that Z_t is measurable with respect to I_t. We can construct an analogue of the FEVD by considering the relative forecast update: 𝔼(Z_t+h|I_t+h-1)/𝔼(Z_t+h|I_t+h), k=0,...,h-1. Then we have: Z_t+h/𝔼(Z_t+h|I_t) = ∏_k=0^h-1[𝔼(Z_t+h|I_t+k+1)/𝔼(Z_t+h|I_t+k)], By taking the log of both sides and then the conditional expectation given I_t, we get the Forecast Relative Error Decomposition (FRED): 𝔼{log[𝔼(Z_t+h|I_t)/Z_t+h]| I_t} = ∑_k=0^h-1𝔼{log[𝔼(Z_t+h|I_t+k)/𝔼(Z_t+h|I_t+k+1)]| I_t}, where each term in the decomposition is nonnegative. Proof: The nonegativity is a consequence of Jensen's inequality. For instance, let us consider the left hand side. Then: 𝔼{log[𝔼(Z_t+h|I_t)/Z_t+h]| I_t} = 𝔼{-log[Z_t+h/𝔼(Z_t+h|I_t)]| I_t} ≥ -log𝔼{[Z_t+h/𝔼(Z_t+h|I_t)]| I_t} (By Jensen's Inequality applied to the convex function -log(·)) = - log 1 = 0. The proof is similar for the terms in the right hand side, noting that 𝔼(· | I_t) = 𝔼(𝔼(· |I_t+h)|I_t) by the Law of Iterated Expectation. The FRED is locally a variance decomposition written on the relative error forecasts. Indeed we have locally: -log[Z_t+h/𝔼(Z_t+h|I_t)] =-log{1+Z_t+h-𝔼(Z_t+h|I_t)/𝔼(Z_t+h|I_t)}, and 𝔼{-log[Z_t+h/𝔼(Z_t+h|I_t)]| I_t} ≈1/2𝔼{[Z_t+h-𝔼(Z_t+h|I_t)/𝔼(Z_t+h|I_t)]^2| I_t} = 1/2𝕍[Z_t+h-𝔼(Z_t+h|I_t)/𝔼(Z_t+h|I_t)| I_t], since 𝔼[Z_t+h-𝔼(Z_t+h|I_t)/𝔼(Z_t+h|I_t)| I_t]=0. 
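As an additional illustration of the FRED, not taken from the text, consider the positive transformation Z_t = exp(Y_t) of a Gaussian AR(1) process Y_t = φ Y_t-1 + σ ε_t. Here 𝔼(Z_t+h|I_t) = exp(φ^h Y_t + σ_h^2/2) with σ_h^2 = σ^2(1-φ^2h)/(1-φ^2), both sides of the decomposition are available in closed form, and the sketch below checks that they coincide.

import numpy as np

phi, sigma, h = 0.8, 0.5, 6

def s2(m):
    # conditional variance of Y_{t+m} given I_t for the Gaussian AR(1)
    return sigma**2 * (1 - phi**(2 * m)) / (1 - phi**2)

# Left-hand side: E{ log[ E(Z_{t+h}|I_t) / Z_{t+h} ] | I_t }
#   = E[ phi^h Y_t + s2(h)/2 - Y_{t+h} | I_t ] = s2(h)/2  (the Y_t terms cancel)
lhs = 0.5 * s2(h)

# Generic term: gamma(k,h|I_t) = E{ log[ E(Z|I_{t+k}) / E(Z|I_{t+k+1}) ] | I_t }
#   = 0.5 * (s2(h-k) - s2(h-k-1)) = 0.5 * sigma^2 * phi^(2(h-k-1))
terms = [0.5 * (s2(h - k) - s2(h - k - 1)) for k in range(h)]

print(lhs, sum(terms))
assert np.isclose(lhs, sum(terms))    # the decomposition holds exactly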
Therefore, the main difference between the FRED and the FEVD is that locally the variance expansion is written on the relative forecast errors instead of the absolute forecast errors and can be applied to any positive transformation Z_t of the multivariate process (Y_t). The FRED (<ref>) can be written as: γ(h|I_t) = ∑_k=0^h-1γ(k,h|I_t), where the left hand side of equation (<ref>) γ(h|I_t), say, measures the risk on the prediction errors at horizon h, and the generic term of the right hand side, γ(k,h|I_t), has a forward interpretation. More precisely, we have: γ(k,h|I_t) = 𝔼{log[𝔼(Z_t+h|I_t+k)/𝔼(Z_t+h|I_t+k+1)]| I_t } = 𝔼[𝔼{log[𝔼(Z_t+h|I_t+k)/𝔼(Z_t+h|I_t+k+1)]| I_t+k}| I_t]. Therefore, γ(k,h|I_t) is the expectation at date t of the risk on the short run prediction error of Z_t+h at date t+k. As usual, this forward interpretation involves three dates: t, t+k and t+h. §.§ Forecast Error Kullback Decomposition (FEKD) To account for cross-sectional nonlinearities and avoid pointwise predictions, a first solution is to consider the transition predictive densities. Let us assume a strictly stationary Markov process (Y_t) with transition density at horizon h and time t given by f(y,h|Y_t). These densities are positive transforms of Y_t. They are well defined for horizons h≥ 1, but not at horizon h=0, where the value Y_t is perfectly known and the predictive distribution degenerates into a point mass at Y_t. Up to this degeneracy, we deduce from Lemma 4 the Forecast Error Kullback Decomposition below: For any value y, we have: 𝔼{log[f(y,h|I_t)/f(y,1|I_t+h-1)]| I_t} = ∑_k=0^h-2𝔼{log[f(y,h-k|I_t+k)/f(y,h-k-1|I_t+k+1)]| I_t}. We get a decomposition that involves measures of risk on the updating of predictive densities. When Y is a continuous variable, the FEKD is invariant by one-to-one differentiable transformations of Y. Indeed, due to the ratios, the Jacobian effect disappears in the decomposition formula. The generic term on the right hand side of the FEKD is: γ(k,h|y,I_t) =𝔼{log[f(y,h-k|I_t+k)/f(y,h-k-1|I_t+k+1)| I_t+k]| I_t}, that is the conditional Kullback proximity measure (or contrast, or divergence) between the two conditional densities. A main difference between the FEKD and FEVD is that the decomposition (<ref>) can be written for any value of y, whereas there is a single FEVD. Typically, let us consider a univariate stationary process (Y_t). From T observations Y_1,...,Y_T, we can derive the sample distributions, then the sample deciles, and compare the FEKD evaluate at difference deciles. This will provide a more detailed analysis of risk with clear applications to the dynamic analysis of inequality, or the analysis of financial risks (where these quantiles are usually called Value-at-Risk (VaR)). When the process (Y_t) has a dimension larger than 2, the notion of quantile has not yet been defined, and it is less clear how to select an appropriate grid of values y to which the FEKD will be applied. A solution can be to apply the approach by deciles to combinations of components of Y_t, if such combinations have an economic meaning. Combinations are more easily treated by means of Laplace transforms, as seen in the next subsection. Note that the FEKD can exist in cases where the FEVD has no meaning. Indeed, the FEVD requires that the observed Y_t are square integrable, so it cannot be applied to data with fat tails. We give below in Section 4.1.2 the example of the Cauchy AR(1) model, where the FEKD exists, but not the FEVD. 
§.§ Forecast Error Laplace Decomposition (FELD) Let us consider a process (Y_t) of dimension n. Its conditional Laplace transform at horizon h is defined by: Ψ(u,h|I_t) = 𝔼[exp(-u'Y_t+h)|I_t], where u∈ D ⊂ℝ^n and such that the expectation exists on domain D. It is known that the knowledge of the Laplace transform is equivalent to the knowledge of the conditional distribution for the Gaussian case, or if the process satisfies some positivity restrictions [see Feller (1971) and the examples in Section 4]. Then we can apply the FRED to these nonlinear transformations. 𝔼{log[Ψ(u,h|I_t)/exp(-u'Y_t+h)]| I_t} = ∑_k=0^h-1𝔼{log[Ψ(u,h-k|I_t+k)/Ψ(u,h-k-1|I_t+k+1)]| I_t}, for any u∈ D, where D is the domain of arguments in ensuring the existence of the conditional Laplace transforms. As for the FEKD, the FELD (<ref>) defines several decompositions: γ(h|u,I_t)=∑_k=0^h-1γ(k,h|u,I_t), as many as selected combinations u, u∈ D. In some applications such combinations can have economic interpretations to define portfolio allocations when Y_t is a vector of n asset prices at date t, or to combine income and wealth in the analysis of inequality. When (Y_t) satisfies some “positivity" restrictions, the Laplace transform with positive argument u characterizes the distribution. Moreover, we have 𝔼[exp(-u'Y_t+h)| I_t]≤ 1, and the FELD always exists, even if the distribution of Y_t has very fat tails. §.§ Decomposition of Risk Premium The FELD can be interpreted as a decomposition of risk premiums for spot prices. Consider a decision maker with exponential utility function u(y)=-exp(-uy). The certainty equivalent π(u) is the level of wealth which makes the decision maker indifferent from the expected outcome of a lottery on Y. That is: π(u) = -1/ulog𝔼[exp(-uY)]≡ -1/ulogΨ(u), which is a function of parameter u, which is the Arrow-Pratt scalar measure of risk aversion. Equivalently, for a risky asset with price Y_t > 0, we can write: π(u,h|I_t) = -1/ulogΨ(u,h|I_t), which is the spot value of the asset at time t and horizon h. Intuitively, it is a contract written at time t, which captures the value of delivering the asset at some future horizon h. For an investor with risk aversion parameter u, the FELD in (<ref>) can be written as: 𝔼[-1/ulogΨ(u,h|I_t)+Y_t+h| I_t] = ∑_k=0^h-1𝔼{-1/ulogΨ(u,h-k|I_t+k) + 1/ulogΨ(u,h-k-1|I_t+k+1)| I_t} 𝔼[Y_t+h-π(u,h|I_t)|I_t] = ∑^h-1_k=0𝔼[π(u,h-k|I_t+k)-π(u,h-k-1|I_t+k+1)| I_t] π(u,h|I_t) - 𝔼(Y_t+h|I_t) = ∑^h-1_k=0𝔼[π(u,h-k-1|I_t+k+1)-π(u,h-k|I_t+k)| I_t]. Let us now consider the economic interpretations of this decomposition formula. The term π(u,h|I_t) - 𝔼(Y_t+h|I_t) is the difference between the value (price) of Y_t+h at date t and its historical conditional expectation. This difference is positive (by Jensen's inequality) and usually interpreted as a risk premium. Therefore, (<ref>) provides a decomposition of this risk premium. More precisely, the term π(u,k,h|I_t)=𝔼[π(u,h-k|I_t+k)| I_t ] is the value of a forward contract of the asset, written at time t, for a payment at time t+k and delivery at time t+h. Hence, the generic term on the RHS of (<ref>) captures the difference in values π_f of the forward contracts for payment at time t+k and t+k+1 for the delivery of the asset at time t+h. 
As h varies, we get a decomposition of the term structure for the risk premium, or equivalently of the spot value (price) as: π(u,h|I_t) = 𝔼(Y_t+h|I_t)+ ∑^h-1_k=0𝔼[π_f(u,k+1,h|I_t+k+1)-π_f(u,k,h|I_t+k)| I_t], that is compatible with the no dynamic arbitrage condition between spot and forward contracts. There is a debate on the valuation approach to be chosen for contingent assets [see e.g. Embrechts (2000)]. For financial assets traded on very liquid markets, this is usually done by introducing a stochastic discount factor to satisfy the no dynamic arbitrage opportunity condition. The situation is different for individual insurance contracts or for operational risks [see the discussion of cyber risk in Section 7]. The certainty equivalent principle is a more appropriate valuation approach in such frameworks and we have checked ex-post that it is compatible with the no arbitrage opportunity assumption. § EXAMPLES In this section, we consider different dynamic models for which we derive closed form decompositions for either the FEKD, or the FELD. §.§ Examples of FEKD §.§.§ The Gaussian VAR(1) Let us assume that the n-dimensional stationary process (Y_t) satisfies: Y_t = Φ Y_t-1 + ε_t, where the eigenvalues of Φ have a modulus strictly smaller than 1 and the ε_t's are i.i.d. Gaussian ε_t ∼ N(0,Σ). Then, the conditional distribution of Y_t+h given Y_t is Gaussian with mean Φ^hY_t and variance-covariance matrix Σ_h = Σ + ΦΣΦ'+...+Φ^h-1Σ(Φ')^h-1. The following proposition provides the closed form FEKD for the Gaussian VAR(1). In the Gaussian VAR(1) model, the FEKD is of the form: a(h|Y_t) + b(h|Y_t)y+y'c(h|Y_t)y = ∑_k=0^h-2[a(h,k|Y_t) + b(h,k|Y_t)y+y'c(k,h|Y_t)y], where: a(h,k|Y_t) = 1/2log[Σ_h-k-1/Σ_h-k] + 1/2[Σ^-1_h-k-1Φ^h-k-1Σ_k+1(Φ^h-k-1)'-Σ^-1_h-kΦ^h-kΣ_k(Φ^h-k)'] -1/2Y_t'(Φ^h)'(Σ^-1_h-k-Σ^-1_h-k-1)Φ^hY_t, b(h,k|Y_t) = Y_t'(Φ^h)'(Σ^-1_h-k-Σ^-1_h-k-1), c(h,k|Y_t) = - 1/2(Σ^-1_h-k-Σ^-1_h-k-1). Proof: See the Appendix A.1.1. For the Gaussian VAR(1) the FEVD is: Σ_h - Σ = ∑_k=0^h-2[Σ_h-k - Σ_h-k-1]. The FEKD in Proposition 3 differs from the FEVD, since it depends on Y_t and on the argument y. Its functional quadratic form in y provides information on the prediction errors and their decomposition in the tails when we focus on the components in y and the squares. The decomposition of the intercept is just to balance the change in tails, since all the predictive distributions are unit mass. The component in y depends on Y_t, which corresponds to the forward interpretation of the elements in the decomposition. The quadratic element is independent of Y_t due to the conditional homosecedasticity of the Gaussian VAR(1) model. This element is written on the inverses of the prediction variance, not on the prediction variances as in the FEVD. This difference is the analogue of the two equivalent filters, introduced by Kalman for the linear state space model. Indeed, the standard covariance filter is based on the direct updating of Σ_h, whereas the information form of the filter coincides with the direct updating of the inverse Σ^-1_h (called information). §.§.§ The Cauchy AR(1) Let us consider the stationary univariate process (Y_t) defined by: Y_t = φ Y_t-1 + σε_t, |φ|<1, where (ε_t) is a Cauchy distributed strong white noise. This distribution admits the density f(ε)=1/π1/1+ε^2, and its characteristic function is given by: 𝔼(exp(iuε))= exp(-|u|), where i is the imaginary number. 
Then we have: Y_t+h = φ^h Y_t + σ(ε_t+h+φε_t+h-1+...+φ^h-1ε_t+1) ≡φ^hY_t + ε_t,h, where: 𝔼[exp(iuε_t,h)] = 𝔼{exp[iu(σε_t+h+...+σφ^h-1ε_t+1)]} = exp[-|u|(σ+σ|φ|+...+σ|φ|^h-1)] = exp[-|u|σ1-|φ|^h/1-|φ|]. Therefore the conditional distribution of Y_t+h given Y_t is a Cauchy distribution with drift φ^hY_t and scale σ1-|φ|^h/1-|φ|. We deduce that: f(y,h-k|I_t+k)/f(y,h-k-1|I_t+k+1)=1-|φ|^h-k-1/1-|φ|^h-k1+[(y-φ^h-k-1Y_t+k+1)/(σ1-|φ|^h-k-1/1-|φ|)]^2/1+[(y-φ^h-kY_t+k) /(σ1-|φ|^h-k/1-|φ|)]^2, and then: γ(k,h|y,I_t) = log[1-|φ|^h-k-1/1-|φ|^h-k] +𝔼{log[1+((y-φ^h-k-1Y_t+k+1)/(σ1-|φ|^h-k-1/1-|φ|))^2]| I_t} -𝔼{log[1+((y-φ^h-kY_t+k)/(σ1-|φ|^h-k/1-|φ|))^2]| I_t}. This is the generic term of the RHS of the FEKD. In this framework the conditional variance at horizon h, that is 𝕍(Y_t+h|Y_t), does not exist and the FEVD does not exist as well. However, the elements γ(k,h|I_t) in the decomposition above exist, since the transformed variable log(α+βε_t^2) is integrable with respect to the Cauchy distribution. §.§.§ Markov Chain Let us start by considering a stationary Markov chain with two states 0,1, or equivalently, a stationary Markov binary time series (Y_t). The transition matrix of the chain can be parameterized by the marginal probability π = P(Y_t =1) and a persistence parameter λ, 0 ≤λ < 1. We have: P(Y_t+1=1|Y_t) = 𝔼(Y_t+1|Y_t) = π + λ (Y_t-π), Y_t=0,1. Then by iterated expectations we deduce: P(Y_t+h=1|1,Y_t) = 𝔼(Y_t+h|Y_t) = π + λ^h(Y_t-π), Y_t=0,1. For a binary Markov chain with states 0 and 1, the generic term of the right hand side of the FEKD (<ref>) for y=1 is: γ(k,h|Y_t)= log[1-λ^h-k/1-λ^h-k-1] + log[π+λ^h-k(1-π)/π(1-λ^h-k)][π+λ^k(Y_t-π)] - log[π+λ^h-k-1(1-π)/π(1-λ^h-k-1)][π+λ^k+1(Y_t-π)], which is in the form of α(h,k)Y_t + β(h,k). Proof: See Appendix A.1.2. Such univariate binary processes have attracted considerable attention to describe the business cycle recession/expansion periods [see e.g. Estrella, Mishkin (1998), Chauvet, Potter (2005), or Kauppi, Saikkonen (2008)]. This term can be compared with the generic term in the standard FEVD. The FEVD for the binary Markov chain is given by: 𝕍{𝔼[Y_t+h|I_t+k+1]-𝔼[Y_t+h|I_t+k]| I_t } = π(1-π)λ^2(h-k-1)[1-λ^2]- (Y_t-π)λ^2h-k-1[1-2π][1-λ]. Proof: See Appendix A.1.3. Now we can deduce the extension to the general case. Suppose Y_t is a stationary Markov chain with n possible states. Let X_t = (X_1,t,...,X_n,t)', with X_i,t=1, if Y_t is in state i, and zero otherwise, for i=1,...,n. The knowledge of Y_t is equivalent to the knowledge of X_t, which can take on the values: (1,0,...0)', (0,1,0,...,0),...,(0,...,0,1)'. Let us characterize its h-step transition probabilities with the matrix P^h, whose elements are defined as p^(h)_ij = P(Y_t+h=i|Y_t=j) for all t≥0. Then, the FEKD is given by the following proposition. For a Markov chain with n states, the generic term on the right hand side of the FEKD is: γ(h,k|I_t+k) = [log(P^h-k)_yP^k+1-log(P^h-k-1)_yP^k]X_t, where log(P^h) is a matrix whose elements are the logged elements of P^h and A_y denotes the y-th row of matrix A. Proof: See Appendix A.1.4. The result presented in Propostion 4 is a special case of Proposition 5. The expression of the FEKD for the binary Markov chain in (<ref>) is a special case of Proposition 5 where: P^h= [ p^(h)_00 p^(h)_01; p^(h)_10 p^(h)_11; ]= [ 1-[π(1-λ^h)] 1-[π+λ^h(1-π)]; π(1-λ^h) π+λ^h(1-π); ]. Proof: See Appendix A.1.5. 
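The Markov chain case can be verified numerically. The sketch below builds the two-state transition matrix from (π, λ), computes each FEKD term directly from its definition as the conditional expectation of a log-ratio of predictive probabilities, and checks that the terms sum to the left-hand side; the parameter values are arbitrary.

import numpy as np

pi_, lam = 0.3, 0.6                   # marginal probability and persistence, arbitrary
h, y, x0 = 6, 1, 1                    # horizon, evaluation state, current state Y_t

# column-stochastic one-step transition matrix: P[i, j] = P(Y_{t+1} = i | Y_t = j)
P = np.array([[1 - pi_ * (1 - lam), 1 - (pi_ + lam * (1 - pi_))],
              [pi_ * (1 - lam),      pi_ + lam * (1 - pi_)]])

def Pk(k):
    return np.linalg.matrix_power(P, k)

def gamma(k):
    # E{ log[ f(y, h-k | I_{t+k}) / f(y, h-k-1 | I_{t+k+1}) ] | Y_t = x0 }
    d_k = Pk(k)[:, x0]                # distribution of Y_{t+k} given Y_t = x0
    d_k1 = Pk(k + 1)[:, x0]           # distribution of Y_{t+k+1} given Y_t = x0
    t1 = d_k @ np.log(Pk(h - k)[y, :])
    t2 = d_k1 @ np.log(Pk(h - k - 1)[y, :])
    return t1 - t2

# total: E{ log[ f(y, h | I_t) / f(y, 1 | I_{t+h-1}) ] | Y_t = x0 }
total = np.log(Pk(h)[y, x0]) - Pk(h - 1)[:, x0] @ np.log(P[y, :])
print(total, sum(gamma(k) for k in range(h - 1)))    # the two numbers coincide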
§.§ Examples of FELD The FELD has a simple form and is easy to interpret for the dynamic affine model, which is called the Compound Autoregressive (CaR) model in discrete time [see Duffie, Filipovic, Schachermayer (2003), Darolles, Gourieroux, Jasiak (2006)]. We first review the CaR models and some of their dynamic properties. Then, we discuss in detail the application to Gaussian processes, Integer Autoregressive (INAR) models, Negative Binomial Autoregressive (NBAR) models, Markov Chains, Autoregressive Gamma and Wishart processes. Remember that the FELD has an interpretation as a decomposotion of risk premium where Y is a value (see Section 3.4). §.§.§ Dynamic Affine Model The process is assumed Markov of order 1 with a conditional log-Laplace transform which is affine in the conditioning value[The extension to a Markov of order p is straightforward.] Y_t. Thus, the Laplace transform at horizon 1 can be written as: Ψ(u,1|I_t) = Φ(u,1|Y_t) = 𝔼[exp(-u'Y_t+1)|Y_t] = exp{-a(u)'Y_t + c(u)-c[a(u)]}, where c(u) =log𝔼[exp(-u'Y_t)] is the unconditional log-Laplace transform of Y_t and the function a(·) captures all nonlinear serial dependence features. The affine property remains satisfied at any forecast horizon. More precisely, we have: Ψ(u,h|Y_t) = exp{-a^∘ h(u)' Y_t + c(u) - c[a^∘ h(u)]}, where a^∘ h(·) is function a(·) compounded h times with itself. Then, the FELD becomes: 𝔼[u'Y_t+h-a^∘ h(u)'Y_t+c(u)-c[a^∘ h(u)]| I_t ] = ∑_k=0^h-1𝔼{a^∘ (h-k-1)(u)Y_t+k+1-a^∘ (h-k)(u)'Y_t+k+c[a^∘ (h-k-1)(u)]-c[a^∘ (h-k)(u)]| I_t }, ∀ u. This decomposition involves the conditional expectations of Y_t+k given Y_t and can be written as: u'𝔼(Y_t+h|Y_t) - a^∘ h(u)'Y_t + c(u) -c[a^∘ h(u)] = ∑_k=0^h=1{a^∘(h-k-1)'(u)𝔼(Y_t+k+1|Y_t)-a^∘(h-k)'(u)𝔼(Y_t+k|Y_t)} + c[a^∘(h-k-1)(u)]- c[a^∘(h-k)(u)], ∀ u. Whereas the FELD involves the conditional expectations only, the nonlinear dynamic features are taken into account by the “weightings" of the expectation that depend on the function a(·), that summarizes the nonlinear serial dependence. It is known that the log-Laplace transform admits a Taylor expansion in terms of cumulants in a neighborhood of u=0. In particular, the first-order expansion of the conditional log-Laplace transform shows that the conditional expectation is affine in the conditioning variable Y_t. More precisely, we have: 𝔼(Y_t+1|Y_t) = -dc/du(0) + da'/du(0) [Y_t + dc/du(0)]. and 𝔼(Y_t+h|Y_t) = -dc/du(0) +[da'/du(0)]^h[Y_t + dc/du(0)]. Then we deduce a closed form FELD for dynamic affine models. For dynamic affine models, the FELD takes the form: {u'[da'/du(0)]^h - a^∘ h(u)'}Y_t - u'dc/du(0)+ u'[da'/du(0)]^h[dc/du(0)] + c(u) - c[a^∘ h(u)] = ∑_k=0^h-1{a^∘ (h-k-1)(u)'[da'/du(0)]^k+1-a^∘ (h-k)(u)'[da'/du(0)]^k}Y_t+(a^∘ (h-k)(u)'-a^∘ (h-k-1)(u)')[dc/du(0)] + (a^∘ (h-k)(u)'[da'/du(0)]^k-a^∘ (h-k-1)(u)'[da'/du(0)]^k+1)[dc/du(0)] + c[a^∘ (h-k-1)(u)]-c[a^∘ (h-k)(u)], ∀ u. Proof: See Appendix A.2.1. Therefore, we get a decomposition of the type: α(h,u)'Y_t + β(h,u) = ∑_k=0^h-1[α(h,k,u)'Y_t + β(h,k,u)], or equivalently decompositions of the functions α(h,u) and β(h,u) into ∑_k=0^h-1α(h,k,u) and ∑_k=0^h-1β(h,k,u) respectively. These decompositions depend on both the term h and the argument u. The decomposition of function α(h,u) is especially appealing due to its interpretation in terms of nonlinear Impulse Response Functions (IRF). Indeed, let us consider a given shock of magnitude δ on the level Y_t. Note that in our nonlinear framework, the magnitude of the shock δ is constrained by the domain of Y_t. 
It can be multivariate if Y_t is multivariate, constrained to be either 0, or -1 (resp. 0, or 1) if Y_t is binary with value 1 (resp. value 0), and so on. We do not discuss the sources of the shock and if they are identifiable or controllable. The effect on the predictive distribution at horizon h is α(h,u)'δ. Compared to the standard linear approach of the IRF, we see that the IRF depends on the argument u. In other words, this measure changes with the preference (risk aversion) of the analyst. Moreover, for a given u, the decomposition will change, since in a nonlinear dynamic forecast, the spot and forward short run updatings of the predictive distributions will differ. The FELD can be easily compared to the FEVD. Let us consider the one-dimensional case for expository purposes. In an affine dynamic model, the conditional variance 𝕍(Y_t+h|I_t+k) is an affine function of Y_t+k, and thus the generic term in the FEVD (<ref>) is also an affine function of Y_t. More precisely, we get: 𝔼{𝕍[𝔼(Y_t+h| I_t+k+1)| I_t+k]| I_t} = ∑_j=1^N {(∂ a'(0)/du)^h-k-1∂^2a_j(0)/∂ u ∂ u'(∂ a(0)/du')^h-k-1[-dc(0)/du_j+((∂ a'(0)/∂ u)^k(Y_t+dc(0)/du))_j]} + (∂ a'(0)/∂ u)^h-k-1∂^2b(0)/∂ u ∂ u'(∂ a(0)/∂ u')^h-k-1. Proof: See Appendix A.2.2. §.§.§ Strong Linear VAR(1) Model Let us consider the strong VAR(1) model: Y_t=Φ Y_t-1 +ε_t, where the ε_t's are i.i.d with the log-Laplace transform: log𝔼[exp(-u'ε_t+1)] = b(u), The conditional Laplace transform is: 𝔼[exp(-u'ε_t+1)|Y_t] = exp[-u'Φ Y_t +b(u)]. When the eigenvalues of Φ have a modulus strictly smaller than 1, we can write the infinite moving average representation of process (Y_t) as: Y_t = ∑_j=0^∞Φ^j ε_t-j. Therefore, the unconditional log-Laplace transform of process (Y_t) is: c(u) = ∑_j=0^∞ b[(Φ')^ju]. The strong VAR(1) is an affine model, with a(u)=Φ'u and a^∘ h(u)=(Φ')^hu. Since the dynamic is linear and the function a(·) is also linear, the FELD is greatly simplified. We get: For a strong linear VAR(1) model, the FELD is: 𝔼{log[𝔼[exp(-u'Y_t+h)|I_t]/exp(-u'Y_t+h)]| I_t} = ∑_k=0^h-1b[(Φ')^ku]. Due to the linear dynamic, the FELD does not depend on the value Y_t of the conditioning variable. However, there are still effects on the decomposition of the cross-sectional heterogeneity, that is the non-Gaussian distribution of the errors (ε_t). If the error is Gaussian ε_t ∼ IIN(0,Σ), we get b(u)=u'Σ u/2. Then, the right hand side of the decomposition becomes: ∑_k=0^h-1 b[(Φ')^k u] = 1/2∑_k=0^h-1(u'Φ^k Σ(Φ')^ku) = 1/2u'∑_k=0^h-1Φ^k Σ(Φ')^ku We recover the right hand side ∑_k=0^h-1Φ^k Σ(Φ')^k in the FEVD for the VAR(1) model (see equation (2.8)). To summarize: * In a strong linear dynamic model, the FELD does not depend on the conditioning variable. * The decomposition is equivalent to the FEVD only if the white noise is Gaussian. §.§.§ Markov Chain Let us return to the Markov chain example discussed in section 4.1.2 with n states. The conditional log-Laplace transform is given by: logΨ(u,h|X_t) = log[exp(u)P^h]X_t, where u=(u_1,...,u_n)' and log(A) is a matrix whose elements are the logged elements of matrix A. Then the FELD is given by the following proposition: For a stationary Markov chain (Y_t) with n states and transition matrix P, the FELD is of the form: 𝔼{log[Ψ(u,h|I_t)/exp(u'Y_t+h)]| I_t} = ∑_k=0^h-2[log(exp(u)P^h-k)P^k-log(exp(u)P^h-k-1)P^k+1]X_t, for all u=(u_1,...,u_n)'. Proof: See Appendix A.2.3. §.§.§ INAR Model For expository purposes, let us consider the Integer Autoregressive model (INAR) of order 1 introduced by McKenzie (1985), Al-Osh, Azaid (1987). 
The process is defined by: Y_t = ℬ_t(p) ∘ Y_t-1 + ε_t, with ℬ_t(p) ∘ Y_t-1=∑_j=1^Y_t-1U_j,t, where the variables ε_t, U_j,t, j,t varying, are independent, U_j,t follows the same Bernoulli distribution ℬ(1,p) and ε_t the Poisson distribution P(λ), with 0 ≤ p < 1 and λ≥0. By convention, ∑_j=1^0U_j,t = 0. The INAR model is a CaR model, with a(u) = -log[p exp(-u)+1-p], c(u)=-λ/(1-p)[1-exp(-u)]. In particular, the marginal distribution of Y_t is Poisson P(λ/(1-p)). It is easily checked that: a^∘ h(u)=-log[p^h exp(-u)+1-p^h]. For the INAR(1) model, the FELD is given by: 𝔼{log[𝔼[exp(-u'Y_t+h)|I_t]/exp(-u'Y_t+h)]| I_t} = {p^h u+log[1-p^h+p^h exp(-u)]}Y_t + λ(1-p^h)/(1-p)[u-1+exp(-u)] = ∑_k=0^h-1{[p^k log(1-p^h-k+p^h-k exp(-u))-p^k+1 log(1-p^h-k-1+p^h-k-1 exp(-u))]Y_t + λ(1-p^k)/(1-p) log(1-p^h-k+p^h-k exp(-u)) - λ(1-p^k+1)/(1-p) log(1-p^h-k-1+p^h-k-1 exp(-u)) - λ[1-exp(-u)] p^h-k-1}. Proof: See Appendix A.2.4. §.§.§ Cox, Ingersoll, Ross and Autoregressive Gamma Process Let us consider the univariate Autoregressive Gamma Process of order 1, which is the time discretized Cox, Ingersoll, Ross model [see Gourieroux, Jasiak (2006)]. This dynamic model is the benchmark for the dynamic analysis of short run interest rates, or for one dimensional stochastic volatility. This is an affine model where the transition distribution is constructed from a gamma distribution with path dependent stochastic degree of freedom. It depends on two parameters: β≥ 0, δ≥0, and the Laplace transform corresponds to a(u)=β u/(1+u), c(u)=-δlog[1+u/(1-β)]. In particular, the marginal distribution of the process is such that: (1-β)Y_t∼γ(δ). It is easily checked that a^∘ h(u)=β^h u/[1+(1-β^h)/(1-β) u] and that da/du(0) = β. Therefore, we get the following FELD decomposition of the functional IRF (the decomposition of the intercept β(h,u) is provided in Appendix A.2.5). For the ARG(1) process, the FELD leads to the decomposition of: α(h,u) = uβ^h[1-1/(1+(1-β^h)/(1-β) u)], into ∑_k=0^h-1α(h,k,u) where: α(h,k,u) = uβ^h [1/(1+(1-β^h-k-1)/(1-β) u)-1/(1+(1-β^h-k)/(1-β) u)]. Proof: See Appendix A.2.5. Hence, the proportion α(h,k,u)/α(h,u) is given by: α(h,k,u)/α(h,u)= [1/(1+(1-β^h-k-1)/(1-β) u)-1/(1+(1-β^h-k)/(1-β) u)]/[1-1/(1+(1-β^h)/(1-β) u)]. As u increases, the numerator is a positive decreasing function, while the denominator is an increasing function of u. Hence, the proportion α(h,k,u)/α(h,u) is a decreasing function of u as well. §.§.§ The Wishart Process The analysis of risk measured by volatility can be extended to the multivariate framework and volatility-covolatility matrices. This leads to the Wishart Autoregressive (WAR(1)) process that is the multivariate analogue of the ARG(1) process [see Gourieroux, Jasiak, Sufana (2009) for discrete time and Cuchiero et al. (2011) for continuous time]. Due to the matrix framework and the positivity satisfied by a volatility-covolatility matrix, the conditional Laplace transform is usually written as: Ψ(Γ,1| Y_t) = 𝔼[exp(-Tr(Γ Y_t+1))| Y_t], where Γ is a matrix of arguments assumed symmetric positive semi-definite and Tr denotes the trace operator that sums up the diagonal elements of a square matrix. Then we get: For the WAR(1) process, the FELD leads to the generic element: α(h,k,Γ) = Tr{(M^h)'[Γ(Id+2Σ_h-kΓ)^-1-Γ(Id+2Σ_h-k-1Γ)^-1]M^h Y_t}. Proof: See Appendix A.2.6. It can be checked that this element reduces to the element in the FELD of the ARG (see Section 4.2.5) in the one dimensional case. § ILLUSTRATIONS We now provide some empirical illustrations of the theory presented in the preceding section.
§.§ FEKD for Gaussian VAR(1) Process We consider a bivariate VAR(1), Y_t=(Y_1,t,Y_2,t)', with autoregressive parameter Φ = [ 0.5 0.1; 0.2 0.6; ]. The eigenvalues of matrix Φ are λ_1=0.7 and λ_2=0.4, so the model admits a stationary solution for (Y_t). Let the covariance matrix of ε_t be given by Σ = [ 1 0.2; 0.2 1; ], that is, the innovations have unit variance and are positively correlated. The first illustration of interest is how the total FEKD, that is, the left hand side of (<ref>), varies with different values of (Y_t,y). In fact, this total relies on how “far" the value y is from the mean of the conditional predictive density at time t, which is equal to Φ^hY_t. To see this, let us define the Mahalanobis distance between y and Φ^hY_t, given by: d(y,Φ^hY_t) = √((y-Φ^hY_t)'Σ_h^-1(y-Φ^hY_t)), which is an extension of the standard deviation in a multivariate setting. For expository purposes, suppose Y_t = (2,1)' and h=10. Then, Φ^h(2,1)'=(0.028,0.56)'. We plot in Figure 1 the relationship between the total FEKD in (<ref>) and the distance d(y,Φ^hY_t). The total FEKD increases exponentially with respect to d(y,Φ^hY_t). This means that values of y further from the mean Φ^hY_t will have a larger marginal increase in total FEKD. Moreover, note that two points with the same d(y,Φ^hY_t) will also have the same total FEKD. For example, the points y=Φ^h(2,2)'=[0.038, 0.075]' and y=Φ^h(2,0)'=[0.019, 0.038]' both have a Mahalanobis distance of 0.0101 and a total FEKD of 0.1415. However, even if two points share the same Mahalanobis distance, their decompositions can be very different. Let us return to the values y=Φ^h(2,2)'=[0.038, 0.075]' and y=Φ^h(2,0)'=[0.019, 0.038]'. In the first two graphs of Figure 2, we illustrate the decompositions of b(h|Y_t)y and y'c(h|Y_t)y respectively. It is clear that the point Φ^h(2,2) has stronger effects on both components. However, when we consider the total sum b'(h|Y_t)y+y'c(h|Y_t)y, it is not surprising that both y=Φ^h(2,2)'=[0.038, 0.075]' and y=Φ^h(2,0)'=[0.019, 0.038]' produce an identical number, as shown in the bottom left graph. Indeed, since the total FEKD is the same for both points and component α(h|Y_t) is independent of y, the sum b'(h|Y_t)y+y'c(h|Y_t)y must also be the same for two points with the same Mahalanobis distance. Furthermore, we note that the comparison of decomposition for these two points is somewhat inconsequential in understanding the dynamics of the VAR(1), since in the bottom right graph, we see that the sum b'(h,k|Y_t)y+y'c(h,k|Y_t)y contributes a very small fraction of the total FEKD in comparison to a(h,k|Y_t). On the other hand, this latter remark is not always valid for all y. Indeed, let us now illustrate an example of points that have a much higher Mahalanobis distance from the mean Φ^hY_t, that is, in the tail ends of the predictive density. For instance, consider y=Φ^h(2,-99)'=[-0.91, -1.83]' and y=Φ^h(2,101)'=[0.97, 1.94]', which have a Mahalanobis distance of 1.0146. In Figure 3, we can see again that the decomposition of components b'(h|Y_t)y and y' c(h|Y_t)y can be very different, even if two points share the same Mahalanobis distance. Also, note that the components of y' c(h|Y_t)y are much larger in magnitude than the components of b(h|Y_t)y, which suggests that y' c(h|Y_t)y (the quadratic term) is much more informative on dynamic behaviour in the tails. Furthermore, as seen in the last graph of Figure 3, the sum b'(h|Y_t)y+y'c(h|Y_t)y now takes up a much larger fraction of the total FEKD. 
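The ingredients of these comparisons can be reproduced directly from the model. A short sketch (variable names are ours) computing Σ_h, the conditional mean Φ^h Y_t and the Mahalanobis distance d(y,Φ^h Y_t) for the pairs of points discussed above:

import numpy as np

Phi = np.array([[0.5, 0.1], [0.2, 0.6]])
Sigma = np.array([[1.0, 0.2], [0.2, 1.0]])
Y_t = np.array([2.0, 1.0])
h = 10

# Sigma_h = Sigma + Phi Sigma Phi' + ... + Phi^{h-1} Sigma (Phi^{h-1})'
powers = [np.linalg.matrix_power(Phi, j) for j in range(h)]
Sigma_h = sum(P @ Sigma @ P.T for P in powers)
mean_h = np.linalg.matrix_power(Phi, h) @ Y_t          # conditional mean Phi^h Y_t

def mahalanobis(y, mean, cov):
    d = y - mean
    return np.sqrt(d @ np.linalg.solve(cov, d))

# points of the form y = Phi^h z; within each pair the distances coincide
for z in (np.array([2.0, 2.0]), np.array([2.0, 0.0]),
          np.array([2.0, -99.0]), np.array([2.0, 101.0])):
    y = np.linalg.matrix_power(Phi, h) @ z
    print(y, mahalanobis(y, mean_h, Sigma_h))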
Hence, the decomposition is much more sensitive to the choice of y in the tails than it is near the mean. §.§ FELD for INAR(1) Model Let us consider the INAR(1) process and its FELD derived in Corollary 6, eq. (<ref>). The decomposition depends on the current value Y_t, on the mean of the Poisson innovation λ, on the persistence parameter p, on the risk aversion u, and on the horizon h. For expository purposes, we set Y_t=3 and λ=2 in the illustrations, noting that they only enter the FELD as scaling constants and do not change the properties of the decomposition. In Figure 4, we present graphs depicting the relationship between the total FELD and the risk aversion parameter u in [0.1,2.8], under different values of persistence parameter p. The lines in each figure represent the FELD for different horizons, with darker shades representing later horizons. For instance, the black line represents the total FELD for horizon 10. In general, the total FELD is an increasing function of u. That is, when a decision maker is more risk averse, the total information gain at a future horizon is also higher, measured by means of the relative change in the Laplace transform of the process. The FELD is also influenced by the level of persistence in the INAR(1). When persistence is low, the information gain in each horizon is similar; in the bottom right graph for p=0.05, the line for each horizon is almost stacked on top of each other, whereas the lines in the later horizons are higher in the top left graph for p=0.99. Indeed, persistence also influence the steepness of the FELD in each horizon. In Figure 5, we consider decompositions of the FELD at horizons h=1,...,10, based on a grid of values in u=(0.5,1,2) and p=(0.1,0.5,0.95). The height of each bar in the graphs correspond to the value of the FELD on the left-hand side of (<ref>). They are decomposed into smaller bars with different shades, with the darker shades corresponding to a larger k on the right-hand side of (<ref>). The risk aversion parameter only influences the total FELD amount, but it has no impact on the decomposition of its components. Instead, the persistence of the process influnces the FELD in two ways. Firstly, at higher levels of persistence, there is higher contribution of short term updates in previous horizons. For example, let us consider the FELD for horizon 10 at u=2 (i.e. the graphs in the third row). When there is almost no persistence (p=0.1), the bar at horizon 10 is completely in black, meaning that only the update from horizon 9 to 10 contributes to the total FELD. When persistence is high (p=0.95), the bar at horizon 10 contains various shades of grey, which means that updates from previous horizons (e.g. 1 to 2, 2 to 3, 3 to 4, etc.) also contribute to the total FELD. This is also depicted in Figure 6, where the FELD is instead taken as a fraction out of 100. Secondly, for lower levels of persistence, the total FELD seems to “converge" towards a long-run value. This is due to the stationarity of the INAR(1) process. Indeed, although it is not visible in this illustration, the FELD also converges to a long-run value for p=0.95, albeit at a much further horizon. We plot in Table 1 the limiting values of the FELD as h tends to infinity for various combinations of (p,u). To no surprise, the limiting values are increasing in both u and p. Another decomposition of interest is the separation of the FELD into the time-varying component α and the fixed component β. 
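For the INAR(1) process this split follows directly from Corollary 6: we read the coefficient of Y_t as the time-varying part and the remainder as the fixed part. A minimal sketch of its computation (this reading of the α/β split and the function names are ours):

import numpy as np

def inar_feld_split(p, lam, u, Y_t, h):
    # left-hand side of Corollary 6: alpha(h,u) * Y_t (time-varying) + beta(h,u) (fixed)
    alpha = p**h * u + np.log(1.0 - p**h + p**h * np.exp(-u))
    beta = lam / (1.0 - p) * (1.0 - p**h) * (u - 1.0 + np.exp(-u))
    return alpha * Y_t, beta

p, lam, u, Y_t = 0.7, 2.0, 3.0, 3
for h in range(1, 11):
    tv, fixed = inar_feld_split(p, lam, u, Y_t, h)
    print(h, tv + fixed, tv, fixed)      # total FELD, time-varying part, fixed part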
This is exemplified in Figure 7 below for the INAR(1) process with (p,u)=(0.7,3). Although the FELD is increasing over horizon h, it is the fixed component that plays a gradually larger role in comparison to the time varying component. Indeed, we expect to see this behaviour of the α component, since it has the interpretation of an impulse response function; for stationary processes, the IRF should exhibit transitory behaviour rather than long-run or permanent changes. Looking at a more granular decomposition of the α and β componenents themselves, there are no special patterns that differ greatly from the overall FELD. §.§ FELD for ARG(1) Process Let us now illustrate the properties of FELD under the ARG(1) process, with focus on the terms α(h,u) and α(h,k,u). In Figure 8 below, we present the relationship between the total FELD term α(h,u) and risk aversion u for varying levels of parameter β, which characterizes the serial dependence. Let us first consider the cases where β is relatively high, say above β = 0.9. For all horizons h, the total FELD term α(h,u) is an increasing function of u. However, the rate of increase differs across horizons. In particular, when u is small, the total FELD is higher for the later horizons h (i.e the darker lines are above the lighter ones). This relationship reverses when u is high and there seems to be a point at which the lighter lines “cross" the darker ones. When β is low, this relationship is seemingly absent, and the lighter lines are always higher than the darker ones. However, there is still a “cross" point for these graphs, but due to the scaling of the axis, they cannot be seen clearly, since they appear at very low values of u. Indeed, the value of β influences the crossing condition between horizons. In particular, it can be shown that the crossing condition for horizons h and h+1 is given by: u^*(h,β)= (1-β)(β^h+1+β^h-1)/(1-β^h)(1-β^h+1). In the left graph of Figure 9, we depict the curves for u^*(h,β) as a function of β. Each line corresponds to an horizon h, with the darker lines representing later horizons. We can see that for smaller h, the total FELD will cross with h+1 at a much higher level of risk aversion u. For instance, at β=0.90, the curve for h=1 will cross with h=2 at u*=3.737. However, for the curve h=2 and h=3, the crossing point is at u^*=1.047. These values can be seen on the right graph of Figure 9. Finally, we consider the properties of the FELD decomposition for the ARG(1) process. We consider a grid of values on (u,β), for u=0.5,1,3 and β=0.1,0.5,0.9 in Figure 10, which shows the fraction α(h,k,u)/α(h,u). There are two main takeaways from this demonstration. Firstly, as the autoregressive parameter β increases (i.e going from top to bottom), then the contribution of previous horizons to the current horizon is higher. Secondly, as the risk aversion parameter u increases (i.e. going from left to right), the contribution of previous horizons to the current horizon is lower. Hence, at the bottom left graph, we can see that even for horizon h=10, there is significant contribution from α(h,k,u) for values k=1,...9. On the other hand, at the top right graph, the bar for horizon h=10 is almost completely homogenous in colour. § STATISTICAL INFERENCE Let us focus on statistical inference for the FELD, the approach being similar for the FEKD. The decomposition is functional, indexed by the current value Y_t =y and the risk aversion parameter u. 
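Curves of this type can be generated directly from the closed form α(h,u) = uβ^h[1 - 1/(1+((1-β^h)/(1-β))u)] of Corollary 7; a minimal sketch follows (the grid of u values, the shading convention and the particular β are ours):

import numpy as np
import matplotlib.pyplot as plt

def arg_total_alpha(h, u, beta):
    # alpha(h,u) = u beta^h [1 - 1/(1 + (1-beta^h)/(1-beta) u)]   (Corollary 7)
    return u * beta**h * (1.0 - 1.0 / (1.0 + (1.0 - beta**h) / (1.0 - beta) * u))

u_grid = np.linspace(0.01, 5.0, 200)
beta = 0.95
for h in range(1, 11):
    plt.plot(u_grid, arg_total_alpha(h, u_grid, beta), color=str(1.0 - h / 12.0))  # darker = later horizon
plt.xlabel("risk aversion u")
plt.ylabel("alpha(h, u)")
plt.show()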
Then we consider the functional estimator in both the parametric and nonparametric frameworks. §.§ Parametric Dynamic Models When the dynamic model is parameterized as in the examples of Section 4.2, the decomposition takes the form: γ(h| u,y;θ) = ∑_k=1^h-1γ (k,h | u,y; θ), where θ denotes the parameter, u captures the risk aversion and y is the generic value of Y_t. If θ̂_T is a consistent estimator of θ and is asymptotically normal: √(T)(θ̂_T - θ) N(0,V), the FRED can be estimated by plugging in θ̂_T in the decomposition: γ(h| u,y;θ̂_T) = ∑_k=1^h-1γ (k,h | u,y; θ̂_T), ∀ u,y. Moreover, uniform confidence bands can be derived by applying the δ-method with respect to θ̂_T. Indeed, the doubly indexed vector vec γ̂_T(u,y)=[γ̂_T(k,h,| u, y), h=1,...,H, k=0,...,h-1] is asymptotically normal. The estimator γ̂_T is asymptotically normal such that: √(T)[γ̂_T(u,y)-γ_0(u,y)] N(0,V”), where γ_0(u,y) is computed at the true value γ_0 and the expansion of the asymptotic variance-covariance matrix is given in Appendix B.1. §.§ Nonparametric Dynamic Model Nonparametric inference can also be applied under the Markov assumption in the one-dimensional case. Let us for instance consider the FELD. The estimation approach is in four steps: * Fix a value of the absolute risk aversion u and a maximal horizon H. * Estimate the value of the conditional Laplace transform: Ψ(u,h | Y_t =y)= 𝔼[exp(-uY_t+h)| Y_t = y ], by its Nadaraya-Watson counterpart: Ψ̂_T(u,h | Y_t =y)= ∑_t=h+1^T[K(y_t-y/b)exp(-uy_t+h)]/∑^T_t=h+1K(y_t-y/b), where b>0 is the bandwidth and K a kernel function such that ∫ K(u) du = 1 and K(-u) = K(u) for all values of u in the domain of K. * Then the quantity: log[Ψ(u;h-k| Y_t+k=y)/Ψ(u;h-k-1| Y_t+k=z)] can be consistently approximated by: log[Ψ̂_T(u;h-k| y)/Ψ̂_T(u;h-k-1| z)]. * A second application of the Nadaraya-Watson approach will provide the estimation of the generic term in the FELD. More precisely, we get: γ̂_T(k,h| u,y) = ∑_t=h+1^T{K(y_t-y/b) log[Ψ̂_T(u;h-k| y_t+k)/Ψ̂_T(u;h-k-1| y_t+k+1)]}/∑^T_t=h+1K(y_t-y/b). § APPLICATION TO CYBERRISK The prevalance of the internet in our daily lives means that individuals are now able to communicate, transfer and store large amounts of information with just a mobile device. This has significantly improved the way in which businesses operate on a daily basis and has transformed the structure of our modern economy. For example, hospitals have adopted digital patient records which can be accessed by any institution in their network, and alleviated the need to maintain or deliver physical patient files. However, this also offers an opportunity for bad faith actors to intercept or steal information, leading to potentially disastrous outcomes for the victims involved. As such, there is demand for businesses and government agencies to model and quantify the risk of cyber attacks in order to insure against these prospects. In this section, we demonstrate how the FELD can be used in the multivariate Negative Binomial Autoregressive framework in studying the decomposition of frequency in cyber attacks. §.§ The Challenges and the Available Data The data on cyber attacks is difficult to obtain. Firstly, there is no consensus on what a cyber attack is. Indeed, there are many definitions, and in a sense it is an umbrella term which includes a variety of illicit behaviours or acts to obtain digital information. Secondly, the collection of data on cyber attacks is rather limited. 
Although the internet is available almost everywhere today, its accessibility was much less so even just 20 years ago. Hence, tracking cyber attacks has eased only in recent years. Moreover, recording reliable and confirmed cyber attacks is a challenging task, since firms or organizations that are subject to these attacks have an incentive to hide their occurrences. Nonetheless, recent research has appealed to the Privacy Rights Clearinghouse (PRC) dataset, which includes information on publicly reported data breaches across the United States between 2005 and 2022[https://privacyrights.org/data-breaches. Other cyber databases are Advisen and SAS Oprisk [see Eling, Ibragimov and Ning (2023) for a comparison]]. The PRC was founded in 1992 by the University of San Diego School of Law. The data are gathered from different sources including the Attorney General offices, government agencies, nonprofit websites and media. The reports do not follow a consistent procedure, which may lead to a lack of accuracy and representativeness of the data. Nevertheless, this database is usually employed to analyze cyberrisk [Eling and Loperfido (2017), Eling and Jung (2018), Barati and Yankson (2022), Lu et al. (2024)]. We focus our attention on the modelling of cyber attack frequency counts and adopt the sample used in Lu et al. (2024). The data contains four types of breaches defined as by the PRC: * DISC - Unintended disclosures which do not involve hacking, intentional breaches, or physical losses. * HACK - Hacked by an outside party or infected by malware. * INSD - Breach due to an insider, such as an employee, contractor or customer. * ELET - Lost, discarded or stolen physical devices. The plots of these four time series are shown in Figure 12 below. All four series feature a large number of zero counts, with 146, 138, 440 and 108 instances for DISC, HACK, INSD and ELET respectively. They also have varying levels of occurrences; for instance, while HACK and INSD can both be considered breaches with malicious intent, the latter occurs much less frequently on average. This is due to the fact that it is much easier for an outsider to gain access without getting caught (such as using an IP spoofer), than an insider to attempt a breach. To gain insights on the distribution of each series we provide summary statistics and density plots of each type of breach below. In each series, the variance is much larger than the mean, which suggests that the data exhibit overdispersion. There is also positive excess skewness, which means that each count is skewed towards the right; this is not surprising since there are a large concentration of zeroes for each type of breach. Furthermore, all the series exhibit excess kurtosis, so they have relatively fatter tails compared to the normal distribution. §.§ Univariate Analysis The INAR model of Section 4.2.4. is the basic dynamic model for a series of count data. However, it implies a marginal Poisson distribution, for which the mean is equal to the variance. Therefore, it is not compatible with data featuring overdispersion, that is, where the variance is much larger than the mean, as seen in Table 2. Consequently, the risk can be underestimated in such a framework. The INAR model can be extended for more flexibility by introducing stochastic intensity. This leads to the univariate Binomial Autoregressive Process (NBAR) [see Gourieroux, Lu (2019)]. 
We present the univariate model in this section which will be used in this analysis (the bivariate NBAR will be presented in Section 7.3.1.). §.§.§ Univariate NBAR The process is defined by its state space representation with a state variable X_t interpretable as stochastic intensity. This representation is as follows: * Measurement Equation - Conditional on X_t+1,Y_t, the count process Y_t+1 is assumed to be 𝒫(β X_t+1) with β>0. * Transition Equation - Conditional on X_t,Y_t, the stochastic intensity factor X_t+1 is assumed to be a centred gamma distribution γ(δ+Y_t,β,c), with shape parameter δ+Y_t and scale parameter c. In total, there are three parameters to be estimated: β, δ and c. More specifically, β characterizes not only the serial dependence of the process, but also the level of conditional overdispersion. The process described above can be represented by the causal chain: ... X_t → Y_t+1→ X_t+1→ Y_t+2 ... that is a network with one hidden layer and one hidden neuron. Note that the NBAR process is an affine process with conditional Laplace transform [Gourieroux, Lu (2019), Proposition 2]: Ψ(u,h|I_t) = 𝔼[exp(-uY_t+h)| Y_t] = [1+β c_h-1(1-exp(-u))]^Y_t/[1+β c_h(1-exp(-u))]^δ+Y_t, = exp[-a^(h)(u)Y_t+b^(h)(u)], where: a^∘ h (u) =-log[1+β c_h-1(1-exp(-u))]+log[1+β c_h(1-exp(-u))], b^∘ h (u) =- δlog[1+β c_h(1-exp(-u))], and ρ = β c and the sequence (c_h) defined by c_h=c 1-ρ^h/1-ρ or equivalently, β c_h = ρ1-ρ^h/1-ρ. This process is strictly stationary if ρ<1 and its stationary distribution is obtained for h tending to infinity. The associated Laplace transform is: c(u)=Ψ(u,∞|I_t) = 1/[1+β c(1-exp(-u))]^δ, and corresponds to the negative binomial distribution NB(δ,ρ). §.§.§ Estimation The NBAR process is a nonnegative Markov process where the distribution is characterized by its conditional Laplace transform at horizon 1. This transition depends on the parameters β, c, δ, by means of ρ = β c and δ only. Therefore, the parameters ρ = β c and δ are identifiable [Gourieroux and Lu (2019)], but not β and c separately. Then, we can consider the following two estimation methods: (1) Linear Regression By the law of iterative expectation, it can be shown that: 𝔼(Y_t|Y_t-1)=𝔼[𝔼(Y_t|X_t-1)|Y_ t-1]=ρ Y_t-1 + ρδ. Therefore, the parameters ρ and ρδ (resp. δ) can be estimated using OLS via a regression of Y_t on Y_t-1. However, this method is not asymptotically efficient due to the presence of conditional heteroscedasticity in the NBAR process. (2) Maximum Likelihood Estimation The likelihood function of the NBAR process is given by [Gourieroux, Lu (2019), eq. 4.1]: logℓ(θ) = ∑_t=2^T log p(Y_t|Y_t-1;ρ,δ), where: p(Y_t|Y_t-1;ρ,δ) = ρ^Y_tΓ(δ+Y_t+Y_t-1)/Y_t!Γ(δ+Y_t-1)(1+ρ)^δ+Y_t+Y_t-1, and Γ denotes the gamma function. Unlike OLS, the MLE estimator is asymptotically efficient. We present below the estimates for δ and ρ and their associated standard errors under the OLS and MLE methods. As expected, the estimates of the parameter ρ are positive, to capture the overdispersion, and smaller than one, that is compatible with the stationarity of the count processes. Moreover, the largest persistences are for HACK and ELET. We also observe that the OLS estimates of the persistence (overdispersion) parameter ρ are always smaller than their ML counterparts. This provides some insight on the finite sample bias due to the omission of the conditional heteroscedasticity in the OLS approach. 
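A sketch of how both estimators can be computed from the transition density above is given below; the unconstrained reparametrization, the optimizer choice and the function names are ours, and y is assumed to be a numpy array of counts.

import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def nbar_loglik(rho, delta, y):
    # log p(Y_t | Y_{t-1}; rho, delta) summed over t = 2,...,T
    yt, ylag = y[1:], y[:-1]
    ll = (yt * np.log(rho)
          + gammaln(delta + yt + ylag) - gammaln(yt + 1.0) - gammaln(delta + ylag)
          - (delta + yt + ylag) * np.log(1.0 + rho))
    return ll.sum()

def fit_nbar_ml(y):
    # unconstrained parametrization: rho = 1/(1+exp(-a)) in (0,1), delta = exp(b) > 0
    obj = lambda th: -nbar_loglik(1.0 / (1.0 + np.exp(-th[0])), np.exp(th[1]), y)
    res = minimize(obj, x0=np.zeros(2), method="Nelder-Mead")
    return 1.0 / (1.0 + np.exp(-res.x[0])), np.exp(res.x[1])

def fit_nbar_ols(y):
    # regression of Y_t on Y_{t-1}: slope estimates rho, intercept estimates rho*delta
    yt, ylag = y[1:].astype(float), y[:-1].astype(float)
    intercept, slope = np.linalg.lstsq(np.column_stack([np.ones_like(ylag), ylag]), yt, rcond=None)[0]
    return slope, intercept / slope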
§.§.§ Univariate FELD For the univariate NBAR process described in Section 7.2.1, the FELD is given by: {uρ^h+log[1+β c_h-1(1-exp(-u))]-log[1+β c_h(1-exp(-u))]}Y_t +δρ u - δρ^h+1 u - δlog[1+β c_h(1-exp(-u))] = ∑_k=0^h-1{ρ^k+1log[1+β c_h-k-1(1-exp(-u))/1+β c_h-k-2(1-exp(-u))]-ρ^klog[1+β c_h-k(1-exp(-u))/1+β c_h-k-1(1-exp(-u))]}Y_t -δρ{log[1+β c_h-k(1-exp(-u))/1+β c_h-k-1(1-exp(-u))]-log[1+β c_h-k-1(1-exp(-u))/1+β c_h-k-2(1-exp(-u))]} -δρ{ρ^klog[1+β c_h-k(1-exp(-u))/1+β c_h-k-1(1-exp(-u))]-ρ^k+1log[1+β c_h-k-1(1-exp(-u))/1+β c_h-k-2(1-exp(-u))]} - δlog[1+β c_h-k(1-exp(-u))/1+β c_h-k-1(1-exp(-u))], ∀ u > 0, and β c_h = ρ1-ρ^h/1-ρ. Proof: See Appendix B.2.1. We now present the results of the FELD using the MLE estimated NBAR models for each of the four series for varying risk aversion u and horizon h. We focus on the term: {uρ^h+log[1+β c_h-1(1-exp(-u))]-log[1+β c_h(1-exp(-u))]}, which captures the marginal effect of one additional breach Y_t on the Total FELD. In each graph, the marginal effect of Y_t is plotted against the forecast horizon h. As h falls, the marginal effect of the history Y_t diminishes exponentially. The darker lines represent higher risk aversion parameter u. Thus, higher risk aversion means a higher marginal effect of Y_t. Intuitively, this implies that if the firm is more risk averse, an additional breach that occurs today will yield higher uncertainty in the forecast (measured by the marginal effect on the Total FELD). Moreover, the marginal effects seem to be higher for HACK and ELET, and lower for DISC and INSD. As seen in Section 3.4, the decompositions in Figure 13 also have interpretation in terms of spot and forward values of derivatives written on the number of cyber events. This is related to the literature on the pricing of cyberinsurance contracts [Fahrenwaldt et al. (2018)]. §.§ Bivariate Analysis The four series correspond to different types of cyber operational risks. Among them two of the risks correspond to breaches with malicious intent, i.e. HACK and INSD. We will focus on the joint analysis of these two risks[It is possible to study the four risks together, but with more complex models [see online appendix B3 for the extension to the K-dimensional NBAR.]], which has to account for both cross-sectional and serial dependencies between the series. A first insight on the cross-sectional dependence is through the joint stationary distributions. This is illustrated in Figure 14, where on the left hand side we provide a plot of values (Y_1,t,Y_2,t) and on the right hand side a plot of the associated Gaussian ranks (π(Y_1,t),π(Y_2,t)), i.e. the estimated copula after the Gaussian transform[The raw data are first ranked by increasing order. Rank(Y_j,t), the rank of Y_j,t, is valued in [1,...,T]. Then, π(Y_j,t)=Φ^-1[Rank(Y_j,t)/T], where Φ is the cumulative distribution function of the standard normal distribution.][This Gaussian transformed unconditional copula is completed on the count of occurrences. It differs from a copula completed from losses that account for severities [see Eling, Jung (2018)].]. Such plots are usually given for continuous variables or for count variables taking a large number of values. In our framework, the discrete nature of the variable has to be taken into account in the interpretation. For instance, we observe an overweighting of zero counts for the HACK variable, which can be seen in the left plot of Figure 14, where the joint density rests “against" its axis. 
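The Gaussian rank transform used for the right panel can be computed as follows (a sketch; we divide the ranks by T+1 rather than T, an adjustment on our part to avoid an infinite value at the largest observation, and the series names in the final comment are placeholders):

import numpy as np
from scipy.stats import rankdata, norm

def gaussian_rank(y):
    # pi(Y_t) = Phi^{-1}(rank(Y_t) / (T + 1)); ties (frequent with counts) receive average ranks
    return norm.ppf(rankdata(y) / (len(y) + 1.0))

# right panel of Figure 14: scatter of gaussian_rank(hack_counts) against gaussian_rank(insd_counts)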
The right panel in Figure 14 also shows that the two series features some right tail independence. §.§.§ Bivariate NBAR Let us denote Y_1t,Y_2t the two count series. We can now introduce three state variables interpretable as specific and common stochastic intensities, denoted X_1,t,X_2,t and Z_t, respectively. The nonlinear state space representation becomes: * Measurement Equation - Conditional on Y_t, X_t+1, Z_t+1, the variables Y_1,t+1, Y_2,t+1 are independent Y_j,t+1∼𝒫(α_jZ_t+1+β_jX_j,t+1) for j=1,2. * Transition Equations - Conditional on Y_t, X_t, Z_t, the variables X_1,t+1, X_2,t+1, Z_t+1 are independent such that X_j,t+1∼γ(δ_j+Y_j,t), for j=1,2, and Z_t+1∼γ(δ+σ_1Y_1,t+σ_2Y_2,t,0,c). The dimensionality and the presence of stochastic intensity factors now lead to 9 parameters to be estimated: α_1, α_2, β_1, β_2, δ_1, δ_2, σ_1, σ_2, and δ. The process described above corresponds to a more complicated causal scheme with one hidden layer and three hidden neurons. Y_1,t[r] [rd] X_1,t+1[r] Y_1,t+1 Z_t+1[ru] [rd] Y_2,t[r][ru] X_2,t+1[r] Y_2,t+1 It is easily checked that the bivariate NBAR model is an affine model. Its conditional Laplace transform at horizon 1 is given by [see Appendix A.2.7]: Ψ(u,1|Y_t)=𝔼[exp(-u'Y_t+1)| Y_t] = exp[-a_1(u_1,u_2)Y_1,t-a_2(u_1,u_2)Y_1,t-b(u_1,u_2)], where: a_1(u_1,u_2)= log[1+β_1(1-exp(-u_1))]+σ_1log[1+α_1(1-exp(-u_1))+α_2(1-exp(-u_2))], a_2(u_1,u_2)= log[1+β_2(1-exp(-u_2))]+σ_2log[1+α_1(1-exp(-u_1))+α_2(1-exp(-u_2))], b(u_1,u_2) = δ_1log[1+β_1(1-exp(-u_1))]+δ_2log[1+β_2(1-exp(-u_2))] + δlog[1+α_1(1-exp(-u_1))+α_2(1-exp(-u_2))]. We see from the expression of the conditional Laplace transform that all parameters are identifiable. §.§.§ Estimation The increase of the parameter dimension and the introduction of a common stochastic intensity lead to a larger number of identifiable parameters to be estimated, equal to 9. A first estimation to consider is by applying a Vector Autoregressive (VAR) representation based on the linear prediction formula [see Appendix A.2.8]: 𝔼[Y_t|Y_t-1] = [ α_1δ + β_1δ; α_2δ + β_2δ; ]+[ α_1σ_1+β_1 α_1σ_2; α_2σ_1 α_2σ_2 +β_2; ][ Y_1,t-1; Y_2,t-1 ]. Using OLS to estimate the model Y_t = C + AY_t-1 +ε_t, we obtain the following estimates: The corresponding eigenvalues of  = [ 0.638 0.053; 0.005 0.361; ] are λ_1 = 0.639 and 0.360, which suggests that the process is stationary in the conditional mean. More generally, we can apply a Method of Moments (MM) procedure based on the unconditional pairwise moment restrictions of the form: 𝔼{[exp(-u'Y_t)-Ψ(u,1|Y_t)]exp(-v'Y_t-1)} = 0, valued for different u and v, by considering the orthogonality between prediction errors and past values. A global estimation of the 9 parameters can be based on the unconditional pairwise moments given in (<ref>) after selecting at least 9 quadruples (u_1,u_2,v_1,v_2) of linearly independent moment restrictions[We choose quadruplets which correspond to a range of different risk aversion scenarios. For instance, the quadruplet (0.41,0.01,0.41,0.01) corresponds to the scenario where there is high risk aversion on only on the series Y_1,t. Likewise, the quadruplet (0.41,0.41,0.01,0.01) reflects the case where there is high risk aversion for both series, but only at time t.]. We also include 6 additional moment conditions implied by the OLS first order conditions. Applying a generalized method of moment procedure for the 15 moment conditions, we obtain the following estimates displayed in Table 4. 
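A sketch of how such moment conditions can be assembled is given below. We read the prediction error as exp(-u'Y_t)-Ψ(u,1|Y_{t-1}); the quadruplets, the identity weighting suggested in the closing comment, and all function names are ours, and Y is assumed to be a (T,2) array of counts.

import numpy as np

def psi_one_step(u, y_lag, th):
    # Psi(u,1|Y_t) = exp[-a_1(u) Y_1,t - a_2(u) Y_2,t - b(u)] with the closed forms above
    alpha1, alpha2, beta1, beta2, delta1, delta2, sigma1, sigma2, delta = th
    e1, e2 = 1.0 - np.exp(-u[0]), 1.0 - np.exp(-u[1])
    common = np.log(1.0 + alpha1 * e1 + alpha2 * e2)
    a1 = np.log(1.0 + beta1 * e1) + sigma1 * common
    a2 = np.log(1.0 + beta2 * e2) + sigma2 * common
    b = delta1 * np.log(1.0 + beta1 * e1) + delta2 * np.log(1.0 + beta2 * e2) + delta * common
    return np.exp(-a1 * y_lag[:, 0] - a2 * y_lag[:, 1] - b)

def gmm_moments(th, Y, quadruplets):
    # sample analogues of E{[exp(-u'Y_t) - Psi(u,1|Y_{t-1})] exp(-v'Y_{t-1})} for each (u1,u2,v1,v2)
    Yt, Ylag = Y[1:], Y[:-1]
    m = []
    for u1, u2, v1, v2 in quadruplets:
        err = np.exp(-u1 * Yt[:, 0] - u2 * Yt[:, 1]) - psi_one_step((u1, u2), Ylag, th)
        m.append(np.mean(err * np.exp(-v1 * Ylag[:, 0] - v2 * Ylag[:, 1])))
    return np.array(m)

# a GMM estimate then minimizes, e.g., gmm_moments(th, Y, quads) @ gmm_moments(th, Y, quads)
# (identity weighting) over the 9 parameters, possibly stacked with the OLS moment conditions.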
These estimates imply Ĉ̂ = [ 1.143; 0.485; ] Â̂ = [ 0.701 0.053; 0.005 0.361; ]. Hence, our choice of quadruplets not only captures different risk aversion scenarios, but also produce somewhat agreeable estimates with that of OLS. We also note that the estimated values suggest that the process is stationary[The stationary conditions are given by: 1-α_1-σ_1β_1>0, 1-α_2-σ_2β_2>0 and (1-α_1-σ_1β_1)(1-α_1-σ_1β_1)>σ_1σ_2β_1β_2 [Gourieroux, Lu (2019), Proposition 3].]. §.§.§ FELD for Bivariate NBAR A closed form expression for the conditional Laplace transform at horizon h is difficult to obtain in closed form. However, the dynamic affine property of the bivariate NBAR means that it can be derived numerically by means of recursion. In particular, we have: Ψ(u,h|Y_t)=𝔼[exp(-u'Y_t+h)| Y_t] = exp[-a_1^(h)(u_1,u_2)Y_1,t-a_2^(h)(u_1,u_2)Y_1,t-b^(h)(u_1,u_2)], where: a_1^(h)(u_1,u_2)= a_1(a_1^(h-1)(u_1,u_2),a_2^(h-1)(u_1,u_2)), a_2^(h)(u_1,u_2)= a_2(a_1^(h-1)(u_1,u_2),a_2^(h-1)(u_1,u_2)), b^(h)(u_1,u_2)= b(a_1^(h-1)(u_1,u_2),a_2^(h-1)(u_1,u_2)), ∀ h ≥ 2. The estimated FELD for the model can still be obtained by applying the recursion in (<ref>) to generate the terms Ψ(u,h-k|I_t+k) in (<ref>) and plugging in the estimated values for the 9 parameters from Table 4. We consider two exercises with varying values of (u_1,u_2) [i.e. the risk aversion parameter for HACK and INSD] and of (Y_1,t,Y_2,t) [i.e. the last observed historical value for HACK and INSD] to showcase the FELD for the data. The first exercise is to see how the risk aversion and the historical counts of the two cyberrisks influence the FELD. We compare four scenarios: [1] Low Risk Aversion and Low Historical Counts. [2] High Risk Aversion and Low Historical Counts. [3] Low Risk Aversion and High Historical Counts. [4] High Risk Aversion and High Historical Counts. The results are presented in Figure 15 below. The graph on the left hand side depicts the total FELD up to horizon 10. We observe two important properties of the model. First, the total FELD is increasing in u, since the red and blue lines are lower than the green and yellow ones. Second, the total FELD is increasing in Y, since the green and blue lines are higher than the red and yellow ones. Indeed, recall that the FELD is a measure of risk. If a firm were to observe high historical counts Y of cyberattacks, then the risk of future attacks should be higher. Likewise, when observing the same level of historical counts of cyberattacks, there is a higher implied risk for a more risk averse firm. On the right hand side graph, we show the decomposition of the total FELD at horizon 10. For ease of comparison between the four scenarios, we express the FELD as a percentage of the total. We have highlighted in red the contribution of risk in updating between horizon 1 and horizon 2 on the FELD at horizon 10. Although there are four different scenarios, we can see that the percentage contribution is roughly 40% in all the cases. This suggests that if we are equally risk averse on the two types of cyber risk, and observe similar levels of historical counts, then our decomposition of the total risk is insensitive to the various levels. Of course, the assumption that the two series have the same risk aversion parameters or the same observed historical values is unrealistic. Hence, we consider a second exercise where these values for HACK and INSD. We compare four new scenarios: [1] Low/High Historical Count for HACK/INSD, High/Low Risk Aversion for HACK/INSD. 
[2] High/Low Historical Count for HACK/INSD, High/Low Risk Aversion for HACK/INSD. [3] Low/High Historical Count for HACK/INSD, Low/High Risk Aversion for HACK/INSD. [4] High/Low Historical Count for HACK/INSD, Low/High Risk Aversion for HACK/INSD. The results are presented in Figure 16. On the left hand side graph of Figure 16, the yellow and green lines correspond to a high observed historical value for HACK, but a low historical value observed for INSD. These lines are much higher than the blue and red lines, which correspond to the opposite case. Intuitively, this means that observing a high count of breaches due to hacking from an outside party will generate more uncertainty than observing a high count of insider breaches. This is a compelling result since for a firm, it is easier to diagnose the security leak and intitiate preventive measures for future insider breaches than it is for hacking from an outside party. A more straightforward observation is that for high levels of HACK, a lower risk aversion parameter means less uncertainty [i.e. the yellow line is higher than the green one]. On the right hand side graph of Figure 16, we show the decomposition of the total FELD at horizon 10 for this exercise. A notable difference is that now, the decomposition is quite different for the four scenarios. In particular, when updating from horizon 1 to 2 (outlined in red), the scenarios have different contributions to the FELD at horizon 10. When the observed historical count for HACK is low and INSD is high (that is, the first and third bars in the graph), the risk is front loaded since the risk of updating between horizon 1 and 2 accounts for over 40% of the FELD in these cases. On the other hand, when a high historical count of HACK and a low count of INSD is observed (that is, the second and fourth bars in the graph), the risk is has more spread across future horizons. This means that a high count of observed insider breaches implies a more front loaded risk. The exercises above demonstrate two important factors. Firstly, by considering two series together in a bivariate model, we are able to take advantage of the cross-sectional dependencies between the two types of cyber attacks, even if our decomposition measure is a dynamic separation of effects. Secondly, the FELD we have proposed in this paper has as many decompositions as there are u and Y_t. In particular, we see that in exercise 1 and exercise 2, the decompositions can be very different and dependent on risk aversion and observed historical counts. For an insurance firm that specializes in cyber risk, these types of scenario analysis can help price their insurance products, or allocate resources efficiently to hedge against future cyber attacks for a range of customers. § CONCLUDING REMARKS In this paper, we have introduced decomposition formulas for the analysis of global forecast errors in nonlinear dynamic models. These formulas are based on functional measures of nonlinear forecast with respect to either their transition densities in the Forecast Error Kullback Decomposition, or the conditional log-Laplace transform in the Forecast Error Laplace Decomposition. These measures and their decompositions are extensions of the Forecast Error Variance Decomposition (FEVD) used in the linear dynamic framework. This latter decomposition is for global shocks on the current value of the process of interest, without trying to define (identify) the sources of the global shocks. 
The advantage is that this measure and its decomposition with respect to horizon and updating are identifiable, as are also the FEKD and FELD. Such decompositions could be extended to functional measures of economic or financial interest such as conditional Lorenz curves used in inequality analysis or conditional quantiles [Montes-Rojas (2019)]. This is left for future research. § REFERENCES Al-Osh, M. and A., Azaid (1987): “First-Order Integer Valued Autoregressive (INAR(1)) Processes", Journal of Time Series Analysis, 8, 261-275. Arrow, K. (1965): “Aspects of the Theory of Risk Bearing", The Theory of Risk Aversion, Helsinki, Reprinted in “Essays in the Theory of Risk Bearing", Markham Publishing Co., Chicago, 1971, 90-109. Barati, M. and B., Yankson (2022): “Predicting the Occurrence of Data Breach," International Journal of Information Management Data Insights, 2. Cebula, J., and L. Young (2010): “A Taxonomy of Operational Cyber Security Risks", Technical Note, Software Engineering Institute, Carnegie Mellon University. Chauvet, M., and S. Potter (2005): “Forecasting Recession Using the Yield Curve", Journal of Forecasting, 24, 77-103. Cox, J., Ingersoll, J., and S., Ross (1985): “A Theory of the Term Structure of Interest Rates", Econometrica, 53, 385-407. Cuchiero, C., Filipovic, D., Mayerhofer, E. and J., Teichmann (2011): “Affine Processes on Positive Semi Definite Matrices", The Annals of Applied Probability, 21, 397-463. Darolles, S., Gourieroux, C., and J., Jasiak. (2006): “Structural Laplace Transform and Compound Autoregressive Models". Journal of Time Series Analysis, 27, 477-503. Doob, J. (1953): “Stochastic Processes", Wiley and Sons. Duffie, D., Filipovic, D., and W., Schachermayer. (2003): “Affine Processes and Applications in Finance". The Annals of Applied Probability, 13, 984-1053. Eling, M., Ibragimov, R., and D., Ning, (2023): “Time Dynamics of Cyber Risk", University of St. Gallen. Eling, M., and K., Jung (2018): “Copula Approaches for Modelling Cross-Sectional Dependence of Data Breach Losses", Insurance: Mathematics and Economics, 82, 167-180. Eling, M., and N. Loperfido (2017): “Data Breaches: Goodness of Fit Pricing and Risk Management", Insurance: Mathematics and Economics, 75, 126 - 136. Embrechts, P. (2000): “Actuarial versus Financial Pricing of Insurance", J. Risk Finance, 1, 17-26. Estrella, A., and F. Mishkin (1998): “Predicting US Recessions: Financial Variables as Leading Indicators", Review of Economics and Statistics, 80, 45-61. Fahrenwaldt, M., Weber, S., and K., Weske (2018): “Pricing of Cyber Insurance Contracts in a Network Model", ASTIN Bulletin, 48, 1175-1218. Feller, W. (1971). “An Introduction to Probability Theory and Its Applications", (2nd Edition), Vol 2, Wiley, New York. Fishburn, R., and R., Vickson (1978): “Theoretical Foundations of Stochastic Dominance", in Stochastic Dominance: An Approach to Decision Making Under Risk, Whitmore, G. and Findlay, M. (eds). DC Health, London. Gourieroux, C., and J., Jasiak (2006): “Autoregressive Gamma Processes", Journal of Forecasting, 25, 129-152. Gourieroux, C., Jasiak, J., and R., Sufana (2009): “The Wishart Autoregressive Process of Multivariate Stochastic Volatility", Journal of Econometrics, 150, 167-181. Isakin, M., and P., Ngo (2020): “Variance Decomposition Analysis for Nonlinear Economic Models". Oxford Bulletin of Economics and Statistics, 82, 1362-1374. 
Kauppi, H., and P., Saikkonen (2008): “Predicting US Recessions with Dynamic Binary Response Models", Review of Economics and Statistics, 90, 777-796. Lanne, M., and H., Nyberg. (2016): “Generalized Forecast Error Variance Decomposition for Linear and Nonlinear Multivariate Models". Oxford Bulletin of Economics and Statistics, 78(4), 595-603. Lu, Y., Zhang, J., and W., Zhu (2024): “Cyber Risk Modelling: A Discrete Multivariate Count Process Approach," forthcoming, Scandinavian Actuarial Journal. Markowitz, H. (1952): “Portfolio Selection", The Journal of Finance, 7,77-91. Markowitz, H. (2000): “Mean-Variance Analysis in Porfolio Choice and Capital Markets", Wiley and Sons, Vol. 66. McKenzie, E. (1985): “Some Simple Models for Discrete Variate Time Series", Water Resources Bulletin, 21, 645-650. Montes-Rojas, G. (2019): “Multivariate Quantile Impulse Response Functions", Journal of Time Series Analysis, 40, 730-752. Muirhead, R. (1982): “Aspects of Multivariate Statistical Theory", Wiley and Sons. Pratt, J. (1964): “Risk Aversion in the Small and in the Large", Econometrica, 32, 129-136. Rothschild, M., and J., Stiglitz (1970): “Increasing Risk: I. A Definition", Journal of Economic Theory, 2, 225-243. Schwartz, G. and S. Sastry (2014): “Cyber-insurance framework for large scale interdependent networks", in Proceedings of the 3rd International Conference on High Confidence Networked Systems, 145-154, New-York, The Association on Computing Machinery. Sun, H., Xu. M., and P., Zhao (2021): “Modelling Malicious Hacking Data Breach Risks", North American Actuarial Journal, 25, 484-502. Vickson, R. (1975): “Stochastic Dominance for Decreasing Absolute Risk Aversion", Journal of Financial and Quantitative Analysis, 10, 799-811. Whittle, P. (1963): “Prediction and Regulation", Bowman, new edition in 1983. § PROOFS §.§ Examples for FEKD §.§.§ Proof of Proposition 3: Gaussian VAR(1) The conditional distributions of Y_t+h given Y_t+k+1 and Y_t+h given Y_t+k are N(Φ^h-k-1Y_t+k+1,Σ_h-k-1) and N(Φ^h-kY_t+k,Σ_h-k) respectively. Hence: log[f(y,h-k|I_t+k)/f(y,h-k-1|I_t+k+1] = 1/2log[Σ_h-k-1/Σ_h-k]+1/2[y-Φ^h-k-1Y_t+k+1]'Σ_h-k-1^-1[y-Φ^h-k-1Y_t+k+1] -1/2[y-Φ^h-kY_t+k]'Σ_h-k^-1[y-Φ^h-kY_t+k] Let A= [y-Φ^h-k-1Y_t+k+1]. Then, by conditioning on I_t, we get: 1/2𝔼{[y-Φ^h-k-1Y_t+k+1]'Σ_h-k-1^-1[y-Φ^h-k-1Y_t+k+1]| Y_t} = 1/2𝔼[A'Σ_h-k-1^-1A| Y_t] = 1/2[Σ^-1_h-k-1𝔼(AA'| Y_t)] = 1/2[Σ^-1_h-k-1(𝔼(A|Y_t)𝔼(A|Y_t)'+𝕍(A| Y_t))] = 1/2{Σ_h-k-1^-1[(y-Φ^hY_t)(y-Φ^hY_t)'+Φ^h-k-1Σ_k+1(Φ^h-k-1)']}. Similarly, if B= [y-Φ^h-kY_t+k], then: 1/2𝔼[B'Σ_h-k^-1B| Y_t] = 1/2{Σ_h-k^-1[(y-Φ^hY_t)(y-Φ^hY_t)'+Φ^h-kΣ_k(Φ^h-k)']}. Then, the expression for γ(k,h|I_t) is: γ(k,h|I_t) = 1/2log[Σ_h-k-1/Σ_h-k]-1/2[(Σ_h-k^-1-Σ_h-k-1^-1)(y-Φ^hY_t)(y-Φ^hY_t)'] + 1/2[Σ_h-k-1^-1Φ^h-k-1Σ_k+1(Φ^h-k-1)'-Σ_h-k^-1Φ^h-kΣ_k(Φ^h-k)'] = 1/2log[Σ_h-k-1/Σ_h-k] + 1/2[Σ^-1_h-k-1Φ^h-k-1Σ_k+1(Φ^h-k-1)'-Σ^-1_h-kΦ^h-kΣ_k(Φ^h-k)'] -1/2Y_t'(Φ^h)'(Σ^-1_h-k-Σ^-1_h-k-1)Φ^hY_t + Y_t'(Φ^h)'(Σ^-1_h-k-Σ^-1_h-k-1)y - 1/2y'(Σ^-1_h-k-Σ^-1_h-k-1)y, which is quadratic in y. §.§.§ Proof of Proposition 4: Binary Markov Chain We have: γ(k,h|Y_t)= 𝔼{log[π + λ^h-k(Y_t+k-π)]-log[π + λ^h-k-1(Y_t+k+1-π)]| Y_t} = 𝔼{log[π + λ^h-k(1-π)]Y_t+k+log[π(1-λ^h-k)](1-Y_t+k) . .-log[π + λ^h-k-1(1-π)]Y_t+k+1-log[π(1-λ^h-k-1)](1-Y_t+k+1)| Y_t } = log[1-λ^h-k/1-λ^h-k-1] + log[π+λ^h-k(1-π)/π(1-λ^h-k)][π+λ^k(Y_t-π)] - log[π+λ^h-k-1(1-π)/π(1-λ^h-k-1)][π+λ^k+1(Y_t-π)]. 
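This closed form can be checked numerically against the direct matrix computation of E[log P(Y_{t+h}=y|Y_{t+k}) - log P(Y_{t+h}=y|Y_{t+k+1})|Y_t] written with the transition matrix P (cf. the proof of Proposition 5 below). A small sketch, with arbitrary parameter values and k ≤ h-2 so that every term is finite:

import numpy as np

pi_, lam, h, k, Y_t = 0.3, 0.5, 5, 2, 1

def gamma_closed(pi_, lam, k, h, Y_t):
    # closed form derived above, for y = 1
    A = np.log((1 - lam**(h - k)) / (1 - lam**(h - k - 1)))
    B = np.log((pi_ + lam**(h - k) * (1 - pi_)) / (pi_ * (1 - lam**(h - k)))) * (pi_ + lam**k * (Y_t - pi_))
    C = np.log((pi_ + lam**(h - k - 1) * (1 - pi_)) / (pi_ * (1 - lam**(h - k - 1)))) * (pi_ + lam**(k + 1) * (Y_t - pi_))
    return A + B - C

# column-stochastic transition P[y, i] = P(Y_{t+1} = y | Y_t = i) implied by (pi, lambda)
P = np.array([[1 - pi_ * (1 - lam), 1 - (pi_ + lam * (1 - pi_))],
              [pi_ * (1 - lam),     pi_ + lam * (1 - pi_)]])
X_t = np.array([1 - Y_t, Y_t])
row = lambda m: np.log(np.linalg.matrix_power(P, m))[1]     # row y = 1 of log(P^m)
gamma_matrix = row(h - k) @ np.linalg.matrix_power(P, k) @ X_t \
    - row(h - k - 1) @ np.linalg.matrix_power(P, k + 1) @ X_t

print(gamma_closed(pi_, lam, k, h, Y_t), gamma_matrix)       # the two values coincide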
§.§.§ Proof of Corollary 1 : FEVD for Markov Chain 𝕍{𝔼[Y_t+h|I_t+k+1]-𝔼[Y_t+h|I_t+k]| I_t } = 𝕍{λ^h-k-1(Y_t+k+1-π)-λ^h-k(Y_t+k-π)| Y_t } = 𝕍{λ^h-k-1Y_t+k+1-λ^h-kY_t+k| Y_t } = λ^2(h-k-1)𝕍[Y_t+k+1| Y_t] + λ^2 (h-k)𝕍[Y_t+k| Y_t] - 2λ^2(h-k)+1Cov(Y_t+k+1,Y_t+k|Y_t) Note that: 𝕍[Y_t+k| Y_t]= 𝔼(Y_t+k|Y_t)-[𝔼(Y_t+k|Y_t)]^2 = 𝔼(Y_t+k|Y_t)[1-𝔼(Y_t+k|Y_t)] = [π + λ^k(Y_t-π)][1-π - λ^k(Y_t-π)], 𝕍[Y_t+k+1| Y_t]= [π + λ^k+1(Y_t-π)][1-π - λ^k+1(Y_t-π)], and Cov(Y_t+k+1,Y_t+k| Y_t) = 𝔼(Y_t+k+1Y_t+k| Y_t)-𝔼(Y_t+k+1| Y_t)𝔼(Y_t+k| Y_t) = 𝔼[𝔼(Y_t+k+1Y_t+k| Y_t+k)| Y_t]-𝔼(Y_t+k+1| Y_t)𝔼(Y_t+k| Y_t) = 𝔼[𝔼(Y_t+k+1|Y_t+k) Y_t+k|Y_t]-𝔼(Y_t+k+1| Y_t)𝔼(Y_t+k| Y_t) = 𝔼[(π + λ Y_t+k-λπ) Y_t+k|Y_t]-𝔼(Y_t+k+1| Y_t)𝔼(Y_t+k| Y_t) = 𝔼[π Y_t+k + λ Y^2_t+k-λπ Y_t+k| Y_t ]-[π+λ^k+1(Y_t-π)][π+λ^k(Y_t-π)] = [π+λ(1-π)][π+λ^k(Y_t-π)]-[π+λ^k+1(Y_t-π)][π+λ^k(Y_t-π)] = [π+λ^k(Y_t-π)][λ(1-π) - λ^k+1(Y_t-π)] = λ[π+λ^k(Y_t-π)][1-π - λ^k(Y_t-π)] Thus we have: λ^2(h-k-1)𝕍[Y_t+k+1| Y_t] + λ^2 (h-k)𝕍[Y_t+k| Y_t] - 2λ^2(h-k)-1Cov(Y_t+k+1,Y_t+k|Y_t) = λ^2(h-k-1)[π + λ^k+1(Y_t-π)][1-π - λ^k+1(Y_t-π)]+ λ^2(h-k)[π + λ^k(Y_t-π)][1-π - λ^k(Y_t-π)] - 2λ^2(h-k)[π+λ^k(Y_t-π)][1-π - λ^k(Y_t-π)] = λ^2(h-k-1)[π + λ^k+1(Y_t-π)][1-π - λ^k+1(Y_t-π)]- λ^2(h-k)[π + λ^k(Y_t-π)][1-π - λ^k(Y_t-π)] = π(1-π)[λ^2(h-k-1)-λ^2(h-k)] - π(Y_t-π)[λ^2(h-k-1)λ^k+1-λ^2(h-k)λ^k] + (1-π)(Y_t-π)[λ^2(h-k-1)λ^k+1-λ^2(h-k)λ^k] - (Y_t-π)^2[λ^2(h-k-1)λ^2(k+1)-λ^2(h-k)λ^2k] = π(1-π)λ^2(h-k-1)[1-λ^2]- π(Y_t-π)λ^2h-k-1[1-λ]+ (1-π)(Y_t-π)λ^2h-k-1[1-λ] = π(1-π)λ^2(h-k-1)[1-λ^2]- (Y_t-π)λ^2h-k-1[1-2π][1-λ] §.§.§ Proof of Proposition 5: Markov Chain The h-step forward prediction of the Markov Chain, 𝔼[Y_t+h|Y_t], is given by: 𝔼[Y_t+h|Y_t] = 𝔼[X_t+h|X_t] = P^hX_t, and its h-step transition density, f(y,h|I_t), is given by: f(y,h|I_t) = P(Y_t+h=y|Y_t) =∑_i=1^k P(Y_t+h=y|Y_t=i)X_it =∑_i=1^k p^(h)_iyX_it = P_y^hX_t, where P_y^h denotes the y-th row of P^h. For the FEKD we are interested in the quantity: log f(y,h|I_t) = log P(Y_t+h=y|Y_t) =∑_i=1^n log P(Y_t+h=y|Y_t=i)X_it =∑_i=1^n log[p^(h)_yi]X_it. Let us denote log(A) as a matrix whose elements are the logged elements of matrix A. It follows that: log P(Y_t+h=y|Y_t) = [log(P^h)]_yX_t. Hence, the right-hand side term in the FEKD is given by: γ(h,k|I_t) = 𝔼[log P(Y_t+h=y|Y_t+k)-log P(Y_t+h=y|Y_t+k+1)| _t] = 𝔼[[log(P^h-k)]_yX_t+k| X_t]-𝔼[[log(P^h-k-1)]_yX_t+k+1| X_t] = [log(P^h-k)]_y𝔼[X_t+k| X_t]-[log(P^h-k-1)]_y𝔼[X_t+k+1| X_t] = [log(P^h-k)]_yP^(k)X_t-[log(P^h-k-1)]_yP^k+1X_t = [[log(P^(h-k))]_yP^k+1-[log(P^h-k-1)]_yP^k]X_t. §.§.§ Proof of Corollary 2: Binary Markov Chain as a Special Case of Proposition 5 The h-step transition matrix for the binary Markov Chain is given by: P^h= [ p^(h)_00 p^(h)_01; p^(h)_10 p^(h)_11; ]= [ 1-[π(1-λ^h)] 1-[π+λ^h(1-π)]; π(1-λ^h) π+λ^h(1-π); ] For y=1, we get: [[log(P^h-k)]_yP^k-[log(P^h-k-1)]_yP^k+1]X_t = [[ log p^(h-k)_10 log p^(h-k)_11; ][ p^(k)_00 p^(k)_01; p^(k)_10 p^(k)_11; ]-[ log^(h-k-1)_10 log p^(h-k-1)_11; ][ p^(k+1)_00 p^(k+1)_01; p^(k+1)_10 p^(k+1)_11; ]][ X_0,t; X_1,t; ] = [ log p^(h-k)_10p^(k)_00+log p^(h-k)_11p^(k)_10-log p^(h-k-1)_10p^(k+1)_00-log p^(h-k-1)_11p^(k+1)_10; log p^(h-k)_10p^(k)_01+log p^(h-k)_11p^(k)_11-log p^(h-k-1)_10p^(k+1)_01-log p^(h-k-1)_11p^(k+1)_11; ]'[ X_0,t; X_1,t; ] This expression coincides directly with the formula provided in (<ref>). 
For instance, when Y_t is in state 0, X_t = [1 0]' and the above can be simplified to: log p^(h-k)_10p^(k)_00+log p^(h-k)_11p^(k)_10-log p^(h-k-1)_10p^(k+1)_00-log p^(h-k-1)_11p^(k+1)_10 = log[π(1-λ^k)]-log[π(1-λ^k)][π(1-λ^k)]+log[π+λ^k(1-π)][π(1-λ^k)] - log[π(1-λ^k)]+log[π(1-λ^k)][π(1-λ^k)]-log[π+λ^k(1-π)][π(1-λ^k)] = log[1-λ^h-k/1-λ^h-k-1] + log[π+λ^h-k(1-π)/π(1-λ^h-k)][π(1-λ^k)] - log[π+λ^h-k-1(1-π)/π(1-λ^h-k-1)][π(1-λ^k)], which is equivalent to (<ref>) when Y_t = 0. §.§ Examples for FELD §.§.§ Proof of Proposition 6: FELD for Dynamic Affine Models Since: 𝔼(Y_t+h|Y_t) = -dc/du(0) +[da'/du(0)]^h[Y_t + dc/du(0)], the left hand side of the FELD is: u'𝔼(Y_t+h|Y_t) - a^∘ h(u)'Y_t + c(u) -c[a^∘ h(u)] = u'{-dc/du(0) +[da'/du(0)]^h[Y_t + dc/du(0)]}-a^∘ h(u)'Y_t+c(u) -c[a^∘ h(u)] = {u'[da'/du(0)]^h - a^∘ h(u)'}Y_t - u'dc/du(0)+ u'[da'/du(0)]^h[dc/du(0)] + c(u) - c[a^∘ h(u)]. Let us now consider the expression: a^∘(h-k-1)'(u)𝔼(Y_t+k+1|Y_t) = a^∘(h-k-1)'(u){-dc/du(0) +[da'/du(0)]^k+1[Y_t + dc/du(0)]} = a^∘(h-k-1)'(u)[da'/du(0)]^k+1Y_t -a^∘(h-k-1)'(u)[dc/du(0)] +a^∘(h-k-1)'(u)[da'/du(0)]^k+1[dc/du(0)]. Hence the right hand side of the FELD is: ∑_k=0^h=1{a^∘(h-k-1)'(u)𝔼(Y_t+k+1|Y_t)-a^∘(h-k)'(u)𝔼(Y_t+k|Y_t)} + c[a^∘(h-k-1)(u)]- c[a^∘(h-k)(u)] = ∑_k=0^h-1{a^∘ (h-k-1)(u)'[da'/du(0)]^k+1-a^∘ (h-k)(u)'[da'/du(0)]^k}Y_t+(a^∘ (h-k)(u)'-a^∘ (h-k-1)(u)')[dc/du(0)] + (a^∘ (h-k)(u)'[da'/du(0)]^k-a^∘ (h-k-1)(u)'[da'/du(0)]^k+1)[dc/du(0)] + c[a^∘ (h-k-1)(u)]-c[a^∘ (h-k)(u)], ∀ u. as required. §.§.§ Proof of Corollary 3: FEVD for Dynamic Affine Models Let us compute the generic term of the FEVD in (2.4) for an affine process. We have: 𝔼(Y_t+h| I_t+k+1) = - dc(0)/du + (da'(0)/du)^h-k-1(Y_t+k+1+dc(0)/du). We deduce: 𝕍[𝔼(Y_t+h| I_t+k+1)| I_t+k] = 𝕍[(∂ a'(0)/∂ u)^h-k-1Y_t+k+1| I_t+k] = (∂ a'(0)/∂ u)^h-k-1𝕍[Y_t+k+1| I_t+k] (∂ a(0)/∂ u')^h-k-1. We know that: 𝕍(Y_t+1| Y_t) = [∂^2/∂ u ∂ u'(a'(u)Y_t + b(u))]_u=0, where b(u)=c(u) - c(a(u)). Therefore we deduce that: 𝕍[𝔼(Y_t+h| I_t+k+1)| I_t+k] = [da'(0)/du]^h-k-1[∑_j=1^n ∂^2a_j(0)/∂ u∂ u'Y_j,t+k+1+∂^2b(0)/∂ u∂ u'][da(0)/du']^h-k-1 = ∑_j=1^n {[da'(0)/du]^h-k-1∂^2a_j(0)/∂ u∂ u'[da(0)/du']^h-k-1Y_j,t+k}+ [da'(0)/du]^h-k-1∂^2b(0)/∂ u∂ u'[da(0)/du']^h-k-1. We deduce the generic term by taking the expectation given Y_t. It is equal to: ∑_j=1^N {(∂ a'(0)/du)^h-k-1∂^2a_j(0)/∂ u ∂ u'(∂ a'(0)/du')^h-k-1[-dc(0)/du_j+((∂ a'(0)/∂ u)^k(Y_t+dc(0)/du))_j]} + (∂ a'(0)/∂ u)^h-k-1∂^2b(0)/∂ u ∂ u'(∂ a(0)/∂ u')^h-k-1. §.§.§ Proof of Corollary 5: Markov Chains The generic term on the right hand side of the FELD is: 𝔼[log(exp(u)P^h-k)X_t+k-log(exp(u)P^h-k-1)X_t+k+1| X_t] = log(exp(u)P^h-k)𝔼[X_t+k| X_t]-log(exp(u)P^h-k-1)𝔼[X_t+k+1| X_t] = log(exp(u)P^h-k)P^kX_t-log(exp(u)P^h-k-1)P^k+1X_t = [log(exp(u)P^h-k)P^k-log(exp(u)P^h-k-1)P^k+1]X_t. §.§.§ Proof of Corollary 6: INAR(1) The conditional log-Laplace transform of an INAR(1) process at horizon 1 is: log[Ψ(u,1| I_t)]= λ[1-exp (-u)]+Y_tlog[1-p+pexp (-u)], which is affine in Y_t and can be written in the form of equation (<ref>). In particular, we have [Darolles et al. (2004)]: a(u) = -log[1-p+pexp (-u)], c(u) = -λ/1-p[1-exp (-u)]. We deduce that: da/du(0) = p/p-(p-1)exp(0) = p, dc/du(0) = -λexp(0)/1-p = -λ/1-p. Hence, we have: 𝔼[Y_t+h| Y_t]= p^hY_t + λ/1-p(1-p^h). It is known that the moment generating function of the INAR(1) model at horizon h has the same form as the moment generating function with parameters (λ,p) replaced by (λ1-p^h/1-p,p^h). 
Therefore, we get: a^∘ (h)(u) = a[a^∘ (h-1)(u)] = -log[1-p^h+p^hexp (-u)], and: c[a^∘ (h)(u) ]= λ/1-p p^h [1-exp (-u)]. Combining these results and applying Proposition 6, the left hand side of the FELD is: {p^hu+log[1-p^h+p^hexp (-u)]}Y_t +λ/1-pu-λ/1-pp^hu+λ/1-p[1-exp (-u)]- λ/1-p p^h [1-exp(-u)] = {p^hu+log[1-p^h+p^hexp (-u)]}Y_t+λ/1-p[(1-p^h)(u-1+exp(-u))]. By using the expansions of a^∘ h(u), c[a^∘ h(u)] and of the prediction 𝔼(Y_t+h| Y_t), the right hand side is: ∑_k=0^h-1{p^klog[1-p^h-k+p^h-kexp (-u)]-p^k+1log[1-p^h-k-1+p^h-k-1exp (-u)]}Y_t +λ(1-p^k)/1-plog(1-p^h-k+p^h-kexp(u))-λ(1-p^k+1)/1-plog(1-p^h-k-1+p^h-k-1exp(u)) +λ[1-exp(u)]p^h-k-1. §.§.§ Proof of Corollary 7: ARG(1) For the ARG(1) process we know that: a(u) = β u/1+u, c(u) = -δlog[1+u/1-β], a^∘ h(u) = β^h u/[1+1-β^h/1-βu]. It is easily checked that: da/du(0) = [β/(1+u)^2]_u=0 = β. Thus applying Proposition 5: α(h,u) = u[da/du(0)]^h-a^∘ h(u) = β^hu -β^h u/[1+1-β^h/1-βu] = β^hu[1-1/(1+1-β^h/1-βu)]. Similarily: a(h,k,u) = a^∘ (h-k-1)(u) [da/du(0)]^k+1-a^∘ (h-k)(u) [da/du(0)]^k = β^h-k-1 u/[1+1-β^h-k-1/1-βu]β^k+1-β^h-k u/[1+1-β^h-k/1-βu]β^k = β^hu[1/(1+1-β^h-k-1/1-βu)-1/(1+1-β^h-k/1-βu)]. To get the terms β(h,u) and β(h,k,u), note: dc/du(0) = -δ[1/u-β+1]_u=0=-δ/1-β, and, c[a^∘ h(u)] = -δlog[1+a^∘ h(u)/1-β] = -δlog[1+β^hu/(1-β)(1+1-β^h-k/1-βu)] = -δlog[1+β^hu/(1-β)(1-β+(1-β^h)u)]. Hence, we apply Proposition 5 to obtain: β(h,u) = - udc/du(0) + u[da/du(0)]^hdc/du(0)+c(u)-c[a^∘ h(u)] = δ u/1-β-β^huδ/1-β-δlog[1+u/1-β]+δlog[1+β^hu/1-β+(1-β^h)u]. Similarly, β(h,k,u)= (a^∘(h-k)(u)-a^∘(h-k-1)(u))(dc/du(0)) +[a^∘(h-k)(u)(da/du(0))^k-a^∘(h-k-1)(u)(da/du(0))^k+1](dc/du(0)) + c(a^∘ (h-k-1)(u))-c(a^∘ (h-k)(u)) = [β^h-k u/(1+1-β^h-k/1-βu)-β^h-k-1 u/(1+1-β^h-k-1/1-βu)](-δ/1-β) + [β^h u/(1+1-β^h-k/1-βu)-β^h u/(1+1-β^h-k-1/1-βu)](-δ/1-β) + δlog[1+β^h-k-1u/1-β+(1-β^h-k-1u)]-δlog[1+β^h-ku/1-β+(1-β^h-ku)]. §.§.§ Proof for the Wishart Process Since the Wishart process concerns stochastic symmetric positive definite matrices, the conditional Laplace transform is usually written with matrix arguments [see Gourieroux, Jasiak and Sufana (2009)]. It is given by: Ψ(Γ,h| I_t ) = 𝔼[exp(-(Γ Y_t+h))| Y_t] = exp[-((M^h')Γ(Id+2Σ_h Γ)^-1M^hY_t)]/(Id+2Σ_hΓ)^k/2, where Σ_h=Σ + MΣ M' + ... + M^h-1Σ (M^h-1)'. Therefore we get: logΨ (Γ,h-k | I_t+k) - logΨ(Γ,h-k-1| I_t+k+1) = {(M^h-k-1)'Γ(Id+2Σ_h-k-1Γ)^-1M^h-k-1Y_t+k+1} -{(M^h-k)'Γ(Id+2Σ_h-kΓ)^-1M^h-kY_t+k} - log[(Id+2Σ_h-kΓ)^K/2] + log[(Id+2Σ_h-k-1Γ)^K/2]. Conditioning on Y_t in this equation, noting that 𝔼(Y_t+h| Y_t) = M^hY_t(M^h)'+KΣ_h, the generic term in the FELD decomposition (<ref>) is: {(M^h)'Γ[(Id+2Σ_h-k-1Γ)^-1-(Id+2Σ_h-kΓ)^-1]M^h Y_t} + {[(M^h-k)'Γ(Id+2Σ_h-kΓ)^-1M^h-kKΣ_k]} - {[(M^h-k-1)'Γ(Id+2Σ_h-k-1Γ)^-1M^h-k-1KΣ_k+1]} + K/2log[(Id+2Σ_h-kΓ)/(Id+2Σ_h-k-1Γ)]. §.§.§ Conditional Laplace Transform for Bivariate NBAR at Horizon 1 𝔼[exp(-u'Y_t+1)|Y_t] = 𝔼[𝔼(exp(-u'Y_t+1)|Y_t,X_t+1,Z_t+1)|Y_t] = 𝔼[exp(-(α_1Z_t+1+β_1X_1,t+1)(1-exp(-u_1)))exp(-(α_2Z_t+1+β_2X_2,t+1)(1-exp(-u_2)))|Y_t] = 𝔼{exp[-α_1Z_t+1(1-exp(-u_1))-α_2Z_t+1(1-exp(-u_2))]. 
.×exp[β_1X_1,t+1(1-exp(-u_1))]×exp[β_2X_2,t+1(1-exp(-u_2))]|Y_t} = 1/[1+α_1(1-exp(-u_1))+α_2(1-exp(-u_2))]^δ+σ_1Y_1,t+σ_2Y_2,t ×1/[1+β_1(1-exp(-u_1))]^δ_1+Y_1,t×1/[1+β_2(1-exp(-u_2))]^δ_2+Y_2,t = 1/[1+β_1(1-exp(-u_1))]^Y_1,t×1/[1+α_1(1-exp(-u_1))+α_2(1-exp(-u_2))]^σ_1Y_1,t ×1/[1+β_2(1-exp(-u_2))]^Y_2,t×1/[1+α_1(1-exp(-u_1))+α_2(1-exp(-u_2))]^σ_2Y_2,t ×1/[1+α_1(1-exp(-u_1))+α_2(1-exp(-u_2))]^δ ×1/[1+β_1(1-exp(-u_1))]^δ_1×1/[1+β_2(1-exp(-u_2))]^δ_2 = exp[-a_1(u_1,u_2)Y_1,t-a_2(u_1,u_2)Y_2,t-b(u_1,u_2)], where: a_1(u_1,u_2)= log[1+β_1(1-exp(-u_1))]+σ_1log[1+α_1(1-exp(-u_1))+α_2(1-exp(-u_2))], a_2(u_1,u_2)= log[1+β_2(1-exp(-u_2))]+σ_2log[1+α_1(1-exp(-u_1))+α_2(1-exp(-u_2))], b(u_1,u_2) = δ_1log[1+β_1(1-exp(-u_1))]+δ_2log[1+β_2(1-exp(-u_2))] + δ_3log[1+α_1(1-exp(-u_1))+α_2(1-exp(-u_2))]. §.§.§ VAR(1) Representation for the Bivariate NBAR Linear Prediction Formula 𝔼[Y_t|Y_t-1] = 𝔼[𝔼(Y_t|Y_t-1,X_t,Z_t)|Y_t-1] = 𝔼[[ α_1; α_2 ]Z_t + [ β_1 X_1,t+1; β_2 X_2,t+1 ]| Y_t-1] = [ α_1; α_2; ][δ_+σ_1Y_1,t-1+σ_2Y_2,t-1]+ [ β_1 (δ_1+Y_1,t-1); β_2 (δ_2+Y_2,t-1) ] = [ α_1δ + β_1δ; α_2δ + β_2δ; ]+[ α_1σ_1+β_1 α_1σ_2; α_2σ_1 α_2σ_2 +β_2; ][ Y_1,t-1; Y_2,t-1 ]. § ONLINE APPENDICES §.§ Asymptotic Confidence Bands on the FELD Suppose θ̂_T is a consistent estimator of θ and is asymptotically normal, that is: √(T)(θ̂_T - θ) N(0,V), where denotes the convergence in distribution. Then by the Delta method, we have, for given h: √(T)[vec(γ(k,h| u,y;θ̂_t))-vec(γ(k,h| u,y;θ_0))] N(0,∂Γ(u,y,θ_0)/∂θ'V∂Γ(u,y,θ_0)'/∂θ), where ∂Γ(u,y;θ_0)/∂θ'=[∂γ(1,h| u,y;θ_0)/∂θ,...,∂γ(h-1,h| u,y;θ_0)/∂θ]. Hence, the asymptotic distribution of vec[γ(k,h| y,u;θ̂_t)] can be approximated using a Gaussian with mean γ(k,h| y,u;θ_0) and variance Ω_T(u,y) = 1/T[∂Γ(u,y,θ̂_T)/∂θ'V̂∂Γ(u,y,θ̂_T)/∂θ'], where V̂_T is a consistent estimator of V. Then the asymptotic distribution of γ̂(h|u,y)=γ(h|u,y;θ̂_T) can be approximated by the Gaussian distribution with mean γ(h|u,y;θ_0) and variance e'Ω_T(u,y)e, where e is the vector with unitary components. Remark: For dynamic affine models, the closed form expressions of the components in the FELD contain terms of the form a^∘ k(u;θ), when θ is explicitly written. Since a^∘ k(u;θ)=a[a^∘ (h-1)(u;θ);θ], we get: ∂ a^∘ k(u;θ)/∂θ' = ∂ a/∂θ'[ a^∘ (k-1)(u;θ);θ] + ∂ a/∂ u'[a^∘ (k-1)(u;θ);θ]∂[a^∘ (k-1)(u;θ);θ]/∂θ, ∀ k. The computation this of the derivative ∂γ/∂θ can be simplified by applying recursively this rule. §.§ Negative Binomial Autoregressive Process §.§.§ Proof of Corollary 9: FELD for Univariate NBAR Processes Since the NBAR is a dynamic affine process, we may apply Propostion 6 directly to obtain the FELD. From the conditional Laplace transform in (<ref>) we get: a(u) = log[1+β c(1-exp(-u))], and a^∘ h (u) =-log[1+β c_h-1(1-exp(-u))]+log[1+β c_h(1-exp(-u))], b^∘ h (u) = c(u) - c[a^∘ h (u)] =- δlog[1+β c_h(1-exp(-u))]. The unconditional Laplace transform is given by (<ref>) is: c(u) = -δlog[1+β c(1-exp(-u))]. Hence, we deduce that: da/du(0) = β c/β c (exp(u)-1)+exp(u)|_x=0 = β c = ρ , and: dc/du(0) = -δβ c/β c (exp(x)-1)+exp(x)|_x=0 = -δβ c = -δρ. Applying Propostion 6, the LHS of the FELD is given by: {u'[da'/du(0)]^h - a^∘ h(u)'}Y_t - u'dc/du(0)+ u'[da'/du(0)]^h[dc/du(0)] + c(u) - c[a^∘ h(u)] = {uρ^h+log[1+β c_h-1(1-exp(-u))]-log[1+β c_h(1-exp(-u))]}Y_t +δρ u - δρ^h+1 u - δlog[1+β c_h(1-exp(-u))]. 
Likewise, the RHS of the FELD is given by: ∑_k=0^h-1{a^∘ (h-k-1)(u)'[da'/du(0)]^k+1-a^∘ (h-k)(u)'[da'/du(0)]^k}Y_t+(a^∘ (h-k)(u)'-a^∘ (h-k-1)(u)')[dc/du(0)] + (a^∘ (h-k)(u)'[da'/du(0)]^k-a^∘ (h-k-1)(u)'[da'/du(0)]^k+1)[dc/du(0)] + c[a^∘ (h-k-1)(u)]-c[a^∘ (h-k)(u)] = ∑_k=0^h-1{ρ^k+1log[1+β c_h-k-1(1-exp(-u))/1+β c_h-k-2(1-exp(-u))]-ρ^klog[1+β c_h-k(1-exp(-u))/1+β c_h-k-1(1-exp(-u))]}Y_t -δρ{log[1+β c_h-k(1-exp(-u))/1+β c_h-k-1(1-exp(-u))]-log[1+β c_h-k-1(1-exp(-u))/1+β c_h-k-2(1-exp(-u))]} -δρ{ρ^klog[1+β c_h-k(1-exp(-u))/1+β c_h-k-1(1-exp(-u))]-ρ^k+1log[1+β c_h-k-1(1-exp(-u))/1+β c_h-k-2(1-exp(-u))]} - δlog[1+β c_h-k(1-exp(-u))/1+β c_h-k-1(1-exp(-u))]. §.§.§ Extension to K-dimensions The J dimensional NBAR process (Y_t) satisfies a nonlinear state space model with the latent state variables X_t = (X_1,t,...,X_J,t), Z_t=(Z_1,t,...,Z_J,t) characterized by the measurement and state equations below: * Measurement Equation - They define the conditional distribution of Y_t+1 given Z_t+1,X_t+1,Y_t. It is assumed that Y_1,t+1,...,Y_J,t+1 are conditionally independent with Poisson distribution 𝒫(X_j,t+1+Z_j,t+1) for j=1,...,J. * Transition Equations - They define the conditional distribution of X_t+1 and Z_t+1 given Z_t,X_t,Y_t. The conditional intensity of the Poisson distribution has idiosyncratic components Z_j,t+1 for j=1,...,J and cross-dependent components X_j,t+1. It is assumed that these variables are conditionally independent where: * Z_j,t+1 follows a gamma distribution γ(δ_j+β'_jY_t,c_j), j=1,...,J with δ_j+β_j'Y_t, δ_j>0, β_j>0 the path dependent degree of freedom and c_j>0 the scale parameter. * The cross-dependent intensities X_t+1 follow a Wishart Gamma distribution with a conditional Laplace transform: 𝔼[exp(-s_1X_1,t+1-...-s_JX_J,t+1)| Y_t] = 1/{[I_d+2(s)Σ]}^δ_0/2+α'Y_t, where the components of s=(s_1,...,s_J) are positive. The parameterization involves a positive definite matrix Σ to create the cross-sectional dependence and a path dependent degree of freedom δ_0/2 + α'Y_t, where δ_0 and α_j>0, j=1,...,J. This NBAR model can be applied to the four series DISC, HACK, INSD and ELET with J=4, or pairwise by couple of series (J=2) or to each single series (J=1). The causal representation in the case of J=2 is given by: Z_1,t[rd,shift left = 0.5ex] Z_1,t+1[rd,shift left = 1.5ex] . Y_1,t-1 Y_2,t-1] [r] [ru] [rd] [ X_1,t X_2,t.[r,shift right=1.5ex][r,shift left=1.5ex] . Y_1,t Y_2,t] [r] [ru] [rd] [ X_1,t+1 X_2,t+1. [r,shift right=1.5ex][r,shift left=1.5ex] . Y_1,t+1 Y_2,t+1. Z_2,t[ru,shift right = 0.5ex] Z_2,t+1[ru,shift right = 1.5ex] In Lu et al. (2024), the nonlinear prediction formula is given by the probability generating function 𝔼(u_1^Y_1,t+1...u_1^Y_K,t+1|Y_t). 
To facilitate the construction of the FELD, we instead express the nonlinear prediction formula by means of the log-Laplace transform: The log-Laplace transform of the multivariate NBAR process is given by: 𝔼[exp(-u_1Y_1,t+1-...-u_JY_J,t+1)| Y_t] = 𝔼{𝔼[exp(-u_1Y_1,t+1-...-u_JY_J,t+1) | X_t+1, Z_t+1]| Y_t} = 𝔼{exp[-(1-exp(-u_1))(X_1,t+1+Z_1,t+1)-...-(1-exp(-u_J))(X_J,t+1+Z_J,t+1)]| Y_t} = 𝔼{exp[-∑_j=1^J(1-exp(-u_j))Z_j,t+1]exp[-∑_j=1^J(1-exp(-u_j))X_j,t+1]| Y_t} = [∏_j=1^J1/[1+c_j(1-exp(-u_j))]^δ_j+β_j'Y_t]×1/{det[I_d+2diag(1-exp(-u_1),...,1-exp(-u_J))Σ]}^δ_0/2+α'Y_t = exp[-a(u_1,...,u_J)'Y_t-b(u_1,...,u_J)], where the expressions for a(u_1,...,u_J)=(a_1,...,a_J) and b(u_1,...,u_J) are given by: a_i(u_1,...,u_J) = ∑_j=1^Jβ_j,ilog[1+c_j(1-exp(-u_j))] +α_ilog det[I_d+2diag(1-exp(-u_1),...,1-exp(-u_J))Σ], i=1,...,J, b(u_1,...,u_J) = ∑_j=1^Jδ_jlog[1+c_j(1-exp(-u_j))] +δ_0/2log det[I_d+2diag(1-exp(-u_1),...,1-exp(-u_J))Σ], where β_j,i denotes the i-th component of β_j. Since the process is dynamic affine, the h-step-ahead log-Laplace transform is obtained recursively as: a_i^(h)(u_1,...,u_J) = a_i(a_1^(h-1),...,a_J^(h-1)), i=1,...,J, b^(h)(u_1,...,u_J) = b^(h-1)(u_1,...,u_J) + b(a_1^(h-1),...,a_J^(h-1)), with a_i^(1)=a_i and b^(1)=b. The FELD can then be generated using this recursive formula, in a similar manner to the bivariate case described in Section 7.3.3.
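To illustrate, the sketch below implements this recursion in Python for J=2 with arbitrary parameter values, writing the determinant term as det[I+2diag(1-exp(-u))Σ] as above. It recovers the h-step-ahead conditional means 𝔼[Y_j,t+h|Y_t] by a finite-difference derivative of a^(h)(u)'Y_t + b^(h)(u) at u=0, and checks the h=1 case against the closed-form mean 2Σ_jj(δ_0/2+α'Y_t) + c_j(δ_j+β_j'Y_t) implied by the gamma and Wishart-gamma first moments. Parameter values and the finite-difference step are purely illustrative.

import numpy as np

J = 2
# illustrative parameters (arbitrary)
c = np.array([0.6, 0.9])
delta = np.array([0.7, 1.1])          # delta_j
B = np.array([[0.30, 0.05],           # B[j, i] = beta_{j,i}, the i-th component of beta_j
              [0.10, 0.25]])
alpha = np.array([0.2, 0.3])
delta0 = 1.5
Sigma = np.array([[0.50, 0.10],
                  [0.10, 0.40]])

def ab(u):
    """One-step coefficients a(u) (vector) and b(u) (scalar) of the log-Laplace transform."""
    g = 1.0 - np.exp(-u)
    logdet = np.log(np.linalg.det(np.eye(J) + 2.0 * np.diag(g) @ Sigma))
    logs = np.log(1.0 + c * g)                    # log[1 + c_j (1 - exp(-u_j))]
    a = B.T @ logs + alpha * logdet               # a_i = sum_j beta_{j,i} logs_j + alpha_i logdet
    b = delta @ logs + 0.5 * delta0 * logdet
    return a, b

def ab_h(u, h):
    """h-step coefficients via a^(h) = a(a^(h-1)), b^(h) = b^(h-1) + b(a^(h-1))."""
    a_prev, b_prev = ab(u)
    for _ in range(h - 1):
        a_new, b_inc = ab(a_prev)
        a_prev, b_prev = a_new, b_prev + b_inc
    return a_prev, b_prev

y_t = np.array([2.0, 4.0])
eps = 1e-6
for h in (1, 2, 3):
    mean_h = np.empty(J)
    for j in range(J):
        a_h, b_h = ab_h(eps * np.eye(J)[j], h)
        mean_h[j] = (a_h @ y_t + b_h) / eps       # derivative of the cumulant at u = 0
    print(h, mean_h)

# closed-form one-step conditional mean, for comparison with the h = 1 line above
print(2.0 * np.diag(Sigma) * (delta0 / 2.0 + alpha @ y_t) + c * (delta + B @ y_t))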
http://arxiv.org/abs/2406.18120v1
20240626071951
ArzEn-LLM: Code-Switched Egyptian Arabic-English Translation and Speech Recognition Using LLMs
[ "Ahmed Heakl", "Youssef Zaghloul", "Mennatullah Ali", "Rania Hossam", "Walid Gomaa" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.CY", "cs.LG" ]
6th International Conference on AI in Computational Linguistics a]Ahmed Heakl a]Youssef Zaghloul a]Mennatullah Ali b]Rania Hossam a,c]Walid Gomaa [a]Egypt-Japan University of Science and Technology, New Borg El-Arab City, 21934, Alexandria, Egypt [b]Mansoura University, Mansoura, 35511, Egypt [c]Alexandria University, El-Shatby, 21526, Alexandria, Egypt § ABSTRACT Motivated by the widespread increase in the phenomenon of code-switching between Egyptian Arabic and English in recent times, this paper explores the intricacies of machine translation (MT) and automatic speech recognition (ASR) systems, focusing on translating code-switched Egyptian Arabic-English to either English or Egyptian Arabic. Our goal is to present the methodologies employed in developing these systems, utilizing large language models such as LLama and Gemma. In the field of ASR, we explore the utilization of the Whisper model for code-switched Egyptian Arabic recognition, detailing our experimental procedures including data preprocessing and training techniques. Through the implementation of a consecutive speech-to-text translation system that integrates ASR with MT, we aim to overcome challenges posed by limited resources and the unique characteristics of the Egyptian Arabic dialect. Evaluation against established metrics showcases promising results, with our methodologies yielding a significant improvement of 56% in English translation over the state-of-the-art and 9.3% in Arabic translation. Since code-switching is deeply inherent in spoken languages, it is crucial that ASR systems can effectively handle this phenomenon. This capability is crucial for enabling seamless interaction in various domains, including business negotiations, cultural exchanges, and academic discourse. Our models and code are available as open-source resources. [Code: <http://github.com/ahmedheakl/arazn-llm>], [Models: <http://huggingface.co/collections/ahmedheakl/arazn-llm-662ceaf12777656607b9524e>]. Dialectal Egyptian Arabic Code-Switching Machine Translation Automatic Speech Recognition Large Language Models § INTRODUCTION The term “code-switching” describes the phenomenon of a bilingual or multilingual speaker switching between two or more languages <cit.>. It has grown to be a prominent phenomenon in multilingual societies around the globe, particularly in the Arab world <cit.>. In Egypt, code-switching is a significant and common linguistic phenomenon. People's code choices have been impacted by recent political and social changes in Egypt. As shown in Table <ref>, code-switching is evident in everyday conversations. Addressing the complexities of code-switching presents a significant challenge due to the vast range of potential data combinations. Compounding this challenge is the scarcity of resources dedicated to training models on code-switched data. Additionally, the extent to which existing language models have encountered code-switched content during pre-training remains uncertain. Consequently, the ability of these models to effectively transfer knowledge to downstream code-switched tasks remains largely unexplored <cit.>. 
Machine translation approaches include direct-based, which uses dictionaries but lacks analysis <cit.>; rule-based, which leverages linguistic rules but requires manual effort <cit.>; corpus-based, which relies on data but struggles with low-resource languages <cit.>; knowledge-based, which incorporates explicit knowledge but struggles with ambiguity <cit.>; and hybrid, which combines approaches for better quality <cit.>. Arabic and English have different cultural backgrounds, affecting translation. `The news warms my heart' becomes الخبر يثلج صدري> in Arabic, where `warms' is translated to ثلج> (ices), due to the languages' origins in different climates. This is because English was born in a cold climate, where warmth is a pleasant weather, whereas Arabic was born in a hot climate, where cold is a pleasant weather. Human translators can understand these cultural differences, but machine translators may struggle to capture them <cit.>. ArzEn corpus serves as a valuable resource for linguistic research and the development of NLP systems capable of handling code-switched Egyptian Arabic-English while preserving cultural aspects <cit.>. Our primary contributions are the following: * Translation: Developing translation models using open-source models (Llma2, Llama3, and Gemma) for code-switched Egyptian Arabic-English, aiming to achieve translations that closely mimic human-generated outputs, from code-switched Egyptian Arabic to either English or Egyptian Arabic. * ASR: Developing an Automatic Speech Recognition (ASR) system using Whisper as a crucial component of a complete pipeline, where spoken code-switched Egyptian Arabic-English utterances are transcribed into written text, which is then translated using machine translation. * Quantization: Quantizing our models to be more accessible to human users through their CPUs/GPUs, ensuring efficient deployment and utilization of our models * Evaluation framework: Extending available metrics to enhance the reliability of our models, prioritizing evaluation accuracy and performance. * Open-Sourcing: Making our models and code publicly available to encourage community engagement and further research. The rest of this paper is organized as follows. Section <ref> reviews related literature. Section <ref> gives our methodology and experimental work. Section <ref> presents our results and discussion, featuring evaluations across multiple metrics. Concluding remarks are provided in Section <ref>. § RELATED WORKS §.§ Enhancements in Code-Switching Resources for Egyptian Arabic The authors in <cit.> discussed the phenomenon of code-switching (CSW) in Egyptian movies where code-switching is prevalent due to the complex linguistic landscape and social variables <cit.>, where speakers seamlessly blend dialectal Egyptian Arabic with other languages like English and French. The authors in <cit.> introduced ArzEn-ST which is a three-way speech translation corpus for code-switched Egyptian Arabic-English, which extends the ArzEn corpus <cit.>. They also presented benchmark baseline results for ASR, MT, and speech translation (ST) tasks. In addition, the authors in  <cit.> expanded the existing Egyptian Arabic datasets by introducing a new dataset focused on daily life conversations from movies and songs. This dataset is designed for benchmarking new machine translation models, fine-tuning large language models in few-shot settings, and facilitating research in cross-linguistic analysis and lexical semantics. 
This also helps in capturing more cultural nuances related to Egyptian Arabic. §.§ Code-switched corpora The authors in <cit.> presented the ArzEn corpus, an Egyptian Arabic-English code-switching spontaneous speech corpus. The corpus comprises 12 hours of recorded interviews with 38 Egyptian bilingual university students and employees. The corpus is designed for Automatic Speech Recognition (ASR) systems and offers insights into linguistic, sociological, and psychological aspects of code-switching. The work done in <cit.> extends the ArzEn corpus with translation in both primary (Egyptian-Arabic) and secondary (English) languages. The authors in <cit.> presented ArzEn-MultiGenre corpus comprising 25,557 segment pairs of Egyptian Arabic song lyrics, novels, and TV show subtitles, all manually translated and aligned with their English counterparts. §.§ The era of Large Language Models (LLMs) The process of translation requires a complete understanding of linguistic conversion, syntactic, grammatical, and cultural dimensions. It is more than mapping words between languages <cit.>. Accurate translation requires a deep understanding of the cultural nuances inherent in both languages, ensuring the preservation of cultural sensitivity and local values <cit.>. This versatility of LLMs enabled them to excel in numerous NLP applications, such as text generation (Llama2 <cit.>, ChatGPT <cit.>), machine translation (NLLB <cit.>, SemalessM4T <cit.>, ArzEn-ST <cit.>). Recent advancements in Large Language Models (LLMs) have led to the development of powerful models like LLaMa2 <cit.>, Gemma (2B, 7B) <cit.>, and LLaMa3 8B, which have demonstrated impressive capabilities in NLP tasks. Notably, these models have been designed to be more computationally efficient, allowing them to be deployed on consumer-grade GPUs. This shift enables researchers and developers to harness the power of LLMs on local machines, facilitating faster experimentation, prototyping, and deployment of AI applications. §.§ Code-switching Automatic Speech Recognition (CSW-ASR) Researchers have explored acoustic, linguistic, and pronunciation modeling approaches, including language identification systems <cit.>, parallel recognizers <cit.>, and single-pass methods <cit.>. The authors in <cit.> presented Whisper, a speech recognition system trained on 680,000 hours of multilingual and multitask audio data, achieving zero-shot transfer capabilities and approaching human accuracy and robustness. The system's architecture is based on an encoder-decoder transformer, leveraging a minimalist data processing approach and multitask training. § METHODOLOGY In this section, we present the machine translation and automatic speech recognition systems we used. §.§ Machine Translation (MT) The task of machine translation is represented by a mapping 𝒯: X^S→ Y^T where 𝒯 is the machine translation function, X^S is the set of source sentences in the source language S, represented as a sequence of tokens x = (x_1, x_2, ..., x_n), and Y^T is the set of translated sentences in the target language T. The goal of machine translation is to find the optimal translation ŷ that maximizes the likelihood of the target sentence given the source sentence ŷ = max_y ∈ Y^T P(y|x) where P(y|x) is the conditional probability of the target sentence given the source sentence. Formally, we can define the machine translation problem as: 𝒯^* = min_𝒯𝔼_x ∼𝒳^𝒮 [d(𝒯(x), y^*)] where 𝒯^* is the optimal machine translation function. d(·, ·) is a distance metric (e.g. 
BLEU score, METEOR score) that measures the similarity between the translated sentence and the reference translation y^*. The goal is to find the optimal machine translation function 𝒯^* that minimizes the expected distance between the translated sentence and the reference translation. This mathematical definition provides a formal framework for understanding the task of machine translation and its optimization problem. We used the infamous translation ArzEn-ST dataset to train all of our models <cit.>. We adhere to the same train and test splits as described in <cit.>. Specifically, we utilize the ArzEn-ST test set, comprising 1,402 sentences, and the train set, consisting of 3,344 sentences. To provide our models with a richer context, we also pre-train them on larger datasets, including the entire parallel corpora presented in <cit.>. This approach enables our models to leverage a broader range of linguistic patterns and cultural nuances. Data pre-processing involves removing corpus-specific annotations, URLs, and emoticons, as well as converting all text to lowercase. This step is crucial in ensuring that our models focus on the underlying linguistic structures and cultural nuances of the Egyptian-Arabic language. Given the sequential nature of the translation task and the need for culturally enriched translations, we opt for large language models (LLMs) as our primary approach. Specifically, we employ the latest LLMs that can be accommodated by consumer-grade RAM or GPU, including LLaMA3 8B, Gemma1.1 2B, and Gemma1.1 7B <cit.>. Notably, we utilize the chat version of each model, which has been trained to follow human instructions, thereby facilitating the training process. All models are trained using 2 T4 GPUs with 16GB VRAM. It is worth noting that these models are decode-based architectures, which are particularly well-suited for sequential tasks like machine translation. By leveraging the strengths of these models, we aim to produce culturally fitting translations that capture the nuances of Egyptian-Arabic language and culture. We employed the paged-Adam optimizer with weight decay <cit.> in 32-bit precision for all models, except for LLaMa3, which required 8-bit precision due to its substantial size (8 billion parameters). To accommodate the computational demands of the Adam optimizer, which utilizes multiple gradient copies, we trained our models using adapters for LLMs. Specifically, we explored the use of Quantized low-Rank Adapters (QLoRA) <cit.> and weight-Decomposed low-Rank Adaptation (DoRA) <cit.>, with the latter yielding the most promising results and exhibiting similar behavior to the original fine-tuning process. We opted for int4 quantization with normal floats (nf4) for each adapter. To mitigate memory constraints during training, we leveraged gradient checkpointing <cit.>, which incurs only an additional forward pass per mini-batch, while reducing memory consumption to O(√(n)). Furthermore, to enable training with effectively large batch sizes while minimizing memory constraints, we implemented a gradient accumulation step of 4 <cit.>. This approach allows us to accumulate gradients from 4 batches, perform backward propagation, and achieve comparable accuracy to updating a batch of 4 at once, while reducing memory requirements by a factor of 4. Our experiments revealed that the optimal strategy involves training models for a single epoch with a constant learning rate schedule. 
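The pre-processing step described above (removing corpus-specific annotations, URLs, and emoticons, then lowercasing) can be sketched as follows; the annotation format and the emoticon/emoji patterns below are illustrative assumptions, not the exact cleaning rules used for ArzEn-ST.

import re

URL_RE = re.compile(r"https?://\S+|www\.\S+")
EMOTICON_RE = re.compile(r"[:;=8][-']?[()dDpP]")            # simple ASCII emoticons such as :) ;D :-P
EMOJI_RE = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")
ANNOTATION_RE = re.compile(r"\[[A-Z_]+\]")                  # corpus tags such as [LAUGHTER] (assumed format)

def preprocess(text: str) -> str:
    for pattern in (URL_RE, EMOTICON_RE, EMOJI_RE, ANNOTATION_RE):
        text = pattern.sub(" ", text)
    text = text.lower()                                     # lowercases the Latin script; Arabic has no case
    return re.sub(r"\s+", " ", text).strip()

print(preprocess("Check www.example.com :) [LAUGHTER] ana ba7eb el-coding"))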
Additionally, we ensured that input attention masks were configured to mask out the output translation, thereby computing gradients and loss only for the output translation. Lastly, to make our models available on a consumer CPU, we provide the quanitized GGUF version of our best model. The quantization was done through the implementation of GGUF llama.cpp. §.§ Automatic Speech Recognition (ASR) In the context of Automatic Speech Recognition (ASR), we aim to convert a speech signal into a sequence of words. Let's assume a speech signal x = (x_1, x_2, ..., x_T) where x_t ∈ℝ^D is the acoustic feature vector at time t and T is the length of the speech signal. The goal of ASR is to find the most likely sequence of words w = (w_1, w_2, ..., w_N) where w_n ∈𝒱 is the n^th word in the vocabulary 𝒱 and N is the length of the transcription. The ASR problem can be formulated as ŵ = max_w ∈𝒱^* P(w|x), where P(w|x) is the posterior probability of the word sequence w given the speech signal x. We propose a cascaded speech-to-text translation system, wherein an ASR system is trained to generate transcriptions, which are subsequently fed into a machine translation model. We opted for a cascaded architecture over an end-to-end approach due to the constraints imposed by limited resources, which rendered the development of an end-to-end system infeasible. Furthermore, previous research has demonstrated that cascaded systems can outperform end-to-end systems in low-resource settings, thereby motivating our design choice <cit.> We employed the Whisper model <cit.> to tackle the task of ASR for Egyptian Arabic. The Whisper model, trained on a large-scale dataset of 680,000 hours of multilingual and multitask supervision, demonstrated excellent generalizability to our specific use case. This is particularly valuable for our application, as we are dealing with a unique dialect of Arabic, namely the Egyptian Arabic. The Whisper model, an encoder-decoder architecture, takes the input signal in spectrogram format and utilizes cross-attention mechanisms. For our experiments, we leveraged the ArzEn-ST dataset <cit.>, but restricted the output to transcription only, focusing on code-switched Egyptian Arabic. Data preprocessing involved resampling all audio to 16 kHz, removing URLs and emoticons from the text, segmenting the speech into 30-second clips, and converting each clip into mel-spectrogram images. Training was conducted on 2 T4 GPUs, each equipped with 16 GB of VRAM. The training process was completed in approximately 5 hours. § RESULTS AND DISCUSSION We evaluated the machine translation models using five criteria: BLEU <cit.>, BERT Score <cit.>, edit distance (EED), METEOR <cit.>, and LLaMa3-based grading, inspired by <cit.>, as traditional metrics are limited in capturing semantic nuances. For ASR, we employed Word-Error Rate (WER) and Character-Error Rate (CER) as evaluation metrics. Our models are compared to the state-of-the-art results in <cit.>, with a focus on BLEU for MT and WER and CER for ASR, as these are the only reported metrics. Figure <ref> shows that LLaMa3 outperforms all other models on the ArzEn to English translation task. As in table <ref>, LLaMa3 achieves a BLEU score of 53.64, which is significantly higher than the SoTA <cit.> by 56%. Also, smaller models such as Gemma 2B and Gemma 7B achieved comparable results to LLaMa3 8B with 9% and 4.1% lower in BERT-f1 score, respectively. 
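Returning to the fine-tuning setup described above, a minimal configuration sketch is given below, assuming recent versions of transformers, peft, and bitsandbytes. The model identifier, target modules, and learning rate are placeholders; the 4-bit nf4 quantization, DoRA adapter, paged AdamW, single epoch with a constant schedule, gradient checkpointing, and gradient accumulation of 4 follow the description above, and exact argument names may vary across library versions.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "google/gemma-1.1-2b-it"                 # placeholder for any of the chat models discussed above

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                              # QLoRA-style 4-bit base weights
    bnb_4bit_quant_type="nf4",                      # normal-float (nf4) quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
model = prepare_model_for_kbit_training(model)

adapter = LoraConfig(
    r=256, lora_alpha=128, lora_dropout=0.1,        # rank/alpha/dropout values reported in the results
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # assumed; depends on the architecture
    task_type="CAUSAL_LM",
    use_dora=True,                                  # DoRA variant; needs a peft release that supports it
)
model = get_peft_model(model, adapter)

args = TrainingArguments(
    output_dir="arzen-mt",
    num_train_epochs=1,
    learning_rate=2e-4,                             # placeholder value
    lr_scheduler_type="constant",
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    optim="paged_adamw_32bit",                      # paged AdamW with weight decay
    per_device_train_batch_size=1,
)
# During preprocessing, label positions belonging to the prompt are set to -100 so that the loss
# (and hence the gradient) is computed only on the target translation, as described above.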
On the other hand, LLaMa2 performance is the lowest which can be easily interpreted due to the fact that its tokenizer does not support Arabic tokenization. In contrast to new models such as Gemma and LLaMa3 which uses Byte-Pair Encoding (BPE) <cit.> implemented with tiktoken, LLaMa2 just breaks down the Arabic sentence into characters as shown in table <ref>. Notably, models pre-trained on additional data (Hamed et al., 2022 <cit.> + Extra and Gemmal.1 2B + Extra) generally outperform their counterparts trained only on the ArzEn dataset, suggesting that extra pre-training data can effectively enhance machine translation model performance. Although, this gain is marginal for larger models, such as Gemma1.1 7B, it can even be detrimental, as observed in LLaMa3 8B, with a -1% decrease in BERT-f1 score. As shown in table <ref>, translating into Arabic yields significantly higher BLEU scores compared to translating into English, with our optimal Arabic model achieving a BLEU score of 87.2, whereas the best English model attains a BLEU score of 53.64, representing a notable difference of approximately 62%. This phenomenon is consistent with the linguistic characteristics of the source text, where a significant proportion (approximately 85%) of Arabic words remain largely unchanged, with only minor modifications required to accommodate the target language. As illustrated in figure <ref>, increasing the LoRA rank consistently yields better results. Our experiments reveal that the optimal parameters are a rank of 256 and an alpha value of 128. Furthermore, we observe that higher ranks require increased LoRA dropout to mitigate overfitting, with a dropout of 0.1 employed for ranks exceeding 32. As shown in table <ref>, our trained Whisper models surpass the state-of-the-art results in <cit.> (+ Extra) by 11.6% in WER, despite being trained solely on the original data without additional pre-training. Furthermore, figure <ref> illustrates that the medium Whisper model marginally outperforms the small version, resulting in a 7.1% increase in BLEU score, as reflected in table <ref>. Whisper can achieve real-time output, with a latency of 1.3 seconds for a single 30-second clip inference on a consumer-grade GPU with fp16 precision, and 18 seconds on a CPU. Notably, for English models, we found that human evaluation is particularly well-suited. Therefore, we conducted a human evaluation study, where 65 university students were asked to assess the quality of 10 randomly selected generated sentences on a scale of 1-10, with 1 indicating an irrelevant translation and 10 representing a perfect translation that captures both meaning and cultural nuances. This approach was necessary, as traditional evaluation metrics such as BERTScore, METEOR, edit distance, and BLEU fail to adequately capture the nuances of meaning and cultural context. Our results show that, on average, the generated translations received a rating of 9.2 out of 10, which supports our claim of capturing both perfect meaning and cultural nuances. For instance, when presented with the sentence “انا دخلت IG,” our model produced the translation "I entered IG school," notwithstanding that “IG” signifies “Instagram” in contexts outside of Egyptian culture. Finally, our top-performing model, LLaMa3, was quantized from bfloat16 to 5-bit Q5, achieving a 68.75% reduction in bits while maintaining performance, with only 1.2% and 1% degradation in English and Arabic versions, respectively. 
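Stepping back to the ASR metrics used above: WER and CER are edit distances normalized by the reference length, computed over words and characters respectively. A minimal reference implementation is sketched below; the example strings are invented.

def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (single-row dynamic programming)."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                           # deletion
                        dp[j - 1] + 1,                       # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))   # substitution / match
            prev = cur
    return dp[n]

def wer(ref: str, hyp: str) -> float:
    r, h = ref.split(), hyp.split()
    return edit_distance(r, h) / max(len(r), 1)

def cer(ref: str, hyp: str) -> float:
    return edit_distance(list(ref), list(hyp)) / max(len(ref), 1)

ref = "ana ba7eb el machine learning awi"
hyp = "ana b7eb el machine learning"
print(round(wer(ref, hyp), 3), round(cer(ref, hyp), 3))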
The quantized model can be deployed on a consumer-grade RAM with a modest 5.6 GB footprint, supporting a throughput of 7.2 tokens/sec, thereby enabling real-time speech translation and video dubbing applications. § CONCLUSION This paper has provided insights into the methodologies employed in developing machine translation and automatic speech recognition systems for code-switched Egyptian Arabic. Through careful experimentation and rigorous evaluation, we have demonstrated the effectiveness of our approaches in achieving culturally fitting translations and accurate speech recognition. Our findings emphasize the importance of using large language models and pre-training with additional data to enhance the performance of MT systems. Moreover, the success of our ASR models, particularly the Whisper architecture, highlights the potential of deep learning techniques in tackling speech recognition tasks, even in low-resource settings. Looking ahead, further research could explore advanced optimization techniques and novel model architectures to push the boundaries of MT and ASR performance. Additionally, efforts to expand training data and refine models for specific dialects could result in even more precise translations and transcriptions, fostering greater linguistic accessibility in our globalized world. gemma Team, G., Mesnard, T., Hardin, C., Dadashi, R., Bhupatiraju, S., Pathak, S., Sifre, L., Rivière, M., Kale, M. S., Love, J., Tafti, P., Hussenot, L., Sessa, P. G., Chowdhery, A., Roberts, A., Barua, A., Botev, A., Castro-Ros, A., Slone, A., ... Kenealy, K. (2024) “Gemma: Open Models Based on Gemini Research and Technology”, arXiv.org, https://arxiv.org/abs/2403.08295. acegpt Huang, H., Yu, F., Zhu, J., Sun, X., Cheng, H., Song, D., Chen, Z., Alharthi, A., An, B., He, J., Liu, Z., Zhang, Z., Chen, J., Li, J., Wang, B., Zhang, L., Sun, R., Wan, X., Li, H., Xu, J. (2023) “AceGPT, Localizing Large Language Models in Arabic”, arXiv.org, https://arxiv.org/abs/2309.12053. arzen Hamed, I., Vu, N. T., Abdennadher, S. (2020) “Arzen: A speech corpus for code-switched Egyptian Arabic-English”, in Proceedings of the International Conference on Language Resources and Evaluation, pages 4237-4246. arzen-parallel Al-Sabbagh, R. (2024) “ArzEn-MultiGenre: An aligned parallel dataset of Egyptian Arabic song lyrics, novels, and subtitles, with English translations”, Data in Brief, 54, 110271. whisper Radford, Alec, Kim, Jong Wook, Xu, Tao, Brockman, Greg, McLeavey, Christine, Sutskever, Ilya (2023) “Robust speech recognition via large-scale weak supervision”, in Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA. arzen-threeway Hamed, I., Habash, N., Abdennadher, S., Vu, N. T. (2022) “ArzEn-ST: A Three-way Speech Translation Corpus for Code-Switched Egyptian Arabic-English”, in Bouamor, H., Al-Khalifa, H., Darwish, K., Rambow, O., Bougares, F., Abdelali, A., Tomeh, N., Khalifa, S., Zaghouani, W. (eds) Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP), Abu Dhabi, United Arab Emirates (Hybrid). bleu Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu (2002) “Bleu: a method for automatic evaluation of machine translation”, in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318. bert-score Zhang, T., Kishore, V., Wu, F., Weinberger, K.Q. and Artzi, Y., 2019. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675. meteor Banerjee, S. 
and Lavie, A., 2005, June. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization (pp. 65-72). qlora Dettmers, T., Pagnoni, A., Holtzman, A. and Zettlemoyer, L., 2024. Qlora: Efficient finetuning of quantized llms. Advances in Neural Information Processing Systems, 36. dora Liu, S.Y., Wang, C.Y., Yin, H., Molchanov, P., Wang, Y.C.F., Cheng, K.T. and Chen, M.H., 2024. DoRA: Weight-Decomposed Low-Rank Adaptation. arXiv preprint arXiv:2402.09353. adamw Loshchilov, I. and Hutter, F., 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. gradient-checkpoint Chen, T., Xu, B., Zhang, C. and Guestrin, C., 2016. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174. gradient-accumulation Lamy-Poirier, J., 2021. Layered gradient accumulation and modular pipeline parallelism: fast and efficient training of large language models. arXiv preprint arXiv:2106.02679. cascaded-mt Denisov, P., Mager, M. and Vu, N. T. (2021) 'IMS's systems for the IWSLT 2021 low-resource speech translation task', Proceedings of the International Conference on Spoken Language Translation. CSW-survey Sitaram, S., Chandu, K.R., Rallabandi, S.K. and Black, A.W., 2019. A survey of code-switched speech and language processing. arXiv preprint arXiv:1904.00784. CSW-Egypt Hafez, R. (2015) Factors affecting code switching between Arabic and English. Master's thesis, The American University in Cairo. Available at: https://fount.aucegypt.edu/etds/148 bpe Shibata, Y., Kida, T., Fukamachi, S., Takeda, M., Shinohara, A., Shinohara, T. and Arikawa, S., 1999. Byte pair encoding: A text compression scheme that accelerates pattern matching. CBMT Li, L., 2004. Corpus-based machine translation. Shanghai Journal of Translators for Science and Technology, 19(2), pp.59-62. DBMT Al-Taani, A.T. and Hailat, Z.M., 2005. A direct English-Arabic machine translation system. Information Technology Journal, 4(3), pp.256-261. RBMT Farhat, A. and Al-Taani, A., 2015. A rule-based English to Arabic machine translation approach. In international Arab conference on information technology (ACIT’2015). KBMT Carbonell, J.G., Cullingford, R.E. and Gershman, A.V., 1981. Steps toward knowledge-based machine translation. IEEE Transactions on Pattern Analysis and Machine Intelligence, (4), pp.376-392. hybrid-approach Oladosu, J., Esan, A., Adeyanju, I., Adegoke, B., Olaniyan, O. and Omodunbi, B., 2016. Approaches to machine translation: a review. FUOYE Journal of Engineering and Technology, 1(1). llm-intro Abiola, O.B., Adetunmbi, A.O. and Oguntimilehin, A., 2015. Using hybrid approach for English-to-Yoruba text to text machine translation system (proposed)”. International Journal of Computer Science and Mobile Computing, 4(8), pp.308-313. llama2 Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S. and Bikel, D., 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. chatgpt OpenAI, 2022. ChatGPT. Available at: https://openai.com/blog/chatgpt. CSW-worldwide Jacobson, R. ed., 2001.Codeswitching worldwide II. Mouton de Gruyter. ASR-1 Chan, J.Y., Ching, P.C., Lee, T. and Meng, H.M., 2004, December. Detection of language boundary in code-switching utterances by bi-phone probabilities. 
In 2004 International Symposium on Chinese Spoken Language Processing (pp. 293-296). IEEE. ASR-2 Weiner, J., Vu, N.T., Telaar, D., Metze, F., Schultz, T., Lyu, D.C., Chng, E.S. and Li, H., 2012. Integration of language identification into a recognition system for spoken conversations containing code-switches. In Spoken Language Technologies for Under-Resourced Languages. ASR-3 Lyu, D.C., Lyu, R.Y., Chiang, Y.C. and Hsu, C.N., 2006, May. Speech recognition on code-switching among the Chinese dialects. In 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings (Vol. 1, pp. I-I). IEEE. NLLB Costa-jussà, M.R., Cross, J., Çelebi, O., Elbayad, M., Heafield, K., Heffernan, K., Kalbassi, E., Lam, J., Licht, D., Maillard, J. and Sun, A., 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672. seamlessm4t Barrault, L., Chung, Y.A., Meglioli, M.C., Dale, D., Dong, N., Duppenthaler, M., Duquenne, P.A., Ellis, B., Elsahar, H., Haaheim, J. and Hoffman, J., 2023. Seamless: Multilingual Expressive and Streaming Speech Translation. arXiv preprint arXiv:2312.05187. llm-grader Xiao, C., Ma, W., Xu, S.X., Zhang, K., Wang, Y. and Fu, Q., 2024. From Automation to Augmentation: Large Language Models Elevating Essay Scoring Landscape. arXiv preprint arXiv:2401.06431.
http://arxiv.org/abs/2406.17748v1
20240625173451
A New Perspective on Shampoo's Preconditioner
[ "Depen Morwani", "Itai Shapira", "Nikhil Vyas", "Eran Malach", "Sham Kakade", "Lucas Janson" ]
cs.LG
[ "cs.LG", "math.OC", "stat.ML" ]
The Size-Mass relation at Rest-Frame 1.5μm from JWST/NIRCam in the COSMOS-WEB and PRIMER-COSMOS fields Depen Morwani[1] SEAS Harvard University Itai Shapira[1] SEAS Harvard University Nikhil VyasEqual contribution. Randomized Author Ordering. SEAS Harvard University Eran Malach Kempner Institute Harvard University Sham Kakade Kempner Institute Harvard University Lucas Janson Department of Statistics Harvard University July 1, 2024 ============================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Shampoo, a second-order optimization algorithm which uses a Kronecker product preconditioner, has recently garnered increasing attention from the machine learning community. The preconditioner used by Shampoo can be viewed either as an approximation of the Gauss–Newton component of the Hessian or the covariance matrix of the gradients maintained by Adagrad. We provide an explicit and novel connection between the optimal Kronecker product approximation of these matrices and the approximation made by Shampoo. Our connection highlights a subtle but common misconception about Shampoo's approximation. In particular, the square of the approximation used by the Shampoo optimizer is equivalent to a single step of the power iteration algorithm for computing the aforementioned optimal Kronecker product approximation. Across a variety of datasets and architectures we empirically demonstrate that this is close to the optimal Kronecker product approximation. Additionally, for the Hessian approximation viewpoint, we empirically study the impact of various practical tricks to make Shampoo more computationally efficient (such as using the batch gradient and the empirical Fisher) on the quality of Hessian approximation. theoremTheorem *theorem*Theorem lemma[theorem]Lemma *lemma*Lemma *remark*Remark § INTRODUCTION Second-order optimization is a rich research area within deep learning that has seen multiple influential works over the past few decades. Recently, these methods have seen success in practical large scale training runs such as Gemini 1.5 Flash <cit.> and in academic benchmarks <cit.>. One of the primary challenges in this field arises from the substantial memory and computational demands of traditional second-order methods, such as Adagrad <cit.> and Newton’s method. In the context of neural networks, both of these methods require storing and inverting a |P| × |P| dimensional matrix H (either covariance of the gradients for Adagrad or the Gauss–Newton component of the Hessian for Newton's method), where |P| represents the number of parameters of the neural network. With modern deep learning architecture scaling to billions of parameters, these requirements make the direct application of these methods impractical. To address this issue, various approaches have been proposed, including Hessian-free optimization <cit.> and efficient approximations of the matrix H <cit.>. These methods aim to leverage second-order information while mitigating the computational and memory overhead. The class of methods for efficiently approximating the matrix H predominantly involve either a diagonal or a layer-wise Kronecker product approximation of H. 
These choices are motivated by the fact that, compared to maintaining the matrix H, both diagonal and layer-wise Kronecker products are significantly more memory-efficient to store and computationally efficient to invert. Two of the most well-known methods that utilize a layer-wise Kronecker product approximation of H are K-FAC <cit.> and Shampoo <cit.>. In this work, we primarily focus on the Shampoo optimizer <cit.>, which has recently gained increasing attention from the research community. Notably, in a recent benchmark of optimization algorithms proposed for practical neural network training workloads <cit.>, Shampoo appears to outperform all other existing methods. Another recent study, elucidating the Google Ads recommendation search pipeline, revealed that the Google Ads CTR model is trained using the Shampoo optimizer <cit.>. Additionally, a recent work <cit.> implemented a distributed data parallel version of Shampoo, demonstrating its superior speed in training ImageNet compared to other methods. Previously, Shampoo's approximation was shown to be an upper bound (in spectral norm) on the matrix H <cit.>. In this work, we make this connection much more precise. Prior research has established the notion of the optimal Kronecker product approximation (in Frobenius norm) of H <cit.>, which can be obtained numerically using a power iteration scheme. The primary contribution of this work is to theoretically and empirically demonstrate that the square of the approximation used by Shampoo is nearly equivalent to the optimal Kronecker factored approximation of H. The main contributions of the work are summarized below: * We theoretically show (Proposition <ref>) that the square of the Shampoo's approximation of H is precisely equal to one round of the power iteration scheme for obtaining the optimal Kronecker factored approximation of the matrix H. Informally, for any covariance matrix H = [gg^T] where g ∈ℝ^mn [Gauss–Newton component of the Hessian can also be expressed as a covariance matrix. For details, refer Section <ref>], we argue that the right Kronecker product approximation of H is 𝔼[ G G^⊤ ] ⊗𝔼[ G^⊤ G ] while Shampoo proposes 𝔼[ G G^⊤ ]^1/2⊗𝔼[ G^⊤ G ]^1/2, with G ∈ℝ^m × n representing a reshaped g into a matrix of size m × n. * We empirically establish that the result of one round of power iteration is very close to the optimal Kronecker factored approximation (see Figure <ref>), and provide theoretical justification for the same. * For the Hessian based viewpoint of Shampoo (Section <ref>), we empirically demonstrate the impact on the Hessian approximation of various practical tricks implemented to make Shampoo more computationally efficient such as averaging gradients over batch (Section <ref>) and using empirical Fisher instead of the actual Fisher (Section <ref>). Remark. Previous works <cit.> have explored the question of why Adagrad-based approaches like Adam and Shampoo have an extra square root compared to the Hessian inverse in their update. This alternative question is orthogonal to our contribution. For details, refer Appendix <ref>. Paper organization. In Section <ref>, we cover the technical background necessary for understanding this work. In Section <ref>, we provide a general power iteration scheme for obtaining the optimal Kronecker product approximation of the matrix H, and establish the the connection between Shampoo's approximation and the optimal Kronecker product approximation of H. 
In Section <ref>, we explore the Hessian approximation viewpoint of Shampoo and empirically study how various practical tricks to make Shampoo more computationally efficient impact the quality of the Hessian approximation. In Section <ref>, we cover closely related works and conclude with discussing the limitations of the work in Section <ref>. In Appendix <ref>, we include additional experiments on the ViT architecture and compare with the K-FAC approximation to the Hessian. Detailed related work, proofs, dataset and architecture details have been deferred to the Appendix. § TECHNICAL BACKGROUND We use lowercase letters to denote scalars and vectors, and uppercase letters to denote matrices. For a symmetric matrix A, A ≽ 0 (resp. A ≻ 0) denotes that A is positive semi-definite (resp. positive definite). Similarly, for symmetric matrices A and B, A ≽ B (resp. A ≻ B) denotes A - B ≽ 0 (resp. A - B ≻ 0). We will use M[i, j] refer to the 0-indexed (i, j) entry of the matrix M. The Kronecker product of two matrices A ∈ℝ^p × q and B ∈ℝ^r × s is denoted by A ⊗ B ∈ℝ^pr × qs. It is defined such that (A⊗ B)[ri+i',sj+j']=A[i, j]B[i', j'] where 0 ≤ i < p, 0 ≤ j < q, 0 ≤ i' < r, 0 ≤ j' < s. Vectorization of a matrix A ∈ℝ^m × n, denoted by A, is a mn-dimensional column vector obtained by stacking the columns of A on top of one another. We will usually denote A by a. Following is a basic lemma about Kronecker products that will be used later (A ⊗ B) G = BGA^⊤. §.§ Shampoo The original Shampoo <cit.> paper introduced its algorithm as an approximation of an online learning algorithm Adagrad <cit.>. Shampoo can also be interpreted <cit.> as approximating the Gauss–Newton component of the Hessian. Both of these perspectives will be discussed in Section <ref> and  <ref> respectively. . §.§.§ Adagrad based perspective of Shampoo Adagrad: This is a preconditioned online learning algorithm, that uses the accumulated covariance of the gradients as a preconditioner. Let θ_t ∈ℝ^p denote the parameters at time t and let g_t ∈ℝ^p denote the gradient. It maintains a preconditioner H_Ada = ∑_t=1^T g_tg_t^⊤. The update for the parameter for learning rate η are given by θ_T+1 = θ_T - η H_Ada^-1/2g_T. Shampoo is a preconditioned gradient method which maintains a layer-wise Kronecker product approximation to full-matrix Adagrad. Let the gradient for a weight matrix[We will focus on weights structured as matrices throughout this paper.] W_t ∈ℝ^m × n at time t be given by G_t ∈ℝ^m × n. The lemma below is used to obtain the Shampoo algorithm from Adagrad: Assume that G_1,...,G_T are matrices of rank at most r. Let g_t = G_t for all t. Then, with ≼ representing the for any ϵ > 0, ϵ I_mn + 1/r∑_t=1^T g_t g_t^⊤≼(ϵ I_m + ∑_t=1^T G_t G_t^⊤)^1/2⊗(ϵ I_n + ∑_t=1^T G_t^⊤ G_t)^1/2. Based on the above lemma, Shampoo maintains two preconditioners L_t ∈ℝ^m × m and R_t ∈ℝ^n × n, which are initialized to ϵ I_m and ϵ I_n respectively. . The update for the preconditioners and the Shampoo update for a learning rate η is given by L_T = L_T-1 + G_TG_T^⊤; R_T = R_T-1 + G_T^⊤ G_T; W_T+1 = W_T - η L_T^-1/4 G_T R_T^-1/4. In Lemma <ref> the matrix H_Ada = ∑_t=1^T g_tg_t^⊤ is approximated (ignoring ϵ and scalar factors) by the the Kronecker product (∑_t=1^T G_t G_t^⊤)^1/2⊗(∑_t=1^T G_t^⊤ G_t)^1/2. Our main focus will be to study the optimal Kronecker product approximation of the matrix H_Ada and its connection to Shampoo's approximation (done in Section <ref>). 
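A minimal numpy sketch of this update is given below: the two preconditioners are accumulated exactly as in the equations above, and the inverse fourth roots are taken via an eigendecomposition. It uses arbitrary synthetic gradients and omits the refinements of practical implementations (for example, the exponential moving averages discussed in the next subsection and infrequent preconditioner inversion).

import numpy as np

def sym_power(M, p, floor=1e-12):
    """M^p for a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, floor) ** p) @ V.T

rng = np.random.default_rng(0)
m, n, eta, eps = 8, 5, 0.1, 1e-4
W = rng.standard_normal((m, n))
L, R = eps * np.eye(m), eps * np.eye(n)          # epsilon * I initialization, as above

for _ in range(100):
    G = rng.standard_normal((m, n))              # stand-in for the gradient G_t
    L += G @ G.T
    R += G.T @ G
    W -= eta * sym_power(L, -0.25) @ G @ sym_power(R, -0.25)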
§.§.§ Hessian based perspective of Shampoo In this section we describe the Hessian approximation viewpoint of Shampoo explored by previous works <cit.> as an alternative to the Adagrad viewpoint described above. Our theoretical and empirical results hold for both viewpoints. Gauss–Newton (GN) component of the Hessian. For a datapoint (x,y), let f(x) denote the output of a neural network and ℒ(f(x),y) represent the training loss. Let W ∈ℝ^m × n represent a weight matrix in the neural network and 𝒟 denote the training distribution. Then, for CE loss, the Gauss-Newton component of the Hessian of the loss with respect to W is given by (see Appendix <ref> for details) H_GN = _(x,y) ∼𝒟[∂ f/∂ W∂^2 ℒ/∂ f^2∂ f/∂ W^⊤] = _x ∼𝒟_x s ∼ f(x)[g_x,s g_x,s^⊤] , where, for brevity, f(x) denotes the output distribution of the neural network and 𝒟_x represents the training distribution of x <cit.>. The right-hand side of the equation is also referred to in the literature as the Fisher matrix, and its counterpart for real labels, _(x,y) ∼𝒟[g_x,y g_x,y^⊤], is referred to as the empirical Fisher. For brevity, going forward, we will assume that x is drawn from 𝒟_x and represent the Fisher matrix as _x, s ∼ f(x)[g_x,s g_x,s^⊤]. Similarly, when both x and y are used, we will assume they are drawn from 𝒟. The aim of algorithms such as K-FAC and Shampoo (when viewed from the Hessian perspective) is to do a layerwise Kronecker product approximation of the Fisher matrix H_GN. The following lemma establishes the approximation made by Shampoo: Assume that G_x,s are matrices of rank at most r. Let g_x,s = G_x,s . Then, for any ϵ > 0, _x, s ∼ f(x)[g_x,s g_x,s^⊤] ≼ r(_x, s ∼ f(x)[G_x,s G_x,s^⊤] )^1/2⊗(_x, s ∼ f(x)[G_x,s^⊤ G_x,s] )^1/2. In Lemma <ref> the matrix on the left hand side is equal to H_GN and the right hand side represents the H_GN approximation made by Shampoo. However, computing this approximation at every step is expensive. So, in practice, Shampoo makes two additional approximations on top. First, it replaces the per-input gradient by batch gradient, i.e, replaces _x, s ∼ f(x) [G_x,s G_x,s^⊤] by _B,s [G_B,s G_B,s^⊤], where B denotes the batch, s is the concatenation of s ∼ f(x) for all (x,y) ∈ B and G_B, s = 1/|B|∑_(x,y) ∈ B, s=𝐬 [x] G_x,s is the sampled batch gradient, with 𝐬 [x] representing the sampled label corresponding to x ∈ B. Second, it replaces sampled labels with real labels, i.e., it replaces _B,s [G_B,s G_B,s^⊤] with _B [G_B G_B^⊤], where G_B = 1/|B|∑_(x,y) ∈ B G_x,y is the batch gradient. Thus, if G_j and W_j represent the batch gradient and weight matrix at iteration j, and λ is an exponential weighting parameter, then the update of Shampoo is given by L_j = λ L_j-1 + (1 - λ) G_j G_j^⊤; R_j = λ R_j-1 + (1 - λ) G_j^⊤ G_j; W_j+1 = W_j - η L_j^-1/4 G_j R_j^-1/4, where L_j and R_j represent the left and right preconditioners maintained by Shampoo, respectively. Our focus (when viewing Shampoo from the Hessian perspective) will be to study * The optimal Kronecker product approximation of the matrix H_GN and its connection to Shampoo's approximation (done in Section <ref>). * The effect of the aforementioned two approximations on the approximation quality (done in Section <ref>). §.§ Optimal Kronecker product approximation For Frobenius norm (or other “entry-wise” matrix norms), finding the optimal Kronecker product approximation of a matrix H ∈ℝ^mn × mn is equivalent to finding the optimal rank-one approximation of a rearrangement of H. 
We define the rearrangement operator , applied to a matrix H such that, H[mi+i', nj+j'] = H[mj+i, mj'+i'], where {i,i'}∈ [0,1,...,m-1], {j,j'}∈ [0,1,...,n-1] and H∈ℝ^m^2 × n^2. A property of that will be useful to us is: H = A ⊗ B H = ab^⊤, where A ∈ℝ^m × m, a = A∈ℝ^m^2, B ∈ℝ^n × n and b = B∈ℝ^n^2. This property can be used to prove the following result on optimal Kronecker product approximation: Let H ∈ℝ^mn × mn be a matrix and let L ∈ℝ^m × n, R ∈ℝ^n × m. Then, the equivalence of the Kronecker product approximation of H and the rank-one approximation of H is given by: H - L ⊗ R _F = H - LR^⊤_F, where ·_F denotes the Frobenius norm. Since the optimal rank-1 approximation of a matrix is given by its singular value decomposition (SVD), we conclude: Let H ∈ℝ^mn × mn. If the top singular vectors and singular value of H are represented by u_1, v_1 and σ_1, respectively, then the matrices L ∈ℝ^m × m and R ∈ℝ^n × n defined by vec(L) = σ_1 u_1, vec(R) = v_1, minimize the Frobenius norm H - L ⊗ R_F. Obtaining SVD by power iteration. Power iteration <cit.> is a well-known method for estimating the top eigenvalue of a matrix M. It can also be specialized for obtaining the top singular vectors of a matrix. The corresponding iterations for the left singular vector ℓ and the right singular vector r are given by ℓ_k ← Mr_k-1 ; r_k ← M^⊤ℓ_k-1, where k denotes the iteration number. Cosine similarity. We will be using cosine similarity between matrices as a metric for approximation. For two matrices M_1 and M_2, this refers to Tr(M_1M_2^⊤) / (||M_1||_F · || M_2 ||_F). A value of 1 indicates perfect alignment, while a value of 0 indicates orthogonality. § OPTIMAL KRONECKER PRODUCT APPROXIMATION AND SHAMPOO In this section, we will specialize the theory of Section <ref> for finding the optimal Kronecker product approximation of a covariance matrix H = _g ∼𝒟_g[gg^⊤] for g ∈ℝ^mn. Both perspectives of Shampoo described in Section <ref> are concerned with Kronecker product approximations of H of the form L ⊗ R where L ∈ℝ^m × m, R ∈ℝ^n × n, but for different distributions 𝒟_g. For the Adagrad viewpoint, with 𝒟_g as the uniform distribution over g_t where 1 ≤ t ≤ T refers to the gradient at timestep t, H = H_Ada. For the Hessian viewpoint, with 𝒟_g as the distribution over gradients with batch size 1 and with sampled labels, H = H_GN (see Section <ref> for derivation). Since our results will hold for all distributions 𝒟_g, we will use [gg^⊤] to refer to _g ∼𝒟_g[gg^⊤] to simplify notation. The main goal of this section will be to study the optimal Kronecker product approximation to such a generic matrix H, see its connection to Shampoo, and experimentally validate our results for H = H_Ada and H = H_GN, which are described in Section <ref> and <ref>, respectively. <cit.> describe an approach to find the optimal Kronecker product approximation of a matrix (with respect to the Frobenius norm). <cit.> use this approach to find the optimal layer-wise Kronecker product approximation of the hessian matrix for networks without weight sharing. We will now do a general analysis which would also be applicable to neural networks with weight sharing. Since g ∈ℝ^mn, each entry of g can be described as a tuple (i,j) ∈ [m] × [n]. Consequently, every entry of H can be represented by the tuple ((i, j), (i', j')). We now consider the matrix ĤH∈ℝ^m^2 × n^2, which is a rearrangement (see Section <ref>) of the entries of H. By using equation <ref> we get that: Ĥ = 𝔼 [ G ⊗ G ]. 
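Before deriving the power-iteration updates from this identity, the rearrangement itself is easy to exercise numerically. The sketch below uses a rearrangement written for row-major flattening (an indexing convenience for the sketch; the definition above uses column-stacking vec), verifies that a Kronecker product becomes rank one, and recovers the optimal Kronecker factors of a near-Kronecker H from the top singular pair, as in the corollary above.

import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3

def rearrange(H, m, n):
    """R(H)[m*a + c, n*b + d] = H[n*a + b, n*c + d]; with this indexing,
    R(L kron R) = vec(L) vec(R)^T for row-major vec (used throughout this sketch)."""
    return H.reshape(m, n, m, n).transpose(0, 2, 1, 3).reshape(m * m, n * n)

L0 = rng.standard_normal((m, m))
R0 = rng.standard_normal((n, n))
print(np.allclose(rearrange(np.kron(L0, R0), m, n),
                  np.outer(L0.flatten(), R0.flatten())))        # True: Kronecker products become rank one

# optimal (Frobenius) Kronecker approximation of a symmetric H via the top singular pair
M = rng.standard_normal((m * n, m * n))
H = np.kron(L0 @ L0.T, R0 @ R0.T) + 0.1 * (M + M.T)             # near-Kronecker, but not exactly
U, S, Vt = np.linalg.svd(rearrange(H, m, n))
L_opt = (S[0] * U[:, 0]).reshape(m, m)
R_opt = Vt[0].reshape(n, n)

cos = lambda A, B: np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B))
print(cos(np.kron(L_opt, R_opt), H))                            # close to 1 for this near-Kronecker H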
Further, by Lemma <ref>, we have that if L ⊗ R is the optimal Kronecker product approximation of H, then ℓ r^⊤ is the optimal rank-1 approximation of Ĥ, where ℓ = L and r = R. Hence, the problem reduces to finding the optimal rank-1 approximation of Ĥ. Applying the power iteration scheme described in Equation <ref> for estimating the top singular vectors of Ĥ and using Lemma <ref> yields (where k denotes the k^th step of power iteration): ℓ_k ←Ĥ r_k-1 = 𝔼 [ G ⊗ G ] r_k-1 = 𝔼[ G R_k-1 G^⊤ ], r_k ←Ĥ^⊤ℓ_k-1 = 𝔼 [ G ⊗ G ]^⊤ℓ_k-1 = 𝔼[ G^⊤ L_k-1 G ]. Reshaping vectors on both sides into matrices results in: L_k ←𝔼[ G R_k-1 G^⊤ ]; R_k ←𝔼[ G^⊤ L_k-1 G ]. §.§ One round of power iteration Our first and main approximation involves replacing the iterative power iteration scheme (Equation <ref>) with just a single iteration. This leads to the main contribution of our work: One step of power iteration, starting from the identity, for obtaining the optimal Kronecker product approximation of H is precisely equal to the square of the Shampoo's approximation of H The initialization for the single iteration will use the identity matrix, i.e., I_m and I_n for L and R, respectively. Thus, we transition from the iterative update equations: L_k ←𝔼[ G R_k-1 G^⊤ ]; R_k ←𝔼[ G^⊤ L_k-1 G ], to the simplified single-step expressions: L ←𝔼[ G G^⊤ ]; R ←𝔼[ G^⊤ G ]. With the above expression for L and R, L ⊗ R is precisely equal to the square of the Shampoo's approximation of H given by the right hand side of Equation <ref>. As shown in Figure <ref>, for various datasets and architectures, this single step of power iteration is very close to the optimal Kronecker product approximation for both H = H_GN (top) and H = H_Ada (bottom). However, we can see that the upper bound proposed by the original Shampoo work <cit.> is significantly worse. §.§.§ Why initialize with the identity matrix? Suppose the SVD of Ĥ is given by Ĥ = ∑_i σ_i u_iv_i^T , or equivalently, H = ∑_i σ_i U_i ⊗ V_i. The convergence of the power iteration in one step depends on the inner product of the initialization vector with the top singular vector. Let us focus on the left side,[The discussion for the other side is analogous.] i.e., the update L ←𝔼[ G G^⊤ ] which as described earlier is equivalent to starting with the initialization I_n. Let I_n = ∑_i α_i v_i i.e. I_n = ∑_i α_i V_i. After one iteration, we obtain ℓ := ∑_i α_i σ_i u_i, and correspondingly, L := ∑_i α_i σ_i U_i. We are interested in assessing how closely ℓ approximates the leading eigenvector u_1. The cosine similarity between ℓ and u_1 is given by α_1 σ_1/√(∑_i α_i^2 σ_i^2). One reason why the cosine similarity might be large is that Ĥ is nearly rank-1 (σ_1 is large); that is, H is closely approximated by a Kronecker product. As illustrated in Figure <ref>, this assumption does not universally hold. Instead, we propose an alternative explanation for why a single step of power iteration is typically sufficient: the coefficient α_1 is usually larger than α_i for all i ≥ 2. We begin by providing a theoretical justification for this, followed by empirical evidence from our experiments. We start by noting that α_i = I_n^Tv_i = Tr(V_i). Now, we will show that using the identity matrix as initialization is a good choice since a) shows it has the maximal dot product with possible top components i.e., PSD matrices (Proposition <ref>), and b) we expect it to have a small dot product with later components. [ <cit.>]lemmalempsd V_1 is a Positive Semi-Definite (PSD) matrix. 
Since V_1 is a PSD matrix we would like to initialize our power iteration with a matrix which is close to all PSD matrices. Now, we will show that identity is the matrix which achieves this, specifically it maximizes the minimum dot product across the set of PSD matrices of unit Frobenius norm. propositionlemiden Consider the set of PSD matrices of unit Frobenius norm of dimension m denoted by S_m. Then 1/√(m) I_m = _M ∈ S_mmin_M' ∈ S_m⟨M, M'⟩ . The previous proposition argues that I_m maximizes the worst-case dot product with possible top singular vectors. Now, we argue that its dot product with other singular vectors should be lower. lemmalemnopsd If V_1 is positive-definite, then V_i for i ≥ 2 are not PSD. Therefore, the diagonal elements of V_i for i ≥ 2 need not be positive, and this might lead to cancellations (for i ≥ 2) in the trace of V_i which is equal to α_i. Hence we expect α_i's for i ≥ 2 to be smaller than α_1. We now show experiments to demonstrate this in practice. To quantify the benefit of α_1 usually being larger than α_i for i ≥ 2, we will compare α_1 σ_1/√(∑_i α_i^2 σ_i^2) (for both left and right singular vectors) and σ_1/√(∑_i σ_i^2). The latter can be interpreted as the cosine similarity if all α's were equal or as a measure of how close Ĥ is to being rank 1 since it is equal to the cosine similarity between u_1v_1^T and Ĥ. Thus σ_1/√(∑_i σ_i^2) is equal to the “Optimal Kronecker” cosine similarity used in Figure <ref>. In Figure <ref> we track both of these quantities through training and indeed observe that α_1 σ_1/√(∑_i α_i^2 σ_i^2) are significantly closer to 1 than σ_1/√(∑_i σ_i^2) for both H = H_GN (top) and H = H_Ada (bottom). §.§.§ Exact Kronecker product structure in HGN The previous discussion shows that [G G^⊤] ⊗[G^⊤ G] is close to the optimal Kronecker product approximation of H. In this section we will show that this holds exactly if H is a Kronecker product. Intuitively, this holds since if H is a Kronecker product, then Ĥ is rank-1, and one round of power iteration would recover Ĥ. Until now, we have been focusing on the direction of top singular vectors of Ĥ, but with the assumption of Ĥ being rank 1, we can compute the explicit expression for Ĥ, and hence of H. corollarycorrrank Under the assumption that Ĥ is rank-1, H = ([G G^⊤] ⊗[G^⊤ G]) / ([G G^⊤]). Let Ĥ = σ uv^⊤, i.e, H = σ U ⊗ V. Let I_m = Tr(U) U + R_m and I_n = Tr(V) V + R_n, where R_m and R_n are the residual matrices. Now, after one round of power iteration, the left and right estimates provided by Shampoo are given by [G G^⊤] = σTr(V) U, [G^⊤ G] = σTr(U) V. From this, we can see that ([G G^⊤]) = σ(U) (V). Thus H = σ U ⊗ V = ([G G^⊤] ⊗[G^⊤ G])/ ([G G^⊤]). Since H = Ĥ_GN is an m^2 × 1 matrix for binomial logistic regression, it is rank-1, so the equality in the corollary holds. In other words, the square of Shampoo's H_GN estimate perfectly correlates with H_GN for binomial logistic regression. This is demonstrated in the first plot of Figure <ref>. We note that ([G G^⊤] ⊗[G^⊤ G]) / ([G G^⊤]) as an estimate of H was also derived by <cit.>. But their assumptions were much stronger than ours, specifically they assume that the gradients follow a tensor-normal distribution, which implies that Ĥ is rank 1. Instead, we only make a second moment assumption on the gradients: H = [ gg^⊤] is an exact Kronecker product. We also note that our derivation of the direction [G G^⊤] ⊗[G^⊤ G] being close to the optimal Kronecker product approximation holds independently of Ĥ being rank 1. 
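The corollary can also be checked exactly, without sampling, by noting that 𝔼[GG^⊤] and 𝔼[G^⊤ G] are partial traces of H = 𝔼[gg^⊤]: when H is an exact Kronecker product, dividing their Kronecker product by the trace of 𝔼[GG^⊤] recovers H. A small numpy sketch (row-major index convention, arbitrary PSD factors):

import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3

def random_psd(k):
    M = rng.standard_normal((k, k))
    return M @ M.T

L0, R0 = random_psd(m), random_psd(n)
H = np.kron(L0, R0)                       # H is an exact Kronecker product, so its rearrangement is rank one

H4 = H.reshape(m, n, m, n)                # index (a, b, c, d) corresponds to H[n*a + b, n*c + d]
EGGt = np.einsum('abcb->ac', H4)          # partial trace = E[G G^T]
EGtG = np.einsum('abad->bd', H4)          # partial trace = E[G^T G]

reconstructed = np.kron(EGGt, EGtG) / np.trace(EGGt)
print(np.allclose(reconstructed, H))      # True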
§.§.§ Discussion about optimization Let us refer to [ GG^⊤ ] ⊗[ G^⊤ G] by H_1. As mentioned in Equation <ref>, the original Shampoo paper used the approximation H used was H_1/2[ GG^⊤ ]^1/2⊗[ G^⊤ G]^1/2. In practice, when using Shampoo as an optimization algorithm, the gradient step is taken in the direction of H_1/2^-p∇ L where p is tuned as a hyperparameter <cit.>. Since H_1/2^-p = H_1^-p/2, searching over p in H_1/2^-p yields the same search space as H_1^-p. Therefore, the difference between H_1 and H_1/2 does not manifest practically in optimization speed, but it yields a significant difference in our understanding of how Shampoo works. § HESSIAN APPROXIMATION OF SHAMPOO From the Hessian approximation viewpoint, the previous section covers the case of using batch size 1 and sampled labels, as described in Section <ref>. To be precise, in Figure <ref> top, we consider how well H_GN is correlated with E_x, s[G_x,sG_x,s^T] ⊗ E_x, s[G_x,s^TG_x,s], where s represents that the labels are sampled from the model's output distribution. On the other hand, as discussed in Section <ref>, Shampoo in practice is generally used with arbitrary batch sizes and real labels. We now investigate the effect of these two factors on the Hessian approximation. §.§ Averaging gradients across the batch The next approximation towards Shampoo is to average the gradient across the batch, i.e., we go from L ←_x, s ∼ f(x)[ G_x, s G_x, s^⊤ ] ; R ←_x, s ∼ f(x)[ G_x, s^⊤ G_x, s ] to L ← |B| _B,s [G_B,s G_B,s^⊤] ; R ← |B| _B,s [G_B,s^⊤ G_B,s], where B denotes the batch, s is the concatenation of s∼ f(x) for all x ∈ B and G_B, s = 1/|B|∑_x ∈ B, s=𝐬 [x] G_x,s is the batch gradient, with 𝐬 [x] representing the sampled label corresponding to x ∈ B. As previous works have shown, this change does not have any effect in expectation due to G_x, s being mean zero for all x when we take expectation over s ∼ f(x) <cit.> i.e. _s[G_x, s] = 0. [Implicitly in <cit.>]lemmalemavggrad |B| _B,s [G_B,s G_B,s^⊤] = _x, s ∼ f(x)[ G_x, s G_x, s^⊤ ]. However, this does lead to a significant improvement in computational complexity by saving up to a factor of batch size. §.§ Using real labels instead of sampled labels As our final approximation we replace using sampled labels s ∼ f(x) to using real labels y. This approximation, denoted in the literature by empirical Fisher when batch size is 1, has been discussed at length by prior works <cit.>. The main theoretical argument for why this approximation may work well is that, as we move towards optima, the two quantities converge in the presence of label noise <cit.>. In Figure <ref> (top), when evaluating H_GN approximation with batch size 1, we surprisingly find that the approximation quality is good throughout the training. However, unlike the case of sampled labels, the approximation starts to degrade at large batch sizes because the gradients with real labels are not mean 0. The lemma below <cit.> shows how this estimator changes with batch size. [<cit.>]lemmalemrealbatch Let B denote the batch and G_B = 1/|B|∑_(x,y) ∈ B G_x,y denote the batch gradient. Then _B [G_B G_B^⊤] = 1/|B|_x,y[ G_x, y G_x, y^⊤ ] + (1 - 1/|B|) _x,y[ G_x, y] _x, y[ G_x, y]^⊤. The above lemma shows that, depending on the batch size, the estimator interpolates between _x,y[ G_x, y G_x, y^⊤ ] (Empirical Fisher) and _x,y[ G_x, y] _x, y[ G_x, y]^⊤. As shown in Figure <ref> (top), at batch size 1, when _B [G_B G_B^⊤ ] is equal to _x,y[ G_x, y G_x, y^⊤ ], it closely tracks the optimal Kronecker product approximation. 
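The interpolation lemma just stated can be verified exactly for a toy discrete gradient distribution by enumerating all ordered batches (i.i.d. sampling with replacement is assumed here); a small sketch:

import numpy as np
from itertools import product

rng = np.random.default_rng(0)
K, m, n, bsz = 3, 2, 3, 3                    # support size, gradient shape, batch size
Gsupp = rng.standard_normal((K, m, n))       # K equally likely per-example gradients (non-zero mean)

EG = Gsupp.mean(axis=0)
EGGt = np.einsum('kij,klj->il', Gsupp, Gsupp) / K

# exact E_B[G_B G_B^T] over all ordered batches drawn i.i.d. with replacement
lhs = np.zeros((m, m))
for idx in product(range(K), repeat=bsz):
    GB = Gsupp[list(idx)].mean(axis=0)
    lhs += GB @ GB.T / K**bsz

rhs = EGGt / bsz + (1 - 1 / bsz) * EG @ EG.T
print(np.allclose(lhs, rhs))                 # True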
In other words, approximating the empirical Fisher is nearly sufficient in our experiments to recover the optimal Kronecker product approximation to H_GN. However, with increasing batch size (Figure <ref>, bottom row), the approximation quality degrades. We note that this approximation has the computational benefit of not requiring another backpropagation with sampled labels; instead, these computations can be done alongside usual training. § RELATED WORK We discuss the related works in detail in Appendix  <ref>. Here, we discuss two closely related works: <cit.> and <cit.>. <cit.> study the Hessian perspective of Shampoo and show that, under the assumption that sampled gradients follow a tensor-normal distribution, the square of the Hessian estimate of Shampoo is perfectly correlated with H_GN. We also show the same result under much weaker conditions in Corollary <ref>. Moreover, in Proposition <ref> we show that, in general, the square of the Hessian estimate of Shampoo is closely related to the optimal Kronecker product approximation of H_GN. We additionally also study the approximations used by Shampoo to make it computationally efficient (Section <ref>) and the Adagrad perspective of Shampoo's preconditioner. <cit.> develop the theory of optimal Kronecker product approximation of a matrix (in Frobenius norm). <cit.> use it for finding layer-wise optimal Kronecker product approximation of H_GN for a network without weight sharing. We extend their technique to networks with weight-sharing, and show that the square of the Hessian estimate of Shampoo is nearly equivalent to the optimal Kronecker product approximation of H_GN. § LIMITATIONS The main contribution of our work is to show that the square of the Shampoo's approximation of H (where H refers to either H_Ada or H_GN) is nearly equivalent to the optimal Kronecker approximation of H. Although we verify this empirically on various datasets and provide theoretical arguments, the gap between them depends on the problem structure. In some of our experiments with ViT architecture (Appendix <ref>), we find that the gap is relatively larger compared to other architectures. Moreover, it remains an open question to understand the conditions (beyond those described in K-FAC <cit.>) under which H is expected to be close to a Kronecker product. Again, in some of the experiments with ViTs (Appendix <ref>), we find that the optimal Kronecker product approximation to H is much worse as compared to other architectures. § ACKNOWLEDGEMENTS NV and DM are supported by a Simons Investigator Fellowship, NSF grant DMS-2134157, DARPA grant W911NF2010021, and DOE grant DE-SC0022199. This work has been made possible in part by a gift from the Chan Zuckerberg Initiative Foundation to establish the Kempner Institute for the Study of Natural and Artificial Intelligence. SK and DM acknowledge funding from the Office of Naval Research under award N00014-22-1-2377 and the National Science Foundation Grant under award #IIS 2229881. LJ acknowledges funding from the National Science Foundation DMS-2134157. iclr2024_conference § ADDITIONAL EXPERIMENTAL RESULTS §.§ ViT architecture In this subsection, we present the results for a Vision Transformer (ViT) architecture trained on the CIFAR-5m dataset. This architecture features a patch size of 4, a hidden dimension of 512, an MLP dimension of 512, 6 layers, and 8 attention heads. 
For these experiments, we utilize three layers from the fourth transformer block: two layers from the MLP (referred to as 'FFN Linear Layer 1' and 'FFN Linear Layer 2') and the QK layer[The QK layer is separated from the V part of the layer, following similar decomposition method described by <cit.>] (referred to as 'Q-K Projection Layer'). § EXPERIMENTS Datasets and Architectures. We conducted experiments on three datasets: MNIST <cit.>, CIFAR-5M <cit.>, and ImageNet <cit.>, using logistic regression, ResNet18 <cit.>, and ConvNeXt-T <cit.> architectures, respectively. For MNIST, we subsampled two digits ({ 0, 1}) and trained a binary classifier. For MNIST, we used the only layer, i.e, the first layer of the linear classifier for computing the cosine similarities. For Resnet18 and Imagenet, we picked arbitrary layers. In particular, for Resnet 18, we used one of the convolution layers within the first block ('layer1.1.conv1' in <https://pytorch.org/vision/master/_modules/torchvision/models/resnet.html#resnet18>). For Imagenet, we used the 1x1 convolutional layer within the 2nd block of convnext-T ('stages.2.1.pwconv1' in <https://pytorch.org/vision/main/models/generated/torchvision.models.convnext_tiny.html#torchvision.models.convnext_tiny>). Cosine similarity estimation for H_GN. For estimating the Frobenius norm of H_GN, we used the identity: _v ∼𝒩(0,I_d) [v^⊤ H_GN^2 v] = _v ∼𝒩(0,I_d) [ H_GN v _2^2] = H_GN_F^2 Hessian-vector products with the Gauss–Newton component were performed using the DeepNetHessian library provided by <cit.>. For estimating the cosine similarity between H_GN and its estimator H_GN, we used the following procedure: * Estimate H_GN_F, and calculate H_GN_F. * Define scaled H_GN as S_GN = H_GN_F/H_GN_FH_GN. * Cos-sim(H_GN, H_GN) = 1 - H_GN - S_GN_F^2/2 H_GN_F^2, where the numerator is again estimated via Hessian-vector products. Note that in the above procedure, we can exactly calculate H_GN_F as it is generally of a Kronecker product form with both terms of size m × m or n × n, where m × n is the size of a weight matrix. Cosine similarity estimation for H_Ada. We follow a similar recipe as before, but using a difference method for computing the product H_Adav. For a given time T, H_Ada = ∑_t=1^T g_t g_t^⊤. Thus, H_Ada v = ∑_t=1^T (g_t^⊤ v) g_t. We maintain this by keeping a running estimate of the quantity for multiple random vectors v during a training run, and use it for estimating the product H_Ada v. §.§ Figure details Optimal Kronecker method, wherever used was computed with five rounds of power iteration, starting from the identity. For H = H_GN, the Hessian approximations Shampoo^2, Shampoo, and K-FAC were done using sampled labels and a batch size of 1. For H = H_Ada and step t, we used gradient enocoutered during the training run in steps ≤ t. K-FAC was computed with the “reduce” variant from  <cit.>. In Figure <ref>, the Optimal Kronecker legend represents the cosine similarity between the optimal Kronecker approximation of H_GN and H_GN. This is precisely equal to σ_1/√(∑_i σ_i^2). Similarly, the label L (resp. R) represents the cosine similarity between the top left (resp. right) singular vector of Ĥ_GN and the estimate obtained after one round of power iteration starting from I_n (resp. I_m). This is precisely equal to α_1 σ_1/√(∑_i α_i^2 σ_i^2). In Figure <ref> (top), the Hessian approximation is calculated with batch size 1, i.e, |B|=1 in Section <ref>. Similarly, in Figure <ref> (bottom), |B| = 256. 
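The cosine-similarity procedure above reduces to a few lines of code. In the sketch below, the explicit matrix H and the diagonal estimator are placeholders standing in for the true Gauss–Newton Hessian-vector product and a Kronecker-structured estimate; only the three-step recipe (probe the Frobenius norm, rescale the estimator, convert the probed distance into a cosine) is meant to mirror the text.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 50
M = rng.standard_normal((d, d)); H = M @ M.T        # stand-in for H_GN (PSD)

def hvp(v):
    """Placeholder for a Gauss-Newton Hessian-vector product."""
    return H @ v

def frob_norm_sq(matvec, dim, n_probes=2000):
    # ||A||_F^2 = E_{v ~ N(0, I)} ||A v||_2^2, estimated with Gaussian probes
    total = 0.0
    for _ in range(n_probes):
        av = matvec(rng.standard_normal(dim))
        total += av @ av
    return total / n_probes

H_est = np.diag(np.diag(H))                         # toy estimator of H_GN
norm_H = np.sqrt(frob_norm_sq(hvp, d))              # step 1: estimate ||H_GN||_F
S = (norm_H / np.linalg.norm(H_est)) * H_est        # step 2: rescale the estimator
diff_sq = frob_norm_sq(lambda v: hvp(v) - S @ v, d)
cos_est = 1.0 - diff_sq / (2.0 * norm_H ** 2)       # step 3: cosine similarity
cos_exact = np.sum(H * H_est) / (np.linalg.norm(H) * np.linalg.norm(H_est))
print(cos_est, cos_exact)                           # agree up to probe noise
```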
§ DEFERRED PROOFS * Consider two PSD matrices M_1 and M_2 having the eigenvalue decomposition M_1 = ∑λ_1i q_1iq_1i^⊤ and M_2 = ∑λ_2i q_2iq_2i^⊤. Then Tr(M_1M_2) = ∑_i,jλ_1iλ_2j(q_1i^⊤ q_2j)^2 Thus, if M_1 and M_2 have unit frobenius norm and M_1 is positive definite, then Tr(M_1M_2) > 0. Thus, if V_1 is positive definite, then by orthogonality of successive singular vectors, V_i for i ≥ 2 cannot be positive semi-definite. * Consider the eigendecomposition of any M ∈ S_q given by ∑_i=1^q λ_i v_iv_i^⊤. Denote L = {i: λ_i ≤1/√(q)}. As ∑λ_i^2 = 1, therefore, |A| ≥ 1. Consider any j ∈ A. Then ⟨ Vec(M), Vec(v_j v_j^⊤) ⟩≤1/√(q) As v_j is orthogonal to the other eigenvectors. Thus, we can see max_M ∈ S_qmin_M' ∈ S_q⟨M, M'⟩≤1/√(q) Moreover, for the matrix 1/√(q) I_q, for any matrix M', 1/√(q)⟨ I_q, M' ⟩ = tr(M')/√(q) where tr(M') denotes the trace of the matrix M'. However, we know tr(M') = ∑λ_i ≥ 1 as ∑λ_i^2 = 1. Thus 1/√(q)⟨ I_q, M' ⟩ = tr(M')/√(q)≥1/√(q) Note that this is the only matrix with this property as any other matrix will at least have one eigenvalue less than 1/√(q). Thus 1/√(q) I_q = _M ∈ S_qmin_M' ∈ S_q⟨M, M'⟩ * Evaluating G_B,s G_B,s^T, we get G_B,s G_B,s^T = 1/|B|^2∑_x,x' ∈ B, s=s[x], s'=s[x'] G_x,s G_x',s'^⊤ Taking the expectation over s for a given B, and by using _s[G_x, s] = 0 we get _s [G_B,s G_B,s^T] = 1/|B|^2∑_x _s ∼ f(x) [G_x,s G_x,s^⊤] = 1/|B|_x ∼ B, s ∼ f(x)[G_x,sG_x,s^⊤] Now taking an expectation over batches, we get |B| _B,s [G_B,s G_B,s^T] = _x, s ∼ f(x)[ G_x, s G_x, s^T ] * Evaluating G_B G_B^T, we get G_B G_B^T = 1/|B|^2∑_(x,y), (x',y') ∈ B G_x,y G_x',y'^⊤ Taking the expectation over B on both the sides, we get _B [G_B G_B^T] = 1/|B|^2[ |B| _x,y[G_x,yG_x,y^⊤] + (|B|^2-|B|) _x,y[G_x,y] _x,y[G_x,y]^⊤] _B [G_B G_B^T] = 1/|B|_x,y[G_x,yG_x,y^⊤] + (1 - 1/|B|) _x,y[G_x,y] _x,y[G_x,y]^⊤ § TECHNICAL BACKGROUND ON HESSIAN Gauss–Newton (GN) component of the Hessian. For a datapoint (x,y), let f(x) denote the output of a neural network and ℒ(f(x),y) represent the training loss. Let W ∈ℝ^m × n represent a weight matrix in the neural network and 𝒟 denote the training distribution. Then, the Hessian of the loss with respect to W is given by _(x,y) ∼𝒟[ ∂^2 ℒ/∂ W^2] = _(x,y) ∼𝒟[∂ f/∂ W∂^2 ℒ/∂ f^2∂ f/∂ W^⊤] + _(x,y) ∼𝒟[∂ℒ/∂ f∂^2 f/∂ W^2]. The first component, for standard losses like cross-entropy (CE) and mean squared error (MSE), is positive semi-definite and is generally known as the Gauss–Newton (GN) component (H_GN). Previous works have shown that this part closely tracks the overall Hessian during neural network training <cit.>, and thus most second-order methods approximate the GN component. Denoting ∂ℒ(f(x),y)/∂ W by G_x,y∈ℝ^m × n and g_x,y = G_x,y, for CE loss, it can also be shown that H_GN = _(x,y) ∼𝒟[∂ f/∂ W∂^2 ℒ/∂ f^2∂ f/∂ W^⊤] = _x ∼𝒟_x s ∼ f(x)[g_x,s g_x,s^⊤] , § RELATED WORK The literature related to second order optimization within deep learning is very rich, with methods that can be broadly classified as Hessian-free and methods based on estimating the preconditioner H (which could refer to either H_Ada or H_GN). Hessian-free methods <cit.> generally tend to approximate the preconditioned step (for Newton's method) using Hessian vector products, but do not maintain an explicit form of the Hessian. Estimating H <cit.> methods maintain an explicit form of the preconditioner that could be efficiently stored as well as estimated. 
§.§ Hessian-free One of the seminal works related to second order optimization within deep learning was the introduction of Hessian-free optimization <cit.>. The work demonstrated the effectiveness of using conjugate gradient (CG) for approximately solving the Newton step on multiple auto-encoder and classifications tasks. Multiple works <cit.> have extended this algorithm to other architectures such as recurrent networks and multidimensional neural nets. One of the recent works <cit.> also takes motivation from this line of work, by approximately using single step CG for every update, along with maintaining a closed form for the inverse of the Hessian, for the single step to be effective. §.§ Estimating Preconditioner Given that it is costly to store the entire matrix H, various works have tried to estimate layer-wise H. KFAC <cit.> was one of the first work, that went beyond diagonal approximation and made a Kronecker product approximation to layer-wise H_GN. It showed that this structure approximately captures the per layer Hessian for MLPs. This approximation was extended to convolutional <cit.> and recurrent <cit.> architectures. Subsequent works also improved the Hessian approximation, by further fixing the trace <cit.> as well as the diagonal estimates <cit.> of the approximation. A recent work <cit.> also demonstrated that K-FAC can be extended to large-scale training. From the viewpoint of approximating Adagrad <cit.>, <cit.> introduced Shampoo, that also makes a Kronecker product approximation to H_Ada. One of the subsequent work <cit.> introduced a modification of Shampoo, that was precisely estimating the layer-wise H_GN under certain distributional assumptions. Other works <cit.> introduced a distributed implementation of Shampoo, that has recently shown impressive performance for training large scale networks <cit.>. Recently, another paper <cit.> proposed a modification of Shampoo, empirically and theoretically demonstrating that the new estimator approximates H_Ada better than Shampoo's approximation. Our work shows that the square of Shampoo's approximation of H_Ada is nearly equivalent to the optimal Kronecker approximation. § COMPARISON WITH EXTRA SQUARE ROOT IN ADAGRAD BASED APPROACHES Multiple previous works <cit.> have tried to address the question of why Adagrad-based approaches like Adam and Shampoo, have an extra square root in their update compared to Hessian inverse in their updates. This question is primarily concerned with the final update to the weights being used in the optimization procedure, once we have approximated the Hessian. The primary contribution of this work is completely orthogonal to this question. We are addressing the question of optimal Kronecker approximation of the Hessian, and its connection to Shampoo's Hessian approximation. This is orthogonal to the Hessian power used in the final update.
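To make the final point concrete, the sketch below (illustrative NumPy, not the reference Shampoo implementation) applies the Kronecker-factored preconditioner at a tunable power via eigendecompositions and confirms that H_{1/2}^{-p} and H_1^{-p/2} yield the same update direction, so tuning p over either parameterization searches the same space.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, p = 4, 3, 0.5                                   # p is the tuned exponent

def mat_power(M, a):
    w, Q = np.linalg.eigh(M)
    return (Q * np.clip(w, 1e-12, None) ** a) @ Q.T

A = rng.standard_normal((m, m)); L = A @ A.T          # left factor,  E[G G^T]
B = rng.standard_normal((n, n)); R = B @ B.T          # right factor, E[G^T G]
G = rng.standard_normal((m, n))                       # current gradient

# Direction from H_{1/2}^{-p} = (L^{1/2} kron R^{1/2})^{-p}, applied in factored form:
L_half, R_half = mat_power(L, 0.5), mat_power(R, 0.5)
step_half = mat_power(L_half, -p) @ G @ mat_power(R_half, -p)
# Direction from H_1^{-p/2} = (L kron R)^{-p/2}, applied as the full Kronecker product:
step_full = mat_power(np.kron(L, R), -p / 2) @ G.reshape(-1)
print(np.allclose(step_half.reshape(-1), step_full))  # True: same search space over p
```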
http://arxiv.org/abs/2406.19226v1
20240627145107
Simulating Classroom Education with LLM-Empowered Agents
[ "Zheyuan Zhang", "Daniel Zhang-Li", "Jifan Yu", "Linlu Gong", "Jinchang Zhou", "Zhiyuan Liu", "Lei Hou", "Juanzi Li" ]
cs.CL
[ "cs.CL", "cs.HC" ]
[ Wenhua Hub ============== § ABSTRACT Large language models (LLMs) have been employed in various intelligent educational tasks to assist teaching. While preliminary explorations have focused on independent LLM-empowered agents for specific educational tasks, the potential for LLMs within a multi-agent collaborative framework to simulate a classroom with real user participation remains unexplored. In this work, we propose , a multi-agent classroom simulation framework involving user participation. We recognize representative class roles and introduce a novel class control mechanism for automatic classroom teaching, and conduct user experiments in two real-world courses. Utilizing the Flanders Interactive Analysis System and Community of Inquiry theoretical frame works from educational analysis, we demonstrate that LLMs can simulate traditional classroom interaction patterns effectively while enhancing user's experience. We also observe emergent group behaviors among agents in , where agents collaborate to create enlivening interactions in classrooms to improve user learning process. We hope this work pioneers the application of LLM-empowered multi-agent systems in virtual classroom teaching. § INTRODUCTION The pursuit of utilizing artificial intelligence to provide immediate and customized teaching for students origins from the era of Intelligent Tutoring Systems (ITS) <cit.>. Following this enthusiasm, from personalized educational recommendation systems <cit.> to teaching assistants <cit.> and even LLM-driven AI teacher <cit.>, researchers have conducted enormous technological explorations and achieved impressive performance in specific educational tasks. As technology advances, intense discussions have also emerged around this topic concerning methodologies <cit.>. One of the most central directions is how to fully leverage the capabilities of large models to simulate real classrooms with multiple agents for automated teaching. From an educational perspective, this approach allows large models to move beyond their instrumental use and delve deeper into educational paradigms <cit.>. From a technical standpoint, multi-agent collaboration technologies <cit.> could further stimulate the latent knowledge of large models in education, leading to the emergence of richer capabilities <cit.>. However, towards LLM-empowered multi-agent systems that involve real user participation, there are still several fundamental research questions that need to be explored. (1) Simulation Capability Assessment: To what extent can a multi-agent classroom powered by large models simulate real teacher-student interactions? (2) Learning Experience Measurement: Can students in such an intelligent teaching environment experience a high sense of presence and learn effectively? (3) Emergence Phenomenon Observation: What types of classroom behaviors may spontaneously arise in scenarios that integrate multiple agents? In this work, responding to the questions above, we present , a Multi-Agent Classroom Simulation framework, and conduct real-world observation along with analysis based on it. To better simulate the classroom, we recognize representative class roles and design a novel class control mechanism with functional workflows. For systematic experiments, we deploy 2 different courses with prepared slides and teaching scripts as basis. 48 students are invited to join the classroom, learning and interacting with the system, and all the behavioral data is carefully recorded. 
Then we conduct experiments to explore the mentioned questions. (1) Firstly, we apply the Flanders Interaction Analysis System <cit.> to evaluate the interactions happening in the and explore the interaction pattern of the agents' classroom. (2) Secondly, we analyze the educational experience of these users, particularly with Community of Inquiry theory <cit.>. (3) Finally, we summarize several emergent group actions during the experiment for qualitative analysis. During our experiments, we observe the effectiveness of the class role and control mechanism design. Based on the problems we identified, experimental results show that: (1) Similarity: exhibits behaviors, interaction patterns, and characteristics similar to those of traditional classrooms; (2) Effectiveness: Multiple classroom agents enable users to engage more effectively in class and enhance their sense of presence; (3) Emergence: Our control mechanism spontaneously elicits the emergent behaviors in the multi-agent classroom system, including collaborative teaching and discussion, emotional company and discipline control. In summary, the LLM-based multi-agent system demonstrates the potential for simulating real classroom environments for educational purposes. We hope our work serves as a pioneering effort in this direction. The dataset of classroom interactions between users and multiple LLMs will be released soon for both education and AI researchers. § RELATED WORK §.§ LLMs for Human Simulation Recently, Large Language Models (LLMs) have achieved remarkable breakthroughs in various natural language processing (NLP) tasks <cit.>. The intelligence they demonstrated opened up opportunities and possibilities for applications in many other scenarios <cit.>. As LLMs encode many human-like behaviors in their training data, an increasing number of researchers are utilizing LLMs for human scenario simulation, investigating the model's capabilities for decision and actions as LLM-Empowered Agents in many fields, such as social and psychological research <cit.>, software development <cit.>, chemical and medicine <cit.>, and games <cit.>. Novel collaboration techniques are explored to enhance the cooperation and performance of multi-agent systems <cit.>. These works offer technical possibilities for multi-agent education and inspire curiosity about potential emergent phenomena. §.§ LLMs for Education With the eminent linguistic capabilities, explanatory skills, and parameterized knowledge of LLMs, numerous studies have explored applying LLMs to education services. In addition to applying large models to downstream tasks in the education <cit.>, many researchers are applying these models to replace certain classroom aspects, such as playing students to train teachers <cit.> or playing instructors to teach students <cit.>. <cit.> explored the use of multiple student agents to assist students in discussion, though they haven't involved real users. Existing work has examined various facets of interactions between LLMs and humans in educational settings. § §.§ Overview The design principles for constructing this immersive simulated classroom originate from the following two concerns: (1) How to ensure that the classroom covers the core teaching behaviors? (2) How to maintain the entirety of the interaction within the natural flow of the classroom process? 
For the former concern, we categorize classroom interaction behaviors based on widely accepted pedagogy principles <cit.>: Teaching and Initiation (TI), the teacher's teaching and the feedback or ideas expressed by students; In-depth Discussion (ID), alignment, discussion, and multiple Q&A between teacher and students to help students construct understanding of concepts; Emotional Companionship (EC), encouraging students to learn, creating a positive learning atmosphere, and providing emotional support; and Classroom Management (CM), maintaining discipline, organizing disruptive behaviors, and guiding the classroom content. Given that these behaviors are realized through the varied Class Roles (denoted as ℛ= {r_i}_1^| ℛ|, where each r_i denotes a certain role), it is essential to ensure the diversity and coverage of proposed agents within the classroom. For the latter concern, we need to ensure that the interactions among multiple agents within the system are finely and rhythmically controlled within the course content. Given the Learning Materials (denoted as C = [ c_1,...,c_t ], where each teaching script c_t is organized by order), we propose a novel Session Controller to manage the course interaction flow based on class status and the help of a core manager agent <cit.>. Based on these principles, we construct multiple class roles, implement class control, and ultimately derive the simulated classroom process. §.§ Class Role Agentization The teaching and learning process is presented as an informative, multi-round, and task-oriented communication <cit.>. However, simply exchanging responses of LLMs inevitably faces significant challenges including role flipping, instruction repeating, and fake replies <cit.>. Consequently, following the classroom behaviors outlined previously, we define two types of agents: Teaching Agents and Classmate Agents. Each agent 𝐚_i∈𝒜 is facilitated through prompting LLMs and associated with one or more class roles, denoted as: 𝒜 = ρ ( LLM, 𝖯_A ), 𝒜⇔ℛ where ρ is the role customization operation, 𝖯_A is the system prompt with agent description. Teaching Agents The teacher and the teaching assistant are the authoritative party responsible for imparting knowledge in the classroom, encompassing most teaching behaviors. The acronyms in parentheses represent the roles that the agent needs to accomplish in a classroom environment. Teacher Agent (TI, ID, EC, CM) : Given the teaching scripts C, its task is to persuasively display material c_i to students or answer questions based on the classroom historical discussions H. Assistant Agent (ID, EC, CM): Given the classroom history H, the assistant is responsible to supplement teaching information, participate in discussion, maintain the discipline and continuity of the class, and enhance student learning efficiency. Classmate Agents This type of agents are incorporated in addition to the teaching agents with distinct personality traits to better simulate traditional one-to-many classrooms, performing peer student roles. In this paper, we initialize 4 typical classmates, while users can also freely customize and deploy more interesting classmate agents on the platform. Class Clown (TI, EC, CM): This agent is designed to initiate ideas, enliven the atmosphere, help the user as a peer, and help the teachers to guide the class direction when the user is distracted. Deep Thinker (TI, ID): This agent aims to do deep thinking and raise topics that challenge the knowledge of the classroom. 
Note Taker (TI, CM): This agent loves to summarize and share notes for classroom content, helping everyone to organize their thoughts. Inquisitive Mind (TI, EC): This agent frequently poses questions about lectures, which stimulates others' thinking and discussion. Based on their respective functions, some related technologies, such as question generation <cit.> and retrieval-augmented generation <cit.>, can also be integrated into the construction of classroom agents. §.§ Classroom Session Controller Unlike Standardized Operating Procedures (SOPs) multi-agent systems <cit.>, the classroom scenario is a dynamic group chat without a strict workflow, where agents need to dynamically determine the appropriate speaking timing. Therefore, we implement a controller that observes, makes decisions, and controls agents to behave based on the current Class State. The Session Controller includes the following modules: Class State Receptor, Function Executor, and Manager Agent. Class State Receptor Let the classroom dialogue history until time t denote as H_t = ⋃ (u_i^𝐚_j)^t, where u_i is the utterance posted by agent 𝐚_j or user (denoted as 𝐚_u). The class state S_t is composed as: 𝒮_t = {C_t, H_t | ℛ} where C_t ⊆ C is composed of the learning materials that have been taught until t. Functions We design and divide the actions in the classroom into a functional hierarchy with two major categories. Tutoring functions f_X can only be performed by teacher agent 𝐚_0, such as teaching by displaying scripts and going to the next material page c_i+1. Interacting functions f_Y can be performed by each agent 𝐚_j ∈𝒜. According to the context, the interaction will emerge as diverse classroom activities, which are discussed in subsequent experiments. These functions are pluggable, allowing the addition of newly defined functions for different agents, such as displaying exercises. f = { f_X { f_0(c_i, 𝐚_0), Teaching. f_1(c_i+1, 𝐚_0), Next Page. ... .... f_Y{ f_n(c_i, 𝐚_j, H_t), Interaction. ... .... . Manager Agent Following AutoGen <cit.> and MathVC <cit.>, we design a hidden and meta agent to regulate the speakers. This agent receives the current class state 𝒮_t, observes and understands the class process, and decides the next action to be executed.The task ℒ of Manager Agent can be defined as: ℒ: 𝒮_t → ( 𝐚_t, f_t ) | 𝐚_t ∈𝒜, f_t ⇐ f where f_t is a certain kind of function, and the action will be executed and refresh the whole class into the next state. Specifically, the system will wait for a time window τ after an action is performed. If the user speaks or the waiting period ends, it will trigger the manager agent to make a new decision. §.§ Classroom Demonstration After introducing the necessary component of the , we present the demonstration of an entire class process: (1) Initialization. At the beginning of the class, the first function will be executed, displaying the initial course script and slides. At this point, users can interact with the class, and the manager agent will start controlling the class flow; (2) Tutoring and Interaction: the manager agent will continuously observe and control the class based on the states, and other agents will perform diverse activities by collaboration. As the example shown in Figure <ref>, when a user asks about the course content, the classroom interaction flow may involve the assistant responding, the teacher providing additional information, and sometimes the classmate agents raising corresponding topics; (3) Ending. 
After all the learning materials are taught and the final discussion ends, the classroom will close and provide survey questions to users. § EXPERIMENTS To evaluate the performance of , we invite a group of university students to participate in the classroom to record interaction data and collect feedback from them. We also develop ablation systems to better understand the impact of various interaction types within . Our analyses mainly focus on three key aspects: classroom interactions, user experience, and the emergent group behaviors of the LLM-empowered agents. §.§ Experimental Setup Courses and Materials. We conduct experiments with two courses. Two experienced teachers are invited to design the slides and teaching scripts for the courses. The first one, TAGI, Towards Artificial General Intelligence, covers the development of AI and language models, consisting of 50 pages of slides, each with a corresponding teaching script. The second course, HSU, How to Study at University, addresses topics such as completing academic work, managing pressure, communicating with others, and achieving self-fulfillment, and includes 45 pages of slides and teaching scripts. Systems. We use GPT-4 as the backbone LLM of both Class Roles and Manager Agent in . Besides, we implement two ablation systems to investigate the effects of different interaction types in the classroom. In the first system (w/o classmates), the classmate agents are removed, and only teacher agents are present. In the second system (w/o interactions), both classmate agents and user input are disabled, resulting in no interaction at all. The teacher can only conduct lectures persistently, and the Manager Agent is limited to the tutoring function. Participants. In the experiment, we invite 48 university students from different majors to participate in learning two courses on , with each course and each setting involving 8 users. To ensure the quality control of course data, we invite course designers to create four questions for each course and require users to complete these questions after finishing the course. Data from participants with an accuracy of 50% or lower on these questions were excluded. Ultimately, data from 38 participants remained, with each course and setting having data from at least 5 users. Participants are informed that all course data is generated by AI and needed to be carefully discerned. Each participant receives the appropriate amount of compensation. Survey Content. In addition to the four test questions, each participant is required to complete a short survey composed of three questions regarding the experience after finishing the course. We apply the widely recognized Community of Inquiry (CoI) theory <cit.> in online learning to evaluate the experience of students. Specifically, we adapt the three key elements from CoI to measure the learning experience on : Cognitive Presence, the degree to which learners are able to construct and confirm meaning through sustained reflection and interaction; Teaching Presence, the extent to which the class is focused, designed, and planned with specific directions and learning objectives; and Social Presence, the ability of learners to project themselves socially and emotionally within a group <cit.>. Students are asked to rate the system on a scale of [0,1,2], with a higher score indicating better performance according to detailed guidelines. The survey questions are listed in Table <ref>, and further details can be found in Appendix <ref>. 
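The three systems compared above differ only in which agents and functions the Session Controller is allowed to invoke. The Python sketch below is purely illustrative — the role names, prompts, and the llm call are placeholders rather than the authors' implementation — and shows how the manager decision ℒ: S_t → (a_t, f_t), the class state, and the tutoring/interacting functions could be organized.

```python
import random
from dataclasses import dataclass, field

@dataclass
class ClassState:                                 # S_t = {C_t, H_t | R}
    taught: list = field(default_factory=list)    # C_t: teaching scripts shown so far
    history: list = field(default_factory=list)   # H_t: (speaker, utterance) pairs

AGENTS = ["teacher", "assistant", "class_clown", "deep_thinker",
          "note_taker", "inquisitive_mind"]       # placeholder role names

def llm(prompt: str) -> str:                      # stand-in for a GPT-4 call
    return "..."

def manager_decide(state: ClassState):
    """L: S_t -> (a_t, f_t): choose the next speaker and function from the class state."""
    reply = llm(f"Class history: {state.history[-10:]}\nWho should act next, and how?")
    speaker = random.choice(AGENTS)               # a real system parses `reply` instead
    function = "next_page" if speaker == "teacher" else "interact"
    return speaker, function

def run_class(scripts, user_utterances):
    state = ClassState(taught=[scripts[0]])       # initialization: display first script
    while len(state.taught) < len(scripts):       # a real system also waits a window tau
        if user_utterances:
            state.history.append(("user", user_utterances.pop(0)))
        speaker, function = manager_decide(state)
        if function == "next_page":
            state.taught.append(scripts[len(state.taught)])
        state.history.append((speaker, llm(f"Act as {speaker}, perform {function}.")))
    return state                                  # ending: close the class, show survey
```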
§.§ Statistical Results Table <ref> presents the average speech length of various roles and users across different settings. All systems employ the same teaching scripts, leading to the teacher's speech being the longest and most closely aligned with the scripts. The assistant's primary role is to maintain discipline, resulting in shorter dialogues. Classmate agents are generally more talkative, whereas users tend to use fewer words. Notably, the absence of classmate agents significantly reduces the speech length of users and assistant in both courses. The presence of classmate agents in the classroom appears to encourage users to engage in longer conversations. §.§ Interaction Analysis To understand the dynamics of as a multi-agent classroom system, we encode classroom activities into quantitative behaviors. We utilize the Flanders Interaction Analysis System (FIAS) <cit.>, a valuable tool for analyzing the verbal behaviors in traditional classrooms. We adapt the method to our simulated classroom system, , where interactions occur in natural language. Encoding the Interactions. As shown in Table <ref>, the FIAS categorizes interactions into ten distinct types: seven for teachers, two for students, and one for silent. Labels 1–4 represent Indirect Influence from the teacher, while labels 5–7 indicate Direct Influence. When classroom activities are encoded as sequences, the proportion of each interaction type and their transitions can be decoded to reveal the classroom style, teaching style, and other features. For the classroom history of each student, we prompt GPT-4 to label interactions according to the ten communication categories. We assess the quality of GPT-4's labeling in Appendix <ref>. The classroom interactions are encoded as sequences, and the two-step transitions of classroom activities are recorded in a 10 × 10 matrix ℳ∈ℕ^10 × 10. Following the method introduced by <cit.>, we add a 10 (silence) to the beginning and end of each class sequence and sum the matrices ℳ_i of n students in the same setting to provide a general view of the interactions: ℳ = ∑_i=1^nℳ_i. To interpret the classroom interaction Matrix and observe features in , we report the following metrics designed by <cit.>: Teacher Talk (TT) and Student Talk (ST). TT and ST represent the proportions of total tallies in specific categories that indicate the amount of talk from teacher and students. Respectively, TT and ST are calculated using categories 1–7 and 8–9. ID Ratio (IDR). This ratio measures the balance between a teacher's indirect and direct methods of communication and teaching in the classroom. It is calculated by dividing the sum of tallies in categories 1–4 (Indirect influence) by the sum of tallies in categories 5–7 (Direct influence). Student Initiation Ratio (SIR). SIR evaluates the extent to which students initiate interactions themselves during classroom activities, which measures how much students are actively engaging in the classroom. It is calculated by dividing the tallies in category 9 by the total tallies in categories 8–9. 0.15ex0.15ex Results. Figure <ref> presents the FIAS matrices of for TAGI and HSU courses. Each matrix is divided into four parts based on the type of interaction in the class, labeled as follows: A (top left): Interactions from teacher to teacher; B (top right): student or silence to teacher; C (bottom left): teacher to student or silence; and D (bottom right): student to student. 
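Before discussing the findings, note that the transition matrix and ratio metrics above are simple to compute once each class has been encoded. The sketch below assumes interaction sequences are already lists of FIAS category labels (1–10) and excludes silence from the talk ratios, which is our reading of the metric definitions rather than the authors' released code.

```python
import numpy as np

def fias_matrix(seq):
    """Two-step transition counts for one encoded class (FIAS categories 1-10)."""
    seq = [10] + list(seq) + [10]                 # pad with silence at start and end
    M = np.zeros((10, 10), dtype=int)
    for a, b in zip(seq[:-1], seq[1:]):
        M[a - 1, b - 1] += 1
    return M

def fias_metrics(M):
    tallies = M.sum(axis=0)                       # per-category tallies
    teacher, student = tallies[0:7].sum(), tallies[7:9].sum()
    return {
        "TT": teacher / (teacher + student),      # silence excluded from the ratios
        "ST": student / (teacher + student),
        "IDR": tallies[0:4].sum() / max(tallies[4:7].sum(), 1),   # indirect / direct
        "SIR": tallies[8] / max(student, 1),                      # cat. 9 over cats. 8-9
    }

sequences = [[5, 5, 8, 2, 5, 9, 3, 5], [5, 4, 8, 5, 5, 9, 9, 2]]  # toy encodings
M_total = sum(fias_matrix(s) for s in sequences)                  # M = sum_i M_i
print(fias_metrics(M_total))
```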
The matrices reveal the following findings: (1) In part A, most teacher actions are associated with lecturing (Cat. 5), where teachers primarily give lectures and interact with the class; (2) Part B demonstrates the teacher's responses to students. When students initiate ideas or responses to teachers, the teacher praises (Cat.2), accepts their ideas (Cat.3), or continues teaching; (3) Part C shows student actions in response to teachers, where students mostly initiate questions or respond to lectures; (4) Part D shows that student-to-student interactions and discussions occur periodically. The results of the ablation systems, demonstrated in Appendix <ref>, indicate that interactions are much less diverse after ablation. Table <ref> presents the metric results of FIAS. TT and ST represent the proportion of teacher and student speaking time, respectively. In traditional classrooms, the statistics for TT and ST range from 77.1%-82.6% and 17.4%-22.9% (excluding silence) <cit.>. exhibits a similar distribution. The IDR is low, which is partly due to the higher proportion of script-based teaching. The SIR is relatively high, especially in scenarios involving classmate agents, where there are more instances of students initiating questions. Conclusion of Interactive Analysis. From the perspective of ratio analysis (Table <ref>) and interaction distribution (Figure <ref>), demonstrates the characteristics of the traditional classroom, effectively simulates traditional classrooms and has the potential to achieve the performance of real classrooms. We further investigate the user experience and illustrate a few in the following sections. §.§ User Experience In this section, we report the results from the student experience with . As shown in Figure <ref>, several key findings are observed: (1) Importance of Interactions. Interactions during class are crucial for users. Without interaction, user experience significantly declines across all three metrics. (2) Enhancement by Classmate Agents. Classmate agents enhance user experience in terms of Cognitive Presence and Social Presence. This enhancement may be attributed to the classmate agents' active engagement in asking questions to the teacher, which aids the user's understanding of concepts and increases the sense of Social Presence in the classroom. (3) Satisfying Teaching Presence. All systems demonstrate good Teaching Presence, maintaining a focused and coherent class. This metric largely depends on the quality of the teaching scripts used, though we observe that interaction and Student Roles slightly improve the user experience. (4) Better Experience in HSU with students. The HSU course achieves a better user experience with the full setting. HSU focuses on college interpersonal relationships and learning methods, where peer learning plays a more significant role. This suggests that a multi-agent design is particularly crucial for certain types of classes. Conclusion of user experiments. According to Figure <ref> and Figure <ref>, the classroom demonstrates the effectiveness in in terms of both interactions and user presence. 0.15ex0.15ex §.§ Agent Behaviors Based on our classification of various types of classroom interactions in Section <ref>, we present some emergent group behaviors observed during the classroom experiments in . ∙ Teaching and Initiation. When the user learns from teaching, classmates engage and share their inspiring ideas, which deepens the depth of the topic and enriches the discussion. 
The diversity of agents, each approaching from different perspectives, introduces a wider range of possibilities for classroom teaching content. ∙ In-depth Discussion. If the current explanation is not clear enough for the users, they can ask questions at any time to initiate a discussion with the teacher and classmates until clarity is achieved. This highlights the advantage of as an interactive classroom compared to one-to-many education methods like pre-recorded videos. ∙ Emotional Companionship. Beyond knowledge dissemination, maintaining a positive learning atmosphere is crucial in classroom scenarios. When a user expresses negative learning intent, the classmate agent intervenes after the assistant, utilizing class content in the history and providing vivid emotional support as a non-teacher role. ∙ Classroom Management. Similarly, when a user tries to interrupt the system, the classmate agent subtly redirects the class while following the user's words. These classmates enhance classroom discipline more effectively than the teacher alone, demonstrating emergent group behaviors. Conclusion of case study. Based on the cases above, We can observe diverse interactions between different class roles within the classroom, as well as the effectiveness of the manager agent, who is spontaneously capable of designating appropriate speakers to elicit emergent group behaviors of Class Roles seamlessly, which significantly enlivens the class and enhances the user's experience. § CONCLUSION We introduce , a novel framework of multi-agent classroom using LLMs to answer several fundamental research questions in the era of LLM-driven education. Based on several theoretical methods, our experiments span two courses with real users, demonstrating interaction patterns similar to those in real classrooms and effective learning experience in . We observe emergent collaborative behaviors among LLM agents during the teaching process. Future work could incorporate more agents and explore more courses to analyze more diverse classroom behaviors. We hope our efforts can advance the explorations of LLM-empowered systems for AI-driven education researchers, practitioners, and pedagogues. § LIMITATIONS Despite our analytical efforts covering major theories in education, our work still has the following limitations: Firstly, we apply GPT-4 as our backbone model to perform our experiments. a more comprehensive understanding of our framework requires a broader range of diverse experiments. Secondly, we conduct experiments on a limited number of agents, while a more diverse set of agent characters could capture a wider array of behaviors in the classroom. Thirdly, we apply a limited quantity of functions in our system, while more various functions in teaching scenarios could further enhance the performance of the system. § ETHICAL CONSIDERATIONS Our investigation involves the development of a simulated classroom environment populated by artificial intelligent models acting as classmates and teachers. All user data obtained throughout these interactions will be anonymized to ensure privacy and confidentiality. Informed consent is obtained from participants, who are thoroughly briefed on the nature of simulation, the AI generated content, and the data collection process. Participants receive appropriate compensation for their involvement. In educational systems involving large language models, there is a potential for generating hallucinations and incorrect information. 
Therefore, applying these systems to real-world scenarios requires careful consideration and thorough evaluation before serving real users. § SURVEY AND QUIZ In this appendix section, we present detailed designs of the surveys and quizzes in our experiments. Table <ref> illustrates how the surveys were structured to evaluate three crucial dimensions of the learning experience: cognitive presence, teaching presence, and social presence. Each dimension includes rating guidelines to ensure consistent and reliable feedback from diverse users. The quizzes, administered after participants engaged with the simulated classrooms, were designed to measure the depth of their learning. The TAGI quiz evaluates understanding of artificial intelligence concepts, focusing on symbolic intelligence, pre-trained language models, and emergent phenomena (Table <ref>). Similarly, the HSU quiz assesses broader educational strategies and personal development, covering topics such as internal motivation, academic stress management, and time management (Table <ref>). All questions were meticulously crafted and verified by subject matter experts to align closely with course materials. Both quizzes feature multiple-choice questions, some with multiple correct answers, to comprehensively test whether the participants are actively engaged in the experiment. § EXAMINATION OF GPT-4 LABELING To validate the GPT-4 labeling in our experiment, we sampled 100 data points labeled by GPT-4 and had an expert familiar with FIAS label them for comparison. The results showed that GPT-4's labels matched the human expert's labels with an accuracy of 92%. We believe this demonstrates that GPT-4 can serve as a reliable and balanced alternative to crowd-sourced human labelers in our experiments. Additionally, we examined the eight instances where GPT-4's labels differed from the human expert's labels. These cases were also found to be uncertain during human labeling, suggesting that GPT-4 not only avoids individual human biases but also achieves a high level of precision comparable to human-labeled results. § FIAS MATRICES FOR ABLATION SYSTEMS In addition to the default setting of , we also provide the sum of the matrices based on Flanders Interaction Analysis System for our ablation settings (w/o classmate agents and w/o interactions), as demonstrated in Figure <ref> and Figure <ref>. In comparison with Figure <ref>, different types of classes demonstrate different interaction patterns. Generally, the fewer types of interactions there are, the less diverse the classroom will be. This indicates the significance of adding more kinds of agents for interactions, especially classmate agents to simulate a vivid classroom.
http://arxiv.org/abs/2406.19189v1
20240627140910
BISeizuRe: BERT-Inspired Seizure Data Representation to Improve Epilepsy Monitoring
[ "Luca Benfenati", "Thorir Mar Ingolfsson", "Andrea Cossettini", "Daniele Jahier Pagliari", "Alessio Burrello", "Luca Benini" ]
cs.LG
[ "cs.LG", "cs.AI" ]
empty empty JuliVQC: an Efficient Variational Quantum Circuit Simulator for Near-Term Quantum Algorithms Chu Guo ============================================================================================ § ABSTRACT This study presents a novel approach for EEG-based seizure detection leveraging a BERT-based model. The model, BENDR, undergoes a two-phase training process. Initially, it is pre-trained on the extensive Temple University Hospital EEG Corpus (TUEG), a 1.5 TB dataset comprising over 10,000 subjects, to extract common EEG data patterns. Subsequently, the model is fine-tuned on the CHB-MIT Scalp EEG Database, consisting of 664 EEG recordings from 24 pediatric patients, of which 198 contain seizure events. Key contributions include optimizing fine-tuning on the CHB-MIT dataset, where the impact of model architecture, pre-processing, and post-processing techniques are thoroughly examined to enhance sensitivity and reduce false positives per hour (FP/h). We also explored custom training strategies to ascertain the most effective setup. The model undergoes a novel second pre-training phase before subject-specific fine-tuning, enhancing its generalization capabilities. The optimized model demonstrates substantial performance enhancements, achieving as low as 0.23 FP/h, 2.5× lower than the baseline model, with a lower but still acceptable sensitivity rate, showcasing the effectiveness of applying a BERT-based approach on EEG-based seizure detection. Clinical relevance— The model enhances clinical seizure detection, offering personalized treatments and better generalization to new patients, akin to successes with transformer-based models, thus significantly improving patient safety and care. Healthcare, EEG, Time Series Classification, Deep learning § INTRODUCTION & RELATED WORKS Epilepsy is a neurological disorder that affects over 50 million individuals in the world <cit.> and is characterized by abnormal electrical activity of the brain that causes recurrent seizures. While medication management is the cornerstone of treatment, drug-resistant cases often require more advanced interventions, including surgery or neurostimulation. In this context, noninvasive eeg data plays a crucial role in seizure detection and monitoring to trigger closed-loop actions such as neurostimulation. Current methods of classifying raw eeg mostly rely on dnn, which demonstrated higher accuracy than classical machine learning algorithms <cit.>. On the other hand, dnns often face challenges when extracting features from relatively long time windows because of the lack of exploring the global correlation of all the input samples <cit.>. Recent approaches <cit.> showed promising results when working on a reduced number of EEG channels for the deployment of seizure detection approaches on wearable low-power devices. However, most of the works mentioned above fail to reach a False Positive per hour (FP/h) ratio that can enable the clinical implementation of such methods. Although they can correctly detect almost all seizure events, they show a FP/h ratio that jeopardizes their real-world deployment, where wrongly reported seizures will alert the patient and reduce the detection device reliability. Furthermore, another key open challenge in all DNN models is the unavailability of very large high-quality labelled datasets <cit.>, as collecting and labelling seizure data is expensive and requires complex, human labour-intensive protocols, as well as patient hospitalization. 
Our work investigates whether a model based on the Transformer architecture, which has recently gained widespread adoption in numerous deep learning applications, coupled with self-supervision and pre-training on large unlabeled datasets, can overcome the data labelling bottleneck and improve the state-of-the-art EEG-based seizure detection for long-term epilepsy monitoring. Our work focuses on the reduction of the FP/h ratio, even at the expense of a reduced sensitivity, working towards the objective of clinical deployment of seizure detection methods: we want a model that predicts a sufficient number of seizure events, with the least number of false alarms, thus reducing stress and anxiety of the monitored patients. The state-of-the-art in this domain is dominated by CNN-based methods <cit.>, with the best approach achieving 100% sensitivity and 0.58 FP/h on CHB-MIT dataset <cit.>. We take inspiration from a recent study <cit.> that applied Transformers and self-supervised sequence learning in bci with eeg data. Our contributions are: * The adaptation for a seizure detection task on CHB-MIT of bendr <cit.>, an approach inspired by wav2vec 2.0 <cit.> and bert <cit.> which allows to exploit massive amounts of unlabeled eeg data. * An extensive task-specific optimization of the base BENDR model, including i) custom training strategies to improve seizure-detection fine-tuning on CHB-MIT; ii) the evaluation of the impact of architectural choices; iii) the application of pre- and post-processing techniques. * A promising application of a transformer-based approach, pre-trained on a large amount of unlabelled data, for the seizure detection task. Thanks to our optimizations, our best model can reduce the FP/h to as low as 0.23, outperforming the best SoA model by 2.5×, with a lower yet acceptable sensitivity of 72.58% <cit.>. § MATERIALS & METHODS §.§ Model training and base architectures §.§.§ Self-supervised pre-training The first model training step considered in this work consists in the same self-supervised pre-training scheme of bendr, which in turn closely mirrors wav2vec 2.0 <cit.>. Namely, the model is pre-trained on a large unlabelled eeg dataset (TUEG) <cit.> to learn intrinsic patterns in EEG data. This initial phase then lays the groundwork for subsequent fine-tuning on our smaller and task-specific labelled dataset for seizure detection (CHB-MIT). Fig. <ref>a illustrates the model architecture used in the self-supervised pre-training phase. The encoder's objective is to reconstruct the original sequence elements. This is achieved thanks to a contrastive loss function, which aims to align each element of the transformer's output sequence c_i with the corresponding input b_i, notwithstanding the masking. In our work, we do not re-implement this phase, and directly leverage the pre-trained weights of the original bendr paper <cit.>. §.§.§ Fine-tuning The masking component is omitted in the fine-tuning phase, and bendr vectors are fed directly to the Transformer. A classification block composed of one or more linear layers with a final Softmax activation is then employed to predict seizure occurrences based on the first element of the transformer output sequence. Notably, during pre-training, this first input token was assigned a special value of -5 (selected to be out-of-distribution), and excluded from the contrastive loss. This step is crucial to ease downstream specialization to identify seizures accurately. Fig. <ref>b shows the architecture used in this phase. 
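A schematic PyTorch sketch of this fine-tuning path is given below; the layer sizes, convolution strides, and classifier width are illustrative choices rather than the exact BENDR configuration, and the softmax/loss is assumed to be applied outside the module. It shows the elements named in the text: a convolutional encoder producing BENDR-like vectors, an out-of-distribution start token fixed at −5, a Transformer encoder, and a classifier that reads only the first output token.

```python
import torch
import torch.nn as nn

class SeizureFineTuner(nn.Module):
    """Schematic fine-tuning model: conv encoder -> Transformer -> classify from the
    first output token. Sizes and strides are illustrative, not the exact BENDR ones."""
    def __init__(self, in_ch=20, feat=512, n_layers=4, n_heads=4, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(                    # downsampling convolutional stage
            nn.Conv1d(in_ch, feat, kernel_size=3, stride=3), nn.GELU(),
            nn.Conv1d(feat, feat, kernel_size=3, stride=2), nn.GELU(),
        )
        self.register_buffer("start_token", torch.full((1, 1, feat), -5.0))
        layer = nn.TransformerEncoderLayer(d_model=feat, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Sequential(nn.Linear(feat, 256), nn.ReLU(),
                                        nn.Linear(256, n_classes))

    def forward(self, x):                                # x: (batch, channels, samples)
        b = self.encoder(x).transpose(1, 2)              # BENDR-like vectors (batch, seq, feat)
        tok = self.start_token.expand(x.size(0), -1, -1) # out-of-distribution first token
        c = self.transformer(torch.cat([tok, b], dim=1))
        return self.classifier(c[:, 0])                  # logits; softmax/loss applied outside

model = SeizureFineTuner()
print(model(torch.randn(2, 20, 2048)).shape)             # 8 s at 256 Hz -> torch.Size([2, 2])
```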
A significant modification of the fine-tuning task with respect to bendr is the use of the sswce loss, introduced by <cit.>. Building upon the insights from prior research <cit.>, this study employs a subject-specific approach for fine-tuning the seizure detection model. Such a tailored approach is pivotal in capturing the unique eeg patterns and seizure characteristics inherent to each individual, enhancing sensitivity and specificity. Precisely, we employ a loocv strategy, which involves training the model on all seizure-containing records of a patient except one. The excluded record is then used as test set, to assess the model's efficacy. A further 20% of the training records is extracted to obtain a validation set, used for early stopping, learning rate scheduling, etc. The whole procedure is repeated cyclically, considering different records as test set. §.§.§ Second Supervised Pre-training A novel aspect of this work is the implementation of a second pre-training phase on CHB-MIT, prior to the subject-specific loocv. In this phase, the model is trained in a supervised way (as described in Sec. <ref> and with the architecture of Fig. <ref>b), to predict seizures on all subjects except the target one. This step aims to imbue the model with a broader understanding of eeg patterns associated with seizures across different subjects, before honing in on the specific characteristics of a single patient. Figure <ref> illustrates the comprehensive training approach employed in the study. §.§ Task-specific optimizations In order to maximize seizure detection accuracy, we explore with varying the model architecture, and apply several pre- and post-processing optimizations. Keeping the fundamental architecture of Fig. <ref>, we explored different classification block architectures, finally converging to a sequence of 4 fully connected layers with decreasing dimensions, progressively reducing the feature space {(512, 256), (256, 128), (128, 64), (64, 2)} and improving the discriminative power of the model. We then vary the number of blocks in the Convolutional stage in {3, 6}, and the number of layers and heads in the Transformer Encoder in {2, 4, 6, 8, 12} to maximize seizure detection accuracy. Additionally, we evaluate different weights initialization policies <cit.>, in order to apply weights pre-trained on TUEG to models of different sizes (e.g., with a smaller/larger number of layers). Namely, to initialize the additional layers of a bigger model with respect to the original one, we tested both the duplication of the pre-trained weights and random initialization. On the other hand, when testing a smaller model, we initialize its weights only with the layers shared with the original model. Based on the hypothesis that early layers may capture the underlying, task-independent structure of EEG data, we consider freezing the weights of convolutional layers and fine-tuning only the Transformer encoder. For completeness, we also test how the model behaves when the Transformer encoder is frozen as well. As pre-processing, we consider applying a 5^th-order Butterworth bandpass filter within a 0.5–50 Hz frequency range to the input, to smooth the signal and reduce ripple effects, followed by either MinMax or MeanStd normalization. Moreover, to address the challenge posed by the highly unbalanced nature of EEG data, we tested two oversampling methods during supervised training phases, smote and Weighted Random Sampler. Lastly, we apply post-processing on the model's output to further reduce FP/h. 
Specifically, we go through the predicted labels with a sliding window and replace the central element with an aggregate prediction over the window. We consider two aggregation criteria: majority voting and minPooling. In the latter, we select the smallest predicted value in the window as the output. Windows of lengths 3, 5, 7 were considered. §.§ Training Protocol We train our models using an Adam optimizer with a learning rate of 1e^-4 and 0.01 weight decay, reducing the learning rate of factor 0.1 when the validation loss stops improving for 5 consecutive epochs. We apply early stopping on the validation loss with a patience of 15 epochs. Dropout layers with 50% probability are added both in the Convolutional stage (between 1D convolutions and Group Normalization layers) and in the Transformer encoder to reduce overfitting. §.§ Datasets §.§.§ Self-supervised Pre-training Data For the self-supervised pre-training phase, the Temple University Hospital eeg Corpus (TUEG) <cit.> was utilized, encompassing 1.5 TB of clinical eeg recordings. This dataset includes over 10,000 individuals, with diverse demographics, including 51% females and a wide age range. Pre-existing pre-training weights were employed as the starting point for model initialization. §.§.§ Supervised Fine-tuning Data The supervised fine-tuning phase utilized the CHB-MIT Scalp eeg Database <cit.>, comprising eeg recordings from 24 pediatric subjects with intractable seizures at Boston Children's Hospital. The dataset, collected using the International 10-20 system, includes 664 EDF files, with 198 containing seizure events. For this study, only files with seizure occurrences were considered. Additionally, only 20 out of 23 channels were considered to match the channels used in the pre-training task on TUEG. All signals were sampled at 256 samples per second with 16-bit resolution and a window length of 8s was considered, without overlap. § EXPERIMENTAL RESULTS All results that are presented were obtained considering the LOOCV setup described in <ref>, oversampling training data with a Weighted Random sampler. In terms of metrics, we focus on sensitivity and FP/h: sensitivity represents the ratio of correctly identified seizure events, i.e., an event for which at least one of the segments that compose it is assigned a positive label; FP/h are the number of false alarms in an hour, which is inversely proportional to the specificity. After an extensive hyperparameters search for the sswce loss, we found α=0.8 and β=0.2 to be the best trade-off to maximize sensitivity and reduce FP/h. §.§ Model Performance Table <ref> details our seizure detection results on the CHB-MIT dataset. First, we report the baseline models, i.e., the ones obtained fine-tuning on CHB-MIT the original BENDR architecture <cit.> without any of our optimizations. This serves as a starting reference to demonstrate the improvements achieved by more refined solutions. Baseline results are reported both with pre-trained weights from TUEG and without any pre-training. Without pre-training, the baseline achieves a 50.15% sensitivity and an extremely high FP/h of 132.24. This is reduced to 10.38 FP/h when using pre-trained weights, demonstrating the crucial role of self-supervised pre-training for this kind of model. In subsequent table rows, we report our successive refinements of the model obtained with optimizations described in the previous section, applied incrementally to the baseline. 
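The two aggregation rules can be written compactly; the sketch below is illustrative (keeping the original labels at the window borders is our assumption about boundary handling, which the text does not specify).

```python
import numpy as np

def smooth_predictions(labels, window=3, mode="minpool"):
    """Replace the central element of each sliding window with an aggregate prediction."""
    labels = np.asarray(labels)
    half = window // 2
    out = labels.copy()                              # labels at the borders are kept
    for i in range(half, len(labels) - half):
        w = labels[i - half:i + half + 1]
        out[i] = w.min() if mode == "minpool" else int(w.sum() > half)
    return out

raw = np.array([0, 1, 0, 0, 1, 1, 1, 0, 1, 0])       # per-8-s-window predictions
print(smooth_predictions(raw, 3, "minpool"))         # isolated positives removed -> fewer FP/h
print(smooth_predictions(raw, 3, "majority"))
```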
Results in bold represent the configurations that are used, after each optimization step, as the starting point for the next one. First, we explore freezing the weights of the two main parts of the architecture: as expected, freezing the encoder weights worsens the results, while freezing the initial convolutional blocks leads to good generalization capabilities. We attribute this to the fact that the initial convolutional layers extract generic (i.e., not task-specific) features from the EEG signals during pre-training. This model reaches better sensitivity (54.93%) and FP/h (12.44). On top of this, by applying our second (supervised) pre-training and the filtering pre-processing, we obtain a model with an improved sensitivity of 69.11% and a reduced FP/h of 6.95. The last steps are the exploration of the model architecture and the application of pre- and post-processing. Firstly, MeanStd normalization performs better than MinMax. Regarding the model architecture, we notice that a more complex classifier can learn more discriminative EEG patterns and make more accurate predictions, reaching 81.87% sensitivity and 2.75 FP/h. Then, we tune the number of convolutional layers and transformer blocks, obtaining the best generalization capabilities with 6 convolutional blocks and 4 attention layers in the encoder, with 4 heads each. Our best post-processing pipeline applies minPooling on the output, using a sliding window of length 3. This leads to 72.58% of seizures detected, with 0.23 FP/h. §.§ Comparison with State-of-the-Art Table <ref> shows a comparative analysis with the latest state-of-the-art works that address seizure detection on CHB-MIT considering all patients and using from 18 up to 23 EEG channels. We distinguish between CNN-based and Transformer-based approaches. Our approach outperforms all of them in terms of FP/h, at the cost of lower sensitivity, demonstrating the capability of modern transformer models to capture complex epileptic patterns in EEG. Our approach reduces the FP/h of the best-performing CNN-based model <cit.> by 2.5×, while simplifying the heavy pre-processing and feature-extraction steps typical of CNN-based approaches, at the expense of a reduced sensitivity (27.42% decrease). Our approach also significantly improves upon other transformer-based models, with a 4.7× lower FP/h with respect to <cit.>. Notably, this is crucial for a real-life closed-loop system, given that a high number of false positives per hour would cause many warnings to patients, which in turn increases their stress, as discussed in <cit.>. Moreover, a sensitivity over 50% has already been shown to be an acceptable requirement for a seizure detection method <cit.>. § CONCLUSION This work proposes a BERT-based approach for seizure detection on the CHB-MIT dataset, demonstrating the potential of transfer learning from general EEG data to the seizure detection task. We also validate the effectiveness of a Transformer-based architecture which, once pre-trained on a large amount of unlabelled data, can partially overcome the data-labelling bottleneck and improve on state-of-the-art results. We then extensively explore hyperparameters and pre-/post-processing techniques to improve the model performance. The best model found obtains 0.23 FP/h while detecting 72.58% of seizures.
Our future work will include compressing the model and reducing the number of EEG channels to explore the possibility of its deployment on a wearable device, as well as implementing artefact detection and removal algorithms <cit.> to further improve performance towards the clinical implementation of such methods.
http://arxiv.org/abs/2406.18267v1
20240626114153
New ephemerides and detection of transit-timing variations in the K2-138 system using high-precision CHEOPS photometry
[ "H. G. Vivien", "S. Hoyer", "M. Deleuil", "S. Sulis", "A. Santerne", "J. L. Christiansen", "K. K. Hardegree-Ullman", "T. A. Lopez" ]
astro-ph.EP
[ "astro-ph.EP" ]
Ephemerides and TTVs of the system Vivien et al. Aix Marseille Univ, CNRS, CNES, Institut Origines, LAM, Marseille, France Caltech/IPAC-NASA Exoplanet Science Institute, Pasadena, CA 91125, USA Steward Observatory, The University of Arizona, Tucson, AZ, 85721, USA Multi-planet systems are a perfect laboratory for constraining planetary formation models. A few of these systems present planets that come very close to mean motion resonance, potentially leading to significant transit-timing variations (TTVs) due to their gravitational interactions. Of these systems, represents a excellent laboratory for studying the dynamics of its six small planets (with radii ranging between ∼1.5 – 3.3), as the five innermost planets are in a near 3:2 resonant chain. In this work, we aim to constrain the orbital properties of the six planets in the system by monitoring their transits with CHaracterising ExOPlanets Satellite (). We also seek to use this new data to lead a TTV study on this system. We obtained twelve light curves of the system with transits of planets d, e, f, and g. With these data, we were able to update the ephemerides of the transits for these planets and search for timing transit variations. With our measurements, we reduced the uncertainties in the orbital periods of the studied planets, typically by an order of magnitude. This allowed us to correct for large deviations, on the order of hours, in the transit times predicted by previous studies. This is key to enabling future reliable observations of the planetary transits in the system. We also highlight the presence of potential TTVs ranging from 10 minutes to as many as 60 minutes for planet d. New ephemerides and detection of transit-timing variations in the system using high-precision photometry H. G. Vivien1 ^https://orcid.org/0000-0001-7239-6700 < g r a p h i c s > S. Hoyer1 ^https://orcid.org/0000-0003-3477-2466 < g r a p h i c s > M. Deleuil1 S. Sulis1 ^https://orcid.org/0000-0001-8783-526X < g r a p h i c s > A. Santerne1 ^https://orcid.org/0000-0002-3586-1316 < g r a p h i c s > J. L. Christiansen2 ^https://orcid.org/0000-0002-8035-4778 < g r a p h i c s > K. K. Hardegree-Ullman3 ^https://orcid.org/0000-0003-3702-0382 < g r a p h i c s > T. A. Lopez ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ § INTRODUCTION Multi-planetary systems account for about 25% of confirmed planets to date[ Available systems gathered from <exoplanet.eu>]. The exoplanetary system known to host the largest number of planets so far is Kepler-90, with eight planets <cit.>. The vast majority of these systems are composed of small planets, super-Earths and mini-Neptunes, in the innermost part of the systems (<100 days periods). These packed systems are the ideal laboratory for testing and calibrating models of planetary formation: originating from the same original disc, the composition of these planets gives us clues about the region of the disc where they are likely to have formed. 
The orbital architecture of the system gives us information about the orbital evolution they may have undergone <cit.>. In these compact systems, the gravitational interactions between planets can be strong, as predicted by <cit.>. These interactions result in variations of differing magnitude (from a few seconds to nearly half an hour) in the transit times of the planets. Measuring these transit time variations <cit.> requires high photometric accuracy and a high measurement cadence in order to accurately determine transit times as well as ingress and egress. Studies of such systems can also benefit from a long time coverage to better identify which planet is transiting at a given time and track the transit time evolution. As TTVs are gravitational effects, they can be used to measure the mass of planets from photometric data alone, providing insights into their possible composition. To date, there are 39 systems known to host five or more planets, but only a handful with a resonant chain <cit.>. This phenomenon occurs when multiple successive planets are in (or close to) mean motion resonance <cit.>, namely, the planet period ratios are close to a ratio of small integers (r+1):r. Among these systems, we observe chains of at least three planets only in: Kepler-90 <cit.>, HD 219134 <cit.>, TRAPPIST-1 <cit.>, K2-138 <cit.>, HD 110067 <cit.>, Kepler-80 <cit.>, TOI-1136 <cit.>, TOI-178 <cit.>, and HD 158259 <cit.>. Such observations question formation models that predict a breaking of the chain soon after the dispersion of the protoplanetary disk via dynamical instabilities. The exact nature of these instability processes is however still debated <cit.>. The study of these systems therefore allows us to probe the architecture of these particular systems at a time when the system is in a stable phase. is a prime example of a highly packed resonant system. Originally described in <cit.> using data, it was found to host five sub-Neptunes close to 3:2 MMR. A better characterization of the planets' masses was obtained by <cit.> using observations. Using , <cit.> confirmed an outer sixth planet, not in the resonant chain. Finally, <cit.> investigated the composition of the planets and found that the inner planets present an increasing water content with distance from the host star, while the outer planets have an approximately constant water content. The cadence of the photometry (8 to 10 min) did not allow for a TTV analysis of the system for which TTV amplitudes between 2.1 and 6.5 min are expected <cit.>. We therefore observed the system with <cit.> to obtain high-precision photometry of the target and further explore the presence of TTVs. The higher cadence of , 60 sec in the case of K2-138 should therefore be sufficient to evaluate TTVs of even two minutes. The paper is organized as follows. First, we describe the observations used in this work in Sect. <ref> and the data modeling in Sect.<ref>. We then present improved ephemerides of the system in Sect. <ref> and our TTV analysis in Sect. <ref>. Finally, we present our conclusions in Sect. <ref>. § OBSERVATIONS AND MODELING §.§ Stellar and planetary properties A22 conducted a full spectroscopic analysis using stellar modeling to derive the atmospheric and fundamental parameters of the host star. The improved host star's parameters helped determining the possible composition of the planets in the system. To use completely homogeneous priors during our analysis, we used the stellar and planetary parameters derived in that work. 
For sake of clarity, the stellar parameters are given in Table <ref>. We note that the values reported by A22 for the planets are consistent with those in <cit.>. The system is thought to be made up of six planets: b, c, d, e, f, and g. Planets b through f are found to be very close the 3:2 resonance, thus forming one of the longest known chains. The periods of these planets are, respectively, 2.35, 3.56, 4.40, 8.26, 12.76, and 41.97 days. §.§ Previous photometric observations Available observations of the system prior to this work include the original data from campaign 12. This photometry was acquired over a 79 day period, between December 15, 2016 and March 4, 2017, using the long cadence mode of the spacecraft <cit.>. The Space Telescope subsequently observed the system using the InfraRed Array Camera (IRAC) for 11 hours between March 15 and 16, 2018 (DDT 13253; PI: J. L. Christiansen). These observations were targeting planet g, extending coverage to the tenth epoch since discovery. Finally, the Telescope at the Palomar Observatory observed with the Wide-field InfraRed Camera (WIRC) on two occasions, August 31, 2019 and November 4, 2019. Both observations were aimed at planet d, and observed epochs 182 and 194, respectively <cit.>. §.§ observations was observed by between 2020 July 22 and 2020 October 6 over twelve visits with durations between 8h42m and 34h44m as part of program 17 (PI: T.A. Lopez). The original schedule of observations was planned to acquire transits of planets d, e, f, and g. Due to the brightness of the star (V=12.2) an exposure time of 60 seconds was used. In total, observations account for slightly more than five and a half days, covering fifteen transits of four planets in the system: four transits for planet d, three for planet e, six for planet f, and one for planet g. Details about each visit can be found in Table <ref>. For the sake of simplicity, each light curve is given an identifier based on the last three digits of the corresponding file name. Light curves were obtained after the calibration and correction of the images by the automatic data reduction pipeline <cit.>. This version of the DRP automatically delivers the target's photometry for four different aperture radii: =25 px, =22.5 px, =30 px, and which is computed for each target based the target's flux and contamination of the field of view. For , the aperture was computed to be of 24 px. To compute the latter, the DRP uses the GAIA DR2 catalog <cit.> to simulate the position and flux of putative contaminants in the field of view of the target. For , no such contaminant is located within any of the apertures. The closest contaminant[2MASS J23154868-1050583] appearing in the DR3 survey is located at ∼13", with a G band magnitude of ∼20.8, much fainter than that of , at 12. We thus select the aperture, which minimizes the RMS of the light curves. To correct and detrend each CHEOPS light curves from systematic due to CHEOPS orbit or spacecraft effects, we use the package <cit.>. We mask points for which the photometry deviates by more than 1.5×MAD (mean absolute deviation). We also mask points where the flux is strongly affected by cosmic ray hits during passages through the South Atlantic Anomaly or when large deviations of centroids of the Point Spread Function (PSF) of the target are measured. To correct for non-astrophysical noise sources such as background or stray light from the roll angle, we detrend the light curve using the basis vectors provided by the DRP. 
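The 1.5×MAD clipping described above is straightforward to reproduce; the following is a minimal numpy sketch, in which measuring the deviation from the median of the light curve (rather than the mean) and applying the cut to a single, already-normalized flux array are our own assumptions, since the text only states the 1.5×MAD threshold:

import numpy as np

def mad_mask(flux, threshold=1.5):
    """Flag photometric points deviating by more than threshold x MAD.

    Following the text, MAD is the mean absolute deviation; the deviation
    is measured here from the median of the light curve (an assumption).
    Returns a boolean mask of points to retain.
    """
    center = np.median(flux)
    mad = np.mean(np.abs(flux - center))
    return np.abs(flux - center) <= threshold * mad

# Example on a hypothetical normalized light curve (not real CHEOPS data):
time = np.linspace(0.0, 0.5, 500)            # days
flux = 1.0 + 2e-4 * np.random.randn(500)
flux[[50, 300]] += 5e-3                      # outlier-like spikes
good = mad_mask(flux, threshold=1.5)
time, flux = time[good], flux[good]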
For each visit, we follow the recommendation of the DRP for the decorelations flags to use. The detrending vectors are then passed as free parameters when fitting the transits with a normal distribution centered on 0 (𝒩(0, 1)). To detrend from stray light due nearby objects reflecting in the telescope (or "glint") we fit a spline against the roll angle of the telescope. This is done in using the function, which is fitted using uniform distribution between 0 and 2 (𝒰(0,2)). The cutoff values, vectors, glint orders used, and the remaining number of points are detailed in Table <ref>. visits are typically setup to catch one transit of a single planet, and we scheduled multiple visits for planets d, e, and f (table <ref>). This way, we observed four transits of planet d with , thus extending the transit monitoring up to 250 orbits after the first epoch, and three transits of planet e. The ephemeris for planet f predicted the transit ∼2.5 hours too late. Combined with the 3.2 hours transit duration, three visits completely miss the first half of the transit. Additionally, the egress falls in the observation gaps of the orbits in four visits. In total, out of the six available visits, only two egresses are perfectly caught. Because of the uncertainty on the previous ephemerides, some transits were predicted to occur in the visits, but were not detected. We identified those in the first column of Table <ref>. Notably, planets b and c were not detected at all, while only a single transit of planet d was not detected. Given the number of planets in this system and their short orbital periods, the ephemerides from HU21 and A22 predict light curves with multiple transits. Five visits should contain two or more transits of the different planets in the system: 102 (b, d), 201 (b, c, e), 303 (d, f), 304 (b, f), 501 (b, d, f, g) (see Figs. <ref>, <ref>, and  <ref>). §.§ Modeling of the light curves We perform the light curves analysis with the package <cit.>, routinely used for observations analyses <cit.>. This is done in two steps. A first estimate of the model parameters are determined using [<https://lmfit.github.io/lmfit-py/index.html>] <cit.>, which fits the transit using a least-square scheme. Then the parameters are refined using the package <cit.>, which explores the parameter space using an MCMC scheme. This second step allows for a better estimation of the fit, as well as the errors associated to each parameter. We use a normal distribution on planetary and stellar parameters priors based on their mean and 1σ interval values from A22, given in Tables <ref> and <ref>. Unlike most visits that are of short duration, aiming to capture unique transits, visit 501 exhibits four transit-like features (Fig. <ref>). We first identified the planets most likely to be associated with each transit observed in this visit. Based on the ephemerides from A22, we fitted each of the observed planets in their respective visits to get a good estimate of their transits timing. We have used these updated ephemerides when possible (and retained those of A22 in cases where they were not available) to determine which planets were the most likely observed during this visit. We thus identified the first three transits as being due to planets d, f, and g, respectively. Planet b appeared as a the most likely candidate to explain the fourth transit feature in this visit. However, as is detailed in Sect. <ref>, we finally discarded this possibility. 
The package is designed so that only one transit per light curve can be analyzed. To overcome this limitation for visit 501, to fit each transit event, we mask the other unwanted events using the predicted positions of the planets. Therefore, we were able to proceed with the fit as if a single transit was present in each case. This allowed us to follow the same procedure as described above for each transit. also allows also for multi-visit modeling <cit.>. Transits of planet d, f, and g of visit 501, were included in their respective multi visit analysis. The resulting fits on visit 501, including the putative planet b, are shown with their ephemerides in Fig. <ref>. For each planet, we detailed below the results obtained from the analysis of our observations (Sect. <ref>). The results of the fits, solely based on data, are presented in Table <ref>. § TRANSITS ANALYSIS §.§ Planet b With an expected depth of around 250 ppm (see Table <ref> for details), the transit of planet b is at the edge of detectability with . Since the last observed transit of planet b dates back to and that its current ephemeris predict that it should be visible in four of our visits, we nevertheless attempted to find it in our dataset. No visit was aimed directly at planet b, but it should instead appear on visits targeting other planets (see Table <ref>). We show the expected transits (based on A22's ephemeris) in Figs. <ref> and <ref>. In the cases of visits 201 and 501, the predicted transit time and its 1σ error is fully contained within the observations, unlike visits 102 and 304. Since visits 102, 201 and 304 are quite short due to focusing on a single planet, we attempted to fit planet b's transit signature in the residual of each visit (Fig. <ref>). We found no clear evidence of a photometric dip corresponding undoubtedly to planet b, therefore, we do not use them in the following TTV analysis. Visit 501 was aimed at capturing a transit of planet g and is a much longer observation than a typical visit. We therefore expected good chances of seeing a transit of planet b in this dataset. The best case scenario obtained is the one shown in Fig <ref>, where a transit-like feature is observed almost concomitantly to the predicted transit of the planet. However, the fitted transit values are not compatible with those of the literature. The modeled transit has indeed a duration T_14,fit = 1.12 ± 0.08 h compared to the expected T_14,prior = 2.00^+0.09_-0.11 h, as well as a depth of D_fit = 557 ± 160 ppm to D_prior = 252^+23_-21 ppm. Because of this mismatch on the duration and on the depth, we do not consider it to be a transit of planet b. This short event could be the residuals of some systematics during the detrending. §.§ Planet c We expected to catch the planet c transit in visit 201, as shown in Fig. <ref>, together with those of planets e and b. With an estimated depth of ∼500 ppm, planet c should be detectable with . Unfortunately, this visit was strongly affected by gaps and the only transit-like feature appears outside the expected ranges for the three planets (Fig. <ref>). Given the uncertainties in planets c and e ephemeris, however, it remains close to these ranges. We therefore explore the possibility that the transits of the two planets overlap, the event been masked by the numerous interruptions in the light curve that would hide clear ingress and egress of transits. 
To that end, we performed a one-planet (planet e), and two-planet (planets e and c) transit-fitting modeling successively, using the code <cit.>. The advantage of , is that it allows the simultaneous fit of two planetary transits and also a straightforward comparison of the results based on Bayesian criteria. For these fits (apart from the transit mid-time), we fixed all the parameters to the central values of planets c and e priors reported in Table <ref>. For the transit mid-time of planet e we used a normal distribution defined by BJD-2 450 000 =9078.98±0.3. For the mid-time of planet c we defined an uniform distribution between BJD-2 450 000 = 9078.905 – 9079.065, which was the wider distribution possible that secured the convergence of the fit. In Fig. <ref>, we present both solutions and the corresponding mid-times are given in Table <ref>. As expected, the mid-times of planet e obtained from both models are consistent. The fitted mid-time of planet c ranges along the first half of the transit of planet e, namely, between the ephemeris' predicted time (as marked by the red vertical region in right panel of Fig. <ref>) up to the middle of planet e transit. To determine which is the best model, provides the Bayesian evidences (ln(Z)) of each fitted scenarii (see Table <ref>). In this comparison, the planet-e only model yields a log-evidence of ln(Z)=1208.59 ± 0.46, while that of the planets-e and c model is of ln(Z)=1206.37 ± 0.44, leading to a Δln(Z)=2.2 ± 0.7. The limit between weak and moderate evidence of preference being ln(Z)=2, we can therefore not statistically prefer one model over the other <cit.>. This situation is discussed further when we compute the new ephemerides in in Sect. <ref>. §.§ Planet d Transits of planet d are present in visits 101, 102, 103, and 501. The ephemeris for this planet from A22 predicts the transit roughly 3.5 hours too early in each case, but the observations still manage to capture four complete transits (see Fig. <ref>). Three of the visits catch the ingress of the transit, while one shows a partial egress. While we were unable to catch both features on a single visit, we are able to derive accurate transit mid-times to use in our subsequent TTV analysis. Notably, planet d is also predicted in visit 303, but not detected. This is likely due to the short duration of the visit, which prevented us from catching the transit. §.§ Planets e This planet had the most accurate predicted transit times among all the planets in the system, with all its transits occurring within the estimated error range. Although visit 201 may have been affected by the potential transits of planets b and c, the other two visits are not anticipated to show any additional transits. However, due primarily to insufficient coverage during the ingress and egress phases of the transit, we faced limitations in the precision of the obtained transit mid-times (see Fig. <ref>). §.§ Planet f Planet f appears most frequently in our dataset, being observed during visits 301, 302, 303, 304, 305, and 501. The first five visits were dedicated to observing this planet, while visit 501 targeted planet g. Despite the number of visits dedicated to observing planet f, the ephemeris used to prepare these observations deviated significantly as illustrated in Fig. <ref>. This results in predicted transit times later than observed. Consequently, we captured only one egress of the transit during visit 501. 
This discrepancy led to mid-transit times with deviations from a constant orbital period of up to ∼25 minutes, with an average error of ∼11 minutes, which allows for a TTV analysis of the system to be carried out. §.§ Planet g When planning observations with , only two transits of the planet with a 42-day orbit had been observed with <cit.>, and an additional transit was captured with (HU21). However, the projected uncertainties of the transit times were significantly larger than 1.6 hours based on these observations. To address this, the visit for planet g was extended beyond the usual duration of visits, allowing us to also capture transits of planets d and f (see Sects. <ref> and <ref>). After the transit modeling, we were able to determine the mid-transit time of planet g with a precision of approximately ∼12.25 minutes at epoch 32. This extends the observation coverage of the planet beyond its last observation at epoch 10 by . Such comprehensive coverage is essential for estimating TTVs in the system, therefore, we have included this transit in our analysis. § UPDATED EPHEMERIDES The new ephemerides of the planetary transits of this system make use of the , , the telescope, and the newly acquired data, including the transits from planets d, e, f, and g. The data and their respective mid-transit times were first used in <cit.> and <cit.>, but unfortunately, the individual transit mid-times were not reported by the authors. Thus, we reproduced the procedure used in both studies: the individual transit mid-times were measured by fixing the model transit parameters to the best-fit values given in previous studies <cit.> and allowing only the mid-transit time to vary. The uncertainties were calculated by computing the residuals from the best-fit model and performing a bootstrap analysis using the closest 100 timestamps, re-fitting the mid-transit time at each timestamp permutation. All the mid-times are listed in Table <ref> and <ref> for each planet. For the rest of the data ( and ), the individual transit mid-times were provided directly by the respective authors, therefore, there was no need for a reanalysis. With all the available transit times of each planet, we built the respective observed minus calculated (O-C) diagrams of the transits (Figs. <ref> to <ref>), where we compared the observed transit times with our updated ephemeris as described below. For each planet, the updated values of the average period and reference epoch (T_0) were obtained by a weighted linear fit of the transit times. The updated values of the average periods and T_0 are given in Table <ref>. The time residuals are also listed in Table <ref> and <ref>. The reported errors of the time deviations consider the uncertainty of the updated ephemeris and the respective error of the observed mid-time. To improve the ephemeris of planet c, we utilized the transit times from and three reference epochs (T_0) provided in <cit.>, HU21 and A22 designated as epoch 0. All transit times are listed in Table <ref>. If the mid-time of the potential transit in visit 201 is excluded, fitting the transit times with a linear function yields an updated ephemeris that diverges notably from those of HU21 and A22. This discrepancy is illustrated in Fig. <ref>, which presents the O-C diagram of planet c's transit times using our updated ephemeris. Our refined ephemeris aligns with the scenario of a non-detection during visit 201, as the planet was likely to have transited shortly before or at the very beginning of the light curve. 
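Before turning to the alternative scenario for planet c, we note that the weighted linear fit of transit times used to update each ephemeris can be sketched in a few lines of numpy; the mid-times below are synthetic placeholders (loosely based on an 8.26-day period), not the measured values listed in the tables:

import numpy as np

def fit_linear_ephemeris(epochs, t_mid, t_err):
    """Weighted least-squares fit of T_C(E) = T0 + P * E.

    epochs : integer transit epochs
    t_mid  : measured mid-transit times (e.g. BJD - 2450000)
    t_err  : 1-sigma uncertainties on the mid-times
    Returns (T0, P) and their uncertainties from the covariance matrix.
    """
    epochs = np.asarray(epochs, dtype=float)
    w = 1.0 / np.asarray(t_err) ** 2
    A = np.vstack([np.ones_like(epochs), epochs]).T
    # Weighted normal equations: (A^T W A) x = A^T W y
    cov = np.linalg.inv(A.T @ (A * w[:, None]))
    t0, period = cov @ (A.T @ (w * np.asarray(t_mid)))
    t0_err, period_err = np.sqrt(np.diag(cov))
    return (t0, period), (t0_err, period_err)

# Synthetic example (placeholder numbers, not the K2-138 measurements):
epochs = np.array([0, 120, 240, 250])
t_err  = np.array([0.002, 0.003, 0.002, 0.002])
t_mid  = 7773.32 + 8.2615 * epochs + np.random.normal(0.0, t_err)
(t0, per), (t0_e, per_e) = fit_linear_ephemeris(epochs, t_mid, t_err)
oc_minutes = (t_mid - (t0 + per * epochs)) * 24.0 * 60.0   # O-C residuals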
Conversely, as previously discussed and as shown in Fig. <ref>, assuming the detection of the transit in visit 201 results in a fitted time consistent with the predictions of HU21 ephemeris. Given the inability to definitively rule out any of these scenarios with our current data, any future transit monitoring of this planet ought to consider the large uncertainties in transit time predictions. The ephemerides of planets d and e were refined using all available previous data in addition to observations from , as detailed in Table <ref> and  <ref> respectively. For comparison, we also show in their updated O-C diagrams (Figs. <ref> and <ref>) the time projections from previous transit ephemerides. For planet d, our times do not agree with the linear ephemeris defined by considering only the and B22 transits. When incorporating the data, the B22 transit times present a significant deviation of -158±14 min and -48±12 min for planets d and e, respectively, with respect to our updated linear ephemeris. This result is discussed in the next Sect. <ref>. The six new visits for planet f effectively double its number of observations. Despite the incomplete coverage during these visits (as discussed in Sect. <ref>), we were able to improve the ephemeris of the planet. As illustrated in Fig. <ref>, the ephemeris from A22 exhibited drift, resulting in predictions that were consistently too early by 121±76 to 164±77 minutes. With only three previous observations, the ephemeris of planet g were not very well constrained with projected uncertainties of hours, as illustrated in Fig. <ref>. The new observation allows us to correct the ephemeris: it accounts for a deviation of around 45 minutes with respect to HU21 ephemeris for the 's epoch and also constrains the uncertainties of the projected transit times down to only few minutes (see Fig. <ref> and Table <ref>). § TTV ANALYSIS Despite the fact that the number of transits observed are not enough to perform a full TTV analysis of the system and thereby enable a probe of the masses and/or eccentricities distribution of planets using photometry alone <cit.>, we conducted a first-order TTV analysis. To that end, we used the [<https://github.com/ericagol/TTVFaster/>] code <cit.> for the TTV simulations combined with the nested sampling method <cit.>. This code implements analytical formulas instead of numerical integrations to model the TTVs of multi-planetary systems, allowing for more efficient transit timing modeling of the planets in packed systems. This approach helps us constrain the expected TTVs and validate our updated orbital periods as reported in Table <ref>. Such modeling is particularly crucial, for example, in verifying the significant TTVs observed in planet d. Our approach is well suited when exploring non-Gaussian parameters spaces. This is expected in our case, as we have a large number of fitted parameter compared to the number of transit times we have. Indeed requires seven input parameters per planet. For this we use the implementation of the [<https://johannesbuchner.github.io/UltraNest/>] package <cit.>, in particular, its method, which is more efficient for high-dimensional spaces. We analyzed the TTVs of the planets based on the timing information detailed in Sects. <ref> and <ref> and reported in Tables <ref> and <ref>, combined with the masses values from A22. In addition, we used our updated planetary parameters as priors. 
With respect to the parameters we lacked updates for, such as those of planets b and c, we utilized the values reported in A22. Specifically, for planet b, we relied on the three time stamps provided by A22, as we have no significant detection of any transit of this planet; thus, we were not able to update its ephemeris. Therefore, little information from this planet can be used to constrain the planet-system parameters. The defined priors distribution for each parameter, as well as the posteriors, are detailed in Table <ref>. We report in this table the distribution of the relevant fitted planetary parameters: mean orbital period (P), mass (M), and eccentricity (e). In addition, the RMS and the larger amplitude of the best-fit TTV model are also reported. The time residuals of the planetary system simulation are presented in the O–C diagrams of Fig. <ref>, along with the best-fit TTVs model. The posterior values of the fitted parameters are reported in Table <ref>. As also reported in the table, the models predict mean TTV amplitudes that are lower than 10 minutes for planets b, c, e, f, and g, as depicted in Fig. <ref>. For planet d, however, the temporal deviations are as large as 60 minutes, with a modeled TTVs RMS of 25 minutes. The fitted planetary parameters of the modeled TTVs are fully consistent with the defined priors, particularly the mean orbital period, eccentricities, and masses. Planet d is the only notorious exception, where its fitted mass deviates to a small M_d∼0.8, and a relatively high eccentricity of e_d=0.16±0.04. These values were necessary to fit the large TTVs observed with . It is clear that the transit times can not be described purely with a strict constant period. Notably for planet d, as reported in Table <ref>, B22's transits deviate -158±14 min and -48±12 min from the updated ephemeris, while three out of the four transits deviate by more than 10 minutes (but with uncertainties that are on the same order). Assuming these deviations are a consequence of gravitational interactions between the planetary bodies, we can predict that the actual mean orbital period of planet d should therefore be between the B22 value and our updated estimate. This detection of large TTV indicates the significant dynamical interaction of planet d with the other planets in the system, as demonstrated by our simulations. New observations would be required to constrain the presence of these TTVs, which are expected for planets located in MMR <cit.>, as is the case of planet d. These observations would also be required to properly sample the super-period of planet d, and further constrain its eccentricity. For planet f, the time residuals shown in Fig. <ref> seem to suggest a hint of TTVs. In fact, the transits show a more scattered distribution around the new linear ephemeris than the points, though the time uncertainties prevent us to claim this is a significant TTV detection. Moreover, the TTV simulations only predict small periodic oscillations with amplitudes of 7.4 ± 3.2 minutes around a constant period. Thus, more precise measurements are required to confirm the presence of such deviations. No deviations were predicted from the simulations for planet g as it is located further out in the system; therefore, it is subject to only weak gravitational influence from the innermost planets. § CONCLUSION The observations obtained during this campaign represent the first large series of measurement on the system since the initial data reported three-and-a-half years prior. 
The newly computed ephemerides for the system represent a leap forward in accuracy. Particularly for planets d, f, and g, the refined values are crucial for targeting future transit events. This is even more valuable as the accumulated deviations for these planets had already reached up to 200 minutes. For example, in the case of planet f, we examined the deviation between predictions by the end of 2024. Using the ephemerides from A22, we predict T_C=2460673.05±0.12. However, with the new average period derived from our work, we obtain T_C=2460672.727±0.015. The new ephemeris yield a T_C 8 hours prior to that of A22. We also want to emphasize the absence of transits of planet b in light curves. Indeed, despite the numerous transit events predicted within our observing windows, no transits were confirmed in the dataset. The non-detection of the transits of planet b in could result from a large accumulated error in the ephemeris or, for example, from a dynamical mechanism such as the precession of the planet. Therefore, a dedicated photometric follow-up is required to resolve this debate. Planets in the system typically display TTVs on the order of 10 minutes, except for planet d, which shows TTV amplitudes of up to 60 minutes compared to the mean orbital period. However, this is only possible if planet d is of a mass of ∼0.8, with a fairly high eccentricity of ∼0.16. As planet d is in a MMR with its neigbors, large TTVs can be expected. Therefore, a future analysis based on better sampled TTVs will provide a more detailed view of the architecture of this compact system and its stability. HV and MD acknowledge funding from the Institut Universitaire de France (IUF) that made this work possible. SS acknowledges support from the Programme National de Planétologie (PNP) of CNRS-INSU. SH acknowledges CNES funding through the grant 837319. We thank the referee for their comments that improved the quality of the paper. aa § SYSTEM PARAMETERS This appendix compiles the stellar and planetary parameters used in this study. Table <ref> shows the stellar parameters of host star and Table <ref> lists the parameters of the planets in the system. § MULTI-TRANSIT FIGURES Here, we present all the light curves used during multi-transit analyses. All the figures show the best-fit models obtained with . For each figure the detrented data are shown as gray dots, with their 15 minute bins as black circles. The transit model is shown as a black-gray line. The transit are plotted in chronological order from bottom to top. § O-C DIAGRAM
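As a simple numerical check of the ephemeris comparison quoted in the conclusion, the ∼8 hour offset for planet f follows directly from the difference of the two predicted mid-transit times; the helper below propagates a linear ephemeris to a future epoch, and the T_0, period, and epoch used in the illustrative call are placeholders, not values from this work:

import numpy as np

def predict_transit(t0, t0_err, period, period_err, epoch):
    """Propagate a linear ephemeris T_C = T0 + N * P to a future epoch."""
    t_c = t0 + epoch * period
    t_c_err = np.sqrt(t0_err**2 + (epoch * period_err)**2)
    return t_c, t_c_err

# Illustrative call with placeholder ephemeris values (not from the paper):
t_c, t_c_err = predict_transit(t0=2457740.0, t0_err=0.001,
                               period=12.7576, period_err=0.0001, epoch=230)

# Offset between the two planet-f predictions quoted in the conclusion:
t_c_a22 = 2460673.05    # prediction with the A22 ephemeris (BJD)
t_c_new = 2460672.727   # prediction with the updated ephemeris (BJD)
offset_hours = (t_c_a22 - t_c_new) * 24.0   # ~7.8 h, i.e. ~8 hours earlier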
http://arxiv.org/abs/2406.18742v1
20240626201649
3D Feature Distillation with Object-Centric Priors
[ "Georgios Tziafas", "Yucheng Xu", "Zhibin Li", "Hamidreza Kasaei" ]
cs.CV
[ "cs.CV", "cs.RO" ]
Quantum Resources Required for Binding Affinity Calculations of Amyloid beta Adam Holmes July 1, 2024 ============================================================================ § ABSTRACT Grounding natural language to the physical world is a ubiquitous topic with a wide range of applications in computer vision and robotics. Recently, 2D vision-language models such as CLIP have been widely popularized, due to their impressive capabilities for open-vocabulary grounding in 2D images. Recent works aim to elevate 2D CLIP features to 3D via feature distillation, but either learn neural fields that are scene-specific and hence lack generalization, or focus on indoor room scan data that require access to multiple camera views, which is not practical in robot manipulation scenarios. Additionally, related methods typically fuse features at pixel-level and assume that all camera views are equally informative. In this work, we show that this approach leads to sub-optimal 3D features, both in terms of grounding accuracy, as well as segmentation crispness. To alleviate this, we propose a multi-view feature fusion strategy that employs object-centric priors to eliminate uninformative views based on semantic information, and fuse features at object-level via instance segmentation masks. To distill our object-centric 3D features, we generate a large-scale synthetic multi-view dataset of cluttered tabletop scenes, spawning 15k scenes from over 3300 unique object instances, which we make publicly available. We show that our method reconstructs 3D CLIP features with improved grounding capacity and spatial consistency, while doing so from single-view RGB-D, thus departing from the assumption of multiple camera views at test time. Finally, we show that our approach can generalize to novel tabletop domains and be re-purposed for 3D instance segmentation without fine-tuning, and demonstrate its utility for language-guided robotic grasping in clutter. § INTRODUCTION Language grounding in 3D environments plays a crucial role in realizing intelligent systems that can interact naturally with the physical world. In the robotics field, being able to precisely segment desired objects in 3D based on open language queries (object semantics, visual attributes, affordances, etc.) can serve as a powerful proxy for enabling open-ended robot manipulation. As a result, research focus on 3D segmentation methods has seen growth in recent years <cit.>. However, related methods fall in the closed-vocabulary regime, where only a fixed list of classes can be used as queries. Inspired by the success of open-vocabulary 2D methods  <cit.>, recent efforts elevate 2D representations from pretrained models  <cit.> to 3D via distillation pipelines  <cit.>. However, we identify certain limitations of existing distillation approaches. On the one hand, field-based methods  <cit.> offer continuous 3D feature fields, but require to be trained online in specific scenes and hence cannot generalize to novel object instances and compositions, they require a few minutes to train, and need to collect multiple camera views before training, all of which hinder their real-time applicability. On the other hand, original 3D feature distillation methods and follow up work <cit.> use room scan datasets  <cit.> to learn point-cloud encoders, hence being applicable in novel scenes with open vocabularies. However, such approaches assume that 2D features from all views are equally informative, which is not the case in highly cluttered indoors scenes (e.g. 
due to partial occlusions from some view), thus leading in noisy 3D features. 2D features are also usually fused point-wise from ViT patches  <cit.> or multi-scale crops <cit.>, therefore leading to the so called “patchyness” issue <cit.> (see Fig. <ref>). The latter issue is especially impactful in robot manipulation, where precise 3D segmentation is vital for specifying robust actuation goals. To address such limitations, we revisit 2D → 3D point-based feature distillation but revise the multi-view feature fusion strategy to enhance the quality of the target 3D features. In particular, we inject both semantic and spatial object-centric priors into the fusion strategy, in three ways: (i) We obtain object-level 2D features by isolating object instances in each camera view from their 2D segmentation masks, (ii) we fuse features only at corresponding 3D object regions using 3D segmentation masks, (iii) we leverage object-level semantic information to devise an informativeness metric, which is used to weight the contribution of views and eliminate uninformative ones. Extensive ablation studies demonstrate the advantages of object-centric fusion compared to vanilla approaches. To train our method, we require a large-scale cluttered indoors dataset with many views per scene, which is currently not existent. To that end, we build (Multi-View Tabletop Objects Dataset), consisting of ∼ 15k Blender scenes from more than 3.3k unique 3D object models, for which we provide 73 views per scene with 360 ^∘ coverage, further equipped with 2D/3D segmentations, 6-DoF grasps and textual object-level annotations. We use to distill our object-centric 3D CLIP  <cit.> features into a 3D representation, which we call (Distilled Representations with Object-centric Priors from CLIP). Our 3D encoder operates in partial point-clouds from a single RGB-D view, thus departing from the requirement of multiple camera images at test time, while offering real-time inference capabilities. We demonstrate that our learned 3D features achieve high grounding performance and segmentation crispness, while significantly outperforming previous 2D open-vocabulary approaches in the single-view setting. Further, we show that they can be leveraged zero-shot in novel tabletop domains, as well as be used out-of-the-box for 3D instance segmentation. In summary, our contributions are fourfold: (i) we release , a large-scale synthetic dataset of household objects in cluttered tabletop scenarios, featuring dense multi-view coverage and semantic/mask/grasp annotations, (ii) we identify limitations of current multi-view feature fusion approaches and illustrate how to overcome them by leveraging object-centric priors, (iii) we release , a 3D model that reconstructs view-independent 3D CLIP features from single-view, and (iv) we conduct extensive ablation studies, comparative experiments and robot demonstrations to showcase the effectiveness of the proposed method in terms of 3D segmentation performance, generalization to novel domains and tasks, and applicability in robot manipulation scenarios. § MULTI-VIEW TABLETOP OBJECTS DATASET r0.6 ! 23emDataset 23emLayout Multi 23emClutter Vision Ref.Expr. Grasp Num.Obj. Num. Num. Obj.-lvl View Data Annot. Annot. Categories Scenes Expr. 
Semantics ScanNet <cit.> indoor - RGB-D,3D 17 800 - S3DIS <cit.> indoor - RGB-D,3D 13 6 - Replica <cit.> indoor - RGB-D,3D 88 - - STPLS3D <cit.> outdoor - 3D 12 18 - ScanRefer <cit.> indoor RGB-D,3D 2D/3D mask 18 800 51.5k ReferIt-3D <cit.> indoor RGB-D,3D 2D/3D mask 18 707 125.5k ReferIt-RGBD <cit.> indoor RGB-D 2D box - 7.6k 38.4k SunSpot <cit.> indoor RGB-D 2D box 38 1.9k 7.0k GraspNet <cit.> tabletop 3D 6-DoF 88 190 - REGRAD <cit.> tabletop RGB-D,3D 6-DoF 55 47k - OCID-VLG <cit.> tabletop RGB-D,3D 2D mask 4-DoF 31 1.7k 89.6k template Grasp-Anything <cit.> tabletop RGB 2D mask 4-DoF 236 1M - open MV-TOD (ours) tabletop RGB-D,3D 3D mask 6-DoF 149 15k 671.2k open Comparisons between MV-TOD and existing 3D datasets. Existing 3D datasets mainly focus on indoor scenes in room layouts <cit.> and related language annotations typically cover closed-set object categories (e.g. furniture) and spatial relations <cit.>, which are not practical for robot manipulation tasks, where cluttered tabletop scenarios and open-vocabulary language are of key importance. On the other hand, recent grasp-related efforts collect cluttered tabletop scenes, but either lack language annotations <cit.> or connect cluttered scenes with language but only for 4-DoF grasps with RGB data  <cit.>. Further, most of such datasets lack dense multi-view scene coverage, granting them non applicable for 2D → 3D feature distillation, where we require multiple images from each scene to extract 2D features with a foundation model. To cover this gap, we propose , a large-scale synthetic dataset with cluttered tabletop scenes featuring dense multi-view coverage and rich language annotations at the object level. We generate a total of 15k scenes in Blender <cit.>, comprising of 3379 unique object models, 99 collected by us and the rest filtered from ShapeNet-Sem model set  <cit.>. The dataset features 149 object categories, each of which includes multiple instances that vary in fine-grained details. For each object instance, we leverage GPT-4-Vision <cit.> to generate open-set descriptions from various perspectives, including category, color, material, state, utility, affordance, etc, which spawn over 670k unique referring instance queries (see Fig. <ref>-right and Appendix A). For each scene, we provide 2D/3D segmentation masks, 6D object poses, as well as a set of semantic concepts for each appearing object instance. Additionally, we include 6-DoF grasp annotations for each object model, originating from the ACRONYM dataset  <cit.>. To the best of our knowledge, is the first dataset to combine 3D cluttered tabletop scenes with open-vocabulary language and 6-DoF grasp annotations, which we hope will accelerate future research. § METHODOLOGY Our goal is to distill multi-view 2D CLIP features into a 3D representation, while employing an object-centric feature fusion strategy to ensure high quality 3D features. Our overall pipeline is illustrated in Fig <ref>. We first introduce traditional multi-view feature fusion (Sec. <ref>), present our variant with object-centric priors (Sec. <ref>) and discuss our feature distillation method (Sec. <ref>). §.§ Multi-view 2D Feature Fusion We assume access to a dataset of 3D scenes, where each scene is represented through a set of 𝒱 posed RGB-D views of size H × W: { I_v ∈ℝ^H × W × 3, D_v ∈ℝ^H × W, T_v ∈ℝ^4 × 4} _v=1^𝒱, where T_v the camera pose from view v. 
For each scene, we first obtain the full point-cloud P ∈ℝ^M × 3 along with a 2D-3D correspondence map ℳ_v ∈ℝ^M × 2, mapping each point 𝐱_i, i=1,…,M to a pixel location 𝐮_v,i=(u_x,u_y)^T in image view I_v. RGB images are fed to a pretrained image model f^2D: ℝ^H × W × 3→ℝ^H × W × C <cit.> to obtain pixel-level 2D features of size C: Z^2D_v = f^2D(I_v), which are then back-projected to 3D points via: z^2D_v,i = f^2D ( I_v (u_v,i) ) = f^2D ( I_v (ℳ_v(𝐱_i)) ) To fuse 2D features across views Z^3D∈ℝ^M × C, previous works  <cit.> use average pooling: 𝐳_i^3D = 1/𝒱∑_v=1^𝒱𝐳_v,i^2D (see Appendix B for a comprehensive overview). In essence, this method assumes that all views are equally informative for each point, as long as the point is visible from that view. We suggest that naively average pooling 2D features for each point leads to sub-optimal 3D features, as noisy, uninformative views contribute equally, therefore “polluting" the overall representation. We then propose to instead use a generalized version relying on weighted average: 𝐳^3D_i = ∑_v=1^𝒱𝐳^2D_v,i·ω_v,i/∑_v=1^𝒱ω_v,i where ω_v,i∈ℝ is a scalar weight that represents the informativeness of view v for point i. In the next subsection, we describe how to use text data to dynamically compute an informativeness weight for each view based on semantic object-level semantic information. Additionally, vanilla point-wise fusion with pixel-level 2D features leads to non-crisp segmentations and fuzzy object boundaries. To resolve this, we propose to also leverage dense spatial information, i.e., instance-wise 2D/3D segmentation masks, which are used for both: (a) obtaining robust object-level 2D CLIP features from each view, and (b) fusing features only at the points corresponding to the 3D object region. §.§ Employing Object-Centric Priors Let { S_v^2D∈{0,1}^N × H × W}_v=1^𝒱 be 2D instance-wise segmentation masks for each scene, where N the total number of scene objects. We aggregate the 2D masks to obtain S^3D∈{0,1}^M × N, such that for each point i we can retrieve the corresponding object instance n_i=_ν S^3D_i,ν. Semantic informativeness metric Let 𝒬 = { Q_k }_k=1^𝒦, Q_k ∈ℝ^N_k × C be a set of object-specific textual prompts, where 𝒦 the number of dataset object instances and N_k the number of prompts for object k. We use CLIP's text encoder to embed the textual prompts in ℝ^C and average them to obtain an object-specific prompt q_k = 1 / N_k ·∑_j=1^N_k Q_k,j. For each scene, we map each object instance n ∈ [1,N] to its positive prompt q_n^+, as well as a set Q^-_n ≐𝒬 - {q_n^+} of negative prompts corresponding to all other instances. We define our semantic informativeness metric as: G_v,i=(z^2D_v,i, q^+_n_i) - _q∼ Q^-_n_i(z^2D_v,i, q) Intuitively, we want a 2D feature from view v to contribute to the overall 3D feature of point i according to how much its similarity with the correct object instance is higher than the maximum similarity to any of the negative object instances, hence offering a proxy for semantic informativeness. We clip this weight to 0 to eliminate views that don't satisfy the condition G_v,i≥ 0. Plugging in our metric in equation (2) already provides improvements over vanilla average pooling (see Sec. <ref>), however, does not deal with 3D spatial consistency, for which we employ our spatial priors below. 
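A minimal sketch of Eqs. (2)-(3) is given below, assuming PyTorch; the tensor shapes, the use of normalized dot products for the cosine similarities, and the toy dimensions are our assumptions, chosen only to make the weighted fusion concrete:

import torch
import torch.nn.functional as F

def semantic_informativeness(feat_2d, q_pos, q_neg):
    """Eq. (3): G_{v,i} = cos(z, q+) - max over negatives of cos(z, q-), clipped at 0.

    feat_2d : (V, M, C) per-view, per-point 2D features
    q_pos   : (M, C)    positive CLIP text prompt for each point's object
    q_neg   : (M, K, C) negative prompts (all other objects) per point
    """
    z = F.normalize(feat_2d, dim=-1)
    pos = (z * F.normalize(q_pos, dim=-1)).sum(-1)                  # (V, M)
    neg = torch.einsum('vmc,mkc->vmk', z, F.normalize(q_neg, dim=-1))
    g = pos - neg.max(dim=-1).values                                # (V, M)
    return g.clamp(min=0.0)                                         # eliminate uninformative views

def fuse_views(feat_2d, weights, eps=1e-8):
    """Eq. (2): weighted average of per-view features over the view axis."""
    w = weights.unsqueeze(-1)                                       # (V, M, 1)
    return (feat_2d * w).sum(0) / (w.sum(0) + eps)                  # (M, C)

# Toy shapes: 4 views, 1000 points, CLIP dim 512, 5 negative prompts per point.
V, M, C, K = 4, 1000, 512, 5
feat_2d = torch.randn(V, M, C)
q_pos, q_neg = torch.randn(M, C), torch.randn(M, K, C)
fused = fuse_views(feat_2d, semantic_informativeness(feat_2d, q_pos, q_neg))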
Object-level 2D CLIP features For obtaining object-level 2D CLIP features, we isolate the pixels for each object n from each view v from S^2D_v,n and crop a bounding box around the mask from I_v: z^2D_v,n=f^2D_cls ( (I_v, S^2D_v,n) ) (see Appendix C for ablations in CLIP visual prompts). Here we use f^2D_cls: ℝ^h_n × w_n × 3→ℝ^C, i.e., only the feature of CLIP's ViT encoder, to represent an object crop of size h_n × w_n. We can now define our metric from equation (3) also at object-level: G_v,n=(z^2D_v,n, q^+_n) - _q∼ Q^-_n(z^2D_v,n, q) where G_v,n∈ℝ^𝒱× N now represents the semantic informativeness of view v for object instance n. Fusing object-wise features A 3D object-level feature can be obtained by fusing 2D object-level features across views similar to equation (2): 𝐳^3D_n = ∑_v=1^𝒱𝐳^2D_v,n·ω_v,n/∑_v=1^𝒱ω_v,n = ∑_v=1^𝒱𝐳^2D_v,n·Λ_v,n· G_v,n/∑_v=1^𝒱Λ_v,n· G_v,n where each view is weighted by its semantic informativeness metric G_v,n, as well as optionally a visibility metric Λ_v,n=∑_^S^2D_v,n that measures the number of pixels from n-th object's mask that are visible from view v  <cit.>. We finally reconstruct the full feature-cloud Z^3D∈ℝ^M × C by equating each point's feature to its corresponding 3D object-level one via: 𝐳^3D_i = 𝐳^3D_n_i, n_i=_ν S^3D_i,ν. §.§ View-Independent Feature Distillation Even though the above feature-cloud Z^3D could be directly used for open-vocabulary grounding in 3D, its construction is computationally intensive and requires a lot of expensive resources, such as access to multiple camera views, view-aligned 2D instance segmentation masks, as well as a set of text descriptions to compute informativeness metrics. Such utilities are rarely available in open-ended scenarios, especially in robotic applications, where usually only single-view RGB-D images from sensors mounted on the robot are provided. To tackle this, we wish to distill all the above knowledge from the feature-cloud Z^3D into a 3D encoder that receives only a partial point-cloud from single-view posed RGB-D. Hence, the only assumption that we make during inference is access to camera intrinsic and extrinsic parameters, which is a mild requirement in most robotic works. In particular, given a partial colored point-cloud from view v: P_v∈ℝ^M_v× 6 (3D coordinates plus colors), we train a 3D encoder ℰ_θ: ℝ^M_v× 6→ℝ^M_v× C such that ℰ_θ(P_v) = Z^3D. Notice that the distillation target Z^3D is independent of view v. Following  <cit.> we use cosine distance loss: ℒ(θ) = 1 - (ℰ_θ(P_v), Z^3D) See Appendix B.2 for training implementation details. With such a setup, we can obtain 3D features that: (i) are co-embedded in CLIP text space, so they can be leveraged for 3D segmentation tasks from open-vocabulary queries via computing cosine similarities between CLIP text embeddings Q and the predicted feature cloud: Ŝ_i = _i (𝐳̂_i^3D, Q), (ii) are ensured to be optimally informative per object, due to the usage of the semantic informativeness metric to compute Z^3D, (iii) maintain 3D spatial consistency in object boundaries, due to performing object-wise instead of point-wise fusion when computing Z^3D, and (iv) are encouraged to be view-independent, as the same features Z^3D are utilized as distillation targets regardless of the input view v. Importantly, no labels, prompts, or segmentation masks are needed at test-time to reproduce the fused feature-cloud, while obtaining it amounts to a single forward pass of our 3D encoder, hence offering real-time performance. 
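The distillation objective and the open-vocabulary grounding step can be sketched as follows, again assuming PyTorch; the random tensors stand in for the outputs of the 3D encoder and for CLIP text embeddings, and are not meant to reflect the actual training pipeline:

import torch
import torch.nn.functional as F

def distillation_loss(pred_feats, target_feats):
    """L(theta) = 1 - cos(E_theta(P_v), Z^3D), averaged over points.

    pred_feats   : (M_v, C) features predicted from the partial point-cloud
    target_feats : (M_v, C) fused multi-view CLIP features (distillation targets)
    """
    cos = F.cosine_similarity(pred_feats, target_feats, dim=-1)
    return (1.0 - cos).mean()

@torch.no_grad()
def ground_query(pred_feats, text_embeds):
    """Assign each point the text query with the highest cosine similarity.

    pred_feats  : (M_v, C) predicted 3D features (in CLIP space)
    text_embeds : (Q, C)   CLIP text embeddings of the open-vocabulary queries
    Returns per-point query indices, usable as a segmentation proposal.
    """
    p = F.normalize(pred_feats, dim=-1)
    t = F.normalize(text_embeds, dim=-1)
    sim = p @ t.T                      # (M_v, Q)
    return sim.argmax(dim=-1)

# Toy usage with random tensors standing in for encoder outputs and CLIP text:
M_v, C, Q = 2048, 512, 3
pred = torch.randn(M_v, C, requires_grad=True)
target = torch.randn(M_v, C)
loss = distillation_loss(pred, target)
labels = ground_query(pred.detach(), torch.randn(Q, C))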
§ EXPERIMENTS In our experiments, we explore the following questions: (i) Sec. <ref>: What are the contributions of our proposed object-centric priors for multi-view feature fusion? Does the dense number of views of our proposed dataset also contribute? (ii) Sec. <ref>: How does our method compare to previous open-vocabulary approaches for 3D semantic and referring segmentation tasks? Are the learned features robust to open-ended language? (iii) Sec. <ref>: What are the generalization capabilities of our learned 3D representation in novel domains and novel tasks (3D instance segmentation)? (iv) Sec. <ref>: Can we leverage our 3D learned representation for language-guided 6-DoF robotic grasping? §.§ Multi-view Feature Fusion Ablation Studies [8]r0.45 ! 22emFusion 21em𝐟^2𝐃 21.5emΛ_𝐯,𝐢 22em𝐆_𝐯,𝐢 4cRef.Segm (%) mIoU Pr@25 Pr@50 Pr@75 point patch 44.2 59.9 41.4 27.0 point patch 37.3 55.4 33.7 16.7 point patch 57.0 74.1 59.5 40.9 point patch 57.4 77.0 60.9 39.9 obj obj 65.6 67.0 65.4 64.1 obj obj 67.3 68.7 67.1 65.8 obj obj 83.1 83.9 83.1 82.4 obj obj 80.9 83.1 80.2 79.7 Multi-view feature fusion ablation study for 3D referring segmentation in MV-TOD. To evaluate the contributions of our proposed object-centric priors, we conduct ablation studies on the multi-view feature fusion pipeline, where we compare 3D referring segmentation results of obtained 3D features in held-out scenes of . We highlight that here we aim to establish a performance upper bound that the feature fusion method can provide for distillation, and not the distilled features themselves. r0.3 < g r a p h i c s > Referring segmentation accuracy (Pr@25 (%)) vs. number of utilized views. We ablate: (i) patch-wise vs. object-wise fusion, (ii) MaskCLIP <cit.> patch-level vs. CLIP <cit.> masked crop features, (iii) inclusion of visibility (Λ_v,i) and semantic informativeness (G_v,i) metrics for view selection. Results in Table <ref>. Effect of object-centric priors We observe that all components contribute positively to the quality of the 3D features. Our proposed G_v,i metric boosts mIoU across both point- and object-wise fusion (57.0 % vs. 44.2 % and 83.1 % vs. 65.6 % respectively). Further, we observe that the usage of spatial priors for object-wise fusion and object-level features leads to both higher segmentation crispness (25.7 % mIoU delta), as well as higher grounding precision (42.5% Pr@75 delta). See qualitative comparisons in Appendix D. Effect of the number of views We ablate the 3D referring segmentation performance based on the number of input views in Fig. <ref>, where novel viewpoints are added incrementally. We observe that in both setups (point- and object-wise) fusing features from more views leads to improvements, with a small plateauing behavior around 40 views. We believe this is an encouraging result for leveraging dense multi-view coverage in feature distillation pipelines, as we propose with the introduction of . §.§ Open-Vocabulary 3D Segmentation Results [12]r0.65 ! 2*Method 2*#views 4cRef.Segm. 
(%) 2cSem.Segm (%) (l2ptr14pt)3-6 (l2ptr6pt)7-8 mIoU Pr@25 Pr@50 Pr@75 mIoU mAcc OpenScene†  <cit.> 73 29.32 44.00 24.51 11.26 21.79 32.14 OpenMask3D^*†  <cit.> 73 65.38 73.05 63.99 57.40 59.47 66.48 DROP-CLIP† (Ours) 73 82.67 86.11 82.43 79.23 75.41 80.02 DROP-CLIP (Ours) 73 66.56 75.73 67.55 59.88 62.04 70.74 OpenSeg^→ 3D  <cit.> 1 12.89 17.36 2.38 0.23 12.83 17.21 MaskCLIP^→ 3D  <cit.> 1 25.64 40.36 18.69 6.95 20.97 32.09 DROP-CLIP (Ours) 1 62.31 71.96 62.75 53.85 54.48 64.41 Referring and Semantic segmentation results on MV-TOD test split. Methods with † denote upper-bound 3D features, whereas DROP-CLIP denotes our distilled model. Methods with ^→ 3D produce 2D predictions that are projected to 3D to compute metrics. Method with * denotes further usage of ground-truth segmentation masks. In this section, we compare referring and semantic segmentation performance of our distilled features vs. previous open-vocabulary approaches, both in multi-view and in single-view settings. For multi-view, we compare our trained model with OpenScene <cit.> and OpenMask3D <cit.> methods, where the full point-cloud from all 73 views is given as input. [8]r0.34 < g r a p h i c s > Referring segmentation accuracy (Pr@25 (%)) vs. different language query types. We note that for these baselines we obtain the upper-bound 3D features as before, as we observed that our trained model already outperforms them, so we refrained from also distilling features from baselines (details in Appendix C2). For single-view, we feed our network with partial point-cloud from projected RGB-D pair, and compare with 2D baselines MaskCLIP <cit.> and OpenSeg <cit.>. Our model slightly outperforms the OpenMask3D upper bound baseline in the multi-view setting (+1.18% in referring and +2.57% in semantic segmentation), while significantly outperforming 2D baselines in the single-view setting (>30% in both tasks). Importantly, single-view results closely match the multi-view ones (∼-4.0%), suggesting that indeed learns view-independent features. Open-ended queries We evaluate the robustness of our model in different types of input language queries, organized in 4 families (class name - e.g. “cereal box", class + attribute - e.g. “brown cereal box", open - e.g. “chocolate Kellogs", and affordance - e.g. “I want something sweet`). Comparative results are presented in Fig. <ref> and qualitative in Fig. <ref>. We observe that our method achieves high grounding accuracy in all query types, even when using single-view. §.§ Zero-Shot Transfer to Novel Domains / Tasks [6]r0.4 ! 2*Method 2cOCID-VLG  <cit.> 2cREGRAD  <cit.> (l2ptr14pt)2-3 (l2ptr4pt)4-5 IoU Pr@25 IoU Pr@25 MaskCLIP^→ 3D  <cit.> 24.1 30.9 33.2 39.0 DROP-CLIP (Ours) 46.2 48.9 59.1 63.0 Referring segmentation results in OCID-VLG <cit.> and REGRAD <cit.> datasets Generalization to Novel Domains We evaluate the 3D referring segmentation performance of our trained model when applied zero-shot in novel tabletop domains. We test in 500 scenes from OCID-VLG <cit.> using the dataset's instance-wise open queries, as well as in 1000 scenes from REGRAD <cit.>, using each model's class name as a query. Only single-view input is provided for both datasets. [6]r0.3 ! Method mIoU AP_25 AP_50 SAM  <cit.> 70.11 95.26 79.88 DROP-CLIP (S) 80.83 91.92 86.83 Mask3D  <cit.> 14.41 18.65 3.41 DROP-CLIP (F) 88.37 93.13 91.47 Zero-shot 3D instance segmentation results in MV-TOD. We compare with MaskCLIP <cit.> as above and report results in Table <ref>. 
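At inference, the single-view grounding step described above reduces to one forward pass of the distilled 3D encoder followed by a cosine-similarity lookup against CLIP text embeddings. A sketch is given below; the use of the OpenAI clip package, the ViT-B/32 text tower, and the assumption that the encoder outputs features of matching dimensionality are illustrative choices.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP package (assumed available)

@torch.no_grad()
def open_vocab_segment(encoder, partial_pcd, queries, device="cuda"):
    """partial_pcd: (M, 6) single-view colored point-cloud; queries: list of str."""
    model, _ = clip.load("ViT-B/32", device=device)
    tokens = clip.tokenize(queries).to(device)
    text = F.normalize(model.encode_text(tokens).float(), dim=-1)   # (Q, C)
    feats = F.normalize(encoder(partial_pcd.to(device)), dim=-1)    # (M, C)
    sims = feats @ text.t()                                          # (M, Q)
    labels = sims.argmax(dim=-1)    # per-point index of the best-matching query
    return labels, sims

# Referring segmentation: keep the points whose best match is the referred
# query, or threshold sims[:, q] for a soft mask.
```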
We note that test datasets contain both novel object instances (REGRAD) and classes (OCID-VLG). We observe that our method provides a significant performance boost across both domains (22.1 % mIoU delta in OCID-VLG and 25.9 % in REGRAD). Zero-Shot 3D Instance Segmentation Since our method has been distilled from features with object-level priors, we demonstrate that it can be used out-of-the-box for 3D instance segmentation, via clustering the 3D features (see Appendix E for implementation details). We report results in MV-TOD in Table <ref>, where we compare with SAM <cit.> with single-view images, as well as Mask3D  <cit.> with full point-clouds (transferred from ScanRefer <cit.> with room layout). Mask3D struggles to generalize to tabletop domains, whereas our method achieves comparable performance with SAM for segmenting from single-view, even without being explicitly trained for instance segmentation. §.§ Open-Vocabulary Language-guided Robotic Grasping r0.4 < g r a p h i c s > Language-guided 6-DoF grasping trial with real robot (left), 3D features, grounding and grasp proposals (right).. In this section, we wish to illustrate the applicability of in a language-guided robotic grasping scenario. We integrate our method with a 6-DoF grasp detection network <cit.>, to segment and then propose gripper poses for picking a target object indicated verbally. We randomly place 5-12 objects on a tabletop with different levels of clutter, and query the robot to pick the target object and place it in a fixed position. The user instruction is open-vocabulary and can involve open object descriptions, attributes, or affordances. We conducted 50 trials in Gazebo <cit.> and 10 with a real robot, and observed grounding accuracy of 84% and 80% respectively, and a final success rate of 64% and 60%, where failures were mostly due to grasp proposals that are outside of the robot's kinematic range or motion planning that lead to a collision with other objects and the table. Our setup and example trials are shown in Fig. <ref>, while more details and qualitative results are provided in Appendix E. A video of robot demonstrations is provided as supplementary material. § RELATED WORK 3D Scene Understanding There's a long line of works in closed-set 3D scene understanding <cit.>, applied in 3D classification <cit.>, localization <cit.> and segmentation <cit.>, using two-stage pipelines with instance proposals from point-clouds <cit.> or RGB-D views  <cit.>, or single-stage methods <cit.> that leverage 3D-language cross attentions. <cit.> use CLIP embeddings for pretraining a 3D segmentation model, but still cannot be applied open-vocabulary. Open-Vocabulary Grounding with CLIP Following the impressive results of CLIP <cit.> for open-set image recognition, followup works transfer CLIP's powerful representations from image- to pixel-level <cit.>, extending to detection / segmentation, but limited to 2D. For 3D segmentation, the closest work is perhaps OpenMask3D <cit.> that extracts multi-view CLIP features from instance proposals from Mask3D <cit.> to compute similarities with open text queries. 3D CLIP Feature Distillation Recent works distill features from 2D foundation models with point-cloud encoders <cit.> or neural fields <cit.>, with applications in robot manipulation <cit.> and navigation <cit.>. 
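One possible realization of the clustering step used above for zero-shot 3D instance segmentation is to cluster the distilled per-point features, optionally concatenated with standardized coordinates, with DBSCAN; the hyper-parameters and the xyz weighting below are illustrative guesses rather than the values used in our experiments.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import normalize

def cluster_instances(point_feats, xyz, eps=0.15, min_samples=20, xyz_weight=0.5):
    """point_feats: (M, C) distilled features; xyz: (M, 3) coordinates.
    Returns an (M,) array of instance ids (-1 denotes noise)."""
    f = normalize(point_feats)                        # unit-norm CLIP-space features
    x = (xyz - xyz.mean(0)) / (xyz.std(0) + 1e-8)     # standardized coordinates
    z = np.concatenate([f, xyz_weight * x], axis=1)   # joint feature/space embedding
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(z)
```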
However, associated works extract 2D features from OpenSeg <cit.>, LSeg <cit.>, MaskCLIP <cit.> or multi-scale crops from CLIP <cit.> and fuse point-wise with average pooling, while our approach leverages semantics-informed view selection and segmentation masks to do object-wise fusion with object-level features (see detailed overview in Appendix F). § CONCLUSION, LIMITATIONS AND FUTURE WORK We propose , a 2D→3D CLIP feature distillation framework that employs object-centric priors to select views based on semantic informativeness and ensure crisp 3D segmentations, while working with single-view RGB-D. We also release , a large-scale synthetic dataset of multi-view tabletop scenes with dense annotations that can be leveraged for several downstream tasks. We hope our work can benefit the robotics community, both in terms of released resources as well as illustrating and overcoming theoretical limitations of existing 3D feature distillation works. While our spatial object-centric priors lead to improved segmentation quality, they collapse local features in favor of a global object-level feature, and hence cannot be applied for segmenting object parts. In the future, we plan to add object part annotations in our dataset and fuse with both object- and part-level masks. Second, only provides grounding and a two-stage pipeline is needed for grasping, while our dataset already provides rich 6-DoF grasp annotations. A next step would be to also distill them, opting for a joint 3D representation for grounding and grasping.
http://arxiv.org/abs/2406.18814v1
20240627010804
Length Optimization in Conformal Prediction
[ "Shayan Kiyani", "George Pappas", "Hamed Hassani" ]
stat.ML
[ "stat.ML", "cs.AI", "cs.LG", "stat.ME" ]
Angle-dependent planar thermal Hall effect by quasi-ballistic phonons in black phosphorus Kamran Behnia July 1, 2024 ========================================================================================= § ABSTRACT Conditional validity and length efficiency are two crucial aspects of conformal prediction (CP). Achieving conditional validity ensures accurate uncertainty quantification for data subpopulations, while proper length efficiency ensures that the prediction sets remain informative and non-trivial. Despite significant efforts to address each of these issues individually, a principled framework that reconciles these two objectives has been missing in the CP literature. In this paper, we develop Conformal Prediction with Length-Optimization (CPL) - a novel framework that constructs prediction sets with (near-) optimal length while ensuring conditional validity under various classes of covariate shifts, including the key cases of marginal and group-conditional coverage. In the infinite sample regime, we provide strong duality results which indicate that CPL achieves conditional validity and length optimality. In the finite sample regime, we show that CPL constructs conditionally valid prediction sets. Our extensive empirical evaluations demonstrate the superior prediction set size performance of CPL compared to state-of-the-art methods across diverse real-world and synthetic datasets in classification, regression, and text-related settings. § INTRODUCTION Consider a distribution 𝒟 over the domain 𝒳×𝒴, where 𝒳 is the covariate space and 𝒴 is the label space. Using a set of calibration samples (X_1, Y_1), …, (X_n, Y_n) drawn i.i.d. from 𝒟, the objective of conformal prediction (CP) is to create a prediction set C(x), for each input x, that is likely to include the true label y. This is formalized through specific coverage guarantees on the prediction sets. For example, the simplest, and yet the most commonly-used guarantee is the marginal coverage: The prediction sets C(x) ⊆𝒴 achieve marginal coverage if, for a test sample (X_n+1, Y_n+1), we have (Y_n+1∈ C(X_n+1)) = 1 - α. Here, α is the given miscoverage rate, and the probability is over the randomness in the calibration and test points. Conformal Prediction faces two major challenges in practice: conditional validity and length inefficiency. Marginal coverage is often a weak guarantee; in many practical scenarios we need coverage guarantees that hold across different sub-populations of the data. E.g., in healthcare applications, obtaining valid prediction sets for different patient demographics is crucial; we often need to construct accurate prediction sets for certain age groups or medical conditions. This is known as conditional validity - which ideally seeks a property called full conditional coverage: For every x ∈𝒳 we require (Y_n+1∈ C(X_n+1) | X_n+1 = x) = 1 - α. Alas, achieving full conditional coverage with finite calibration data is known to be impossible <cit.>. Consequently, in recent years several algorithmic frameworks have been developed that guarantee relaxations of (<ref>)<cit.>. However, the prediction sets constructed by these methods are often observed to be (unnecessarily) large in size <cit.>; i.e., it is possible to find other conditionally-valid prediction sets with smaller length. Even in the case of marginal coverage, the prediction sets constructed by algorithms such as split-conformal can be far from optimal in size. This brings us to the second major challenge of CP, namely length efficiency. 
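For reference in the discussion that follows, the split-conformal construction mentioned above amounts to thresholding a conformity score at a finite-sample-corrected empirical quantile of the calibration scores; a minimal sketch (using the absolute-residual score as an example) is given below, with placeholder names.

```python
import numpy as np

def split_conformal_threshold(scores_cal, alpha=0.1):
    """Finite-sample-corrected (1 - alpha) empirical quantile of calibration scores."""
    n = len(scores_cal)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores_cal, level, method="higher")

# Example with the absolute-residual score S(x, y) = |y - f(x)|:
#   q_hat = split_conformal_threshold(np.abs(y_cal - model.predict(x_cal)))
#   prediction interval at x:  [f(x) - q_hat, f(x) + q_hat]
```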
Length efficiency is about constructing prediction sets whose size (or length) is as small as possible while maintaining conditional validity. Here, “length” refers to the average length of the prediction sets C(X) over the distribution of X. It is well known that length efficiency is crucial for the prediction sets to be informative <cit.>. In fact, the performance of CP methods is observed to be closely tied to the length efficiency of the prediction sets in practical applications in decision making<cit.>, robotics<cit.>, communication systems <cit.>, and large language models <cit.>. This raises the question of how length and coverage fundamentally interact. Along this line, we ask: How can we construct prediction sets that are (near-) optimal in length while maintaining valid (conditional) coverage? In this paper, we answer this question by proposing a unified framework for length optimization under coverage constraints. Before providing the details, we find it necessary to explain (i) where in the CP pipeline our framework is operating, and (ii) the crucial role of the covariate X. First, we note that the pipeline of CP consists of three stages (see Figure <ref>) which are often treated separately <cit.>: Training a model using training data, designing a conformity score, and constructing the prediction sets. Our framework operates at the third stage, i.e., we aim at designing the prediction sets assuming a given predictive model as well as a conformity score. In recent years, various powerful frameworks have been developed for designing better conformity scores <cit.> and obtaining conditional guarantees using a given score <cit.>. The missing piece in this picture is length optimization which is the subject of this paper. Second, we emphasize that length optimization can be highly dependent on the structural properties of the covariate X. It has been known in the CP community that the structure of X can play a role in terms of length efficiency and coverage validity (see e.g. <cit.>). However, to our knowledge, a principled study of length optimization using the covariate X is missing. To showcase the principles and challenges of using the covariate X in the design of prediction sets, we provide a toy example. Example. Let 𝒳 = [-1,+1]. Assume X is uniformly distributed over 𝒳, and Y is distributed as: -if x < 0, then Y ∼ x + 𝒩(0,1), and -if x ≥ 0, then Y ∼ x + 𝒩(0,2), see Figure <ref>-(a). For simplicity, we assume in this example that we have infinitely many data points available from this distribution. As the model, we consider f(x)=𝔼[Y | X = x] = x. As a result, considering the conformity score S(x,y) = | y - f(x) |, the distribution of S follows the folded Gaussian distribution: i.e. for x < 0 we have 𝒟_S | x = |N(0,1)|, and for x ≥ 0 we have 𝒟_S | x = |N(0,2)| – see Figure <ref>-(b). We aim for marginal coverage of 0.9, and consider prediction sets C(x) in the following form: -if x < 0: C(x) = {y ∈ℝ s.t. S(x,y) ≤ q_- }, and - if x ≥ 0: C(x) = {y ∈ℝ s.t. S(x,y) ≤ q_+ }. For every value of q_+ in the range [1.4, +∞] there exists a unique q_- that ensures 0.9-marginal coverage. This provides a continuous family of prediction sets, parameterized by q_+, all of which are marginally valid, but have different average lengths. Length optimization over this family amounts to minimizing the average length (which is equal to q_- + q_+) over the choice of q_+. Figure <ref>-(c) plots the average length versus q_+. 
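The length-versus-q_+ curve above can be re-created numerically: for each q_+ we solve the marginal-coverage constraint for q_- and track the average length q_- + q_+. The sketch below assumes that N(0,2) denotes variance 2, so exact numbers may differ slightly from the figure, but the qualitative picture (and the equal-density property discussed next) is unchanged.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

alpha, s_m, s_p = 0.1, 1.0, np.sqrt(2.0)
F   = lambda q, s: 2.0 * norm.cdf(q / s) - 1.0   # CDF of the folded Gaussian |N(0, s^2)|
pdf = lambda q, s: 2.0 * norm.pdf(q / s) / s      # its density

# sweep q_+ over its feasible range and solve the coverage constraint for q_-
qp = np.linspace(s_p * norm.ppf(1 - alpha) + 1e-3, 6.0, 5000)
Fm_target = 2.0 * (1 - alpha) - F(qp, s_p)        # required mass on the x < 0 half
qm = s_m * norm.ppf((1.0 + Fm_target) / 2.0)      # invert the folded-Gaussian CDF
length = qm + qp                                   # average set length
i = np.argmin(length)
print("optimal  q_- = %.3f  q_+ = %.3f  length = %.3f" % (qm[i], qp[i], length[i]))
print("equal-density check: %.4f vs %.4f" % (pdf(qm[i], s_m), pdf(qp[i], s_p)))

# Split-conformal uses a single common threshold q for both halves:
q_sc = brentq(lambda q: 0.5 * F(q, s_m) + 0.5 * F(q, s_p) - (1 - alpha), 0.1, 10.0)
print("split conformal  q = %.3f  length = %.3f" % (q_sc, 2 * q_sc))
```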
Four lessons from this example are as follows: (i) First, as we see from Figure <ref>-(c), the optimal-length solution is different from the sets constructed by the Split-Conformal method (for which q_- and q_+ are both equal to the 0.9-quantile of the marginal distribution of S). Hence, the structure of the optimal prediction sets depends on the structure of the covariates (in this case the sign of the covariate). It can be shown that the optimal-solution found is the unique globally optimal-length solution among all the possible marginally-valid prediction sets. Hence, the feature h(x) = sign(x) is crucial in determining the optimal-length sets. (ii) Second, taking another look at the Figure <ref>-(c), the optimal-length solution among marginally valid sets is different than the solution which gives full conditional coverage, for which q_+ (resp. q_-) is equal to the 0.9-quantile of the conditional distribution 𝒟_S | x ≥ 0 (resp. 𝒟_S | x < 0 ). (iii) Third, for (q_-^*, q_+^*) corresponding to the optimal-length sets, the conditional PDFs take the same value; i.e. p(S = q_+^* | x ≥ 0) = p(S = q_-^* | x ≤ 0) – see Figure <ref>-(b). This is not a coincidence, and as we will show, it is the main deriving principle to find the optimal sets – see Example <ref> below. (iv) Finally, in this example, we were able to construct the optimal sets using the conditional PDFs. However, in practice, the distributions are unknown and we only have access to a finite-size calibration set. Hence, a key challenge is how to learn from data those features of the covariates that are pivotal for length optimization. Contributions. We develop a principled framework to design length-optimized prediction sets that are conditionally valid. We formalize conditional validity using a class of covariate shifts (as in <cit.>). As for the class, we consider a linear span of finitely-many basis functions of the covariates. This is a broad class; E.g., by adjusting the basis functions we can recover marginal coverage or group-conditional coverage as special cases. We develop a foundational minimax notion where the min part ensures conditional validity and the max part optimizes length. In the infinite-sample setting, the solution of the minimax problem is proved to be the optimal-length prediction sets, through a strong duality result. To design our practical algorithm for the finite-sample setting (termed as CPL), our key insight is to relax minimax problem using a given conformity score and a hypothesis class that aims to learn from data features that are important for length optimization and uncertainty quantification. In the finite sample setting, we guarantee that our algorithm remains conditionally valid up to a finite-sample error. We extensively evaluate the performance of CPL on a variety of real world and synthetic datasets in classification, regression, and text-based settings against numerous state-of-the-art algorithms ranging from Split-Conformal<cit.>, Jacknife<cit.>, Local-CP<cit.>, and CQR<cit.>, to BatchGCP<cit.> and Conditional-Calibration<cit.>. Figure <ref> showcases the superior length efficiency of our method in different tasks. In all the tasks our framework achieves the same level of conditional validity as the baselines. 
A detailed discussion of our experiments in section <ref> showcase the overall superior performance of CPL in achieving significantly better length efficiency in different scenarios where we demand marginal validity, group conditional validity, or the more complex case of conditional validity with respect to a class of covariate shifts. Further Related Work. We now review some of the remaining prior works that are relevant. Designing better conformity scores: There is a growing body of work on designing better conformity scores to improve conditional validity and/or length efficiency <cit.> (see <cit.> for more references). Among these, the work of <cit.> can fairly be seen to go beyond just designing a better conformity score. The authors propose to optimize for better length efficiency, however, their framework is limited to marginal coverage whereas we look at different levels of conditional validity. As shown in Fig. <ref>, our framework applies post selection of the score, and hence can use any of the conformity scores designed in the literature (see Section <ref> for experiments with different scores). Level set estimation. As we will see, a key ingredient to our framework is to study the problem of CP through the lens of level set estimation. Level set estimation has been extensively studied in the literature of statistics <cit.>, and it was introduced to CP community by <cit.>. Our framework expands and builds upon the connection of level set estimation and CP. In particular, comparing to <cit.>, we take three significant steps. (i) We propose to utilize the covariate, X, to optimize length via structured prediction sets. This results in major improvements in length efficiency in practice. (ii) We go beyond marginal coverage to different levels of conditional validity. (iii) Finally, unlike <cit.>, we do not need to directly estimate any marginal\conditional density function. Level sets and their approximations appear naturally in our optimization framework. § PROBLEM FORMULATION §.§ Preliminaries on conditional validity and length of prediction sets In this section, we discuss in detail the two notions of conditional validity and length. We recall that the data (X,Y) is generated from a distribution 𝒟 supported on 𝒳×𝒴 (distributions 𝒟_X and 𝒟_Y|X are then defined naturally). The prediction sets are of the form C(x): 𝒳→ 2^𝒴 for x ∈𝒳. Conditional Coverage Validity. Ultimately, in conformal prediction, the goal is to design prediction sets that satisfy the full conditional coverage desiderata introduced in (<ref>). Despite the importance, full conditional coverage is impossible to achieve in the finite-sample setting <cit.>. Therefore, several weaker notions of coverage such as marginal <cit.>, group conditional <cit.>, and coverage with respect to a class of covariate shifts <cit.> have been considered in the literature. Here, we focus on the notion of conditional validity with respect to a class of covariate shifts, which unifies all these notions, and contains each of them as a special case by adjusting the class. We now define this notion. Fix any non-negative function f and let 𝒟_f denote the setting in which {(X_i, Y_i)}_i=1^n is sampled i.i.d. from 𝒟, while (X_n+1, Y_n+1) is sampled independently from the distribution in which 𝒟_X is shifted by f, i.e., X_n+1∼f(x)/_𝒟[f(X)]· d 𝒟_X(x), Y_n+1| X_n+1∼𝒟_Y | X . 
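Operationally, coverage under a non-negative covariate shift f can be estimated from i.i.d. samples of 𝒟 by reweighting, which also gives the empirical counterpart of the moment condition used below; a short sketch with placeholder array names:

```python
import numpy as np

def coverage_under_shift(f_vals, covered, alpha=0.1):
    """f_vals: (n,) nonnegative shift values f(x_i);
    covered: (n,) indicators 1[y_i in C(x_i)].
    Returns the reweighted coverage estimate under D_f and the empirical
    moment E_n[f(X)(1[Y in C(X)] - (1 - alpha))], which should be ~0 if valid."""
    w = f_vals / f_vals.mean()
    shifted_cov = np.mean(w * covered)
    moment = np.mean(f_vals * (covered - (1 - alpha)))
    return shifted_cov, moment
```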
We define the exact coverage validity under non-negative covariate shift f as, _X_n+1∼𝒟_f(Y_n+1∈ C(X_n+1) ) = 1- α, which can be equivalently written as, _X_n+1∼𝒟[f(X_n+1){[Y_n+1∈ C(X_n+1)]-(1-α)}]=0. This last equality enables us to define the notion of generalized covariate shift. For prediction sets C, we say that coverage with respect to a function f (which can take negative values) is guaranteed if (<ref>) holds. We often drop the term “generalized” when we refer to generalized covariate shifts. Let ℱ be a class of functions from 𝒳 to ℝ. We say the prediction sets C satisfy valid conditional coverage with respect to the class of covariate shifts ℱ, if (<ref>) holds for every f ∈ℱ. It is easy to see that if ℱ is the class of all (measurable) functions on 𝒳, then conditional validity w.r.t. ℱ is equivalent to full conditional converge (<ref>). Accordingly, using less complex choices for ℱ would result in relaxed notions of conditional coverage. We now review three important instances of ℱ: 1) Marginal Coverage: Setting ℱ={x ↦ c | c ∈}, i.e. constant functions, recovers the marginal validity, i.e., _X_n+1∼𝒟 (Y_n+1∈ C(X_n+1)) = 1- α. 2) Group-Conditional Coverage: Let G_1, G_2, ⋯, G_m ⊆𝒳 be a collection of sub-groups in the covariate space. Define f_i(x) = [ x ∈ G_i] for every i ∈ [1, m]. By using the class of covariate shifts ℱ={∑_i=1^mβ_if_i(x) | β_i∈ for every i ∈ [1, ⋯, m]}_i=1^m we can obtain the group-conditional coverage validity; i.e. _X_n+1∼𝒟 (Y_n+1∈ C(X_n+1) | X ∈ G_i) = 1- α, for every i ∈ [1, m]. 3) Finite-dimensional Affine Class of Covariate Shifts: Let Φ: 𝒳→^d be a predefined function (a.k.a. the finite-dimensional basis). The class ℱ is defined as ℱ = {⟨, Φ(x)⟩ | ∈^d}. Here, ℱ is an affine class of covariate shifts as for any f,f' ∈ℱ and ϵ∈ℝ we have f + ϵ f' ∈ℱ. We use the term “bounded” class of covariate shifts when we assume, 1≤ i≤ dmax x ∈𝒳sup ϕ_i(x) < ∞, where Φ(x) = [ϕ_1(x), ⋯, ϕ_d(x)]. The finite-dimensional affine class is fairly general and covers a broad range of scenarios in theory and practice (see Section <ref>, Remark <ref> in appendix <ref>, as well as <cit.>). As special cases, it includes both of the marginal and group-conditional settings. To see this, one can pick Φ(x) = 1 for the marginal case and Φ = [f_1(x), f_2(x), ⋯, f_m(x)]^T for the group-conditional scenario. Consequently, from now on we will focus on the scenario where ℱ is a d-dimensional affine class of covariate shifts. Length. We use the term len(C(x)) to denote the length of a prediction set at point x ∈𝒳. Length can have different interpretations in different contexts, yet it always depicts the same intuitive meaning of size. In the case of regression when 𝒴= we have, len(C(x)) = ∫_𝒴[y ∈ C(x)] dy. Here, dy can be interpreted as Lebesgue integral (i.e. the usual way to measure length on ℝ). Similarly, in the classification setting, when 𝒴= {y_1, y_2, ⋯, y_L}, we let len(C(x)) = ∑_i=1^L [y ∈ C(x)]. We will develop our theoretical arguments for regression; however, our algorithm CPL is not limited to regression, and as we will show in Section <ref>, CPL also performs very well in image-classification and text-related tasks. §.§ Problem Statement In the previous section, we defined conditional validity and length as two main notions for prediction sets. The canonical problem that we study in this paper is: Primary Problem: C(x)Minimize _X [len(C(X))] subject to _X, Y[f(X) {[Y∈ C(X)] - (1-α)}] = 0, ∀ f ∈ℱ I.e., we would like to construct predictions sets C(x), x ∈𝒳 with optimal (i.e. 
minimal) average length while being conditionally valid with respect to a (finite-dimensional) class of covariate shifts ℱ. Note that in the constraints (conditional validity) the expectation is over (X,Y) drawn from 𝒟 which is the distribution of the data; whereas in the objective (average length) the expectation is only over X drawn from 𝒟_X which is the distribution of the covariate. In the special case of marginal validity the constraint becomes [c {[Y∈ C(X)] - (1-α)}] = 0 for every c ∈, which then equivalently is (Y∈ C(X)) = 1-α. Similarly, in the case of special case of group-conditional the constraint boils down to (Y∈ C(X)|X∈ G_i) = 1-α for every i∈[1,⋯, m]. § MINIMAX FORMULATIONS In this section we analyze the Primary Problem<ref> in the infinite-data regime. Our goal is to derive the principles of an algorithmic framework for Primary Problem<ref>. We will do so by deriving an equivalent minimax formulation whose solutions have a rich interpretation in terms of level sets of the data distribution. Having the practical finite-sample setting in mind, we then further relax the minimax formulation using a given conformity score and a hypothesis class, where we also analyze the impact of this relaxation on conditional coverage validity and length optimality. §.§ The Equivalent Minimax Formulation: A Duality Perspective A natural way to think about the Primary Problem<ref> is to look at the dual formulation. Let us define, g_α(f,C)=_X,Y[f(X){[Y∈ C(X)] - (1-α)}] - _X [len(C(X))], Note that the first term in g_α(f,C) corresponds to coverage w.r.t. to f and the second term corresponds to length. One can easily check that g_α(f,C) is equivalent[The definition of g_α(f,C) is the negative of the conventional Lagrangian.]to the Lagrangian for the Primary Problem (given that ℱ is an affine finite-dimensional class). The dual problem can be written as: Minimax Problem: f ∈ℱMinimize C(x) Maximize g_α(f,C) The Proposition below shows that strong duality holds and derives the structure of the optimal sets. Assuming 𝒟_Y|X is continuous, the Primary Problem and the Minimax Problem are equivalent. Let (f^*, C^*(x)) be an optimal solution of the Minimax Problem. Then, C^* is also the optimal solution of the Primary Problem. Furthermore, C^* has the following form: C^*(x) = {y ∈𝒴 | f^*(x) p(y|x) ≥ 1} The continuity assumption on 𝒟_Y|X, known as tie-breaking, is necessary for getting exact coverage in CP (see e.g. <cit.>). We will now illustrate the result of this proposition using two examples. [Marginal case] For the marginal case, where ℱ ={x ↦ c | for every c ∈}, we have C^*(x) = {y ∈𝒴 | c^* p(y|x) ≥ 1}. In other words, the minimal length marginally valid prediction set is of the form of a specific level set of p(y|x). Note that the value of c^* can be found using the fact that the marginal coverage should be 1-α. [Group-conditional case] For the group-conditional setting (see Section <ref>), we have C^*(x) = {y ∈𝒴 | (∑_i=1^m β_i^*[x ∈ G_i]) p(y|x) ≥ 1}, where G_1, G_2, ⋯, G_m are the given sub-groups of the covariate space. Here, the optimal-length prediction sets can be characterized by m scalar values β^*_1, ⋯, β_m^* (m is the number of groups). For any x ∈𝒳, C(x) is still in the form of a level set of p(y|x) which is identified depending on the groups that contain x. For example, if x belongs to groups 1 and 2, then C(x) will be the set of all y ∈𝒴 such that (β_1^* + β_2^*)p(y|x) ≥ 1. 
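To make the marginal case concrete: given (an estimate of) p(y|x) on a grid, the optimal sets are obtained by sweeping a single scalar c and enforcing 1-α marginal coverage. The sketch below uses bisection on c and is only meant to illustrate the level-set structure; in practice p(y|x) is unknown, which is exactly why the relaxed formulation introduced later is needed.

```python
import numpy as np

def level_set_marginal(p_y_given_x, y_grid, alpha=0.1, c_lo=1e-3, c_hi=1e3):
    """p_y_given_x: (n, G) conditional densities for n sampled covariates,
    evaluated on a uniform y-grid of size G."""
    dy = y_grid[1] - y_grid[0]

    def coverage_and_length(c):
        inside = c * p_y_given_x >= 1.0                   # level sets {y : c p(y|x) >= 1}
        cov = (p_y_given_x * inside).sum(axis=1) * dy     # P(Y in C(x) | x)
        return cov.mean(), (inside.sum(axis=1) * dy).mean()

    for _ in range(60):                                    # bisection on c (coverage is monotone in c)
        c = np.sqrt(c_lo * c_hi)
        cov, _ = coverage_and_length(c)
        c_lo, c_hi = (c, c_hi) if cov < 1 - alpha else (c_lo, c)
    return c, coverage_and_length(c)
```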
Simply put, Proposition <ref> states that the answer to the Primary Problem corresponds to very specific level sets of the of the distributions (PDFs) p(y | x). Further, this optimal level set has an equivalent minimax formulation. We will see in section <ref> how this minimax procedure, and its relaxation which will be derived shortly, can be used to derive a finite sample algorithm with formal conditional validity guarantees. Next, we will discuss the roles and interpretations of the inner maximization and the outer minimization. The Inner Maximization. For a function f: 𝒳→ℝ we define the level sets corresponding to f as C_f(x) = [f(x) p(y|x) ≥ 1]. From Proposition <ref>, the optimal prediction sets have the form C_f^* for some f^* ∈ℱ. The following proposition, shows that for a fixed f the outcome of the inner maximization is exactly the level set C_f. [Variational representation] For any f ∈ℱ, and α∈ [0, 1] we have, C_f = C(x)argmax g_α(f,C), The Outer Minimization. So far, we know that for every f, the inner maximization chooses the level set C_f given in (<ref>). The next question is which of these level sets will be chosen by the outer minimization? The answer is simple: The one that is conditionally valid with respect to the whole class ℱ. In other words, the outer minimization chooses a f^* such that C_f^* has correct conditional coverage w.r.t. to the class ℱ. This is made precise as follows. Let f ∈ℱ. Recall that ℱ is an affine class of functions. This means for every f∈ℱ and ε∈, f+εf∈ℱ. We have for every f∈ℱ, d/dεg_α(f+εf, C_f+εf)|_ε=0 = [f(X){[Y ∈ C_f(X)] - (1-α)}]. Now, at the optimum solution f^* of the outer minimization, all the directional derivatives should be zero (since f^* is a stationary point). Hence, using the lemma, C_f^* satisfies coverage validity on the class ℱ. Let us summarize: The outer minimization ensures that we navigate the space of conditionally valid prediction sets while the inner maximization finds the most length efficient among them. This interpretation is the basis for the Relaxed Minimax Problem presented next. §.§ Relaxed Minimax Formulation using Structured Prediction Sets Taking a closer look at the Minimax formulation <ref>, the inner maximization is over the space of all the possible prediction sets, which is an overly complex set. We will need to relax this maximization due to two reasons: (1) Eventually, we would like to solve the finite-sample version of this problem as in practice we only have finite-size calibration data. Having an infinitely complex space of prediction sets can easily lead to overfitting. Consequently, we will need to restrict the maximization over a class of prediction sets of finite complexity. (2) In CP the prediction sets are often constructed using a conformity score which is carefully designed. For instance, the common practice in CP is to structure the prediction sets by threshold the conformity score. In light of these, we will need to relax the Minimax formulation. Structured Prediction Sets: Given the above considerations, we will restrict the domain of inner maximization in the most natural way. Aligned with the established methods in the CP literature, we look at the prediction sets that are described by thresholding a conformity measure. Let S(x, y): 𝒳×𝒴→ be a given conformity score. Consider a function h: 𝒳→ℝ. We will consider structured prediction sets of the form C_h^S = {y ∈𝒴 | S(x,y)≤ h(x)}. For the ease of notation, we use g_α(f, h) instead of g_α(f, C_h^S). 
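For the common residual score S(x,y) = |y - μ(x)|, the structured set C_h^S(x) is an interval of length 2h(x), and g_α(f,h) admits the simple empirical form sketched below (for general scores the length term instead requires a y-grid integral or a sum over classes); array names are placeholders.

```python
import numpy as np

def g_alpha_n(f_vals, h_vals, scores, alpha=0.1):
    """f_vals, h_vals, scores: (n,) arrays of f(x_i), h(x_i), S(x_i, y_i),
    specialized to the residual score so that len(C_h^S(x)) = 2 max(h(x), 0)."""
    covered = (scores <= h_vals).astype(float)
    coverage_term = np.mean(f_vals * (covered - (1 - alpha)))
    length_term = np.mean(2.0 * np.maximum(h_vals, 0.0))
    return coverage_term - length_term
```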
We can then relax the Minimax Problem formulation <ref> to the following formulation. Let ℋ be a class of real valued functions on the set 𝒳. For now, ℋ could be any class – e.g. it could be the class of all the real-valued functions on 𝒳 (with infinite complexity) or it could be a class with bounded complexity (e.g. a neural network parametrized by its weights). Given the class ℋ we define the Relaxed Minimax Problem as follows: Relaxed Minimax Problem: f ∈ℱMinimize h ∈ℋMaximize g_α(f, h). Here, the relaxation is due to the fact that the inner maximization is not over all the possible prediction sets, but it is over structured prediction sets of form (<ref>) where h ∈ℋ. An Interpretation of the Relaxed Minimax Problem Based on Level Sets. Let us fix a function f ∈ℱ, and take a closer look at the inner maximization of the Relaxed Minimax Problem. As we just mentioned, the only difference between Relaxed Minimax Problem and the original one is that the inner maximization is over prediction sets of the form C_h^S rather than all the possible prediction sets. Recall from Proposition <ref> that the maximizer of the inner maximization in the original Minimax Problem is the level set C_f given in (<ref>). From Proposition <ref> we can further write, max_h ∈ℋ g_α(f,h) = g_α(f,C_f) - min_h ∈ℋ 𝔼_X[ d (C_f, C_h^S)], where, 𝔼_X[ d (C_f, C)] = _X∫_𝒴|(f(X) p(y|X) -1)|{[y ∈ C_f(X)Δ C(X)]}dy, and C_f(X)Δ C(X) denotes the symmetric difference of the two sets. The term d (C_f, C_h^S) is a positive (distance) term that quantifies how close C_h^S is to the level set C_f (look at the proof of the Proposition <ref> in Appendix <ref>). We can thus conclude that the inner maximization searches for a function h ∈ℋ whose corresponding prediction sets C_h^S are closest to the level sets C_f. Hence, at the high level, the Relaxed Minimax Problem operates as follows: The inner maximization searches a structured prediction set C_h^S that is closest to C_f, and the outer minimization adjusts the choice f ∈ℱ to ensure the conditional validity of C_h^S with respect to ℱ. §.§ Theoretical Guarantees for the Relaxed Minimax Problem A natural question to ask now is whether this relaxation preserves valid coverage guarantees or length optimality. Clearly, the answer to this question depends on the choice of the conformity measure S and the class ℋ. Let us start by considering the structure of the optimal prediction sets given in (<ref>); it is easy to see that the optimal sets have the form C^*(x) = {y∈𝒴 | 1/p(y | x)≤ f^*(x) }, for some function f^*: 𝒳→ℝ.[We are assuming 1/0 = + ∞.] Comparing (<ref>) with (<ref>) makes it clear that, unless the score S(x,y) is aligned with 1/p(y|x), the structured sets C_h^S may not be rich enough to capture the level sets in (<ref>). That is to say, once we construct our prediction sets using the conformity score S – i.e. construct the prediction sets using (x,S(x,y)) instead of (x,y)– there can be a fundamental gap to the best achievable length. This is due to the fact that the level sets of S(x, y) can be fundamentally different from the level sets of p(y|x)–i.e. the former may not be rich enough to fully capture the latter. Given the above considerations, we ask: what is the appropriate notion of length-optimality that can be achieved by solving the Relaxed Minimax Problem? A natural answer is the “optimal length” that can be achieved over all the structured prediction sets C_h^S, h ∈ℋ. 
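For completeness, the distance term d(C_f, C_h^S) introduced above can be evaluated explicitly on a y-grid when p(y|x) is known (e.g., in synthetic studies): it accumulates the weight |f(x)p(y|x) - 1| over the region where the structured set disagrees with the level set. A sketch with placeholder array names:

```python
import numpy as np

def level_set_distance(f_vals, p_y_given_x, scores_grid, h_vals, y_grid):
    """f_vals: (n,), p_y_given_x: (n, G), scores_grid: (n, G) values S(x_i, y)
    on the y-grid, h_vals: (n,), y_grid: (G,) uniform grid."""
    dy = y_grid[1] - y_grid[0]
    C_f = f_vals[:, None] * p_y_given_x >= 1.0       # level sets of p(y|x)
    C_h = scores_grid <= h_vals[:, None]              # structured sets C_h^S
    disagree = C_f ^ C_h                              # symmetric difference
    w = np.abs(f_vals[:, None] * p_y_given_x - 1.0)
    return np.mean((w * disagree).sum(axis=1) * dy)
```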
Accordingly, we introduce the Relaxed Primary Problem which is the dual form of the Relaxed Minimax Problem: Relaxed Primary Problem: h∈ℋMinimize [len(C_h^S(X))] subject to 𝔼[f(X) {[Y∈ C_h^S(X)] - (1-α)}] = 0, ∀ f ∈ℱ To study the connection between these two problems, one would ideally want to show that the Relaxed Minimax Problem is equivalent to the Relaxed Primary Problem, similar to the strong duality relation we showed in Proposition <ref> between the Minimax Problem <ref> and the Primary Problem <ref>. At a high level, we demonstrate that strong duality holds when ℋ is “sufficiently complex”. Specifically, we first let ℋ be the class of all (measurable) functions from 𝒳 to ℝ, and show that the Relaxed Minimax Problem and the Relaxed Primal Problem achieve the same optimal value—i.e., strong duality. Hence, if the Relaxed Primal Problem admits an optimal solution h^*—whose corresponding prediction sets C_h^*^S have optimal length—then the Relaxed Minimax Problem has an optimal solution of the form (f^*, h^*). Developing this connection is challenging because, unlike the Minimax Problem <ref>, the inner maximization in the Relaxed Minimax Problem is non-concave. Also, the set of predictions sets {C_h^S}_h ∈ℋ lacks any linear or convexity structure that can be exploited. This means we cannot derive strong duality results using conventional tools from optimization literature, which are primarily developed around convex-concave minimax problems. However, the specific statistically rich structure of this problem allows us to establish a strong connection between these two problems. Next, we consider a bounded-complexity class ℋ. We show that under a realizability assumption, i.e., h^* ∈ℋ, strong duality still holds. We denote the optimal value of the Relaxed Minimax Problem by OPT and the optimal value of the Relaxed Primary Problem by L^*. One can interpret L^* as the smallest possible length over all the structured prediction sets that are conditionally valid with respect to ℱ. It is worth mentioning a strong duality result means L^* = - OPT, as our definition of g_α(f, h) differs from the conventional definition of Lagrangian by a negative sign, as we wanted to stay consistent with the literature on level set estimation (e.g. see <cit.>). Let us now state a technical assumption needed to develop our theory. Recall that the class ℱ is defined as ℱ = {⟨, Φ(x)⟩ | ∈^d}, where Φ: 𝒳→^d is a predefined function (a.k.a. the finite-dimensional basis). Let Φ(x) = [ϕ_1(x), ⋯, ϕ_d(x)]. We assume each ϕ_i(x) takes countably many values. The above assumption is mainly a technical assumption that helps to obtain a more simplified and interpretable result. Assumption <ref> holds in both marginal and group-conditional settings (as in these cases ϕ_i's take their value in {0, 1}– see Section <ref>). Moreover, the values taken by any bounded function ϕ can be quantized with arbitrarily-small precision to a function that can take finitely many values (e.g. by uniform quantization). Hence, any class of finite-dimensional covariate shifts can be quantized with arbitrarily-small error to fall under the above assumption. We will provide a more general but weaker result without assumption <ref> in Appendix <ref>. Let us assume 𝒟_S|X and 𝒟_X are continuous. Let ℱ be a bounded finite-dimensional affine class of covariate shifts and ℋ be the class of all (measurable) functions from 𝒳 to ℝ. Under assumption <ref>, the strong duality holds between the Relaxed Primary Problem and the Relaxed Minimax Problem. 
In other words, f ∈ℱMinimize h ∈ℋMaximize g_α(f, h) = h ∈ℋMaximize f ∈ℱMinimize g_α(f, h), and the optimal values of the two problems coincide (L^* = - OPT). If the Relaxed Primary Problem (i.e., the right hand side of the above equation) attains it's optimal value at h^* ∈ℋ, then there exists f^*∈ℱ such that (f^*,h^*) is an optimal solution to the Relaxed Minimax Problem. Otherwise, let f^* denote the optimal solution to the outer minimization of the Relaxed Minimax Problem. Then for every ε>0, there exists h^* ∈ℋ such that, (i) |g_α(f^*, h^*) - OPT|≤ε. (ii) h^* is conditionally valid; i.e., for every f∈ℱ we have, [f(X){[Y∈ C_h^*^S(X)]-(1-α)}]=0. (iii) _Xlen(C_h^*^S(X)) ≤ L^* + ε. Put it simply, Theorem <ref> says fixing any ε>0, there is an ε-close optimal solution of the Relaxed Minimax Problem (statement (i)) such that it is a feasible solutions of the Relaxed Primary Problem with perfect conditional coverage (statement (ii)), and it achieves at most an ε-larger prediction set length compared to the smallest possible, which is the solution of Relaxed Minimax Problem (statement (iii)). Bounded complexity class ℋ: The following realizability condition paves the way to generalize the strong duality results to the case where ℋ is a class of functions with bounded complexity. Let ℋ be a class of real valued functions defined on 𝒳. We say ℋ is Realizable if there exists h^* ∈ℋ such that h^* is an optimal solution of the Relaxed Primary Problem and the Relaxed Minimax Problem. Let ℋ = {h_θ | θ∈Θ}, be a realizable class of functions where Θ is a compact set. Then the Relaxed Minimax Problem and the Relaxed Primary Problem are equivalent. In other words, strong duality holds and the min and max are interchangeable in the Relaxed Minimax Problem. Put it simply, for a realizable class ℋ, solving Relaxed Minimax Problem gives us prediction sets that have the smallest possible length within the class ℋ while they are conditionally valid with respect to ℱ. The realizability assumption is necessary to our study of the coverage and length properties of the Relaxed Minimax Problem. In fact, for an arbitrary class of functions ℋ which is not complex enough, one can show deriving exact coverage guarantees is impossible. Perhaps the simplest way to see this is to let ℋ={x→ c | c∈} and ℱ to be a collection of complex groups of the covariate space. It is easy to verify that it is not possible to achieve exact coverage over ℱ utilizing constant thresholds coming from ℋ. Another way of looking at this issue is to compare the Relaxed Minimax Problem and the Minimax Problem. In the relaxed problem, a class of structured prediction sets, which are of the form (<ref>), are optimized to approximate the level sets given in (<ref>). Therefore, the complexity of ℋ is very important to achieve a good approximation. Employing realizability assumptions is common in CP literature to handle these situations (e.g. see <cit.>). In the following section, we will focus on the finite-sample regime and a class ℋ with bounded complexity. This will be particularly necessary for the finite-sample regime as overfitting is inevitable when using an overly complex class. We will show in the next section that we can still guarantee the conditional coverage validity up to some finite-sample error terms. 
However, theoretical arguments for length optimality become challenging because the connections between the primal and dual problems fall apart in the finite sample regime, where the assumptions of distributional continuity no longer hold. That being said, we sill see in section <ref> that our framework significantly improves over state-of-the-art methods in terms of length efficiency in real world applications. § FINITE SAMPLE SETTING: THE MAIN ALGORITHM In this section, we present and analyze our main algorithm. Assume we have access to n independent and identically distributed (i.i.d.) calibration data {(x_i,y_i)}_i=1^n from the distribution 𝒟. Let S(x, y): 𝒳×𝒴→ℝ represents a given conformity score. We consider a class of functions ℋ–e.g. a neural network parameterized by its weights. Additionally, we consider a given finite-dimensional affine class of functions ℱ that represents the level of conditional validity to be achieved. Unbiased Estimation. From (<ref>), the objective g_α(f, h) of the Relaxed Minimax Problem <ref> admits a straightforward unbiased estimator using n i.i.d. samples {(x_i,y_i)}_i=1^n: g_α, n(f, h) = 1/n∑_i=1^n[f(x_i){[S(x_i, y_i) ≤ h(x_i)] - (1-α)}] - 1/n∑_i=1^n ∫_𝒴[S(x_i, y) ≤ h(x_i)] dy. Smoothing. The objective g_α,n(f, h) is non-smooth as it involves indicator functions. To make it differentiable, we will need to smooth the objective. A common way to smooth the indicator function [a < b] is via Gaussian smoothing. Consider the Gaussian error function, erf(x) = 2/√(π)∫_0^x e^-t^2 dt, and define the smoothed indicator function as: (a, b) = 1/2( 1 + erf( a - b/√(2)σ) ), where σ controls the smoothness of the transition between 0 to 1. The value of σ is often chosen to be small, and when it approaches zero we retrieve the indicator function. The smoothed version of our objective is g_α,n(f, h) = 1/n∑_i=1^n[f(x_i){(S(x_i, y_i), h(x_i)) - (1-α)}] - 1/n∑_i=1^n ∫_𝒴(S(x_i, y) , h(x_i)) dy. Given this smoothed objective, our main algorithm is presented in Algorithm <ref>. Oftentimes in practice, the machine learning models, such as Neural Networks, are parametric (h_θ, where θ is the set of parameters). In that case, performing the iteration on h can be implemented by updating θ. Recalling the definition of ℱ = {⟨, Φ(x)⟩ | ∈^d}, each f ∈ℱ can be represented by f(x) = ⟨, Φ(x)⟩≡ f_(x). Hence, performing the iteration on f can be implements by updating ∈^d. §.§ Finite Sample Guarantees In this section we provide finite sample guarantees for the conditional coverage of the prediction sets constructed by CPL using the covering number of class ℋ (denoted by 𝒩). We analyse the properties of stationary points of CPL. This is not a very restrictive choice as it is well known in the optimization literature that the converging points of gradient descent ascent methods (like CPL) is the stationary points of the minimax problem (e.g. look at <cit.>). Let us now state the required technical assumption. A distribution 𝒫, is called L-lipschitz if we have for every real valued numbers q ≤ q^', _X ∼𝒫(X ≤ q^')-_X ∼𝒫(X ≤ q) ≤ L(q^'-q). The conditional distribution 𝒟_S|X is L-Lipschitz, almost surely with respect to 𝒟_X. Assumption <ref> is often needed for concentration guarantees in CP literature <cit.>. consider the CPL algorithm run with smoothed function g̃_α, n with smoothing parameter σ = 1/√(n). Let (f_ CPL^*, h_ CPL^*) denote a stationary point reached by Algorithm <ref>. Also, the corresponding prediction sets are given as C_ CPL^*(x) = {y ∈𝒴 | S(x,y)≤ h_ CPL^*(x)}. 
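A minimal PyTorch sketch of the resulting gradient descent-ascent loop is given below. The sign convention of the smoothed indicator (chosen so that it recovers 1[S ≤ h] as σ → 0), the y-grid evaluation of the length integral, and all architecture and step-size choices are illustrative assumptions rather than the exact implementation.

```python
import math
import torch
import torch.nn as nn

def smooth_ind(s, h, sigma):
    """Differentiable surrogate of 1[s <= h]; recovers it as sigma -> 0."""
    return 0.5 * (1.0 + torch.erf((h - s) / (math.sqrt(2.0) * sigma)))

def g_smooth(beta, h_net, phi, x, s_obs, s_grid, dy, alpha, sigma):
    """Empirical smoothed objective for f_beta = <beta, Phi(x)> and h_theta.
    phi: (n, d) basis values Phi(x_i); s_obs: (n,) scores S(x_i, y_i);
    s_grid: (n, G) scores S(x_i, y) on a y-grid with spacing dy."""
    f = phi @ beta                                       # (n,)
    h = h_net(x).squeeze(-1)                             # (n,)
    cover = smooth_ind(s_obs, h, sigma)                  # smoothed coverage indicators
    coverage_term = (f * (cover - (1 - alpha))).mean()
    length_term = (smooth_ind(s_grid, h[:, None], sigma).sum(1) * dy).mean()
    return coverage_term - length_term

def cpl_fit(phi, x, s_obs, s_grid, dy, alpha=0.1, sigma=None,
            steps=2000, lr_f=1e-2, lr_h=1e-3):
    n, d = phi.shape
    sigma = sigma or 1.0 / math.sqrt(n)                  # sigma = 1/sqrt(n) as in the theory
    beta = torch.zeros(d, requires_grad=True)
    h_net = nn.Sequential(nn.Linear(x.shape[1], 64), nn.ReLU(), nn.Linear(64, 1))
    opt_f = torch.optim.SGD([beta], lr=lr_f)             # outer minimization over f
    opt_h = torch.optim.Adam(h_net.parameters(), lr=lr_h)  # inner maximization over h
    for _ in range(steps):
        loss = g_smooth(beta, h_net, phi, x, s_obs, s_grid, dy, alpha, sigma)
        opt_f.zero_grad(); opt_h.zero_grad()
        loss.backward()
        for p in h_net.parameters():                     # flip sign: ascend on theta
            p.grad *= -1.0
        opt_f.step(); opt_h.step()                       # descend on beta, ascend on theta
    return beta.detach(), h_net
```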
Let us also remind that each element f∈ℱ can be represented by a β∈^d, where we use the notation f(x) = ⟨, Φ(x)⟩≡ f_(x) (look at section <ref> for more details). Under assumption <ref>, let (f_ CPL^*, h_ CPL^*) denote a stationary point reached by Algorithm <ref> (if exists) and C_ CPL^*(x) its corresponding prediction sets. Then with probability 1-δ we have, |[f(X) {[Y∈ C_ CPL^*(X)] - (1-α)}]| ≤c_1√(ln(2d𝒩(ℋ, d_∞, 1/n)/δ)) + c_2/√(n), ∀ f_∈ℱ, where c_1 = √(2)B||||_1, c_2 = ||||_1BL√(π/2) + ||||_12B/√(2π), B = max_isup_x∈𝒳Φ_i(x), and Φ is the basis for ℱ (see Section <ref>). PAC-style guarantees of the order O(1/√(n)) are generally optimal and common in CP literature <cit.>. Look at Appendix <ref> for further discussions and interpretations. § EXPERIMENTS In this section, we empirically examine the length performance of CPL compared to state-of-the-art (SOTA) baselines under varying levels of conditional validity requirements. This section is organized into three parts, each dedicated to setups that demand a specific level of conditional validity. Throughout this section, we demonstrate across different data modalities and experimental settings that CPL provides significantly tighter prediction sets (i.e. smaller length) that are conditionally valid. §.§ Part I: Marginal Coverage In this section we aim at showcasing the ability of CPL in improving length efficiency in designing prediction sets with marginal coverage validity. We consider two different setups: First, we consider a regression setting where we report the performance of CPL along with multiple state-of-the-art conformal methods averaged over 11 standard real world datasets. Second, we focus on a multiple-choice question answering task using LLMs. We show how CPL can be used to quantify the uncertainty of LLMs with tighter prediction sets. §.§.§ Real World Regression Data We compare the performance of our method, CPL, in terms of marginal coverage and length, to various split conformal prediction (SCP) methodologies. Specifically, we compare with: (i) Split Conformal (SC) <cit.> and Jackknife <cit.>, as the main methods in standard conformal prediction to achieve marginal validity; (ii) Local Split Conformal (Local-SC) <cit.> and LocalCP <cit.>, as locally adaptive variants of Split Conformal; (iii) Conformalized Quantile Regression (CQR) <cit.>, as a state-of-the-art method for achieving marginal coverage with small set size. Following the setup from <cit.>, we evaluate performance on 11 real-world regression datasets (see Appendix <ref>) and report the average performance over all of them. Each dataset is standardized, and the response is rescaled by dividing it by its mean absolute value. We split each dataset into training (40%), calibration (40%), and test (20%) sets. First Part: We focus on methods applicable to black-box predictors and conformity score. We compare SC, Jackknife, Local-SC, LocalCP, and CPL using the conformity score S(x,y) = |y-f(x)|, where f is a NN with two hidden layers of 64 ReLU neurons, trained on the training set. Second Part: We evaluate CQR, which requires quantile regressors trained on the training set. We also examine our method combined with CQR (i.e. we use the score obtained by CQR), referred to as CQR+CPL. Tables 1 and 2 summarize our results, reporting average performance over 100 random splits of the datasets. All reported numbers have standard deviations below 1 percent. In Table 1, our method is shown to improve the interval length using a generic conformity score. 
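The two quantities reported in these tables, empirical marginal coverage and average interval length, are computed on the held-out test split as sketched below (array names are placeholders):

```python
import numpy as np

def evaluate_intervals(lo, hi, y_test):
    """lo, hi: (n_test,) interval endpoints; y_test: (n_test,) labels."""
    coverage = np.mean((y_test >= lo) & (y_test <= hi))
    avg_length = np.mean(hi - lo)
    return coverage, avg_length
```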
In Table 2, our method shows strength with sophisticated scores, highlighting that our minimax procedure significantly enhances length efficiency across various tasks and scoring methods. Our framework's advantages over CQR are: (i) It can use CQR scores to improve length efficiency, and (ii) it can use other scores, including residual scores with pre-trained predictors. This is advantageous when training quantile regressions from scratch is costly, while pre-trained models are accessible. §.§.§ Multiple Choice Text Data We use multiple-choice question answering datasets, including TruthfulQA <cit.>, MMLU <cit.>, OpenBookQA <cit.>, PIQA<cit.>, and BigBench <cit.>. The task is to quantify the uncertainty of Llama 2 <cit.> and create prediction sets using this model. We follow a procedure similar to the one proposed by <cit.> described below. For each dataset, we tackle the task of multiple-choice question answering. We pass the question to Llama 2 using a simple and fixed prompt engineering approach throughout the experiment: We then look at the logits of the first output token for the options A, B, C, and D. By applying a softmax function, we obtain a probability vector. The conformity score is defined as (1 - probability of the correct option), which is similar to (1 - f(x)_y) for classification. Our method is implemented using ℋ as a linear head on top of a pre-trained GPT-2 <cit.>. GPT-2 has 768-dimensional hidden layers, so the optimization for the inner maximization involves a 768-dimensional linear map from GPT-2's last hidden layer representations to a real-valued scalar. We also implemented the method of <cit.>, which directly applies split conformal method on the scores. Figures <ref> shows the performance of our method (CPL) compared to the baseline. CPL achieves significantly smaller set sizes while ensuring proper 90% coverage. §.§ Part II: Group-Conditional Coverage We use a synthetic regression task as in <cit.> to compare CPL with BatchGCP <cit.>. The covariate X = (X_1, ⋯, X_100) is a vector in ℝ^100. The first ten coordinates are independent uniform binary variables, and the remaining 90 coordinates are i.i.d. Gaussian variables. The label y is generated as y = ⟨θ, X ⟩ + ϵ_X where ϵ_X is a zero-mean Gaussian with variance σ_X^2 = 1 + ∑_i=1^10iX_i +(40·[∑_i=11^100X_i ≥ 0]-20). We generate 150K training samples, 50K calibration data points, and 50K test data points. We evaluate all algorithms over 100 trials and report average performance. We define 20 overlapping groups based on ten binary components of X. For each i in 1 to 10, Group 2i-1 corresponds to X_i = 0 and Group 2i to X_i = 1. BatchGCP and CPL are implemented to provide group-conditional coverage for these groups. We use a 2-hidden-layer NN with layers of 20 and 10 neurons for the inner maximization. We also include the Split Conformal method and an optimal oracle baseline, calculated numerically using the optimal formulation of Proposition <ref>. As seen in Figure <ref>, the Split Conformal solution under covers in high-variance groups and over covers in low-variance groups. CPL and BatchGCP provide near-perfect coverage in all groups, matching the Optimal Oracle. The mean interval plot shows that BatchGCP performs similarly to Split Conformal, while CPL significantly reduces the average length, nearing the Optimal Oracle solution. §.§ Part III: Coverage w.r.t. 
a General Class of Covariate Shifts We use the RxRx1 dataset of WILDS repository, containing cell images to predict one of 1339 genetic treatments across 51 experiments, which present covariate shifts. Our goal is to make prediction sets that maintain valid coverage across these shifts. We compare CPL to Conditional Calibration algorithm <cit.>, which uses quantile regression for valid conditional coverage under covariate shifts. To implement both methods, we delineate covariate shifts as in <cit.> by splitting the calibration data. We use ℓ_2-regularized multinomial linear regression to estimate each image's likelihood of originating from an experiment, defining the covariate shift class. For predicting genetic treatment, we use a pre-trained ResNet50 model, f(x), pre-trained on 37 experiments. Images from the remaining 14 experiments are split into calibration and test sets. For our method's inner maximization, we train a linear head on the last-layer representations of the pre-trained model. Conformity scores are calculated as: for each image x, denote f^i(x)^1339_i=1 as the weights f(x) assign to the treatments. We apply Temperature Scaling and softmax to convert these into probability weights π^i(x):=exp(T f_i(x)) /(∑_j exp(T f_j(x))), where T is the temperature parameter. The score S(x, y) is the sum of π_i(x) for treatments where π_i(x)>π_y. Figure <ref> shows the performance of CPL, Conditional Calibration, and Split Conformal. CPL and Conditional Calibration maintain valid coverage, but CPL significantly reduces the average set size, due to its emphasis on learning features to reduce prediction set size while maintaining conditional validity. § CONCLUSION AND DISCUSSION In this work we studied the fundamental interaction between conditional validity and length efficiency, as two major facets of conformal methods. Building upon the literature in CP, our studies strengthen the foundational connection between length optimal conformal prediction and level set estimation. As a result of that we proposed a novel algorithm, CPL, which significantly improves the length efficiency over state-of-the-arts methods in the literature. The primary objective of this work was to introduce a duality perspective, enabling the theoretical evaluation of both conditional validity and length optimality concurrently. We believe this perspective can potentially open many directions for both theoretical and algorithmic exploration in future works. Perhaps one limitation of our framework is that currently it only handles conditional coverage coming from finite-dimensional classes. To go to infinite-dimensional classes, we will need to explore duality results that hold for optimization problems with infinitely-many constraints. Such problems have recently been introduced in the ML literature and can be useful for CP as well. Another important question is about length: How can we compare the length of the solution of the Relaxed Minimax Problem to the solution of the original minimax problem C^*? We believe that there is a rich theory about “length stability” that is yet to be developed when we restrict the prediction sets (using a score S and a class ℋ).We leave this as a future work. Also, we believe, even though challenging, it should be possible to develop a more detailed theoretical framework to address length optimality in the finite-sample regime. § ACKNOWLEDGMENTS This work is supported by the NSF Institute for CORE Emerging Methods in Data Science (EnCORE). 
The authors wish to thank Luiz Chamon for helpful discussions. § OUTLINE * Appendix <ref> is dedicated to some interpretations of the finite sample guarantees of Section <ref>. * Appendix <ref> lists all the Remarks that we moved from the main body to the Appendix due to space limits. * Appendix <ref> provides the Lemmas, including their proofs, that are necessary for the development of our theoretical framework. * Section <ref> provides the proofs of all the statements proposed in Section <ref> along with an extra Theorem. * Section <ref> provides the proofs of all the statements proposed in Section <ref>. * Section <ref> provides references to all the 11 regression datasets used in the experimental setup of Section <ref>. Under assumption <ref>, let (f_ CPL^*, h_ CPL^*) denote a stationary point reached by Algorithm <ref> (if one exists) and C_ CPL^*(x) its corresponding prediction sets. Then with probability 1-δ we have, |[f(X) {[Y∈ C_ CPL^*(X)] - (1-α)}]| ≤c_1√(ln(2d𝒩(ℋ, d_∞, 1/n)/δ)) + c_2/√(n), ∀ f_β∈ℱ, where c_1 = √(2)B||β||_1, c_2 = ||β||_1 BL√(π/2) + ||β||_1 2B/√(2π), B = max_i sup_x∈𝒳Φ_i(x), and Φ is the basis for ℱ (see Sec. <ref>). We now provide three Corollaries to this Theorem. The case of marginal validity: For this case we have to pick ℱ={x ↦ c | c ∈ℝ}, i.e. constant functions. In this case one can see B=1, so we have the following result. Under assumption <ref>, let (f_ CPL^*, h_ CPL^*) denote a stationary point reached by Algorithm <ref> (if one exists) and C_ CPL^*(x) its corresponding prediction sets. Then with probability 1-δ we have, |(Y∈ C_ CPL^*(X)) - (1-α)| ≤c_1√(ln(2𝒩(ℋ, d_∞, 1/n)/δ)) + c_2/√(n), where c_1 = √(2), c_2 = L√(π/2) + 2/√(2π). The case of group-conditional validity: Let G_1, ⋯, G_m be a collection of groups: each group is a subset of the covariate space, i.e. G_i⊆𝒳. These groups can be fully arbitrary and highly overlapping. Define f_i(x) = [ x ∈ G_i] for every i ∈ [1, m]. We can then run CPL using the class of covariate shifts ℱ={∑_i=1^mβ_if_i(x) | β_i∈ℝ for every i ∈ [1, ⋯, m]} and obtain tight prediction sets with group-conditional coverage validity. In this case one can see B=1, so we have the following result. Under assumption <ref>, let (f_ CPL^*, h_ CPL^*) denote a stationary point reached by Algorithm <ref> (if one exists) and C_ CPL^*(x) its corresponding prediction sets. Then with probability 1-δ we have, |(Y∈ C_ CPL^*(X) | X ∈ G_i) - (1-α)| ≤c_1√(ln(2m𝒩(ℋ, d_∞, 1/n)/δ)) + c_2/√(n), ∀ i∈ [1, ⋯, m], where c_1 = √(2), c_2 = L√(π/2) + 2/√(2π). § FURTHER REMARKS Another important special case of conditional validity with respect to an affine finite-dimensional class of covariate shifts is the framework introduced by <cit.>, which provides CP methods that guarantee valid coverage when there is a known covariate shift between the calibration data and the test data. This falls within our framework as the special case where we consider the class ℱ = {x ↦ cf(x) | c ∈ℝ}, where f is the known covariate shift (i.e. the likelihood ratio between test and calibration data). We will need to assume that the members of the class ℋ are bounded functions; i.e. for every h∈ℋ and x∈𝒳 we have h(x)∈[0, Γ]. Two points are in order. (i) This assumption is purely for the sake of theory development. In practice, one can run our algorithm with any off-the-shelf machine learning class of models. In fact, in Section <ref>, we run our algorithm with a variety of models including deep neural networks (ResNet50) and showcase its performance. (ii) This assumption is not very far from practice.
In fact, one can satisfy this assumption by using a Sigmoid activation function (or a scaled version of it) on the output layer to satisfy this assumption. Similar assumptions has been posed in the literature <cit.>. § TECHNICAL LEMMAS Let f(a, x) = 1/2( 1 + erf( a - x/√(2)σ) ) be the smoothed indicator function, where erf(x) is the error function, defined as erf(x) = 2/√(π)∫_0^x e^-t^2 dt, and σ is the variance of the Gaussian kernel used for smoothing. Then f(a, x) is Lipschitz continuous with respect to x with a Lipschitz constant 1/√(2π)σ. Proof: To prove that f(a, x) is Lipschitz continuous with respect to x, we compute the derivative of f(a, x) with respect to x: f(a, x) = 1/2( 1 + erf( a - x/√(2)σ) ) ∂/∂ x f(a, x) = 1/2∂/∂ x( 1 + erf( a - x/√(2)σ) ) = 1/2·2/√(π)·-1/√(2)σ e^-( a - x/√(2)σ)^2 Simplifying this, we get: ∂/∂ x f(a, x) = -1/√(2π)σ e^-(a - x)^2/2σ^2 To show that this derivative is bounded, observe that: | ∂/∂ x f(a, x) | = | -1/√(2π)σ e^-(a - x)^2/2σ^2| ≤1/√(2π)σ Since 1/√(2π)σ is a constant, we conclude that f(a, x) is Lipschitz continuous with respect to x with Lipschitz constant 1/√(2π)σ. Let [a < b] be the smoothed indicator function defined by [a < b] = 1/2( 1 + erf( a - b/√(2)σ) ) where erf(x) is the error function, defined as erf(x) = 2/√(π)∫_0^x e^-t^2 dt, and σ is the variance of the Gaussian kernel used for smoothing. The indicator function [a < b] is also defined as [a < b] = 1 if a < b 0 otherwise The error between the smoothed indicator function [a < b] and the actual indicator function [a < b], integrated over all a, is given by E = ∫_-∞^∞| [a < b] - [a < b] | da This integral evaluates to E = √(π/2)σ Proof: To compute the integral, we consider the two cases separately: a < b and a ≥ b. For a < b, the indicator function [a < b] = 1, so the absolute difference is |[a < b] - 1| = | 1/2( 1 + erf( a - b/√(2)σ) ) - 1 | = 1/2| erf( a - b/√(2)σ) - 1 | For a ≥ b, the indicator function [a < b] = 0, so the absolute difference is |[a < b] - 0| = 1/2( 1 + erf( a - b/√(2)σ) ) Combining these, the integral can be written as E = ∫_-∞^b1/2| erf( a - b/√(2)σ) - 1 | da + ∫_b^∞1/2( 1 + erf( a - b/√(2)σ) ) da We know that erf(x) = 2 Φ(x√(2)) - 1 where Φ(x) is the CDF of the standard normal distribution. Using symmetry properties of the error function and Gaussian integrals, we can simplify the calculations as follows: ∫_-∞^b( erf( a - b/√(2)σ) - 1 ) da = -√(π/2)σ ∫_b^∞( 1 + erf( a - b/√(2)σ) ) da = √(π/2)σ Thus, the total error integral is: E = 1/2( √(π/2)σ + √(π/2)σ) = 1/2( 2√(π/2)σ) = √(π/2)σ Let f∈ℱ be a fixed function such that sup_x∈𝒳f(x)≤ B and let us define, Z_h = |1/n∑_i=1^n [f(x_i){(S(x_i, y_i) , h(x_i))- (1-α)}] - [f(x){(S(x, y) , h(x)) - (1-α)}]| The following uniform convergence holds: Fixing any ε > 0, we have with probability 1-δ, |Z_h| ≤2Bε/√(2π)σ + √(2)B√(ln(2𝒩(ℋ, d_∞, ε)/δ))/√(n) for every h ∈ℋ. Proof. Let h_1, h_2 ∈ℋ be two arbitrary functions. 
We have, |Z_h_1 - Z_h_2| (a)≤|1/n∑_i=1^n [f(x_i){(S(x_i, y_i) , h_1(x_i)) - (S(x_i, y_i) , h_2(x_i))}] - [f(x){(S(x, y) , h_1(x)) - (S(x, y) , h_2(x))}]| (b)≤|1/n∑_i=1^n [f(x_i){(S(x_i, y_i) , h_1(x_i)) - (S(x_i, y_i) , h_2(x_i))}]| + |[f(x){(S(x, y) , h_1(x)) - (S(x, y) , h_2(x))}]| (c)≤ B|1/n∑_i=1^n [{(S(x_i, y_i) , h_1(x_i)) - (S(x_i, y_i) , h_2(x_i))}]| + B|[{(S(x, y) , h_1(x)) - (S(x, y) , h_2(x))}]| (d)≤B/√(2π)σ|1/n∑_i=1^n [|h_1(x_i) -h_2(x_i)| ]| + B/√(2π)σ|[|h_1(x) -h_2(x)|]| (e)≤2B/√(2π)σsup_x∈𝒳|h_1(x) -h_2(x)| where (a) is a triangle inequality, (b) is another triangle inequality, (c) comes from the definition of B, (d) follows from the Lipschitness Lemma <ref>, and finally (e) is from the definition of sup. Now let us define d_∞(h_1, h_2) = sup_x∈𝒳|h_1(x) -h_2(x)|. Therefore, |Z_h_1 - Zh_2|≤ 2Bd_∞(h_1, h_2). By Hoeffding’s inequality for general bounded random variables (look at chapter 2 of <cit.>), fixing h ∈ℋ, we have with probability 1-δ, |Z_h| ≤√(2)B√(ln(2/δ))/√(n). Now as a result of Lemma 5.7 of <cit.> (a standard covering number argument) we conclude, |Z_h| ≤2Bε/√(2π)σ + √(2)B√(ln(2𝒩(ℋ, d_∞, ε)/δ))/√(n) for every h ∈ℋ. The following equivalence between the three problems hold, Structured Minimax Problem ≡ Convex Structured Minimax Problem ≡ -Convex Structured Primary Problem where by -Convex Structured Primary Problem we mean the optimal value of Convex Structured Primary Problem is equal to the negative of the optimal values of the other two problems. We start by the following claim which will be proven shortly. Convex Structured Minimax Problem is equivalent to the Structured Minimax Problem. Let us remind the objective, g_α(f,C)= [f(X){C(X, Y) - (1-α)}] - ∫_𝒴C(X, y)dy, which is linear in terms of C. Now fixing f∈ℱ, since 𝒞_ℋ^∞∈𝒞_ℋ^∞^con we have, C ∈𝒞_ℋ^∞^conMaximize g_α(f, C) ≥C ∈𝒞_ℋ^∞Maximize g_α(f, C). We now proceed by showing the other direction. Let {C_i}_i=1^∞, where C_i∈𝒞_ℋ^∞^con and lim_i→∞g_α(f, C_i)=C ∈𝒞_ℋ^∞^conMaximize g_α(f, C) (pay attention that it is not trivial that this maximum is achievable by a single member of its domain). Now by the definition of 𝒞_ℋ^∞^con, we know each C_i is a convex combination of finitely many members of 𝒞_ℋ^∞. That is to say, for each C_i, there exists {C_ij}_j=1^T where C_ij∈𝒞_ℋ^∞ and, C_i = ∑_j=1^T a_jC_ij(x,y), ∑_j=1^Ta_j=1, a_j≥ 0(∀ 1≤ i≤ T). Hence, g_α(f,C_i) = g_α(f,∑_j=1^T a_jC_ij(x,y)) = ∑_j=1^T a_jg_α(f,C_ij)≤max_j=1^T g_α(f,C_ij) Let us assume that final maximum is achieved by the index j_i. Therefore, for each C_i∈𝒞_ℋ^∞^con there exists a C_ij_i∈𝒞_ℋ^∞ such that g_α(f,C_i)≤ g_α(f,C_ij_i). Hence we have, C ∈𝒞_ℋ^∞^conMaximize g_α(f, C) = lim_i→∞g_α(f, C_i) ≤lim_i→∞g_α(f, C_ij_i)≤C ∈𝒞_ℋ^∞Maximize g_α(f, C) Putting everything together, for every f∈ℱ we have, C ∈𝒞_ℋ^∞^conMaximize g_α(f, C) = C ∈𝒞_ℋ^∞Maximize g_α(f, C), which concludes the claim. Now Let us define the Convex Structured Primary Problem. Convex Structured Primary Problem: C ∈𝒞_ℋ^∞^conMinimize [len(C(X))] subject to 𝔼[f(X) {C(X,Y) - (1-α)}] = 0, ∀ f ∈ℱ As the Convex Structured Primary Problem is a linear minimization over a convex set, 𝒞_ℋ^∞^con, with finitely many linear constraints, strong duality holds for this problem (See Theorem 1, Section 8.3 of <cit.>; Also see Problem 7 in Chapter 8 of the same reference), which means Convex Structured Primary Problem is equivalent to Convex Structured Minimax Problem (Here we are using the fact that 𝒟_S|X is continuous hence the Convex Structured Primary Problem is feasible). 
Now putting everything together we have, Structured Minimax Problem ≡ Convex Structured Minimax Problem ≡ -Convex Structured Primary Problem The negative sign appeared as our definition of _α(f, h) has a minus sign with respect to the conventional definition of Lagrangian. Given a random variable Z taking values in [0, 1] and [Z]≥γ>0, we have, (Z≥γ/3) ≥2γ/3. Let us define θ = (Z≥γ/3). Now we have, γ≤[Z] ≤θ× 1 + (1-θ)×γ/3. Hence, θ(1-γ/3)≥ 2γ/3. Therefore, θ≥2γ/3/1-γ/3≥2γ/3. Consider δ>0 , 1/10>γ >0 and N numbers z_1, ⋯, z_N such that, ∑_i=1^N z_i = γ and z_i ∈ [0, δ] for every 1≤ i≤ N. Then there exists a subset S⊆ [N] such that, ∑_i∈ S z_i ∈ [γ^1/2δ^1/4/2, 2γ^1/2δ^1/4]. Let us define for every 1≤ i≤ N, y_i= z_i with probability γ^-1/2δ^1/4, 0 with probability 1 - γ^1/2δ^1/4, and assume that y_is are independently generated. By Hoeffding inequality we have, (|∑_i=1^N y_i - ∑_i=1^N[y_i]|≥ε) ≤ 2exp{-2ε^2/∑_i=1^N z_i^2}. This results in, (|∑_i=1^N y_i - γγ^-1/2δ^1/4|≥ε) ≤ 2exp{-2ε^2/δγ}. Now setting ε = 1/2γ^1/2δ^1/4, we get, (|∑_i=1^N y_i - γ^1/2δ^1/4|≥1/2γ^1/2δ^1/4) ≤ 2exp{-1/2δ^1/2}< 1. This means there exists a deterministic realization that satisfies <ref>. § PROOFS OF SECTION <REF> Here we provide a version of Theorem <ref>, which does not need the assumption <ref> . Before that, let us remind that each element f∈ℱ can be represented by a β∈^d, where we use the notation f_β(x)=⟨β, Φ(x)⟩ (look at section <ref> for more details). Let us assume 𝒟_S|X and 𝒟_X are continuous. Let ℱ be a bounded finite-dimensional affine class of covariate shifts and ℋ be the class of all measurable functions. Let f^* denote the optimal solution to the outer minimization and OPT denotes the optimal value of the Relaxed Minimax Problem <ref>. Finally, let L^* denotes the optimal value for the Relaxed Primary Problem <ref>. For every ε>0, there exists h^* ∈ℋ such that, (i) |g_α(f^*, h^*) - OPT|≤ε. (ii) For every f_β∈ℱ we have, |[f_β(X_n+1){[Y_n+1∈ C_h^*^S(X_n+1)]-(1-α)}]|≤ε ||β||_1. (iii) len(C_h^*^S(X)) ≤ L^* + ε. Put it simply, Theorem <ref> says fixing any ε>0, there is an ε-close optimal solution of the Relaxed Minimax Problem (statement (i)) such that it is ε-close to the feasible solutions of the Relaxed Primary Problem which have perfect conditional coverage (statement (ii)), and it achieves an at most ε-larger prediction set length compare to the smallest possible, which is the solution of Relaxed Minimax Problem (statement (iii)). Proof of Theorem <ref>:Let us define ℋ^∞ as the class of all measurable functions from 𝒳 to . Now let us restate the Structured Minimax Problem for ℋ^∞. Structured Minimax Problem: f ∈ℱMinimize h ∈ℋ^∞Maximize g_α(f, h). We can now rewrite this problem in the equivalent form of the following, where we change the domain of the maximization from ℋ_∞ to corresponding set functions. Let 𝒞_ℋ^∞ be the set of all function from C(x,y) : 𝒳×𝒴→ such that there exists h∈ℋ^∞ such that C(x,y)=1[S(x,y) ≤ h(x)]. Structured Minimax Problem: f ∈ℱMinimize C ∈𝒞_ℋ^∞Maximize g_α(f, C). We now proceed by defining a convexified version of this problem. Let us define 𝒞_ℋ^∞^con = {∑_i=1^T a_iC_i(x,y) | ∑_i=1^Ta_i=1, a_i≥ 0(∀ 1≤ i≤ T) , C_i ∈𝒞_ℋ^∞ (∀ 1≤ i≤ T), T ∈}. Now we can define the following problem. Convex Structured Minimax Problem: f ∈ℱMinimize C ∈𝒞_ℋ^∞^conMaximize g_α(f, C). 
Now applying lemma <ref> we have, Structured Minimax Problem ≡ Convex Structured Minimax Problem ≡ -Convex Structured Primary Problem The negative sign appears because our definition of _α(f, h) has a minus sign with respect to the conventional definition of Lagrangian. Let us call the optimal value for all these three problems OPT (here pay attention that based off of our definition OPT is always a negative number, hence when we are addressing the optimal length the term -OPT shows up). With some abuse of notation, Let us assume {C_i}_i=1^∞, where C_i∈𝒞_ℋ^∞^con is a feasible sequence of Convex Structured Primary Problem which achieves the optimal value in the limit, i.e, lim_i→∞g_α(f, C_i)=-OPT. This means, for any ε≥ 0, there is an index i_ε such that |len(C_i_ε)+OPT|≤ε. Let us fix the value of ε for now, we will determine its value later. By definition, there should be {C_i}_i=1^t and corresponding {h_i}_i=1^t such that C_i∈𝒞_ℋ^∞, h_i∈ℋ^∞, C_i=[S(x, y)≤ h_i(x)], and we have C_i_ε = ∑_i=1^t a_iC_i(x,y) and ∑_i=1^t a_i=1, a_i≥ 0 (∀ 1≤ i≤ t). Now Let us define L_i=len(C_i) for every 1≤ i≤ t (all of these expectations are finite cause |len(C_i_ε)+OPT|≤ε). Now if we look at the truncations of these expectations, by Dominated Convergence Theorem we have, [len(C_i)[len(C_i)>k]]k→∞⟶0. This means, for any ε≥ 0, there exists a real valued number γ_1 such that we can define the set Γ_γ_1⊂𝒳 so that (i) for every 1≤ i≤ t and for every x∈Γ_γ_1, len(C_i(x))≤γ_1, and (ii) [len(C_i_ε)[x∉Γ_γ_1]]≤ε. Now remind that 𝒳=^p. Since we know 𝒟_X is continuous then by Dominated Convergence Theorem there exists large enough real value γ_2 such that (X∉B(0, γ_2))≤ε where B(0, γ_2) is the ball with radius γ_2 and ε is a positive real number that we will determine later. Similarly we can again look at L_ik = L_i[||X||_2≥ k]. By Dominated Convergence Theorem there is a γ_3 such that for every 1≤ i≤ t we have [L_i[||X||_2≥γ_3]]≤ε. Let R=max{γ_1, γ_2, γ_3, 1} and A = B(0, R)∩ L_R. Now the rest of the proof proceed as follows. We will construct h^* ∈ℋ^∞, and as a result the corresponding set C^*(x, y) = [S(x,y)≤ h^*(x)], in a way that coverage properties and the length of C^* is close to the ones for C_i_ε. Let us proceed with the construction of h^*. case 1: x∉ A. For this case we define h^*(x)=0. case 2: x∈ A. This case is more involved. Let us fix ε_1≥ 0. Let {B(b_i, ε_1)}_i=1^N be a minimal ε_1 covering for A. We can then further prune this covering balls to {W_i}_i=1^N, where (i) W_i∩ W_j=∅ for every i≠ j, (ii) ⋃_i=1^N W_i = A, and (iii) W_i⊆B(b_i, ε_1) for every i. Now we use the following randomized method to prove a desirable construction for h^* exists. Let {Z_i}_i=1^∞ be a collection of uniform iid discrete random variables where they take values between 1 and t. Then we define the random function h_rand(x) inside A in the following way. By construction, for each x∈ A there is a unique W_i that includes x. We set h_rand(x)=h_Z_i(x). We denote the corresponding set function to h_rand by C_rand. Now Let us remind that F is a finite dimensional affine class of functions. In particular, let Φ(x) = [ϕ_1(x), ⋯, ϕ_d(x)] be a predefined function (a.k.a. the finite-dimensional basis). The class ℱ is defined as ℱ = {⟨, Φ(x)⟩ | ∈^d}. Now for each k ∈ [1, ⋯, d] we can do the following calculations. 
|X,Y, h_rand [∑_i=1^N ϕ_k(X)(C_rand(X, Y) - (1-α))[X∈ W_i]] | =| X,Y, h_rand[ϕ_k(X)(C_rand(X, Y) - (1-α))[X ∈ A]]| =| ∑_i=1^t a_iX,Y[ϕ_k(X)(C_i(x,y)- (1-α))[X ∈ A]] | =| X,Y[ϕ_k(X)(C_i_ε(X, Y) - (1-α))[X ∈ A]] | =| 0 - X,Y[ϕ_k(X)(C_i_ε(X, Y) - (1-α))[X ∉ A]] | ≤X,Y[|ϕ_k(X)||(C_i_ε(X, Y) - (1-α))||[X ∉ A]|] ≤ B [X ∉ A] ≤ B(B(0, R) + L_R) ≤ 2Bε, where B is the upper bound for ϕ_k. We now define the random variable E_i = X,Y[ϕ_k(X)(C_rand(X, Y) - (1-α))[X∈ W_i]] for every 1≤ i≤ N. The computation above indicates a bound on the expectation of the sum of these variables. By the construction of C_rand we know that they are independent. Furthermore, each random variable E_i is bounded by the quantity 2BMvol(B(0, ε_1)), where B is the upper bound for ϕ_k, M=max_x∈B(0, R)p(x) (which is finite as B(0, R) is a compact set and we assumed 𝒟_𝒳 is continuous), and B(0, ε_1) appears as a result of W_i⊆B(b_i, ε_1). Hence, we can apply Hoeffding inequality for general bounded random variables and derive for every ε≥ 0, (|∑_i=1^N E_i - [∑_i=1^N E_i]|<ε)≥ 1-2exp{-2ε^2/4NB^2M^2vol(B(0, ε_1))^2}. This results in, (|X,Y[ϕ_k(X)(C_rand(X, Y) - (1-α))[X∈ A]]|<ε + 2Bε) ≥ 1-2exp{-2ε^2/4NB^2M^2vol(B(0, ε_1))^2}. Furthermore, |X,Y[ϕ_k(X)(C_rand(X, Y) - (1-α))[X∉ A]]| = X,Y[|ϕ_k(X)||(C_rand(X, Y) - (1-α))|[X∉ A]] ≤ B [X ∉ A] ≤ B(B(0, R) + L_R) ≤ 2Bε. Therefore, (|X,Y[ϕ_k(X)(C_rand(X, Y) - (1-α))]|<ε + 4Bε) ≥ 1-2exp{-2ε^2/4NB^2M^2vol(B(0, ε_1))^2}. Now since we have this argument for each 1≤ k≤ d, by a union bound we have, (∀ k: 1≤ k≤ d: |X,Y[ϕ_k(X)(C_rand(X, Y) - (1-α))]|<ε +4Bε) ≥ 1 - 2dexp{-2ε^2/4NB^2M^2vol(B(0, ε_1))^2}. Now one can argue a similar inequality should hold for length too. This time we can define Q_i = X[∫_𝒴C_rand(X, y)dy[X∈ W_i]]. Now these variables are also independent and as a result of the construction of the covering set they are bounded by R Mvol(B(0, ε_1). We also have, |X, h_rand∑_i=1^N Q_i[X∈ W_i]+ OPT| = |X, h_rand∫_𝒴C_rand(X, y)dy[X∈ A]+OPT| = |∑_i=1^t a_iX∫_𝒴C_i(X, y)dy[X∈ A]+OPT| =|X∫_𝒴C_i_ε(X, y)dy[X∈ A]-OPT| =|X∫_𝒴C_i_ε(X, y)dy+ OPT - X∫_𝒴C_i_ε(X, y)dy[X∉ A] | ≤ε +X∫_𝒴C_i_ε(X, y)dy[X∉ A] ≤ε + X∫_𝒴C_i_ε(X, y)dy[X∉B(0, R)] + X∫_𝒴C_i_ε(X, y)dy[X∉ L_R)] ≤ 3ε. Now again by applying Hoeffding inequality for general bounded random variables we have for every ε≥ 0, (|∑_i=1^N Q_i - [∑_i=1^N Q_i]|<ε)≥ 1-2exp{-2ε^2/NR^2M^2vol(B(0, ε_1))^2}. This results in, (|X∫_𝒴C_rand(X, y)dy[X∈ A] + OPT|<4ε)≥ 1-2exp{-2ε^2/NR^2M^2vol(B(0, ε_1))^2}. Furthermore, by construction of C_rand, |X∫_𝒴C_rand(X, y)dy[X∉ A]|=0. Therefore, (|X∫_𝒴C_rand(X, y)dy + OPT|<4ε)≥ 1-2exp{-2ε^2/NR^2M^2vol(B(0, ε_1))^2}. Now a union bound between (<ref>) and (<ref>) leads to, (∀ k: 1≤ k≤ d: |X,Y[ϕ_k(X)(C_rand(X, Y) - (1-α))]|<ε +4Bε and |X∫_𝒴C_rand(X, y)dy + OPT|<4ε) ≥ 1 - 2dexp{-2ε^2/4NB^2M^2vol(B(0, ε_1))^2}-2exp{-2ε^2/NR^2M^2vol(B(0, ε_1))^2}. Recalling, N=𝒩(A, ||.||_2, ε_1), where 𝒩 denotes the covering number, we have the following inequality (look at chapter 4 of <cit.>), N≤ (3/ε_1)^pvol(A)/vol(B(0, 1)), vol(B(0, ε_1))=ε_1^pvol(B(0, 1)). This results in, (∀ k: 1≤ k≤ d: |X,Y[ϕ_k(X)(C_rand(X, Y) - (1-α))]|<ε +4Bε and |X∫_𝒴C_rand(X, y)dy + OPT|<4ε) ≥ 1 - 2dexp{-2ε^2/3^p4vol(A)B^2M^2ε_1^pvol(B(0, 1))}-2exp{-2ε^2/3^pvol(A)R^2M^2ε_1^pvol(B(0, 1))}. Since we have this inequality for every ε_1>0, we can pick a small enough ε_1 such that, (∀ k: 1≤ k≤ d: |X,Y[ϕ_k(X)(C_rand(X, Y) - (1-α))]|<ε +4Bε and |X∫_𝒴C_rand(X, y)dy + OPT|<4ε)≥1/2. 
This means there is a realization of h_rand and accordingly C_rand, which we denote them by h^* and C^* such that, ∀ k: 1≤ k≤ d: |X,Y [ϕ_k(X)(C^*(X, Y) - (1-α))]|<ε +4Bε and |X∫_𝒴C^*(X, y)dy + OPT|<4ε. Now since we proved this for any ε>0, we can put ε = ε^'min{1/4, 1/1+4B}. Therefore the following statement holds. For every ε^'>0 there exists h^*∈ℋ^∞ and its corresponding set function C^*(x, y) = [S(x,y) ≤ h^*(x)] such that, ∀ k: 1≤ k≤ d: |X,Y [ϕ_k(X)(C^*(X, Y) - (1-α))]|<ε^' and |X∫_𝒴C^*(X, y)dy + OPT|<ε^'. This immediately proves the statement (ii) of the Theorem <ref>. The statement (iii) also follows by the fact that by weak duality between the Relaxed Minimax Problem and the Relaxed Primary Problem we have -OPT≤ L^*. Finally, let f^* denotes an optimal solution to the outer minimization of the Relaxed Minimax Problem. There exists a ^* such that f^*(x) = ⟨^*, Φ(x)⟩. Now we have, | g_α(f^*, h^*) -OPT| = |[⟨^*, Φ(X)⟩{C^*(X, Y) - (1-α)}] - ∫_𝒴C^*(X, y)dy| ≤|[⟨^*, Φ(X)⟩{C^*(X, Y) - (1-α)}]| + |∫_𝒴C^*(X, y)dy+OPT | ≤ε^' ||^*||_1 + ε^'. Hence ε^'≤ε/1 + ||^*||_1 prove the statement (i) of the Theorem <ref>. Proof of Theorem <ref>: Let us define ℋ^∞ as the class of all measurable functions from 𝒳 to . Now let us restate the Structured Minimax Problem for ℋ^∞. Structured Minimax Problem: f ∈ℱMinimize h ∈ℋ^∞Maximize g_α(f, h). We can now rewrite this problem in the equivalent form of the following, where we change the domain of the maximization from ℋ_∞ to corresponding set functions. Let 𝒞_ℋ^∞ be the set of all function from C(x,y) : 𝒳×𝒴→ such that there exists h∈ℋ^∞ such that C(x,y)=1[S(x,y) ≤ h(x)]. Structured Minimax Problem: f ∈ℱMinimize C ∈𝒞_ℋ^∞Maximize g_α(f, C). We now proceed by defining a convexified version of this problem. Let us define 𝒞_ℋ^∞^con = {∑_i=1^T a_iC_i(x,y) | ∑_i=1^Ta_i=1, a_i≥ 0(∀ 1≤ i≤ T) , C_i ∈𝒞_ℋ^∞ (∀ 1≤ i≤ T), T ∈}. Now we can define the following problem. Convex Structured Minimax Problem: f ∈ℱMinimize C ∈𝒞_ℋ^∞^conMaximize g_α(f, C). Naturally we can also define the Convex Structured Primary Problem. Convex Structured Primary Problem: C ∈𝒞_ℋ^∞^conMinimize [len(C(X))] subject to 𝔼[f(X) {C(X,Y) - (1-α)}] = 0, ∀ f ∈ℱ Now applying lemma <ref> we have, Structured Minimax Problem ≡ Convex Structured Minimax Problem ≡ -Convex Structured Primary Problem Let us call the optimal value for all these three problems OPT (here pay attention that based off of our definition OPT is always a negative number, hence when we are addressing the optimal length the term -OPT shows up). With some abuse of notation, Let us assume {C_i}_i=1^∞, where C_i∈𝒞_ℋ^∞^con is a feasible sequence of Convex Structured Primary Problem which achieves the optimal value in the limit, i.e, lim_i→∞g_α(f, C_i)=-OPT. This means, for any ε≥ 0, there is an index i_ε such that |len(C_i_ε)+OPT|≤ε. Let us fix the value of ε for now, we will determine its value later. By definition, there should be {C_i}_i=1^t and corresponding {h_i}_i=1^t such that C_i∈𝒞_ℋ^∞, h_i∈ℋ^∞, C_i=[S(x, y)≤ h_i(x)], and we have C_i_ε = ∑_i=1^t a_iC_i(x,y) and ∑_i=1^t a_i=1, a_i≥ 0 (∀ 1≤ i≤ t). Let us recall that the class ℱ is defined as ℱ = {⟨, Φ(x)⟩ | ∈^d}, where Φ: 𝒳→^d is a predefined function (a.k.a. the finite-dimensional basis). Now let {Φ_i}_i=1^∞ be all the possible countable values that it takes. In other words, each Φ_i is a vector in ^d and there exists x∈𝒳 such that Φ(x) = Φ_i. Then we have, ∫_𝒳∑_i=1^t a_i Φ(x)(∫_𝒴[S(x,y)≤ h_i(x)]p(y|x)dy)p(x)dx = (1-α) [ 1; 1; ⋮; 1 ]. 
Here we implicitly assumed that Φ(x) is properly normalized so that the Right hand side don't need any extra normalization. Let us define, cov_i(x) = ∫_𝒴[S(x,y)≤ h_i(x)]p(y|x)dy, l_i(x) = ∫_𝒴[S(x,y)≤ h_i(x)]dy. from (<ref>) we have, ∫_𝒳∑_i=1^t a_i Φ(x)cov_i(x)p(x)dx = (1-α) [ 1; 1; ⋮; 1 ]. We can then write, ∑_j=1^∞Φ_j ∫_𝒳_j(∑_i=1^t a_i cov _i(x)) p(x) d x=(1-α)[ 1; 1; ⋮; 1 ], where 𝒳_j = {x∈𝒳 | Φ(x) = Φ_j}. Now two points are in order. (i) Without loss of generality, one can assume p(𝒳_j) > 0 for every j as otherwise we can just omit that Φ_j since it does not contribute to any integral due to continuity assumptions. (ii) {𝒳_j}_j=1^∞ partitions 𝒳, i.e., their union is 𝒳 and they are pairwise disjoint. Now the idea behind the rest of the proof is we show for each region 𝒳_j, there is a single function h∈ℋ such that, (i) ∫_𝒳_j(∑_i=1^t a_i cov _i(x)) p(x) d x = ∫_𝒳_icov_h(x) p(x) d x, where cov_h(x) = ∫_𝒴[S(x,y)≤ h(x)]p(y|x)dy. (ii)∫_𝒳_jl_h(x) p(x) d x ≤∫_𝒳_j(∑_i=1^t a_i l_i(x)) p(x) d x + ε p(𝒳_j) , where l_h(x) = ∫_𝒴[S(x,y)≤ h(x)]dy. Hence, from now on we fix an arbitrary 𝒳_j and prove the existence of such h. For the ease of notation let us define 𝒳̃ = 𝒳_j and p̃(x) = p(x)/p(𝒳̃). This way ∫_𝒳̃p̃(x)=1 and p̃(x) is still continuous. Now we can rewrite our goals with this new notation. With a simple normalization we have, (i) ∫_𝒳̃(∑_i=1^t a_i cov _i(x)) p̃(x) d x = ∫_𝒳̃cov_h(x) p̃(x) d x, where cov_h(x) = ∫_𝒴[S(x,y)≤ h(x)]p(y|x)dy. (ii)∫_𝒳̃l_h(x) p̃(x) d x ≤∫_𝒳̃(∑_i=1^t a_i l_i(x)) p̃(x) d x + ε , where l_h(x) = ∫_𝒴[S(x,y)≤ h(x)]dy. Now, as a result of Dominated Convergence Theorem, there is a large enough real value R such that for the set of 𝒳̃_1={x∈𝒳̃ | ||x||_2≤ R, i∈[1,⋯, t]maxl_i(x)≤ R} we have, ∑_i=1^t ∫_𝒳̃ l_i(x)[x∉𝒳̃_1] p̃(x)dx ≤ε_1, and∑_i=1^t ∫_𝒳̃cov_i(x)[x∉𝒳̃_1] p̃(x)dx ≤ε_1, where the value of ε_1>0 will be chosen later in a way that it would be small enough for the proof to work. Among the functions cov_1, ⋯, cov_t there should be at least two of them, without loss of generality cov_1 and cov_2 so that ∫_𝒳̃cov_1(x) p̃(x)dx ≥∫_𝒳̃cov_2(x) p̃(x)dx + γ where γ >0. Otherwise we have alreasy achieved our goal by picking the h_i with the smallest average length. Now assume ε_1 < γ/4 (we will make sure to pick ε_1 in a way that it satisfies this condition). Hence we have, ∫_𝒳̃_1cov_1(x) p̃(x)dx ≥∫_𝒳̃_1cov_2(x) p̃(x)dx + γ/4. Now applying Lemma <ref> there exists 𝒳̃_2 ⊆𝒳̃_1 such that p̃(𝒳̃_2) ≥γ/6 and for every x ∈𝒳̃_2 we have cov_1(x)≥cov_2(x) + γ/12. (Here one have to use Lemma <ref> using Z=(cov_1(x)-cov_2(x))[cov_1(x) ≥cov_2(x)][x∈𝒳̃_1]) Let us consider the following three sets that partition the space 𝒳̃, A = 𝒳̃_2 B = 𝒳̃_1\𝒳̃_2 C = 𝒳̃\𝒳̃_1 Note that these three sets are pair-wise disjoint and cover the space 𝒳̃. LEt us first consider the set C. we would like to consider a function h_C such that, ∫_C(∑_i=1^t a_i cov _i(x)) p̃(x) d x = ∫_Ccov_h_C(x) p̃(x) d x, where cov_h_C(x) = ∫_𝒴[S(x,y)≤ h_C(x)]p(y|x)dy. To do so, without loss of generality, we assume, ∫_Ccov_1(x) p̃(x) d x ≥∫_Ccov_2(x) p̃(x) d x ≥⋯≥∫_Ccov_t(x) p̃(x) d x. Hence, ∫_Ccov_1(x) p̃(x) d x ≥∫_C(∑_i=1^t a_i cov _i(x)) p̃(x) d x ≥∫_Ccov_t(x) p̃(x) d x. For r∈[0,∞) let, f(r) = ∫_C ∩B (0, r)cov_t(x) p̃(x) d x + ∫_C \B (0, r)cov_1(x) p̃(x) d x. Note that f(0) = ∫_Ccov_1(x) p̃(x) d x and f(∞) = ∫_Ccov_t(x) p̃(x) d x. Also, as a result of continuity assumptions, f(r) is a continuous function, therefore we can apply Intermediate Value Theorem. 
As a result, there should be r_0<∞ such that, f(r_0) = ∫_C(∑_i=1^t a_i cov _i(x)) p̃(x) d x. We then naturally pick h_c to be, h_C(x)= h_t(x) if x ∈ C ∩B (0, r), h_1(x) if x ∈ C \B (0, r). Let us now consider the sets A and B. Note that by construction they both are a subset of B(0, R). Now let us consider a δ-net for A and a separate δ-net for B. Similar to our arguments in the proof of Theorem <ref>, we can construct these δ-nets in a way that they partition A and B. That is to say we have, A=⋃_j=1^N_AA_j and B=⋃_j=1^N_BB_j such that withing each δ-net the pair wise intersections are empty. Now, for each A_j (and B_j) we choose independently one of {h_i}_i=1^t with probabilities {a_i}_i=1^t at random. Let J(A, j) (similarly J(B, j)) be the index of the function h_i assigned to A_j (similarly B_j). Then we have, Event 1: | ∑_j=1^N_A∫_A_jcov_h_J(A, j)p̃(x) dx + ∑_j=1^N_B∫_B_jcov_h_J(B, j)p̃(x) dx -∫_A∩ B∑_i=1^t a_i cov_i(x) p̃(x) dx | ≤√(δ). Event 2: ∑_j:J(A, j)=1p̃(A_j) ≥a_1/2p̃(A). Event 3: ∑_j:J(A, j)=2p̃(A_j) ≥a_2/2p̃(A). Now we can repeat the probabilistic argument of proof of Theorem <ref> here. The key insight is all of the three above-mentioned events are high probability events, in a way that by letting δ (the precision of the covering net) to be sufficiently small then the probability of these events happening approaches to 1. Hence, there is a deterministic realization that make all these events happen. Now on, we fix that deterministic realization. Particularly, this means we have a realization that satisfies all the events for arbitrary small δ. Now the idea is to make a small change in the configuration that comes from this realization that make the approximate coverage of event 1 to be an exact coverage. With a little abuse of notation, let J(A, j) (similarly J(B, j)) be the assignments of that deterministic realization. Without loss of generality, we can assume, | ∑_j=1^N_A∫_A_jcov_h_J(A, j)p̃(x) dx + ∑_j=1^N_B∫_B_jcov_h_J(B, j)p̃(x) dx | =∫_A∩ B∑_i=1^t a_i cov_i(x) p̃(x) dx - ε_2, where ε_2 ≥ 0 and ε_2 ≤constant×√(δ) (the case of ε_2 ≤ 0 can be similarly handled). That is to say, our deterministic assignment J(A, j) and J(B, j) led to a small under-coverage of ε_2. The idea now is to "engineer" the assignments of some of the A_js in a way that we make the coverage exact while not changing the average length of the current assignment significantly. Remind the definition the event 3 we defined above. Recall that a_2 and p̃(A) are strictly positive numbers. Hence, as a result of (<ref>) and (<ref>) we have, p̃(A_2)≥γ/12. Now we can argue through Lemma <ref>. Consider the set Q = { j ∈ [N_a] | J(A, j)=2}. We know from event 3 that p̃(Q) ≥a_2/2p̃(A)≥a_2 γ/24. Now let for every j ∈ Q: Z_j = p̃(A_j). We know that Z_j ≤δ and γ^'≡∑_j ∈ Q Z_j ≥a_2 γ/24. Applying Lemma <ref> there exists a Q^'⊆ Q such that ∑_j ∈ Q^'p̃(A_j) ∈ [γ^'^1/2δ^1/4/2, 2γ^'^1/2δ^1/4]. Recall that since Q^'⊆ Q, then for any j∈ Q^' we have J(A, j)=2. now if we reassign all the A_j's, j∈ Q^', to h_1 (i,e changing J(A, j) to 1 for every j∈ Q^') then the amount of added coverage would be, ∑_j∈ S^'p̃(A_j)×γ/12, where γ/12 appears following (<ref>) and the fact that A_i ∈𝒳̃_2. Hence, the amount of added coverage would be bounded by, ∑_j∈ S^'p̃(A_j)×γ/12≥γγ^'^1/2δ^1/4/2≥√(a_2/24)γ^3/2δ^1/4/2. Recall that we can pick δ as small as we want. Hence, we can pick δ small enough that we have, √(a_2/24)γ^3/2δ^1/4/2≥√(δ)≥ε_2, where the last inequality follows from the definition of ε_2 in (<ref>). 
This can be done by letting δ≤(√(a_2/24)γ^3/4/2)^4. Now by noting (<ref>) and (<ref>) it should be clear that by reassigning all any A_j, j ∈ Q^', the total coverage added will be larger than ε_2 (hence in total we will over-cover). Now we can apply Intermediate Value Theorem once again. Let us define for r∈ [0, ∞], f(r) = ∑_j∈ Q^'∫_A_j∩B(0, r)cov_1(x)p̃(x) dx + ∫_A_j\B(0, r)cov_2(x)p̃(x) dx. Here note that again as a consequence of continuity assumptions f is a continuous function. Also, f(R) - f(0) ≥ε_2. Hence there exists r_0 ∈ [0, R] such that, f(r_0) -f(0) = ε_2. That is to say we can make the coverage exactly valid. Now let us analyze the deviation in length. Not that for x ∈ A∪ B we have length is bounded by R. This means by reassigning the elements in the set A_j, j∈ Q^', the change in length is at most R ×∑_j ∈ Q^'p̃(A_j) ≤ 2R γ^'^1/2δ^1/4, where the last inequaity comes from the fact that ∑_j ∈ Q^'p̃(A_j) ∈ [γ^'^1/2δ^1/4/2, 2γ^'^1/2δ^1/4]. Here again by choosing δ small enough, particularly δ≤(ε/2Rγ^'^1/2)^4, we can ensure the change in length is smaller than ε, hence we achieved our goal. Proof of Proposition <ref>: Let h^*∈ℋ be an optimal solution of the Relaxed Minimax Problem and the Relaxed Primary Problem, when the optimization is over all the measurable functions from 𝒳 to . Let us denote the corresponding optimal solution to the outer minimization of the Relaxed Minimax Problem by f^*. Recall that ℱ is a finite dimensional affine class of functions. That is to say there exists a vector β^* such that f^*(x)=⟨β^*, Φ(x)⟩≡ f_β^*(x). Since h^* is a solution of the Relaxed Primary Problem so it should be conditionally valid with respect to class ℱ. Therefore, as a result of Lemma <ref>, we should have, ∇_β g_α(f_β, h^*)|_β^* = 0. Now define obj(β) = θ∈Θmax g_α(f_β, h_θ). Since Θ is a compact set and g_α(f_β, h_θ) is convex (in fact linear) in β so the Danskin's theorem applies to obj(β). Therefore, as a result of (<ref>), 0∈∂_β obj(β)|_β^*. Now again as a result of Danskin's theorem, obj(β) is convex in β. Therefore, as a consequence of (<ref>), β^* is a minimizer of obj(β). That is to say, (f^*, h^*) is an optimal solution to the Relaxed Minimax Problem. On the other hand, h^* is also a solution to the Relaxed Primary Problem (as it is a solution to the Relaxed Primary Problem when solved over all the measurable functions). Putting everything together, h^* is a joint optimal solution to both the Relaxed Minimax Problem and the Relaxed Primary Problem, hence we have strong duality. Proof of Proposition <ref>: Let us start by recalling the definitions, g_α(f,C)= [f(X){[Y ∈ C(X)] - (1-α)}] - ∫_𝒴[y ∈ C(X)]dy where C_f(x) = {y ∈𝒴 | f(x)p(y|x)≥ 1}. Now, fixing f ∈ℱ, all we have to show is that g_α(f, C_f(x))-g_α(f, C(x)) ≥ 0 for every C(x): 𝒳→ 2^𝒴. g_α(f, C_f(x))-g_α(f, C(x)) ≥ 0 for every C(x): 𝒳→ 2^𝒴 proof: We have, g_α(f, C_f(x))- g_α(f, C(x)) (a)=_X∫_𝒴(f(X) p(y|X) -1){([y ∈ C_f(X)] - [y ∈ C(X)]) }dy (b)=_X∫_𝒴 (f(X) p(y|X) -1){[y ∈ C_f(X)\ C(X)] - [y ∈ C(X)\ C_f(X)]}dy (c)=_X∫_𝒴 |(f(X) p(y|X) -1)|{[y ∈ C_f(X)Δ C(X)]}dy ≥ 0, where, (a) follows from the definitions, (b) follows from the definition of set difference operation (\) where, A \ B={x | x ∈ A and x ∉ B}, and (c) comes from the definition of C_f(x). The proof is complete now. One can define, 𝔼_X[ d (C_f, C)] ≡_X∫_𝒴|(f(X) p(y|X) -1)|{[y ∈ C_f(X)Δ C(X)]}dy, and rewrite the above calculation as, g_α(f, C(x)) = g_α(f, C_f(x)) - 𝔼_X[ d (C_f, C)]. This reformulation is very intuitive. 
at a high level, it says maximizing g_α(f, C(x)) over C, is equivalent to minimizing a distance between C and C_f. Proof of Proposition <ref>: Let us start by restating the proposition. The Primary Problem and the Minimax Problem are equivalent. Let (f^*, C^*(x)) be the optimal solution of Minimax Problem. Then, C^* is also the optimal solution of the PP. Furthermore, C^* has the following form: C^*(x) = {y ∈𝒴 | f^*(x) p(y|x) ≥ 1} Let us also recall the definition of Primary Problem, Primary Problem (PP): C(x)Minimize [len(C(X))] subject to 𝔼[f(X) {[Y∈ C(X)] - (1-α)}] = 0, ∀ f ∈ℱ Recall that in this paper we assume that ℱ = {⟨β, Φ(x)⟩ | β∈^d} is a finite dimensional affine class of functions (see section <ref>). Assuming Φ = [ϕ_1, ϕ_2, ⋯, ϕ_d]. The Primary Problem can be equivalently written in terms of d linear constraints on the prediction sets C(x). Primary Problem- finite constraints: C(x)Minimize [len(C(X))] subject to 𝔼[ϕ_i(X) {[Y∈ C(X)] - (1-α)}] = 0, ∀ i ∈ [1, d]. We now take a closer look at the main optimization variable, i.e. the prediction sets C(x), and put it in the proper format. The prediction sets C(x) can be equivalently represented by a function C: 𝒳×𝒴→{0, 1} such that C(x, y) = [y∈ C(x)]. Furthermore, we can expand the optimization domain from C(x, y) : 𝒳×𝒴→{0, 1} to C(x, y) : 𝒳×𝒴→ [0, 1]. One should pay attention that this change does not affect the optimal solutions as the solutions will be integer values. Putting everything together we can write the following problem, Linear PP C(x, y) : 𝒳×𝒴→ [0, 1]Minimize ∫_𝒳×𝒴 C(x,y) p(x) ν(y) dxdy subject to 𝔼[ϕ_i(X) {C(X, Y) - (1-α)}] = 0, ∀ i ∈ [1, d]. Here ν is the lebesgue measure; recall from Section <ref> that we defined length through the lebesgue measure on 𝒴 = ℝ, which is equivalent to cardinality of a set in the case where 𝒴 is a discrete and finite set. This problem is a Linear program in terms of C with finitely many constraints. Also, the Linear PP, includes the Primary Problem in the sense that any solution to the Primary Problem is a feasible solution for Linear PP. That is to say, to prove Proposition <ref>, we just have to prove it for Linear PP. For d=1, Linear PP can be seen through the lens of the Neyman-Pearson Lemma in Hypothesis Testing <cit.>. As a result, our analysis of the Linear PP (for general d) can be considered as an extension of this lemma which will be done using the KKT conditions. Now, let us rewrite the Linear PP: Minimize ∫_𝒳×𝒴 C(x,y) p(x) ν(y) dxdy subject to: ∫_𝒳×𝒴ϕ_i(x)C(x, y) p(x,y) dx dy - (1-α) = 0, ∀ i ∈ [1, d] C(x,y) ∈ [0,1] ∀ x ∈𝒳,y∈𝒴 The above is a standard linear program on C(x,y). In what follows, we will find a closed-form solution using the dual of this program. Here, the “optimization variable” C(x,y) belongs to an infinite-dimensional space. Hence, in order to be fully rigorous, we will need to use the duality theory developed for general linear spaces that are not necessarily finite-dimensional. For a reader who is less familiar with infinite-dimensional spaces, what appears below is a direct extension of the duality theory (i.e. writing the Lagrangian) for the usual linear programs in finite-dimensional spaces. Let ℱ be the set of all measurable function defined on 𝒳×𝒴. Note that ℱ is a linear space. Let Ω be the set of all the measurable functions on 𝒳×𝒴 which are bounded between 0 and 1; I.e. Ω = { C ∈ℱ s.t. C: 𝒳×𝒴→ [0,1] } Note that Ω is a convex set. 
We can then rewrite our linear program as follows: Minimize ∫_𝒳×𝒴 C(x,y) p(x) ν(y) dxdy subject to: ∫_𝒳×𝒴ϕ_i(x)C(x, y) p(x,y) dx dy - (1-α) = 0, ∀ i ∈ [1, d] C ∈Ω Moreover, let us define the functional F: ℱ→ℝ as F(C) = ∫_𝒳×𝒴 C(x,y) p(x) ν(y) dxdy, and also define, for i ∈ [1,d], the functional G_i: ℱ→ℝ as G_i(C) = ∫_𝒳×𝒴ϕ_i(x)C(x, y) p(x,y) dx dy - (1-α). Finally, we define the mapping : ℱ→ℝ^d as (C) = [G_1(C), G_2(C), ⋯, G_d(C) ]. Note that G is a linear (and hence convex) mapping from ℱ to the Euclidean space ℝ^d. Using the above-defined notation, our linear program becomes: Minimize F(C) subject to: (C) = 0 C ∈Ω where 0∈ℝ^d is the all-zero vector. Note that the feasibility set of the above program is non-empty, as C(x,y) = 1-α, for all (x,y) ∈𝒳×𝒴, is a feasible point. We can now use the duality theory of convex programs in vector spaces (See Theorem 1, Section 8.3 of <cit.>; Also see Problem 7 in Chapter 8 of the same reference). Specifically, let OPT be the optimal value achievable in the above linear program. Then, there exists a vector ∈ℝ^d such that the following holds: OPT = inf_C ∈Ω{ F(C) - ⟨, (C) ⟩}, where ⟨, (C) ⟩ denotes the Euclidean inner-product of the two vectors , (C)∈ℝ^d. Here, note that the vector is the usual Lagrange multiplier. By denoting (x) = (ϕ_1(x), ϕ_2(x), ⋯, ϕ_d(x)), and using (<ref>), we can write ⟨, (C) ⟩ = ∫_𝒳×𝒴⟨, (x) ⟩ C(x, y) p(x,y) dx dy - (1-α)⟨, 1⟩, where 1∈ℝ^d is the all-ones vector. As a result, by using (<ref>), in order to solve the optimization in (<ref>) we need to solve the following optimization: inf_C ∈Ω{∫_𝒳×𝒴 C(x, y) ( p(x) ν(y) - ⟨, (x) ⟩ p(x,y) ) dx dy } + (1-α) ⟨, 1⟩. And by noting the fact the p(x,y) = p(x) p(y | x), and removing the term (1-α) ⟨, 1⟩ which is independent of C, our dual optimization becomes: inf_C ∈Ω{∫_𝒳×𝒴 C(x, y) ( ν(y) - ⟨, (x) ⟩ p(y | x) ) p(x) dx dy } Now, it is easy to see that as C(x,y) ∈ [0,1] (due to the constraint C ∈Ω), the above optimization problem has a closed-form solution C^*: C^*(x,y)= 1 if ⟨, (x) ⟩ p(y | x) > ν(y), 0 if ⟨, (x) ⟩ p(y | x) < ν(y), ∈{0,1} otherwise. Finally, note that in this paper the measure ν is considered to be the Lebesgue measure – see Section <ref> – hence, up to a constant normalization factor that can be absorbed into , we have ν(y) = 1 for all y ∈𝒴. The structure given in (<ref>) is also sufficient. I.e. for any pair (, C^*) such that (i) C^* has the form given in (<ref>); and (2) C^* satisfies the coverage-constraints (C^*) = 0, then C^* is an optimal solution of the Primary Problem(<ref>). This sufficiency result follows again from the duality theory of linear programs in linear spaces; E.g. see Theorem 1 in Section 8.4 of <cit.>. This necessary and sufficient condition is known as Strong Duality, which then results in the equivalence of Minimax Problem and Primary Problem, i.e we can change the order of min and max. § PROOFS OF SECTION <REF> Proof of Theorem <ref>: Let us recall the algorithm. CPL: f ∈ℱMinimize h ∈ℋMaximize g̃_α,n(f, h), where, C_h^S(x) = {y ∈𝒴 | S(x,y)≤ h(x)}. For the ease of notation let us call g(f, h) = g̃_α,n(f, h). let us also call the stationary solution to CPL by h_ CPL^* and f_ CPL^*. Now since g is a smooth function with respect to both of its arguments we have the following optimality condition, d/dεg(f_ CPL^*+ε f, h_ CPL^*)|_ε=0 =0, for every f ∈ℱ. Recall that ℱ is a d-dimensional affine class of functions over the basis Φ = [ϕ_1, ⋯, ϕ_d]. We can then rewrite <ref> as what follows. d/dεg(f_ CPL^*+εϕ_j, h_ CPL^*)|_ε=0 =0, for every j ∈ [1, ⋯, d]. 
Now looking at the definition of g(f, h)= 1/n∑_i=1^n[f(x_i){[S(x_i, y_i), h(x_i)] - (1-α)}] - 1/n∑_i=1^n ∫_𝒴([S(x_i, y) ≤ h(x_i)] - (1-α))dy, We can take the derivative with respect to f. Hence we can rewrite <ref> as what follows: 1/n∑_i=1^n[ϕ_j(x_i){[S(x_i, y_i) ,h_ CPL^*(x_i)] - (1-α)}]=0, for every j ∈ [1, ⋯, d]. One can think of the mathematical term above as the smoothed version of coverage under covariate shift ϕ_j. Now we can apply Lemma <ref>. Therefore, fixing j ∈ [1, ⋯, d], with probability 1-δ we have, |1/n∑_i=1^n[ϕ_j(x_i){[S(x_i, y_i) , h^*(x_i)] - (1-α)}] - [ϕ_j(x){[S(x, y) , h_ CPL^*(x)] - (1-α)}]| ≤2Bε/√(2π)σ + √(2)B√(ln(2𝒩(ℋ, d_∞, ε)/δ))/√(n) for any ε >0. Combining with (<ref>) we get, |[ϕ_j(x){[S(x, y) , h_ CPL^*(x)] - (1-α)}]| ≤2Bε/√(2π)σ + √(2)B√(ln(2𝒩(ℋ, d_∞, ε)/δ))/√(n) for any ε >0. Union bounding over (<ref>) we have with probability 1-δ for any j ∈ [1, ⋯, d], |[ϕ_j(x){[S(x, y) , h_ CPL^*(x)] - (1-α)}]| ≤2Bε/√(2π)σ + √(2)B√(ln(2d𝒩(ℋ, d_∞, ε)/δ))/√(n) for any ε >0. In other words, we proved that the expected smoothed version of coverage is bounded. The last step that we have to take is then to prove that the expected smoothed coverage is actually close to the expected actual coverage. The following claim makes this precise. The following inequality holds for every j ∈ [1, ⋯, d]. |[ϕ_j(x){[S(x, y) , h_ CPL^*(x)] - (1-α)}] - [ϕ_j(x){[S(x, y) ≤ h_ CPL^*(x)] - (1-α)}]| ≤ BLσ√(2/π) Proof. |[ϕ_j(x){[S(x, y) , h_ CPL^*(x)] - (1-α)}] - [ϕ_j(x){[S(x, y) ≤ h_ CPL^*(x)] - (1-α)}]| = |[ϕ_j(x)[S(x, y) , h_ CPL^*(x)] ] - [ϕ_j(x)[S(x, y) ≤ h_ CPL^*(x)]]| (a)≤[|ϕ_j(x)||((S(x, y) , h_ CPL^*(x)) - [S(x, y) ≤ h_ CPL^*(x)])|] (b)≤ B [|((S(x, y) , h_ CPL^*(x)) - [S(x, y) ≤ h_ CPL^*(x)])|] = B _X _S|X[|((S(x, y) , h_ CPL^*(x)) - [S(x, y) ≤ h_ CPL^*(x)])|] (c)≤ B _X Lσ√(2/π) = BLσ√(π/2), where (a) comes from triangle inequality, (b) is derived by the definition of B = max_i∈[1, ⋯,d]sup_x∈𝒳Φ_i(x), and (c) is followed by assumption <ref> and Lemma <ref>. Now combining Claim <ref> and (<ref>) we have with probability 1-δ, |[ϕ_j(x){[S(x, y) ≤ h_ CPL^*(x)] - (1-α)}]|≤ BLσ√(π/2) + 2Bε/√(2π)σ + √(2)B√(ln(2d𝒩(ℋ, d_∞, ε)/δ))/√(n) for any ε >0. Putting σ = 1/√(n) and ε = 1/n we have for every j∈ [1, ⋯, d], |[ϕ_j(x){[S(x, y) ≤ h_ CPL^*(x)] - (1-α)}]|≤√(2)B√(ln(2d𝒩(ℋ, d_∞, 1/n)/δ)) + BL√(π/2) + 2B/√(2π)/√(n) Let us also remind that each element f∈ℱ can be represented by a β∈^d, where we use the notation f(x) = ⟨, Φ(x)⟩≡ f_(x) (look at section <ref> for more details). By linearity of the class ℱ we can conclude with probability 1-δ for every f_∈ℱ, |[f(x){[S(x, y) ≤ h_ CPL^*(x)] - (1-α)}]|≤||||_1√(2)B√(ln(2d𝒩(ℋ, d_∞, 1/n)/δ)) + ||||_1BL√(π/2) + ||||_12B/√(2π)/√(n). § REFRENCES FOR 11 DATASETS FOR SECTION <REF> Here we list all the datasets. * MEPS-19 <cit.> * MEPS-20 <cit.> * MEPS-21 <cit.> * blog feedback (blog-data)<cit.> * physicochemical properties of protein tertiary structure (bio) <cit.> * bike sharing (bike) <cit.> * community and crimes (community) <cit.> * Tennessee’s student teacher achievement ratio (STAR) <cit.> * concrete compressive strength (concrete) <cit.> * Facebook comment volume variants one (facebook-1) <cit.> * Facebook comment volume variants two (facebook-2) <cit.>
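To make the smoothed objective that appears throughout these proofs concrete, the following is a minimal sketch of a Gaussian-smoothed indicator for 1[S(x,y) ≤ h(x)] and an empirical version of the minimax objective g_α(f, h) for a finite label set, optimized by alternating gradient steps. Everything here is an illustrative assumption: the toy data, the function names (smooth_leq, cpl_objective), the linear parametrization of ℱ through β, the small network standing in for ℋ, and the smoothing width σ and optimizer settings do not correspond to the GPT-2 or ResNet50 heads and hyperparameters used in the experiments.

import math
import torch

def smooth_leq(s, h, sigma=0.05):
    # Smoothed surrogate for the indicator 1[s <= h]; tends to 1 when h >> s.
    return 0.5 * (1.0 + torch.erf((h - s) / (math.sqrt(2.0) * sigma)))

def cpl_objective(beta, h_fn, Phi, S_obs, S_all, alpha=0.1, sigma=0.05):
    # Empirical smoothed Lagrangian g~_{alpha,n}(f_beta, h) for a finite label set.
    #   Phi   : (n, d) basis features, f_beta(x) = <beta, Phi(x)>
    #   S_obs : (n,)   conformity score of the observed label S(x_i, y_i)
    #   S_all : (n, K) conformity scores of all K candidate labels
    f_x = Phi @ beta                                  # Lagrange-multiplier function f_beta(x_i)
    h_x = h_fn(Phi)                                   # threshold h(x_i)
    coverage = smooth_leq(S_obs, h_x, sigma)          # smoothed 1[S(x_i, y_i) <= h(x_i)]
    coverage_term = (f_x * (coverage - (1.0 - alpha))).mean()
    # (smoothed) number of labels kept in C_h(x_i), i.e. the set-size / length term
    length_term = smooth_leq(S_all, h_x.unsqueeze(1), sigma).sum(dim=1).mean()
    return coverage_term - length_term

# --- toy data (placeholders, not the paper's datasets) ---
torch.manual_seed(0)
n, d, K = 2000, 5, 4
Phi = torch.randn(n, d)
S_all = torch.rand(n, K)
y = torch.randint(0, K, (n,))
S_obs = S_all[torch.arange(n), y]

beta = torch.zeros(d, requires_grad=True)             # parametrizes f in the affine class F
h_net = torch.nn.Sequential(torch.nn.Linear(d, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
h_fn = lambda z: h_net(z).squeeze(-1)
opt_f = torch.optim.Adam([beta], lr=1e-2)
opt_h = torch.optim.Adam(h_net.parameters(), lr=1e-2)

for step in range(2000):
    # inner maximization: gradient ascent on h
    opt_h.zero_grad()
    (-cpl_objective(beta, h_fn, Phi, S_obs, S_all)).backward()
    opt_h.step()
    # outer minimization: gradient descent on beta
    opt_f.zero_grad()
    cpl_objective(beta, h_fn, Phi, S_obs, S_all).backward()
    opt_f.step()

The alternating ascent/descent loop mirrors the inner-maximization/outer-minimization structure of CPL; in practice one would monitor empirical coverage and average set size on held-out data to choose σ and the stopping point.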
http://arxiv.org/abs/2406.18698v1
20240626190040
Unveiling the CP-odd Higgs in a Generalized 2HDM Model at a Muon Collider
[ "Nandini Das", "Nivedita Ghosh" ]
hep-ph
[ "hep-ph", "hep-ex" ]
§ INTRODUCTION The proposal of a high-energy muon collider by the International Muon Collider Collaboration (IMCC) is an important and interesting development in the world of collider physics <cit.>. The uniqueness of the muon collider lies in several facts. First, unlike a hadron collider, where only a fraction of the total energy is carried by the colliding partons, a muon collider can use the full beam energy in the hard collision. Second, hadron colliders are challenged by a noisy environment due to unwanted hadronic activity and smearing effects from the parton distribution functions (PDFs), which makes precision studies very difficult. In contrast, a very high-energy muon beam can be achieved in a circular collider thanks to the relatively low synchrotron radiation compared to an e^+e^- collider. The muon, being an elementary particle, can therefore deliver high center-of-mass energies in hard collisions with very little energy spread, owing to the suppressed radiative effects of bremsstrahlung <cit.>. It is not an exaggeration to say that the muon collider combines the advantages of both pp and e^+e^- colliders, offering the benefits of high energy and high precision <cit.>. All these aspects make the muon collider an attractive option in the search for New Physics scenarios. The energy and luminosity of the upcoming muon collider are not yet finalized. However, there is a proposal to run at 1 ab^-1 luminosity for a 3 TeV center-of-mass (c.o.m.) energy and 10 ab^-1 luminosity for a 10 TeV machine <cit.>. Since the 3 TeV luminosity is comparable to that of the 14 TeV HL-LHC, one expects that even the early stages of the muon collider could identify new physics signals that the LHC might not be able to probe even with its high-luminosity option. This machine additionally offers a direct study of muon-related physics <cit.>. "Muon-philic" BSM scenarios would have an extra advantage at this machine due to the direct coupling of the new particles to the muon <cit.>. The "muon-philic" models are of primary interest because of the observed excess in the anomalous magnetic moment of the muon reported by the "MUON G-2" collaboration <cit.>. The latest measurement by the "MUON G-2" (E989) collaboration at Fermi National Accelerator Laboratory (FNAL), combined with the earlier E821 measurement at Brookhaven National Laboratory (BNL), shows a 5.1σ deviation from the Standard Model prediction. In addition to the muon-philic models, there can be other scenarios in which a particular channel is privileged at a muon collider in comparison to hadron colliders while also providing a solution to the muon anomaly. In this work, we consider a generalized 2HDM with a minimally perturbed Type-X Yukawa sector <cit.>. The presence of the nonstandard scalars, i.e., the charged Higgs and the light pseudoscalar, in this model helps us to satisfy both the muon anomaly and lepton flavor violation (LFV) data <cit.>. The constraint coming from the muon anomaly data requires moderate to high tanβ values, which can give rise to an interesting signal at the muon collider.
The study of generalized 2HDM in the context of LHC has been studied in great detail <cit.>. However, this model has not yet been studied in the muon collider extensively <cit.>. Here we intend to probe a pseudoscalar of 30-50 GeV mass range in the context of this model at muon collider. After finding a suitable region of parameter space where both the muon anomaly and LFV constraints are satisfied at two loops as well as theoretical constraints coming from perturbativity, unitarity, vacuum stability, oblique parameter constraints and constraints coming from B physics and collider experiments are also obeyed, we explore the possibility of probing a pseudoscalar at a 3 TeV muon collider in ℓ^+ℓ^'-γ+ final state, ℓ,ℓ^'=e,μ. This channel serves as a complementary channel to look for the light pseudoscalar at the LHC. The reason behind this complementarity comes from the Yukawa structure of the pseudoscalar to the leptons and the quarks. From the muon anomaly satisfied data, we see that moderate to high tanβ is preferred which enhances the lepton Yukawa coupling with the pseudoscalar and reduces the same for the quark Yukawa. As a result, we observe that even at HL-LHC, this signal cannot be probed even with high luminosity, whereas at a 3 TeV machine, with merely 1 ab^-1 luminosity an ample amount of parameter space is easily probed with significance ≳ 4σ. The paper is organized as follows: in section <ref>, we briefly describe the model. We then discuss the muon anomaly in connection to the model in section <ref>, followed by the theoretical and experimental constraints in section <ref>. We discuss a distinct collider signature in section <ref>. Finally, we discuss and conclude in section <ref>. § TWO HIGGS DOUBLET MODEL In this section, we briefly discuss the model of our interest <cit.>. For a overview of 2HDM model, we refer the readers to <cit.>. The most general potential containing two SU(2)_L doublet Higgs can be written as V(Φ_1, Φ_2) = m^2_11 (Φ_1^†Φ_1)+m^2_22 (Φ_2^†Φ_2) -[m^2_12 (Φ_1^†Φ_2)+H.C.]+ 1/2λ_1 (Φ_1^†Φ_1)^2 + 1/2λ_2 (Φ_2^†Φ_2)^2 + λ_3 (Φ_1^†Φ_1) (Φ_2^†Φ_2)+λ_4 (Φ_1^†Φ_2) (Φ_2^†Φ_1) +{1/2λ_5 (Φ_1^†Φ_2)^2 + [λ_6 (Φ_1^†Φ_1)+λ_7 (Φ_2^†Φ_2)] (Φ_1^†Φ_2) +H.C.} where H.C. stands for the hermitian conjugate of the corresponding term. After electroweak symmetry breaking, the two scalar doublets Φ_1 and Φ_2 can be expanded around the vacuum expectation values(vevs) as Φ_1 = [ ϕ^+_1; 1/√(2)(ρ_1 +v_1 + i η_1) ], Φ_2 = [ ϕ^+_2; 1/√(2)(ρ_2 +v_2 + i η_2) ]. The ratio of the two vevs is parametrized as tanβ =v_2/v_1, which plays a key role in the analysis. The singly charged scalars can be written as a linear combination of the following mass eigenstates, a Charged Goldstone boson G^± and a physical charged Higgs scalar H^±. Similarly the gauge eigenstates of CP odd neutral scalars can be expressed as a linear combination of G_0, a massless CP odd Goldstone and A, a physical massive CP odd scalar. The gauge eigenstates of charged scalar and CP odd scalars in terms of mass eigenstates can be written as [ ϕ_1^±; ϕ_2^± ] = [ cosβ sinβ; sinβ -cosβ; ][ G^±; H^± ] [ η_1; η_2 ] = [ cosβ sinβ; sinβ -cosβ; ][ G_0; A ] The CP even gauge eigenstates can be written as [ ρ_1; ρ_2 ] = [ -sinα cosα; cosα sinα ][ h; H ] In the general 2HDM, where no Z_2 symmetry is imposed on the Lagrangian, we can write the Yukawa terms of the Lagrangian as -ℒ_Yukawa = Q_L (Y^d_1 Φ_1+Y^d_2 Φ_2) d_R + Q_L (Y^u_1 Φ_1+Y^u_2 Φ_2) u_R + L_L (Y^l_1 Φ_1+ Y^l_2 Φ_2)e_R +H.C. 
where Y_1,2^u,d,l are Yukawa matrices and Φ_i is defined as Φ_i= iσ_2 Φ_i^⋆ It is not possible to diagonalize both Y_1 and Y_2 without assuming any particular relation. We follow the prescription of <cit.> and choose to diagonalize Y^u_2,Y^d_2 and Y^l_1 matrices, while Y^u_1,Y^d_1 and Y^l_2 remains non-diagonal, giving rise to the tree-level Flavor-changing-neutral current (FCNC) in the Yukawa sector. The Yukawa lagrangian for the neutral scalars can be written as -ℒ_Yukawa = u_L [ ( c_α m^u/v s_β-c_β -αΣ^u/√(2) s_β) h+ ( s_α m^u/v s_β+s_β -αΣ^u/√(2) s_β) H] u_R + d_L [ ( c_α m^d/v s_β-c_β -αΣ^d/√(2) s_β) h +( s_α m^d/v s_β+s_β -αΣ^d/√(2) s_β) H] d_R + e_L [ ( -s_α m^l/v c_β+c_β -αΣ^l/√(2) c_β) h+ ( c_α m^l/v c_β-s_β -αΣ^l/√(2) c_β) H] e_R +i [ u_L ( m^u/v t_β-Σ^u/√(2) s_β) u_R+d_L ( -m^d/v t_β+Σ^d/√(2) s_β) d_R+ e_L ( m^l t_β/v-Σ^l/√(2) c_β) e_R] A +H.C. where m^i corresponds to the diagonalized mass matrices of fermions, s_α=sinα, c_α=cosα, t_β=tanβ, s_β-α=sin(β-α) and c_β-α=cos(β-α). The Σ matrices contain the off-diagonal entries and can induce tree-level FCNC. They are defined as Σ^u= U^u_L Y^u_1U^u_R†, Σ^d= U^d_L Y^d_1U^d_R^† and Σ^l= U^u_L Y^l_2U^l_R^†, U's being the bi-unitary transformations required to diagonalize fermion mass matrices. In the Σ^i → 0 limit, the Yukawa sector reduces to the same as pure Type X HDM. Here for our analysis, the leptonic couplings with CP odd scalar A, would be relevant. Σ^f can be parametrised as <cit.> Σ^f_ij= √(m^f_i m^f_j)χ_ij/v For simplicity, we consider χ_ij to be symmetric. The leptonic non-diagonal couplings would direct to lepton flavor violation and they would be noted as y_μ e, y_τμ and y_τ e in the following sections. § MUON (G-2) IN CONNECTION TO 2 HDM AND LEPTON FLAVOR VIOLATION The "MUON G-2" collaboration at the Fermilab National Accelerator Laboratory (FNAL) in its recent report has published its recent experimental measurement of the anomalous magnetic moment of muon (g-2)_μ <cit.>. At the classical level, the gyromagnetic ratio of the muon (g_μ) is 2. However, it receives corrections from loop effects and this correction is defined as a_μ = (g-2)_μ/2. The value of a_μ in the SM comes out to be <cit.> a^SM_μ = 116591810(43) × 10^-11. On the other hand, the recent measurement at FNL after improving the measurement uncertainty <cit.> gives the value of the anomalous magnetic moment as <cit.> a^exp-FNAL_μ = 116592055(24) × 10^-11. This new measurement from FNAL along with a combination of old FNAL <cit.> and older BNL(2006) <cit.> data gives <cit.> a^exp-comb_μ = 116592059(22) × 10^-11 which results in an excess of Δ a_μ= 249(48) × 10^-11. Although there are tensions in the Hadronic Vacuum Polarization (HVP) <cit.> contribution to the (g-2) due to the recent lattice QCD based results <cit.> from BMW collaboration and the e^+e^-→π^+π^- data from CMD-3 experiment <cit.>. However, as any firm comparison of the muon (g-2) measurement with the theory is hard to establish, we, therefore, choose to work in the paradigm that a 5.1σ excess exists, and a contribution from new physics is needed. In this work, we take into account both the one-loop and two-loop Bar-Zee contribution to the muon anomaly in generalized 2HDM model <cit.>. A detailed study in the context of a_μ has already been done in <cit.>. We scan our model parameter space imposing muon g-2 constraints and plot the allowed region in the m_A-tanβ plane in Fig.<ref>. One can see that the low m_A and large tanβ are favored for satisfying muon anomaly data. 
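As a rough cross-check of the scale of the individual terms, the sketch below numerically evaluates the standard one-loop neutral-scalar and pseudoscalar contributions to a_μ (Leveille-type integrals). The tanβ and m_A inputs are illustrative numbers taken from the favored ranges above, the Type-X-like coupling y_A ≃ (m_μ/v)tanβ is an assumption of the sketch, and the dominant two-loop Barr-Zee pieces that enter the actual scan are deliberately omitted.

import numpy as np
from scipy.integrate import quad

m_mu = 0.10566   # muon mass in GeV
v = 246.0        # electroweak vev in GeV

def a_mu_scalar(y, m_s):
    # One-loop contribution of a CP-even scalar S with coupling y * S * (mubar mu).
    r = (m_mu / m_s) ** 2
    I, _ = quad(lambda x: x**2 * (2.0 - x) / (1.0 - x + r * x**2), 0.0, 1.0)
    return y**2 / (8.0 * np.pi**2) * r * I

def a_mu_pseudoscalar(y, m_a):
    # One-loop contribution of a CP-odd scalar A; note that it is negative.
    r = (m_mu / m_a) ** 2
    I, _ = quad(lambda x: x**3 / (1.0 - x + r * x**2), 0.0, 1.0)
    return -y**2 / (8.0 * np.pi**2) * r * I

# illustrative inputs from the ranges favored above (placeholders, not a fitted benchmark)
tan_beta, m_A = 60.0, 40.0
y_A = (m_mu / v) * tan_beta          # assumed Type-X-like A-mu-mu coupling
print("one-loop a_mu(A) ~ %.2e" % a_mu_pseudoscalar(y_A, m_A))

For such inputs the one-loop pseudoscalar piece alone is negative and of order a few times 10^-10, which illustrates why the two-loop Barr-Zee contributions included in the scan are essential for reproducing the observed positive Δa_μ.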
While scanning, we have taken the 3σ upper and lower bounds on observed central value of Δ a_μ as noted in Eq. <ref>. The diagrams that appear in the calculation of muon anomaly, similar to those diagrams will also appear in the lepton flavor violating(LFV) processes. The non-observation of any significant deviation in the charged lepton sector puts bound on these following processes <cit.>: BR(μ→ eγ) < 4.2×10^-13, BR (τ→ eγ) < 3.3×10^-8 , BR (τ→μγ) < 4.4×10^-8. The strongest bounds come from the BR(μ→ eγ) process from MEG experiment <cit.>. We see that to satisfy both the LFV and muon anomaly constraints, one needs to put the values of the y_eμ, y_eτ and y_μτ to be O(10^-5), O (10^-4) and O(10^-5) respectively or lesser. While scanning the parameter space for both the muon anomaly and LFV, we have chosen the other CP-even Higgs and the charged Higgs mass to be 110 GeV and 165 GeV respectively. These particular choices of the masses will be justified soon in the next section. § THEORETICAL AND EXPERIMENTAL CONSTRAINTS ON MODEL PARAMETERS: In this section, we discuss different theoretical and experimental constraints considered on the model parameter space. For scanning of the parameter space, we have assumed the alignment limit in the analysis and therefore have kept the mixing angle cos(β - α) to be close to unity. The scan ranges of the parameters are mentioned below: m_12^2 ∈ [-500,500]  GeV^2, m_A ∈ [10.0, 60.0]  GeV,m_H ∈ [62.5, 125.0]  GeV, m_H^± ∈ [89.0, 190.0]  GeV,tanβ∈ [10, 100],|cos(β - α)| ∈ [0.99, 1],λ_6 ∈ [0, 0.1],λ_7 ∈ [0, 0.1] ∙   Vacuum Stability and Perturbativity Unitarity: The necessity to obtain a stable vacuum imposes constraints on the Higgs quartic couplings. The set of stability conditions for this model are as follows λ_1 >0,   λ_2 > 0,   λ_3 > -√(λ_1 λ_2),  λ_3 +λ_4-λ_5 > √(λ_1 λ_2) On the other hand, unitarity demands the λ parameters to be less than ∼ 4π. These λ parameters can be expressed in terms of the physical parameters such as the mass of the particle, vev etc and therefore we can translate these bounds to the physical parameter spaces. The other crucial parameter for perturbative unitarity is the soft Z_2 breaking parameters which requires to be m^2_12≃m^2_H/tanβ to ensure λ_1 to be within the perturbative limit <cit.> . These conditions for vacuum stability and unitarity of 2HDM have been previously discussed in multiple works <cit.>. As shown in Ref. <cit.>, though low to moderate tanβ values are preferred to satisfy the abovementioned constraints for m_A ranging between (10- 60 GeV), but higher values of tanβ can also satisfy the constraints for relatively lower number of parameter points. Here for our choice of CP odd mass (in the range 30-50 GeV), higher tanβ values are preferred in order to satisfy the muon g-2 constraint in the 3σ limit. We scan our parameter space for low m_A and high tanβ using 2HDMC-1.8.0 <cit.> package and have found points where vacuum stability, unitarity, and perturbativity constraints are satisfied. ∙   Electroweak constraints Due to the presence of non-standard Higgs in the current scenario, the W and Z boson receive one-loop correction to their masses and therefore the oblique parameter S,  T,  U <cit.> modifies. Consideration of updated values of SM Higgs mass and top mass gives the following values of S,  T,  U <cit.> S= 0.04± 0.11,   T=0.09± 0.14,  U=-0.02± 0.11 This in turn restricts the mass gap between the charged and the light CP even Higgs. 
For our choice of benchmark points, where this mass difference is about 55 GeV, the electroweak observables are within the 2σ allowed range. ∙  B-physics constraints: The presence of the flavor-changing terms in the Yukawa Lagrangian of the charged Higgs (Eq. <ref>) leads to rare processes involving B-mesons <cit.>. In the presence of non-zero FCNC Yukawa matrix elements, the B → X_s γ process will be modified. However, even in this scenario, it is possible to have a charged Higgs mass as low as m_H^±≳ 150 GeV by taking λ_tt∼ 0.5 and λ_bb∼ 2 <cit.>. The other decay process which can constrain our model parameter space is B^±→τ^±ν_τ, where the charged Higgs enters at tree level <cit.>. The constraint from Δ M_B also puts an upper limit on λ_tt as a function of the charged Higgs mass <cit.>: m_H^±≳ 150 GeV is allowed for λ_tt≲ 0.5. The upper limit on BR(B_s →μ^+ μ^-) <cit.> constrains the low tanβ (< 2) region for low m_H^± (∼ 100 GeV) <cit.>. For higher charged Higgs masses this limit is further relaxed. Therefore, these specific searches do not significantly impact our parameter space. ∙  Constraints from collider searches: The LEP experiments put a tight bound on the charged Higgs mass from the τν and cs̅ channels, m_H^± > 80 GeV <cit.>. At the LHC, constraints on the charged Higgs come from the upper limit on BR(t → b H^±) in the τν <cit.> and cs̅ <cit.> channels, when m_H^± < m_t. Bounds on the charged Higgs mass are also available from the search in pp → t b H^± <cit.>. We have taken into account all these searches and have set our charged Higgs mass to 165 GeV. The constraints coming from the direct searches for the nonstandard neutral Higgs bosons can also modify our parameter space. Specifically, as we are interested in the low pseudoscalar-mass region with enhanced coupling to leptons, the search for a low-mass pseudoscalar produced in association with b quarks and decaying into a ττ final state <cit.> plays an important role. The search for low-mass (pseudo)scalars produced in association with bb̅ and decaying into bb̅ <cit.> has also been taken into account in our work. CMS has also investigated decays involving two non-standard Higgs bosons, such as h/H → Z(ℓℓ)A(ττ) <cit.> and h/H → Z(ℓℓ)A(bb̅) <cit.>. However, these constraints are relevant only for heavier CP-even Higgs bosons with masses ≳200 GeV. Therefore, our parameter space is not affected by these constraints. CMS and ATLAS have a series of searches for decays of the 125 GeV Higgs into various final states, namely ττ <cit.>, μμ <cit.>, ee <cit.>, and also the lepton-flavor-violating eτ <cit.>, μτ <cit.>, and ττ <cit.> channels. However, our choice of cos(β-α)≃ 0.99, with the heavier CP-even Higgs identified with the 125 GeV state, allows us to satisfy the constraints on lepton-flavor-violating decays of the 125 GeV Higgs trivially, since the relevant coupling scales as sin^2(β-α), as seen from Eq. <ref>. The most stringent condition that constrains our model parameter space is the 125 GeV Higgs decaying to a pair of light pseudoscalars <cit.>. In our work, we have taken the heavier CP-even Higgs mass m_H to be 125 GeV. In this case, the LEP limits still allow either m_A or m_h to lie below m_H/2. As we are interested in a low-mass pseudoscalar for the collider analysis, we keep m_h= 110 GeV, i.e. > m_H/2. For a detailed discussion, please see Refs. <cit.>. We have explicitly checked that the HAA coupling (Eq. <ref>) can be made very small by a suitable adjustment of tanβ, m^2_12 and m_h, thus avoiding the direct search constraints from H→ A A.
g_HAA = 1/2v[(2m_A^2 - m_H^2) cos(α-3β)/sin 2β + (8 m_12^2 - sin 2β(2m_A^2 + 3m_H^2)) cos(β+α)/sin^2 2β] + v[sin 2βcos 2β(λ_6-λ_7)cos(β - α) + (λ_6 sinβsin 3β + λ_7 cosβcos 3β)sin(β - α)] We conclude this section with the remark that we have taken m_h = 110 GeV, m_H=125 GeV, m_H^±=165 GeV, m_A ∈ [30,50] GeV and tanβ∈ [50,80] for our collider analysis. § COLLIDER SEARCHES In this section, we explore the production of a mono-photon in association with a CP odd scalar A that further decays into two τ's in lepton-specific 2HDM at muon collider. The process is as follows (Fig. <ref>) μ^+ μ^- →γ A →γ τ^+ τ^- Our signal of interest is ℓ^+ℓ^'-γ+ where l,l^' = e , μ. The SM processes that can mimic this signal are γ W^+ W^-, γ Z Z and γ τ^+ τ^-. The first two among the aforementioned processes are the dominant backgrounds in our signal region. The third background can be reduced completely by applying a cut on the separation of the two leptons (Δ R_l l^') which will be discussed shortly. Therefore we do not discuss that background here in detail. We analyze four benchmark points that satisfy all necessary theoretical conditions (vacuum stability, unitarity) and experimental constraints such as constraints from oblique parameters, muon anomaly in 3σ limit, and lepton flavor violation constraints. The choice of parameters for the benchmarks and corresponding cross-sections are tabulated in Table <ref>. In the following, we present a cut-based analysis of the above-mentioned channel. One interesting pattern to be noticed is that although BP4 has the highest mass out of the four benchmark points, due to high tanβ, the cross-section is the highest for this channel. To analyse the collider aspects, we have implemented the model in FEYNRULES<cit.> and generated the UFO file. The signal and background generation is done by feeding the UFO file in MadGraph5@NLO<cit.>. PYTHIA8<cit.> is used for hadronization and showering. The showered events are then passed through DELPHES<cit.> for detector simulation purposes with the necessary modified muon collider card <cit.>. The preselection cuts used to generate the background and events are as follows |η(γ)| < 2.5 ;   |η(l)| <2.5 ;   p_T(l) > 10  GeV ;   p_T(γ) > 10  GeV; In addition to the basic cuts, we propose this set of selection cuts over the following kinematic observables which reduce the SM background events and improve the signal significance significantly. ∙ p_T of the leptons (p_T(ℓ)): In left panel of Fig. <ref>(a), we show the transverse momentum (p_T in GeV) distribution of the leading lepton. For the signal, the leptons come from the decays of two τ's which are products of the pseudoscalar. As a result, the leptons are peaked at lower values of p_T. For the backgrounds, the leptons come from the boosted W^± and Z bosons which results in peaking at relatively higher values. Therefore applying a cut over p_T < 300 GeV reduces the background considerably. ∙ η of the lepton (η_l): We show the η distribution of the leading lepton in Fig. <ref> (b). Due to the high boost of the signal leptons, the rapidity peaks around the higher value region (close to the beam axis) whereas the background is almost uniformly distributed from -2.5 to 2.5 in the η_l axis. Background events are reduced by putting |η_l| < 1.5. ∙ p_T of the photon (p_T(γ)): In Fig. <ref> (c), we show the momentum distribution of the photon. For the signal process, the photons come directly from the muons. As a result, they tend to peak at higher momentum. 
For the background processes, due to the 3-body decays involved, the low-momentum region is mostly populated, with a long tail. We see that choosing p_T(γ) > 200 GeV greatly reduces the background in comparison to the signal events. ∙ η of the photon (η_γ): We show the rapidity distribution of the photon in Fig. <ref> (d). This is very similar to the lepton rapidity distribution. Benchmark-specific cuts can reduce backgrounds without affecting the signal events. ∙ Δ R between the leptons (Δ R_l l^'): In Fig. <ref> (e), the Δ R_l l^' distribution is shown for the signal and background events. For the W^+ W^- background, the leptons come from two different particles, acquire a wider cone, and are therefore clearly separated from the signal distribution as shown in the plot. However, for the ZZ background, the distinction is difficult as the two leptons originate from a single mother particle in both cases. Since the CP-odd scalar A is lighter than the Z boson, the signal distribution peaks closer to the lower end of the Δ R axis than the ZZ background. Therefore, an appropriate cut of Δ R < 0.35 plays the main role in increasing the signal significance. ∙ missing transverse energy (E_T^miss): The E_T^miss arises from the neutrinos for both the signal and background events. Although the ZZ background can be greatly reduced by applying a cut on E_T^miss, as portrayed in the right panel of Fig. <ref> (f), the main background, which arises from the WW process, cannot be reduced by this cut. To ensure that our signal contains E_T^miss, we put a basic cut of E_T^miss > 10 GeV while generating the events and refrain from applying any further hard cut on this variable. After applying appropriate cuts on the aforementioned observables, the signal significance has been calculated in Table <ref> using the following formula <cit.> 𝒮=√(2[(S+B) ln (1+S/B)-S]), where S (B) is the number of signal (background) events after applying all cuts. In Table <ref>, we see that all four benchmark points can be probed with a significance ≳ 4σ at 1 ab^-1 luminosity at the proposed 3 TeV muon collider. Before concluding this section, we would like to comment on the possibility of probing the four benchmark points at the 14 TeV HL-LHC. The cross-sections for this channel are 1.8× 10^-3 fb and 8.9×10^-4 fb for BP1 and BP4 respectively, which are at least a factor of O(200) (O(800)) smaller than the corresponding cross-sections at the 3 TeV muon collider, as can be seen from Table <ref>. The reason behind such a small cross-section at the HL-LHC is the fact that the quark-pseudoscalar coupling (A-q-q̅) is proportional to cotβ, whereas the muon-pseudoscalar coupling (A-μ^+-μ^- ) is proportional to tanβ (Eq. <ref>). From the muon g-2 data, we see that low m_A and high tanβ are preferred, which in turn makes the search for this ℓ^+ℓ^'-γ + E_T^miss signal topology at a muon collider much more lucrative than at the LHC. § CONCLUSION In this work, we explore the possibility of probing a low-mass pseudoscalar at a 3 TeV muon collider in the context of the generalized 2HDM. Apart from the SM-like Higgs, this model has an additional CP-even Higgs, one CP-odd Higgs, and a charged Higgs. The presence of the additional scalars helps us to satisfy the muon anomaly as well as the LFV constraints in an ample region of parameter space. After satisfying the (g-2)_μ data and LFV constraints, we discuss how the theoretical constraints pertaining to the requirements of perturbativity, unitarity, and vacuum stability modify our model parameter space.
As our model contains non-diagonal Yukawa couplings, we have to consider the B-physics constraints as well. Finally, we have taken into account direct searches for the SM Higgs as well as for the additional scalar states at colliders, which puts another set of bounds on the model parameter space. Since the main contribution to the muon anomaly comes from the low-mass pseudoscalar, we need to take into account the direct search for the SM Higgs decaying to two light pseudoscalars. A light pseudoscalar also implies a large branching ratio of the 125 GeV Higgs into a pair of pseudoscalars when the decay is kinematically feasible. We satisfy the upper bound on this branching fraction coming from collider data, along with the perturbativity requirements, by demanding that the observed 125 GeV Higgs is the heavier of the two CP-even states of the 2HDM in the alignment limit. After satisfying the theoretical and experimental constraints, we set out to search for the pseudoscalar at the 3 TeV muon collider. As the pseudoscalar has a Yukawa coupling that is lepton(muon)-philic, this gives us a unique opportunity to look for a distinctive signal in the ℓ^+ℓ^'-γ + E_T^miss channel. The main motivation for the search at the muon collider lies in the fact that this channel would have a cross-section too small to be probed even at the HL-LHC, due to the suppressed coupling of the pseudoscalar to quarks in the parameter space favoured by the muon (g-2) anomaly data. The other advantage gained at the muon collider is the cleanliness of the environment. After a simple cut-based analysis, we find that a pseudoscalar with mass in the range of 30 to 50 GeV can be probed with a significance ≳ 4σ. As the luminosity reach of the muon collider is yet to be finalized, one can hope that even more parameter space can be probed at the muon collider in this signal topology. § ACKNOWLEDGEMENT NG is thankful to IISc for financial support from the IOE postdoctoral fellowship. ND and NG thank Dr. Tousik Samui for creative discussions. The authors would like to thank Prof. Dilip Kumar Ghosh and Prof. Sunanda Banerjee for useful discussions. NG would also like to acknowledge Dr. Jayita Lahiri for informative conversations. ND is funded by CSIR, Government of India, under the NET SRF fellowship scheme with Award file No.09/080(1187)/2021-EMR-I.
http://arxiv.org/abs/2406.19005v1
20240627084500
Effective potential in leading logarithmic approximation in non-renormalisable $SO(N)$ scalar field theories
[ "R. M. Iakhibbaev", "D. M. Tolkachev" ]
hep-th
[ "hep-th" ]
Effective potential in leading logarithmic approximation in non-renormalisable SO(N) scalar field theories R.M. Iakhibbaev^1 and D. M. Tolkachev^1,2 ^1Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 6, Joliot Curie, Dubna, Russia and ^2Stepanov Institute of Physics, 68, Nezavisimosti Ave., Minsk, Belarus The study of the effective potential for non-renormalisable scalar SO(N) symmetric theories leads to recurrence relations for the coefficients of the leading logarithms. These relations can be transformed into a generalised renormalization-group (RG) equation which can be analyzed in detail. In some special cases this equation can be solved exactly. § INTRODUCTION The effective potential plays a very important role in the determination of vacuum properties in weakly coupled field theory, which was pointed out in the pioneering work of Coleman and Weinberg <cit.>. The calculation of this object by summing infinite series of Feynman diagrams with zero external momentum is a laborious task even for renormalisable interactions. Fortunately, in the leading logarithmic approximation one can easily find an exact expression for the effective potential thanks to renormalisability, as has been well studied in the literature <cit.>. For non-renormalisable theories, even the very statement of such a problem seems at first glance infeasible, since the structure of the counterterms differs from the structure of the original non-renormalisable Lagrangian and quantum corrections to the Lagrangian grow uncontrollably. Not long ago, a technique was developed for studying the leading logarithms of the effective potential V_eff in scalar theories with non-renormalisable interactions. In the paper <cit.>, a scalar potential of arbitrary form was studied, which has no symmetries. The formalism of the paper <cit.> was based on the extraction of information about the leading logarithmic approximation from vacuum Feynman diagrams in the framework of the Jackiw approach <cit.>, and on the application of the R-operation and the Bogoliubov-Parasiuk theorem <cit.> to loop Feynman graphs. Thus, it is possible to find a relation between the leading UV-singularities of n-loop vacuum diagrams and the leading poles of (n-1)-loop diagrams. Finally, one finds that the higher divergences are governed by one-loop diagrams, as in the renormalisable case. This fact leads to recurrence relations that generalize the standard renormalization-group (RG) equations. In this work we generalize the analysis to a four-dimensional scalar theory with SO(N)-type interactions, whose Lagrangian can be represented in a generalized form as L=1/2∂_μϕ_a ∂^μϕ_a-g V(ϕ_a ϕ_a). For convenience of the analysis we restrict ourselves to the power and exponential potentials, which in general can be represented as g V(ϕ_a ϕ_a)= g/p!(ϕ_a ϕ_a)^p/2, where p ≥ 4 is an integer, or g V(ϕ_a ϕ_a)= g exp(|ϕ_a|/m) where a=1,2,…,N, m is a dimensionful parameter and |ϕ_a| is the length of the vector ϕ_a. In the general case, these potentials are non-renormalisable, as seen from power counting. The structure of the paper is as follows. In the first part of the paper, we briefly discuss the derivation of the effective potential within the functional formalism and the Feynman rules derived from the shifted action. In the second part of the paper, we calculate the first loop orders contributing to the effective potential.
In the second section, we also briefly discuss the ℛ'-operation and how recurrence relations can be obtained in connection with the Bogoliubov-Parasiuk theorem. Finally, we analyze the behaviour of the obtained recurrence relation in different limits and explore the generalised RG-equation in the large-N limit. § EFFECTIVE POTENTIAL AND VACUUM DIAGRAMS The generating functional of the considered theory is given as <cit.> Z(J)=∫𝒟ϕ exp(i∫ d^4x  [L(ϕ,∂ϕ)+Jϕ]), thus Z(J) corresponds to vacuum-to-vacuum transitions in the presence of the classical external current J(x). It is known that the effective action (and hence the effective potential) is obtained by the Legendre transform of the connected functional W(J)=-i log Z(J), Γ(ϕ)= W(J)-∫ d^4x J(x)ϕ(x), where the classical field ϕ(x) is defined as the solution to ϕ(x)=δ W(J)/δ J(x). However, it is convenient to read off the effective potential directly in a perturbative sense through 1PI Feynman graphs, so that the effective potential can be represented as V_eff(ϕ)=g∑_k=0^∞ (-g)^k V_k(ϕ). The rules for the latter can be obtained from the shifted action S[ϕ + ϕ̂], where ϕ is the classical field obeying the equation of motion and ϕ̂(x) is the quantum field over which integration can be performed <cit.>. This shift, with ϕ_a being a scalar field with N components, gives rise to a mass matrix, which can be conveniently represented as <cit.> m_ab^2=v̂_2 (δ_ab-ϕ_aϕ_b/ϕ^2)+v_2 ϕ_a ϕ_b/ϕ^2 where m_1^2=gv_2=g∂^2 V/∂ϕ^2,   m_2^2=gv̂_2=2g ∂ V/∂ (ϕ^2). One can note that the non-diagonal terms (which give contributions proportional to N̂=N-1) and the diagonal terms are factorised (this technique was used in <cit.> to study higher loop corrections to the effective potential). According to this representation, one can write the propagators in the following form G_1,ab=1/p^2-m_1^2ϕ_a ϕ_b/ϕ^2, G_2,ab=1/p^2-m_2^2(δ_ab- ϕ_a ϕ_b/ϕ^2). The vacuum diagrams contributing to the effective potential are depicted in Fig. <ref>. The vertices are also generated by shifting and expanding the corrections (we denote them as Δ V_1) in the quantum field ϕ̂<cit.>. In our case the vertices correspond to the derivatives of the classical SO(N)-potential: v_n=∂^n V/∂ϕ^n. Here the order of the derivative corresponds to the number of internal lines without the vector SO(N) index. Also, the shift gives vector vertices with indices as follows v̂_2=2 ∂ V/∂ (ϕ^2) , v̂_a,3=2 ∂/∂ϕ_a∂ V/∂ (ϕ^2) and also v̂_3=2 ∂/∂ϕ∂ V/∂ (ϕ^2) and so on. For the triple vertex with two internal lines carrying indices we have Λ_a b c=(δ_a bv̂_c,3+δ_a cv̂_b,3+δ_b cv̂_a,3). For the internal lines of the quartic vertices carrying the vector index one must contract propagators with the symmetric tensor: X_abcd=(δ_abδ_cd+δ_acδ_bd+δ_adδ_bc). The same rules hold for the higher-loop contributions and more complicated vertices <cit.>. The Feynman rules derived above allow us to compute loop diagrams (we use dimensional regularization, d=4-2ϵ) for the effective potential, so that the one-loop contributions to the effective potential have the following form Δ V_1=v_2^2/4(1/ϵ+log(m_1^2/μ^2))+N̂v̂_2^2/4(1/ϵ+log(m_2^2/μ^2)), where 1/4 is the combinatorial factor and μ is the dimensional transmutation parameter. Computation of the two-loop diagrams gives the following expression for the leading divergences: Δ V_2=1/8ϵ^2(v_2 v_3^2+v_4 v_2^2)+N̂(N̂+2)v̂_4 v̂_2^2/4ϵ^2+3N̂(v_3^2 v̂_2/ϵ^2+v̂_3^2 v_2/ϵ^2) Using these expressions, one can get all the expressions obtained earlier <cit.>.
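As a quick consistency check of these vertex definitions, the short sympy sketch below evaluates v_2 and v̂_2 for the power-like potential V=(ϕ_aϕ_a)^{p/2}/p! (written in terms of the radial variable ϕ=|ϕ_a| and s=ϕ^2) and assembles the logarithmic part of Δ V_1; for p=4 it reproduces the one-loop leading-log expression quoted in the example below. The script is only a sketch of this bookkeeping, not of the full diagrammatic computation.

import sympy as sp

phi, s, p, Nhat, L1, L2 = sp.symbols('phi s p Nhat L1 L2', positive=True)

# power-like SO(N) potential, V = (phi_a phi_a)^{p/2}/p!, with s = phi^2
V_s   = s ** (p / 2) / sp.factorial(p)
V_phi = V_s.subs(s, phi**2)

v2    = sp.diff(V_phi, phi, 2)                    # v_2 = d^2 V / d phi^2
vhat2 = (2 * sp.diff(V_s, s)).subs(s, phi**2)     # vhat_2 = 2 dV/d(phi^2)

# logarithmic part of Delta V_1, with L_i standing for log(m_i^2/mu^2)
dV1_log = sp.Rational(1, 4) * (v2**2 * L1 + Nhat * vhat2**2 * L2)

print(sp.simplify(v2.subs(p, 4)))       # phi**2/2
print(sp.simplify(vhat2.subs(p, 4)))    # phi**2/6
print(sp.expand(dV1_log.subs(p, 4)))    # L1*phi**4/16 + L2*Nhat*phi**4/144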
For example, at the one-loop level the leading-log contribution for the (ϕ^2)^2-potential reads V_1= 1/4g (ϕ^2)^2/4log(m_1^2/μ^2)+N/4g (ϕ^2)^2/36log(m_2^2/μ^2) and at the two-loop level V_2= 3 g^2 (ϕ^2)^2/32log^2 (m_1^2/μ^2)+Ng^2(ϕ^2)^2/48log(m_1^2/μ^2) log(m_2^2/μ^2)+N^2g^2 (ϕ^2)^2/864log^2 (m_2^2/μ^2) To obtain higher order contributions, we turn to the BPHZ procedure, which allows us to relate the lower orders in the PT-series to the higher orders. § BPHZ-PROCEDURE AND RG-EQUATION Let us recall that the R-operation <cit.>, being applied to an n-loop diagram, subtracts first of all the ultraviolet divergences in subgraphs starting from one loop up to (n-1) loops and then finally subtracts the remaining n-loop divergence, which is obliged to be local due to the Bogoliubov-Parasiuk theorem <cit.>. This n-loop divergence left after the incomplete ℛ-operation (ℛ'-operation) is precisely what we are looking for. The locality requirement tells us that the leading 1/ϵ^n divergence in n loops, A^(n)_n, is given by the following formula <cit.>: A^(n)_n=(-1)^{n+1} (1/n) A^(1)_n, where A^(1)_n is the one-loop divergence left after subtraction of the (n-1)-loop counterterm as a result of the incomplete R'-operation. Recall that the ℛ'-operation for any graph G can be defined recursively via the action of the ℛ'-operation on divergent subgraphs <cit.>: ℛ'G= (1- ∑_γ Kℛ'_γ+∑_γγ' Kℛ'_γ Kℛ'_γ' - …) G, where the subtraction operator K_γ subtracts the UV divergence of a given subgraph γ. The action of the R'-operation is shown in Figure <ref>. Using this theorem and applying it in the same way as in <cit.>, one can write down the generalized renormalization-group equation for non-renormalisable theories. The recurrent structure of the leading divergences can be written as n Δ V_n=1/4∑_k=0^n-1 D_abΔ V_k D_abΔ V_{n-k-1} with the differential operators D_ab→∂^2/∂ϕ_a ∂ϕ_b This equation can be re-expressed as n Δ V_n= 1/4∑_k=0^n-1(4 N̂∂/∂ (ϕ^2)Δ V_k ∂/∂ (ϕ^2)Δ V_n-k-1+∂^2/∂ϕ^2Δ V_k∂^2/∂ϕ^2Δ V_n-k-1), n≥ 2 This equation can be considered as the main result of the present work. If we choose N=1, we see that we recover the generalized renormalization-group equation in the case of an ordinary arbitrary scalar interaction <cit.>. Obviously, no analytical solution can be obtained from the general equation since even the trivial limit N=1 does not yield an equation that can be solved. However, it can be seen that in the limit of large N the recurrence equation is considerably simplified. In the large N limit (which in itself means summation over all bubble-type topologies) the recurrence relation takes the form n Δ V_n= ∑_k=0^n-1(N̂∂/∂ (ϕ^2)Δ V_k ∂/∂ (ϕ^2)Δ V_n-k-1). It can be rewritten as a differential equation after introducing Σ=∑_n=0^∞ (-z)^n Δ V_n (ϕ): d/dzΣ(z,ϕ)=-N(∂/∂ (ϕ^2)Σ(z,ϕ))^2,  Σ(0,ϕ)=V_0 The effective potential is then obtained from the solution by the substitution V_eff=g Σ(z,ϕ)|_z → -g/16 π^2log(gv̂_2/μ^2 ) Note that the RG-equation (<ref>) is an ordinary differential equation of the first order and can be solved analytically at least for a number of cases of non-renormalisable potentials. It can be seen that the generalised renormalisation-group equations look roughly the same as in the renormalisable case, with the exception that the beta function is replaced by a more complicated operator involving the fields. This situation fits quite well into the general picture of the study of non-renormalisable theories <cit.>.
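The structure of this large-N recurrence is easy to verify symbolically. The sketch below iterates n Δ V_n = ∑_k N̂ ∂_{(ϕ^2)}Δ V_k ∂_{(ϕ^2)}Δ V_{n-k-1} for the renormalisable p=4 potential and checks that the coefficients reproduce the geometric series corresponding to f(z)=1/(1+Nz/6) obtained in the next section; it is intended only as a cross-check of the recurrence, not as part of the derivation.

import sympy as sp

s, N = sp.symbols('s N', positive=True)   # s = phi^2, N stands for N-hat

dV = [s**2 / 24]                          # Delta V_0 = classical p = 4 potential
for n in range(1, 6):
    rhs = sum(N * sp.diff(dV[k], s) * sp.diff(dV[n - 1 - k], s) for k in range(n))
    dV.append(sp.simplify(rhs / n))

# closed form for p = 4: Delta V_n = (N/6)^n * s^2/24, i.e. f(z) = 1/(1 + N z/6)
for n, val in enumerate(dV):
    assert sp.simplify(val - (N / 6) ** n * s**2 / 24) == 0
print([sp.simplify(24 * v / s**2) for v in dV])   # [1, N/6, N**2/36, ...]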
Let us turn to particular examples of potentials to which one can calculate quantum corrections with the help of the obtained generalised renormalisation-group equation (<ref>). § LARGE N LIMIT §.§ Power-like potentials Now let us consider an example of a power-like potential. It is convenient to obtain a solution to the generalised RG-equation in the form of the following ansatz: Σ(z,ϕ_i)=(ϕ^2)^p/2/p! f(z (ϕ^2)^p/2-2) where p is a power of initial interaction. This behaviour is justified by the fact that the expansion of the function f(z) is given as a power series, as one can see in (<ref> -<ref>). Inserting (<ref>) into (<ref>) one easily can find an ordinary differential equation with the initial conditions - N/4p!((p-4) x f'(x)+p f(x))^2=f'(x),  f(0)=1 where the dimensionless argument is introduced as it comes from loop expansion x=z (ϕ^2)^p/2-2 . One can notice that this equation is homogeneous (we can use it and look for solution as f(Nx)). In the simplest p=4 case, the equation turns into sum of a geometric progression as expected: f(z)= 1/1 + N/6 z, and here the pole is located at z=-6/N. Thus, in the renormalisable case the generalised RG-equation (<ref>) reproduces the well-known textbook example. It is easy to change variables as prescribed in (<ref>) and get a full expression for the effective potential (<ref>): V_eff=g(ϕ^2)^2/4!/1-N/6g/16 π^2log(m_2/μ). The corresponding graphics for f(z) and V_eff are shown in Figure <ref>. In these figures one can clearly see a pole due to which the ground state in such renormalisable theory is stable. Let us turn to non-renormalisable examples (when p>4). It is easier to shift the ansatz f(x) → ((4-p) x)^p/4-p S(x) and to find that p S(x)-(p-4) x S'(x)=N/4p! ((4-p) x)^2p-6/p-4S'(x)^2,   S(0)=0. Thus, nonlinearity is simplified and isolated. Unfortunately, the general solution of the equation in analytically closed form cannot be obtained, but it should be noted that the solution of the equation by quadrature is given by -2 (p!)^p-4/2(-2 √(N (p-4) /(p-1)!q S(x)+1)-p+2)^p-2/2(√(N (p-4)/(p-1)! q S(x)+1)-1)/N (p-4) p^2 q S(x)=C. Here C is the constant of integration and q=((4-p) x)^4/4-p, hence the desired function in exact form must be expressed as cumbersome inverse function. In some cases one can solve this generalised RG-equation in an analytically exact way. The most obvious and easiest choice of potential is p=6. In this case, we can get from (<ref>) -N/720(x S'(x)+3 S(x))^2=S'(x) and solution can be found as f(x)=3× 5!^2(1/2 N x (1+N x/5!)-10 (1+N x/30)^3/2+10)/N^3 x^3 Note that this expression describes a number of bubble diagrams including those generated by the vertices of ϕ^6-type. Note that in (<ref>) there is no pole or function discontinuity behaviour. It is easy to see that (<ref>) has an imaginary part when x<-30/N. However, the behaviour of the function remains regular as shown in Fig. <ref>. It is known that imaginary parts (due to non-convexity of solution (<ref>)) have a natural explanation as indicators of unstable states <cit.>. §.§ Exponential potential It is quite interesting that representation of the mass matrix in the form of (<ref>) is also applicable to the exponential potential. To proceed, we can perform the same steps for the exp(|ϕ|/m)-potential and look for solution of RG-equation in the following representation Σ(z, ϕ_a)=e^|ϕ|/m f(x) where we introduced dimensionless variable x=z/m^4 e^|ϕ|/m. 
Thus the RG-equation has the following form N/4(x f'(x)+f(x))^2=-f'(x) with the initial condition f(0)=1. The solution of (<ref>) satisfying (<ref>) is given by f(x)= 1/(2 N x) W(N x) (W(N x)+2) where W(x) is the Lambert W function. This solution is regular; f(x) is presented in Fig. <ref>. Again, there are no signs of irregularity apart from the appearance of an imaginary part (the exact value of the argument at which it appears is given by the transcendental equation Im(f(x))=0). Surprisingly, the resulting expression is an analytic function of x although the original potential is not. It should be noted that here the initial potential enters as a variable in the argument of the function, so that in fact the full expression is not analytic. Thus, as in the previous example, the ground state of the obtained solution can be considered as metastable. § CONCLUSION In this paper, we have succeeded in constructing a generalised renormalisation-group equation for an arbitrary scalar theory with SO(N) symmetry using the Bogoliubov-Parasiuk theorem. The resulting equation reproduces, in the N=1 limit, the equation we studied earlier. In the large N limit we managed to obtain a generalised renormalisation-group equation which turns out to be solvable. In several special cases we presented its exact solutions and the corresponding effective potentials. The common property of the resulting effective potentials in non-renormalisable models is their metastability and the absence of behavioural peculiarities, i.e. discontinuities. The exactly solvable equations obtained allow the properties of effective potentials to be described more qualitatively even in the case of non-renormalisable interactions. Therefore, there is a good prospect of studying more complicated actions with more interesting properties, for example, four-dimensional supersymmetric actions of the Wess-Zumino type and their generalisations (as is well known, Kählerian and chiral superpotentials can be supplemented by quantum corrections <cit.>). Also, effective potentials are quite interesting from the point of view of studying the scheme dependence, since effective potentials require the computation of only quite simple vacuum diagrams. Thus, the study of the sub-leading logarithmic order promises to be fruitful, as it was in the case of higher dimensional on-shell four-point scattering MHV-amplitudes <cit.>. § ACKNOWLEDGMENTS The authors are grateful to D.I. Kazakov for reading the manuscript and valuable comments. The authors would also like to thank I.L. Buchbinder and S.V. Mikhaylov for fruitful discussions.
http://arxiv.org/abs/2406.18009v1
20240626013837
E2 TTS: Embarrassingly Easy Fully Non-Autoregressive Zero-Shot TTS
[ "Sefik Emre Eskimez", "Xiaofei Wang", "Manthan Thakker", "Canrun Li", "Chung-Hsien Tsai", "Zhen Xiao", "Hemin Yang", "Zirun Zhu", "Min Tang", "Xu Tan", "Yanqing Liu", "Sheng Zhao", "Naoyuki Kanda" ]
eess.AS
[ "eess.AS", "cs.SD" ]
§ ABSTRACT This paper introduces Embarrassingly Easy Text-to-Speech (E2 TTS), a fully non-autoregressive zero-shot text-to-speech system that offers human-level naturalness and state-of-the-art speaker similarity and intelligibility. In the E2 TTS framework, the text input is converted into a character sequence with filler tokens. The flow-matching-based mel spectrogram generator is then trained based on the audio infilling task. Unlike many previous works, it does not require additional components (e.g., duration model, grapheme-to-phoneme) or complex techniques (e.g., monotonic alignment search). Despite its simplicity, E2 TTS achieves state-of-the-art zero-shot TTS capabilities that are comparable to or surpass previous works, including Voicebox and NaturalSpeech 3. The simplicity of E2 TTS also allows for flexibility in the input representation. We propose several variants of E2 TTS to improve usability during inference. See <https://aka.ms/e2tts/> for demo samples. zero-shot text-to-speech, flow-matching § INTRODUCTION In recent years, text-to-speech (TTS) systems have seen significant improvements <cit.>, achieving a level of naturalness that is indistinguishable from human speech <cit.>. This advancement has further led to research efforts to generate natural speech for any speaker from a short audio sample, often referred to as an audio prompt. Early studies of zero-shot TTS used speaker embeddings to condition the TTS system <cit.>. More recently, VALL-E <cit.> proposed formulating the zero-shot TTS problem as a language modeling problem in the neural codec domain, achieving significantly improved speaker similarity while maintaining a simple model architecture. Various extensions were proposed to improve stability <cit.>, and VALL-E 2 <cit.> recently achieved human-level zero-shot TTS with techniques including repetition-aware sampling and grouped code modeling. While neural codec language model-based zero-shot TTS achieved promising results, there are still a few limitations stemming from its auto-regressive (AR) model-based architecture. Firstly, because the codec tokens need to be sampled sequentially, the inference latency inevitably increases. Secondly, a dedicated effort to figure out the best tokenizer (both for text tokens and audio tokens) is necessary to achieve the best quality <cit.>. Thirdly, it is required to use some tricks to stably handle long sequences of audio codecs, such as the combination of AR and non-autoregressive (NAR) modeling <cit.>, a multi-scale transformer <cit.>, and grouped code modeling <cit.>. Meanwhile, several fully NAR zero-shot TTS models have been proposed with promising results. Unlike AR-based models, fully NAR models enjoy fast inference based on parallel processing. NaturalSpeech 2 <cit.> and NaturalSpeech 3 <cit.> estimate the latent vectors of a neural audio codec based on diffusion models <cit.>. Voicebox <cit.> and MatchaTTS <cit.> used a flow-matching model <cit.> conditioned on the input text. However, one notable challenge for such NAR models is how to obtain the alignment between the input text and the output audio, whose lengths are significantly different.
NaturalSpeech 2, NaturalSpeech 3, and Voicebox used a frame-wise phoneme alignment for training the model. MatchaTTS, on the other hand, used monotonic alignment search (MAS) <cit.> to automatically find the alignment between input and output. While MAS could alleviate the necessity of the frame-wise phoneme aligner, it still requires an independent duration model to estimate the duration of each phoneme during inference. More recently, E3 TTS <cit.> proposed using cross-attention from the input sequence, which required a carefully designed U-Net architecture <cit.>. As such, fully NAR zero-shot TTS models require either an explicit duration model or a carefully designed architecture. One of our findings in this paper is that such techniques are not necessary to achieve high-quality zero-shot TTS, and they are sometimes even harmful to naturalness.[Concurrent with our work, Seed-TTS <cit.> proposed a diffusion model-based zero-shot TTS, named Seed-TTS_DiT. Although it appears to share many similarities with our approach, the authors did not elaborate on the details of their model, making it challenging to compare with our work.] Another complexity in TTS systems is the choice of the text tokenizer. As discussed above, the AR-model-based system requires a careful selection of tokenizer to achieve the best result. On the other hand, most fully NAR models assume a monotonic alignment between text and output, with the exception of E3 TTS, which uses cross-attention. These models impose constraints on the input format and often require a text normalizer to avoid invalid input formats. When the model is trained based on phonemes, a grapheme-to-phoneme converter is additionally required. In this paper, we propose Embarrassingly Easy TTS (E2 TTS), a fully NAR zero-shot TTS system with a surprisingly simple architecture. E2 TTS consists of only two modules: a flow-matching-based mel spectrogram generator and a vocoder. The text input is converted into a character sequence with filler tokens to match the length of the input character sequence and the output mel-filterbank sequence. The mel spectrogram generator, composed of a vanilla Transformer with U-Net style skip connections, is trained using a speech-infilling task <cit.>. Despite its simplicity, E2 TTS achieves state-of-the-art zero-shot TTS capabilities that are comparable to, or surpass, previous works, including Voicebox and NaturalSpeech 3. The simplicity of E2 TTS also allows for flexibility in the input representation. We propose several variants of E2 TTS to improve usability during inference. § E2 TTS §.§ Training Fig. <ref> (a) provides an overview of E2 TTS training. Suppose we have a training audio sample s with transcription y=(c_1, c_2, ..., c_M), where c_i represents the i-th character of the transcription.[Alternatively, we can represent y as a sequence of Unicode bytes <cit.>.] First, we extract its mel-filterbank features ŝ∈ℝ^D× T, where D denotes the feature dimension and T represents the sequence length. We then create an extended character sequence ỹ, where a special filler token ⟨ F⟩ is appended to y to make the length of ỹ equal to T.[We assume M≤ T, which is almost always valid.] ŷ = (c_1, c_2, …, c_M, ⟨ F⟩, …, ⟨ F⟩_(T-M) times). A spectrogram generator, consisting of a vanilla Transformer <cit.> with U-net <cit.> style skip connection, is then trained based on the speech infilling task <cit.>. 
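A minimal sketch of this extended character sequence is given below; the filler-token string and the plain-Python representation are illustrative choices rather than the actual E2 TTS implementation.

FILLER = "<F>"   # the special filler token, written <F> in the text

def extend_characters(transcription, num_frames):
    # pad the character sequence y with filler tokens so that its length
    # matches the number of mel-filterbank frames T (assumes M <= T)
    chars = list(transcription)
    if len(chars) > num_frames:
        raise ValueError("transcription is longer than the mel sequence")
    return chars + [FILLER] * (num_frames - len(chars))

y_hat = extend_characters("Hello world!", num_frames=20)
print(len(y_hat), y_hat[10:14])   # 20 ['d', '!', '<F>', '<F>']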
More specifically, the model is trained to learn the distribution P(m⊙ŝ|(1-m)⊙ŝ, ŷ), where m∈{0,1}^D× T represents a binary temporal mask, and ⊙ is the Hadamard product. E2 TTS uses the conditional flow-matching <cit.> to learn such distribution. §.§ Inference Fig. <ref> (b) provides an overview of the inference with E2 TTS. Suppose we have an audio prompt s^aud and its transcription y^ aud=(c'_1, c'_2, ..., c'_M^ aud) to mimic the speaker characteristics. We also suppose a text prompt y^ text=(c”_1, c”_2, ..., c”_M^ text). In the E2 TTS framework, we also require the target duration of the speech that we want to generate, which may be determined arbitrarily. The target duration is internally represented by the frame length T^ gen. First, we extract the mel-filterbank features ŝ^ aud∈ℝ^D× T^ aud from s^ aud. We then create an extended character sequence ŷ' by concatenating y^ aud, y^ text, and repeated ⟨ F⟩, as follows: ŷ' = (c'_1, c'_2, …, c'_M^ aud, c”_1, c”_2, …, c”_M^ text, ⟨ F⟩, …, ⟨ F⟩_𝒯 times), where 𝒯=T^ aud+T^ gen-M^ aud-M^ text, which ensures the length of ŷ' is equal to T^ aud+T^ gen.[To ensure 𝒯≥ 0, T^ gen needs to satisfy T^ gen≥ M^ aud+M^ text-T^ aud, which is almost always valid in the TTS scenario.] The mel spectrogram generator then generates mel-filterbank features s̃ based on the learned distribution of P(s̃|[ŝ^ aud; z^ gen], ŷ'), where z^ gen is an all-zero matrix with a shape of D× T^ gen, and [;] is a concatenation operation in the dimension of T^*. The generated part of s̃ are then converted to the speech signal based on the vocoder. §.§ Flow-matching-based mel spectrogram generator E2 TTS leverages conditional flow-matching <cit.>, which incorporates the principles of continuous normalizing flows <cit.>. This model operates by transforming a simple initial distribution p_0 into a complex target distribution p_1 that characterizes the data. The transformation process is facilitated by a neural network, parameterized by θ, which is trained to estimate a time-dependent vector field, denoted as v_t(x;θ), for t ∈ [0, 1]. From this vector field, we derive a flow, ϕ_t, which effectively transitions p_0 into p_1. The neural network's training is driven by the conditional flow matching objective: ℒ^CFM(θ) = 𝔼_t,q(x_1), p_t(x|x_1) u_t(x|x_1) - v_t(x;θ) ^2, where p_t is the probability path at time t, u_t is the designated vector field for p_t, x_1 symbolizes the random variable corresponding to the training data, and q is the distribution of the training data. In the training phase, we construct both a probability path and a vector field from the training data, utilizing an optimal transport path: p_t(x|x_1)=𝒩(x|tx_1, (1-(1-σ_ min)t)^2I) and u_t(x|x_1)=(x_1-(1-σ_ min)x)/(1-(1-σ_ min)t). For inference, we apply an ordinary differential equation solver <cit.> to generate the log mel-filterbank features starting from the initial distribution p_0. We adopt the same model architecture with the audio model of Voicebox (Fig. 2 of <cit.>) except that the frame-wise phoneme sequence is replaced into ŷ. Specifically, Transformer with U-Net style skip connection <cit.> is used as a backbone. The input to the mel spectrogram generator is m⊙ŝ, ŷ, the flow step t, and noisy speech s_t. ŷ is first converted to character embedding sequence ỹ∈ℝ^E× T. Then, m⊙ŝ, s_t, ỹ are all stacked to form a tensor with a shape of (2· D+E)× T, followed by a linear layer to output a tensor with a shape of D× T. 
Finally, an embedding representation, t̂∈ℝ^D, of t is appended to form the input tensor with a shape of ℝ^D× (T+1) to the Transformer. The Transformer is trained to output a vector field v_t with the conditional flow-matching objective ℒ^ CFM. §.§ Relationship to Voicebox E2 TTS has a close relationship with the Voicebox. From the perspective of the Voicebox, E2 TTS replaces a frame-wise phoneme sequence used in conditioning with a character sequence that includes a filler token. This change significantly simplifies the model by eliminating the need for a grapheme-to-phoneme converter, a phoneme aligner, and a phoneme duration model. From another viewpoint, the mel spectrogram generator of E2 TTS can be viewed as a joint model of the grapheme-to-phoneme converter, the phoneme duration model, and the audio model of the Voicebox. This joint modeling significantly improves naturalness while maintaining speaker similarity and intelligibility, as will be demonstrated in our experiments. §.§ Extension of E2 TTS §.§.§ Extension 1: Eliminating the need for transcription of audio prompts in inference In certain application contexts, obtaining the transcription of the audio prompt can be challenging. To eliminate the requirement of the transcription of the audio prompt during inference, we introduce an extension, illustrated in Fig. <ref>. This extension, referred to as E2 TTS X1, assumes that we have access to the transcription of the masked region of the audio, which we use for y. During inference, the extended character sequence ỹ' is formed without y^ aud, namely, ỹ' = (c”_1, c”_2, …, c”_M^ text, ⟨ F⟩, …, ⟨ F⟩_𝒯 times). The rest of the procedure remains the same as in the basic E2 TTS. The transcription of the masked region of the training audio can be obtained in several ways. One method is to simply apply ASR to the masked region during training, which is straightforward but costly. In our experiment, we employed the Montreal Forced Aligner <cit.> to determine the start and end times of words within each training data sample. The masked region was determined in such a way that we ensured not to cut the word in the middle. §.§.§ Extension 2: Enabling explicit indication of pronunciation for parts of words in a sentence In certain scenarios, users want to specify the pronunciation of a specific word such as unique foreign names. Retraining the model to accommodate such new words is both expensive and time-consuming. To tackle this challenge, we introduce another extension that enables us to indicate the pronunciation of a word during inference. In this extension, referred to as E2 TTS X2, we occasionally substitute a word in y with a phoneme sequence enclosed in parentheses during training, as depicted in Fig. <ref>. In our implementation, we replaced the word in y with the phoneme sequence from the CMU pronouncing dictionary <cit.> with a 15% probability. During inference, we simply replace the target word with phoneme sequences enclosed in parentheses. It's important to note that y is still a simple sequence of characters, and whether the character represents a word or a phoneme is solely determined by the existence of parentheses and their content. It's also noteworthy that punctuation marks surrounding the word are retained during replacement, which allows the model to utilize these punctuation marks even when the word is replaced with phoneme sequences. § EXPERIMENTS §.§ Training data We utilized the Libriheavy dataset <cit.> to train our models. 
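Before turning to the data and model configurations, we note that the conditional flow-matching objective described above reduces to a simple regression target once the optimal-transport path is inserted. The PyTorch-style sketch below shows one such training step; the model interface, the loss masking, and the handling of classifier-free guidance dropout are placeholders rather than the actual E2 TTS implementation.

import torch

def cfm_training_step(model, s_hat, y_emb, mask, sigma_min=1e-5):
    # optimal-transport path: x_t = (1 - (1 - sigma_min) t) x0 + t x1,
    # with x1 the clean mel features s_hat and x0 ~ N(0, I);
    # the regression target is u_t = x1 - (1 - sigma_min) x0
    x1 = s_hat                                      # (B, D, T)
    x0 = torch.randn_like(x1)
    t = torch.rand(x1.shape[0], 1, 1, device=x1.device)

    x_t = (1.0 - (1.0 - sigma_min) * t) * x0 + t * x1    # noisy speech s_t
    u_t = x1 - (1.0 - sigma_min) * x0

    cond = (1.0 - mask) * x1                        # (1 - m) ⊙ s_hat conditioning
    v_pred = model(x_t, cond, y_emb, t)             # predicted vector field v_t

    # penalise only the masked (infilled) region, in the spirit of the infilling task
    return ((mask * (v_pred - u_t)) ** 2).mean()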
The Libriheavy dataset comprises 50,000 hours of read English speech from 6,736 speakers, accompanied by transcriptions that preserve case and punctuation marks. It is derived from the Librilight <cit.> dataset contains 60,000 hours of read English speech from over 7,000 speakers. For E2 TTS training, we used the case and punctuated transcription without any pre-processing. We also used a proprietary 200,000 hours of training data to investigate the scalability of E2 TTS model. §.§ Model configurations We constructed our proposed E2 TTS models using a Transformer architecture. The architecture incorporated U-Net <cit.> style skip connections, 24 layers, 16 attention heads, an embedding dimension of 1024, a linear layer dimension of 4096, and a dropout rate of 0.1. The character embedding vocabulary size was 399.[We used all characters and symbols that we found in the training data without filtering.] The total number of parameters amounted to 335 million. We modeled the 100-dimensional log mel-filterbank features, extracted every 10.7 milliseconds from audio samples with a 24 kHz sampling rate. A BigVGAN <cit.>-based vocoder was employed to convert the log mel-filterbank features into waveforms. The masking length was randomly determined to be between 70% and 100% of the log mel-filterbank feature length during training. In addition, we randomly dropped all the conditioning information with a 20% probability for classifier-free guidance (CFG) <cit.>. All models were trained for 800,000 mini-batch updates with an effective mini-batch size of 307,200 audio frames. We utilized a linear decay learning rate schedule with a peak learning rate of 7.5×10^-5 and incorporated a warm-up phase for the initial 20,000 updates. We discarded the training samples that exceeded 4,000 frames. In a subset of our experiments, we initialized E2 TTS models using a pre-trained model in an unsupervised manner. This pre-training was conducted on an anonymized dataset, which consisted of 200,000 hours of unlabeled data. The pre-training protocol, which involved 800,000 mini-batch updates, followed the scheme outlined in <cit.>. In addition, we trained a regression-based duration model by following that of Voicebox <cit.>. It is based on a Transformer architecture, consisted of 8 layers, 8 attention heads, an embedding dimension of 512, a linear layer dimension of 2048, and a dropout rate of 0.1. The training process involved 75,000 mini-batch updates with 120,000 frames. We used this duration model for estimating the target duration for the fair comparison to the Voicebox baseline. Note that we will also show that E2 TTS is robust for different target duration in Section <ref>. §.§ Evaluation data and metrics In order to assess our models, we utilized the test-clean subset of the LibriSpeech-PC dataset <cit.>, which is an extension of LibriSpeech <cit.> that includes additional punctuation marks and casing. We specifically filtered the samples to retain only those with a duration between 4 and 10 seconds. Since LibriSpeech-PC lacks some of the utterances from LibriSpeech, the total number of samples was reduced to 1,132, sourced from 39 speakers. For the audio prompt, we extracted the last three seconds from a randomly sampled speech file from the same speaker for each evaluation sample. We carried out both objective and subjective evaluations. For the objective evaluations, we generated samples using three random seeds, computed the objective metrics for each, and then calculated their average. 
We computed the word error rate (WER) and speaker similarity (SIM-o). The WER is indicative of the intelligibility of the generated samples, and for its calculation, we utilized a Hubert-large-based <cit.> ASR system. The SIM-o represents the speaker similarity between the audio prompt and the generated sample, which is estimated by computing the cosine similarity between the speaker embeddings of both. For the calculation of SIM-o, we used a WavLM-large-based <cit.> speaker verification model. For the subjective assessments, we conducted two tests: the Comparative Mean Opinion Score (CMOS) and the Speaker Similarity Mean Opinion Score (SMOS). We evaluated 39 samples for both tests, with one sample per speaker from our test-clean set. Each sample was assessed by 12 Native English evaluators. In the CMOS test <cit.>, evaluators were shown the ground-truth sample and the generated sample side-by-side without knowing which was the ground-truth, and were asked to rate the naturalness on a 7-point scale (from -3 to 3), where a negative value indicates a preference for the ground-truth and a positive value indicates the opposite. In the SMOS test, evaluators were presented with the audio prompt and the generated sample, and asked to rate the speaker similarity on a scale of 1 (not similar at all) to 5 (identical), with increments of 1. §.§ Main results In our experiments, we conducted a comparison between our E2 TTS models and four other models: Voicebox <cit.>, VALL-E <cit.>, and NaturalSpeech 3 <cit.>. We utilized our own reimplementation of the Voicebox model, which was based on the same model configuration with E2 TTS except that the Vicebox model is trained with frame-wise phoneme alignment. During the inference, we used CFG with a guidance strength of 1.0 for both E2 TTS and Voicebox. We employed the midpoint ODE solver with a number of function evaluations of 32. Table <ref> presents the objective evaluation results for the baseline and E2 TTS models across various configurations. By comparing the (B4) and (P1) systems, we observe that the E2 TTS model achieved better WER and SIM-o than the Voicebox model when both were trained on the Libriheavy dataset. This trend holds even when we initialize the model with unsupervised pre-training <cit.> ((B5) vs. (P2)), where the (P2) system achieved the best WER (1.9%) and SIM-o (0.708) which are better than those of the ground-truth audio. Finally, by using larger training data (P3), E2 TTS achieved the same best WER (1.9%) and the second best SIM-o (0.707) even when the model is trained from scratch, showcasing the scalability of E2 TTS. It is noteworthy that E2 TTS achieved superior performance compared to all strong baselines, including VALL-E, NaturalSpeech 3, and Voicebox, despite its extremely simple framework. Table <ref> illustrated the subjective evaluation results for NaturalSpeech 3, Voicebox, and E2 TTS. Firstly, all variants of E2 TTS, from (P1) to (P3), showed a better CMOS score compared to NaturalSpeech 3 and Voicebox. In particular, the (P2) model achieved a CMOS score of -0.05, which is considered to have a level of naturalness indistinguishable from the ground truth <cit.>.[In the evaluation, E2 TTS was judged to be better in 33% of samples, while the ground truth was better in another 33% of samples. The remaining samples were judged to be of equal quality.] The comparison between (B4) and (P1) suggests that the use of phoneme alignment was the major bottleneck in achieving better naturalness. 
Regarding speaker similarity, all the models we tested showed a better SMOS compared to the ground truth, a phenomenon also observed in NaturalSpeech 3 <cit.>.[In LibriSpeech, some speakers utilized varying voice characteristics for different characters in the book, leading to a low SMOS for the ground truth.] Among the tested systems, E2 TTS achieved a comparable SMOS to Voicebox and NaturalSpeech 3. Overall, E2 TTS demonstrated a robust zero-shot TTS capability that is either superior or comparable to strong baselines, including Voicebox and NaturalSpeech 3. The comparison between Voicebox and E2 TTS revealed that the use of phoneme alignment was the primary obstacle in achieving natural-sounding audio. With a simple training scheme, E2 TTS can be easily scaled up to accommodate large training data. This resulted in human-level speaker similarity, intelligibility, and a level of naturalness that is indistinguishable from a human's voice in the zero-shot TTS setting, despite the framework’s extreme simplicity. §.§ Evaluation of E2 TTS extensions §.§.§ Evaluation of the extension 1 The results for the E2 TTS X1 models are shown in Table <ref>. These results indicate that the E2 TTS X1 model has achieved results nearly identical to those of the E2 TTS model, especially when the model was initialized by unsupervised pre-training <cit.>. E2 TTS X1 does not require the transcription of the audio prompt, which greatly enhances its usability. §.§.§ Evaluation of the extension 2 In this experiment, we trained the E2 TTS X2 models with a phoneme replacement rate of 15%. During inference, we randomly replaced words in the test-clean dataset with phoneme sequences, with a probability ranging from 0% up to 50%. Table <ref> shows the result. We first observed that E2 TTS X2 achieved parity results when no words were replaced with phoneme sequences. This shows that we can introduce extension 2 without any drawbacks. We also observed that the E2 TTS X2 model showed only marginal degradation of WER, even when we replaced 50% of words with phoneme sequences. This result indicates that we can specify the pronunciation of a new term without retraining the E2 TTS model. §.§ Analysis of the system behavior §.§.§ Training progress Fig. <ref> illustrates the training progress of the (B4)-Voicebox, (B5)-Voicebox, (P1)-E2 TTS, and (P2)-E2 TTS models. The upper graphs represent the training progress as measured by WER, while the lower graphs depict the progress as measured by SIM-o. We present a comparison between (B4)-Voicebox and (P1)-E2 TTS, as well as between (B5)-Voicebox and (P2)-E2 TTS. The former pair was trained from scratch, while the latter was initialized by unsupervised pre-training <cit.>. From the WER graphs, we observe that the Voicebox models demonstrated a good WER even at the 10% training point, owing to the use of frame-wise phoneme alignment. On the other hand, E2 TTS required significantly more training to converge. Interestingly, E2 TTS achieved a better WER at the end of the training. We speculate this is because the E2 TTS model learned a more effective grapheme-to-phoneme mapping based on the large training data, compared to what was used for Voicebox. From the SIM-o graphs, we also observed that E2 TTS required more training iteration, but it ultimately achieved a better result at the end of the training. We believe this suggests the superiority of E2 TTS, where the audio model and duration model are jointly learned as a single flow-matching Transformer. 
§.§.§ Impact of audio prompt length During the inference of E2 TTS, the model needs to automatically identify the boundary of y^ aud and y^ text in ŷ based on the audio prompt ŝ^ aud. Otherwise, the generated audio may either contain a part of y^ aud or miss a part of y^ text. This is not a trivial problem, and E2 TTS could show a deteriorated result when the length of the audio prompt ŝ^ aud is long. To examine the capability of E2 TTS, we evaluated the WER and SIM-o with different audio prompt lengths. The result is shown in Fig. <ref>. In this experiment, we utilized the entire audio prompts instead of using the last 3 seconds, and categorized the result based on the length of the audio prompts. As shown in the figure, we did not observe any obvious pattern between the WER and the length of the audio prompt. This suggests that E2 TTS works reasonably well even when the prompt length is as long as 10 seconds. On the other hand, SIM-o significantly improved when the audio prompt length increased, which suggests the scalability of E2 TTS with respect to the audio prompt length. §.§.§ Impact of changing the speech rate We further examined the model's ability to produce suitable content when altering the total duration input. In this experiment, we adjusted the total duration by multiplying it by 1/sr, where sr represents the speech rate. The results are shown in Fig. <ref>. As depicted in the graphs, the E2 TTS model exhibited only a moderate increase in WER while maintaining a high SIM-o, even in the challenging cases of sr=0.7 and sr=1.3. This result suggests the robustness of E2 TTS with respect to the total duration input. § CONCLUSIONS We introduced E2 TTS, a novel fully NAR zero-shot TTS. In the E2 TTS framework, the text input is converted into a character sequence with filler tokens to match the length of the input character sequence and the output mel-filterbank sequence. The flow-matching-based mel spectrogram generator is then trained based on the audio infilling task. Despite its simplicity, E2 TTS achieved state-of-the-art zero-shot TTS capabilities that were comparable to or surpass previous works, including Voicebox and NaturalSpeech 3. The simplicity of E2 TTS also allowed for flexibility in the input representation. We proposed several variants of E2 TTS to improve usability during inference. IEEEbib
http://arxiv.org/abs/2406.18137v1
20240626074141
Sparse deep neural networks for nonparametric estimation in high-dimensional sparse regression
[ "Dongya Wu", "Xin Li" ]
stat.ML
[ "stat.ML", "cs.LG" ]
Sparse deep neural networks for nonparametric estimation A]Dongya Wu[label=e1]wudongya@nwu.edu.cn, B]Xin Li[label=e2]lixin@nwu.edu.cn [A]School of Information Science and Technology, Northwest University, Xi'an 710127, China[presep=, ]e1 [B]School of Mathematics, Northwest University, Xi'an 710127, China[presep=, ]e2 § ABSTRACT Generalization theory has been established for sparse deep neural networks under high-dimensional regime. Beyond generalization, parameter estimation is also important since it is crucial for variable selection and interpretability of deep neural networks. Current theoretical studies concerning parameter estimation mainly focus on two-layer neural networks, which is due to the fact that the convergence of parameter estimation heavily relies on the regularity of the Hessian matrix, while the Hessian matrix of deep neural networks is highly singular. To avoid the unidentifiability of deep neural networks in parameter estimation, we propose to conduct nonparametric estimation of partial derivatives with respect to inputs. We first show that model convergence of sparse deep neural networks is guaranteed in that the sample complexity only grows with the logarithm of the number of parameters or the input dimension when the ℓ_1-norm of parameters is well constrained. Then by bounding the norm and the divergence of partial derivatives, we establish that the convergence rate of nonparametric estimation of partial derivatives scales as 𝒪(n^-1/4), a rate which is slower than the model convergence rate 𝒪(n^-1/2). To the best of our knowledge, this study combines nonparametric estimation and parametric sparse deep neural networks for the first time. As nonparametric estimation of partial derivatives is of great significance for nonlinear variable selection, the current results show the promising future for the interpretability of deep neural networks. [class=MSC] [Primary ]68T07 Deep neural networks ℓ_1 sparse constraint nonparametric estimation convergence of derivatives § INTRODUCTION Consider the regression problem y_i=f_0(x_i)+ξ_i given n samples (x_i,y_i), where the input predictive variable x_i∈ℝ^d and response variable y_i∈, and ξ_i represents random noise. When the number of variables d is greater than the sample size n, the relationship between response variables and predictive variables is not unique, which directly leads to the unidentifiability of statistical models. For linear regression when f_0(·) is a linear function, researchers have committed to utilize the sparse structure of high-dimensional data. Estimators such as Lasso <cit.>, SCAD <cit.>, MCP <cit.> for sparse linear regression have been proposed with parameter estimation, prediction errors, and variable selection consistency established; see <cit.> for a detailed review. However, in many practical situations such as genomic and financial data, the relationship between the covariate x and the response y is generally nonlinear, and modeling the nonlinear relationship become an important challenge for high-dimensional statistical learning. Subsequently, nonlinear regression models such as sparse additive model <cit.>, local linear estimation <cit.> and kernel regression <cit.> have been developed. In view of the development trend, the expression ability of nonlinear models is getting more powerful, in order to achieve more effective approximation to the true nonlinear models. 
Thanks to the powerful model expression ability of neural networks, more and more researchers have adopted neural networks in high-dimensional nonparametric statistical learning. From the aspect of generalization, in the pioneering study, <cit.> showed that the sample complexity of two-layer neural networks with constrained ℓ_1-norm on the weights only grows with the logarithm of the input dimension d. The dependence on the logarithm of the input dimensions d is similar to the results in high-dimensional sparse linear regression and is crucial when the input dimension d is large. Later, similar sample complexity that only depends on the logarithm of the input dimension d has also been established for deep neural networks with ℓ_1-constrained parameters <cit.>. Other studies have established generalization guarantees for ℓ_1-regularized deep neural networks <cit.>, or some other kinds of sparsity such as node sparsity and layer sparsity <cit.>. From the aspect of information-theoretic limitations, based on the combinatorial or hierarchical or piece-wise partitioning assumption of the true regression function, researchers showed that deep neural networks with sparse connection structure can achieve the minimax convergence rate to the true regression function without relying on the input dimension d <cit.>. Parameter estimation and variable selection have also been investigated. <cit.> imposed Lasso penalty on the first layer's weights of a two-layer convex neural network with infinite number of hidden nodes, theoretically ensuring the generalization and achieving nonlinear variable selection in high-dimensional settings. <cit.> imposed sparse group Lasso penalty on the first layer's weights of two-layer neural networks, and showed that both the estimated model and weights of irrelevant features converge. <cit.> extended the group Lasso penalty to group SCAD and MCP. <cit.> also investigated the identifiability or estimation of parameters of two-layer neural networks with ℓ_1-regularized weights based on an extended Restricted Isometry Property (RIP). In addition, <cit.> showed that applying adaptive group Lasso penalty on the first layer's parameters of two-layer neural networks can achieve consistent model prediction, parameter estimation and variable selection. Consistency on parameter estimation and variable selection is based on the positive definite regularity of the Fisher information matrix or the Hessian matrix established in <cit.>. Since the regularity condition of the Fisher information matrix or Hessian matrix is difficult to be satisfied for deep neural networks, <cit.> extended the above results to deep neural networks via Lojasiewicz's inequality <cit.>. However, in general, the constant in Lojasiewicz's inequality can be large for deep neural networks and how the constant depends on deep neural networks is not clear. Therefore, the convergence rate for parameter estimation via Lojasiewicz's inequality can be very slow for deep neural networks. In summary, generalization theory for sparse deep neural networks is relatively better developed, while the parameter estimation and variable selection theories only focus on two-layer neural networks. The main challenge of parameter estimation and variable selection for deep neural networks lies in that deep neural networks are highly unidentifiable, since the Fisher information matrix or the Hessian matrix with respect to parameters can be highly singular <cit.>. 
Therefore, regularity conditions commonly used in classical high-dimensional sparse linear regression cannot be easily satisfied for deep neural networks, and classical parameter estimation or identification method that is important for variable selection cannot be applied on sparse deep neural networks. To avoid the unidentifiability of deep neural networks in parameter estimation, instead of focusing on parameter estimation, we propose to conduct nonparametric estimation of model partial derivatives with respect to inputs, which can be used to achieve nonlinear variable selection <cit.>. In this study, we first establish the model convergence for ℓ_1-norm constrained sparse deep neural networks; see Theorem <ref>. Specifically, the convergence rate of the model scales as 𝒪(√(log(P) x _∞^2) /n^1/2), where P is the number of parameters. This convergence rate is of great significance in high-dimensional settings, since both the terms log(P) and x _∞^2 scale with the logarithm of input dimension d. Secondly, we establish the convergence of partial derivatives by virtue of the efficiency of the ℓ_1-norm constraint in bounding the norm and the divergence of partial derivatives; see Theorem <ref>. The convergence rate of partial derivatives scales as 𝒪(√(log(P) x _∞^2) /n^1/4), which is slower than the convergence rate of the model by n^-1/4. Finally, numerical experiments are performed to show the performance of the nonparametric estimation. In a word, despite deep neural networks are highly unidentifiable with respect to parameters, our study shows that convergence of nonparametric estimation of partial derivatives can still be achieved under certain conditions. Since nonparametric estimation of partial derivatives is useful in identifying contributing variables, our work on the convergence of partial derivatives is potentially significant for the interpretability of deep neural networks. Notations. We end this section by introducing some useful notations. For a vector x=(x^1,x^2,⋯,x^d)∈^d, define the ℓ_p-norm of x as x_p=(∑_j=1^d|x^i|^p)^1/p for 1≤ p< ∞ with x_∞=max_i=1,2,⋯,d|x^i|. For two vectors x,y∈^d, we use · to denote their inner product, that is, x· y=y^⊤x. For a matrix A∈^d_1× d_2, let A_ij (i=1,…,d_1,j=1,2,⋯,d_2) denote its ij-th entry, A_i· (i=1,…,d_1) denote its i-th row, A_· j (j=1,2,⋯,d_2) denote its j-th column, and A_1,1 denote the ℓ_1-norm along all the entries of A. For two matrices with the same dimension, we use ⊙ to denote the element wise multiplication. For a function f:^d→, ∇ f and Δ f are used respectively to denote the gradient and the divergence, if they exist. For two measurable functions f:𝒳→^d_2 and g:𝒳→^d_2, where the independent variable x is with probability density function p(x) on a bounded sample space 𝒳⊆^d, define the L^2(p(x)) (abbreviated as L^2) space (written as f,g∈ L^2(p(x))), with its inner product and L^2-norm given respectively as ⟨ f,g ⟩ _L^2=∫_𝒳 f(x)· g(x) p(x)dx and f _L^2^2=⟨ f,f ⟩ _L^2, and denote the L^∞-norm as g_L^∞ = sup{g(x): x ∈𝒳} when d_2=1. § PROBLEM SETUP In this section, we provide a detailed background on nonparametric regression problems and sparse deep neural networks used to perform estimation. The nonparametric estimation of partial derivatives with respect to inputs is also introduced with the suitable smooth activation function chosen. 
§.§ Nonparametric regression problems Consider the sparse nonparametric regression problem: y=f_0(x)+ξ where the input prediction variable x∈ℝ^d, the response variable y∈, ξ represents a zero-mean random noise, and f_0 is the true regression function to be estimated. The pair (x,y) obeys the distribution with a joint probability density p(x,y) on a bounded sample space 𝒳×𝒴, and the marginal probability density of x is p_x(x). In order to perform estimation under the high-dimensional scenario, we assume that there are only s sparse variables in x responsible for predicting y. The relationship between x and y is then estimated via the hypothesis class of deep neural networks ℱ_Θ which will be specified later. We here assume that the unknown regression function f_0∈ℱ_Θ, hence the learning model is well-specified and there exists no approximation error. To estimate the unknown f_0 given a finite dataset 𝔻 with n independently and identically distributed samples {(x_i,y_i)}_i=1^n, a simple method is to minimize the empirical risk based on ordinary square loss L̂(f)=1/n∑_i=1^n(y_i-f(x_i))^2 with f belonging to some certain function class. Denote the expected risk as L(f)=𝔼_(x,y)∼ p(x,y)(y-f(x))^2. The performance of an estimated model f̃ is then evaluated via the excess risk L(f̃)-L(f_0), which equals to L(f̂)-L(f_0)=_x∼ p_x(x)(f̂(x)-f_0(x))^2=f̂-f_0^2_L^2. §.§ Sparse deep neural networks A deep neural network f_Θ: ^d → with L layers can be represented as follows: f_Θ(x)=θ_Lσ(θ_L-1⋯σ(θ_1 x)⋯) where θ_l∈^d_l× d_l-1(l=1,⋯,L, d_0=d, d_L=1), Θ={θ_L,θ_L-1,⋯,θ_1}, and σ(·) is the nonlinear activation function, which is assumed to be smooth and 1-Lipschitz and satisfy σ(0)=0. In the high-dimensional setting where the dimension d or the number of parameters in Θ is larger than the number of samples n, the performance of f̂ obtained through minimizing the empirical ordinary least squares risk is usually unsatisfactory and some sparse constraints on parameters are needed. For this purpose, denote ·_1,1 to be the ℓ_1-norm along all the entries of a matrix θ_l, that is, θ_l_1,1 = ∑_k=1^d_l∑_j=1^d_l-1 |(θ_l)_kj|. Then the ℓ_1-norm of the parameter set Θ is defined as Θ_1 = ∑_l=1^Lθ_l_1,1. Denote _r={Θ: Θ_1≤ r}, and the hypothesis class is denoted as ℱ_r={f_Θ:Θ∈_r} by constraining the ℓ_1-norm of the parameter set Θ. Then we propose to estimate the true regression function via minimizing the following objective function with sparse constraints f̂∈_f_Θ∈ℱ_r1/n∑_i=1^n (y_i-f_Θ(x_i))^2. In addition, the condition that Θ_0_1≤ r is required, where Θ_0 is the corresponding parameter set of f_0, in order to ensure the feasibility of f_0. In practice, the parameter r can be chosen via parameter searching. §.§ Nonparametric estimation of partial derivatives We conduct nonparametric estimation of partial derivatives with respect to inputs. For a sample x=(x^1,⋯,x^d)^⊤ and the corresponding output of a deep neural network f(x), the partial derivatives of f(x) with respect to x, i.e., ∇_x f = ( ∂ f(x)/∂ x^1,⋯,∂ f(x)/∂ x^d )^⊤, is adopted to evaluate the importance of x^a to the output. Specifically, the partial derivative characterizes the local variation of f(x) when x^a makes an infinitely small variation, and thus have been used for variable selection <cit.>. In this study, we mainly focus on the convergence of nonparametric estimation of partial derivatives. The convergence of nonparametric estimation of partial derivatives is characterized via ∇_xf̂ - ∇_xf_0_L^2^2. 
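To make these definitions concrete, the following minimal sketch (in Python with NumPy, for illustration only) evaluates a small network of the form given above and computes the partial derivatives ∇_xf_Θ(x) by the chain rule; it uses the shifted softplus activation adopted in the next paragraph, and the layer sizes, the random seed, and the helper names (forward_and_input_grad, l1_norm) are our own choices rather than part of the paper.

import numpy as np

def softplus(z):                 # shifted softplus, so that sigma(0) = 0
    return np.log1p(np.exp(z)) - np.log(2.0)

def softplus_prime(z):           # its derivative, the sigmoid function
    return 1.0 / (1.0 + np.exp(-z))

def forward_and_input_grad(thetas, x):
    """Evaluate f_Theta(x) and its partial derivatives with respect to the input x.

    thetas -- list of weight matrices [theta_1, ..., theta_L] (last layer linear)
    x      -- input vector of dimension d
    """
    h, pre_acts = x, []
    for theta in thetas[:-1]:                 # hidden layers
        z = theta @ h
        pre_acts.append(z)
        h = softplus(z)
    f = (thetas[-1] @ h).item()               # scalar network output

    # Chain rule back to the input:
    # grad = theta_1^T (h_1' * theta_2^T ( ... (h_{L-1}' * theta_L^T)))
    grad = thetas[-1].reshape(-1)
    for theta, z in zip(reversed(thetas[:-1]), reversed(pre_acts)):
        grad = theta.T @ (softplus_prime(z) * grad)
    return f, grad

def l1_norm(thetas):                          # the l1-norm of the whole parameter set Theta
    return float(sum(np.abs(t).sum() for t in thetas))

rng = np.random.default_rng(0)
d = 5                                          # small illustrative dimensions
thetas = [rng.normal(size=(4, d)), rng.normal(size=(4, 4)), rng.normal(size=(1, 4))]
x = rng.normal(size=d)
f_value, input_grad = forward_and_input_grad(thetas, x)
print(f_value, input_grad.shape, l1_norm(thetas))      # gradient has one entry per input variable

In practice the constraint Θ_1 ≤ r would additionally be enforced during training, for instance by projection onto the constraint set or by an ℓ_1 penalty as a surrogate; the sketch only evaluates the model and reports the norm.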
Recall the smoothness requirement on the nonlinear activation function, and the reason is to ensure the estimation of partial derivatives. Therefore, the commonly-used rectified linear unit (relu) activation function is not suitable for nonparametric estimation. Instead, the smoothed version softplus(x)=log(1+exp(x)) is adopted with the intercept term log2 subtracted to ensure σ(0)=0. Finally we obtain the suitable activation function σ(x)=log(1+exp(x))-log2. § MAIN RESULTS In this section, we provide our main results on the convergence of the estimated model and the model derivatives. §.§ Convergence of the model Some definitions and assumptions are needed first to facilitate the analysis. Let ϵ_1,…,ϵ_n be n independent Rademacher random variables that take values of 1 or -1 with probability 1/2. Given a dataset S={x_1,…,x_n} with n independent samples drawn from p_x(x), the empirical Rademacher complexity is defined as ℛ_S(ℱ_r) = _ϵ[sup_f ∈ℱ_r1/n∑_i=1^n ϵ_i f(x_i)]. The Rademacher complexity of hypothesis class ℱ_r is defined as ℛ_n(ℱ_r) =_x[ℛ_S(ℱ_r)]= _x_ϵ[sup_f ∈ℱ_r1/n∑_i=1^n ϵ_i f(x_i)]. The δ-covering number of set 𝒬 with respect to metric ρ is defined as the minimum size of δ-cover 𝒞 of 𝒬, such that for each v ∈𝒬, there exists v' ∈𝒞 satisfying ρ(v,v') ≤δ: 𝒩(δ, 𝒬, ρ) = inf{ |𝒞|: 𝒞 is a δ-cover of 𝒬 with respect to metric ρ}. In order to estimate the covering number, for the function class ℱ_r, we define the sample L^2(P_n)-norm of a function f ∈ℱ_r and the derived metric respectively as f _L^2(P_n) = √(1/n∑_i=1^n (f(x_i))^2) and ρ(f,f') = f-f' _L^2(P_n). For the parameter set _r, define the Frobenius-norm of parameter Θ∈_r and the derived metric respectively as Θ_F = √(∑_l=1^Lθ_l_F^2) = √(∑_l=1^L∑_k=1^d_l∑_j=1^d_l-1 (θ_l)_kj^2) and ρ(Θ,Θ') = Θ-Θ' _F. We shall see in the proof that the covering number of the function space ℱ_r can be bounded by the covering number of the parameter space _r. [Bounded loss function] For any loss function f_Θ∈ℱ_r, f_Θ is assumed to be bounded, such that | f_Θ(x)-y | ≤ b_0 for all (x,y) ∈𝒳×𝒴, where b_0 is an absolute positive constant. [Bounded inputs] The input space 𝒳 is assumed to be bounded such that the ℓ_∞-norm of each x ∈𝒳 is within R, i.e., sup{ x _∞: x ∈𝒳}≤ R. Then the following lemma is an oracle inequality that provides an upper bound of the expected L^2 error of the sparse constraint estimated model f̂. Recall that the dataset 𝔻 stands for n independently and identically distributed samples {(x_i,y_i)}_i=1^n. Let f̂ be the estimated model obtained from (<ref>). Then under Assumption <ref>, it follows that 𝔼_𝔻f̂-f_0_L^2^2≤ 4b_0ℛ_n(ℱ_r). Note from Lemma <ref> that the convergence of the model is determined by the Rademacher complexity. We next bound the Rademacher complexity of the hypothesis class via the covering number. The key is to utilize the following Lipschitz properties of the model function. Assume that the activation function σ(·) is 1-Lipschitz and satisfy σ(0)=0. For each x ∈𝒳 and Θ, Θ' ∈_r, it follows that | f_Θ(x) - f_Θ'(x) | ≤Lip(x) Θ-Θ'_F, f_Θ(x) - f_Θ'(x) _L^2(P_n) ≤Lip_L^2(P_n)(x) Θ-Θ'_F, where Lip(x) = √(L)( r/L-1)^L-1 x _∞, Lip_L^2(P_n)(x) = √(L)( r/L-1)^L-1√(1/n∑_i=1^n x_i_∞^2). Then the Rademacher complexity of ℱ_r is provided by transforming the covering number of the function space to the covering number of the parameter space via the Lipschitz property. Denote the total number of parameters as P. Assume f_Θ∈ℱ_r is Lip(x)-Lipschitz with respect to parameters. 
Then the Rademacher complexity of ℱ_r follows that ℛ_S(ℱ_r) ≤ 24r ( r/L-1)^L-1√(2Llog P/n)(1+log(c_1√(n)) √(1/n∑_i=1^n x_i_∞^2)), ℛ_n(ℱ_r) ≤ 24r ( r/L-1)^L-1√(2Llog P/n)(1+log(c_1√(n)) √( x _∞^2)), where c_1 = R/6rL^3/2√(2log P). Combining Lemmas <ref> and <ref>, it is straightforward to arrive at the model convergence as follows with the proof omitted. Let f̂ be the estimated model obtained from (<ref>). Then under Assumptions <ref> and <ref>, it follows that 𝔼_𝔻f̂-f_0_L^2^2≤ 96b_0r ( r/L-1)^L-1√(2Llog P/n)(1+log(c_1√(n)) √( x _∞^2)), where c_1 = R/6rL^3/2√(2log P). Theorem <ref> provides an upper bound for the estimated model f̂ via sparse deep neural networks and the true nonparametric regression function f_0. Note that the convergence of the model is captured essentially by the Rademacher complexity, and the upper bound of model convergence constitutes three important parts as follows. (i) The pre-factor (r/(L-1))^L-1 may increase exponentially with the number of layers L when the average parameter norm (r/(L-1)) is larger than 1. This exponential dependence on the number of layers is due to the worst case Lipschitz property of deep neural networks and is manifested in results concerning the sample complexities of deep neural networks <cit.>. (ii) The pre-factor √(log P) is due to the covering number of the ℓ_1-norm constrained parameter space. Particularly, assume that all hidden layers have the same number of hidden units h which is much smaller than the input dimension d. Then the total number of parameters P=d*h+(L-2)h^2+h and the pre-factor √(log P) scales linearly with √(log d). This logarithm dependence on the number of parameters or the input dimension is especially important under the high-dimensional scenario. (iii) <cit.> also use the covering number technique to bound the excess risk or L^2 error of ℓ_1-norm constrained sparse deep neural networks. However, the term √(x^2_∞) in Theorem <ref> is better than the term √(∑_i=1^nx_i^2_2/n) or √(x^2_2) in <cit.> when the input dimension d is high, since the term √(x^2_∞) scales with √(log d) but the term √(x^2_2) scales with √(d). §.§ Convergence of partial derivatives Convergence of model is prerequisite but not sufficient for the convergence of partial derivatives. In the following, we introduce several lemmas in preparation for establishing the convergence of partial derivatives. Assume the softplus activation function σ(·) is adopted. For any x ∈𝒳 and Θ∈_r, it follows that ∇_xf_Θ(x) _1≤ (r/L)^L. Assume the softplus activation function σ(·) is adopted. For any x ∈𝒳 and Θ∈_r, it follows that | Δ_xf_Θ(x) | ≤L/4(r/L)^Lmax_k∈{2,⋯,L-1}( r/k)^k. (i) It is well known in real analysis that function convergence does not guarantee derivative convergence. For example, let f_n(x)=n^-ksin(nx) and f_0(x)=0, k ∈ (0,1), then f_n converges to f_0 as n →∞, but ∇_xf_n does not converges to ∇_xf_0. This pathological example is due to the fact that the first or second derivative is not bounded. Therefore, as we will show in subsequent results, the requirements on bounded norm of partial derivatives (cf. Lemma <ref>) and bounded divergence of partial derivatives (cf. Lemma <ref>) are crucial for the convergence of partial derivatives. (ii) In the proof of Lemma <ref>, we use a key fact of the softplus activation function that the second derivatives of hidden layers are well bounded, that is, h”_k = σ”(⋯) ∈ (0,1/4). 
The bounded second derivative does not hold for the relu activation function, since relu(·) is not smooth and the second derivative can be infinite. Therefore, the proof of Lemma <ref> implies that relu is not suitable for derivative estimation and adopting a smooth activation function such as softplus is necessary. [Boundary condition] The true regression function f_0 and the estimated function f̂ are both assumed to have zero normal derivative on the boundary, that is, ∇_xf_0·n⃗=0 and ∇_xf̂·n⃗=0, where n⃗ is the unit normal to the boundary. Since it is not possible to estimate the partial derivatives on the boundary ∂𝒳 out of where there are no samples, we need the boundary assumption. [Bounded derivatives of probability density] The L^∞-norm of the logarithm of probability density is assumed to be bounded, that is, sup{ |∇_x_ilog p_x(x)|: x∈𝒳, ∀ i∈{1,⋯,d}}≤ b_1, where b_1 is an absolute positive constant. It is worth noting that Assumption <ref> is reasonable. For instance, assume that x obeys a truncated normal distribution such that 𝒳 is bounded. When all the elements of x are independent, it holds that |∇_x_ilog p_x(x)|=|x_i-μ_i|/σ_i^2, and thus sup |∇_x_ilog p_x(x)| is bounded provided that x ∈𝒳 is bounded (i.e., Assumption <ref> is satisfied). Let f,g ∈ L^2(p_x(x)) and ∇_xf,∇_xg ∈ L^2(p_x(x)) with ∇_xf·n⃗=0 on the boundary ∂𝒳, where n⃗ is the unit normal to the boundary. Then it follows that -⟨∇_xf,∇_xg ⟩_L^2 = ⟨Δ_xf+∇_xf ·∇_xlog p_x(x), g ⟩_L^2. With these Lemmas, we can now establish the other main result on the convergence of model partial derivatives with respect to inputs. Let f̂ be the estimated model obtained from (<ref>). Then under Assumption <ref>, <ref>, <ref> and <ref>, it follows that _∇_xf̂ - ∇_xf_0_L^2^2≤1/n^1/4(r/L)^2L(2+ L^2/8max_k∈{2,⋯,L-1}( r/k)^2k) + 48(1+b_1)b_0r ( r/L-1)^L-1√(2Llog P)/n^1/4(1+log(c_1√(n)) √( x _∞^2)), where c_1 = R/6rL^3/2√(2log P). The convergence rate of partial derivatives constitutes of two parts. The first part is due to the bounded norm of the gradient and the bounded divergence of the gradient field, both of which are established in Lemma <ref> and Lemma <ref>, respectively. The second part comes from the convergence of model that is established in Theorem <ref>. In a word, Theorem <ref> shows that in order to guarantee the convergence of partial derivatives, it is not sufficient to rely on the model convergence alone, but the boundedness of the gradient and the divergence of the gradient field are also required. This result coincides with the discussion in Remark <ref>. In addition, it should be noted that the convergence rate of partial derivatives is slower than the convergence rate of model by n^1/4. § EXPERIMENTS In this section, we perform experiments to illustrate the theoretical results. We demonstrate that ℓ_1-norm constrained sparse deep neural networks is efficient in nonparametric estimation under high-dimensional scenarios, and the adoption of smooth activation functions is crucial for nonparametric estimation. In the well-specified case, the true regression function is assumed to be a neural network f_0(x)=θ_Lσ(θ_L-1⋯σ(θ_1 x)⋯) and the data are generated via y_i=f_0(x_i)+ξ_i. Each layer's parameters of f_0 are generated from a normal distribution with standard deviation being √(2/h), where h represents the number of input features of each layer. We set the number of hidden units of each layer as 10. 
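As a purely illustrative rendering of this data-generating process, the sketch below (Python with NumPy) builds such a true network and draws a training sample; it anticipates the sampling details spelled out in the following sentences (input dimension d=100, s=5 relevant variables, truncated-normal inputs and noise), while the random seed, the use of the shifted softplus for the true network, the three-layer truth, and the helper names are assumptions of the sketch rather than statements of the paper.

import numpy as np

rng = np.random.default_rng(0)            # seed chosen only for reproducibility
d, s, h = 100, 5, 10                      # input dimension, relevant variables, hidden units

def softplus(z):                          # activation assumed here for the true network
    return np.log1p(np.exp(z)) - np.log(2.0)

def truncated_normal(size, sd, trunc=10.0):
    """N(0, sd^2) truncated at +/- trunc*sd (resampling, i.e. density rescaled inside)."""
    out = rng.normal(0.0, sd, size=size)
    bad = np.abs(out) > trunc * sd
    while bad.any():
        out[bad] = rng.normal(0.0, sd, size=bad.sum())
        bad = np.abs(out) > trunc * sd
    return out

# True network f_0: layer weights ~ N(0, 2/fan_in); only the first s columns of
# theta_1 are nonzero, so only s of the d input variables are relevant.
theta1 = np.zeros((h, d))
theta1[:, :s] = rng.normal(0.0, np.sqrt(2.0 / d), size=(h, s))
theta2 = rng.normal(0.0, np.sqrt(2.0 / h), size=(h, h))
theta3 = rng.normal(0.0, np.sqrt(2.0 / h), size=(1, h))

def f0(x):
    return (theta3 @ softplus(theta2 @ softplus(theta1 @ x))).item()

n_train = 100                             # training size varies from 50 to 100 in the paper
X = truncated_normal((n_train, d), sd=1.0)
y = np.array([f0(x) for x in X]) + truncated_normal(n_train, sd=0.1)
# (a test set of 10000 points would be drawn in the same way)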
The input prediction variable x and random noise ξ are generated from truncated normal distributions with standard deviation 1 and 0.1 respectively. We truncate a normal distribution at a large absolute value (10 times the standard deviation) and uniformly scaling the density values inside the bounded interval. For the high-dimensional input prediction variable x ∈^d, we assume only s sparse variables are responsible for the regression function f_0(x). We set d=100 and s=5 across all experiments. We just keep the first layer's parameters θ_1 to be nonzeros corresponding to the s relevant sparse variables and set the other parameters corresponding to irrelevant variables as zeros. The number of training samples varies from 50 to 100, and the number of testing samples is set as 10000. We report the L^2 error of testing samples for model prediction and derivative estimation. We use a large testing sample size to approximate the expected L^2 error. The training process is repeated with random training samples for 100 times and results are reported via averaging. We compare the results of two nonlinear activation function, i.e., the relu(x)=max{0,x} and the smooth softplus(x)=log(1+exp(x))-log2. For the 2-layer and 3-layer neural networks, results on L^2 errors of model prediction and derivative estimation are displayed in Fig. <ref>. As we can see from Fig. <ref>, both the prediction error and the derivative estimation error decrease as the sample size increases. However, the difference between the prediction error and the derivative estimation error for the softplus function is much smaller than that of the relu function, a phenomenon which indicates the significance of the smooth function in nonparametric estimation of the model derivatives. § CONCLUSION In this study, we demonstrate that despite the unidentifiability of deep neural networks in parameter estimation, one can still achieve convergence of nonparametric estimation of partial derivatives with respect to inputs for sparse deep neural networks. The established convergence of nonparametric estimation is of potential significance for the interpretability of deep neural networks. There are also some future works to be considered. First, the theoretical results are based on the bounded domain and the boundary assumptions. It would be interesting to weaken these assumptions to build convergence results. Second, the theoretically established convergence rate of partial derivatives is slower than that of model by n^-1/4. It is natural to ask under what conditions can faster convergence rate of partial derivatives be guaranteed. Thirdly, one can also utilize the established convergence of partial derivatives to achieve the consistency of nonparametric variable selection. § PROOF OF SECTION <REF> It follows that the excess risk equals to the L^2 error L(f̂)-L(f_0) = _(x,y) ∼ p(x,y)(f̂(x)-y)^2 - _(x,y) ∼ p(x,y)(f_0(x)-y)^2 =_(x,y) ∼ p(x,y)(f̂(x)-f_0(x)-ξ)^2 - _(x,y) ∼ p(x,y)(f_0(x)-f_0(x)-ξ)^2 =_x ∼ p_x(x)(f̂(x)-f_0(x))^2 - _(x,y) ∼ p(x,y)[(f̂(x)-f_0(x))ξ] =_x ∼ p_x(x)(f̂(x)-f_0(x))^2 =f̂ - f_0_L^2^2. Decompose the L^2 error as follows: f̂ - f_0_L^2^2 = L(f̂)-L(f_0) =[L(f̂) - L̂(f̂)] + [L̂(f̂)-L̂(f_0)] + [L̂(f_0)-L(f_0)]. Since Θ_0_1≤ r, it follows from the global optimality of f̂ that L̂(f̂) ≤L̂(f_0). Combining (<ref>) and (<ref>) yields that f̂ - f_0_L^2^2≤ [L(f̂) - L̂(f̂)] + [L̂(f_0)-L(f_0)]. Taking expectations with respect to the dataset and noting the fact that _L̂(f_0)=L(f_0), we obtain that _f̂ - f_0_L^2^2≤_[L(f̂) - L̂(f̂)]. 
Set ℒ_r={(x,y) →ℓ(f(x),y):f ∈ℱ_r} to be the family of loss functions. The term _[L(f̂) - L̂(f̂)] is bounded via Rademacher complexity as follows _[L(f̂) - L̂(f̂)] ≤_[sup_f ∈ℱ_r(L(f) - L̂(f))] = _[sup_ℓ∈ℒ_r((ℓ) -1/n∑_i=1^nℓ(f(x_i),y_i))] = _[sup_ℓ∈ℒ_r(_'[1/n∑_i=1^nℓ(f(x_i'),y_i') -1/n∑_i=1^nℓ(f(x_i),y_i)])] ≤__'[ sup_ℓ∈ℒ_r( 1/n∑_i=1^nℓ(f(x_i'),y_i') -1/n∑_i=1^nℓ(f(x_i),y_i))] = __'_ϵ[ sup_ℓ∈ℒ_r( 1/n∑_i=1^nϵ_iℓ(f(x_i'),y_i') -1/n∑_i=1^nϵ_iℓ(f(x_i),y_i))] ≤__'_ϵ[ sup_ℓ∈ℒ_r( 1/n∑_i=1^nϵ_iℓ(f(x_i'),y_i') ) + sup_ℓ∈ℒ_r(1/n∑_i=1^n -ϵ_iℓ(f(x_i),y_i))] = 2__ϵ[ sup_ℓ∈ℒ_r( 1/n∑_i=1^nϵ_iℓ(f(x_i),y_i) ) ] =2ℛ_n(ℒ_r). Since the loss function ℓ(f(x),y)=((f(x)-y))^2 is 2b_0-Lipschitz by Assumption <ref>, we have by the contraction inequality of Rademacher complexity of Lipschitz loss functions that _f̂ - f_0_L^2^2≤_[L(f̂) - L̂(f̂)] ≤ 2ℛ_n(ℒ_r) ≤ 4b_0ℛ_n(ℱ_r). The proof is complete. Denote ∇_θf_Θ(x) = ( ∂ f/∂θ_1|_vec^⊤, ⋯, ∂ f/∂θ_L|_vec^⊤)^⊤, where for i=1,2,⋯,L, ∂ f/∂θ_i|_vec is the vectorized form of the matrix ∂ f/∂θ_i. Since the Lipschitz constant equals to sup{∇_θf_Θ(x) _2: Θ∈_r}, it suffice to analyze the ℓ_2-norm of ∇_θf_Θ(x) as follows ∇_θf_Θ(x) _2 = √(∑_l=1^L∂ f/∂θ_l_F^2). For a deep neural network f_Θ(x)=θ_Lσ(θ_L-1⋯σ(θ_1 x)⋯), denote the activations of the l-th hidden layer as h_l=σ(θ_l⋯σ(θ_1 x)⋯) with h_0(x)=x. Then the derivatives of the l-th hidden layer is h'_l=σ'(θ_l⋯σ(θ_1 x)⋯), where σ'(·) is the derivative of the activation function. According to the chain rule, the gradients with respect to parameters equal to ∂ f/∂θ_l =(h'_l⊙θ_l+1^⊤⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤) h_l-1^⊤. When the activation function is the softplus function σ(x)=log(1+exp(x))-log2, the derivative σ'(·) is the sigmoid function σ'(x)=exp(x)/1+exp(x), thus σ'(·) ∈ (0,1) and h'_l∈ (0,1). Recall the ·_1,1 matrix norm in (<ref>), and the Frobenius-norm of gradients is bounded layer by layer as: ∂ f/∂θ_l_F = √(trace(∂ f/∂θ_l∂ f/∂θ_l^⊤)) = h'_l⊙θ_l+1^⊤⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤_2 h_l-1_2 ≤θ_l+1^⊤ h'_l+1⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤_2 h_l-1_2 (∵ h'_l∈ (0,1)) ≤θ_l+1_1,1 h'_l+1⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤_∞ h_l-1_2 ≤θ_l+1_1,1 h'_l+1⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤_2 h_l-1_2 (∵·_∞≤·_2 ) ≤ h_l-1_2∏_k=l+1^L-1θ_k_1,1h'_L-1⊙θ_L^⊤_∞ ≤ h_l-1_2∏_k=l+1^L-1θ_k_1,1θ_L_∞≤ h_l-1_2∏_k=l+1^L-1θ_k_1,1θ_L_1 = h_l-1_2∏_k=l+1^Lθ_k_1,1, where the second inequality follows from Hölder's inequality that Ax_2 = √(∑_i⟨ A_i·,x ⟩ ^2)≤√(∑_iA_i·_1^2x _∞^2)≤√( (∑_iA_i·_1)^2x _∞^2) = A_1,1x _∞. Furthermore, the term h_l-1_2 is bounded layer by layer as: h_l-1_2 = σ(θ_l-1⋯σ(θ_1 x)⋯) _2 ≤θ_l-1σ(θ_l-2⋯σ(θ_1 x)⋯) _2 ≤θ_l-1_1,1σ(θ_l-2⋯σ(θ_1 x)⋯) _∞ ≤θ_l-1_1,1σ(θ_l-2⋯σ(θ_1 x)⋯) _2 ≤ x _∞∏_k=1^l-1θ_k_1,1, where the first inequality is due to the 1-Lipschitz property of σ(·) and fact that σ(0)=0, and the last inequality is by iteration. Combining (<ref>) and (<ref>) yields that ∂ f/∂θ_l_F≤ x _∞∏_k=1,k ≠ l^Lθ_k_1,1. Combing the Frobenius-norm of gradients of all layers, we obtain that ∇_θf_Θ(x) _2 = √(∑_l=1^L∂ f/∂θ_l_F^2)≤ x _∞√(∑_l=1^L∏_k=1,k ≠ l^Lθ_k_1,1^2) ≤ x _∞√( L max_l ∈{1,…,L }∏_k=1,k ≠ l^Lθ_k_1,1^2) = √(L) x _∞max_l ∈{1,…,L }∏_k=1,k ≠ l^Lθ_k_1,1 ≤√(L) x _∞max_l ∈{1,…,L }( 1/L-1∑_k=1,k ≠ l^Lθ_k_1,1)^L-1 ≤√(L) x _∞( 1/L-1∑_k=1^Lθ_k_1,1)^L-1 ≤√(L)( r/L-1)^L-1 x _∞. Furthermore, we can bound Lip_L^2(P_n)(x) as: f_Θ(x) - f_Θ'(x) _L^2(P_n) = √(1/n∑_i=1^n(f_Θ(x_i) - f_Θ'(x_i) )^2) ≤√(1/n∑_i=1^n(Lip_θ,f(x_i) Θ-Θ'_F)^2) ≤√(1/n∑_i=1^n( √(L)( r/L-1)^L-1 x_i_∞Θ-Θ'_F)^2) = √(L)( r/L-1)^L-1√(1/n∑_i=1^n x_i_∞^2) Θ-Θ'_F. The proof is complete. 
The empirical Rademacher complexity is bounded via Dudley's integral as follows ℛ_S(ℱ_r) ≤ 4α + 12∫_α^∞√(log𝒩(δ, ℱ_r, L^2(P_n) /n)dδ. Now it remains to control the covering number of the function space 𝒩(δ, ℱ_r, L^2(P_n) ). Actually, the covering number of the function space 𝒩(δ, ℱ_r, L^2(P_n) ) can be bounded by the covering number of the parameter space 𝒩(δ_θ, _r, ·_F ) via the Lipschitz property with respect to parameters. Specifically, let 𝒞__r be a δ_θ-cover of _r, such that for each Θ∈_r, there exists Θ' ∈𝒞__r satisfying Θ' - Θ_F≤δ_θ. According to Lemma <ref>, one has that f_Θ(x) - f_Θ'(x) _L^2(P_n)≤Lip_L^2(P_n)(x) Θ-Θ'_F≤δ_θLip_L^2(P_n)(x). Thus the function set 𝒞_ℱ_r={f_Θ':Θ' ∈𝒞__r} is a δ-cover of ℱ_r with δ=δ_θLip_L^2(P_n)(x) and it holds that 𝒩(δ, ℱ_r, L^2(P_n) ) ≤𝒩(δ / Lip_L^2(P_n)(x), _r, ·_F ). On the other hand, for the parameter set Θ∈_r, let Θ_vec denote the vector formed by the vectorized form of each parameter matrix arranging one by one. Denote the total number of parameters as P. Since Θ_vec_1 = Θ_1≤ r, the transformed parameter set is 𝔹_1,r^P = {θ∈ℝ^P: θ_1≤ r} and 𝒩(δ_θ, _r, ·_F ) = 𝒩(δ_θ, 𝔹_1,r^P, ·_2 ). Let g ∼ N(0,𝕀_P) be a P-dimensional normal random vevtor. It follows from Sudakov's minoration inequality that δ_θ√(log𝒩(δ_θ, 𝔹_1,r^P, ·_2 )) ≤ 2𝔼sup_θ∈𝔹_1,r^P⟨θ,g ⟩≤ 2𝔼θ_1g_∞ ≤ 2r𝔼g_∞≤ 2r √(2log P). Noting from (<ref>), we still need to bound the superior absolute value of f_Θ∈ℱ_r before calculating the Dudley's integral. According to (<ref>) and Assumption <ref>, sup_x ∈𝒳 | f_Θ(x) | ≤sup_x ∈𝒳 x _∞∏_k=1^Lθ_k_1,1≤ R(1/L∑_k=1^Lθ_k_1,1)^L≤ R(r/L)^L. Then the Dudley's integral (<ref>) is bounded as follows ℛ_S(ℱ_r) ≤ 4α + 12∫_α^sup| f_Θ(x) |√(log𝒩(δ, ℱ_r, L^2(P_n) /n)dδ ≤ 4α + 12∫_α^sup| f_Θ(x) |√(log𝒩(δ / Lip_L^2(P_n)(x), 𝔹_1,r^P, ·_2 ) /n)dδ ≤ 4α + 24rLip_L^2(P_n)(x) √(2log P/n)∫_α^sup| f_Θ(x) |1/δdδ ≤ 4α + 24rLip_L^2(P_n)(x) √(2log P/n)log(sup| f_Θ(x) |/α). Substituting Lip_L^2(P_n)(x) in Lemma <ref> (cf. (<ref>)) into (<ref>), one sees that the right-hand side of (<ref>) is minimized with respect to α by setting α = 6r √(L)( r/L-1)^L-1√(1/n∑_i=1^n x_i_∞^2)√(2log P/n), For simplicity, we ignore the data dependent term by setting α = 6r √(L)( r/L-1)^L-1√(2log P/n). Then combining (<ref>) and (<ref>), we finally obtain that ℛ_S(ℱ_r) ≤ 24r ( r/L-1)^L-1√(2Llog P/n)(1+log(c_1√(n)) √(1/n∑_i=1^n x_i_∞^2)), where c_1 = R/6rL^3/2√(2log P). In addition, the expectation of ℛ_S(ℱ_r) is bounded as ℛ_n(ℱ_r) = [ℛ_S(ℱ_r)] ≤ 24r ( r/L-1)^L-1√(2Llog P/n)(1+log(c_1√(n)) √( x _∞^2)). The proof is complete. § PROOF OF SECTION <REF> It follows from the chain rule that the partial derivatives with respect to inputs equal to ∇_xf_Θ(x) = θ_1^⊤ h'_1⊙θ_2^⊤⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤. Then the quantity ∇_xf_Θ(x) _1 is bounded layer by layer as follows ∇_xf_Θ(x) _1 = θ_1^⊤ h'_1⊙θ_2^⊤⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤_1 ≤θ_1_1,1 h'_1⊙θ_2^⊤⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤_∞ ≤θ_1_1,1 h'_1⊙θ_2^⊤⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤_1 ≤θ_1_1,1θ_2^⊤ h'_2⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤_1 ≤∏_k=1^Lθ_k_1,1≤(1/L∑_k=1^Lθ_k_1,1)^L ≤(r/L)^L, where the first inequality follows from the fact that Ax_1 = ∑_i | ⟨ A_i·,x ⟩ | ≤∑_iA_i·_1x _∞ =A_1,1x _∞. For a deep neural network f_Θ(x)=θ_Lσ(θ_L-1⋯σ(θ_1 x)⋯), denote the activations of the l-th hidden layer as h_l=σ(θ_l⋯σ(θ_1 x)⋯) with h_0(x)=x. Then the derivatives of the l-th hidden layer is h'_l=σ'(θ_l⋯σ(θ_1 x)⋯), where σ'(·) is the derivative of the activation function. It follows that ∇_xf_Θ(x) = θ_1^⊤ h'_1⊙θ_2^⊤⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤. Recall that θ_1_· i denote the i-th column of θ_1. 
Then it follows that Δ_xf_Θ(x) =∑_i=1^d∇_x_i∇_x_if_Θ(x), ∇_x_if_Θ(x) = θ_1· i^⊤ h'_1⊙θ_2^⊤⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤. According to the chain rule, we obtain that ∇_x_i∇_x_if_Θ(x) = ∑_k=1^L-1⟨∂∇_x_if_Θ(x)/∂ h'_k, ∂ h'_k/∂ x_i⟩, and ∂∇_x_if_Θ(x)/∂ h'_k = (θ_1· i⊙ h'_1θ_2⋯ h'_k-1θ_k) ⊙( θ_k+1^⊤ h'_k+1⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤). Denote h”_k=σ”(θ_k⋯σ(θ_1 x)⋯) and assume the k-th layer has h hidden units, and we use θ_kj· to denote the j-th row of θ_k, ∂ h'_k/∂ x_i = ( (∇_x_ih'_k)_1, ⋯ ,(∇_x_ih'_k)_h )^⊤, (∇_x_ih'_k)_j = h”_kjθ_kj·^⊤⊙ h'_k-1θ_k-1^⊤⋯⊙ h'_1θ_1· i^⊤. Then the divergence of partial derivatives is bounded as follows | Δ_xf_Θ(x) | = |∑_i=1^d∑_k=1^L-1⟨∂∇_x_if_Θ(x)/∂ h'_k, ∂ h'_k/∂ x_i⟩| ≤∑_i=1^d∑_k=1^L-1∂∇_x_if_Θ(x)/∂ h'_k_2∂ h'_k/∂ x_i_2. By the fact that a ⊙ b_2 =√(∑_ia_i^2b_i^2)≤√((∑_ia_i^2)(∑_ib_i^2)) = a_2b_2 for two vectors a,b with the same dimension, we have that ∂∇_x_if_Θ(x)/∂ h'_k_2≤θ_1· i⊙ h'_1θ_2⋯ h'_k-1θ_k_2θ_k+1^⊤ h'_k+1⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤_2. Meanwhile, the right-hand side of the former inequality is bounded by θ_1· i⊙ h'_1θ_2⋯ h'_k-1θ_k_2 ≤θ_1· i⊙ h'_1θ_2⋯θ_k-1⊙ h'_k-1_∞θ_k_1,1 ≤θ_1· i⊙ h'_1θ_2⋯θ_k-1_∞θ_k_1,1 ≤θ_1· i⊙ h'_1θ_2⋯θ_k-1_2θ_k_1,1 ≤θ_1· i_∞∏_q=2^kθ_q_1,1≤θ_1· i_1∏_q=2^kθ_q_1,1, and θ_k+1^⊤ h'_k+1⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤_2 ≤θ_k+1_1,1 h'_k+1⊙θ_k+2^⊤⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤_∞ ≤θ_k+1_1,1θ_k+2^⊤⋯θ_L-1^⊤ h'_L-1⊙θ_L^⊤_2 ≤∏_q=k+1^Lθ_q_1,1. Then it follows that ∂∇_x_if_Θ(x)/∂ h'_k_2≤θ_1· i_1∏_q=2^kθ_q_1,1∏_q=k+1^Lθ_q_1,1 = θ_1· i_1∏_q=2^Lθ_q_1,1. Since we adopt softplus activation function σ(·), σ'(·) is sigmoid function and σ”(·) = σ'(·)(1-σ'(·)), thus σ”(·) ∈ (0,1/4). Denote diag(h'_k) to be a matrix with diagonal elements equaling to h'_k. Then the quantity (∇_x_ih'_k)_j is bounded as follows |(∇_x_ih'_k)_j| = | h”_kjθ_kj·^⊤⊙ h'_k-1θ_k-1^⊤⋯⊙ h'_1θ_1· i^⊤ | = | h”_kjθ_kj·^⊤diag(h'_k-1) θ_k-1^⊤⋯diag(h'_1) θ_1· i^⊤ | ≤1/4θ_kj·^⊤diag(h'_k-1) _1θ_k-1^⊤diag(h'_k-2) ⋯diag(h'_1) θ_1· i^⊤_∞ ≤1/4θ_kj·_1θ_k-1^⊤diag(h'_k-2) ⋯diag(h'_1) θ_1· i^⊤_2 ≤1/4θ_kj·_1θ_k-1^⊤diag(h'_k-2)_1,1θ_k-2^⊤diag(h'_k-3) ⋯θ_1· i^⊤_∞ ≤1/4θ_kj·_1θ_k-1_1,1θ_k-2^⊤diag(h'_k-3) ⋯θ_1· i^⊤_2 ≤1/4θ_kj·_1∏_q=2^k-1θ_q_1,1θ_1· i^⊤_∞≤1/4θ_kj·_1θ_1· i_1∏_q=2^k-1θ_q_1,1 . Then it follows that ∂ h'_k/∂ x_i_2 = √(∑_j=1^h(∇_x_ih'_k)_j^2)≤√(∑_j=1^h1/16θ_kj·_1^2θ_1· i_1^2∏_q=2^k-1θ_q_1,1^2) ≤√((∑_j=1^hθ_kj·_1)^21/16θ_1· i_1^2∏_q=2^k-1θ_q_1,1^2) =1/4θ_1· i_1∏_q=2^kθ_q_1,1. Combing (<ref>),(<ref>) and (<ref>), we finally arrive at that | Δ_xf_Θ(x) | ≤∑_i=1^d∑_k=1^L-1 (θ_1· i_1∏_q=2^Lθ_q_1,1) ( 1/4θ_1· i_1∏_q=2^kθ_q_1,1) =1/4∏_q=2^Lθ_q_1,1(∑_i=1^dθ_1· i_1^2) (∑_k=1^L-1∏_q=2^kθ_q_1,1) ≤1/4∏_q=2^Lθ_q_1,1(∑_i=1^dθ_1· i_1)^2(∑_k=1^L-1∏_q=2^kθ_q_1,1) = 1/4∏_q=2^Lθ_q_1,1 (θ_1_1,1^2) (∑_k=2^L-1∏_q=2^kθ_q_1,1) = 1/4∏_q=1^Lθ_q_1,1(∑_k=2^L-1∏_q=1^kθ_q_1,1) ≤1/4(r/L)^L((L-2)max_k∈{2,⋯,L-1}∏_q=1^kθ_q_1,1) ≤L/4(r/L)^Lmax_k∈{2,⋯,L-1}( r/k)^k. The proof is complete. The proof mainly follows <cit.>, and we provide it here for the completeness of this article. By the definition of the inner product, one has that -⟨∇_xf,∇_xg ⟩_L^2 = - ∫_𝒳(∇_xf ·∇_xg)p_x(x)dx =- ∫_𝒳∇_x· ((∇_xf )gp_x(x))dx + ∫_𝒳∇_x· ((∇_xf )p_x(x))gdx (via integral by part) =- ∫_∂𝒳 (∇_xf ·n⃗) gp_x(x)ds + ∫_𝒳∇_x· ((∇_xf )p_x(x))gdx (via divergence theorem) =0+∫_𝒳 (∇_x·∇_xf) g p_x(x)dx+∫_𝒳 (∇_xf ·∇_xp_x(x)) g dx (∵∇_xf·n⃗=0) =∫_𝒳 (Δ_xf) g p_x(x)dx+∫_𝒳 (∇_xf ·∇_xlog p_x(x)) g p_x(x)dx = ⟨Δ_xf+∇_xf ·∇_xlog p_x(x), g ⟩_L^2. 
By the linearity of the inner product, we have that ∇_xf̂ - ∇_xf_0_L^2^2 = ⟨∇_xf̂ - ∇_xf_0, ∇_xf̂ - ∇_xf_0⟩ _L^2 =⟨∇_xf̂, ∇_xf̂ - ∇_xf_0⟩ _L^2 -⟨∇_xf_0, ∇_xf̂ - ∇_xf_0⟩ _L^2. It follows from Assumption <ref> and Lemma <ref> that ⟨∇_xf̂, ∇_xf̂ - ∇_xf_0⟩ _L^2 = - ⟨Δ_xf̂+∇_xf̂·∇_xlog p_x(x), f̂-f_0⟩_L^2, -⟨∇_xf_0, ∇_xf̂ - ∇_xf_0⟩ _L^2 = ⟨Δ_xf_0+∇_xf_0·∇_xlog p_x(x), f̂-f_0⟩_L^2. Combining the former two equalities with (<ref>) yields that ∇_xf̂ - ∇_xf_0_L^2^2 = ⟨Δ_xf_0 - Δ_xf̂, f̂-f_0⟩_L^2 + ⟨ (∇_xf_0-∇_xf̂) ·∇_xlog p_x(x), f̂-f_0⟩_L^2. Set λ = 1/ √(n). Then the first term in the right-hand side of (<ref>) is bounded as ⟨Δ_xf_0 - Δ_xf̂, f̂-f_0⟩_L^2 ≤Δ_xf_0 - Δ_xf̂_L^2f̂-f_0_L^2 ≤√(λ)/2Δ_xf_0 - Δ_xf̂_L^2^2 + 1/2√(λ)f̂-f_0_L^2^2 ≤√(λ)/2 (Δ_xf_0_L^2 + Δ_xf̂_L^2)^2 + 1/2√(λ)f̂-f_0_L^2^2 ≤√(λ) (Δ_xf_0_L^2^2 + Δ_xf̂_L^2^2) + 1/2√(λ)f̂-f_0_L^2^2. On the other hand, using Lemma <ref> to bound the divergence of partial derivatives, we have that for Θ∈_r, Δ_xf_Θ_L^2^2 = ∫_𝒳 | Δ_xf_Θ(x) |^2 p_x(x) dx ≤∫_𝒳L^2/16(r/L)^2Lmax_k∈{2,⋯,L-1}( r/k)^2k p_x(x) dx =L^2/16(r/L)^2Lmax_k∈{2,⋯,L-1}( r/k)^2k. Combining the former two inequalities, we obtain that ⟨Δ_xf_0 - Δ_xf̂, f̂-f_0⟩_L^2≤√(λ)L^2/8(r/L)^2Lmax_k∈{2,⋯,L-1}( r/k)^2k + 1/2√(λ)f̂-f_0_L^2^2. For the second term in the right-hand side of (<ref>), one has that ⟨ (∇_xf_0-∇_xf̂) ·∇_xlog p_x(x), f̂-f_0⟩_L^2 = ∫_𝒳( (∇_xf_0(x)-∇_xf̂(x)) ·∇_xlog p_x(x) ) (f̂(x)-f_0(x)) p_x(x)dx ≤∫_𝒳∇_xf_0(x)-∇_xf̂(x) _1∇_xlog p_x(x)_∞ (f̂(x)-f_0(x)) p_x(x)dx =⟨∇_xf_0(x)-∇_xf̂(x) _1, ∇_xlog p_x(x)_∞ (f̂(x)-f_0(x)) ⟩_L^2 ≤∇_xf_0(x)-∇_xf̂(x) _1_L^2∇_xlog p_x(x)_∞ (f̂(x)-f_0(x)) _L^2 ≤√(λ)/2∇_xf_0(x)-∇_xf̂(x) _1_L^2^2 + 1/2√(λ)∇_xlog p_x(x)_∞ (f̂(x)-f_0(x)) _L^2^2. Using Lemma <ref> to bound the ℓ_1-norm of partial derivatives, one sees that the first term in the right-hand side of (<ref>) can be bounded as ∇_xf_0(x)-∇_xf̂(x) _1_L^2^2 ≤∇_xf_0(x)_1 + ∇_xf̂(x) _1_L^2^2 = ∫_𝒳 (∇_xf_0(x)_1 + ∇_xf̂(x) _1)^2 p_x(x)dx ≤∫_𝒳 (2(r/L)^L)^2 p_x(x)dx =4(r/L)^2L. For the second term in the right-hand side of (<ref>), recalling the L^∞-norm given by g_L^∞ = sup{g(x): x ∈𝒳}, we obtain that ∇_xlog p_x(x)_∞ (f̂(x)-f_0(x)) _L^2^2 =∫_𝒳∇_xlog p_x(x)_∞^2 (f̂(x)-f_0(x))^2 p_x(x)dx ≤∇_xlog p_x(x)_∞^2_L^∞∫_𝒳 (f̂(x)-f_0(x))^2 p_x(x)dx = ∇_xlog p_x(x)_∞^2_L^∞f̂-f_0_L^2^2 ≤ b_1^2f̂-f_0_L^2^2. where the last inequality is due to Assumption <ref> that bounds the superior value of derivatives of the probability density. Combining (<ref>),(<ref>) and (<ref>), it follows that ⟨ (∇_xf_0-∇_xf̂) ·∇_xlog p_x(x), f̂-f_0⟩_L^2≤ 2√(λ)(r/L)^2L + b_1^2/2√(λ)f̂-f_0_L^2^2. Combining (<ref>),(<ref>) and (<ref>), we have that ∇_xf̂ - ∇_xf_0_L^2^2 ≤√(λ)(r/L)^2L(2+ L^2/8max_k∈{2,⋯,L-1}( r/k)^2k) + 1+b_1^2/2√(λ)f̂-f_0_L^2^2. Taking expectations with respect to the dataset and plugging λ = 1/ √(n) into the former inequality, we finally obtain by Theorem <ref> that _∇_xf̂ - ∇_xf_0_L^2^2≤1/n^1/4(r/L)^2L(2+ L^2/8max_k∈{2,⋯,L-1}( r/k)^2k) + 48(1+b_1^2)b_0r ( r/L-1)^L-1√(2Llog P)/n^1/4(1+log(c_1√(n)) √( x _∞^2)). The proof is complete. [Acknowledgments] The authors would like to thank the anonymous referees, an Associate Editor and the Editor for their constructive comments that improved the quality of this paper. The first author was supported by the Natural Science Foundation of China (Grant No. 62103329). The second author was supported by the Natural Science Foundation of China (Grant No. 12201496). imsart-nameyear.bst
http://arxiv.org/abs/2406.18299v1
20240626123209
On the Descriptive Complexity of Vertex Deletion Problems
[ "Max Bannach", "Florian Chudigiewitsch", "Till Tantau" ]
cs.LO
[ "cs.LO", "cs.CC" ]
Descriptive Complexity of Vertex Deletion Problems
M. Bannach and F. Chudigiewitsch and T. Tantau
European Space Agency, Advanced Concepts Team, Noordwijk, The Netherlands, max.bannach@esa.int, https://orcid.org/0000-0002-6475-5512
Universität zu Lübeck, Germany, fch@tcs.uni-luebeck.de, https://orcid.org/0000-0003-3237-1650
Universität zu Lübeck, Germany, tantau@tcs.uni-luebeck.de
Theory of computation Finite Model Theory; Theory of computation Complexity theory and logic; Theory of computation Fixed parameter tractability; Theory of computation W hierarchy
On the Descriptive Complexity of Vertex Deletion Problems. Received 7 March 2024 / Accepted 23 May 2024.
§ ABSTRACT Vertex deletion problems for graphs are studied intensely in classical and parameterized complexity theory. They ask whether we can delete at most k vertices from an input graph such that the resulting graph has a certain property. Regarding k as the parameter, a dichotomy was recently shown based on the number of quantifier alternations of first-order formulas that describe the property. In this paper, we refine this classification by moving from quantifier alternations to individual quantifier patterns and from a dichotomy to a trichotomy, resulting in a complete classification of the complexity of vertex deletion problems based on their quantifier pattern.
The more fine-grained approach uncovers new tractable fragments, which we show to not only lie in FPT, but even in parameterized constant-depth circuit complexity classes. On the other hand, we show that vertex deletion becomes intractable already for just one quantifier per alternation, that is, there is a formula of the form ∀ x∃ y∀ z (ψ), with ψ quantifier-free, for which the vertex deletion problem is W[1]-hard. The fine-grained analysis also allows us to uncover differences in the complexity landscape when we consider different kinds of graphs and more general structures: While basic graphs (undirected graphs without self-loops), undirected graphs, and directed graphs each have a different frontier of tractability, the frontier for arbitrary logical structures coincides with that of directed graphs. § INTRODUCTION A recent research topic in parametrized complexity are distance to triviality problems. We are asked how many modification steps (the “distance”) we need to apply to a logical structure in order to transform it into a “trivial” one – which can mean anything from “no edges at all” to “no cycles” or even more exotic properties like “no cycles of odd length.” Such problems have been found highly useful in modern algorithm design <cit.> and are now an important test bed for new algorithmic ideas and data reduction procedures <cit.>. Many problems that have been studied thoroughly in the literature turn out to be vertex deletion problems. The simplest example arises from vertex covers, which measure the “distance in terms of vertex deletions” of a graph from being edge-free: A graph has a vertex cover of size k iff it can be made edge-free by deleting at most k vertices. For a slightly more complex example, the cluster deletion problem asks whether we can delete at most k vertices from a graph so that it becomes a cluster graph, meaning that every connected component is a clique or, equivalently, is P_3-free (meaning, there is no induced path on three vertices). The feedback vertex set problem asks if we can delete at most k vertices, such that the resulting graph has no cycles. The odd cycle transversal problem asks if there is a set of vertices of size at most k, such that removing it destroys every odd cycle. Equivalently, the problem asks if we can delete at most k vertices, such that the resulting graph is bipartite. To investigate the complexity of vertex deletion problems in a systematic way, it makes sense to limit the graph properties to have some structure. An early result in this direction <cit.> is the NP-completeness of vertex deletion to hereditary graph properties that can be tested in polynomial time. Intuitively, vertex deletion problems should be easier to solve for graph properties that are simpler to express. Phrased in terms of descriptive complexity theory, if we can describe a graph property using, say, a simple first-order formula, the corresponding vertex deletion problem should also be simple. The intuition was proven to be correct in 2020, when Fomin et al. <cit.> established a dichotomy based on the number of quantifier alternations that characterizes the classes of first-order logic formulas for which the vertex deletion problem is fixed-parameter tractable. The results of Fomin et al. directly apply to some of the above examples: Consider the problem vertex-cover, whose “triviality” property is described by the formula ϕ_vc = ∀ x∀ y (x y), or the problem cluster-deletion, whose triviality property is described by ϕ_cd = ∀ x∀ y∀ z( (x y y z) → x z). 
Both first-order formulas use no quantifier alternations, which by <cit.> already implies that the problems lie in P = FPT. Naturally, not all problems can be characterized so easily: Properties like acyclicity (which underlies the feedback vertex set problem) cannot be expressed in first-order logic and, thus, the results of Fomin et al. do not apply to them. Fomin et al. also show that if there are enough quantifier alternations (three, to be precise) in the first-order formulas describing the property, then the resulting vertex deletion problem can be W[1]-hard. Nevertheless, the descriptive approach allows us to identify large fragments of logical formulas and hence large classes of vertex deletion problems that are (at least fixed-parameter) tractable. A first central question addressed in the present paper is whether the number of quantifier alternations (the property studied in <cit.>) overshadows all other aspects in making problems hard, or whether the individual quantifier pattern of the formula plays a significant role as well. This question appears to be of particular importance given that formulas describing natural problems (like ϕ_vc and ϕ_cd above) tend to have short and simple quantifier patterns: We might hope that even though we describe a particular triviality property using, say, four alternations, the fact that we use only, say, two existential quantifiers in total still assures us that the resulting vertex deletion problem is easy. A second central question is whether the kind of graphs that we allow as inputs has an influence on the complexity of the problem. Intuitively, allowing only, say, basic graphs (simple undirected graphs without self-loops) should result in simpler problems than allowing directed graphs or even arbitrary logical structures as input. This intuition is known to be correct in the closely related question of deciding graph properties described in existential second-order logic. As we will see, in the context of vertex deletion problems it makes a difference whether we consider basic graphs, undirected graphs, or directed graphs, but not whether we consider directed graphs or arbitrary logical structures. Our Contributions. We completely classify the parameterized complexity of vertex deletion problems in dependence of the quantifier pattern of the formulas that are used to express the triviality property and also in dependence of the kind of graphs that we allow as inputs (basic, undirected, directed, or arbitrary logical structures). An overview of the results is given in Table <ref>, where the following notations are used (detailed definitions are given later): For a first-order formula ϕ over the vocabulary τ = {∼^2} of (directed, simple) graphs, the parameterized problem [k]vertex-deletion_dir(ϕ) (abbreviated vd_dir(ϕ)) asks us to tell on input of a directed graph G and a parameter k∈ℕ whether we can delete at most k vertices from G, so that for the resulting graph G' we have G'ϕ. The problems vd_undir(ϕ) and vd_basic(ϕ) are the restrictions where the input graphs are undirected or basic graphs (undirected graphs without self-loops), respectively. For instance, vertex-cover = vd_basic(ϕ_vc) = vd_basic(∀ x∀ y (x y)). In the other direction, let vd_arb(ϕ) denote the generalization where we allow an arbitrary logical vocabulary τ and arbitrary (finite) logical structures 𝒜 instead of just graphs G (and where “vertex deletion” should better be called “element deletion,” but we stick with the established name). 
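To make the problem definition concrete, the following naive Python sketch decides vertex deletion by exhaustively trying all candidate deletion sets, instantiated here with the edge-freeness property ϕ_vc from above; it is meant only to illustrate what vd_basic(ϕ) asks, it does not meet any of the tractability bounds discussed in this paper, and the function names are our own.

from itertools import combinations

def edge_free(vertices, edges):
    """Model check of phi_vc = forall x forall y (x !~ y) on the induced subgraph."""
    kept = set(vertices)
    return not any(u in kept and v in kept for (u, v) in edges)

def vertex_deletion(vertices, edges, k, check=edge_free):
    """Naive decider: is there a set S with |S| <= k such that G - S satisfies `check`?

    Tries all O(n^k) candidate deletion sets, so it only serves to make the
    definition concrete, not to achieve the FPT or circuit upper bounds.
    """
    vertices = list(vertices)
    for size in range(k + 1):
        for S in combinations(vertices, size):
            remaining = [v for v in vertices if v not in S]
            if check(remaining, edges):
                return True
    return False

# vertex-cover as a vertex deletion problem: a path on four vertices can be
# made edge-free by deleting two vertices, but not by deleting one.
P4 = ([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)])
print(vertex_deletion(*P4, k=1))   # False
print(vertex_deletion(*P4, k=2))   # True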
For a (first-order) quantifier pattern p, which is just a string of a's and e's standing for the universal and existential quantifiers at the beginning of a formula ϕ, we write basic(p) for the class of all problems vd_basic(ϕ) where ϕ has all its quantifiers at the beginning and they form the pattern p. For instance, vertex-cover∈basic(aa) as ϕ_vc has two universal quantifiers. The same notation is used for undirected graphs, directed graphs, and arbitrary structures. The results in Table <ref> give an answer to the first central question formulated earlier, which asked whether it is the number of alternations of quantifiers in patterns (and not so much the actual number of quantifiers) that are responsible for the switch from tractable to intractable observed by Fomin et al. <cit.>, or whether the frontier is formed by short patterns that “just happen” to have a certain number of alternations. As can be seen, the latter is true: All intractability results hold already for very short and simple patterns. Thus, while it was previously known that there is a formula in Π_3 (meaning it has a pattern of the form ∀^*∃^*∀^* or a^*e^*a^* in our notation) defining an intractable problem, we show that already one quantifier per alternation (the pattern aea) suffices. On the positive side, Table <ref> shows that all vertex deletion problems that are (fixed-parameter) tractable at all already lie in the classes AC^0 or at least AC^0↑. From an algorithmic point of view this means that all of the vertex deletion problems that we classify as fixed-parameter tractable admit efficient parallel fixed-parameter algorithms. Concerning the second central question, which asked whether it makes a difference which kind of graphs or logical structures we consider, Table <ref> also provides a comprehensive answer: First, the frontier of tractability (the patterns where we switch from membership in FPT = P to hardness for W[1]) is the same for all kinds of inputs (namely from “does not contain aea as a subsequence” to “contains aea as a subsequence”). Second, if we classify the tractable fragments further according to “how tractable” they are, a more complex complexity landscape arises: While dir(p) and arb(p) have the same classification for all p, the classes basic(p) and undir(p) each exhibit a different behavior. In other words: For simple patterns p, it makes a difference whether the inputs are basic, undirected, or directed graphs. The just-discussed structural results are different from classifications in dependence of quantifier patterns p established in previous works: Starting with Eiter et al. <cit.> and subsequently Gottlob et al. <cit.>, Tantau <cit.> and most recently Bannach et al. <cit.>, different authors have classified the complexity of weighted definability problems by the quantifier patterns used to describe them. In these problems, formulas have a free set variable and we ask whether there is an assignment to the set variable with at most k elements such that the formula is true. Since it is easy to see that the vertex deletion problems we study are special cases of this question, upper bounds from earlier research also apply in our setting. However, our results show that (as one would hope) for vertex deletion problems for many patterns p we get better upper bounds than in the more general setting. 
Furthermore, there is an interesting structural insight related to our second central question: While the results in <cit.> for weighted definability show that, there, the complexities for undirected graphs, directed graphs, and arbitrary logical structures all coincide (but differ for basic graphs), for the vertex deletion setting, we get three different complexity characterizations for basic, undirected, and directed graphs – but the latter coincide with arbitrary structures once more. Related Work. The complexity-theoretic investigation of vertex deletion problems has a long and fruitful history. Starting in classical complexity theory, results on vertex deletion problems were established as early as in the late 1970s <cit.>. The focus was mostly on deletion to commonly known graph properties, such as planarity, acyclicity or bipartiteness. Since it is very natural to regard the number of allowed modifications as the parameter of the problem, the investigation of vertex deletion problems quickly gained traction in parameterized complexity, with continued research to this day <cit.>. Specifically for graphs, similar problems like the deletion or modification of edges <cit.> or alternative distance measures such as elimination distance <cit.> are also considered. Regarding first-order definable properties, a dichotomy is shown in <cit.>. The framework of quantifier prefix patterns we employ in this paper has also received a lot of attention, especially in the context of descriptive complexity. Early uses go as far back as the classification of decidable fragments of first-order logic <cit.>. They were then considered in the context of classical complexity <cit.> and later also in the context of parameterized complexity <cit.>. Organization of this Paper. Following a review of basic concepts and terminology in Section <ref>, we present the complexity-theoretic classification of the vertex deletion problems for basic, undirected and directed graphs in Sections <ref>, <ref> and <ref>, respectively. main § TECHNICAL APPENDIX In the following, we provide the proofs omitted in the main text. In each case, the claim of the theorem or lemma is stated once more for the reader's convenience. § BACKGROUND IN DESCRIPTIVE AND PARAMETERIZED COMPLEXITY Terminology from Finite Model Theory. In this paper, we will use standard terminology from finite model theory, for a thorough introduction, see, for example <cit.>. A relational vocabulary τ (also known as a signature) is a set of relation symbols to each of which we assign a positive arity, denoted using a superscript. For example, τ = {P^1, E^2} is a relational vocabulary with a monadic relation symbol P and a dyadic relation symbol E. A τ-structure 𝒜 consists of a universe A and for each relation symbol R ∈τ of some arity r of a relation R^𝒜⊆ A^r. We denote the set of finite τ-structures as struc[τ]. For a first-order τ-sentence ϕ, we write models(ϕ) for the class of finite models of ϕ. A decision problem P is a subset of struc[τ] which is closed under isomorphisms. A formula ϕ describes P if models(ϕ) = P. For τ-structures 𝒜 and ℬ with universes A and B, respectively, we say that 𝒜 is an induced substructure of ℬ if A ⊆ B and for all r-ary R∈τ, we have R^𝒜 = R^ℬ∩ A^r. For a set S⊆ B, we denote by ℬ∖ S the substructure induced on B ∖ S. We regard directed graphs G = (V, E) (which are pairs of a nonempty vertex set V and an edge relation E ⊆ V × V) as logical structures 𝒢 over the vocabulary τ_digraph = {^2} where V is the universe and ^𝒢 = E. 
An undirected graph is a directed graph that additionally satisfies ϕ_undirected∀ x∀ y (x y → y x), while a basic graph satisfies ϕ_basic∀ x∀ y (x y → (y x x ≠ y)). For a first-order logic formula in prenex normal form (meaning all quantifiers are at the front), we can associate a quantifier prefix pattern (or pattern for short), which are words over the alphabet {e, a}.[One uses “a” and “e” in patterns rather than “∀” and “∃” since in the context of second-order logic one needs a way to differentiate between first-order and second-order quantifiers and, there, “E” refers to a “second-order ∃” while “e” refers to a “first-order ∃”. In our paper, we only use first-order quantifiers so only lowercase letters are needed.] For example, the formula ϕ_basic has the pattern aa, while the formula ϕ_degree-≥2∀ x ∃ y_1 ∃ y_2 ((x y_1) (x y_2) (y_1 ≠ y_2)) has the pattern aee. As another example, the formulas in the class Π_2 (which start with a universal quantifier and have one alternation) are exactly the formulas with a pattern p ∈{a}^* ∘{e}^*, which we write briefly as p ∈ a^*e^*. We write p ≼ q if p is a subsequence of q. Terminology from Parameterized Complexity. We use standard definitions from parameterized complexity, see for instance <cit.>. A parameterized problem is a set Q⊆Σ^* ×ℕ for an alphabet Σ. In an instance (x,k) ∈Σ^* ×ℕ we call x the input and k the parameter. The central problem we consider in this paper is the following: [arb(ϕ), where ϕ is a first-order τ-formula] (An encoding of) a logical τ-structure 𝒜 and an integer k∈ℕ. k. Is there a set S⊆ A with |S| ≤ k such that 𝒜∖ Sϕ? As mentioned earlier, we also consider the problems basic(ϕ), where the input structures are basic graphs (formally, basic(ϕ) = arb(ϕ) ∩(models(ϕ_basic)×ℕ)), the problems undir(ϕ), where the input structures are undirected graphs, and dir(ϕ), where the input structures are directed graphs. For a pattern p ∈{a,e}^*, the class arb(p) contains all problems arb(ϕ) such that ϕ has pattern p. The classes with the subscripts “basic”, “undir”, and “dir” are defined similarly. We will consider some parameterized circuit complexity classes. We define AC^0 as the class of parameterized problems that can be decided by a family of unbounded fan-in circuits (C_n, k)_n, k∈ℕ of constant depth and size f(k) · n^O(1) for some computable function f. Similarly, FAC^0 is the class of functions that can be computed by a family of unbounded fan-in circuits (C_n, k)_n, k∈ℕ of constant depth and size f(k) · n^O(1) for some computable function f. For AC^0↑, we allow the circuit to have depth f(k). Questions of uniformity will not be important in the present paper. For these classes, we have the following inclusions: AC^0 ⊊AC^0↑⊆P = FPT. A parameterized problem Q⊆Σ^* ×ℕ is AC^0-many-one-reducible to a problem Q'⊆Γ^* ×ℕ, written Q ≤^AC^0_m Q', if there is a function fΣ^* ×ℕ→Γ^* ×ℕ, such that (1) for all (x, k) ∈Σ^* ×ℕ we have (x, k) ∈ Q iff f(x, k) ∈ Q', (2) there is a computable function gℕ→ℕ such that for all (x, k) ∈Σ^* ×ℕ, we have k' ≤ g(k), where f(x, k) = (x', k'), and (3) f∈FAC^0. The more general AC^0 disjunctive truth table reduction, written Q ≤^AC^0_dtt Q', is defined similarly, only f maps (x,k) to a sequence (x_1,k_1),…,(x_ℓ,k_ℓ) of instances such that (1') (x,k) ∈ Q iff there is an i ∈{1,…,ℓ} with (x_i,k_i) ∈ Q' and (2') k_i ≤ g(k) holds for all i ∈{1,…,ℓ}. Both AC^0 and AC^0↑ are closed under ≤^AC^0_m- and ≤^AC^0_dtt-reductions. 
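Before turning to the individual graph classes, it may help to recall how a quantifier pattern translates into a concrete check: a brute-force model checker simply nests one loop over the universe per quantifier. The following sketch (Python, illustrative only, with names of our own choosing) does this for the formula ϕ_degree-≥2 with pattern aee from above.

from itertools import product

def models_degree_ge_2(V, E):
    """Brute-force model check of phi_degree->=2 (pattern aee) on a digraph (V, E):
    forall x exists y1 exists y2 (x ~ y1 and x ~ y2 and y1 != y2).
    One nested loop over the universe per quantifier, mirroring the pattern a e e."""
    adj = set(E)
    for x in V:                                          # universal quantifier 'a'
        if not any((x, y1) in adj and (x, y2) in adj and y1 != y2
                   for y1, y2 in product(V, repeat=2)):  # existential block 'ee'
            return False
    return True

# A directed 3-cycle has out-degree 1 everywhere, so the formula fails;
# adding all reverse edges gives out-degree 2 everywhere and the formula holds.
V = [1, 2, 3]
C3 = [(1, 2), (2, 3), (3, 1)]
print(models_degree_ge_2(V, C3))                             # False
print(models_degree_ge_2(V, C3 + [(2, 1), (3, 2), (1, 3)]))  # True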
§ BASIC GRAPHS main §.§ Proofs for Section <ref> Basic graphs, that is, undirected graphs without self-loops, are one of the simplest non-trivial logical structures one can imagine. Despite that, many NP-hard problems on graphs, like vertex cover, clique or dominating set, are NP-hard even for basic graphs. This also transfers in some sense to our setting: The “tractability frontier”, the dividing line between the fragments which are tractable and those where we can express intractable problems, is the same for all graph classes we consider. However, when we shift our attention to the complexity landscape inside the tractable fragments, we also see that the complexity of the logical structure has an impact on the complexity of the problems we can define: Basic, undirected, and directed graphs all have provably distinct complexity characterizations. We begin by stating the main theorem of the section, the complexity classification for basic graphs. In the rest of the section, we show the upper and lower bounds that lead to this classification. Let p ∈{a,e}^* be a pattern. * basic(p) ⊆AC^0, if p ≼ eae or p ≼ e^*a^*. * basic(p) ⊆AC^0↑ but basic(p) ⊈AC^0, if eeae ≼ p, aae ≼ p or aee ≼ p holds, but also still p ≼ e^*a^*e^*. * basic(p) contains a W[2]-hard problem, if aea≼ p. The theorem covers all possible patterns. It follows from the following lemma, where we state the individual complexity characterizations we will prove: * basic(eae) ⊆AC^0. * basic(e^*a^*) ⊆arb(e^*a^*) ⊆AC^0. * basic(e^*a^*e^*) ⊆arb(e^*a^*e^*) ⊆AC^0↑. * basic(eeae) contains a problem not in AC^0. * basic(aae) contains a problem not in AC^0. * basic(aee) contains a problem not in AC^0. * basic(aea) contains a W[2]-hard problem. Notice that in particular, we know unconditionally that W[2]⊈AC^0, and, hence, a W[2]-hard problem cannot lie in AC^0. It is furthermore widely conjectured that W[2]⊈AC^0↑, as AC^0↑⊆FPT. We devote the rest of this section to proving the individual items of the lemma. Upper Bounds Previous work by Bannach et al. <cit.> showed that in the weighted definability setting, formulas with the pattern ae already suffice to describe W[2]-hard problems. We now show that the situation is more favorable in the vertex deletion setting, which is a special case of weighted definability: All problems in basic(e^*a^*e^*) are tractable and the problems in basic(e^*a^*) and in basic(eae) are even in AC^0, the smallest class commonly considered in parameterized complexity. We start with the last claim: \beginlemma  basic(eae) ⊆AC^0. \endlemma  To check whether we can delete at most k vertices to satisfy a formula with prefix pattern eae, we first branch over the possible assignments to the first existentially quantified variable. Now, the neighborhood of this variable induces a 2-coloring on the rest of the graph. For the rest of the prefix, ae, we prove that a vertex has to be deleted if and only if there is no special set of constant size, called stable set. This can all be checked in AC^0. \beginproof  Fix a formula ϕ with pattern eae. Then we can rewrite ϕ equivalently in the following form for some quantifier-free formulas ϕ_1 and ϕ_2, neither of which contains the atoms s = x or s≠ x: ∃ s∀ x∃ y(((s = x) →ϕ_1(s, x, y)) ((s ≠ x) →ϕ_2(s, x, y))). We wish to show that basic(ϕ) can be decided by a AC^0 algorithm. Let G = (V,E) be an input graph for our algorithm. 
We start with some terminology: Since the formula asks us to find for all vertices x ∈ V ∖{s} a vertex y ∈ V such that ϕ_2(s,x,y) holds, we call such a y a witness for x (relative to s). We denote by W_x^s the set of possible witnesses for x relative to s and note that s ∈ W_x^s may hold. Observe that when the existential quantifier ∃ s is instantiated with some particular value s ∈ V and if W_x^s = ∅ holds, we have to delete x to make the rest of the formula (the part following ∃ s) true. A witness walk (relative to s) starting at v_1 or just a v_1-witness walk is a sequence of vertices (v_1, v_2, …, v_j) such that * we have v_i + 1∈ W_v_i^s for all i∈{1, …,j-1}, * the vertices v_1 to v_j-1 are distinct, and * we have v_j = s (and say that the the walk is s-terminated) or v_j = v_i for some i < j (and say that the walk is returning) or W_v_j^s = ∅ (and say that the walk is unstable). A walk is stable if it is not unstable (so it is s-terminated or returning). Our first crucial observation is that we never have to delete vertices that are part of a stable walk to make the graph satisfy the formula. Formally: Fix s ∈ V. Then for every vertex v ∈ V ∖{s} there is either a stable v-witness walk (relative to s) or v has to be deleted in order to satisfy ϕ when the existential quantifier is instantiated with the fixed s. Suppose that there is no stable v-witness walk. Consider the v-witness walk obtained by arbitrarily adding consecutive witnesses to the walk as long as possible. As this walk is unstable, it ends with a vertex v_j ≠ s with W_v_j^s = ∅. Thus, there is no way to make ∃ y ϕ_2(s,v_j,y) true in G and thus also not ∀ x ∃ y ϕ_2(s,x,y). In particular, we need to delete v_j, making the graph smaller, and note that this does not introduce any stable v-witness walks. Thus, by repeating the argument often enough, at some point we must have j= 1, that is, v_1 = v_j and W_v_1^s = ∅ holds. This means that we must delete v in order to make ϕ true, as claimed. By the claim, for each fixed s∈ V, we have to delete the vertices that are not the starts of stable witness walks to make the graph satisfy the formula and also note that we do not have to delete vertices v for which a stable v-witness walk exists as each vertex on it has a witness. Since we will soon see that it suffices to consider stable witness walks of length 10, we get the following algorithm: [style=pseudocode] input G = (V,E) for s ∈ V do D ∅ for v ∈ V ∖{s} do if there is no stable v-witness walk of length at most 10 relative to s then D D ∪{v} if |D| ≤ k then G' G ∖ D if G' ∃ y(ϕ_1(s, s, y)) then output “(G,k) ∈basic(ϕ)” and stop output “(G,k) ∉basic(ϕ)” The algorithm can be implemented in AC^0: For the for-statement in line <ref> we branch over the possible choices of s using |V| copies of the circuit executing the rest of the algorithm. Finding a stable v-witness walk for each vertex v in line <ref> can be done using |V|^10 parallel subcircuits. We can implement the handling of the set D by encoding it using a bit vector of length |V| where the ith bit is set when the ith vertex is in D: This allows us to add vertices to D in line <ref> in constant depth, and it is known <cit.> that the size check |D| ≤ k in line <ref> can be implemented in AC^0. The final check “G' ∃ y(ϕ_1(s, s, y))” in line <ref> can trivially be done using an AC^0 circuit as ϕ_1 is a first-order formula. 
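For concreteness, the following sequential Python sketch mirrors the pseudocode above; it is our own rendering, not code accompanying the paper. The quantifier-free parts are passed in as predicates phi1(s, x, y) and phi2(s, x, y) that are assumed to close over the edge relation, the enumeration of candidate walks is done by a naive bounded depth-first search, and the constant-depth circuit aspects are of course lost in this sequential form.

MAX_LEN = 10  # stable walks of length at most 10 suffice, by the claim proved below

def has_stable_walk(s, v, witnesses):
    # Search for a stable v-witness walk relative to s, i.e. one that is
    # s-terminated or returning, of length at most MAX_LEN.
    def extend(walk):
        last = walk[-1]
        if last == s or last in walk[:-1]:
            return True                     # s-terminated or returning: stable
        if len(walk) >= MAX_LEN:
            return False                    # give up on this branch
        return any(extend(walk + [w]) for w in witnesses.get(last, ()))
    return extend([v])

def decide_eae(V, k, phi1, phi2):
    # Sequential version of the pseudocode for a formula with pattern eae.
    V = list(V)
    for s in V:                             # branch over the outer existential quantifier
        witnesses = {x: [y for y in V if phi2(s, x, y)] for x in V if x != s}
        D = {v for v in V if v != s and not has_stable_walk(s, v, witnesses)}
        if len(D) <= k and any(phi1(s, s, y) for y in V if y not in D):
            return True
    return False

On small instances this sequential form can serve as a reference when checking the circuit construction; its worst-case cost per start vertex is of order n^10, matching the |V|^10 parallel subcircuits used above.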
We show the correctness of the algorithm in two directions: For the first direction, observe that if the algorithm outputs that (G,k) lies in basic(ϕ) in line <ref>, we have just found a vertex s∈ V and a set D of at most k vertices whose deletion yields a graph G' that satisfies the formula. This is because G' satisfies ∀ x ∃ y ((s = x) →ϕ_1(s, x, y)) (since this is equivalent to ∃ y ϕ_1(s,s,y) and we have just tested this in line <ref>) and every vertex v ∈ V ∖{s} has a witness (since there is a stable v-witness walk we know that each vertex on it has a witness and no vertex on it ever becomes part of D), so G' also satisfies ∀ x ∃ y((s ≠ x) →ϕ_2(s, x, y)). All told, G' is a model of (<ref>). For the other direction, we show that if we have (G,k) ∈basic(ϕ), then the algorithm outputs this in line <ref>. Membership in basic(ϕ) implies that there is a s ∈ V and at least one set D^s ⊆ V ∖{s} with |D^s| ≤ k such that G ∖ D^s ∀ x∃ y(((s = x) →ϕ_1(s, x, y)) ((s ≠ x) →ϕ_2(s, x, y))). The algorithm will consider this particular s at some point in line <ref>. We show in a moment that in lines <ref> to <ref> the algorithm then computes exactly the smallest set D that makes (<ref>) hold. In particular, this implies that |D| ≤ k will hold, which in turn means that the test in line <ref> is passed and so is the final check in line <ref> as (<ref>) holds. Thus, the algorithm will produce the correct output in line <ref> as claimed. To show that the minimal D is computed, first note that by Claim <ref> we have to delete all vertices that are not part of stable sets. Thus, if we can prove that we put exactly the vertices v ∈ V ∖{s} into D for which there is no stable v-witness walk, we are done: We have to delete all of them, but we delete no more and as part of stable witness walks, all remaining vertices x have a witness. However, in line <ref> we only check whether there is a stable v-witness walk of length 10 and it remains to prove that this test is sufficient. That is, we have to show that for every vertex v ∈ V ∖{s} for which there is some stable v-witness walk, there is also one of length at most 10. This is exactly the final claim: For each v ∈ V ∖{s}, if there is a stable v-witness walk relative to s, there is also one of length at most 10. Consider a shortest stable v-witness walk (v_1,v_2,v_3,…,v_j) relative to s that starts at v_1=v. We wish to show j ≤ 10, so for the sake of contradiction assume j > 10. Then none of v_1 to v_10 can equal s and all of them must be distinct. Recall that a witness of a vertex x ∈ V ∖{s} is a vertex y ∈ V such that ϕ_2(s,x,y) holds. We may assume that the formula ϕ_2 contains as its atomic formulas only x=y, s=y, x y, x s, y s, as well as negations thereof, since ϕ_2 is guarded by “s≠ x→” inside (<ref>) and since x x, y y, and s s are all always false in basic graphs. Furthermore, whether or not ϕ_2(s,v_p,v_q) holds for p,q ∈{1,…,10} with p≠ q depends neither on the atoms x = s nor on x = y inside ϕ_2, since, should these be present, they will always be false. Rather, the only remaining atoms that can still be relevant inside ϕ_2 are x y, x s, and y s. Let us say that a vertex is black if it is adjacent to s, otherwise it is white. Then, for all p,q ∈{1,…,10} with p ≠ q, the question of whether v_q is a witness for v_p relative to s (that is, whether ϕ_2(s,v_p,v_q) holds), depends only on whether v_q v_p holds and on the colors of v_p and v_q. 
We now distinguish two cases: First, that the stable v_1-witness walk (v_1,v_2,v_3,…,v_j) is s-terminated (meaning v_j = s) or is returning to v_i with i ≥ 5. Second, that the witness walk is returning, but to some v_i with i < 5. For the first case, assume that the color of v_1 is white (for the case that the vertex is black, just exchange black and white in the following argument). Suppose v_2 were also white. Then ϕ_2 would allow the white vertex v_1 to have a witness of the same color; but, then, v_1 could also serve as a witness for v_2 (regardless of whether they are connected or not) and (v_1,v_2,v_1) would be a stable returning v_1-witness walk, contradicting the assumption that (v_1,…,v_j) with j > 10 is a shortest stable v_1-witness walk. Thus, v_2 must be black. Repeating the argument shows that v_3 must be white (otherwise v_2 would be a witness for v_3) and then v_4 must be black and then v_5 must be white once more. Again without loss of generality, assume v_1 v_2 (otherwise, repeat the following argument with and exchanged). Then we know that the formula ϕ_2 allows a white vertex (v_1) to have a black witness (v_2) if they are connected by an edge – and since ϕ_2 cannot differentiate between vertices of the same color, we get that any white vertex w can have any black vertex b as its witness whenever w b. This means, in particular, that v_3 v_2 since, otherwise, the black vertex v_2 could serve as a witness for the white v_3 and (v_1,v_2,v_3,v_2) would be a returning v_1-witness walk. Similarly, we also have v_5 v_4 for the same reason. In general, any black vertex b can have any white vertex w as its witness whenever w b. Now consider the white vertex v_5 and how it is connected to the black vertex v_2. If v_5 v_2, then by (<ref>), the vertex v_2 would be a witness for v_5 and, thus, (v_1,v_2,v_3,v_4,v_5,v_2) would be a 5-vertex returning v_1-witness walk. If v_5 v_2, then by (<ref>), the vertex v_5 would be a witness for v_2 and, thus, (v_1,v_2,v_5,v_6,…,v_j) would be a shorter returning stable v_1-witness walk than (v_1,…,v_j). Thus, independently of whether v_2 and v_5 are connected or not, we get a contradiction. For the second case, we assume that the witness walk with j > 10 is returning, but to some v_i with i < 5. We can now repeat all of the arguments for the first case, but starting at v_5 rather than v_1. For instance, the first argument is now that if v_5 is white, then v_6 must be black since, otherwise, (v_1,…,v_5,v_6,v_5) would be a 6-vertex returning walk starting at v_1. By the same arguments, we also get that the following odd-indexed vertices must be white, while the even-indexed ones must be black. We can also conclude that (<ref>) and (<ref>) must hold. Finally, we can apply the same argument as before to the black vertex v_6 and the white vertex v_9: If v_6 v_9, then v_6 is a witness for v_9 and (v_1,…,v_9) is a too-short v_1-witness walk. If v_6 v_9, then v_9 is a witness for v_6 and (v_1,…,v_i,…,v_5,v_6,v_9,v_10,…,v_j) is once more a shorter stable v_1-witness walk than (v_1,…,v_j). Again, we conclude that no matter how v_6 and v_9 are connected, we get a contradiction. With the above claim, the proof of Lemma <ref> is complete. \endproof  Since the algorithms used to prove the next two upper bounds do not make use of the fact that the input structure is a basic graph, we prove them for arbitrary input structures. arb(e^*a^*) ⊆AC^0. 
For a given formula ϕ of the form ∃ x_1 ⋯∃ x_f ∀ y_1 ⋯∀ y_g (ψ) for a quantifier-free formula ψ, we show that arb(ϕ)≤^AC^0_dttg-hitting-set, where the hitting set problem is defined as shown below. Since g-hitting-set is known <cit.> to lie in AC^0, we get the claim. [d-hitting-set for fixed d ∈ℕ] A universe U and a set E of subsets e ⊆ U (called hyperedges) with |e| ≤ d for all e ∈ E, and a number k. k. Is there a hitting set X ⊆ V, meaning that X ∩ e ≠∅ holds for all e ∈ E, with |X| ≤ k? For an arbitrary input structure 𝒜 with universe A, we proceed as follows: For the existentially bound variables x_1 to x_f we consider all possible assignments to them in parallel. For each of these, we prepare a query to the hitting set problem, resulting in n^f queries in total. For a given assignment, which fixes each x_i to some constant c_i, replace each occurrence of x_i in ϕ by c_i. Build a hitting set instance H as follows: The universe is A ∖{c_1,…,c_f}. For each assignment (d_1,…,d_g) of to the g universally quantified variables, check if the formula ψ is true, that is, whether 𝒜ψ(c_1,…,c_f,d_1,…,d_g). If this is not the case, add the hyperedge {d_1,…,d_g}∖{c_1,…,c_f} to make sure that at least one element is deleted from the universe of 𝒜 that cause this particular violation. If {d_1,…,d_g}∖{c_1,…,c_f} is empty, an empty hyperedge is generated and the hitting set solver correctly rejects the input. We claim that 𝒜∈arb(ϕ) iff for at least one of the constructed H we have (H,k) ∈g-hitting-set: For the first direction, let S with |S| ≤ k be the elements of 𝒜's universe that we can delete, that is, for which 𝒜∖ S ϕ. Then there are constants (c_1,…,c_f) that we can assign to the existentially bound variables such that 𝒜∖ S ∀ y_1 ⋯∀ y_g (ψ(c_1,…,c_f,y_1,…,y_g)). But, then, S is a hitting set of the instance corresponding to these constants: If there were an edge e ⊆ A with e ∩ S = ∅ in the hitting set instance, there would be an assignment to the y_i to elements in A ∖ S that makes ψ false, violating the assumption. For the other direction, let X with |X| ≤ k be the solution of one of the produced hitting set instances with (H,k) ∈g-hitting-set (at least one must exist). Then 𝒜∖ X ϕ, since we can assign the existentially bound variables to the values that correspond to H (which will not be in X by construction) and there can be no assignment to the universally quantified variables that makes ψ false as any assignment where this would be case is hit by X by construction and, thus, at least one element of the tuple that causes the violation gets removed in 𝒜∖ X. arb(e^*a^*e^*) ⊆AC^0↑. Let ϕ be fixed and of the form ∃ x_1 ⋯∃ x_f ∀ y_1 ⋯∀ y_g ∃ z_1 ⋯∃ z_h (ψ) for a quantifier-free formula ψ. We describe a AC^0↑-algorithm that, given an arbitrary input structure 𝒜 with universe A, decides whether there is a set S with |S| ≤ k such that 𝒜∖ S ϕ. Now, we have for each assignment to the universally quantified variables a witness which is bound by the block of h existential quantifiers. The problem compared to the e^*a^*-fragment is that by the deletion of elements, we could potentially destroy witnesses needed to satisfy other assignments. Because of this, we use a direct search tree algorithm to resolve violations of the universal quantifiers. In detail, we once more consider all possible assignments (c_1,…,c_f) to the x_i in parallel. 
Then we use k layers to find and resolve violations: At the start of each layer, we will already have fixed a set D of vertices that we wish to delete, starting in the first layer with D = ∅. Then in the layer, we find the (for example, lexicographically) first assignment of the y_i to elements (d_1,…,d_f) that all lie in A ∖ D for which we cannot find an assignment of the z_i to elements (e_1,…,e_h) in A ∖ D such that 𝒜∖ D ψ(c_1,…,c_f,d_1,…,d_g,e_1,…,e_h). When we cannot find such an assignment, we can accept since we have found a D for which 𝒜∖ D ϕ holds. Otherwise, we have to delete one of the elements in {d_1,…,d_g}∖{c_1,…,c_f} to make the formula true, so we branch over these at most g possibilities, entering g copies of the next layers, where the ith copy starts with D ∪{d_i}. Since the block of universal quantifiers has constant length, the number of branches in each level of the search tree is constant, so the total size of the search tree is at most g^k. The depth of the search tree is bounded by the number of vertices we can delete, which is our parameter. In total, we get a AC^0↑ circuit. Lower Bounds We now go on to show the lower bounds claimed in Lemma <ref>. The next lemmas all follow the same rough strategy: To show that some problems that can be expressed in the given fragments are (unconditionally) not in AC^0, we reduce from a variant of the reachability problem. In contrast, the last lower bound is obtained via a reduction from set-cover, and improves a result from Fomin et al. <cit.>. They establish that there is a formula ϕ∈Π_3, such that basic(ϕ) is W[2]-hard. In terms of patterns, the formula they construct has the pattern a^5e^26a. We show that there is a formula with pattern aea for which this holds. The reachability problem that will be central for the following lower bounds is: [[]matched-reach] A directed layered graph G with vertex set {1,…,n}×{1,…,k}, where the ith layer is V_i {1,…,n}×{i}, such that for each i∈{1,…,k-1} the edges point to the next layer and they form a perfect matching between V_i and V_i+1; and two designated vertices s ∈ V_1 and t ∈ V_k. k. Is t reachable from s in G? (We require that in the encoding of G the vertex “addresses” (i,l) are given explicitly as, say, pairs of binary numbers, so that even a AC^0 circuit will have no trouble determining which vertices belong to a layer V_i or what the number k of layers is.) Observe that the input instance can be alternatively described as a collection of n directed paths, each of length k. We call the paths in this graph original paths with original vertices and edges. We call the vertices in the layers V_1 and V_k the outer vertices and the vertices in the layers V_i for i∈{2, …, k - 1} the inner vertices. The reductions add vertices and edges to the graphs, which will be referred to as the new vertices and edges (and will be indicated in yellow in figures). [<cit.>] []matched-reach∉AC^0 and, thus, for any problem Q with []matched-reach≤^AC^0_m Q we have Q ∉AC^0. The proof of every lemma using a reduction from the matched reachability problem will consist of four parts: * The construction of a formula ϕ with the quantifier pattern p given in the lemma. * The construction of the instance for the vertex deletion problem (G', k') from the input instance of the matched reachability problem (G, s, t) (typically by adding new vertices and edges). * Showing (G, s, t) ∈[]matched-reach implies (G', k') ∈basic(ϕ), called the forward direction. 
* Showing (G', k') ∈basic(ϕ) implies (G, s, t) ∈[]matched-reach, called the backward direction. We present the application of the above steps in detail in the following lemma. In subsequent lemmas, which follow the same line of arguments, but with appropriate variations in the constructions and correctness proofs, we only highlight the differences. basic(eeae) ⊈AC^0. We want there to be a deletion strategy for (G', k') iff in the instance (G, s, t), the vertices s and t lie on the same original path. We take k' = k, the number of layers in G, and construct a graph G' from G by adding two special vertices c_1 and c_2, and regard the adjacency of every vertex on the original paths to the vertices c_1 and c_2 as a 3-coloring with colors i∈{0, 1, 2}. We then add appropriate gadgets at the start and the end of each original path, with special gadgets being added at s and at t (although, in this proof, their “special gadgets” are just the empty gadget). The formula. Consider the following formulas, where ϕ_a specifies that every vertex that is neither c_1 nor c_2 should be connected in a certain way to them, and ϕ_b asks that every vertex of color i should have a neighbor of color (i - 1) 3. We encode the color 0 with (x c_1 x c_2), the color 1 with (x c_1 x c_2), and the color 2 with (x c_1 x c_2). ϕ_a(c_1, c_2, x) = (c_1 ≠ c_2) (c_1 x c_2 x) ϕ_b(c_1, c_2, x, y) = x y ((x c_1 x c_2) → (y c_1 y c_2)) ((x c_1 x c_2) → (y c_1 y c_2)) ((x c_1 x c_2) → (y c_1 y c_2)) ϕ_<ref> = ∃ c_1 ∃ c_2 ∀ x ∃ y (((x ≠ c_1) (x ≠ c_2)) → ((y≠ c_1) (y ≠ c_2) ϕ_a(c_1, c_2, x) ϕ_b(c_1, c_2, x, y))) The reduction. On input (G, s, t) the reduction first checks that the graph is, indeed, a layered graph with perfect matchings between consecutive levels (this can easily be done by an AC^0 circuit due to the way we encode G). Then, we let k' be the number k of layers in G = (V,) and construct G' = (V',') by first forgetting about the direction of the edges (making the graph undirected). We then add the following gadgets: * At each end v ∈ V_k of a path, except for v = t, we add a vertex v' to V' and connect v to v', so v ' v'. Let V_k+1 be the set of all new vertices added in this way. The gadget for t ∈ V_k is empty: We do not add anything. * At each beginning v ∈ V_1 of a path, except for v = s, add two vertices v' and v” to V' and connect the three vertices to a triangle, so v ' v' ' v”' v. Let V_0 contain all vertices v' added in this way and let V_-1 contain all vertices v” added in this way. Once more, the special gadget for s ∈ V_1 is just the empty gadget. * Finally, we add two further vertices c_1 and c_2 and connect them to the other vertices as follows: For v ∈ V_i with i∈{-1,0,1,2, …, k + 1}: * If i ≡ 0 3, let c_1 ' v. * If i ≡ 1 3, let c_2 ' v. * If i ≡ 2 3, let c_1 ' v and c_2 ' v. An example for the reduction is depicted in Figure <ref>. We claim that through this construction, the instance (G', k') is in basic(ϕ_<ref>) iff the input graph with vertices s and t is in []matched-reach: Forward direction. Suppose that (G, s, t) ∈[]matched-reach. We show that (G', k') ∈basic(ϕ_<ref>): In input G', just delete every vertex in the original s-t-path. Then every vertex v ∈ V_i for i∈{2, …, k} has its predecessor in the original path as a neighbor, and the predecessor has the previous color regarding the ordering. Furthermore, every vertex v∈ V_1 is part of a triangle where the three vertices each have a different color, so every one of these three vertices has a neighbor of the previous color. Backward direction. 
Suppose that (G', k') ∈basic(ϕ_<ref>). We show that (G, s, t) ∈[]matched-reach. By assumption, there is a set D of size |D| ≤ k = k' such that G' ∖ D is a model of ϕ_<ref>. Observe that c_1 ∉ D and c_2 ∉ D must hold since they are the only vertices satisfying the formula part ϕ_a, which requires that there are two different vertices that are connect to everyone else. On the other hand, we have to delete s, since by construction, it has no neighbor with the previous color (s has color 0, the successor of s has color 1). But, now, the successor of s has no neighbor of the previous color, so we have to delete it as well. We have to continue for the whole original path of s, so D has to contain at least the vertices on the original path starting at s, which encompasses k vertices. If the last vertex v ∈ V_k on the original path starting at s is not t (that is, if t is not reachable from s), then there is another vertex v' ∈ V_k+1 with v ' v' and we also have to delete v', contradicting the assumption that we only have to delete k vertices. Thus, t must be reachable from s. \beginlemma  basic(aae) ⊈AC^0. \endlemma  \beginproof  We reduce from the problem []matched-reach to a problem in basic(aae). The formula. Let ϕ_<ref>∀ x ∀ y ∃ z ((x y) → ((x z) (y z))), which says that every edge should be part of a triangle. The reduction. Let (G,s,t) and k be given. As before, we check that the instance is valid (is layered and consecutive layers form perfect matchings), set k' = k, forget about the direction of the edges, and start adding gadgets. * For each end v ∈ V_k of a path, except for v=t, add a vertex v' ∈ V_k+1 and connect it to v, so v' ' v. Once more, do not add anything to t. * For the v ∈ V_1 at the beginning of paths, nothing is done, except for v=s, where we add k + 1 new vertices u^s_1, …, u^s_k + 1 and connect them to s. * For each edge v ' v' except for those added to s, add k + 1 vertices u^v_1 to u^v_k+1 and connect them to both v and v' to form a triangle, that is, let v ' u^v_i ' v' ' v be a triangle. An example for the reduction is given in Figure <ref>. Forward direction. Suppose (G, s, t) ∈[]matched-reach. To see that (G', k') ∈basic(ϕ_<ref>), the vertex deletion strategy is to delete the vertices from the original path starting at s and ending at t (since t is reachable by assumption). This path encompasses exactly k vertices. After these deletions, every edge is part of a triangle: For the vertices v on the original path of s, only the added vertices u^v_1, …, u^v_k + 1 remain, but they have degree 0, so no edges are left that need to be part of any triangles. The other original paths remain unmodified and every edge was already part of a triangle by construction. Backward direction. Suppose (G', k') ∈basic(ϕ_<ref>). We show that we have (G, s, t) ∈[]matched-reach. For G' to be a model of ϕ_<ref>, after the deletion of k vertices, we have to have deleted s, because otherwise, we would not be able to remove the k + 1 edges to the vertices u^s_1, …, u^s_k + 1 which are not part of a triangle. Now, let v be the successor of s in the original path. After the deletion of s, we have k + 1 edges that are not part of a triangle between v and the vertices u^v_1, …, u^v_k + 1, so we have to delete v as well and so on for the whole original path of s. Now, if t was not in the original path of s, we would have to delete k + 1 vertices, a contradiction. \endproof  \beginlemma  basic(aee) ⊈AC^0. 
\endlemma  \beginproof  We again reduce from the problem []matched-reach to a problem in basic(aee). The formula. Now, consider the formula ϕ_<ref> = ∀ x ∃ y_1 ∃ y_2 ((x y_1) (x y_2) (y_1 ≠ y_2)), which requires that every vertex has degree at least 2. The reduction. For the reduction, we may assume without loss of generality that k≥ 1. Then, on input (G, s, t), we set k' = k, again forget the edge direction, making the graph undirected, and add a single vertex v, which we connect to every vertex from V_1 and V_k, except to s and t. Now, every vertex except s and t has degree at least 2. An example for the reduction is given in Figure <ref>. Forward direction. Assume (G, s, t) ∈[]matched-reach. To see that (G', k') ∈basic(ϕ_<ref>) holds, delete the k vertices of the original path from s to t. Then every vertex has degree at least 2: The inner vertices of the original graph have their predecessor and successor as neighbors, the vertices in V_1 have their successor and v as neighbors, and the vertices in V_k have their predecessor and v as neighbors. Backward direction. Assume (G', k') ∈basic(ϕ_<ref>). Since the vertices s and t each have degree 1, any deletion strategy has to delete them both to make the formula true. But now, the successor of s and the predecessor of t have in turn each degree 1, so we have to delete them as well to make the formula true and so on. If t was on a different original path as s, we would have to delete at least 2k vertices, a contradiction. Thus, (G, s, t) ∈[]matched-reach. \endproof  \beginlemma  basic(aea) contains a W[2]-hard problem. \endlemma  \beginproof  This time, we reduce from a different problem, namely from the following version of the set cover problem, which is known <cit.> to be W[2]-hard: [set-cover] An undirected bipartite graph G = (S ∪̇ U, ) with shores S and U and a number k. k Is there a cover C ⊆ S with |C| ≤ k of U, meaning that for each u ∈ U there is an s ∈ S with u s? The formula. We use the following formula with pattern aea: ϕ_<ref> = ∀ x ∃ y∀ z ((x y (y z→ x z)) (x = z)) This formula states that “every vertex should have a neighbor such that there is no triangle of which both are part.” The reduction. Let (S ∪̇ U, ,k) be given as input. The reduction outputs k' = k together with the undirected graph G = (V',') constructed as follows: * For each s ∈ S, add s to V' and also three more vertices s',s”,s”' and connect them in a cycle, so s ' s' ' s”' s”' ' s. * For each u ∈ U, add k+1 copies u_1,…, u_k+1 of u to V'. * Whenever u s holds, let all u_i form a triangle with s and s' in the new graph, that is, for i∈{1,…, k+1} let u_i ' s and u_i ' s'. An example for the reduction is shown in Figure <ref>. Forward direction. Let (S ∪̇ U,,k) ∈set-cover be given. We need to show that (G', k') ∈basic(ϕ_<ref>) holds. Let C ⊆ S with |C| ≤ k cover U. We claim that G' ∖ C ϕ_<ref>, that is, removing all s ∈ C from V' destroys all triangles that could violate ϕ_<ref>. Let us go over the different vertices still left in V' ∖ C: For all s ∈ S, each vertices s,s',s”,s”' has an incident edge in G' that is not part of a triangle. This still holds for s',s”,s”' in G' ∖ C for s∈ C, that is, for the remaining vertices of the cycles where s is deleted. The vertices form a cycle with four edges and of these, only s ' s' is part of any triangles. Thus, the claimed edges are s ' s”' for s and s' ' s” for s' and s”' s”' for s” and s”' ' s” for s”'. 
For each u ∈ U, for each of its copies u_i there is an incident edge in G' ∖ C that is not part of a triangle in G' ∖ C. Since C is a cover, there must be an s ∈ C with u s. But, then, u_i ' s' by construction and this edge is no part of a triangle in G'∖ C (since we deleted s ∈ C via which the only triangle was formed that contained this edge). Put together, the two claims clearly show that after removing C, all remaining vertices have incident edges that are not part of triangles. Backward direction. Conversely, suppose that for G' = (V', E') we are given a set D ⊆ V' with |D| ≤ k such that G' ∖ D ϕ_<ref>. For each u ∈ U, consider the copies u_i for i∈{1,…,k+1}. For every s ∈ S with s u there is a triangle u_i s s' u_i, but there are no other edges involving u_i. Hence, in order to ensure that ϕ_<ref> holds, we either have to (1) delete u_i or (2) delete exactly one of s or s' for some s u. Since there are k+1 copies of u, we cannot use option (1) for all copies of u, so for each u ∈ U there must be an s ∈ S with s u such that s or s' is deleted from G'. However, this means that the set C = {s ∈ S | s∈ D or s' ∈ D} is a set cover of (S ∪̇ U, ) and, clearly, |C| ≤ |D| ≤ k. \endproof  § UNDIRECTED GRAPHS main §.§ Proofs for Section <ref> Whether allowing self-loops has an impact on the complexity of the problems is hard to predict: While in the setting of Fomin et al. <cit.>, the same dichotomy arises for basic and undirected graphs, in the setting of weighted definability considered by Bannach et al. <cit.>, one class of problems jumps from being contained in AC^0 to containing NP-hard problems just by allowing self-loops. In our setting, we get an intermediate blow-up of the complexities by allowing self-loops: While the tractability frontier stays the same, the frontier of fragments that are solvable in AC^0 shifts. Let us now classify the complexity of vertex deletion problems on undirected graphs. We can use some of the upper and lower bounds established in the section before, and only consider the differences. Let p ∈{a,e}^* be a pattern. * undir(p) ⊆AC^0, if p ≼ ae or p ≼ e^*a^*. * undir(p) ⊆AC^0↑ but undir(p) ⊈AC^0, if one of eae ≼ p, aae ≼ p or aee ≼ p holds, but still p ≼ e^*a^*e^* holds. * undir(p) contains a W[2]-hard problem, if aea≼ p. * undir(ae) ⊆AC^0. * undir(e^*a^*) ⊆AC^0. * undir(e^*a^*e^*) ⊆AC^0↑. * undir(eae) contains a problem not in AC^0. * undir(aae) contains a problem not in AC^0. * undir(aee) contains a problem not in AC^0. * undir(aea) contains a W[2]-hard problem. Item 1 is proven below in Lemma <ref>. Items 2 and 3 follow directly from Lemmas <ref> and <ref>. Item 4 is proven below in Lemma <ref>, Item 5 follows from Lemma <ref>, Item 6 from Lemma <ref> and Item 7 from Lemma <ref>. \beginlemma  undir(ae) ⊆AC^0. \endlemma  \beginproof  A formula with the pattern ae has the following form: ∀ x ∃ y (ϕ'(x,y)). Comparing this with (<ref>) from the proof of Lemma <ref>, we see that we are in a very similar situation as in that lemma, when ϕ'(x,y) is interpreted as ϕ_2(s,x,y). Of course, ϕ_2 also talks about adjacency to s in the form of atoms s x and s y, while ϕ' also talks about self-loops in the form of atoms x x and y y. However, it turns out that these are in one-to-one correspondence: In the proof, we quickly defined a two-coloring of the graph, where v was white if v s held and otherwise black. We now call v white if v v holds and otherwise black. 
Since these colors are the only places in the proof where the atoms v s and now v v are used, this replacement is valid. In a bit more detail, let us go over the proof of Lemma <ref> once more. We first defined that a vertex y ∈ V is a witness for some x ∈ V∖{s} relative to s if ϕ_2(s,x,y) held. Our new definition is now simply that y ∈ V is a witness for x ∈ V if ϕ'(x,y) holds – the vertex s no longer used or needed, just like the notion of something begin “relative to s”. Next, we defined witness walks, which could be returning, unstable, or s-terminated. Here, we simply no longer have the option of s-termination and do not need to take it into account. Thus, a stable witness walk is always returning. This allows us to simplify the algorithm as follows: [style=pseudocode] input G = (V,E) D ∅ for v ∈ V do if there is no stable v-witness walk of length at most 10 then D D ∪{v} if |D| ≤ k then output “(G,k) ∈undir(ϕ)” else output “(G,k) ∉undir(ϕ)” The correctness arguments are almost all the same as in the proof of Lemma <ref>, only we no longer need so worry about s. Indeed, the only place in the proof where s is mentioned once more, is when vertices v are assigned the colors white and black depending on whether the atoms v s are present or not. While these atoms are no longer present, we now may have the atoms v v which, in the proof of Lemma <ref> always evaluated to 𝑓𝑎𝑙𝑠𝑒 since the graphs were free of self-loops. For basic graphs this is no longer the case, but, fortunately, the central Claim <ref> remains correct if in its proof we replace x s by x x and y s by y y. \endproof  \beginlemma  undir(eae) ⊈AC^0. \endlemma  \beginproof  The idea for this proof is the same as in Lemma <ref>, but instead of encoding three colors with two special vertices, we use one special vertex and self-loops. The formula. Define the formula ϕ_<ref> as follows: Take the formula ϕ_<ref> from page lemma:vd-basic-eeae, but remove the quantifier ∃ c_2 (which yields the desired pattern eae) and replace each occurrence of v c_2 by v v and occurrence of v = c_2 by 𝑓𝑎𝑙𝑠𝑒, where v is any variable. The reduction. We only describe the difference to the reduction from Lemma <ref>: We do not add u_2. Instead, for each vertex v ∈ V' for which we used to have v ' u_2, we add a self-loop instead, so v ' v holds instead. Correctness. In our construction of both the formula and of the graph, “v is adjacent to c_2” got replaced by “v has a self-loop” and, thus, the proof of Lemma <ref> can be recycled. It only remains to argue that it is not possible that deleting u_2 would have produced solutions that are no longer possible, but reviewing the proof shows that we already argued there that deleting u_2 is not possible (and neither is deleting u_1). \endproof  § DIRECTED GRAPHS AND ARBITRARY STRUCTURES main §.§ Proofs for Section <ref> The final class of logical structures we investigate in this paper are directed graphs. Interestingly, from the viewpoint of quantifier patterns, this class of structures is as complex as arbitrary logical structures. Let p ∈{a,e}^* be a pattern. * dir(p) ⊆AC^0, if p ≼ e^*a^*. * dir(p) ⊆AC^0↑ but dir(p) ⊈AC^0, if ae ≼ p ≼ e^*a^*e^*. * dir(p) contains a W[2]-hard problem, if aea≼ p. * dir(e^*a^*) ⊆AC^0. * dir(e^*a^*e^*) ⊆AC^0↑. * dir(ae) contains a problem not in AC^0. * dir(aea) contains a W[2]-hard problem. Items 1 and 2 follow directly from Lemmas <ref> and <ref>. Item 3 is shown in Lemma <ref>, and Item 4 follows from Lemma <ref>. \beginlemma  dir(ae) ⊈AC^0. 
\endlemma  \beginproof  The formula. Consider the formula ϕ_<ref>∀ x ∃ y (x y), which states that every vertex has a successor. The reduction. On input (G, s, t) we reduce as follows: We set k' = k, and for each vertex v∈ V_k, except for t, we add a self-loop. We then add for every vertex u∈ V_1, except for s, a vertex u' and an edge (u', u). An example for the reduction is given in Figure <ref>. Forward direction. We have that (G, s, t) ∈[]matched-reach, and show that we have (G', k') ∈dir(ϕ_<ref>). To make the formula true, we simply delete every vertex in the original path of t, and since s is in the same original path, we have to delete exactly k vertices. Now every vertex has a successor: The vertices in the layers V_i for i∈{1, …, k - 1} have their original successor, and every vertex in V_k has itself as a successor via the self-loop. Backward direction. We have that (G', k') ∈dir(ϕ_<ref>) and show that we have (G, s, t) ∈[]matched-reach. To make the formula true, we have to delete t, since it has no successor. But then, we have also delete the predecessor of t, since it now has no successor as well, and so on. So, we have to delete all the vertices on the same original path as t. Now, if this original path did not begin with s, we would have to delete k + 1 vertices, a contradiction. \endproof  § CONCLUSION In this paper, we fully classified the parameterized complexity of vertex deletion problems where the target property is expressible by first-order formulas and where the inputs are basic graphs, undirected graphs, directed graphs, or arbitrary logical structures. The classification is based on the quantifier patterns of the formulas, and sheds additional light on the complexity properties that emerge from these patterns: We have seen that while the tractability barrier is the same for all logical structures, basic(e^*a^*e^*), undir(e^*a^*e^*), dir(e^*a^*e^*) and arb(e^*a^*e^*) all being tractable and basic(aea), undir(aea), dir(aea) as well as arb(aea) all containing intractable problems, in the tractable cases, basic, undirected and directed graphs have provably different complexities, the latter coinciding with arbitrary structures. The granularity we gained with the viewpoint of quantifier patterns could be useful to examine the complexity of vertex deletions problems where the property is given by a formula of a more expressive logic: For both monadic second-order logic (mso) and existential second-order logic (eso), even the model checking problem becomes NP-hard. This would allow us to express many more natural problems such as feedback vertex set, that have no obvious formalization as a vertex deletion problem to plain fo-properties. Similarly, we could allow extensions such as transitive closure or fixed point operators. Compared to previous work on weighted definability, where the objective is to instantiate a free set variable with at most, exactly, or at least k elements such that a formula holds, we only considered deleting at most k elements. How does the complexity of vertex deletion problems change, if we have to delete exactly k elements – or, for that matter, at least k elements? main
http://arxiv.org/abs/2406.18711v1
20240626192350
Bounded asymptotics for high-order moments in wall turbulence
[ "Xi Chen", "Katepalli R. Sreenivasan" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
APS/123-QED Key Laboratory of Fluid Mechanics of Ministry of Education, Beihang University (Beijing University of Aeronautics and Astronautics), Beijing 100191, PR China chenxi97@outlook.com Tandon School of Engineering, Courant Institute of Mathematical Sciences, and Department of Physics, New York University, New York 10012, USA krs3@nyu.edu § ABSTRACT Turbulent wall-flows are the most important means for understanding the effects of boundary conditions and fluid viscosity on turbulent fluctuations. There has been considerable recent research on mean-square fluctuations. Here, we present expressions for high-order moments of streamwise velocity fluctuation u, in the form ⟨ u^+2q⟩^1/q=α_q-β_q y^∗1/4; q is an integer, α_q and β_q are constants independent of the friction Reynolds number Re_τ, and y^∗ = y/δ is the distance away from the wall, normalized by the flow thickness δ; in particular, α_q =μ+σ q according to the `linear q-norm Gaussian' process, where μ and σ are flow-independent constants. Excellent agreement is found between these formulae and available data in pipes, channels and boundary layers for 1 ≤ q ≤ 5. For fixed y^+ = y^*Re_τ, the present formulation leads to the bounded state ⟨ u^+2q⟩^1/q=α_q as Re_τ→∞ while the attached eddy model predicts that the moments continually grow as log Reynolds number. Bounded asymptotics for high-order moments in wall turbulence Katepalli R. Sreenivasan July 1, 2024 ============================================================= Introduction: For over a century, a vexing problem in turbulence research has been how turbulence fluctuations vary near a smooth solid wall; see <cit.>. The classical paradigm is that, when scaled by the so-called friction velocity u_τ≡√(τ_w/ρ), where τ_w is the shear stress at the wall and ρ the fluid density, various averages of turbulence fluctuations are invariant with respect to the Reynolds number Re_τ (≡ u_τδ/ν, where δ is the thickness of the turbulent boundary layer (TBL), pipe radius or channel half-height, and ν is the fluid viscosity). However, data accumulated in the last two decades from laboratory experiments <cit.> as well as direct numerical simulations (DNS) <cit.> show that the mean square fluctuations keep increasing in the known Reynolds number range. A prevailing result, based on the so-called attached eddy model originating with <cit.>, is that this increase persists indefinitely as ln Re_τ (see <cit.> and references therein). Quite a different result <cit.>, based on the so-called bounded dissipation model (see also <cit.>), is that the observed increase is a finite-Re_τ effect, which vanishes as Re_τ→∞. The theoretical underpinnings of both results are not fool-proof (see below for more comments), so reliance has to be placed on the agreement with empirical results while deciding the relative merits of the models. Comparison of low-order moments in the available data range is ambiguous in some instances—though an independent assessment <cit.>, besides our own <cit.>, documents support for the bounded law over the attached eddy results— so a detailed consideration of high-order moments is essential. Here, we consider the wall-normal profiles of high-order velocity fluctuation moments, make detailed data comparisons for the two formulations and provide a few general conclusions. This is the purpose of this Letter. 
Theoretical considerations: According to attached eddy model, the variance of streamwise velocity u follows a logarithmic decay in the flow region beyond the near wall peak, i.e., ⟨ u^+2⟩ = B_1 -A_1 ln(y^∗), which has been generalized in <cit.> for high-order moments as ⟨ u^+2q⟩^1/q = B_q -A_q ln(y^∗). Here, the superscript + indicates normalization by u_τ and y^∗=y/δ. The brackets ⟨⟩ indicate temporal averages; and A_q and B_q are constants independent of y^∗ and Re_τ but depend on the moment order q. Equation (1) builds on the idea of turbulent eddies effectively attached to the wall, with their number density varying inversely with y^∗ <cit.>. Because the identification of a clear eddy structure is fraught with uncertainties <cit.>, the k^-1 velocity spectrum <cit.> is sometimes thought to be another possible rationale for the same idea; but that, too, has not been observed unambiguously <cit.>. In going from (1) to (2), a Gaussian distribution of u is assumed, which leads to ⟨ u^+2q⟩^1/q =[(2q-1)!!]^1/q⟨ u^+2⟩ with q!!≡ q(q-2)(q-4)…1, but the slopes A_q depart significantly from the Gaussian prediction <cit.>. The alternative boundedness proposal is ⟨ u^+2q⟩^1/q = α_q -β_q (y^∗)^1/4, where α_q and β_q are constants independent of y^∗ and Re_τ. For reference the variance of u for q=1 in (<ref>) is ⟨ u^+2⟩ = α_1 -β_1 (y^∗)^1/4. The procedure for deriving (<ref>) is summarized here, with details to be found in <cit.>, where a preliminary comparison with data was made. We have an inner expansion ⟨ u^+2⟩=f_0(y^+)+f_1(y^+) g(Re_τ), and an outer expansion ⟨ u^+2⟩=F_0(y^∗). Here, y^+=y^∗ Re_τ is the inner viscous scale, and the Gauge function g(Re_τ)=Re^-1/4_τ depicts the finite Re_τ dependence, determined via a consideration of the influence of the outer Kolmogorov scales <cit.>. The matching between the two expansions results in (<ref>). Equation (<ref>) could be regarded as a generalization of (<ref>) by replacing ⟨ u^+2⟩ by ⟨ u^+2q⟩^1/q. Another way to derive (<ref>) is to invoke a `linear-q-norm Gaussian' (LQNG) process developed in <cit.>, which is different from the derivation of (<ref>) from the Gaussian assumption, as will be explained subsequently. Comparison with experiments: Below, a side-by-side comparison of (2) and (3) with data will be presented, and variations of α_q and A_q will be examined for understanding their underlying stochastic properties in the asymptotic limit of Re_τ→∞. Reminiscent of the standard overlap in the form of the well-known log-law, the generalized form (2) was proposed in <cit.> for an intermediate region approximately in the range 400 < y^+<0.3Re_τ. For a decade of y^+ in this intermediate region, an Re_τ>13,000 is required (see also <cit.>), which is larger than that available from DNS. We thus start with comparisons for high-Re_τ experiments of TBL and pipe flows, and then consider the DNS data for channels obtained at more moderate Re_τ. The data uncertainty is not addressed here as it has been discussed in the original references. Specifically, Fig. <ref> shows ⟨ u^+2q⟩^1/q of TBL at Re_τ≈ 19,000 for q=1-5, measured by Hutchins et al. <cit.> in the Melbourne wind tunnel. This dataset is the one used in <cit.> to illustrate the generalized representation (<ref>). Shown by dashed lines in Fig. <ref>a are the best fits given by (<ref>); there is a decent agreement with the data in an intermediate y^+ range, with the best fits for (A_q, B_q) given by: (1.19, 1.71), (2.06, 2.50), (2.93, 3.23), (3.81, 4.12), and (4.68, 5.03) for q=1-5, respectively. 
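The fits quoted above are straightforward to reproduce. The sketch below shows one way to fit both the logarithmic form B_q - A_q ln(y^*) and the bounded form α_q - β_q (y^*)^1/4 to a measured profile of ⟨ u^+2q⟩^1/q versus y^* with scipy; the array names and the example fit windows are illustrative assumptions on our part, and the data themselves are those of the cited experiments and are not reproduced here.

import numpy as np
from scipy.optimize import curve_fit

def log_form(y_star, A_q, B_q):
    # generalized logarithmic (attached-eddy) form: B_q - A_q ln(y*)
    return B_q - A_q * np.log(y_star)

def bounded_form(y_star, alpha_q, beta_q):
    # bounded form: alpha_q - beta_q (y*)^(1/4)
    return alpha_q - beta_q * y_star**0.25

def fit_profile(y_star, moment_profile, form, window):
    # Least-squares fit of `form` to the profile restricted to window = (y*_min, y*_max).
    lo, hi = window
    sel = (y_star >= lo) & (y_star <= hi)
    popt, _ = curve_fit(form, y_star[sel], moment_profile[sel])
    return popt

# For instance, for a q = 1 profile u2 = <u^{+2}> sampled at positions y_star:
#   A1, B1        = fit_profile(y_star, u2, log_form,     (0.02, 0.3))
#   alpha1, beta1 = fit_profile(y_star, u2, bounded_form, (0.01, 1.0))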
The fitted value A_1=1.19 differs from the earlier value of A_1=1.25 by 5%, which is within uncertainty <cit.>. The intercepts B_q increase with increasing q, but a closer look at Fig. <ref>a reveals that the inner limits for the fits (<ref>) begin increasingly farther away from the wall—for example, y^+ ≈ 200 for q=1 compared to y^+ ≈ 400 for q=5. The standard logarithmic region will be continually eroded if this trend continues. We should also note that the fit becomes more of a tangent to the data for larger q. On the other hand, the bounded behavior (<ref>) reproduces data better in a wide flow domain (Fig. <ref>b), i.e. from y^+=200 to almost the edge of the boundary layer. The best fits for (α_q, β_q) are: (9.7, 9.3), (15.2, 14.1), (20.2, 17.9), (25.2, 22), (29.5, 25) for q=1-5, respectively. Two further points may be noted. First, since ⟨ u^+2q⟩^1/q remains positive for y^∗=1, β_q is by necessity slightly smaller than α_q. Second, ⟨ u^+2q⟩^1/q=α_q is the asymptotic plateau for y^∗→0, which can therefore be used to approximate the inner profile. For example, closer to the wall within the so-called buffer layer, (<ref>) fits the data approximately. Indeed, the deviation from data is O(1) for the entire flow domain. However, this is not the case for the attached eddy model <cit.> for which the peak scales as (A_q/2) ln Re_τ, whereas the near wall extension of (<ref>) suggests a magnitude A_q ln Re_τ; the difference between the two is of order ln Re_τ which diverges as Re_τ→∞. Hence, one cannot simply extrapolate (<ref>) to approximate the near wall profile. Moving on to the pipe flow Figs. <ref>a-d compare the profiles of ⟨ u^+2p⟩^1/p for q=1,2,3,5 with data measured by Hultmark et al. <cit.> in the Princeton Superpipe. Taking data at Re_τ=20,300 as the example, the logarithmic fit is obtained in the range 0.02<y^∗<0.3, which corresponds to the 400<y^+<0.3Re_τ. For that range, we obtain (A_q, B_q) as (1.25, 1.71), (1.98, 2.78), (2.6, 4.20), (3.7, 7.22) for q=1,2,3,5, respectively. Again, they appear somewhat like tangents to the data profiles. In comparison, the bounded form agrees closely with data in the whole range y^∗>0.02, with the best fits of (α_q, β_q) given by (10.2, 9.8), (16, 15), (21, 19), (31, 26.8) for increasing q. Note that the same constants produce good data collapse for all other Re_τ profiles. To explain, this collapse is the basis for our outer expansion mentioned in the “Theoretical considerations" section. It is clear in Fig. 2 that the bounded form extends almost to the center of pipe, much farther than the log-form ending at y^∗=0.3. Also, towards the wall, the bounded form characterizes larger segments of the data profiles. Comparisons with the channel flow are shown in Fig. 3 for the DNS data at Re_τ=5200 by Lee & Moser <cit.>. These high-order moments are obtained from the Johns Hopkins Turbulence Database. More than 5 × 10^6 velocity samples are averaged to obtain these profiles, and the convergence of the probability density function has been checked. As shown in Fig. 3a, the bounded form yields a close representation of data, covering both the near-wall and the center flow regions. For a closer look near the wall, Fig. 3b shows the same plot on the logarithmic abscissa, from which it is clear that the log-form only captures data in a narrow domain (from y^+=400 or y^∗=0.08, to y^∗=0.3), while (<ref>) offers a better description for the channel flow as well. Asymptotic behaviors: Different asymptotic behaviors can be deduced for the two proposals. 
Unlike an unbounded growth of the variance according to (<ref>), a bounded state follows for (<ref>), i.e., ⟨ u^+2⟩=α_1≈10 as Re_τ→∞. This sharp contrast appears also for higher q. It is important to understand how α_q and A_q vary with q because they appear in the leading order terms. They are plotted in Fig. 4 for all three flows. For A_q (normalized by A_1), the data vary from one flow to another, with those for channel being the lowest. This is in addition to the sensitivity shown by A_1 with respect to Re_τ <cit.>. Moreover, following the Gaussian model in <cit.>, one has A_q=A_1[(2q-1)!!]^1/q, which is, however, conspicuously higher than the A_q obtained from Fig. 4a. Such a departure indicates that the Gaussian assumption does not hold and that a different model is needed (e.g., the sub-Gaussian consideration of <cit.>). In contrast, the α_q values for the three flows are quite close to each other and exhibit an excellent linear dependence with the order q (Fig. 4b). As ⟨ u^+2q⟩^1/q=α_q for y^∗→0, the linear growth of α_q corresponds to a LQNG process of ϕ=(u^+)^2, which satisfies the operator reflection symmetry <cit.> as 𝐄∘𝐐(ϕ)=𝐐∘𝐄(κ). Here, κ=N(μ, 2σ) is a Gaussian seed with mean μ and variance 2σ; 𝐄 is the operator of exponential transform, e.g. 𝐄(κ)=e^κ; and 𝐐 is the operator of q-norm estimation, e.g. 𝐐(ϕ)=⟨ϕ^q⟩^1/q with ⟨·⟩ for the expectation. To see the linearity with q, applying 𝐄^-1 (i.e., taking the logarithm) on both sides of (<ref>) yields ⟨ u^+2q⟩^1/q = 𝐐(ϕ) = 𝐄^-1∘𝐐∘𝐄(κ) = ln[⟨ e^κ q⟩^1/q] =μ +σ q. With μ=5.5 and σ=5, (<ref>) thus explains the linear behavior of α_q in Fig. 4b. Note that the current α_q = 5.5 + 5q is very close to α_q = 5.5 + 5.9q from Eq. (4.10) of <cit.> for the moments inner peak at y^+=15, indicating that the whole flow shares the similar LQNG property. Finally, we derive (3) from (4) by the LQNG process. According to (<ref>) or (<ref>), the ratio of moments also grows linearly with order, i.e. ⟨ u^+2q⟩^1/q/⟨ u^+2⟩=(μ+σ q)/(μ+σ)=c_1+c_2 q where c_1=μ/(μ+σ) and c_2=σ/(μ+σ). Further with (<ref>) for ⟨ u^+2⟩, one obtains (<ref>) for high order moments. The approach is different from the Gaussian hypothesis, but it is evident that LQNG describes Fig. 4b more consistently. Conclusions: In summary, we have shown that the bounded asymptotic paradigm for high-order moments of wall turbulence shows excellent agreement with data in boundary layers, pipes and channels. The new expression covers a wide range of y^+ and suggests a constant plateau for the moments that grows linearly with the moment order. Future work includes the connection of the present formulation to the flow geometry. Acknowledgement. We are grateful to all the authors cited in figures 1-3 for making their data available. X. Chen appreciates the support by the National Natural Science Foundation of China, No. 12072012 and 11721202, and the “Fundamental Research Funds for the Central Universities".
http://arxiv.org/abs/2406.17682v1
20240625161727
Efficient classical algorithm for simulating boson sampling with inhomogeneous partial distinguishability
[ "S. N. van den Hoven", "E. Kanis", "J. J. Renema" ]
quant-ph
[ "quant-ph", "physics.optics" ]
MESA+ Institute for Nanotechnology, University of Twente, P. O. box 217, 7500 AE Enschede, The Netherlands MESA+ Institute for Nanotechnology, University of Twente, P. O. box 217, 7500 AE Enschede, The Netherlands MESA+ Institute for Nanotechnology, University of Twente, P. O. box 217, 7500 AE Enschede, The Netherlands § ABSTRACT Boson sampling is one of the leading protocols for demonstrating a quantum advantage, but the theory of how this protocol responds to noise is still incomplete. We extend the theory of classical simulation of boson sampling with partial distinguishability to the case where the degree of indistinguishability between photon pairs is different between different pairs. Efficient classical algorithm for simulating boson sampling with inhomogeneous partial distinguishability J. J. Renema July 1, 2024 ========================================================================================================= § INTRODUCTION Quantum computers are expected to outperform classical computers in certain well-defined tasks such as the hidden subgroup problems for abelian finite groups, which includes prime factorization <cit.>, simulations of quantum systems<cit.>, and unstructured search <cit.>. However, building a universal fault-tolerant quantum computer is no easy task, due to the extreme degree of control over a large number of quantum particles required. As an intermediate step, experimental research has focused on the demonstration of a quantum advantage <cit.>, i.e. a computational task where quantum hardware outperforms all classical hardware in wall-clock time, on a well-defined computational problem not necessarily of any practical utility. Such demonstrations have been claimed in superconducting circuits <cit.>, and photons <cit.>. These quantum advantage claims caused substantial debate, with several later being outperformed by classical simulations <cit.>. This was possible despite strong guarantees of computational complexity because experimental hardware suffers from various forms of noise, which introduce decoherence, and reduce the degree to which the task which the device is performing is truly quantum mechanical, thereby opening up loopholes for classical simulation strategies to exploit. Similar to the situation in Bell tests, these simulation strategies demarcate the regime where a classical explanation for the observed data cannot be ruled out. They therefore serve a vital function in assessing the success or failure of a quantum advantage demonstration. One protocol for a quantum advantage demonstration is boson sampling <cit.>. In boson sampling, single photons are sent through a large-scale linear interferometer. The computational task is to provide samples from the output state measured in the Fock basis (see Fig. <ref>). Complexity arises ultimately from quantum interference between the exponentially many ways in which the photons can traverse the interferometer to produce a single outcome. The main sources of noise in boson sampling are photon loss <cit.>, where some of the photons do not emerge from the output of the interferometer, and photon distinguishability <cit.>, where the particles carry which-path information in their internal quantum states. 
Several strategies exist to classically simulate imperfect boson sampling, including ones based on approximating the quantum state using tensor networks <cit.>, ones aimed at reproducing the marginal photon distributions behind some number of optical modes <cit.>, and ones based on phase-space methods <cit.>. Some methods are specifically aimed at spoofing certain benchmarks which have themselves been put forward as proxies for computational complexity <cit.>. The classical simulation technique that we focus on here makes use of the fact that imperfections dampen quantum interference more strongly between paths through the interferometer that exhibit a higher degree of classical dissimilarity, i.e., which would be more different if the particles were fully classical <cit.>. This allows us to establish a notion of distance between the paths, with the attenuation of quantum interference between two paths depending exponentially on the distance. Since it can be shown that there are only polynomially many paths shorter than a given distance, truncating the quantum interference at a fixed distance produces an approximation to the output probability, which is both efficiently computable and maintains its accuracy as the system size is scaled up. Interestingly, this bosonic algorithm has a direct counterpart in the simulation of qubit-based systems <cit.>, as do some of the other algorithms. It is an open question whether this is a coincidence or a symptom of some deeper structure of the problem of demonstrating a quantum advantage, with Kalai and Kindler conjecturing <cit.> that the susceptibility of boson sampling to noise is an intrinsic feature of a non-error-corrected approach to demonstrating a quantum advantage. However, this algorithm suffers from some restrictions. In particular, it assumes that the degree of indistinguishability between all photons is equal. In the case of varying indistinguishability among pairs of photons, the only solution available to the algorithm is to approximate the degree of indistinguishability between all photons as that of the highest pairwise indistinguishability. This substantially reduces the applicability of the algorithm to a real experiment, where such fluctuations inevitably occur. In the most pathological case, an experiment with two fully indistinguishable photons and otherwise all distinguishable photons could not be classically simulated, even though only two-photon quantum interference occurs in this case. In this work, we eliminate the dependency on this assumption, demonstrating a classical simulation of noisy boson sampling that is efficient for realistic models of dissimilar photon indistinguishability. We focus on two experimentally relevant cases: first, the case of independent and identically distributed fluctuations in the states of the photons, and second, the situation where there are two species of photons, one with partial distinguishability and one with full indistinguishability. For these cases, we show extensions of the algorithm of <cit.> which achieve better performance than the original, extending the area of the parameter space which is susceptible to classical simulation. For the case of independent and identically distributed partial distinguishabilities, we find that the complexity is entirely governed by the mean of the distribution of Hong-Ou-Mandel <cit.> visibilities.
For the case of the two-species model, we find that the sampling problem is still classically simulable, but at an additional cost exponential in the number of fully indistinguishable particles. We achieve these results by reworking and simplifying the derivation of <cit.>, to more easily accommodate more complex partial distinguishability distributions. We therefore show that the sensitivity of the hardness of boson sampling to imperfections is not a result of the specific assumptions made in the classical simulation techniques. Moreover, these results provide evidence for the idea that the sensitivity of quantum advantage demonstrations to noise is intrinsic rather than dependent on the specific model of noise chosen. We focus specifically on partial distinguishability as a source of error, motivated by the idea that both optical loss and indistinguishability, as well as other errors, all affect the computational complexity of the boson sampling problem in similar ways <cit.>, meaning that any is paradigmatic for the others. We leave full extension of our results to optical loss and other imperfections to future work. § CLASSICAL ALGORITHM FOR BOSON SAMPLING WITH PARTIALLY DISTINGUISHABLE PHOTONS In this section, we will revisit the algorithm for efficiently classically simulating boson sampling with partial distinguishable particles as described in <cit.><cit.>. We demonstrate a simplified proof for the algorithm, that is heavily inspired by the proof given in <cit.>, but allows us to extend the algorithm to other cases more easily. This section will be structured as follows. First we will revisit the theory needed to describe interference experiments with partial distinguishable photons. Then, we will show that we can efficiently approximate transition probabilties by neglecting contributions of high-order multiphoton interference. Lastly we will show that the error induced by such an approximation on the total probability distribution is independent of the number of photons. We start by considering boson sampling with partially distinguishable photons. Previous research has demonstrated a method to compute the probability of detecting a certain output configuration s in the Fock-basis<cit.>. This expression allows for arbitrary multi-photon input states and arbitrary number-resolving photon detectors. Under the assumption of pure input states and lossless detectors which are insensitive to the internal state of the photon, it has been shown that the results of <cit.> can be rewritten in the a compact form <cit.>. The probability of measuring a particular detection outcome s is given by: P=1/∏_i r_i! s_i!∑_σ∈ S_n [ ∏^n_j=1𝒮_j,σ(j) ] perm(M ∘ M^*_σ), where M is a submatrix of the unitary representation of the interferometer U, constructed by selecting rows and columns of U corresponding to the input modes and output modes of interest (M=U_d(r),d(s)), where d(r) and d(s) represent the mode assignment lists of the input and output states respectively. Note that the size of M is determined by the number of photons n considered in the sampling task. r_i and s_i denote the i^th element of the mode occupation lists of the input state and the output state respectively. S_n denotes the symmetric group, 𝒮 denotes the distinguishability matrix where 𝒮_i,j=⟨ψ_i|ψ_j⟩, the overlap between photon i and j represented by their wave-functions ψ_i and ψ_j respectively. 
∘ represents the Hadamard product, ^* represents the element wise conjugation and M_σ represents M where its rows are permuted according to σ. Lastly the permanent of matrix M with shape n × n is defined as Perm(M) = ∑_σ∈ S_n∏_i=1^n M_i, σ(i). For the moment, we will continue by assuming that all particles are equally distinguishable, i.e. that: 𝒮_ij=x+(1-x)δ_ij. Note that the assumption made in Eq. (<ref>) is in the literature often referred to as the orthogonal bad bit model <cit.>, and is supported by experimental evidence. Using the assumption of Eq. (<ref>), we note that the quantity ∏^n_j=1𝒮_j,σ(j) will only depend on the number of fixed points of σ. (A fixed point is a point in σ the that maps to itself after the permutation, or σ(j)=j.) We can therefore rewrite Eq. (<ref>) as: P(s)=1/∏_i r_i! s_i!∑_j=0^n∑_τ∈σ_jx^j Perm(M ∘ M^*_τ). Here, σ_j denotes the set of all permutations with n-j fixed points. The term x^j introduces exponential dampening in j. For this reason, it is natural to truncate the series at some value k<n: P_k(s)=1/∏_i r_i! s_i!∑_j=0^k∑_τ∈σ_jx^j Perm(M ∘ M^*_τ), leaving an expression for the error: Q_k=1/∏_i r_i! s_i!∑_j=k+1^n∑_τ∈σ_jx^j Perm(M ∘ M^*_τ). Note that all terms in Eq. (<ref>) for a given j correspond to all contributions to the probability where j photons interfere with each other and n-j photons undergo classical transmission. Hence, by truncating Eq. (<ref>), we only consider those contributions to the probability where at most k photons interfere with each other. In the remainder of this section, we will focus on non-collisional input and output states, hence ∏_i (r_i)! (s_i)!=1. We will show two things. The first is that Eq. (<ref>) can be computed efficiently on a classical computer for arbitrary system sizes n. The second is that, under the assumption that M is filled with elements that are i.i.d. complex Gaussian, the error term in Eq. (<ref>) decreases exponentially as n increases. It decreases such that, the expectation value of the L_1-distance between the approximate distribution and the real distribution is upper bounded by the following expression: (∑_s |P(s)-P_k(s)|)<√(x^2k+2/1-x^2). This upper bound holds regardless of the system size. Note that if the number of modes is much larger than the number of photons, any n× n submatrix of a Haar-random matrix will be close in variation distance to a matrix filled with complex i.i.d. Gaussians <cit.>. We can use the first result to efficiently draw samples from the approximate distribution via a Metropolis sampler, and we use the second result to show that the distribution that is sampled from is close in variation distance to the real distribution, thus resulting in an efficient classical algorithm for imperfect boson sampling. To demonstrate that we can efficiently compute P_k from Eq. (<ref>), we use the fact that an algorithm exists for approximating the permanent of a matrix with real non-negative elements <cit.>. Additionally, we use the Laplace expansion to split up the permanents from Eq. (<ref>) into sums of the product of two permanents of smaller matrices. One of these two permanents is filled with non-negative elements. We rewrite each term in Eq. (<ref>) by Laplace expanding about the rows that correspond to the fixed points of τ: Perm (M ∘ M^*_τ)= ∑_ρ∈([ n; j ])Perm (M_I_p,ρ∘ M^*_τ_p,ρ)Perm( |M_τ_u,ρ̅|^2). Here τ_u and τ_p are the unpermuted and permuted parts of τ respectively, i.e. those parts that correspond to fixed points (cycles of length 1) and longer cycles, respectively. 
Given that τ has n-j fixed points, ρ is a j-combination of n, ρ̅ is its complementary set and I_p is the identity permutation for the elements of ρ. Eq. (<ref>) now contains two permanents. The second permanent contains a matrix with only real non-negative elements and can be efficiently approximated via the JSV algorithm <cit.>. The other permanent contains a matrix with complex elements. The size of these matrices is however determined by j, which due to our truncation has a maximum value of k. To compute Eq. (<ref>) we thus need to compute permanents of complex matrices of size j and permanents of real, non-negative matrices of size n-j. We need to do both of these calculations ([ n; j ]) times for each τ∈σ_j for all j≤ k, which results in a polynomial scaling of computational costs with n to evaluate P_k. We now continue with a derivation for Eq. (<ref>). In the main text we will give a sketch of this derivation, in appendix <ref> we will give a full derivation. It is important to note that the derivation follows the same ideas as presented in <cit.>, but differs in some key details. These differences allow us to find similar upper bounds for adjacent boson sampling experiments as will be elaborated on in the following sections. The derivation consists of the following steps: * Using Jensen's inequality, note that (|Q_k|)≤√((Q_k)) * Using the definition of the permanent, note that Q_k is a large sum where each term is described by a product containing elements of M * Using Bienamaymé's identity, note that the variance of a large sum is equal to the covariance between all pairs of terms in this large sum * M is assumed to be filled with i.i.d. complex Gaussian elements, and hence (M_ij)=0, ((M_ij)^2)=0, ((M^*_ij)^2)=0, (|M_ij|^2)=1/m and (|M_ij|^4)=2/m^2 for all i,j. * We use these properties of the elements of M to find that almost all of these correlations are equal to zero. * We use simple combinatorics to count the number of covariances that contribute the same non-zero amount to (Q_k) * We find that the error term approximates a truncated geometric series * We have now found an approximate expression for (Q_k)≈(n!)^2/m^nx^2k+2-x^2n+2/1-x^2 and as a result an upper bound for (|Q_k|) for a typical non-collisional output configuration * By counting the number of non-collisional output configurations we find the upper bound on L_1-distance of interest as presented in inequality <ref> To conclude this section, we have revisited a know algorithm as described in <cit.><cit.>. The algorithm approximates transition probabilities in a boson sampling experiment by neglecting high order interference contributions to the transition probability. We have demonstrated that these approximated transition probabilities are efficiently computable. Moreover we have demonstrated that, under the assumption that all photons are equally distinguishable, the L_1-distance over all non-colisional outputs between the approximated distribution and the real distribution is upper bounded as demonstrated in inequality <ref>. Notably, this upper bound is independent of the system size of the boson sampling experiment. In the following section we will relax this assumption. § GENERAL PARTIAL DISTINGUISHABILITY In the previous section, we have sketched an efficient classical algorithm for boson sampling with partially distinguishable photons, with full details given in Appendix A. 
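Before relaxing the equal-distinguishability assumption, the truncation described in the previous section can be sanity-checked by brute force for small photon numbers. The sketch below (our own illustration; it is not the polynomial-time algorithm, which would combine the Laplace expansion with a permanent approximation such as JSV, and the toy sizes and the value of x are arbitrary) evaluates P_k by summing over permutations grouped by their number of non-fixed points and shows convergence to the exact probability as k grows.

```python
import numpy as np
from itertools import permutations, combinations
from scipy.stats import unitary_group

def permanent(A):
    # Ryser's formula; exponential time, adequate for the toy sizes used here.
    n = A.shape[0]
    tot = 0.0 + 0.0j
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            tot += (-1) ** r * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * tot

def prob_truncated(M, x, k):
    """Keep interference terms with at most k non-fixed points (j <= k)."""
    n = M.shape[0]
    p = 0.0
    for tau in permutations(range(n)):
        j = sum(1 for i in range(n) if tau[i] != i)   # non-fixed points of tau
        if j <= k:
            # imaginary parts cancel between tau and its inverse, so
            # accumulating real parts gives the same total
            p += x ** j * permanent(M * np.conj(M[list(tau), :])).real
    return p

n, m, x = 4, 12, 0.6                        # toy sizes; x = pairwise overlap
U = unitary_group.rvs(m, random_state=2)
M = U[np.ix_(range(n), range(n))]           # first n input and output modes
p_exact = prob_truncated(M, x, n)           # k = n keeps every term
for k in range(n + 1):
    print(k, round(prob_truncated(M, x, k), 6), round(p_exact, 6))
```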
To show that the approximate distribution we can efficiently sample from is close to the real distribution it was assumed that all particles are equally distinguishable, see Eq. (<ref>). However, realistic single-photon sources do not adhere to the assumption that all photons are equally imperfect. The algorithm as proposed in <cit.> circumvents this problem by computing the upper bound in Eq. (<ref>) as if all photons are equally imperfect and as good as the best photon pair present. This way an upper bound can be found in general, but depending on the variations in the quality of the particles, this bound may be very loose. Here, we relax the assumption made in Eq. (<ref>). Although we are not able to tighten this upper bound in general, we show that for two experimentally relevant generalizations, we are able to tighten this bound. We will start by noting that without the assumption made in Eq. (<ref>), we can still efficiently evaluate our truncated probability. For general, pure partially distinguishable photons, Eq. (<ref>) reads: P_k=1/∏_i r_i! s_i!∑_j=0^k∑_τ∈σ_j(∏^n_i=1𝒮_i,τ(i)) Perm(M ∘ M^*_τ). If we compare Eq. (<ref>) with Eq. (<ref>), we notice that the only difference is that x^j is substituted with ∏^n_i=1𝒮_i,τ(i). It takes a multiplication of j factors to compute ∏^n_i=1𝒮_i,τ(i) and hence Eq. (<ref>) is still efficiently computable on a classical computer. We will continue to derive an upper bound for the L_1-distance between the approximate and the real distribution. For general, pure partially distinguishable photons, Eq. (<ref>) becomes Q_k=1/∏_i r_i! s_i!∑_j=k+1^n∑_τ∈σ_j(∏^n_i=1𝒮_i,τ(i)) Perm(M ∘ M^*_τ). We note that again the only difference between Eq. (<ref>) and Eq. (<ref>) is that x^j is substituted with ∏^n_i=1𝒮_i,τ(i). We also note that ∏^n_i=1𝒮_i,τ(i) is completely independent of all elements in M. After all, the choice for the interferometer is completely independent of the quality of the photons used in the experiment. We are considering the expectation value of |Q_k| over the ensemble of Haar-unitaries, we thus note that (∏^n_i=1𝒮_i,τ(i))=∏^n_i=1𝒮_i,τ(i). As a result, all steps in appendix <ref> are valid up until Eq. (<ref>). We continue by directly substituting ∏^n_i=1𝒮_i,τ(i) for x^j in Eq. (<ref>) to find (Q_k)= ∑_j=k+1^n ∑_τ∈σ_j(∏^n_i=1|𝒮_i,τ(i)|^2)∑_ρ∈S_n∑^n-j_p=0R_n-j,p2^p(1/m^2)^n. We note that ∑^n-j_p=0R_n-j,p2^p(1/m^2)^n is independent of τ. We would like to find an expression (or an upper bound) for ∏^n_i=1|𝒮_i,τ(i)|^2 that is independent of τ, because that could allow us to simplify Eq. (<ref>) further by recognizing a truncated geometric series. We note that for a general overlap matrix 𝒮, we can simplify Eq. (<ref>) by realizing that ∏^n_i=1|𝒮_i,τ(i)|^2≤max(|𝒮_ij|)^2j, but this approach has already been mentioned in <cit.>. In the remainder of this section, we will discuss two different experimentally relevant cases for which we are able to tighten the upper bound for the L_1-distance. §.§ Independent and identically distributed orthogonal bad bits First, we will consider the internal modes of the i^th photon to be: |ψ⟩ = √(x_i)|Ψ_0⟩ + √(1-x_i)|Ψ_i⟩, where all x_i are independent and identically distributed variables and ⟨Ψ_i|Ψ_j⟩=δ_ij. This model is a variation on the orthogonal bad bit model<cit.> where we allow for deviations in the quality of the photons. 
We argue that the first case we consider is experimentally relevant, because upon manufacturing single-photon sources, the target indistinguishability is most likely the same for all sources but deviations in the manufacturing process result in fluctuations in the quality of the individual sources. Hence, we can reasonably expect that the quality of the photons are independent and randomly sampled from the same distribution. With this assumption, we can simplify Eq. (<ref>). The overlap matrix 𝒮 then becomes: 𝒮_ij = 1 for i=j √(x_i)√(x_j)^* for i≠ j Every permutation τ can uniquely be described with its cycle notation, from the cycle notation it becomes clear that (∏^n_r=1|𝒮_r,τ(r)|^2)=∏_i∈nonfix(τ)(|x_i|^2). Since all x_i follow the same distribution, the expression in Eq. (<ref>) is independent of τ∈σ_j, and only depends on the number of fixed points of τ. Hence we can simplify Eq. (<ref>), following the same arguments as presented in Eqs. (<ref>), (<ref>) and (<ref>): (Q_k)= ∑_j=k+1^n (|x_i|^2)^j R_n,n-j n!∑^n-j_p=0R_n-j,p2^p(1/m^2)^n ≈∑_j=k+1^n (|x_i|^2)^j (n!)^2/m^2n. Where we note the Gaussian distribution of the elements of M, and in particular the fact that E(M_ij) = 0, implies that all cross terms between the variance of functions of S and functions of M cancel (see Eq. (<ref>)). Since (|x_i|^2) ≤ 1 for |ψ_i⟩ to be normalized, we can thus recognize a truncated geometric series again to find an upper bound on the trace distance between our approximated distribution and the real distribution: (∑_s|P(s)-P_k(s)|) =(∑_s|Q_k(s)|) <√(( |x_i|^2)^k+1/1-( |x_i|^2)) We find that Eq. (<ref>) is equal to <ref>, with the substitution of (|x_i|^2) for x^2. Conveniently, |x|^2 is the visibility of a Hong-Ou-Mandel interference experiment <cit.>. For independent sources, the complexity of boson sampling is therefore governed by the average of the HOM visibilities. This result improves on our earlier work, which could only upper bound the complexity of boson sampling on independent sources by the maximum of their visibilities. For the specific case of Gaussian i.i.d. distinguishabilities, we note that we can evaluate (x_i^2) as the expectation value of a non-central chi-squared distribution with one degree of freedom and (x_i^2)=μ^2+σ^2. Figure <ref> shows the effect of this tighter bound on the regime of the parameter space which can be efficiently simulated. We assume max(x_i)=μ+2 σ, motivated by the observation that the probability is then about 1/2 that max(x_i)<μ+2σ in the case of 30 photons already, and decreasing further if n increases. The areas of the parameter space in Fig <ref> which are in between the solid and dotted lines are the areas of the parameter space which could not be simulated before. §.§ Few indistinguishable photons For the second case, we consider the internal modes of the i^th photon to be: |ψ_i⟩ = |Ψ_0⟩ for i≤ p √(x_i)|Ψ_0⟩ +√(1-x_i)|Ψ_i⟩ for i>p Here, again all x_i are independent variables according to the same distribution and ⟨Ψ_i|Ψ_j⟩=δ_ij.The overlap matrix 𝒮 then becomes: 𝒮_ij = 1 for i=j and i,j≤ p √(x_i)√(x_j)^* for (i≠ j and i,j>p) √(x_i) for (i>p and j≤ p) √(x_j)^* for (i≤ p and j> p) We argue that this second case is experimentally relevant, because according to the algorithm as presented in section <ref> it was a seemingly good strategy to spend a lot of resources to make a subset of all your single photons as good as possible, while neglecting the others. 
With this overlap matrix in which p photons are indistinguishable and the others are independently sampled from the same distribution, we can again simplify Eq. (<ref>). We do this by realizing that ((∏^n_r=1|𝒮_r,τ(r)|^2))≤(|x_i|^2)^max(j-p,0). We can then simplify Eq. (<ref>), following the same arguments as presented in Eqs. (<ref>), (<ref>) and (<ref>): (Q_k)< ∑_j=k+1^n (|x_i|^2)^max(j-p,0)R_n,n-j n!∑^n-j_p=0R_n-j,p2^p(1/m^2)^n ≈∑_j=k+1^n (|x_i|^2)^max(j-p,0)(n!)^2/m^2n. If we now consider k+1>p and if we again realize that this is now a truncated geometric series, we van find an upper bound for the expectation value of the L_1-distance. (∑_s|P(s)-P_k(s)|) ≤ [ m; n ]√(∑^n_j=k+1(|x_i|^2)^j-pn!^2/m^2n) < √(∑^n_j=k-p+1(|x_i|^2)^j) < √(( |x_i|^2)^k-p+1/1-(|x_i|^2)) Let's look at Eq. (<ref>). Adding p perfectly indistinguishable photons can be negated by truncating at k'=k+p instead of k. § DISCUSSION AND CONCLUSION We have extended classical simulation techniques for noisy boson sampling. These new classical simulation techniques push the boundaries for the required qualities of the resources needed for an optical quantum computer. Our results strengthen the intuition that the fragility of computational complexity to noise is itself a robust phenomenon, that does not depend on the particular details of how that noise is modeled. Future work will focus on including other inhomogeneous noise sources into this model, such as unbalanced optical loss. § FORMAL DERIVATION FOR THE UPPER BOUND ON THE L_1-DISTANCE In this section we will give a formal derivation of the upper bound as presented in Eq. (<ref>). We will follow the steps as presented in the main text. We start by realizing that Q_k from Eq. (<ref>) is real, we know |Q_k|=√(Q_k^2). Jensen's inequality for a concave function <cit.> yields (|Q_k|)=(√(Q_k^2))≤√((Q_k^2)) We note ((M∘ M_τ^*)) = (∑_ρ∈ S_n∏_i=1^n(M∘ M_τ^*)_i,ρ(i)) = ∑_ρ∈ S_n(∏_i=1^n(M∘ M_τ^*)_i,ρ(i)) , where we used the definition of the permanent and the linearity of the expectation value. We are interested in the situation where our truncation parameter k is larger than zero, and as a result, all permutations τ that we consider have a nonzero amount of points which are not fixed. In other words: ∃ q s.t. τ(q)≠ q ∀ τ and Eq. (<ref>) can be written as ∑_ρ∈ S_n(∏_i=1^n(M∘ M_τ^*)_i,ρ(i)) = ∑_ρ∈ S_n( ∏_i=1 i≠ q^n(M∘ M_τ^*)_i,ρ(i))(M_q,ρ(q)) (M_τ(q),ρ(q)^*) = 0. Here we used the fact that (X Y)=(X)(Y) if X and Y are independent variables and we note that M_q,ρ(q) and M^*_τ(q),ρ(q) are independent of all other factors. We further use that the expectation of our i.i.d. complex Gaussian elements is zero, (M_ij)=0 for all ij. If we again use the linearity of the expectation value, we find that (Q_k)=( ∑_j=k+1^n∑_τ∈σ_jx^j Perm(M ∘ M^*_τ))=0, hence (|Q_k|) ≤√((Q_k^2)-(Q_k)^2) = √((Q_k)). The square root of the variance thus provides an upper bound for Q_k. We derive an expression for (Q_k) in the remainder of this section. Using Bienaymé's identity <cit.> we find (Q_k) =(∑_j=k+1^n∑_τ∈σ_jx^j (M∘ M_τ^*)) =(∑_j=k+1^n∑_τ∈σ_jx^j ∑_ρ∈S_n∏_r=1^n(M∘ M_τ^*)_r,ρ(r)) =(∑_j=k+1^n∑_τ∈σ_jx^j ∑_ρ∈S_n∏_r=1^n(M_r,ρ(r) M_τ(r),ρ(r)^*)) =∑_j=k+1^n∑_j'=k+1^n∑_τ∈σ_j∑_τ' ∈σ_j'x^jx^j'∑_ρ∈S_n∑_ρ'∈S_n… (∏_r=1^n(M_r,ρ(r) M_τ(r),ρ(r)^*),∏_r=1^n(M_r,ρ'(r) M_τ'(r),ρ'(r)^*)). 
We use the definition of the covariance between two complex random variables to find (∏_r=1^nM_r,ρ(r) M_τ(r),ρ(r)^*,∏_r=1^nM_r,ρ'(r) M_τ'(r),ρ'(r)^*) =(∏_r=1^n M_r,ρ(r) M_τ(r),ρ(r)^* M^*_r,ρ'(r) M_τ'(r),ρ'(r))… -(∏_r=1^nM_r,ρ(r) M_τ(r),ρ(r)^*)× (∏_r=1^nM^*_r,ρ'(r) M_τ'(r),ρ'(r)) If we focus on the second term in Eq. (<ref>), we realize that again, for all τ that we will consider, this term evaluates to zero for the same reasons as given in Eqs. (<ref>) and (<ref>). Then Eq. (<ref>) reduces to (∏_r=1^nM_r,ρ(r) M_τ(r),ρ(r)^*,∏_r=1^nM_r,ρ'(r) M_τ'(r),ρ'(r)^*) =(∏_r=1^nM_r,ρ(r) M_τ(r),ρ(r)^*M^*_r,ρ'(r) M_τ'(r),ρ'(r)). We will continue to show that the expression in Eq. (<ref>) is equal to zero for almost all of the combinations of τ, ρ, τ' and ρ'. In this demonstration, it is crucial to assume that all elements of our submatrix M are i.i.d. complex Gauassian, or: M_ij∼𝒞𝒩(0,1/m) ∀ i,j From Eq. (<ref>) it follows that * (M_ij)=0 * ((M_ij)^2)=((M^*_ij)^2)=0 * (|M_ij|^2)=(χ^2_2)/2m=1/m * (|M_ij|^4)=2/m^2 If we inspect the equations listed above, we note that Eq. (<ref>) will only evaluate to a nonzero amount when for all r one of the following conditions is true * r=τ(r) and r=τ'(r) * τ(r)=τ'(r) and ρ(r)=ρ'(r) We note that the first condition is true for all r∈fix(τ), if τ and τ' share the same fixed points. For all non-fixed points, condition 2 must thus be true and we conclude that only if τ=τ' and ρ(q)=ρ'(q) ∀ q∈nonfix(τ), Eq. (<ref>) will evaluate to a non-zero value. Here fix(τ) and nonfix(τ) denote the set of all fixed points of the permutation τ and its complementary set respectively. Then, Eq.(<ref>) reduces to: (Q_k)=∑_j=k+1^n∑_τ∈σ_jx^2j∑_ρ∈S_n∑_ρ'∈S_n ρ(q)=ρ'(q) ∀ q∈nonfix(τ)⋯ (∏_r=1^n(M_r,ρ(r) M_τ(r),ρ(r)^*)(M^*_r,ρ'(r) M_τ(r),ρ'(r))). We now split up the product. (Q_k)=∑_j=k+1^n∑_τ∈σ_jx^2j∑_ρ∈S_n∑_ρ'∈S_n ρ(q)=ρ'(q) ∀ q∈nonfix(τ)⋯ (∏_r∈fix(τ)M_r,ρ(r) M_τ(r),ρ(r)^*M^*_r,ρ'(r) M_τ(r),ρ'(r))× (∏_q∈nonfix(τ)M_q,ρ(q) M_τ(q),ρ(q)^*M^*_q,ρ'(r) M_τ(q),ρ'(q)) which, if we use that τ(r)=r ∀ r∈fix(τ) and ρ(q)=ρ'(q) ∀ q∈nonfix(τ), reduces to (Q_k)=∑_j=k+1^n∑_τ∈σ_jx^2j∑_ρ∈S_n∑_ρ'∈S_n ρ(q)=ρ'(q) ∀ q∈nonfix(τ)⋯ (∏_r∈fix(τ)|M_r,ρ(r)|^2 |M_r,ρ'(r)|^2 )× (∏_q∈nonfix(τ)|M_q,ρ(q)|^2 |M_τ(q),ρ(q)|^2). If we now realize that M_q,ρ(q) is independent of M_τ(q),ρ(q) for all q∈nonfix(τ), M_r,ρ(r) is equal to M_r,ρ'(r) if ρ'(r)=ρ(r) and independent otherwise, (|M_ij|^2)=1/m and (|M_ij|^4)=2/m^2, Eq. (<ref>) reduces to (Q_k)= ∑_j=k+1^n ∑_τ∈σ_jx^2j∑_ρ∈S_n∑^n-j_p=0⋯ R_n-j,p(1/m^2)^j(2/m^2)^p(1/m^2)^n-j-p =∑_j=k+1^n x^2jR_n,n-j n!∑^n-j_p=0⋯ R_n-j,p(1/m^2)^j(2/m^2)^p(1/m^2)^n-j-p =∑_j=k+1^n x^2jR_n,n-j n!∑^n-j_p=0R_n-j,p2^p(1/m^2)^n. Here R_n,k is Rencontre's number that counts the number of ways one can permute the set {1,⋯, n} with k fixed points. R_n,k=n!/k!∑_q=0^n-k(-1)^q/q!, and Eq. (<ref>) becomes: (Q_k)= ∑_j=k+1^n x^2j(n!)^2/m^2n∑_q=0^j(-1)^q/q!∑^n-j_p=02^p/p!∑_r=0^n-j-p(-1)^r/r!. Now ∑_q=0^j(-1)^q/q!≈1/e, ∑^n-j_p=02^p/p!≈ e^2 and ∑_r=0^n-j-p(-1)^r/r!≈1/e when j, n-j-p and n-j are large respectively. And thus ∑_q=0^j(-1)^q/q!∑^n-j_p=02^p/p!∑_r=0^n-j-p(-1)^r/r!≈ 1. Note that for j=4, n-j=5 and n-j-p=4 these approximations already have errors below 2%. Hence, (Q_k)≈∑_j=k+1^n x^2j(n!)^2/m^2n. We realize that Eq. (<ref>) describes a truncated geometric series. (Q_k) ≈(n!)^2/m^2nx^2k+2-x^2n+2/1-x^2 <(n!)^2/m^2nx^2k+2/1-x^2 Finally, we use Eq. (<ref>) to find an upper bound for the expectation value of the L_1-distance between the approximate distribution and the real distribution over the Haar-unitaries. 
In the following expression, the sum with index s runs over all non-collisional outputs. (∑_s|P(s)-P_k(s)|) =(∑_s|Q_k(s)|) =∑_s(|Q_k(s)|) <[ m; n ]n!/m^n√(x^2k+2/1-x^2) <√(x^2k+2/1-x^2)
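To relate these bounds to concrete numbers, the short sketch below (our own illustration; the target error eps and the visibility values are arbitrary choices) inverts the bound sqrt(v_mean**(k+1) / (1 - v_mean)) < eps, where v_mean denotes the average Hong-Ou-Mandel visibility (|x_i|^2), to obtain the smallest admissible truncation order k, and applies the k -> k + p shift required when p photons are fully indistinguishable.

```python
import math

def truncation_order(v_mean, eps, p_perfect=0):
    """Smallest k with sqrt(v_mean**(k+1) / (1 - v_mean)) < eps,
    shifted by p_perfect fully indistinguishable photons (k -> k + p)."""
    k = math.ceil(math.log(eps ** 2 * (1.0 - v_mean)) / math.log(v_mean) - 1.0)
    return max(k, 0) + p_perfect

for v_mean in (0.90, 0.95, 0.99):
    print(v_mean,
          truncation_order(v_mean, eps=0.1),
          truncation_order(v_mean, eps=0.1, p_perfect=5))
```

The required order grows only with the average visibility and the number of perfect photons, not with the total photon number, which is the content of the size-independent bound derived above.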
http://arxiv.org/abs/2406.19196v1
20240627141529
Existence of solution of a triangular degenerate reaction-diffusion system
[ "Saumyajit Das IIT Bombay" ]
math.AP
[ "math.AP" ]
Existence of solution of a triangular degenerate reaction-diffusion system Saumyajit Das § ABSTRACT In this article we study a chemical reaction-diffusion system with m unknown concentrations. The non-linearity in our study comes from a particular chemical reaction where one unit of a particular species is generated from the other m-1 species and disintegrates back into those m-1 species in the same manner, i.e., the reaction is triangular in nature. Our objective is to determine whether a global in time solution exists for this system when one or more species stops diffusing. In particular, we are able to show that a classical global in time solution exists for all the degenerate cases in any dimension except one, and that this remaining case also attains a classical global in time solution up to dimension 2, although a weak global in time solution exists for all the degenerate cases in any dimension. We also analyse the global in time existence result for the case of quadratic non-linear rate functions, as well as a special three-dimensional case. § INTRODUCTION Reaction-diffusion systems, originating from chemical kinetics, govern the evolution of the concentrations of species in time at various spatial locations. The nature of such systems is that the species are simultaneously diffusing and undergoing chemical reactions which are reversible in nature. In this article we consider an m-species triangular reaction-diffusion system in which, in a domain, the m species X_1,⋯,X_m undergo the following reversible reaction: α_1 X_1+α_2X_2+⋯+α_m-1X_m-1⇌ X_m The spatial domain is taken to be a bounded domain Ω⊂ℝ^N with C^2+ν boundary. The unknown a_i:[0,T)×Ω→ℝ represents the concentration of species X_i, for all i=1,⋯,m. The reaction-diffusion system is given by the following system of partial differential equations {∂_t a_i - d_i Δ a_i= a_m-∏_j=1^m-1(a_j)^α_j (0,T)×Ω ∂_t a_m-d_mΔ a_m= ∏_j=1^m-1(a_j)^α_j-a_m (0,T)×Ω n.∇_xa_i= 0 (0,T)×∂Ω a_i(0,x)= a_i,0(∈ C^∞(Ω))≥ 0, . where n(x) denotes the outward unit normal to Ω at the point x∈∂Ω. Furthermore, α_i is the stoichiometric coefficient corresponding to the species X_i, generally taken to be a non-negative integer, and the diffusion coefficients d_i are taken to be non-negative real numbers. A reaction-diffusion system is called a triangular system if there exists some positive lower triangular matrix whose action on the source terms makes them linearly dominated by the unknowns. The rate functions corresponding to our system are { f_i = a_m-∏_j=1^m-1(a_j)^α_j ∀ i=1,⋯,m-1 f_m= -f_1. . Consider the positive lower triangular matrix P=(p_ij)_m× m= { 1 i=j, i∈{1,⋯,m } 1 j=i-1, i∈{2,⋯,m} 0 otherwise. . The action of this matrix on the rate function vector (f_1,⋯,f_m) is the following product P [ f_1; ·; ·; ·; f_m ]≤[1+∑_1^ma_i] [ 1; 2; ·; ·; 2; 0 ], which makes our system a triangular one. When all the diffusion coefficients are strictly positive, i.e., d_i>0 for all i=1,⋯,m (from here onwards we call this the non-degenerate setting), well-posedness of this triangular system and global in time existence of classical solutions are established in the articles <cit.> <cit.>. In the article <cit.>, the authors first analyze the simplest three-species triangular reaction-diffusion system where one or more species stops diffusing (from here onwards we call this the degenerate setting).
The authors consider the following reversible reaction: X_1+X_2 ⇌ X_3, where the concentrations a_1,a_2,a_3 satisfy the following system {∂_t a_1 -d_1 Δ a_1 = a_3-a_1a_2 (0,T)×Ω ∂_t a_2 -d_2 Δ a_2 = a_3-a_1a_2 (0,T)×Ω ∂_t a_3 -d_3 Δ a_3 = a_1a_2-a_3 (0,T)×Ω n.∇_x a_i= 0 (0,T)×∂Ω, ∀ i=1,2,3 a_i(0,x)= a_i,0(∈ C^∞(Ω))≥ 0 ∀ i=1,2,3. . There are the following kinds of degeneracies: * d_1=0, d_2,d_3>0 or d_2=0,d_1,d_3>0, * d_3=0,d_1,d_2>0, * d_1=d_2=0, d_3>0, * d_1=d_3=0, d_2>0 or d_2=d_3=0, d_1>0. In the article <cit.>, it has been shown that, except for the case d_3=0,d_1,d_2>0, the system always has a smooth global in time solution. For the degenerate case d_3=0,d_1,d_2>0 one can find a global classical solution up to dimension 3, although a global in time weak L^1((0,T)×Ω) solution exists in any dimension. This requires various duality arguments as described in <cit.> <cit.><cit.><cit.>. The following two results, one of which ensures positivity of the solution while the other formalizes the maximal time interval on which a classical solution can be found, are extremely useful. [Positivity of solution] Let u_i:(0,T)×Ω→ℝ satisfy the following equation (the weak sense is also included) with d_i≥ 0 {∂_t u_i- d_i Δ u_i= f_i(u_1,⋯,u_m) (0,T)×Ω, ∀ i=1,⋯,m n.∇_x u_i = 0 (0,T)×∂Ω u_i(0,x) ∈ C^∞(Ω)≥ 0. . Furthermore, let f=(f_1,⋯,f_m):ℝ^m→ℝ^m be quasi-positive, i.e., f_i(r_1,r_2,⋯,r_i-1,0,r_i+1,⋯,r_m) ≥ 0, ∀ (r_1,r_2,⋯,r_m)∈ [0,+∞)^m. Then the solution remains non-negative, i.e., u_i ≥ 0, ∀ i=1,⋯,m. Consider the triangular reaction-diffusion equation (<ref>). The rate functions corresponding to this system satisfy the above quasi-positivity condition. Hence, if a solution exists then it will be non-negative provided the initial condition is non-negative <cit.><cit.><cit.>. [Maximal interval of solution] Let u :(0,T)×Ω→ℝ satisfy the following equation (the weak sense is also included) with d≥ 0 and f∈ C^1(ℝ): {∂_t u- d Δ u= f(u) (0,T)×Ω n.∇_x u = 0 (0,T)×∂Ω u(0,x) ∈ C^∞ (Ω). . If T_max denotes the maximal time of existence and T_max<+∞, then lim_t↑ T_maxsup_Ω| u| = +∞; conversely, if lim_t↑ T_maxsup_Ω| u| < +∞ then the solution is global. This indicates that, to show global in time existence, it is enough to show that the space-time supremum norm of each species remains bounded up to any finite time, i.e., showing lim_t↑ Tsup_Ω| u| < +∞ is enough. A proof of this theorem can be found in the article <cit.>. This idea is followed widely to show global in time existence of solutions <cit.><cit.><cit.><cit.><cit.>. In particular, global in time smooth solutions for the non-degenerate triangular system were obtained in the article <cit.>. A similar result for the three-species degenerate model was shown in the article <cit.>. In this article we will discuss global in time existence of positive solutions of various degenerate triangular reaction-diffusion equations. We will show that for d_m>0, or for d_m=0 along with some other d_i=0, system (<ref>) always has a global in time smooth solution, with the help of various duality estimates and parabolic regularity. Furthermore, for the degeneracy d_m=0 with all other diffusion coefficients strictly positive, we are able to show that a global in time weak (H^1(Ω_T))^m-1× L^1(Ω_T) solution exists in any dimension. However, for this particular degeneracy we will show that a global in time classical solution exists up to dimension 2. We further discuss two special cases, one concerning a quadratic bound on the rate function and the other a special 3-dimensional case. Many of our ideas are borrowed from <cit.><cit.><cit.><cit.> <cit.> <cit.>. We divide all the triangular degenerate cases into three different classes: System-A_1: d_m>0,d_i=0; for some i≠m.
System-A_2: d_m=d_i=0, where i≠m. System-A_3: d_m=0, d_i>0; ∀i=1,⋯,m-1. Furthermore we divide the diffusion coefficients into two different index classes basically one index class corresponds to diffusing species i.e., where the diffusion coefficient is non-zero and other corresponds to the non-diffusing species i.e., where the diffusion coefficient is zero. We denote it by the following notation Λ_1= {i: d_i=0}, Λ_2={i:d_i>0}. Some Further Notation: * Ω_τ= (0,τ)×Ω, ∀ 0<τ≤ T. * α_m=1. For all these three different classes of degenerate systems we first introduce one common approximate system where our rate functions will be smooth Lipschitz and carries same quasi-positive character. Hence for the approximate system global in time smooth solution exists provided the initial condition is smooth and positive<cit.><cit.>. The approximate system is described below { ∂_t a^n_i - d_i Δ a^n_i= a^n_m-∏_j=1^m-1(a^n_j)^α_j/ϕ^n (0,T)×Ω, ∀ i=1,⋯,m-1 ∂_t a^n_m-d_mΔ a_m^n= ∏_j=1^m-1(a^n_j)^α_j-a^n_m/ϕ^n (0,T)×Ω n.∇_xa^n_i= 0 (0,T)×∂Ω a^n_i(0,x)= a_i,0(∈ C^∞(Ω))≥ 0, . where ϕ^n:=1+1/n( ∑_i=1^ma^n_i)^Q+2 ∀ n∈ℕ, Q=1+∑_i=1^m-1α_i. We obtain a solution of the degenerate triangular reaction-diffusion system (<ref>) as a weak limit of the solutions of the approximate system. We use various duality estimates motivated from the article <cit.>. The key steps are described below. Step 1: We extract a weak solution for those species which are diffusing i.e., non-degenerate species, which is the weak limit of the approximate solutions thanks to property of heat kernel and Dunford-Pettis' theorem. Step 2: We will find some uniform norm bound of the solutions of a particular species of the approximate system in some higher L^p space where the bound will be uniform with respect to the index n. Step 3: By duality we will find uniform norm bound of the solutions of all species of the approximate system in same higher L^p space where the bound will be uniform with respect to the index n. Step 4: Using integrability estimate [see appendix (<ref>)] we will arrive at a uniform L^∞ time-space bound for all the species where the bound is uniform with respect to the index n. In the light of "maximal interval of solution" theorem, this estimate will establish global in time existence of solution. Step 5: The approximate system corresponds to a degenerate species satisfies a sequence of o.d.e. instead of p.d.e. As the unknown also presents in rate functions, by Picard type iteration and Vitali convergence lemma we can extract a weak solution for the ordinary differential equation corresponding to degenerate species. To obtain this weak limit from the sequence of o.d.e., we need some condition on stochiometric coefficients. The particular condition is stated below [Stochiometric Condition(SC)] {α_i∈ [1,∞), ∀ i∈Λ_1, ∃ α_j∈ {1}∪[2,∞), for some j∈Λ_1. . Note: For the degenerate class system-A_3(<ref>) the stochiometric condition(<ref>) is automatically satisfied. So we don't need this condition for any result related to system-A_3. The above mentioned iterative argument can be found in the article <cit.>, where the authors analyze the existence result of a particular four species degenerate reaction-diffusion system. Next we states our main results. Under condition SC(<ref>), there exists unique positive global in time classical solution in any dimension for system-A_1(<ref>). Under condition SC(<ref>), there exists unique positive global in time classical solution in any dimension for system-A_2(<ref>). 
There exists positive global in time weak (H^1(Ω_T))^m-1× L^1(Ω_T) solution for system-A_3(<ref>), in any dimension. However unique positive global in time classical solution exists for system-A_3(<ref>) in dimension 1 and 2. § EXISTENCE OF SOLUTION OF DEGENERATE TRIANGULAR SYSTEM-A_1 Our first goal is to extract a weak convergent subsequence of non-degenerate species from the solution of approximate system. Rate functions associated with each of the approximate system(<ref>) is globally Lipschitz with positive smooth initial condition. Hence we have positive smooth global in time solution a_i^n:Ω_T→ℝ, for each of the approximate system. Positivity comes from quasi-positive nature of each of the approximate system <cit.> <cit.><cit.>. Let's define a functional based on the solutions of the approximate system E(a_i^n:i=1,2⋯,m)=∫_Ω∑_1^m α_i(a_i^n(lna_i^n-1)+1). Differentiating with respect to time and using the rate functions of approximate system(<ref>), we have the following estimate(Notice E(·) is positive): sup_t∈[0,T]∫_Ω ∑_1^m α_i(a_i^n(lna_i^n-1)+1)+ ∫_Ω_T∑_1^m α_i d_i |∇a_i^n |^2/a_i^n +∫_Ω_T (a^n_m-∏_1^m-1 (a^n_j)^α_j)/ϕ^nlna^n_m/∏_1^m-1 (a^n_j)^α_j≤E(a_i,0;i=1,2⋯,m). ∃ M_2=max_i∈{1,⋯,m}{α_i}e^2|Ω|+ max_i∈{1,⋯,m}{α_i}/min_i∈{1,⋯,m}{α_i}E(a_i,0;i=1,2⋯,m)>0, such that, {∫_Ω∑_1^m a^n_i ≤ M_2. ∫_Ω∑_1^m | a_i^nlna_i^n|≤ M_2+E(a_i,0;i=1,2⋯,m) )+ e^-1|Ω|>0. . The above condition (<ref>) implies uniform integrability of ∑_1^m a_i^n, ∀ n∈ℕ. Using the relation for κ>1: ∏_1^m-1 (a^n_j)^α_j/ϕ^n≤κa^n_m/ϕ^n+1/lnκ(a^n_m-∏_1^m-1 (a^n_j)^α_j)/ϕ^nln(a^n_m/∏_1^m-1 (a^n_j)^α_j), we have, {_Ω_T∏_1^m-1 (a^n_j)^α_j/ϕ^n≤ 2TM_2+1/ln2E(a_i,0;i=1,2⋯,m) . {∏_1^m-1 (a^n_j)^α_j/ϕ^n}|_n∈ℕis uniformly integrable. . Define g^n=(a^n_m-∏_1^m-1 (a^n_j)^α_j)/ϕ^n, we have, {‖ g^n ‖_L^1(Ω_T)≤ M_2T+2TM_2+1/ln2E(a_i,0;i=1,2⋯,m). {g^n}|_n∈ℕ is uniformly integrable. . By Dunford Petties' theorem g^n is weakly compact and from the properties of heat kernel ∃ a_i, such that, a_i^n converges to a_i a.e in [0,T)×Ω, ∀ i∈Λ_2. Hence a_i^n → a_i in L^1(Ω_T); ∀ i∈Λ_2 (by Vitali convergence lemma). Next we are moving to show uniform in index n, L^p(Ω_T) bound on the approximate solution. We start by the following proposition: For all T>0, there exists a positive constant C, independent of the index 'n', such that, ‖ a^n_i ‖_L^∞(Ω_T)< C, ∀ i=1,⋯,m. proof: We have the following identity for i ∈Λ_1, -∂_t a^n_i =∂_t a^n_m -d_m Δ a^n_m, n.∇_xa^n_i=n.∇_xa^n_m=0. Consider Θ≥ 0 ∈ C_c^∞(Ω_τ)(space of all compactly supported smooth function),satisfies(for a fixed 0<τ<T): -[∂_t ϕ+d_m Δϕ]= Θ Ω_τ n.∇_x ϕ= 0 (0,τ)×∂Ω ϕ(τ)= 0 Ω. We have ϕ≥ 0 and the following estimates for a constant C_q,T,d_m>0( q∈(1,∞), arbitrary), <cit.><cit.><cit.> ‖ϕ_t ‖_L^q(Ω_τ)+‖Δϕ‖_L^q(Ω_τ)+sup_s∈[0,τ]‖ϕ(s)‖_L^q(Ω)+‖ϕ‖_L^q((0,τ)×∂Ω)≤ C_q,T,d_m‖Θ‖_L^q(Ω_τ). Multiply equation (<ref>) by ϕ and integration by parts yields, ∫_Ω_τa^n_m Θ= ∫_Ω(a_m,0+a_i,0)ϕ(0)+∫_Ω_τa^n_i ∂_t ϕ. Let p∈(1,∞) be the Hólder conjugate of q, ∫_Ω_τa^n_m Θ≤( ‖ a_m,0‖_L^p( Ω)+‖ a_i,0‖_L^p( Ω))‖ϕ_0‖_L^q( Ω)+‖ a_i^n‖_L^p( Ω_τ)‖∂_tϕ‖_L^q( Ω_τ). Let C_1=C_q,T,d_mmax{1,‖ a_m,0‖_L^p( Ω)+‖ a_i,0‖_L^p(Ω)}, then, ∫_Ω_τa^n_m Θ≤ C_1(1+‖ a^n_i ‖_L^p(Ω_τ)) ‖Θ‖_L^q(Ω_τ). Duality yields: ‖ a^n_m ‖_L^p(Ω_τ)≤ C_1(1+‖ a^n_i ‖_L^p(Ω_τ)) ∀ τ∈(0,T] Replacing τ by t, we have: ‖ a^n_m ‖_L^p(Ω_t)≤ C_1(1+‖ a^n_i ‖_L^p(Ω_t)) ∀ t ∈(0,T]. Now we have ∂_t a^n_i ≤ a^n_m, an application of Minkwoski's integral inequality and Jensen inequality yields, ‖ a^n_i ‖^p_L^p(Ω)≤ 2^p-1‖ a_i,0‖^p_L^p(Ω_T)+ 2^p-1T^p-1∫_0^t‖ a^n_m ‖^p_L^p(Ω). 
Raising to the power p in (<ref>), ∫_0^t‖ a^n_m ‖^p_L^p(Ω)≤ 2^(p-1) C_1^p+2^(p-1) C_1^p‖ a^n_i ‖^p_L^p(Ω_t). Taking C_2=max{(1+T)^p-12^(2p-2) C_1^p,2^p-1‖ a_i,0‖^p_L^p(Ω_T):∀ i=1,⋯,m }, ‖ a^n_i ‖^p_L^p(Ω)≤ C_2( 1+ ∫_0^t ‖ a^n_i ‖^p_L^p(Ω)). Using Gröwnwall inequality we obtain, sup_t∈[0,T)‖ a^n_i ‖_L^p(Ω)≤(C_2+C_2^2+C_2^2e^C_2T+C_2max_{i=1,⋯,m}‖ a_i,0‖_L^p(Ω)e^C_2T)^1/p. ‖ a^n_m ‖_L^p(Ω_T)≤ ( 2^(p-1) C_1^p+2^(p-1) TC_1^p (C_2+C_2^2+C_2^2e^C_2T+C_2max_{i=1,⋯,m}‖ a_i,0‖_L^p(Ω)e^C_2T))^1/p. Furthermore we obtain, ‖ a^n_i ‖_L^p(Ω_T)≤(TC_2+TC_2^2+TC_2^2e^C_2T+TC_2max_{i=1,⋯,m}‖ a_i,0‖_L^p(Ω)e^C_2T )^1/p. For i∈Λ_2, we have the following relation, ∂_t (a^n_i+a^n_m)-Δ(d_i a^n_i +d_m a^n_m)=0, n.∇_xa^n_i=n.∇_xa^n_m=0. Using theorem-<ref>, we have there exists C_3>0, such that, ‖ a^n_i ‖_L^p(Ω_T)≤ C_3, ∀ i∈Λ_2: ∀ n∈ℕ. Take Ĉ=max{ C_3, ( 2^(p-1) C_1^p+2^(p-1) TC_1^p (C_2+C_2^2+C_2^2e^C_2T+C_2max_{i=1,⋯,m}‖ a_i,0‖_L^p(Ω)e^C_2T))^1/p, (TC_2+TC_2^2+TC_2^2e^C_2T+TC_2max_{i=1,⋯,m}‖ a_i,0‖_L^p(Ω)e^C_2T )^1/p. . ‖ a^n_i ‖_L^p(Ω_T)≤Ĉ, ∀ i=1,⋯,m. If we in particular take p=2NQ then we have g^n=(a^n_m-∏_1^m-1 (a^n_j)^α_j)/ϕ^n∈ L^2N(Ω_T). From integrability estimation theorem-<ref> we obtain for a positive constant C>0, independent of the index n, such that, ‖ a^n_i ‖_L^∞([0,T)×Ω)≤C, ∀ i∈ 1,⋯,m, ∀ n∈ℕ. ∀ i∈Λ_1, a_i^n has a weakly convergent subsequence. proof: Let i,j∈Λ_1, then a^n_i-a^n_j=a_i,0-a_j,0=ϕ_i,j. We fix a j∈Λ_1, such that α_j∈{1}∪[2,∞). ∂_t a^n_j = a^n_m-∏_1^m-1(a^n_i)^α_i/ϕ^n. Let us define some quantities: δ^1_n= (ϕ_n)^-1, δ^2_n(a^n_j)= (a^n_j)^α_j-1 ∏_i∈Λ_1(ϕ_ij+a^n_j)^α_i, δ_n^3= ∏_i∈Λ_2∖{m}(a^n_i)^α_i. We have the following relation: a^n_j(t,x)=a_j,0(x)e^-∫_0^tδ^n_1 δ^n_2(a^n_j) δ^n_3ds+∫_0^ta_m^n δ^n_1e^-∫_s^tδ^n_1 δ^n_2(a^n_j) δ^n_3 dσds, for x∈Ω a.e., a^n_j(t,x) ≤ a_j,0(x)+TC(from (<ref>)). Above relation shows a^n_j(t,x) bounded uniformly for a given x∈Ω a.e., which implies δ_n^1→ 1 pointwise a.e. Furthermore Vitali convergence lemma yields δ_n^1→ 1 in L^1(0,T) for a given x∈Ω a.e. Let's introduce a Picard type iteration on (<ref>): a^n_j,p+1(t,x)=a_j,0(x) e^-∫_0^tδ^n_1 δ^n_2(a^n_j,p) δ^n_3ds+∫_0^ta_m^n δ^n_1e^-∫_s^tδ^n_1 δ^n_2(a^n_j,p) δ^n_3 dσds, a^n_j,p(0,x)= a_j,0 ∀n∈ℕ. For a given x∈Ω a.e., sup_t∈[0,T]sup_p∈ℕ| a^n_j,p(t,x)|≤ a_j,0(x)+TC=C_4 <+∞. | a^n_j,p+1(t,x)- a^n_j(t,x) |≤ ((1+T)(a_j,0(x)+TC)∏_i∈Λ_2∖{m}C^α_i sup_r∈[0,C_4][|d(ζ(r))/dr|])∫_0^t|a^n_j,p(s,x)- a^n_j(s,x) |ds, where ζ(r)=r^α_j-1∏_i∈Λ_1(ϕ_i,j+r)^α_i. Taking C_5=(1+T)(a_j,0(x)+TC)∏_i∈Λ_2∖{m}C^α_isup_r∈[0,C_4] [|d(ζ(r))/dr|], we obtain the following relation, | a^n_j,p+1(t,x)- a^n_j(t,x) |≤ C_5∫_0^t| a^n_j,p(s,x)- a^n_j(s,x) | ds. By induction we get ∀ x∈Ω a.e., | a^n_j,p+1(t,x)- a^n_j(t,x) |≤C_5^p t^p/ p!∫_0^t| a^n_j,0(s,x)- a^n_j(s,x) |≤ 2TC_4C_5^p t^p/p!. sup_n∈ℕsup_t∈[0,T]| a^n_j,p+1(t,x)- a^n_j(t,x) |≤ 2TC_4C_5^p T^p/p!. From the above relation we can conclude: lim_p→∞sup_n∈ℕsup_t∈[0,T]| a^n_j,p(t,x)- a^n_j(t,x) |=0 . A similar calculation on a^n_j,p(t,x) gives us the following estimates for p≥1, | a^n_j,p+2(t,x)- a^n_j,p+1(t,x) |≤C_5^p-1 t^p-1/ (p-1)!∫_0^t| a^n_j,1(s,x)- a^n_j,0(s,x) |≤ 2TC_4C_5^p-1 t^p-1/(p-1)!. sup_n∈ℕsup_t∈[0,T]| a^n_j,p+2(t,x)- a^n_j,p+1(t,x) |≤ 2TC_4C_5^p-1 T^p-1/(p-1)!. ∀ r∈ℕ; lim_p→∞sup_n∈ℕsup_t∈[0,T]| a^n_j,p+r(t,x)- a^n_j,p(t,x).|=0. Furthermore Vitali convergence lemma yields, ∀ x∈Ω a.e., and ∀ t ∈ [0,T), lim_n→∞∫_0^t|∏_i∈Λ_2∖{m}a_i^α_i-δ_1^n δ_3^n |= 0 . lim_n→∞∫_0^t|a_m-a_m^nδ^n_1|=0 . 
Let us consider another Picard type iteration for a given x∈Ω a.e., b_p+1(t,x)=a_j,0e^-∫_0^t δ^2_n(b_p)δ^4_n +∫_0^t a_me^-∫_s^t δ^2_n(b_p)δ^4_n dσds, b_0=a_j,0, where δ_n^4=∏_i∈Λ_2∖{m}a_i^α_i. Now for a given x∈Ω a.e., sup_t∈[0,T]sup_p∈ℕ| b_p(t,x)|≤ a_j,0(x)+TC=C_4 <+∞. For p=0, we have lim_n→∞sup_t∈[0,T]| a^n_j,0-b_0|=0. We will use induction, |a^n_j,p+1-b_p+1|≤ a_j,0[ |∫_0^t(δ_n^4-δ^1_nδ^3_n)δ^2_n(a^n_j,p) |+sup_r∈[0,C+C_4][|d(ζ(r))/dr|]∫_0^t |a^n_j,p-b_p |] + ∫_0^t|a_m -a_m^nδ^n_1 |e^-∫_s^t δ^1_n δ^2_n δ^3_n+(C_4+C)Tsup_r∈[0,C+C_4][|d(ζ(r))/dr|]∫_0^t |a^n_j,p-b_p | + (C+C_4)T|∫_0^t (δ^4_n-δ^1_nδ^3_n)δ^2_n(a^n_j,p) |. Now if lim_n→∞sup_t∈[0,T]| a^n_j,p-b_p|=0, Vitali convergence lemma, relation (<ref>),relation (<ref>) lim_n→∞sup_t∈[0,T]| a^n_j,p+1-b_p+1|=0. So by induction on p, we obtain, lim_n→∞sup_t∈[0,T]| a^n_j,p-b_p|=0 ∀ p∈ℕ, ∀ x∈Ω Triangle inequality yields, | b_p+r-b_p|≤| a^n_j,p+r-b_p+r|+| a^n_j,p-b_p|+| a^n_j,p+r-a^n_j,p|, ∀ r∈ℕ. Using relation (<ref>) and relation (<ref>) we have, lim_n→∞sup_t∈[0,T]| b_p+r-b_p|=0, ∀ r∈ℕ, {b_p(t)} is Cauchy in L^∞[0,T] for x∈Ω a.e Let b_p(t,x) converges to a_j(t,x), ∀ t∈[0,T], for a given x∈Ω a.e, and this uniform convergence on time space implies, a_j(t,x) ∀ t∈[0,T], ∀ x∈Ω satisfies the following relation, a_j(t,x)=a_j,0(x)e^-_0^t∏_1^m-1 (a_j)^α_j+∫^t_0 a_me^-_s^t∏_1^m-1 (a_j)^α_jdσds. Using relation (<ref>) and relation (<ref>), we can conclude a^n_j(t,x) converges to a_j(t,x) uniformly ∀ t∈[0,T], for a given x∈Ω a.e. Now this pointwise limit and uniform integrability character of a^n_j(t,x),(from relation (<ref>)) provides us (Vitali converges lemma): a^n_j(t,x) → a_j(t,x) in L^1(Ω_T) . As a^n_i-a^n_j=a_i,0-a_j,0=ϕ_i,j, ∀ i∈Λ_1. We have, a^n_i(t,x) → a_i(t,x)=ϕ_i,j-a_j(t,x) in L^1(Ω_T), ∀ i∈Λ_1. Thus (a_1,a_2,⋯,a_m) is a solution of our triangular system-A_1(in distributional sense), where a_i satisfies the following bound(relation (<ref>)), ‖ a_i ‖_L^∞(Ω_T)≤C, ∀ i=1,2,⋯,m Therefore parabolic regularity gives us a smooth unique solution. <cit.>,<cit.>,<cit.>,<cit.> § EXISTENCE OF SOLUTION OF DEGENERATE TRIANGULAR SYSTEM-A_2 We need supremum time-space uniform bound of the solution of approximate solution. It can be derived easily from the degenerate nature of system-A_2, as discussed below, Just as in the previous section we consider the approximate system with d_m=0 and one or more d_i=0, for some i=1,2,⋯,m-1, which provides : a^n_m+a^n_i=a_m,0+a_i,0 i.e a^n_m uniformly bounded. a_j^n all uniformly bounded, ∀ j=1,2,⋯,m.(theorem-<ref>). Which gives us existence of unique smooth global solution. § EXISTENCE OF SOLUTION OF DEGENERATE TRIANGULAR SYSTEM-A_3 It is evident from our discussion in first section that the rate function for the approximate system g^n=(a^n_m-∏_1^m-1 (a^n_j)^α_j)/ϕ^n are uniformly bounded in L^1(Ω_T) and uniformly integrable(<ref>). Furthermore the approximate solutions a^n_i are also uniformly bounded in L^1(Ω_T) and uniformly integrable(<ref>). Based on these we can extract a weak solution for the system-A_3. By Dunford Petties' theorem g^n is weakly compact and from the properties of heat kernel ∃ a_i such that a_i^n converges to a_i a.e in [0,T)×Ω, ∀ i=1,2,⋯,m-1. Hence by Vitali convergence lemma a_i^n → a_i in L^1(Ω_T), ∀ i=1,2,⋯,m-1. The differntial equation corresponds to a_m^n is the following, ∂_t a^n_m= ∏_1^m-1(a^n_j)^α_i-a^n_m/ϕ^n. a^n_m=_0^t∏_1^m-1(a^n_j)^α_i/ϕ^n e^(-∫_s^t1/ϕ^ndσ) ds. 
As we have _Ω_T∏_1^m-1 (a^n_i)^α_i/ϕ^n≤ 2TM_2+1/ln2E(a_i,0;i=1,2⋯,m) )(from relation-(<ref>)), this implies a_m^n(t,x) bounded for ∀ t∈(0,T),x∈Ω a.e. So ϕ^n converges to 1 pointwise a.e. Now uniform integrability of {∏_1^m-1 (a^n_i)^α_i/ϕ^n}|_n∈ℕ(relation-(<ref>)) implies ∏_1^m-1 (a^n_i)^α_i/ϕ^n∏_1^m-1a_i^α_i in L^1(Ω_T)(by Vitali convergence lemma). Furthermore Fatou's lemma gives us, 0≤∫_Ωlim inf∫_0^T|∏_1^m-1(a^n_i)^α_i/ϕ^n -∏_1^m-1(a_i)^α_i|≤lim inf∫_Ω∫_0^T |∏_1^m-1(a^n_i)^α_i/ϕ^n -∏_1^m-1(a_i)^α_i|≤0, lim inf∫_0^T|∏_1^m-1(a^n_i)^α_i/ϕ^n -∏_1^m-1(a_i)^α_i|=0. So there exists a subsequence indexed by n_k ( without loss of generality we take n_k as n), such that for all x∈Ω a.e., lim_0^t∏_1^m-1(a^n_j)^α_i/ϕ^n e^(-∫_s^t1/ϕ^ndσ) ds=_0^t∏_1^m-1(a_j)^α_ie^(s-t)ds=a_m. Realtion (<ref>) implies a^n_m converges to a_m in L^1(Ω_T)(by Vitali converges lemma). From the following equation, ∂_t (a^n_i+a^n_m)-d_iΔ a^n_i=0, n.∇_xa^n_i=n.∇_xa^n_m=0, we have the following uniform L^2(Ω_T) estimate (theorem-<ref>), ∀ i=1,2,⋯,m-1, ∃ C_Ω>0, independent of index 'n', such that, ∫_0^T ∫_Ω(a_i^n)^2+ a^n_ia^n_m ≤C_Ω(1+T), ∀T≥0. Now Multiplying our approximate system by a_i^n and integrating over Ω_T we obtain, ∀ i=1,2,⋯,m-1, 1/2∫_Ω(a_i^n)^2+∫_Ω_T|∇ a_i^n|^2 ≤1/2∫_Ω(a_i,0)^2+∫_Ω_Ta^n_ia^n_m/ϕ^n. Using (<ref>) we have ∇ a_i^n uniformly bounded in L^2(Ω_T)(i=1,⋯,m-1). So it weakly converges (upto a subsequence) to ∇ a_i ∀ i=1,2,⋯,m-1(in distributional sense). (a_1,a_2,⋯,a_m) is solution of triangular system(d_m=0)(in distributional sense). We can extend this weak solution to classical solution for N=1,2 dimension by the L^p estimation of Neumann green function for the heat equation. ∀ i=1,⋯,m-1, a_i satisfies the following equation weakly, {∂_t a_i-d_i Δ a_i = ψ=a_m-∏_1^m-1a_j^α_j Ω_T, ∇ a_i .γ= 0 (0,T)×∂Ω, a_i(0,x)= a_i,0 Ω. . Let G_i(t,s,x,y) be the Neumann Green function corresponding to the operator ∂_t-d_i Δ, we can write the expression a_i as, a_i(t,x)= a_i(t,x)+∫_0^tG_i(t,s,x,y)ψ(s,y)dyds. where a_i(t,x) satisfies the following pde, {∂_t a_i-d_i Δ a_i = 0 Ω_T, ∇a_i .γ= 0 (0,T)×∂Ω, a_i(0,x)= a_i,0 Ω. . We use the following estimates from the article <cit.>,<cit.>. There exists _H,κ,C_S>0, j=0,1,2,3, such that, { 0≤ G_i(t,s,x,y) ≤ _H1/(t-s)^N/2e^-κ‖ x-y ‖^2/(t-s)=g(t-s,x-y), ‖a_i(t,x) ‖_L^p(Ω)≤ C_S‖ a_i,0‖_L^p(Ω). . From positivity of solution we can write, a_i(t,x) ≤a_i(t,x)+∫_0^t∫_ΩG_i(t,s,x,y)a_m(s,y)dyds. By Minkwoski's integral inequality, ‖ a_i(t,x) ‖_L^p(Ω)≤ C_S,0‖ a_i,0‖_L^p(Ω)+ ∫_0^t‖ g_j(t-s,x)‖_L^p(Ω)‖ a_m ‖_L^1(Ω). There exists a positive constant C_H,p, such that, ‖ a_i(t,x) ‖_L^p(Ω)≤ C_S‖ a_i,0‖_L^p(Ω)+C_H,pM_2 ∫_0^t (t-s)^-N/2(1-1/p). For N=1,2, the integral ∫_0^t (t-s)^-N/2(1-1/p) < +∞, ∀ p∈[1,∞). Hence choosing p=4Q we have there exists a constant K_p such that, ‖ a_i(t,x) ‖_L^4Q(Ω)≤ K_p, ∀ i=1,⋯,m-1. From the equation ∂_ta_m ≤∏_1^m-1a_i^α_i, integrating and applying Hölder inequality we get, ‖ a_m(t,x) ‖_L^4(Ω)≤∫_0^t∑_1^m-1‖ a_i(t,x) ‖_L^4Q(Ω)^Q≤ mTK_p^Q. From the above two relations we get ψ∈ L^4(Ω_T), as 4>N+2/2, for N=1,2, by Integrability Estimation (theorem-<ref>): ‖ a_i(t,x) ‖_L^∞(Ω_T) < +∞, ∀ i=1,⋯,m. Now parabolic regularity gives us unique global smooth solution for dimension N=1 and 2.<cit.><cit.> Next we discuss two special cases, one where the rate function grows at most quadratically , and another a 3 dimensional case. The three species degenerate triangular model as discussed in the article <cit.> has rate function with quadratic growth. We have the following proposition. 
If G=∑_1^m-1α_i≤2, unique global classical solution exists upto dimension N=5 for system-A_3(<ref>). proof: It is enough to show global solution exists for G=2 in dimension N=3,4,5. From representation of solution (<ref>), we have, a_i(t,x) ≤a_i(t,x)+∫_0^t∫_ΩG_i(t-s,x,y)a_m(s,y)dyds. Let 1+1/p=1/r+1/q, then using Minkwoski's integral and Young convolution inequality we obtain, ‖ a_i(t,x) ‖_L^p(Ω)≤ C_S,0‖ a_i,0‖_L^p(Ω)+ ∫_0^t‖ g_j(t-s,x)‖_L^r(Ω)‖ a_m ‖_L^q(Ω). There exists a positive constant C_H,p such that<cit.><cit.>, ‖ a_i(t,x) ‖_L^p(Ω)≤ C_S,0‖ a_i,0‖_L^p(Ω)+ C_H,p∫_0^t t^-N/2(1/q-1/p)‖ a_m ‖_L^q(Ω). From Sobolev embedding theorem we have there exists positive constant C_SE such that ∀ i=1⋯,m-1, ‖ a_i ‖_L^2N/N-2(Ω)≤ C_SE(‖∇ a_i ‖_L^2(Ω)+ ‖ a_i ‖_L^2(Ω)). Applying Minkwoski's integral inequality and Hölder inequality on the relation ∂_t a_m ≤∏_1^m-1 a_i^α_i, we have, ‖a_m ‖_L^N/N-2(Ω) ≤‖a_m,0 ‖_L^N/N-2(Ω) + ∫_0^t ∑_0^m-1 ‖a_i ‖_L^2N/N-2(Ω)^2 ≤‖a_m,0 ‖_L^N/N-2(Ω) + ∫_0^t ∑_0^m-1 2C_SE^2(‖∇a_i ‖_L^2(Ω)^2+‖a_i ‖_L^2(Ω)^2). Using relation (<ref>) and (<ref>) we obtain, ‖ a_m ‖_L^N/N-2(Ω)≤‖ a_m,0‖_L^N/N-2(Ω)+C_SE^2∑_i=1^m-1‖ a_i,0‖_L^2(Ω)+4mC_SE^2C_Ω(1+T)=C_1. For N=3, a_m ∈ L^3(Ω). So in (<ref>) if we put q=3 and p=+∞ then ∀ i=1,⋯ m-1, ‖ a_i ‖_L^∞(Ω)≤ C_S‖ a_i,0‖_L^∞(Ω)+2C_H,∞√(T). That is a_i ∈ L^∞(Ω_T), which implies a_m ∈ L^∞(Ω_T) too. Hence we get smooth solution for N=3 case. For N=4 and 5 also our goal will be to prove a_i ∈ L^∞(Ω_T) ∀ i=1,⋯,m-1, for this it is enough to show that a_m ∈ L^q(Ω), where q>N/2. Rest follows from (<ref>). N=4 CASE: a_m ∈ L^2(Ω). So in (<ref>) if we put q=2 and p=6, we have a_i ∈ L^6(Ω). Now let a_i ∈ L^2p(Ω), then application of Minkwoski's integral inequality and Hölder inequality provides us, ‖ a_m ‖_L^p(Ω)≤‖ a_m,0‖_L^p(Ω)+ ∑_1^m-1‖ a_i ‖_L^2p(Ω_T)^2. which implies a_m ∈ L^p(Ω). As we have a_i ∈ L^6(Ω), ∀ i =1,⋯,m-1, we obtain a_m∈ L^3(Ω). Now as 3> 4/2, it yields a_i ∈ L^∞(Ω_T), ∀ i=1,⋯,m. Hence smooth solution exists. N=5 CASE: a_m ∈ L^5/3(Ω), from (<ref>) we obtain a_i ∈ L^5/1.1(Ω)(putting q=5/3,p=5/1.1). Relation (<ref>) gives us a_m ∈ L^5/2.2(Ω). Again from relation (<ref>) we get a_i ∈ L^50/3(Ω)(putting q=5/2.2,p=50/3). Now from relation (<ref>) we obtain a_m ∈ L^25/3(Ω). As 25/3>5/2, we get a_i∈ L^∞(Ω_T). Hence smooth global solution exists. Next we discuss a special 3 dimensional case. The proposition is as follows. Let dimension N=3. Then system-A_3(<ref>) has unique global classical solution for G=∑_1^m-1α_i≤10/3. proof: It is enough to show for Q=10/3. From relation (<ref>) putting q=1 we get a_i ∈ L^3-δ(Ω), ∀ i =1,⋯,m-1, where δ=9/243, In other words there exists positive constant C_δ_1 such that ‖ a_i ‖_L^234/81(Ω)≤ C_δ_1, ∀ i=1,⋯,m-1. Next we will apply Gagliardo-Nirenberg interpolation. We have: 1/3.9=(1/2-1/3)1/2+81/234(1-1/2) Applying Gagliardo-Nirenberg interpolation inequality we obtain, ∀ i=1⋯,m-1, there exists constant C_GN1>0, such that ‖ a_i ‖_L^3.9(Ω)≤ C_GN1‖∇ a_i ‖_L^2(Ω)^1/2‖ a_i ‖_L^234/81(Ω)^1/2+C_GN1‖ a_i ‖_L^234/81(Ω). integrating with respect to time variable and applying Hölder inequality on right hand side, we obtain: ∫_0^t‖ a_i ‖_L^3.9(Ω)^3.9≤ 2^2.9C_GN1^3.9C_δ_1^1.95(1+T)∫_0^T‖∇ a_i ‖_L^2(Ω)^2+2^2.9C_GN1^3.9TC_δ_1^3.9. Using relation (<ref>) and relation (<ref>) we obtain, ∫_0^t‖a_i ‖_L^3.9(Ω)^3.9 ≤2^2.9C_GN1^3.9C_δ_1^1.95(1+T) (∑_0^m-1‖a_i,0 ‖_L^2(Ω)+C_Ω(1+T)) +2^2.9C_GN1^3.9TC_δ_1^3.9=C_2, ∀i=1,⋯,m-1. 
Applying Minkwoski's integral inequality and Hölder inequality on the integral relation corresponding to ∂_t a_m ≤∏_1^m-1 a_i^α_i, we obtain, ‖a_m ‖_L^11.7/10(Ω) ≤‖a_m,0 ‖_L^3.9/3(Ω)+ ∑_1^m-1∫_0^t‖a_i ‖_L^3.9(Ω)^10/3 ≤‖a_m,0 ‖_L^3.9/3(Ω)+(1+T)C_2=C_3. From relation (<ref>) again, putting q=11.7/10,p=116/22, we get a_i ∈ L^116/22(Ω), ∀ i=1,⋯,m-1. Let's assume there exists constant C_δ_2>0, such that ‖ a_i ‖_L^116/22(Ω)≤ C_δ_2. Again applying Minkwoski's integral inequality and Hölder inequality on the integral relation corresponding to ∂_t a_m ≤∏_1^m-1 a_i^α_i, we obtain, ‖a_m ‖_L^348/220(Ω) ≤‖a_m,0 ‖_L^348/220(Ω)+ ∑_1^m-1∫_0^t‖a_i ‖_L^116/22(Ω)^10/3 ≤‖a_m,0 ‖_L^348/220(Ω)+TC_2^10/3=C_3. As 348/220>3/2, in relation (<ref>) we can simply put q=348/220 and p=+∞, which provides, ‖ a_i ‖_L^∞(Ω_T) < +∞, ∀ i=1,⋯,m-1. which further implies a_m ∈ L^∞(Ω_T) too. Hence we have unique smooth global solution. abbrv § SOME USEFUL RESULTS [L^1 Integrability estimate] Let a_1(t,x),⋯,a_k(t,x)≥ 0; satisfy the following equation for some d_i≥ 0,i=1,⋯,k, ∂_t (∑_1^k a_i)- Δ(∑_1^k d_ia_i)≤ 0 (0,T)×Ω, n.∇_x a_i = 0 (0,T)×∂Ω, a_i(0,·)(≥0) ∈ L^2(Ω) Ω. Then there exists a constant ν≥ 0, depends on the initial data,domain and the dimension, such that, ∫_(0,T)×Ω (∑_1^k a_i)(∑_1^k d_ia_i) ≤ν. Proof can be found in <cit.> [L^p Duality estimate] If ϕ_1,ϕ_2 ≥ 0, for some d_1,d_2>0, satisfy the following relation, ∂_t (ϕ_1+ϕ_2)-Δ(d_1ϕ_1+d_1ϕ_2) ≤ 0 Ω_T, n.∇_xϕ_1=∇_γϕ_t= 0 (0,T)×Ω, ϕ_1(0,x),ϕ_2(0,x)∈ L^p(Ω). Then ϕ_1 ∈ L^p(Ω_T) ϕ_2 ∈ L^p(Ω_T) and vice versa. Proof can be found in <cit.><cit.>. [Integrability estimation] Let d>0, and let θ∈L^p(Ω_T) for some 1<p< +∞. Let ψ be the solution to the following parabolic equation, ∂_t ψ(t,x) -dΔψ(t,x) = θ(t,x) Ω_T, n.∇_x ψ(t,x) =0 (0,T) ×∂Ω, ψ(0,x) =0 Ω. We have the following estimates: ‖ψ‖_L^s(Ω _T) ≤ C_IE‖θ‖_L^p(Ω_T) ∀ s< (N+2)p/N+2-2p p<N+2/2, ‖ψ‖_L^s(Ω _T) ≤ C_IE‖θ‖_L^p(Ω_T) ∀ s< +∞ p=N+2/2, where the constant C_IE=C_IE(T,Ω,d,p,s) and ‖ψ‖_L^∞(Ω _T)≤ C_IE(T, Ω,d,p) ‖θ‖_L^p(Ω_T) p> N+2/2. Proof of the estimates (<ref>) can be found in <cit.> and the estimate (<ref>) was derived in <cit.>. We refer to the constant C_IE as the integrability estimation constant.
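To complement the analysis above with a concrete picture, the following self-contained sketch (our own illustration, not part of the original article; the initial data, coefficients, and grid sizes are arbitrary) integrates the simplest degenerate case d_3=0 of the three-species system X_1+X_2 ⇌ X_3 on an interval with homogeneous Neumann boundary conditions, treating a_3 as a pointwise ODE; the printed sup-norms stay bounded, consistent with the global existence results.

```python
import numpy as np

# Explicit finite-difference sketch of the degenerate case d_3 = 0 for
# X_1 + X_2 <-> X_3 on (0,1) with homogeneous Neumann boundary conditions.
N, L, T = 101, 1.0, 1.0
dx = L / (N - 1)
d1, d2 = 1.0, 0.5                     # X_3 does not diffuse (d_3 = 0)
dt = 0.2 * dx**2 / max(d1, d2)        # explicit stability restriction
x = np.linspace(0.0, L, N)

a1 = 1.0 + 0.5 * np.cos(np.pi * x)    # smooth non-negative initial data
a2 = 1.0 + 0.5 * np.sin(np.pi * x) ** 2
a3 = 0.5 * np.ones_like(x)

def lap(u):
    # Second difference with reflecting (Neumann) end points.
    v = np.empty_like(u)
    v[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    v[0] = 2.0 * (u[1] - u[0])
    v[-1] = 2.0 * (u[-2] - u[-1])
    return v / dx**2

t = 0.0
while t < T:
    r = a3 - a1 * a2                  # common reaction term
    a1 = a1 + dt * (d1 * lap(a1) + r)
    a2 = a2 + dt * (d2 * lap(a2) + r)
    a3 = a3 + dt * (-r)               # pure ODE: no diffusion for a_3
    t += dt

print(a1.max(), a2.max(), a3.max())   # sup-norms remain bounded
```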
http://arxiv.org/abs/2406.19371v1
20240627175035
Suri: Multi-constraint Instruction Following for Long-form Text Generation
[ "Chau Minh Pham", "Simeng Sun", "Mohit Iyyer" ]
cs.CL
[ "cs.CL" ]
Suri: Multi-constraint Instruction Following for Long-form Text Generation

Chau Minh Pham, Simeng Sun, Mohit Iyyer

§ ABSTRACT
Existing research on instruction following largely focuses on tasks with simple instructions and short responses. In this work, we explore multi-constraint instruction following for generating long-form text. We create Suri, a dataset with 20K human-written long-form texts paired with LLM-generated backtranslated instructions that contain multiple complex constraints. Because of prohibitive challenges associated with collecting human preference judgments on long-form texts, preference-tuning algorithms such as DPO are infeasible in our setting; thus, we propose Instructional ORPO (I-ORPO), an alignment method based on the ORPO algorithm. Instead of receiving negative feedback from dispreferred responses, I-ORPO obtains negative feedback from synthetically corrupted instructions generated by an LLM. Using Suri, we perform supervised and I-ORPO fine-tuning on Mistral-7b-Instruct-v0.2. The resulting models, fine-tuned with SFT and I-ORPO respectively, generate significantly longer texts (∼5K tokens) than base models without significant quality deterioration. Our human evaluation shows that while both SFT and I-ORPO models satisfy most constraints, I-ORPO generations are generally preferred for their coherent and informative incorporation of the constraints.[Code at <https://github.com/chtmp223/suri>]

§ INTRODUCTION
Improving the instruction-following abilities of modern large language models (LLMs) is critical to increasing their effectiveness and generalizability for many practical applications. However, most existing instruction-following datasets (e.g., Alpaca) contain only simple instructions that can be solved by short model generations <cit.>. What about tasks with complex, multi-constraint instructions that can only be satisfied with long-form outputs (i.e., thousands of tokens), such as creating detailed technical reports or writing engaging fictional narratives? We explore this question by conducting the first in-depth study of long-form instruction following with multi-constraint instructions. To facilitate our experiments, we create a new dataset, Suri,[Suri is an alpaca breed known for its long, lustrous hair.] using instruction backtranslation <cit.>. This process involves feeding a human-written long-form text (e.g., chapters from a novel) into an LLM to generate instructions that could have been followed to create the text. The resulting dataset, Suri, consists of 20K texts paired with LLM-generated instructions, each containing ≈10 semantic and stylistic constraints (Figure <ref>). How can we use Suri to improve an LLM's long-form instruction-following abilities? While supervised fine-tuning (SFT) has been quite effective for short-form datasets <cit.>, we observe that fine-tuned models often generate texts that are incoherent and fail to satisfy constraints that appear toward the end of the instructions. Preference tuning methods such as DPO <cit.> and RLHF <cit.> are challenging to use in this setting due to the difficulty and cost of obtaining preference judgments on long-form texts <cit.>. Specifically, when annotating preferences for long texts, human annotators may struggle to determine whether different sections of the text are faithful to the instructions while simultaneously considering multiple aspects of the text, such as coherence and informativeness.
Motivated by this, we devise an alignment method that relies on synthetically corrupted instructions. Specifically, we take the backtranslated instruction x_w and corrupt its constraints using an LLM such that the gold response does not satisfy the corrupted constraints (for example, see x_l in Figure <ref>). We then develop a variant of the Odds Ratio Preference Optimization objective <cit.> to use these corrupted instructions as negative feedback. We refer to this alignment method as Instructional ORPO, or I-ORPO for short. We conduct a series of automatic and human evaluations on generations from SFT and I-ORPO-tuned models to validate our method. Compared to the base model, Mistral-7b-Instruct-v0.2 <cit.>, both SFT and I-ORPO significantly increase the generation length from 1K to 5K tokens. Our fine-tuned models also improve the ability to differentiate between correct and corrupted instructions by at least 10% while maintaining low levels of n-gram repetitions in the text. We find that GPT-4o <cit.> cannot reliably evaluate long-form responses, making human evaluation crucial for assessing the constraint-following capabilities of our generations. Annotators note that our fine-tuned models effectively follow given constraints, with I-ORPO being preferred for its ability to incorporate constraints coherently, informatively, and enjoyably. § THE DATASET We focus on the task of long-form writing, both fictional and non-fictional, under multiple constraints. When using an LLM for a complex writing task, users might have many constraints in mind and expect lengthy, detailed responses in the form of books, blog posts, etc. This task is particularly challenging for current LLMs, which struggle with generating coherent long-form outputs <cit.>, and this difficulty can be amplified when multiple constraints are involved. Recent instruction-following datasets have featured multi-constraint instructions <cit.> and long-form responses <cit.>, but none has integrated these two elements (Table <ref>). We bridge this gap by creating , which features complex instructions with multiple constraints and lengthy gold responses (2-5K words, about 3-6K tokens). We collect human-written English text samples, such as books, religious texts, and blog posts, to serve as gold responses (y). Since gathering human-written instructions for such lengthy responses is difficult and expensive, we turn to instruction backtranslation <cit.>, in which an LLM is provided with a human-written text (e.g., a short story) and prompted to generate instructions (x_w) that could have been followed to create that text. We further corrupt the constraints in x_w to obtain synthetically corrupted instructions (x_l) for our I-ORPO alignment method. In total, contains 20K single-turn examples, each consisting of a backtranslated instruction x_w, corrupted instruction x_l, and a human-written response y. In this section, we detail our approach to selecting high-quality text samples (<ref>) and creating backtranslated instructions (<ref>). We also validate our generated instructions (<ref>) and analyze the resulting dataset (<ref>). §.§ Collecting Responses Obtaining long-form gold responses y through crowdsourcing or hiring experts requires significant cost and effort. As an alternative, we sample human-written texts in equal proportions from three existing datasets: ChapterBreak <cit.>, Books3 <cit.>, and RedPajama-Data-v2 <cit.>. 
We truncate the sampled texts to between 2,048 and 5,024 words, making them significantly longer than those in existing instruction-following datasets (Table <ref>). The final dataset is divided into training, validation, and test sets in a 10K/5K/5K split. ChapterBreak ChapterBreak (AO3 split) contains 7,355 fanfiction stories on Archive of Our Own (AO3), of which 6,656 texts are sampled for . We merge the individual chapters from the cleaned text into a single document. Books3 Books3 contains 197K books on Bibliotik,[Due to copyright concerns, we only release the titles and IDs of the sampled data from this dataset. We provide a Python script to extract and clean the text so that users with access to Books3 can recreate the samples included in .] of which 6,698 texts are sampled. We use regular expressions to filter out irrelevant metadata, such as tables of contents and acknowledgments. RedPajama-Data-v2 RedPajama contains over 100 billion documents from 84 CommonCrawl dumps, of which 6,646 texts are sampled. Unlike ChapterBreak and Books3, which consist primarily of books and literary narratives, RedPajama captures the style of everyday writing with informal textual content such as blog posts, obituaries, and more. We apply a set of quality filters (see Appendix <ref>) on the 2023-06 and 2023-14s snapshots to obtain a subset of ≈300K high-quality, non-duplicated documents written in English. §.§ Creating Instructions via Backtranslation includes backtranslated instructions (x_w) and corrupted instructions (x_l). In x_l, constraints from x_w are minimally edited to be partially violated while still faithful to the overall main goal of the instruction. These corrupted instructions, along with x_w and y, serve as inputs for our I-ORPO preference tuning experiments. Backtranslating Instructions Our extracted gold responses do not come with accompanying instructions. Gathering these instructions can be costly and time-consuming, as annotators have to synthesize the instructions from long texts. Therefore, we use instruction backtranslation <cit.> to generate the missing instructions. Specifically, we prompt GPT-4-turbo[GPT-4-turbo refers to . Experiment done using temperature=0.6 and top_p=0.9.] with a gold response y to generate a corresponding instruction x_w that contains a main goal, which summarizes the content of the text, and a list of ≈10 constraints (Table <ref>). These constraints can focus on stylistic elements (how something is communicated through tone, language, sentence structure), semantic elements (what topics, meanings, and concepts are included), or a combination of both. Constraints can also be broad, applying to large portions of the text, or specific, addressing elements that occur only once. The result is a highly detailed, multi-constraint instruction that covers different parts of the text (x_w in Figure <ref>). Corrupting Instructions We want to use in our alignment experiments, which traditionally rely on preference judgments (e.g., labeled y_w and y_l pairs). However, obtaining these judgments for long-form outputs is challenging due to the many competing aspects to consider (e.g., faithfulness to instructions, overall coherence, etc.). Instead of including a corrupted y_l, we focus on learning from a corrupted instruction x_l. To create x_l, we prompt GPT-4-turbo[Experiment done using model=, temperature=0.0, top_p=0.0 to ensure deterministic results.] to minimally edit each constraint in x_w while preserving the original main goal (Table <ref>). 
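To make the corruption step easier to follow, the sketch below shows how such a call could be issued with the OpenAI Python client. The prompt wording, helper name, and output parsing are placeholders rather than the prompt actually used (the real prompt appears in Table <ref>); only the deterministic decoding settings match the description above.

from openai import OpenAI

client = OpenAI()

def corrupt_constraints(main_goal: str, constraints: list[str]) -> list[str]:
    """Ask an LLM to minimally edit each constraint so that the gold response
    no longer satisfies it, while keeping the main goal unchanged.
    The prompt text below is a stand-in, not the paper's actual prompt."""
    prompt = (
        "Minimally edit each constraint below so that a text written for the "
        "original instruction would violate the edited constraint. Do not "
        "change the main goal.\n\n"
        f"Main goal: {main_goal}\n"
        + "\n".join(f"{i + 1}. {c}" for i, c in enumerate(constraints))
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",   # placeholder model identifier
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,       # deterministic corruption, as described above
        top_p=0.0,
    )
    text = response.choices[0].message.content
    # Naive parsing: one corrupted constraint per numbered line (illustrative only).
    return [line.split(". ", 1)[-1] for line in text.splitlines() if line.strip()]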
The resulting instructions average 386 tokens, closely matching the average length of gold instructions at approximately 411 tokens. §.§ Validating Instructions To validate whether the backtranslated instructions faithfully represent the original text, we conduct a human evaluation on a sample of 30 (x_w, y) pairs. Three Upwork[See Appendix <ref> for recruitment and compensation details.] annotators are asked to read through the (x_w, y) pairs, highlight all text spans in the response that support the given constraints, and determine if the response supports the instruction (Figure <ref>). Our findings indicate that, on average, about 87% of the constraints are fully satisfied, with the remaining constraints being partially satisfied (see Appendix <ref> for agreement statistics). We conclude that the backtranslated instructions are generally faithful to the original text. §.§ Instruction Diversity Instructions in focus primarily on long-form text generation, particularly crafting narratives or articles. Therefore, the key element that introduces diversity across these instructions is the accompanying list of constraints. Here, we measure the proportion of constraints being broad/specific or focusing on semantic/stylistic elements. We prompt Mistral-7B-Instruct-v0.2 <cit.>,[Experiment done using greedy decoding. The first author manually verifies an output subset.] to assign each constraint to the applicable category. We find that semantic constraints account for more than half of each instruction, followed by mixed constraints (Figure <ref>). Broad constraints, on the other hand, make up 56% of the total constraints. Overall, the distribution of constraint types is relatively balanced, with a stronger emphasis on broad and semantic constraints. § ALIGNING LANGUAGE MODELS WITH Our goal is to assess whether helps improve the instruction-following capabilities of Mistral-7B-Instruct-v0.2 for long-form text generation. We experiment with two methods of fine-tuning Mistral-7B-Instruct-v0.2 on : supervised fine-tuning (SFT) using (x_w, y) pairs and a modified ORPO alignment <cit.> using (x_w, x_l, y) triplets. We emphasize that preference judgments are difficult to obtain for long-form responses due to numerous aspects of the text that must be considered with respect to the instructions. Therefore, we perform model alignment with correct instruction x_w and corrupted instruction x_l instead. Full details on fine-tuning libraries, hardware configurations, and hyperparameters can be found in Appendix <ref>. -I-ORPO Odds Ratio Preference Optimization (ORPO) <cit.> combines SFT and preference alignment by incorporating a log odds ratio term into the negative log-likelihood loss. We choose ORPO due to its simplicity, competitive performance with other preference tuning algorithms and the ease with which we can modify for our setting. The original algorithm learns from preference judgments, requiring access to chosen and rejected responses in the (x, y_w, y_l) format. Since our dataset contains gold and corrupted instructions instead, we modify ORPO so that the algorithm accepts (x_w, x_l, y). 
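Before the objective is stated formally in the next paragraph, a minimal sketch of how such a loss can be assembled is given below. It assumes the per-example log-probabilities of the gold response under each instruction have already been computed (length-normalized, as is common in ORPO implementations); the helper names and the λ weighting are illustrative, not the reference implementation.

import torch
import torch.nn.functional as F

def i_orpo_loss(logp_y_given_xw: torch.Tensor,
                logp_y_given_xl: torch.Tensor,
                lam: float = 0.4) -> torch.Tensor:
    """Sketch of an I-ORPO-style objective for a batch of (x_w, x_l, y) triplets.

    logp_y_given_xw: length-normalized log-prob of the gold response y under
                     the gold instruction x_w (one value per example).
    logp_y_given_xl: the same quantity under the corrupted instruction x_l.
    """
    # SFT term: negative log-likelihood of y under the gold instruction.
    sft_loss = -logp_y_given_xw.mean()

    # log odds(y | x) = log p - log(1 - p), computed from log-probabilities.
    def log_odds(logp: torch.Tensor) -> torch.Tensor:
        p = torch.exp(logp).clamp(max=1.0 - 1e-6)
        return logp - torch.log1p(-p)

    log_odds_ratio = log_odds(logp_y_given_xw) - log_odds(logp_y_given_xl)

    # Odds-ratio term: -log sigmoid of the log odds ratio, encouraging the gold
    # instruction to explain y better than the corrupted one.
    or_loss = -F.logsigmoid(log_odds_ratio).mean()

    return sft_loss + lam * or_loss

Note that, because the same response y is shared by both instructions, only the conditioning changes between the two forward passes; the loss therefore drives logps(y|x_w) and logps(y|x_l) apart rather than contrasting two different responses.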
We refer to this modified method as Instructional Odds Ratio Preference Optimization (I-ORPO), where the modified loss is calculated as: ℒ_I-ORPO = 𝔼_(x_w, x_l, y)[ ℒ_SFT + λ·ℒ_I-OR] where ℒ_I-OR = - logσ(logodds_θ(y|x_w)/odds_θ(y|x_l)) odds_θ(y|x) = P_θ(y|x)/1- P_θ(y|x) In the original ORPO formulation, the model is learning if the log probability of P_θ(y_w|x), denoted logps(y_w|x), increases and log probability of P_θ(y_l|x), denoted logps(y_l|x_w), decreases after a number of training steps, resulting in the log odds ratio increasing. In I-ORPO, the same y is used for both instruction types. Therefore, the model is learning if the log probabilities logps(y|x_w) and logps(y|x_l) diverge while logps(x_w) and logps(x_l) remain stable. We observe this trend in Figure <ref>. Loss derivation and analysis are in Appendix <ref>. We perform I-ORPO fine-tuning with LoRA on Mistral-7B-Instruct-v0.2 for two epochs, using a learning rate of 5e-5, λ of 0.4, and a LoRA rank and alpha of 16. We do not observe signs of the model learning with full-model tuning, so we choose to use LoRA fine-tuning instead. To minimize noise and improve the model's ability to distinguish between gold and corrupted instructions, we include a single constraint in each instruction, x_w and x_l. -SFT We perform LoRA supervised fine-tuning <cit.> on Mistral-7B-Instruct-v0.2 for two epochs using a learning rate of 5e-5, with a LoRA rank and alpha of 16. For each instruction x_w, we include a varying number of constraints to expose the model to different instruction formats. We do not use full-model tuning to match the I-ORPO training setting. § AUTOMATIC EVALUATION Our automatic assessment demonstrates that both -I-ORPO and -SFT increase the length of the generated texts while maintaining a reasonable level of repetition. Compared to baseline models, -I-ORPO is more likely to assign higher log probabilities to tokens in the response given the correct instruction than the corrupted instruction. §.§ -I-ORPO and -SFT generate substantially longer text. We measure the average number of tokens[Measured using package (<https://github.com/openai/tiktoken>) with “o200k_base” encoding.] in generations from our fine-tuned models (-I-ORPO and -SFT) and compare them to baseline models, including Mistral-7B-Instruct-v0.2, Llama-3-8B-Instruct <cit.>, and Mixtral-8x7B-Instruct-v0.1 <cit.>. For faster inference, we use vLLM <cit.> to generate outputs from the backtranslated instruction x_w.[Experiment done using greedy decoding, max_token=10K. Inference prompts specify that 5K tokens should be generated.] Proprietary models like GPT-4 and Claude are excluded due to their maximum generation output limit of 4,096 tokens,[https://docs.anthropic.com/en/docs/models-overviewClaude documentation; https://platform.openai.com/docs/api-reference/audioOpenAI documentation] whereas open-weight models allow for outputs of arbitrary maximum length. Our fine-tuned models, -SFT and -I-ORPO, generate significantly longer outputs compared to the open-weight baselines, with an average of approximately 4,800 and 5,100 tokens per generation, respectively (Figure <ref>). These lengths exceed the maximum generation capacity of proprietary models, which is limited to around 4,096 tokens. Among the baselines, Mixtral produces the longest generations, averaging over 1,500 tokens, while Mistral-Instruct generates the shortest outputs, around 1,100 tokens per generation. §.§ -I-ORPO and -SFT do not degenerate into repetitions at longer sequences. 
We analyze the presence of repetitions in model generations. Since LLMs often degrade into repetitions over longer sequences, this measurement helps us identify when and how the model starts producing repetitive content. Previous work <cit.> measures unigram, bigram, and trigram repetitions. However, we are interested in sentence-level repetitions, such as when the same phrase is repeated in a dialogue at the start of each sentence. Therefore, we measure 5- and 10-gram repetitions to capture these higher-level patterns. We count a repetition when a specific n-gram appears at least three times in the text. Despite having the longest generations, -I-ORPO and -SFT maintain a low percentage of generations with n-gram repetitions (Table <ref>). Among the baseline models, Mistral-Instruct has the lowest percentage of generations with repetition, possibly because its generations are also the shortest. Surprisingly, Llama-Instruct and Mixtral-Instruct, with their short generations, possess a greater proportion of generations with n-gram repetitions compared to our fine-tuned models. We further examine the percentage of 5-gram repetitions, normalized by the length of each text, generated by our fine-tuned models. As shown in Figure <ref>, the percentage of 5-gram repetitions does not increase after 2,048 tokens, indicating that our fine-tuned models do not exhibit degradation in longer sequences. §.§ I-ORPO improves ranking accuracy To understand the capabilities of models to differentiate between correct and corrupted instructions, we evaluate ranking accuracy <cit.>. This involves measuring the percentage of cases in which the model assigns a higher probability to the gold response under the correct instruction than under the corrupted version. We calculate the sum of token log probabilities in the response given the previous tokens, denoted by logps(y|x), and determine accuracy based on the proportion of times when logps(y|x_w) > logps(y|x_l). A higher accuracy indicates that the model is more sensitive to the instructions and can determine which instruction is the correct instruction for the given response. We use Hugging Face's Transformers <cit.> to access the probability distribution over vocabulary and measure the impact of instruction specificity on ranking accuracy across five different settings, which are defined by the number of all constraints included (M constraints in total) and the number of those included constraints that are corrupted: , , , , . For example, in the setting, both instructions include all constraints, but only half of the constraints are violated. -I-ORPO shows at least a 10% improvement in ranking accuracy over the baseline Mistral-Instruct across all instruction specificity settings, with -SFT following closely (Table <ref>). Mistral-Instruct remains a strong baseline, achieving the highest ranking accuracy among the three baseline models. In contrast, Llama-3-7b-Instruct and Mixtral-8x7b-Instruct perform the worst, trailing -I-ORPO by up to 50%. We observe that settings with more constraints in the instruction, namely , , and , generally lead to better performance. This trend suggests that seeing more constraints helps the model better differentiate between correct and corrupted constraints. §.§ LLM judges are unreliable for evaluating constraint satisfaction in long-form generation. We experiment with using LLMs to evaluate whether texts generated by our fine-tuned models follow the given constraints. 
Specifically, we provide GPT-4o <cit.> with a constraint and a generated text from our models and prompt it to determine whether the text fully satisfies, partially satisfies, or does not satisfy the constraint (Table <ref>). We then compare these results with judgments from three Upwork annotators on 30 texts generated by -SFT on the test set (obtained using the same procedure as in Section <ref>). GPT-4o agrees with human annotators only 39% of the time, with a significant 16% disagreement between satisfaction and no satisfaction (Table <ref>). We conclude that GPT-4o does not align well with long-form human annotation, consistent with the findings of <cit.> and <cit.>. § HUMAN EVALUATION While our automatic assessments provide insights into the lexical information of the text, they do not capture its semantic content. Therefore, we conduct a human evaluation to determine if and how the constraints are satisfied by the outputs of -SFT and -I-ORPO. Human evaluation on 30 test set generations reveals that while both fine-tuned models satisfy constraints, -I-ORPO is preferred by humans for its ability to incorporate the constraints into the final outputs seamlessly. §.§ -I-ORPO and -SFT are effective at satisfying constraints. Since GPT-4o judgments do not align with human annotations, we rely on human evaluation to determine how often -I-ORPO and -SFT follow the given constraints. This evaluation follows a similar setup as Section <ref>, where annotators assess whether each constraint is satisfied, partially satisfied, or not satisfied by the generations. Two Upwork annotators complete 30 tasks, each containing a generation with around 10 constraints, totaling 321 constraints. The generations are lengthy, averaging 4,000 words, and complex, with constraints spread throughout the text. Annotators spend approximately 20-25 minutes on each annotation and are paid $200 in total for the task. On average, -I-ORPO and -SFT meet most of the included constraints, achieving satisfaction rates of 67-68% and partial satisfaction rates of 16-17% (Table <ref>). Both models have the same proportion of unsatisfied constraints, accounting for 16% of the total constraints. Annotators often note that narratives produced by -SFT contain inconsistent plot events and sometimes leave the narrative incomplete, resulting in some final constraints not being met. We attribute this behavior to the fact that some of the gold responses are truncated to between 2,048 and 5,024 words, which might omit the end of the original narrative. On the other hand, Mistral-I-ORPO produces narratives with coherent endings but can sometimes be too verbose, making it difficult to determine whether some constraints are satisfied. §.§ -I-ORPO are preferred over -SFT for coherent and informative constraint satisfaction. In this evaluation, we are interested in how our fine-tuned models satisfy constraints in . We ask two annotators to compare text generations from -SFT and -I-ORPO with respect to a given constraint based on the following criteria: * Informativeness: Which generation provides more details about the constraint? * Coherence: Which generation effectively integrates the constraint with the rest of the text? * Readability/Enjoyability: Which text sample is easier to read overall? The annotators also provide detailed justifications for their choices in each aspect of their judgments (see Figure <ref>). 
Human annotators consistently prefer -I-ORPO to -SFT for about 60-70% of the time across all three categories: coherence, informativeness, and enjoyability. Annotators note that -SFT often suffers from repetitive ideas, confusing plot points, and a lack of proper conclusions. In contrast, while -I-ORPO texts occasionally exhibit inconsistencies, they generally read more naturally, include interesting details, and are devoid of the robotic structure or flowery language often found in other LLM generations. § RELATED WORK Instruction Following Datasets Open-ended instruction tuning involves fine-tuning LLMs to follow user instructions and generate high-quality text <cit.>. Single-turn instruction-following datasets can be constructed by manual annotation, where instruction-response pairs are curated by humans <cit.>. Another approach is distillation from proprietary LLMs, which can be done via techniques like Self-instruct <cit.> to augment responses for each instruction <cit.>, Instruction Backtranslation to generate instructions given gold responses <cit.>, or leveraging metadata to generate both instructions and responses <cit.>. While recent work has constructed instruction-following datasets with long-form responses <cit.> or multiple constraints <cit.>, no prior effort has explored combining these two elements in single-turn instructions (see Table <ref>). is the first dataset to feature both complex instructions and long-form responses over 5k words. Alignment Aligning language models with instruction-following data is crucial for ensuring that they respond to user instructions in a helpful and harmless manner <cit.>. Popular preference tuning methods, such as RLHF, DPO, KTO, and ORPO <cit.>, achieve this by fine-tuning the models on human judgments of response quality <cit.>. However, collecting preferences for long-form responses is challenging due to the many competing aspects of the texts that need to be considered, such as instruction faithfulness and coherence <cit.>, which prompts us to experiment with preference tuning on correct and correct instructions. § CONCLUSION In this work, we investigate the challenge of complex instruction following for generating long-form text. We introduce , a dataset of long human-written responses accompanied by backtranslated and corrupted instructions. We demonstrate the effectiveness of in improving the constraint-following capabilities of LLMs for long-form generation through supervised fine-tuning and I-ORPO. Our models, as shown by both human and automated evaluations, generate high-quality, long-form responses while maintaining effectiveness at following constraints. § LIMITATIONS Extending to other LLMs While we demonstrate the effectiveness of and I-ORPO on Mistral-7b-Instruct-v0.2, we have yet to experiment with fine-tuning other models on our dataset, which presents an interesting direction for future work. Impact of surface features on I-ORPO Even though I-ORPO works well on our dataset, we would like to explore how surface features, such as instruction length and the degree of information overlap between the instruction and response, affect its performance. We leave this investigation to future studies. Performance on short-context tasks Additionally, we note that our dataset primarily focuses on generating extremely long texts. As a result, the fine-tuned models may exhibit diminished performance on tasks requiring shorter generations. 
§ ETHICAL CONSIDERATIONS The risks posed by our study are no greater than those inherent in the large language models that support it <cit.>. Our human evaluation receives approval from an institutional review board. All annotators (US-based, fluent in English) gave their informed consent and participated with an hourly compensation of $16, which meets the minimum wage in our state. Scientific artifacts are implemented according to their intended usage. § ACKNOWLEDGEMENTS We extend special gratitude to the Upwork annotators for their hard work, as well as to the members of Unsloth, r/LocalLLaMA, and Together.ai community for helpful fine-tuning advice. We also thank Scott Niekum, Dzung Pham, and members of the UMass NLP lab for their insights on the project. This project was partially supported by awards IIS-2202506 and IIS-2312949 from the National Science Foundation (NSF) and an award from Open Philanthropy. § QUALITY FILTERS FOR REDPAJAMA-DATA-V2 Upon initial examination, we observe a significant presence of news and religious text in the corpus. Therefore, in addition to the following quality filters, we also downsample news and religious articles by excluding any article containing a source domain on our blocklist <cit.> or more than 0.05% of words from a religious dictionary <cit.> to ensure the diversity of the gold responses. Table <ref> and <ref> show the quality filters used in RedPajama-Data-v2. § PROMPTS In this section, we show prompts to generate and analyze in Table <ref>, <ref>, <ref>, <ref>. Table <ref> shows the prompt used for our experiment with LLM judges. § MODELING EXPERIMENT DETAILS All experiments are done using Flash-Attention 2 <cit.>, DeepSpeed ZeRO 3 <cit.>, PEFT <cit.>, TRL library <cit.>, and Alignment Handbook <cit.>. Chat templates are as follows: <|user|> Instruction</s> <|assistant|> Response</s> The training configurations (Table <ref>) are mostly similar for SFT and ORPO. We vary the learning rate (5e-4 to 5e-7), optimizer (8-bit vs. 32-bit), LoRA rank, and alpha (8 to 64), but none of these hyperparameters results in better generations. § I-ORPO LOSS DERIVATION The derivation of ℒ_ℐ-𝒪ℛ closely resembles that of the original ORPO loss, with d = (x_w, x_l, y) ∼ D. ∇_θℒ_ℐ-𝒪ℛ = δ(d) · h(d) δ(d) = ( 1 + odds_θ (y|x_w)/odds_θ (y|x_l))^-1 h(d) = ∇_θlog P_θ(y|x_w)/1-P_θ(y|x_w) - ∇_θlog P_θ(y|x_l)/1-P_θ(y|x_l) The gradient of ℒ_ℐ-𝒪ℛ is the product of two terms: δ(d), which regulates the strength of parameter updates, and h(d), which widens the contrast between logps(y|x_w) and logps(y|x_l). Specifically, as the odds ratio increases, δ(d) converges to 0. On the other hand, h(d) has two gradients: ∇_θlog P_θ(y|x_w), which minimizes log P_θ(y|x_w), and ∇_θlog P_θ(y|x_l), which maximizes log P_θ(y|x_l). Additionally, 1-P_θ(y|x_w) accelerates the update in the direction that maximizes P_θ(y|x_w). Following ORPO <cit.>, suppose that g(x_w, x_l, y) = odds_θ (y|x_w)/odds_θ (y|x_l), we derive the loss as in <ref>. § PREFERENCE PROMPTING In this evaluation, we provide the model with the gold response y and both instructions x_w and x_l. We then prompt the model to choose the instruction most relevant to the gold text, following <cit.> and <cit.>. The model should output `1' if the first instruction generates the text and `2' otherwise (Table <ref>). Next, we compare the log probabilities of the model outputting `1' and `2'. If the log probability for `1' is higher, we assume the model prefers whichever instruction came first in the prompt. 
The performance metric is determined by how often the model prefers the correct instruction, regardless of the order in which the correct instruction is presented. We experiment with Mistral-7b-Instruct-v0.2, -I-ORPO, -SFT, Mixtral-8x7b-Instruct-v0.1, Llama-3-7b-Instruct. All experiments use the Huggingface implementation with greedy decoding. We observe that all models suffer from “first instruction bias", where the model always outputs the first instruction as the correct instruction, regardless of whether that instruction is actually x_w or not. § HUMAN EVALUATION §.§ Recruitment We recruit human annotators, all of whom are fluent in English, from Upwork (<https://www.upwork.com>) for our human evaluation. Each task is assigned to two annotators, except for Instruction Validation, which involves three annotators. Annotators are compensated at a rate of $16 per hour and generally work an average of 12 hours per task. All annotators have signed consent forms, and our study has been approved by our institutional review boards (IRB). §.§ Annotation Figure <ref> shows the LabelStudio interface for annotating instruction validity/constraint satisfaction. Figure <ref> features the interface for comparing text generations based on how they satisfy a given constraint. Annotators note that the interfaces are user-friendly. §.§ Annotator agreement in the instruction validity and constraint satisfaction evaluation We note that Krippendorff's Alpha remains low across evaluation tasks, suggesting little to no agreement among the annotators. We attribute this pattern to the fact that our generations are long (≈4k words on average), making it hard for annotators to follow the narrative sometimes. Final statistics reported in the paper is averaged between the annotators. Table <ref> further shows disagreement types for the instruction validity and constraint satisfaction evaluation. § GENERATIONS FROM MISTRAL-INSTRUCT, -SFT, AND -ORPO We show generations from Mistral-Instruct, -SFT, and -ORPO in Table <ref>.
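As a closing illustration of the log-probability computations behind the ranking-accuracy metric (and, with minor changes, the preference-prompting evaluation above), a minimal sketch using Hugging Face Transformers is given below; the checkpoint name is a placeholder, no chat template is applied, and tokenization boundary effects are ignored.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"   # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

@torch.no_grad()
def response_logprob(instruction: str, response: str) -> float:
    """Sum of token log-probabilities of `response` conditioned on `instruction`."""
    prompt_ids = tok(instruction, return_tensors="pt").input_ids
    full_ids = tok(instruction + response, return_tensors="pt").input_ids
    logits = model(full_ids).logits
    # log p(token_t | tokens_<t) at every position.
    logps = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_logps = logps.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the positions belonging to the response tokens.
    n_prompt = prompt_ids.shape[1]
    return token_logps[:, n_prompt - 1:].sum().item()

# Ranking accuracy for one example: does the gold instruction win?
def ranks_correctly(x_w: str, x_l: str, y: str) -> bool:
    return response_logprob(x_w, y) > response_logprob(x_l, y)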
http://arxiv.org/abs/2406.19346v1
20240627172005
Eliciting prior information from clinical trials via calibrated Bayes factor
[ "Roberto Macrì Demartino", "Leonardo Egidi", "Nicola Torelli", "Ioannis Ntzoufras" ]
stat.ME
[ "stat.ME", "stat.AP" ]
Eliciting prior information from clinical trials via calibrated Bayes factor

Roberto Macrì Demartino^a (corresponding author; ORCID 0000-0002-5296-6566), Leonardo Egidi^b (0000-0003-3211-905X), Nicola Torelli^b (0000-0001-9523-5336), and Ioannis Ntzoufras^c (0000-0002-7615-0334)

a Department of Statistical Sciences, University of Padova, Via C. Battisti 241, Padova, 35121, Italy. b Department of Economics, Business, Mathematics and Statistics "Bruno de Finetti", University of Trieste, Via A. Valerio 4/1, Trieste, 34127, Italy. c Department of Statistics, Athens University of Economics and Business, Trias 2, Athens, 11362, Greece.

THIS IS A PREPRINT WHICH HAS NOT YET BEEN PEER REVIEWED

§ ABSTRACT
In the Bayesian framework, power prior distributions are increasingly adopted in clinical trials and similar studies to incorporate external and past information, typically to inform the parameter associated with a treatment effect. Their use is particularly effective in scenarios with small sample sizes and where robust prior information is actually available. A crucial component of this methodology is its weight parameter, which controls the volume of historical information incorporated into the current analysis. This parameter can be considered as either fixed or random. Although various strategies exist for its determination, eliciting the prior distribution of the weight parameter according to a full Bayesian approach remains a challenge. In general, this parameter should be carefully selected to accurately reflect the available prior information without dominating the posterior inferential conclusions. To this aim, we propose a novel method for eliciting the prior distribution of the weight parameter through a simulation-based calibrated Bayes factor procedure. This approach allows the prior distribution to be updated based on the strength of evidence provided by the data: the goal is to facilitate the integration of historical data when it aligns with current information and to limit it when discrepancies arise in terms, for instance, of prior-data conflicts. The performance of the proposed method is tested through simulation studies and applied to real data from clinical trials.

Keywords: Dynamic borrowing, Historical data, Power prior, Prior elicitation, Strength of evidence

§ INTRODUCTION
In recent years, biostatistical applications have commonly been characterized by sample sizes that are insufficient for accurate parameter estimation.
Meanwhile, in the clinical framework a large amount of historical or related data are often available. This has led to a growing interest in using historical data to enhance the design and analysis of new studies, particularly in clinical trials where recruiting patients can be ethically challenging. Notably, the sequential nature of information updating has made Bayesian approaches with informative priors particularly popular in this context <cit.>. These methods facilitate the incorporation of historical data into the analysis by eliciting informative priors on the model parameters, thereby improving the robustness and efficiency of statistical inference. However, the elicitation of informative priors is widely recognized as a complex process due to the challenges in quantifying and synthesizing prior information into suitable prior distributions. For an in-depth and comprehensive analysis of the topic see <cit.>. Consequently, there is a pressing need to develop more efficient methods for synthesizing and quantifying prior information <cit.>. Specifically, there is a growing concern regarding the adaptive incorporation of historical data, especially in the presence of data heterogeneity and rapid changes in the initial trial conditions <cit.>. In this framework, the power prior <cit.> is a popular method that allows historical data to influence the prior distribution in a flexible and controlled way. <cit.> provided a formal justification for power priors, demonstrating their effectiveness as a valuable class of informative priors. This effectiveness stems from their ability to minimize a convex sum of Kullback-Leibler (KL) divergences between two distinct posterior densities, in which one does not include any historical data whereas the other fully integrates this information into the current analysis. Additionally, <cit.> proposed further operational justifications, linking them to the so-called geometric priors. Notably, power priors have been employed across a broad spectrum of models including generalized linear models (GLMs), generalized linear mixed models (GLMMs), and survival models <cit.>. At its core, the idea is to raise the likelihood of the historical data to a weight parameter δ, usually defined between zero and one. This scalar parameter plays a crucial role in the power prior methodology as it determines the degree to which historical data influences the prior distribution. Specifically, when δ is set to zero, the power prior completely discounts historical information; conversely, setting δ to one fully integrates historical information into the prior. As is intuitive, the role played by the weight parameter δ on the final inferential conclusions is not negligible. Thus, several strategies have been developed to specify the weight parameter δ, treating it either as a fixed or a random quantity. If fixed in advance, the weight parameter δ can be set based on prior knowledge or through sensitivity analysis, considering specific criteria for borrowing information based on the prior-data conflict <cit.>. If treated as random, an initial prior distribution – typically a Beta distribution – is assigned to δ, and the use of the joint normalized power prior <cit.> is recommended. Notably, <cit.> introduced a method to estimate this parameter using predictive distributions, aiming at controlling type I error by calibrating to the degree of similarity between the current and historical data. 
Furthermore, <cit.> recommended setting δ through a dynamic p-value, assessing the compatibility of current and historical data based on the test‐then‐pool methodology. <cit.> proposed an empirical Bayes-type approach for estimating the weight parameter by maximizing the marginal likelihood. <cit.> suggested two novel approaches for binary data, focusing on equivalence probability and tail area probabilities. <cit.> explored the use of the Hellinger distance to compare the posterior distributions of the control parameter from current and historical data, respectively. These techniques provide valuable insights on determining a specific fixed value for δ. However, in a fully Bayesian context, only <cit.> has developed methods for specifying the shape parameters of a Beta initial prior for δ, using two minimization criteria: Kullback-Leibler (KL) divergence and mean squared error (MSE). Therefore, in a fully Bayesian framework, eliciting the initial distribution of the weight parameter controlling the amount of historical information remains a challenging and underexplored area. The aim of this paper is to propose a novel Bayesian algorithmic approach for eliciting the initial prior distribution for δ in a somehow optimal way. This involves the use of a simulation-based calibrated Bayes factor, employing hypothetical replications generated from the posterior predictive distribution, to compare competing prior distributions for δ. Following the approach in <cit.>, a well-balanced prior should promote the integration of historical data when there is agreement with the current information, and limiting this integration when discrepancies between the two datasets arise. The paper is organized as follows. Section <ref> provides a review of the power prior methodology, discussing the use of the weight parameter as both a fixed and a random quantity. Furthermore, Section <ref> illustrates the proposed calibrated Bayes factor to elicit a well-balanced initial prior distribution for δ. Sections <ref> and <ref> explore the proposed methodology through simulation studies and real data analysis from two clinical trials, E2696 and E1694, that investigated the effectiveness of interferon in melanoma treatment <cit.>. Finally, Section <ref> provides concluding remarks about the discussed method, highlighting its strengths and limitations, and offers insights into future developments. § POWER PRIORS Power priors have become increasingly popular for the development of informative priors, especially in clinical trials where past information is often available. These priors effectively integrate knowledge from historical data into the specification of informative priors. Let consider θ as the vector parameter of interest in the model, and let y_0 represent the historical data with its corresponding likelihood function denoted by L(θ| y_0). The basic formulation of the power prior <cit.> is given by π(θ| y_0, δ) ∝ L(θ| y_0)^δπ_0(θ), where δ∈ [0,1] is the scalar weight parameter, and π_0(θ) is the initial – often non-informative – prior for θ. This model can be seen as a generalization of the classical Bayesian updating of π_0(θ). Additionally, as noted by <cit.>, the parameter δ plays a crucial role in determining the shape of the prior distribution for θ. Furthermore, updating the power prior in (<ref>) with the current data likelihood L(θ| y) yields the following posterior distribution of θ π(θ| y, y_0, δ) ∝ L(θ| y) L(θ| y_0)^δπ_0(θ). 
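As a simple illustration of the fixed-δ case (not an example taken from this paper), consider binomial data with a conjugate Beta initial prior: the power prior posterior is then available in closed form, and δ acts by discounting the historical successes and failures. A brief sketch with hypothetical counts:

from scipy import stats

def power_prior_posterior(y, n, y0, n0, delta, a0=1.0, b0=1.0):
    """Beta posterior for a binomial proportion under a fixed-delta power prior.

    Historical data (y0 successes out of n0) enter with weight delta, so the
    posterior is Beta(a0 + y + delta*y0, b0 + (n - y) + delta*(n0 - y0)).
    """
    a = a0 + y + delta * y0
    b = b0 + (n - y) + delta * (n0 - y0)
    return stats.beta(a, b)

# Sensitivity of the posterior mean to the amount of borrowing:
for delta in (0.0, 0.25, 0.5, 1.0):
    post = power_prior_posterior(y=12, n=40, y0=30, n0=60, delta=delta)
    print(f"delta={delta:.2f}  posterior mean={post.mean():.3f}")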
The formulation in (<ref>) is conditional on the weight parameter and requires a predetermined and known value for δ. Therefore, to ensure an appropriate level of historical information borrowing while managing prior-data conflict, sensitivity analysis is recommended. In addition to the dynamic methods outlined in the introduction, <cit.> also proposed several other statistical methods, including the Penalized Likelihood-type Criterion (PLC), Marginal Likelihood Criterion (MLC), Deviance Information Criterion (DIC), and the Logarithm of the Pseudo-Marginal Likelihood (LPML) Criterion. §.§ Hierarchical power priors A natural extension of the power prior in (<ref>) can be achieved by accounting for uncertainty about the weight parameter δ. This involves adopting a hierarchical power prior where δ is treated as a random variable. To achieve this, a Beta prior distribution is assigned to δ, leading to the so-called joint unnormalized power prior <cit.> for both θ and δ π(θ, δ| y_0) = π(θ| y_0, δ)π_0(δ) ∝ L(θ| y_0)^δπ_0(θ) π_0(δ), where π_0(θ) and π_0(δ) represent the initial priors for θ and δ, respectively. However, as noted by <cit.> and <cit.>, the formulation in (<ref>) lacks of a normalizing constant, leading to potential inconsistencies in the joint posterior distributions for (θ, δ) derived from different forms of likelihood functions, such as those based on raw data versus those based on the distribution of sufficient statistics <cit.>. Consequently, <cit.> proposed a normalized power prior which involves first setting a conditional prior on θ given δ, followed by an initial prior distribution for δ. The resulting joint normalized power prior for (θ, δ) is π(θ, δ| y_0) = π(θ| y_0,δ)π_0(δ) = L(θ| y_0)^δπ_0(θ) /C(δ)π_0(δ), where the normalizing constant C(δ) is C(δ) = ∫_Θ L(θ| y_0)^δπ_0(θ)dθ. In light of the current data the joint posterior distribution is π(θ, δ| y, y_0) = L(θ| y) π(θ, δ| y_0)/∫_0^1 ∫_Θ L(θ| y) π(θ, δ| y_0) dθdδ ∝ L(θ| y) L(θ| y_0)^δπ_0(θ) /C(δ)π_0(δ). The marginal posterior distribution of δ is π(δ| y, y_0) ∝π_0(δ)/C(δ)∫_ΘL(θ| y) L(θ| y_0)^δπ_0(θ) dθ. Integrating δ out in (<ref>), the marginal posterior of θ can be written as π(θ| y, y_0) ∝π_0(θ)L(θ| y)∫_0^1L(θ| y_0)^δπ_0(δ)/C(δ)dδ. The joint power prior framework offers the advantage of incorporating uncertainty regarding the weight parameter δ into the power prior formulation. This approach allows the data to determine the appropriate weight for historical information based on its compatibility with current observations. Furthermore, explicitly accounting for this uncertainty increases the flexibility in modeling the agreement between historical and current data. In addition, a crucial theoretical advantage of the joint normalized power prior formulation with respect to the formulation in (<ref>) is its adherence to the likelihood principle. This ensures that the posterior distributions in (<ref>) accurately reflect the compatibility between current and historical data. Furthermore, this approach has further theoretically justification, as the power prior formulation in (<ref>) is shown to minimize the weighted KL divergence <cit.>. From a computational perspective, the additional effort required for the normalized power prior compared to the unnormalized power prior involves computing the normalizing constant in (<ref>). For certain models, this integral can be solved analytically, resulting in a closed-form expression for the joint posterior as specified in (<ref>). 
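For instance, in the illustrative binomial setting used above (Beta(a_0, b_0) initial prior, y_0 historical successes out of n_0 trials), C(δ) reduces to a ratio of Beta functions, since the binomial coefficient raised to δ does not depend on θ and cancels in the normalized prior. A short numerical check with hypothetical counts:

import numpy as np
from scipy import special, integrate

a0, b0 = 1.0, 1.0          # Beta initial prior
y0, n0 = 7, 12             # historical successes / trials (hypothetical)

def C_analytic(delta):
    # Closed form of ∫ [θ^y0 (1-θ)^(n0-y0)]^delta Beta(θ | a0, b0) dθ
    return special.beta(a0 + delta * y0, b0 + delta * (n0 - y0)) / special.beta(a0, b0)

def C_numeric(delta):
    integrand = lambda t: (t**y0 * (1 - t)**(n0 - y0))**delta \
                          * t**(a0 - 1) * (1 - t)**(b0 - 1) / special.beta(a0, b0)
    val, _ = integrate.quad(integrand, 0.0, 1.0)
    return val

for d in (0.1, 0.5, 0.9):
    print(d, C_analytic(d), C_numeric(d))   # the two columns should agree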
However, for more complex models, the normalizing constant C(δ) must be determined numerically. Consequently, the posterior distribution in (<ref>) falls into the category of the doubly intractable distributions <cit.>, and numerical methods such as Markov Chain Monte Carlo (MCMC) <cit.> are required. § THE CALIBRATED BAYES FACTOR Eliciting a well-balanced initial prior distribution for δ has proven to be challenging. Intuitively, this prior should encourage borrowing when the data are compatible and limit borrowing when they are in conflict. In this section, we propose a calibrated Bayes factor, hereafter CBF, that is a simulation-based algorithmic technique designed to effectively discriminate between some competing initial Beta prior distributions for δ. The proposed CBF aims to provide more robust decisions about which initial Beta prior for δ may be used. Specifically, this involves analyzing the behaviour of the Bayes factor <cit.>, henceforth BF, using different hypothetical replications generated from the posterior predictive distributions, while assessing how surprising the observed value of the Bayes factor is. §.§ The Bayes factor The BF provides a general Bayesian method to assess the relative evidence in support of competing hypotheses based on their compatibility with the observed data. Furthermore, the BF represents the ratio between the posterior and the prior odds when comparing two distinct point hypotheses. Specifically, let consider two competing hypotheses about the initial prior distribution on the weight parameter δ|ℋ_0 ∼Beta(η_0,ν_0) vs. δ|ℋ_1 ∼Beta(η_1,ν_1), where η_i and ν_i represent the strictly positive Beta shape parameters under the hypothesis ℋ_i, for i = {0,1}. The corresponding Bayes factor can be expressed as the ratio of the two marginal likelihoods BF_0,1(y) = m(y |ℋ_1)m(y |ℋ_0), where the marginal likelihood is m(y |ℋ_i) = ∫_0^1 ∫_Θ L(θ| y) L(θ| y_0)^δπ_0(θ) π_0(δ|ℋ_i) dθ∫_Θ L(θ| y_0)^δπ_0(θ) dθdδ, with π_0(δ|ℋ_i) representing the initial Beta prior distribution for δ under the hypothesis ℋ_i, for i = {0,1}. Assuming equal prior probabilities for both the models, a BF exceeding one suggests stronger evidence in favor of ℋ_1. Conversely, a BF below one denotes a stronger evidence for ℋ_0. A BF close to one indicates no clear preference for either hypothesis, reflecting a similar degree of empirical evidence for both ℋ_0 and ℋ_1. Several approaches have been developed to summarize and classify the strength of evidence according to the observed BF. Firstly, <cit.> introduced a categorization, as illustrated in Table <ref>. Subsequently, <cit.> streamlined this scale by omitting one category and redefining the thresholds. Lastly, <cit.> further refined Jeffreys' scale with additional modifications. For most complex models the BF computation is challenging since the marginal likelihood is not analytically tractable. Therefore, numerical approximation methods become essential. A widely used algorithm is the so-called bridge sampling <cit.>. This method employs a Monte Carlo technique, generating samples from an auxiliary distribution that bridges the model's prior and posterior distribution. The generated samples are then used to calculate bridge sampling weights, which correct the bias introduced by the auxiliary distribution, providing an unbiased estimate of the marginal likelihood. Another noteworthy method is the Savage–Dickey algorithm <cit.>. 
This method approximates the BF by calculating the ratio of the posterior and prior densities at a model parameter value of zero. However, its use is limited to nested models and may be unstable if the posterior density significantly deviates from zero. §.§ Simulation-based calibration The BF has an inherent dependence on the observed data when used for decision-making. Consequently, decisions based solely on current observations may lead to potentially misleading conclusions due to the possible fluctuations and noise present in the data. Furthermore, <cit.> emphasized two crucial issues regarding BF computations: the instability of BF estimates in complex statistical models and the potential bias within these estimates. Therefore, to effectively and responsibly employ the BF, it is crucial to adjust and calibrate it, ensuring that the conclusions drawn are more robust and reliable. The concept of simulation-based calibration, hereafter SBC, was originally developed to validate the computational correctness of applied Bayesian inference methods <cit.>. In addition, <cit.> proposed a structured approach based on SBC to verify the accuracy of BF calculations. Their calibration method involves simulating multiple artificial datasets to assess whether a BF estimated in a given analysis is accurate or biased. This type of calibration approach is intuitive and logical, as it mirrors the classical approach of hypothesis testing, where the decision-making criterion is determined by the sampling distribution under the hypothesis of repeated sampling. The SBC-inspired method explored in this paper is motivated by the insights of <cit.>, who posited that the BF should be considered as a random variable before observing the sample. This perspective emphasizes the necessity of calibrating the BF to accurately account for the inherent randomness in the data. However, the analytic form of the BF distribution is frequently not available. In such cases, a simulation-intensive approach becomes a valuable tool for approximating this distribution <cit.>. This involves generating replicated datasets from some type of predictive distribution or other data-generating processes, estimating the marginal likelihoods with these synthetic datasets, and then computing the BFs. By iterating this process multiple times, an approximation of the BF distribution is obtained. Subsequently, once the data have been collected, the observed BF can be used as a measure of agreement between the observed data and the underline statistical model. Both <cit.> and <cit.> suggested a calibration method based on the prior predictive distribution. However, this approach is not suitable in our context due to the inherent bias in the replicated data from the prior predictive distribution towards the historical data y_0, as highlighted by the structure of Equation (<ref>). This can potentially yield replicated samples that are much more in agreement with the historical data than the historical data are with the current data. Consequently, we explored the use of hypothetical replications generated from the posterior predictive distribution to approximate the distribution of the BF; where the predictive distribution is given by p(y^rep | y)=∫_Θ L(y^rep |θ) π(θ| y, y_0, δ) dθ. A key advantage of this approach is the use of the information in the current data y via the posterior distribution, focusing on a relevant region within the parameter space <cit.>. 
Furthermore, the BF computed using the replications from the posterior predictive distribution BF_0,1(y^rep), henceforth replicated BF, mimics the behavior of the BF using the original data y when a specific model is the true data-generating mechanism. Our CBF approach aims to define a decision criterion that not only assesses the inherent variability of the BF but also incorporates the observed data. Thus, to effectively ensure a more comprehensive and balanced decision rule, it is essential to define a criterion that incorporates the observed BF, denoted by BF_0,1(y), as a measure of surprise, favoring scenarios where it is less unexpected. We propose a criterion based on * The survival function of the BF distribution, denoted by S_BF_0,1(·). * The inclusion of the observed BF within a defined Highest Posterior Density Interval (HPDI). Specifically, the decision criterion focuses on selecting alternative hypotheses that provide stronger evidence against the null hypothesis. This is achieved by giving preference to the BF distribution that yields more values in favour of the alternative hypothesis. This implies that the survival function of the BF distribution, calculated at the value where the BF indicates equal support for both hypotheses, is greater than 0.5. Furthermore, the inclusion of the observed BF within a defined HPDI assesses its coherence with respect the underlying distribution. This dual approach ensures a balanced and comprehensive decision rule, accounting for both the BF distribution and the surprise measure associated with the observed BF. §.§ The procedure To streamline our method, we focus on the logarithmic transformation of the BF, referred to as Log-BF. This transformation is advantageous because values less than zero suggest evidence for the null hypothesis ℋ_0, while values greater than zero indicate evidence for the alternative hypothesis ℋ_1. To further simplify comparisons and enhance interpretability, we use a reference null hypothesis ℋ_0 computing the Log-BF between each alternative hypothesis and the reference, reducing from (M2) to (M-1) Log-BFs. The chosen reference null hypothesis is the Beta(1,1) prior for δ, a commonly used non-informative prior for the weight parameter <cit.>. Notably, this choice is motivated by our goal of establishing a more informative prior for the weight parameter compared to the standard uniform prior. The initial step involves defining a reasonable grid of potential alternative hypotheses related to the Beta initial prior for the weight parameter, denoted by ℋ = {ℋ_1, …, ℋ_i, …, ℋ_M}, with ℋ_i: δ∼Beta(η_i, ν_i), where η_i and ν_i represent the shape parameters under ℋ_i, for i=1, …, M. After computing the observed Log-BF between hypothesis ℋ_i and ℋ_0, the next step involves generating K hypothetical samples from the posterior predictive distribution under each alternative hypothesis ℋ_i, that is y_ℋ_i^rep = (y_ℋ_i, 1^rep, …, y_ℋ_i, K^rep), for i, …, M. Then, the replicated Log-BF is computed between the alternative hypothesis ℋ_i and the null hypothesis ℋ_0, for all combinations of i = 1, …, M and k = 1, …, K. Subsequently, following the previous paragraph, it is essential to establish a criterion based on the distribution of the Log-BF, obtained using the replicated Log-BFs, and that incorporates the observed Log-BF as a measure of surprise, favoring scenarios where it is less unexpected. 
Our criterion aims to identify BF distributions where the alternative hypothesis ℋ_i, as given in (<ref>), is more likely than the null hypothesis, ℋ_0: δ∼Beta(1,1). This is obtained when the cumulative distribution function (CDF) of the Log-BF at zero is below 0.5 – or when the survival function at zero exceeds 0.5 – suggesting stronger evidence in favor of the alternative hypothesis. The observed Log-BF is also incorporated in our decision criterion by considering the hypotheses associated with values greater than zero, reflecting stronger empirical evidence relative to the null hypothesis. The robustness of the observed Log-BF is evaluated by assessing its position within the approximated distribution. Ideally, the observed Log-BF should be within a specific HPDI, indicating that it is not an outlier but rather a value consistent with the underlying Log-BF distribution. Consequently, the well-balanced prior is determined using the following criterion ℋ_Opt = ℋ^⋆ if ℋ^⋆ > 0 ℋ_0 otherwise, with ℋ^⋆ = *arg max_i=1, …, M{1_{S_logBF_0,i(0)>0.5 }S_logBF_0,i(0)×1_{logBF_0,i^obs∈ (·)%HPDI}logBF_0,i^obs}, where S_logBF_0,i(0) represents the survival function of the Log-BF distribution evaluated at zero, and logBF_0,i^obs is the observed Log-BF between the alternative hypothesis ℋ_i and the null hypothesis ℋ_0, for i = 1, …, M. The first indicator function focuses on distributions where the alternative hypothesis is more probable than the null hypothesis. That is, selecting distributions where the survival function of the Log-BF at zero is greater than 0.5, indicating a higher evidence for the alternative hypothesis. The second indicator function evaluates the presence of the observed Log-BF within a specific HPDI, working from a Bayesian perspective as a measure of surprise. Furthermore, the computational step of the CBF procedure are summarized in Algorithm 1. Figure <ref> provides an illustrative example of the CBF procedure for selecting a well-balanced initial Beta prior for δ. Let consider a null hypothesis ℋ_0 and three alternative hypotheses ℋ_i, for i = 1,2,3, regarding the initial Beta prior of δ. The Log-BF distribution for ℋ_1, represented by the purple curve, shows a higher probability of negative Log-BF values, suggesting a stronger evidence in favor of the null hypothesis. Conversely, the approximated Log-BF distribution for ℋ_2 and ℋ_3, depicted by the orange and green curve respectively, provide a stronger evidence in favor of the associated alternative hypotheses. However, although ℋ_2 shows an observed Log-BF within the selected HPDI, denoted by the dashed lines, the corresponding observed Log-BF value is negative, suggesting empirical evidence in favor of ℋ_0. Only ℋ_3 demonstrates a positive observed Log-BF, which suggests stronger empirical evidence for the alternative hypothesis, but also falling within the respective HPDI. Accordingly, based on the selection criterion in (<ref>), a well-balanced prior for δ is the one associated with the alternative hypothesis ℋ_3. § SIMULATION STUDIES In this section, we assess the efficacy and applicability of the CBF approach through simulation studies. The simulations comprise three distinct studies, each focusing on different statistical distributions and their parameters. The main objective of each simulated study is to evaluate the effectiveness of the proposed method in selecting a well-balanced prior. 
This method aims to incorporate extensive past information when historical and current data are in agreement and to minimize this incorporation when there is a discrepancy between the two datasets. To assess this, we analyze a historical dataset and a series of current datasets that progressively diverge from it. Additionally, an expanded grid of hyperparameters for the Beta prior distribution is employed, ranging from 0.5 to 6 in increments of 0.5. According to some sensitivity checks, each simulation study considers a 75% HPDI threshold for the selection criterion in (<ref>). Firstly, let consider a specific discrete example for count data. Let y_0 = (y_01,…, y_0N_0) and y = (y_1,…, y_N) denote the historical and current dataset, respectively. It is assumed that the datasets are generated from two different Poisson distributions, each characterized by rate parameters λ_0 for the historical data and λ_c for the current data. Consequently, the BF is given by BF_0,i(y) =∫_0^1W(δ)Beta(δ|η_i, ν_i)dδ∫_0^1W(δ)Beta(δ| 1, 1)dδ, i=1, …, M. where W(δ) = (δ N_0 + β_0)^δ N_0 y_0 +α_0Γ(δ N_0 y_0 )Γ(N y+δ N_0 y_0 + α_0)P_y^-1[(N + δ N_0 +β_0)^N y+δ N_0 y_0 +α_0]^-1, with P_y = ∏_j=1^N y_j!, y_0 = 1/N_0∑_h=1^N_0y_0h and y = 1/N∑_j=1^Ny_j. For further details see Appendix <ref>. Figure <ref> illustrates the evolution of the selected prior for δ in relation to the level of disagreement between the current and the historical studies. This disagreement is quantified by the difference between the historical rate parameter and the current rate parameter. The plot shows the median values of the selected prior for δ as points, with error bars indicating the first and third quartiles. Additionally, values in brackets indicate the selected hyperparameters chosen from the predefined grid. The plot highlights the procedure's ability to select an appropriate prior according to the level of disagreement between the datasets. Specifically, as the disagreement increases, the selected prior shifts from a left-skewed Beta distribution – a Beta(5.5,0.5), suggesting a substantial incorporation of historical data (equal weight to the actual data), to a right-skewed Beta distribution – a Beta(0.5,6), implying a more conservative use of historical information. The left panel of Figure <ref> shows the mean and standard deviation (SD) of the marginal posterior distribution for the rate parameter λ, comparing two distinct scenarios. The first scenario employs the default Beta(1,1) as initial prior for δ, while the second use the chosen Beta initial prior determined according to the selection criterion in (<ref>). The comparison highlights that the posterior mean in both scenarios shows a similar increasing trend. This can be attributed to the progressive increase in the current rate parameter λ_c which results in a greater discrepancy between current and historical data, leading to a higher discount of historical data. Furthermore, the marginal posterior distributions using the CBF selected prior are less diffused than those derived from the standard Beta(1,1) prior, leading to more precise results for λ. Specifically, when there is minimal disagreement between current and historical data, the posterior standard deviation for λ is lower when using the CBF selected prior. As the disagreement increases, the difference between the standard deviations tends to reduce. Finally, when the disagreement is high, the posterior distribution under the CBF prior for δ remains less dispersed compared to the standard uniform prior. 
The right panel of Figure <ref> focuses on the posterior distribution for the weight parameter δ, comparing the two previously described scenarios. When there is a low level of disagreement between the current and historical data, the CBF prior leads to posterior distributions that incorporate more historical information compared to the Beta(1,1) prior for δ. Conversely, as the disagreement increases, the CBF prior becomes more conservative, discounting the historical data to a greater extent. Furthermore, the CBF prior consistently leads to more precise estimates with lower variability in the posterior distribution. A similar simulation study is conducted for the binomial model, which is frequently applied in medical contexts involving power priors. Let N_0 and N denote the number of Bernoulli trials in the historical and current studies, respectively. Furthermore, let y_0 and y represent the number of successes in these studies. Assuming a binomial likelihood with success probability θ for each study, and an initial Beta prior for both θ and the weight parameter δ, the BF is BF_0,i(y) = ∫_0^1 BBin(y | N, δ y_0 + p, δ (N_0 - y_0) + q) Beta(δ|η_i, ν_i) dδ∫_0^1 BBin(y | N, δ y_0 + p, δ (N_0 - y_0) + q) Beta(δ| 1, 1) dδ, i=1, …, M, where BBin(·| N, α, β) is the beta-binomial discrete distribution. For further details see Appendix <ref>. Figure <ref> shows the evolution of the selected prior for δ when analyzing a historical dataset followed by a series of current datasets. A Beta(1,1) is used as initial prior distribution for θ, and the historical binomial likelihood presents a success probability of 0.2. This simulation study demonstrates the proposed method's ability to dynamically adapt the amount of historical information borrowed, based on the agreement between current and historical data. Specifically, when there is almost perfect agreement, the selected prior for δ is a Beta(6,0.5), indicating a higher level of historical information borrowing. As the level of agreement decreases, the prior for δ progressively shifts to Beta distributions that reduce the incorporation of historical data, reaching a Beta(0.5,6) distribution in cases of high disagreement. Figure <ref> shows the marginal posterior means and standard deviations for θ and δ comparing the uniform and CBF selected initial prior for the weight parameter. Particularly, the left panel of Figure <ref> highlights that the CBF prior for δ results in marginal posterior distributions for θ that are generally more concentrated. This is particularly evident when there is either a low or high level of disagreement between current and historical data. Additionally, the posterior mean of θ follows a consistently similar trend when compared to the results obtained using the Beta(1,1) prior on δ. Therefore, using the CBF prior leads to more accurate inferential conclusion in general. The right panel of Figure <ref> illustrates that when the discrepancy between historical and current data is minimal, the CBF prior for δ results in posterior distributions that incorporate a greater amount of historical information. As the discrepancy increases, the posterior distributions become more conservative, increasingly discounting the historical data. Furthermore, it is evident that the CBF prior for δ produces less diffused posterior distributions compared to those derived from a uniform prior. Finally, we investigate a continuous response example commonly encountered in replication studies <cit.>. 
A crucial question in this context is how effectively a replication (current) study has reproduced the results of an original (historical) study. Let μ be the unknown true effect size, with μ̂_s representing the estimated effect size from study s, where s ∈{o, r} denotes “original” and “replication” studies, respectively. Furthermore, the effect size estimates are assumed to be normally distributed. μ̂_s |μ∼N(μ, σ_s^2), where σ^2_s represents the variance of the estimated effect size μ̂_s, assumed to be known. The BF is given by BF_0,i(y) =∫_0^1 N(μ̂|μ̂_o, σ^2+σ_o^2 / δ) Beta(δ|η_i, ν_i) dδ∫_0^1 N(μ̂|μ̂_o, σ^2+σ_o^2 / δ) Beta(δ| 1, 1) dδ, i=1, …, M. Further details are provided in Appendix <ref>. In Figure <ref> is assumed that the effect size in the original study follows a normal distribution with mean μ_o = 0 and variance σ^2_o = 1. We incrementally varied the true effect size of the replicated study in steps of 0.2, starting from a scenario of perfect agreement, where the replicated study's effect size is μ_r = 0, and extending to scenarios of progressively greater disagreement, reaching a point where μ_r = 6. Our method effectively selected a well-balanced prior to address the plausible level of agreement between the original and replicated studies. Similar to the binomial case, a Beta(6,0.5) prior is chosen when the agreement is high. As the disagreement increases, the amount of borrowed information is progressively reduced, selecting the prior that most limits the incorporation of historical information – a Beta(0.5,6) – in cases of high disagreement. Figure <ref> shows that the CBF procedure effectively selects a prior for the weight parameter, reducing the standard deviation of the marginal posterior for μ compared to the Beta(1,1) prior, while maintaining a similar posterior mean for the effect size μ. Additionally, similar results to those observed in previous simulation studies are obtained for the posterior distribution of δ. § MELANOMA CLINICAL TRIAL The efficacy of the CBF procedure on real data is evaluated through the analysis of two clinical trials. This analysis incorporates recent data from a new trial along with historical data from a previous study to assess earlier findings. Two melanoma trials conducted by the Eastern Cooperative Oncology Group (ECOG), specifically E2696 and E1694, are examined, involving 105 and 200 patients respectively. For further details see <cit.>. These trials investigate the effects of interferon alfa-2b (IFN) treatment on patient survival rates. The E2696 trial assesses the efficacy of combining the GM2-KLH/QS-21 (GMK) vaccine with high-dose IFN therapy compared to the GMK vaccine alone in patients with resected high-risk melanoma. Furthermore, the E1694 trial evaluates the effectiveness of the GMK vaccine versus high-dose IFN therapy in a comparable group of patients. In conclusion, the findings from the E1694 trial corroborate earlier results from E2696, demonstrating that both intravenous and subcutaneous IFN can significantly reduce the relapse rate in melanoma patients. Figure <ref> presents the survival curves from both trials, highlighting the beneficial impact of interferon treatment on patient survival. A Bayesian logistic regression model is applied to data from the E1694 trial and additional historical information from the E2696 trial is integrated using a normalized power prior as in (<ref>). The analysis includes four additional covariates: age, sex, performance status, and treatment indicator. 
Parameter estimation is conducted using the probabilistic programming language Stan <cit.> to perform Markov Chain Monte Carlo (MCMC) sampling via the R package <cit.>. This involves four independent chains, each with 2000 iterations, discarding the first 1000 iterations as burn-in. To determine a well-balanced initial prior for the weight parameter using the CBF procedure, as described in Section <ref>, the bridge sampling approximation of the BF is employed via the R package <cit.>. Furthermore, due to the lack of a closed-form expression for the normalizing constant in (<ref>), the suggestion by <cit.> to approximate the function log C(δ) through linear interpolation is followed. Specifically, the grid-based approximation method proposed by <cit.> is used. This method involves defining a grid of points focused on areas where the derivative C'(δ) is nearly zero, and employing a generalized additive model (GAM) for the linear interpolation step across a larger grid. Initial priors for the coefficients of the four covariates are set using a weakly informative approach as outlined by <cit.>. Specifically, a normal distribution with mean 0 and standard deviation 10 is assigned to each coefficient. Additionally, for the initial prior of the weight parameter, a Beta(5, 0.5) is chosen based on the CBF procedure. This process involves a comprehensive evaluation of competing initial priors, considering a range of different Beta hyperparameters from 0.5 to 6 in increments of 0.5. Furthermore, a parallel processing strategy is employed to efficiently manage the computational effort required for this extensive hyperparameter exploration, ensuring streamlined and effective computational execution of the methodology. Table <ref> presents a comparison of posterior estimates for regression parameters under different initial Beta priors for δ. These estimates include posterior means, standard deviations, and 95% HPDIs. The priors compared are: the prior selected based on the CBF criterion described in (<ref>), the standard uniform prior, the Jeffreys' prior, and the Beta(2,2) prior. Notably, the well-balanced prior identified using the CBF procedure – a Beta(5, 0.5) – results in consistently smaller posterior standard deviations for the treatment, sex, and performance status parameters compared to the other evaluated Beta priors, indicating more precise inferential conclusions. Additionally, the HPDIs for all the parameters of interest are narrower when using the prior from the CBF procedure. This suggests that, by incorporating a great amount of historical information, the CBF selected prior efficiently enhances the precision of the posterior parameter estimation. Figure <ref> further emphasizes this result by showing the marginal posterior distribution for the four parameters of interest with their corresponding 95% HPDI comparing the prior derived from the CBF procedure and the conventional Beta(1,1) prior. § DISCUSSION The power prior method presents a flexible way to construct an informative prior by combining a prior distribution with the weighted likelihood of previous data. This combined posterior then serves as the prior for new studies. However, determining the appropriate weight parameter presents a significant challenge, whether it is fixed or its prior distribution is being evaluated. While several methods exist for setting a fixed weight parameter, fully Bayesian approaches for eliciting more informative priors are not usually addressed in the related literature. 
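For models without a closed-form normalising constant, such as the logistic regression above, log C(δ) = log ∫ L(y₀|θ)^δ π₀(θ) dθ must be approximated on a grid and interpolated. The paper follows the grid-plus-GAM approach cited above; the sketch below is a simpler stand-in (ours) that estimates each grid value by naive Monte Carlo over initial-prior draws and uses linear interpolation, which is adequate only when the initial prior covers the region favoured by the historical likelihood reasonably well.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.interpolate import interp1d

def log_C_interpolator(log_lik_y0, prior_draws, grid=None):
    """Estimate log C(delta) on a grid by averaging L(y0 | theta)^delta over draws from the
    initial prior, then interpolate. `log_lik_y0(theta)` returns the historical log-likelihood."""
    grid = np.linspace(1e-3, 1.0, 30) if grid is None else grid
    ll = np.array([log_lik_y0(th) for th in prior_draws])           # log L(y0 | theta_s)
    logC = np.array([logsumexp(d * ll) - np.log(len(ll)) for d in grid])
    return interp1d(grid, logC, fill_value="extrapolate")
```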
<cit.> highlighted that while the fully Bayesian approach is inherently flexible, it may not sufficiently address the disagreements between historical and current data. This issue frequently stems from the default use of a non-informative prior, which might not be sensitive enough to detect significant divergences. Consequently, we advocate for the use of a more informative prior specifically designed to detect potential conflict between historical and current datasets, improving the robustness of the resulting statistical inferences. Our proposed CBF procedure is a novel response to this challenge. It seeks to select a more informative prior than the conventional non-informative one by utilizing hypothetical replications derived from the posterior predictive distribution. The selected prior has minimal influence on the posterior central summary statistics while simultaneously achieving a smaller posterior variance for the parameters of interest. The efficacy of this approach is demonstrated through both simulation studies and the application to melanoma data, proving its robustness and effectiveness in distinguishing between different prior specifications. The ability of this method to select a well-balanced prior based on the agreement between historical and current data, as evidenced in the melanoma study, emphasizes its practical relevance in real-world applications. A crucial aspect of our CBF procedure is the choice of the HPDI for assessing the placement of the observed Log-BF within the distribution of replicated Log-BFs. This decision is crucial as it directly influences the interpretation of empirical evidence relative to the modeled hypotheses. A narrower HPDI is recommended when the aim is to limit the range of acceptable values, thereby enhancing the strength and reliability of empirical findings. Future research will focus on developing quantitative methods for determining the appropriate HPDI width. The methodology presented in this paper offers several areas for potential improvement. Firstly, the selection criteria outlined in (<ref>) could be refined to more effectively identify well-calibrated priors, particularly in cases of moderate agreement between historical and current data. Additionally, to reduce computational time, it is advisable to consider alternatives to the grid search method employed in this study. Instead of exhaustively exploring all hyperparameters within the grid, methods that target a relevant subset of the parameter space should be explored. Furthermore, future work should focus on providing a more comprehensive analysis of the theoretical properties of the CBF method. Finally, a thorough comparison with the optimal prior proposal by Shen (2023) and methods that provide an estimate for δ, possibly in terms of MSE or other measures, possibly using measures such as MSE or other relevant metrics, is a primary goal for future research. § SOFTWARE AND DATA AVAILABILITY All analyses were conducted in the R programming language version 4.2.3 <cit.>. The code and data to reproduce this manuscript are openly available at <https://github.com/RoMaD-96/CBFpp>. apalike § POISSON WITH UNKNOWN RATE PARAMETER Λ Let y_0 = (y_01,…, y_0N_0) denote a historical dataset, which is presumed to originate from a Poisson distribution characterized by a rate parameter λ_0 = 2. In this simulation, we adopt the conjugate prior approach, wherein: λ ∼Gamma (α_0, β_0), y_0|λ ∼Poisson (λ). Notably, the hyperparameters of the Gamma initial prior are set to α_0 = β_0 = 1. 
Moreover, the normalized power prior is π(λ, δ| y_0) = Gamma(λ|δ N_0 y_0 + α_0, δ N_0 +β_0)Beta(δ|η, ν), where y_0 = 1/N_0∑_h=1^N_0y_0h. Combining (<ref>) with the likelihood of the current data yields to the following joint posterior for (λ,δ) π(λ,δ| y_0, y) = L(λ| y)π(λ,δ| y_0)∫_0^1∫_0^+∞L(λ| y)π(λ,δ| y_0) dλdδ = Poisson(y |λ)Gamma(λ|δ N_0 y_0 + α_0, δ N_0 +β_0)Beta(δ|η, ν)∫_0^1W(δ)Beta(δ|η, ν)dδ where W(δ) = (δ N_0 + β_0)^δ N_0 y_0 +α_0Γ(δ N_0 y_0 )Γ(Ny+δ N_0 y_0 + α_0)P_y^-1[(N + δ N_0+β_0)^Ny+δ N_0 y_0 +α_0]^-1, with P_y = ∏_j=1^N y_j! and y = 1/N∑_j=1^Ny_j. Moreover, the BF is BF_0,i(y) =∫_0^1W(δ)Beta(δ|η_i, ν_i)dδ∫_0^1W(δ)Beta(δ| 1, 1)dδ, i=1, …, M. § BINOMIAL WITH UNKNOWN SUCCESS PROBABILITY Θ Let N_0 and N denote the number of Bernoulli trials in the historical and current studies, respectively. The terms y_0 and y represent the successes in these studies. Assuming a binomial likelihood with a success probability θ for each study and an initial Beta prior for both θ and the weight parameter δ, the normalized power prior is π(θ, δ| y_0) = [Bin(y_0 |θ, N_0)]^δBeta(θ| p,q) Beta(δ|η, ν)∫_0^1 [Bin(y_0 |θ, N_0)]^δBeta(θ| p,q) dθ = Beta(θ|δ y_0 + p, δ(N_0-y_0) + q )Beta(δ|η, ν). In light of the current data the posterior distribution is π(θ, δ| y, y_0) =L(θ| y) π(θ, δ| y_0)/∫_0^1 ∫_0^1 L(θ| y) π(θ, δ| y_0) dθdδ =Bin(y |θ, N ) Beta(θ|δ y_0 + p, δ(N_0-y_0) + q )Beta(δ|η, ν)∫_0^1 BBin(y | N, δ y_0 + p, δ (N_0 - y_0) + q) Beta(δ|η, ν) dδ, where BBin(·| N, α, β) is the beta-binomial discrete distribution. Therefore, the BF is BF_0,i(y) = ∫_0^1 BBin(y | N, δ y_0 + p, δ (N_0 - y_0) + q) Beta(δ|η_i, ν_i) dδ∫_0^1 BBin(y | N, δ y_0 + p, δ (N_0 - y_0) + q) Beta(δ| 1, 1) dδ, i=1, …, M. § GAUSSIAN WITH UNKNOWN MEAN Μ Let μ be the unknown true effect size, with μ̂_s representing the estimated effect size from study s, where s ∈{o, r} denotes “original” and “replication” studies, respectively. Furthermore, we assume that the effect size estimates are normally distributed. μ̂_s |μ∼N(μ, σ_s^2), where σ^2_s represents the variance of the estimated effect size μ̂_s, assumed to be known. Let consider an initial improper prior for the effect size parameter π_0(μ)∝ 1 and a Beta prior for the weight parameter δ then the normalized power prior is π(μ, δ| y_o) = N(μ|μ̂_o, σ_o^2δ) Beta(δ|η, ν). Updating the previous prior with the likelihood of the replicated data yields to the following posterior distribution π(μ, δ| y, y_o) =L(μ| y) π(μ, δ| y_o)/∫_0^1 ∫_-∞^∞ L(μ| y) π(μ, δ| y_o) dμdδ =N(μ̂|μ, σ^2) N(μ|μ̂_o, σ_o^2 / δ) Beta(δ|η, ν)∫_0^1 N(μ̂|μ̂_o, σ^2+σ_o^2 / δ) Beta(δ|η, ν) dδ. Furthermore, the BF is BF_0,i(y) =∫_0^1 N(μ̂|μ̂_o, σ^2+σ_o^2 / δ) Beta(δ|η_i, ν_i) dδ∫_0^1 N(μ̂|μ̂_o, σ^2+σ_o^2 / δ) Beta(δ| 1, 1) dδ, i=1, …, M.
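The Appendix C Bayes factor only requires a one-dimensional integral over δ, because the inner integral over μ reduces to the normal marginal N(μ̂ | μ̂_o, σ² + σ_o²/δ). A sketch mirroring the Poisson and binomial helpers above, with our own naming:

```python
import numpy as np
from scipy.stats import norm, beta as beta_dist
from scipy.integrate import quad

def effect_size_bf(mu_hat_r, se_r, mu_hat_o, se_o, eta, nu):
    """BF_{0,i} for the replication example: marginal density of the replication estimate,
    averaged over a Beta(eta, nu) versus a Beta(1, 1) prior on delta."""
    def integrand(delta, e, v):
        sd = np.sqrt(se_r**2 + se_o**2 / delta)
        return norm.pdf(mu_hat_r, loc=mu_hat_o, scale=sd) * beta_dist.pdf(delta, e, v)
    # the small lower limit avoids the division by zero at delta = 0
    num, _ = quad(integrand, 1e-12, 1.0, args=(eta, nu))
    den, _ = quad(integrand, 1e-12, 1.0, args=(1.0, 1.0))
    return num / den
```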
http://arxiv.org/abs/2406.18300v1
20240626123409
Shadow and quasinormal modes of the rotating Einstein-Euler-Heisenberg black holes
[ "Gaetano Lambiase", "Dhruba Jyoti Gogoi", "Reggie C. Pantig", "Ali Övgün" ]
gr-qc
[ "gr-qc" ]
http://arxiv.org/abs/2406.18064v1
20240626044941
Evaluating Quality of Answers for Retrieval-Augmented Generation: A Strong LLM Is All You Need
[ "Yang Wang", "Alberto Garcia Hernandez", "Roman Kyslyi", "Nicholas Kersting" ]
cs.CL
[ "cs.CL" ]
Similarities among top one day batters: physics-based quantification [ March 18, 2024 ==================================================================== § ABSTRACT We present a comprehensive evaluation of answer quality in Retrieval-Augmented Generation (RAG) applications using vRAG-Eval, a novel grading system that is designed to assess correctness, completeness, and honesty. We further map the grading of quality aspects aforementioned into a binary score, indicating an accept or reject decision, mirroring the intuitive "thumbs-up" or "thumbs-down" gesture commonly used in chat applications. This approach suits factual business settings where a clear decision opinion is essential. Our assessment applies vRAG-Eval to two Large Language Models (LLMs), evaluating the quality of answers generated by a vanilla RAG application. We compare these evaluations with human expert judgments and find a substantial alignment between GPT-4's assessments and those of human experts, reaching 83% agreement on accept or reject decisions. This study highlights the potential of LLMs as reliable evaluators in closed-domain, closed-ended settings, particularly when human evaluations require significant resources. § INTRODUCTION Since the launch of ChatGPT in November 2022, Large Language Models (LLMs) have become increasingly popular integrated components for organizations seeking to enhance productivity and enrich their product portfolio offerings. However, it is well known that GPT-4 Turbo's training corpora cut-off date is April 2023, rendering the model lacking in current events knowledge. Furthermore, as LLMs are pre-trained on public domain text and do not possess proprietary information, their capabilities are limited when it comes to our company's knowledge-intensive applications. Fine-tuning <cit.> is a technique that can be used to inject new knowledge into pre-trained LLMs by adjusting their gradient parameters through further, specialized training. However, OpenAI's fine-tuning API is currently only available through an experimental access program, and GPT-4 fine-tuning requires more effort to achieve significant improvements, as noted by <cit.>. Retrieval-augmented generation (RAG) was proposed initially by <cit.>. This method involves storing extra knowledge in a non-parametric dense vector index and using a pre-trained neural retriever to search relevant context, followed by generating content with a pre-trained sequence-to-sequence (seq2seq) model. <cit.> argue that RAG consistently outperforms unsupervised fine-tuning on a wide range of knowledge-intensive tasks. A RAG application leveraging LLMs enhanced with the company's proprietary knowledge has become one of the pivotal factors advancing the adoption of GPT-4-based applications at our enterprise, a global payment technology company. This phenomenon highlights the need to apply viable approaches to evaluate RAG applications, as lacking performance metrics poses a risk to the enterprise' business and may have negative consequences. One approach to evaluate the quality of RAGs focuses on their unique characteristic: that they consist of a Retriever model and a Generator model. Therefore, studies have used metrics such as context relevance, and answer relevance to evaluate the two components separately then assess the answer faithfulness between them <cit.>. The other paradigm for evaluating RAG applications treats them as traditional question answering systems. 
Researchers have proposed metrics such as SAS, a semantic answer similarity estimation <cit.>, and F1/EM scores <cit.> to evaluate these applications on public question answering (QA) datasets. <cit.> propose using strong LLMs such as GPT-4 to evaluate other LLMs. They argue that traditional benchmarks cannot effectively align quality measurement with human preferences in open-ended tasks. In some companies, RAG applications are mostly developed for a closed-domain, closed-ended setting, where employees seek informative answers based on the organizations' proprietary knowledge base. Additionally, external-facing RAG applications allow clients to search for information that is not necessarily available on the Internet. Therefore, it is crucial to research and propose quality evaluation approaches and metrics that are suitable for the business and align with the companies' values. To study this, we introduce a benchmark dataset called VND-Bench, comprising 155 high-quality questions across 14 subject areas in a proprietary payment network data domain. We also collect their corresponding answers from a communication channel on an internal employee collaboration platform to serve as ground truth labels. We build a vanilla RAG application, tRAG, as a test bed and ask it to answer the 155 questions. We then request five human experts, GPT-4, and Llama 2 to assess tRAG's answer quality based on the labels and vRAG-Eval, a set of grading instructions and a rubric. Our results show that GPT-4's evaluation agrees with human evaluators' scores at a rate of 83% based on a binary accept/reject category. We summarize our contributions as follows: * We design a RAG evaluation mechanism, vRAG-Eval, which includes a set of grading instructions and a rubric that measures answer quality in three aspects: correctness, completeness, and honesty, resulting in one single score. It can be readily converted to a binary accept/reject decision that suits business settings. * We build a high-quality benchmark dataset, VND-Bench, with 155 questions/answers pairs that cover 14 subject areas in data domain. * We conduct experiments using LLMs as a RAG’s quality evaluators in a closed-domain closed-ended single-turn QA setting and find that GPT-4’s grading agrees with human experts’ opinions at a rate of 83% in terms of answer accepted or rejected decisions. The rest of the paper is organized as follows: in section 2, we introduce related work. And in section 3, we explain the motivation, problem setting, and our method of experimentation. In section 4, we provide our experimental results. We make concluding remarks in sections 5&6. § RELATED WORK Embedding and semantic similarity <cit.> propose the BERTScore metric as an automatic evaluation method for text generation tasks. Unlike traditional lexical approaches that rely on word matching, BERTScore sums the cosine similarities between the token embeddings of both two sentences: the reference answer and the QA system-generated answer. <cit.> introduce SAS, a metric designed to estimate the semantic similarity between ground-truth annotations and answers generated by a QA model. It is found that semantic similarity metrics, especially those based on contemporary transformer models, align more accurately with human assessments compared to the conventional lexical similarity metrics. Reference-free evaluation In scenarios where human annotations are not readily available, <cit.> present RAGAS, a framework for automating the assessment of RAG systems. 
This framework considers RAG's 2-module composition and proposes three key metrics: Context Relevance to evaluate the Retriever, Answer Relevance to assess the Generator, and Faithfulness to measure the coherence between the two modules. Similarly, ARES (<cit.>) evaluates along the same 3 dimensions to assess the quality of RAG components. While not entirely reference-free, ARES leverages a few hundred human annotations during evaluation and employs a lightweight fine-tuned LLM as a judge, rather than relying on a frozen LLM. LLM as a judge Two projects closely related to our research are LLM-as-a-Judge <cit.> and A Case Study on the Databricks Documentation Bot <cit.>. To evaluate LLM-based chat assistants, <cit.> examine the usage and limitations of LLMs as judges for open-ended questions. This setting may not be suitable for most RAG applications in enterprises, where users typically seek definitive answers. <cit.> employ LLMs to generate grades as a weighted composite score of Correctness, Comprehensiveness, and Readability. While they report an alignment rate of 80% with human scores on individual factors, their grading system lacks a crucial element: one single metric that provides a decision-making-ready quality score. In business-centric settings, this limitation become particularly significant. § MOTIVATION AND PROBLEM SETTING We investigate the challenges of automatically evaluating the qualities of RAG-generated answers with respect to correctness, completeness, and honesty, in aligning with human preferences as reflected in our experiment's human grading. The objective is to determine whether LLMs can reliablly assess the quality of answers generated by RAG systems or not. §.§ Preparation We first develop a test bed RAG application, tRAG. Its knowledge base consists of a corpus of 18 PDF documents, totaling 15M tokens, which includes payment processing specifications, API reference guides, data standard manuals, and others. During the preprocessing stage, for each document d in the knowledge corpus, we split its content into chunks such that d = {c_i}_i=1^n, and vectorize each text chunk c_i using an embedding model M:v_i=M(c_i). Then we store the key value pair <v_i,c_i> into a local vector store instance. Appendix <ref> provides configurations and technical details about the in-memory vector store. Appendix <ref> illustrates a high-level workflow of this preprocessing stage. Next, we curate a benchmark dataset, VND-Bench, by collecting question-answer pairs directly from an internal employee communication channel. This channel is frequently used by a diverse range of data users across the organization to seek help in understanding specific topics related to a proprietary Payment Network's operations and transactional data semantics. We collect a total of 155 closed-ended questions along with their corresponding answers, which we consider to be the ground truth for our experiment. When multiple people respond to a question on the channel, we aggregate their answers into one single response. The set of questions are categorized into subject areas specified in Table <ref>. §.§ Retrieval, Augmentation, and Generation For each question Q in the VND-Bench dataset, tRAG embeds the text into a fixed-length dense vector using model M:q=M(Q), then conducts distance search Δ within the knowledge database, and returns the top K most relevant document chunks matching the query. K = iargmin Δ(v_i,q) During the Augmentation stage tRAG concatenates those top K chunks as the context C. 
C = {c_j | j ∈ K} And combines with the question to form a prompt: Q ⊕ C . This prompt is then sent to the answer generator. We utilize GPT-4 model as the tRAG's answer generator model G. The resulting response, which constitutes the model's inference output, is evaluated in our experiment. Appendix <ref> illustrates tRAG's single-turn question answering workflow. §.§ Grading Method We compile a fielded dataset containing 155 entries with the following columns: Subject Area, Question, Label, tRAG’s Answer. Additionally, we design a grading rubric as illustrated in Table <ref> to evaluate tRAG's Answer. Each score assesses a distinct aspect of answer quality. Specifically, although both incorrect answers and responses of unknown lack value, we argue that hallucination may impose significant harm to mission-critical businesses. Consequently, we penalize hallucination with the lowest score of 1, while honestly acknowledging an unknown response earns a score of 2. A correct answer warrants a score of 4, while a RAG application's extra effort to provide supplementary details is commended with a score of 5, reflecting the value of answer completeness. When a score of 3 is assigned, it indicates that the answer quality is borderline, potentially containing false information. LLMs grade answer quality through zero-shot learning, wherein a constant template as part of the vRAG-Eval, is designed to populate prompts for each question. Table <ref> illustrates the grading template. In our experiment, inputs to the evaluators are a sequence of tRAG's answers ŷ_1, ŷ_2, ⋯, ŷ_k together with a sequence of labels y_1, y_2, ⋯, y_k. The quality scores given by evaluators can be generally denoted as S_E := P(Ŷ=Y), where E ∈{GPT-4, Llama 2, Human}. Appendix <ref> depicts the workflow of the grading experiments. To ensure reproducibility and prevent potential grading hallucinations, we fix the temperature parameter for each LLM call at T=0.0. § EXPERIMENTS We ask GPT-4, Llama 2 7B, Llama 2 13B, and human experts at our company to evaluate the quality of the aforementioned tRAG’s answers respectively and independently. Their assessment is guided by the vRAG-Eval instructions and grading rubric. §.§ GPT-4 vRAG-Eval instructs that grading responses should be in the format of “Score: [[score]], Reason: [[explanation]]”. GPT-4 illustrates the capability to strictly adhere to the instructions. Table <ref> shows an example question and it's ground truth answer (“Label”) in the VND-Bench dataset. And Table <ref> illustrates GPT-4's grading response. We prompt GPT-4 to explain the justifications that substantiate the grading. It allows us to understand LLM’s thinking process. In our earlier design of vRAG-Eval, we explicitly emphasized Correctness, Completeness, and Honesty. It became evident that the GPT-4 evaluator partially deviated from the Label then graded answers based on its own understanding of those quality metrics. For instance, as illustrated in Table <ref>, tRAG’s answer for the same question was graded 4 instead of 2. This example underscores the importance of developing clear and specific grading standards, contrasting with vague guidelines that can be applied universally in public settings but require adaptation for company-specific purposes. Appendix <ref> shows the initial design of grading instruction and rubric. §.§ Human We collect human grading that score tRAG’s answer quality. These experts are experienced data and machine learning practitioners. 
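A minimal sketch of the indexing, retrieval, and prompt-assembly steps described in Section 3.2, with an unspecified embedding function `embed` standing in for the model M and plain Euclidean distance standing in for Δ; the actual tRAG configuration (chunking, vector store, distance metric) is given in the appendices and may differ from these assumptions.

```python
import numpy as np

def build_index(chunks, embed):
    """Store (embedding vector, chunk text) pairs; `embed` stands in for the embedding model M."""
    return [(np.asarray(embed(c)), c) for c in chunks]

def retrieve(question, index, embed, k=4):
    """Return the k chunks whose embeddings are nearest (Euclidean) to the query embedding."""
    q = np.asarray(embed(question))
    dists = [np.linalg.norm(v - q) for v, _ in index]
    return [index[i][1] for i in np.argsort(dists)[:k]]

def build_prompt(question, context_chunks):
    """Augmentation step: concatenate the retrieved chunks and prepend the question."""
    return question + "\n\nContext:\n" + "\n\n".join(context_chunks)
```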
The 155 Q&A pairs in VND-Bench are collected in two phases. The answers to the first 52 questions provided by tRAG are graded by three human experts, while the next set of 103 questions are graded by two others. To obtain a robust and representative view of humans' opinions, we choose the median function to sample the first 52 grading scores from three human experts, and then randomly select the remaining scores from the other two experts. The score distributions are illustrated in Figure <ref>. §.§ Compare GPT-4 with Human The overall agreement between GPT-4 and Human evaluators at each of 5 quality score levels defined by the vRAG-Eval rubric is 29.7%. Figure <ref> shows comparison at each level. A closer analysis of grading samples reveals that the two evaluators may have subtle opinion differences at the exact level of the score assigned. Table <ref> provides an example question and label pair, then tRAG’s answer and GPT-4’s grading are shown in Table <ref>. Human scores tRAG’s answer at quality level 2, with no explanations provided (we did not request any). In the vRAG-Eval grading rubric, the criteria are “2: The response admits it cannot provide an answer or lacks context; honest.”. Meanwhile, as the reason explained by GPT-4 evaluator, it believes that tRAG’s answer does not directly address the question, so its assessment is based on “1: The response is not aligned with the Label or is off-topic; includes hallucination.” We also believe that GPT-4 explanation justifies the score 1. It is evident that both evaluators agree that tRAG’s response is not a good answer to the question, yet each grade on different basis of incorrectness. <cit.> confirms that both humans and LLMs struggle to hold the same standard for the same score when grading on high precision. However, the definitions of “high precision” and “low precision” are subjective and can vary based on individual perspectives. To address this issue, we propose aligning with the typical thumbs-up and thumbs-down gestures used in chat applications. It is also appropriate in business settings, where a binary quality scale effectively predicts whether an answer can be accepted or rejected by clients. Therefore, we convert the 5-grading scale to binary level: 1&2&3 (answer rejected), 4&5 (answer accepted). We observe an impressive 82.6% agreement rate between GPT-4 and human evaluators. Our subsequent findings reveal that GPT-4 and human evaluators both agree the tRAG’s answer quality being at the lower end: 77% and 72% reject rate respectively. Note that tRAG is a hastily developed RAG application only to create a test bed. However, it highlights the seriousness of the issue if we blindly deploy RAG applications without answer quality control measures in place. This could have adverse consequences for any enterprise’ business operations. §.§ Llama 2 Llama 2 is an open-source language model (LLM) that is available in three versions, each with a different parameter size: 7B, 13B, and 70B. Due to our infrastructure constraints, we are only able to experiment with two of the smaller versions, 7B and 13B. Similar to the GPT-4 experiment, we use the VND-Bench dataset, which includes questions, labels, and tRAG’s responses. We then ask 7B and 13B to evaluate tRAG’s quality based on vRAG-Eval’s grading instructions and rubric. 7B gives a uniform score of 4 to all grading requests, leading us to speculate that at this not-so-large parameter size, Llama 2 may not be intelligent enough to discern the quality of answers. 
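Once grading replies follow the requested "Score: [[score]], Reason: [[explanation]]" format, the comparison can be reproduced mechanically: parse the 1–5 score, map 1–3 to reject and 4–5 to accept, and compute the fraction of questions on which the two evaluators' decisions coincide. The sketch below is our own illustration and assumes scores have already been collected as lists.

```python
import re
import numpy as np

def parse_score(reply):
    """Extract the 1-5 score from a reply of the form 'Score: [[4]], Reason: ...'."""
    m = re.search(r"Score:\s*\[\[(\d)\]\]", reply)
    return int(m.group(1)) if m else None

def accept(score):
    """Binary mapping used above: scores 1-3 -> reject, 4-5 -> accept."""
    return score >= 4

def agreement_rate(llm_scores, human_scores):
    """Fraction of questions with matching accept/reject decisions."""
    llm = np.array([accept(s) for s in llm_scores])
    hum = np.array([accept(s) for s in human_scores])
    return float(np.mean(llm == hum))
```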
In contrast, 13B shows slightly more variation in scores, with 127 answers receiving a quality grade of 4 and the remaining 28 answers getting a grade of 3. We are left wondering why Llama 2 favors certain answers over others, and how confident it is in its grading. To address this, we add an additional instruction text, as shown in Table <ref>, to elicit Llama 2’s confidence level. This approach, coined as “verbalized probability” by <cit.>, allows the LLM to express its confidence level in natural language without use of model logits. Figure <ref> shows the correlation between the 13B evaluator's confidence and answer quality scores. An approximate diagonal pattern indicates that it tends to assign a grade of 4 when highly confident (80%) and a grade of 3 when less confident (50%). § DISCUSSION While LLMs continue to grow in size and intelligence, utilizing powerful LLMs as an automated alternative to human expert graders is a promising and prominent area of research. This approach complements traditional lexical and semantic measurements and demonstrates exceptional explainability and human preference alignment capabilities. We identify several areas for potential future research and practice direction: Enterprise's Values In designing the vRAG-Eval grading system, we incorporate honesty into the grading rubric to reflect that human evaluators prefer answers that admit uncertainty over hallucination. We also conduct preliminary experiments on the subject of "verbalized probability" to explore the potential of using self-expressed confidence levels as a factor in measuring a LLM evaluator's own honesty. Additionally, we consider integrating more ethical aspects, such as inclusiveness and fairness, into the grading metrics to ensure that the system is both effective and responsible. More Language Support The VND-Bench dataset currently only includes Q&A pairs in English, yet our enterprise is a global company that supports businesses worldwide. We consider extending the dataset to include other languages, reflecting the company's value of inclusiveness and diversity. Domain Adaptation We curate the VND-Bench dataset, focusing on transactional data domain knowledge. We consider building benchmark datasets in other areas and study vRAG-Eval's applicability to explore its potential in those business priorities. Few-shot Evaluation. Our experiments demonstrate that the GPT-4 evaluator exhibits high agreement rates in accept/reject decisions. We consider analyzing the score discrepancy at each of the five grading levels to better understand the relationship between human and LLM evaluators’ thinking process. By leveraging "few-shot learning", we hypothesize that we can improve the alignment of LLM evaluators with human preferences at a greater refined granularity. More Question Categories. The VND-Bench dataset currently focuses on factual knowledge questions in the transactional data domain. We consider expanding to include math and logical reasoning questions, which are essential for evaluating business-centric RAG applications. By developing high-order grading capabilities, we can better support the company's growth and innovation. Llama 2 (70B). We consider quantizing the 70B model to 4-bit integers and evaluating its grading agreement with human experts and GPT-4. This experiment could enable Llama 2 to become a more economical and less risky alternative to GPT-4, suitable for deployment on commodity hardware. 
§ CONCLUSION In this paper, we thoroughly study evaluating Retrieval-Augmented Generation (RAG) applications in a high-stakes business environment where answer quality is paramount. Our study uniquely address this by casting a 5-grade evaluation scale into a binary evaluation system, categorizing responses as either accepted or rejected. This mapping lead to a clearer, more direct comparison between human evaluator and AI models, particularly GPT-4. We observe a remarkably high agreement rate of 82.6% between GPT-4 and human evaluators, which underscores the potential of advanced AI models in understanding and aligning with human judgement criteria through exceptional explainability. Furthermore, our comparison between GPT-4 and Llama 2 models reveals a significant disparity in performance, with GPT-4 demonstrating superior results, highlighting the importance of selecting a strong LLM. § ACKNOWLEDGMENTS We thank Ranjan Dutta, Toni Wang, and Ajit Patil for their thoughtful discussions and assistance with data annotation. We acknowledge Ping Zhu, Chi Wo Chung, Jiayin Zheng, Mohammad Rahman and their team providing robust OpenAI API access; Gurpreet Bhasin and a team of AI volunteers for them making Llama 2 available. We thank our managers Salila Khilani, Dmitrii Krainov, and Yu Gu for their support. § DOCUMENT VECTOR STORE § PREPROCESSING WORKFLOW § DOCUMENT RETRIEVAL AND ANSWER GENERATION WORKFLOW § ANSWER GENERATION AND QUALITY EVALUATION § GPT-4 CHAT CONFIGURATION § EARLY DESIGN OF GRADING INSTRUCTIONS AND RUBRIC
http://arxiv.org/abs/2406.19278v1
20240627154853
The $n/2$-bound for locating-dominating sets in subcubic graphs
[ "Dipayan Chakraborty", "Anni Hakanen", "Tuomo Lehtilä" ]
math.CO
[ "math.CO" ]
Human Modelling and Pose Estimation Overview Pawel Knap1 University of Southampton 1pmk1g20@soton.ac.uk May 2024 ================================================================ § ABSTRACT The location-domination number is conjectured to be at most half of the order for twin-free graphs with no isolated vertices. We prove that this conjecture holds and is tight for subcubic graphs. We also show that the same upper bound holds for subcubic graphs with open twins of degree 3 and closed twins of any degree, but not for subcubic graphs with open twins of degree 1 or 2. These results then imply that the same upper bound holds for all cubic graphs (with or without twins) except K_4 and K_3,3. Keywords: Location-domination, subcubic graph, cubic graph, twin-free § INTRODUCTION In this article, we consider a well-known conjecture for locating-dominating sets on twin-free graphs for subcubic graphs. In particular, we prove the conjecture for subcubic graphs by giving a result stronger than the suggested conjecture. Furthermore, we answer positively to two open problems posed by Foucaud and Henning in <cit.>. Let G = (V(G),E(G)) denote a graph with vertex set V(G) and edge set E(G). Let us denote by N_G(v) and N_G[v] (or N(v) and N[v] if G is clear from context) the open and closed neighbourhoods, respectively, of a vertex v in a graph G. We further denote by I_G(S;v)=N_G[v]∩ S (or I(v) if S and G are clear from context) the I-set of v in G with respect to the set S⊆ V(G). A set of vertices S⊆ V(G) is dominating if each vertex of G has a non-empty I-set, that is, if the vertices are in S or have an adjacent vertex in S. Moreover, a set S separates vertices u and v if I(u)≠ I(v). Similarly, we say that the set S separates vertices in a set V⊆ V(G) if the I-set of each vertex in V is unique. Finally, a set S is locating-dominating in G (LD-set for short) if S is dominating in G and for each pair of distinct vertices u,v∈ V(G)∖ S we have I(u)≠ I(v). The location-domination number of a graph G, (G), denotes the size of a smallest locating-dominating set in G. The concept of location-domination was originally introduced by Slater and Rall in <cit.>. See the electronical bibliography <cit.> to find over 500 papers on topics related to location-domination. The degree of a vertex v in a graph G, _G(v), denotes the number of vertices in N(v). A graph is r-regular if each vertex has the same degree r. Furthermore, a 3-regular graph is called cubic graph and a graph in which every vertex has degree of at most 3 is called subcubic. Two vertices u,v are said to be open twins if N(u)=N(v) and closed twins if N[u]=N[v]. A graph G is open twin-free if it has no pairs of open twins and is called closed twin-free if it has no pairs of closed twins in it. Moreover, G is called twin-free if it is both open twin-free and closed twin-free. We denote by n(G), or by n if the graph is clear from context, the number of vertices in G (i.e. the order of G). Similarly, the number of edges is denoted by m(G) or simply by m. The following conjecture that we study in this paper was originally proposed in <cit.> and was first considered in the formulation of Conjecture <ref> in <cit.>. Every twin-free graph G of order n without isolated vertices satisfies (G) ≤n/2. Both restrictions in Conjecture <ref> are required since every isolated vertex is required to be in a dominating set, the complete graph K_n (with closed twins) has (K_n)=n-1 and the star K_1,n-1 (with open twins) has (K_1,n-1)=n-1. 
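Since the definitions above are purely combinatorial, they can be verified by exhaustive search on small graphs. The following sketch (ours, for illustration only) represents a graph as a dictionary mapping each vertex to its set of neighbours and computes the location-domination number by brute force; on the path P_4 it returns 2, consistent with the value used later in the proof of Proposition 7.

```python
from itertools import combinations

def is_locating_dominating(G, S):
    """G: dict vertex -> set of neighbours. S is an LD-set iff every vertex outside S has a
    non-empty I-set I(v) = N[v] ∩ S and these I-sets are pairwise distinct."""
    S, seen = set(S), set()
    for v in G:
        if v in S:
            continue
        I = frozenset((G[v] | {v}) & S)
        if not I or I in seen:
            return False
        seen.add(I)
    return True

def location_domination_number(G):
    """Smallest LD-set by exhaustive search (feasible only for small graphs)."""
    V = list(G)
    for k in range(1, len(V) + 1):
        for S in combinations(V, k):
            if is_locating_dominating(G, S):
                return k
    return len(V)

P4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
assert location_domination_number(P4) == 2
```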
Conjecture <ref> has been studied in at least <cit.>. Variations on directed graphs have been studied in <cit.> and for locating-total-dominating sets in <cit.>. See <cit.> for a short introduction to the conjecture. Besides introducing the conjecture, Garijo et al. <cit.> gave a general upper bound ⌊ 2n/3⌋+1 for location-domination number in twin-free graphs. Recently, Bousquet et al. <cit.> improved this general upper bound to (G)≤⌈ 5n/8⌉. Besides proving general upper bounds, a lot of the research on this conjecture has concentrated on proving it for some graph classes. In particular, Garijo et al. <cit.> proved the following useful theorem. Let G be a connected twin-free graph without 4-cycles on n≥ 2 vertices. We have (G)≤n/2. Furthermore, Garijo et al. <cit.> proved the conjecture for graphs with independence number at least ⌈ n/2⌉ (in particular this class includes bipartite graphs) and graphs with clique number at least ⌈ n/2⌉+1. In <cit.>, Foucaud et al. proved the conjecture for split and co-bipartite graphs, in <cit.>, Chakraborty et al. proved the conjecture for block graphs and, in <cit.>, Foucaud and Henning proved the conjecture for line graphs. The conjecture has also been proven for maximal outerplanar graphs in <cit.> by Claverol et al. and for cubic graphs in <cit.> by Foucaud and Henning. In this article, we concentrate on subcubic graphs. Besides upper bounds, also lower bounds of locating-dominating sets have been considered in the literature. In <cit.>, Slater has given a lower bound for the location-domination number of r-regular graphs. In particular, the result states that (G)≥n/3 for cubic graphs. However, the proof also holds for subcubic graphs in general. In the following, we introduce some further notations. A leaf is a vertex of degree one and a support vertex is a vertex adjacent to a leaf. For a vertex v∈ V(G) we denote the graph formed from G by removing v and each edge incident with v by G-v. Similarly, for an edge e∈ E(G) we denote by G-e the graph obtained from G by removing edge e. For a set D of vertices and/or edges of G, we denote by G-D the graph obtained by from G by removing each vertex and edge in D. A graph is triangle-free if each induced cycle in the graph has at least four vertices. A set of vertices S is total dominating in G if each vertex in V(G) is adjacent to another vertex in S. Furthermore, a set is locating-total dominating if it is locating-dominating and total dominating. §.§ Our results Our results consider Conjecture <ref> and its expansions for subcubic graphs. Besides proving the conjecture for subcubic graphs, we also answer the following open problems posed by Foucaud and Henning in <cit.>: [<cit.>] Determine whether Conjecture <ref> can be proven for subcubic graphs. [<cit.>] Determine whether Conjecture <ref> can be proven for connected cubic graphs in general (allowing twins) with the exception of a finite set of forbidden graphs. As Foucaud and Henning have noted, Problem <ref> is a weaker form of a conjecture from <cit.> by Henning and Löwenstein and open problem by Henning and Rad from <cit.> stating that for a connected cubic graph G the locating-total dominating number is at most half the number of vertices of G. In particular, this bound does not hold for subcubic graphs which might require two-thirds of vertices <cit.> (see Figure <ref>). Moreover, as there has not been progress on this problem for over a decade, hopefully our results for locating-dominating sets can help in solving this stronger conjecture. 
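The twin conditions that appear throughout the problems above are equally easy to test mechanically. A small helper in the same dictionary-of-neighbour-sets representation as before (ours, for illustration): every pair of vertices of K_4 is a pair of closed twins, consistent with K_4 being one of the two exceptional graphs.

```python
def open_twins(G):
    """Pairs u, v with N(u) = N(v)."""
    V = list(G)
    return [(u, v) for i, u in enumerate(V) for v in V[i + 1:] if G[u] == G[v]]

def closed_twins(G):
    """Pairs u, v with N[u] = N[v]."""
    V = list(G)
    return [(u, v) for i, u in enumerate(V) for v in V[i + 1:]
            if (G[u] | {u}) == (G[v] | {v})]

def is_twin_free_subcubic(G):
    """True iff every degree is at most 3 and G has neither open nor closed twins."""
    return all(len(G[v]) <= 3 for v in G) and not open_twins(G) and not closed_twins(G)

K4 = {i: {j for j in range(4) if j != i} for i in range(4)}
assert len(closed_twins(K4)) == 6 and not is_twin_free_subcubic(K4)
```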
Coming back to our results, in Proposition <ref>, we show that a twin-free subcubic graph G has (G)≤n/2. This answers positively to Problem <ref>. Then, we continue to Theorem <ref> where we expand Proposition <ref> by allowing subcubic graphs to include closed twins. Finally, in Theorem <ref>, we expand the result also for subcubic graphs allowing open twins of degree 3 with the exceptions for exactly complete graph K_4 and complete bipartite graph K_3,3. This answers positively to Problem <ref>. Observe that we actually give a result stronger than the Problems <ref> and <ref> asked. We state that the n/2 bound holds for connected subcubic graphs without open twins of degrees 1 or 2 on at least four vertices with the exceptions of K_4 and K_3,3. We also show in Proposition <ref> that forbidding open twins of degrees 1 and 2 is necessary. Furthermore, in Proposition <ref>, we show that Problem <ref> cannot be expanded for r-regular graphs, for r≥4, with twins. We also give an infinite family of twin-free subcubic graphs for which the conjecture is tight in Proposition <ref>. Note that Foucaud and Henning had asked in <cit.> for characterizing each twin-free cubic graph which attain the n/2-bound. We have manually checked every twin-free 10-vertex cubic graph and found that there were no tight examples. Furthermore, we are aware of only one 6-vertex and one 8-vertex twin-free cubic graph (presented in <cit.>) which attains this upper bound. From this perspective, the proof for subcubic graphs seems more challenging than the one for cubic graphs. §.§ Structure of the paper The paper is structured as follows: In Subsection <ref>, we introduce some useful lemmas which are used throughout this paper. Then, in Section <ref>, we prove Conjecture <ref> for subcubic graphs allowing closed twins. In Section <ref>, we prove the conjecture for subcubic twins allowing open twins of degree 3. We continue in Section <ref>, by giving some constructions which show tightness of our results and show that they cannot be further generalized. Finally, we conclude in Section <ref>. §.§ Lemmas Lemmas <ref> and <ref> have previously been considered for trees in <cit.> and <cit.>. The proofs for these generalizations follow similarly as the original ones but we offer them for the sake of completeness. Let G be a connected graph on at least three vertices. Then G admits an optimal locating-dominating set S such that every support vertex of G is in S. Suppose on the contrary that there exists a graph G such that no optimal locating-dominating set contains every support vertex of G. Furthermore, let S be an optimal locating-dominating set of G such that it contains the largest number of support vertices among all optimal locating-dominating sets of G. Furthermore, let s be a support vertex of G not in S and let u_1, u_2, … , u_k be all the leaves adjacent to s. Since S is a dominating set, we have u_i ∈ S for all i ∈ [k]. We show that, contrary to the maximality of S, the set S'=(S∖{u_1})∪{s} is a locating-dominating set of G. Indeed, the set S∖{u_1} separates all pairs of vertices in V(G)∖ (S∪{s,u_1}). Furthermore, since S is a dominating set of G and s ∉ S, any neighbour v other than u_1 of s has (N_G[v] ∖{s}) ∩ S ∅. This implies that the vertex u_1 is the only vertex in V(G) ∖ S' with I-set {s} in S'. Therefore, set S' is locating-dominating, a contradiction. Let G be a connected graph on at least three vertices without open twins of degree 1. 
Then G admits an optimal locating-dominating set S such that there are no leaves of G in S. Suppose on the contrary that there exists a graph G without open twins of degree 1 such that every optimal locating-dominating set contains a leaf of G. Consider an optimal locating-dominating set S containing the least number of leaves among all optimal locating-dominating sets of G which contain every support vertex of G. Notice that S exists by Lemma <ref>. Hence, there exist adjacent vertices s,u∈ S such that s is a support vertex and u is a leaf. Since S is optimal, the set S∖{u} is not locating-dominating. Therefore, by the optimality of S, there exists a unique vertex v∉S such that I(v)={s}. Since G has no open twins of degree 1, the vertex v is not a leaf. However, now the set S'=(S∪{v})∖{u} is a locating-dominating set of G. Since S' is optimal, contains all support vertices and contains fewer leaves of G than S, it contradicts the choice of S. Therefore, the claim follows. Let G be a subcubic twin-free and triangle-free graph. If vertices u and v are in the same four cycle C_4, then all of their common neighbours are in the same cycle C_4. Assume first that u and v are adjacent. If both of them are adjacent to w, then the vertices u,v,w form a triangle, a contradiction. Hence, we assume that there is a four-cycle u,w,v,z with edges uw,wv,vz,zu. Observe that if a vertex b∉{u,w,v,z} is adjacent to both u and v, then N(u)=N(v)={w,z,b} since we consider a subcubic graph. This contradicts the twin-freeness of G. Hence, the claim follows. Let G be a subcubic twin-free graph. No two triangles in G share a common edge. Let vertices u,v,w form a triangle in G. If this triangle shares an edge with another triangle, then without loss of generality two of the vertices, say v and w, have a common neighbour z. Hence, we have N[v]=N[w]={u,v,w,z}. This contradicts the twin-freeness of G. Thus, the claim follows. With Lemmas <ref> and <ref>, we obtain the following corollary. Let G be a subcubic twin-free graph. If vertices v_1,v_2,v_3,v_4 form a four cycle C_4, then that four cycle is an induced subgraph of G. § SUBCUBIC GRAPHS Let G be a twin-free subcubic graph on n vertices without isolated vertices. We have γ^LD(G)≤ n/2. Let G be a twin-free subcubic graph on n vertices without isolated vertices. Hence, n≥4 (one can check that K_2, P_3 and K_3 all contain twins). If n=4, then G is the path P_4 on four vertices, since every other graph on four vertices without isolated vertices contains twins. We have γ^LD(P_4)=2. Thus, the claim holds for all subcubic twin-free graphs without isolated vertices on four vertices. Let us next assume on the contrary that G is a graph with the smallest number of vertices, and among those graphs one with the smallest number of edges, for which the claimed upper bound does not hold. Notice that G is connected. Indeed, if there are multiple components, then the claimed upper bound does not hold on at least one of them and we could have chosen G as that component. However, this contradicts the minimality of G. We first divide the proof based on whether G has triangles. [G is triangle-free] By Theorem <ref>, we may assume that G contains a four cycle. Let us call the vertices of this four cycle a, b, c and d, so that the edges ab, bc, cd and da are present. By Corollary <ref>, neither ac nor bd is an edge. Furthermore, by Lemma <ref>, the only common neighbours of the vertices a,b,c and d are in the set {a,b,c,d}. Since G is open-twin-free, we may immediately observe that there are at most two vertices of degree 2 in a four cycle in G.
[There are exactly two vertices of degree 2 in the four cycle] Observe that when there are two vertices of degree 2 in the four cycle, they are adjacent. Let us call these vertices, without loss of generality, a and b and denote G_a,b=G-a-b. Notice that G_a,b is connected and subcubic. We further call the other neighbour of b as c and the remaining vertex of this four cycle as d. Let us next divide our considerations based on whether G_a,b is twin-free. [G_a,b is twin-free] Let S_a,b be an optimal locating-dominating set in G_a,b. By the minimality of G, we have |S_a,b|≤n/2-1. Assume first that c or d is in S_a,b. We may further assume, without loss of generality that d∈ S_a,b. Consider next S=S_a,b∪{a} in G. We have a ∈ I_G(S,b) and b is the only vertex in V(G)∖ S adjacent to a. Thus, I_G(S,b) is unique in V(G)∖ S. Furthermore, each other vertex in V(G)∖ S is dominated and pairwise separated from other vertices by the same vertices in S_a,b as in G_a,b. Thus, we may assume that neither of c nor d is in S_a,b. Since S_a,b is a locating-dominating set, vertices c and d are dominated by some other vertices in S_a,b which are not adjacent to a or b in G. Hence, we may again consider set S=S_a,b∪{a} in G. Again, b is the only vertex with I_G(S,b)={a} while all other vertices in V(G)∖ S are pairwise separated and dominated by the same vertices in S_a,b as in G_a,b. Thus, when a four cycle contains two vertices of degree 2 in G and G_a,b is twin-free, we have (G)≤n/2. ◂◂ [G_a,b contains twins] Notice that since G is twin-free, at least one of the twins is c or d. First of all, if c and d are twins, then either G is a cycle on four vertices or it contains a triangle. This contradicts the twin- or triangle-freeness of G. Furthermore, if both vertices c and d are twins with c' and d', respectively, then c' and d' have degree 2 and d' is adjacent to c, and c' is adjacent to d. Thus, G contains exactly six vertices and the set S={a,d,c'} is a locating-dominating set of G containing exactly half of the vertices in G. Let us assume next, without loss of generality, that exactly c is a twin with vertex c'. Let us denote the other neighbour of c with e and the third neighbour of e by f (see Figure <ref>). We have N(c')={d,e} and N_G(e)={c,c',f}. Notice that the degree of e is exactly 3 since G is subcubic and e is not a twin of d. Assume first that f is a leaf. In this case, G contains exactly seven vertices and the set S={b,d,e} is a locating-dominating set containing less than half of the vertices in G. Thus, we may assume that f is not a leaf. Consider next the graph G'=G_a,b-c-d=G-a-b-c-d. We notice that e is a support vertex and c' is a leaf in this graph. Moreover, since f is not a leaf and since G is twin-free, the graph G' is twin-free. By Lemmas <ref> and <ref>, we may assume that S' is an optimal locating-dominating set of G' such that it contains all support vertices but no leaves in G' (in particular, e ∈ S' and c' ∉ S'). Moreover, by the minimality of G, we have |S'|≤n/2-2. Consider next the set S=S'∪{b,d}. We have I(a)={d,b}, I(c)={b,d,e} and I(c')={d,e}. Moreover, all other vertices in V(G)∖ S are dominated and pairwise separated by the same vertices in S' as in G. Hence, S is locating-dominating in G with the claimed cardinality. Thus, if G contains two vertices of degree 2 in the same four cycle, then the result holds.◂ From now on we may assume that there is at most one vertex of degree 2 in a four cycle in G. 
[There is exactly one vertex of degree 2 in the four cycle] Let us say, without loss of generality, that the vertex of degree 2 is a. Let us denote the third neighbour outside of the set {a,b,c,d} of b by b', of c by c' and of d by d' (see Figure <ref>). [Both b' and d' are leaves] Let us denote G'=G-a-d-d'. Observe that G' is twin-free since G is twin-free, b is a support vertex and c is the only non-leaf adjacent to b. Let S' be an optimal locating-dominating set in G' which contains b but does not contain b' (such a set exists by Lemmas <ref> and <ref>). By the minimality of G, we have |S'|≤n/2-1. Notice that to separate b' and c we have {c,c'}∩ S'≠∅. Hence, the set S=S'∪{d} is a locating-dominating set in G. Indeed, we have I(d')={d}, I(a)={d,b}, I(b')={b} and |I(c)|≥3 (if c ∉ S). Moreover, we have |S|≤n/2. ◂◂ [Exactly one of b' and d' is a leaf] Let us assume, without loss of generality, that b' is a leaf while d' is a non-leaf. In this case, we consider the graph G_ab,cd=G-ab-cd. Notice that this graph is twin-free since G is twin-free, d' is a non-leaf and c is the only non-leaf adjacent to b while b' is the only leaf adjacent to b. Furthermore, let S_ab,cd be an optimal locating-dominating set in G_ab,cd such that it does not contain any leaves and contains all support vertices. It exists by Lemmas <ref> and <ref>. Moreover, by the minimality of G we have |S_ab,cd|≤n/2. In particular, we have b,d∈ S_ab,cd. Furthermore, since I_G_ab,cd(S_ab,cd,c)≠ I_G_ab,cd(S_ab,cd,b'), we have c∈ S_ab,cd or c'∈ S_ab,cd. Let us next consider set S_ab,cd in G. Notice that I_G(S_ab,cd,a)={b,d}, I_G(S_ab,cd,b')={b}, |I_G(S_ab,cd,c)|≥3 (if c ∉ S_ab,cd) and b separates the vertices d' and c. Thus, S_ab,cd is a locating-dominating set of claimed cardinality in G. From now on we assume that when a four cycle has exactly one degree 2 vertex, neither neighbour of the degree two vertex is a support vertex. [Neither b' nor d' is a leaf] Let us next consider graph G_ab,bc=G-ab-bc (see Figure <ref>). We further divide the proof based on three possibilities: Either G_ab,bc is twin-free, or vertex b is a twin with some other vertex or vertex c is a twin with some other vertex. There cannot exist any other twins in G_ab,bc. Indeed, G is twin-free, d is the support vertex adjacent to a and the leaf a is not a twin since d' is not a leaf. [G_ab,bc is twin-free] Let S_ab,bc be an optimal locating-dominating set which does not contain any leaves and contains all the support vertices in G_ab,bc. The set S_ab,bc exists by Lemmas <ref> and <ref>. Furthermore, by the minimality of G it has cardinality of at most n/2. Observe that d∈ S_ab,bc and b'∈ S_ab,bc. Observe further that if c∈ S_ab,bc and c'∉S_ab,bc, then S_ab,bc'=(S_ab,bc∖{c})∪{c'} is also a locating-dominating set in G_ab,bc. Indeed, c is the only vertex with I(S_ab,bc',c)={d,c'} since I(a)={d} and |I(S_ab,bc,d')|≥2 so if c'∈ N(d'), then |I(S_ab,bc',d')|≥3 and d' is separated from all other vertices. We claim that S_ab,bc or S_ab,bc' is a locating-dominating set of claimed cardinality in G. Let us first consider the set S_ab,bc. Since I_G(S_ab,bc,a)={d}, the only I-set which might change when we consider G instead of G_ab,cd is I(b). We have I_G(S_ab,bc,b)={b'} or I_G(S_ab,bc,b)={b',c}. If I_G(S_ab,bc,b)={b'}, then no I-set is modified when we change the perspective from G_ab,bc to G and in this case set S_ab,bc is a locating-dominating set in G. On the other hand, if I_G(S_ab,bc,b)={b',c}, then it is possible that I_G(S_ab,bc,b)=I_G(S_ab,bc,c'). 
If this is not the case, then S_ab,bc is a locating-dominating set in G. However, if I(b)=I(c'), then c'∉S_ab,bc and we may consider the set S_ab,bc'. Notice that this change does not modify I(a). Moreover, we have I_G(S_ab,bc',b)={b'}. Since S_ab,bc' is a locating-dominating set in G_ab,bc and no I-sets are modified when we transfer to G, the set S_ab,bc' is a locating-dominating set also in G. Moreover, both S_ab,bc and S_ab,bc' satisfy the claimed upper bound on the cardinality. [c is a twin in G_ab,bc] Notice that since c is adjacent to exactly d and c' in G_ab,bc and N(d)={a,c,d'}, the vertex c is a twin with d'. Notice that since c,c',d',d is a four cycle and d' has degree 2 in G, we have (c')=3 and c' is not a support vertex (see Figure <ref>). Let us next consider the graph G_c,d=G-c-d. Since G is twin-free and neither c' nor b is a support vertex in G, the graph G_c,d is twin-free. Hence, there exists an optimal locating-dominating set S_c,d containing no leaves and every support vertex in G_c,d by Lemmas <ref> and <ref> with cardinality |S_c,d|≤n/2-1. Notice that b,c'∈ S_c,d. Let us consider the set S=S_c,d∪{d}. We have I_G(S,a)={b,d}, I_G(S,c)={b,d,c'} and I_G(S,d')={c',d}. All other I-sets remain unmodified and since S_c,d is a locating-dominating set in G_c,d, the set S is locating-dominating in G. ◂◂◂ [b is a twin in G_ab,bc] Notice that since b is a leaf in G_ab,bc, there is a leaf adjacent to b' in G. Let us call this leaf b” (see Figure <ref>). Assume first that there is an edge from b' to c'. In this case, we consider the graph G_b',b”=G-b'-b”. This graph is twin-free since (d)=(c)=3 and hence, neither b nor c' may become a twin with a removal of their neighbour. Hence, there exists an optimal locating-dominating set S_b',b” containing no leaves and every support vertex in G_b',b” by Lemmas <ref> and <ref> with cardinality |S_b',b”|≤n/2-1. Furthermore, the set S=S_b',b”∪{b”} is a locating-dominating set of cardinality at most n/2 in G. Indeed, each I-set of a vertex in V(G_b',b”) remains unmodified while b”∈ I(S, b') and no other vertex is adjacent to b”. Thus, we may assume that edge b'c' does not exist. We may now consider the graph G_bb'=G-bb'. Observe that either G_bb' is twin-free or vertices b' and b” form a two-vertex component P_2. Since (P_2)=1, there exists a locating-dominating set S_bb' in G_bb' which has cardinality at most n/2 and contains all support vertices and no leaves in G_bb' by Lemmas <ref> and <ref> (if b' and b” form a P_2 component we consider b' as a support vertex and b” as a leaf). In particular, we have b'∈ S_bb'. Hence, when we consider S_bb' in G, the only I-set which changes between G and G_bb' is I(b). Hence, we only need to confirm that I(b) is unique when b∉S_bb'. First of all, observe that |I_G(b)|≥2. If a∈ I_G(b), then I_G(b) is unique since the edge db' does not exist by Lemma <ref>. Thus, we may assume that c∈ I_G(b). Hence, if I_G(b)=I_G(x), then x∈ N(c) and x=d or x=c'. However, by Lemma <ref>, we have d∉N(b'). Thus, x=c'. However, by our assumption, the edge c'b' does not exist. Therefore, I_G(b) is unique and S_bb' is a locating-dominating set in G with the claimed cardinality. ◂◂◂ ◂◂ ◂ Therefore, we may assume from now on that the four cycle contains no vertices of degree 2. [Every vertex in a four cycle has degree 3] We denote the neighbours of a, b, c and d that are outside of the four cycle by a', b', c' and d', respectively. Recall that due to Lemma <ref> the vertices a', b', c' and d' are distinct. 
We divide the proof into cases based on which of the possible edges between a', b', c' and d' are present. [The edges a'b', b'c', c'd' and d'a' are present in G] The entire graph G is now determined. Indeed, we have G= P_2 C_4 and the conjectured bound holds, since G is a cubic graph (see <cit.>). ◂◂ Therefore, we may assume that at least one of the edges in the subcase above is not present. Without loss of generality, we assume that the edge a'b' is not present. The proof is then divided into cases based on whether the incident edges b'c' and a'd' are present in G. [The edge a'b' is not present but the edges b'c' and a'd' are present in G] If none of the vertices a', b', c' and d' are support vertices, then the graph G' = G - ab - bc - cd - da is clearly twin-free (see Figure <ref>). The vertices a, b, c and d are leaves, and the vertices a', b', c' and d' are support vertices in G'. By the minimality of the number of edges of G (or by the fact that G' is now C_4-free), there exists a locating-dominating set S' of G' such that |S'| ≤n/2. Due to Lemmas <ref> and <ref>, we may assume that a',b',c',d' ∈ S' and a,b,c,d ∉ S'. The I-sets given by S' are identical in G' and G, and thus S' is a locating-dominating set of G with |S'| ≤n/2. Assume then that at least one of a', b', c' and d', say a', is a support vertex. (Notice that if c' or d' is a support vertex, then the edge c'd' is not present, and these cases are symmetrical to a' being a support vertex.) Let a” be the leaf attached to a' (see Figure <ref>). Consider the graph G_a',a” = G - a' - a”. The only vertices that might have twins in G_a',a” are a and d'. The vertex a does not have a twin, since N_G_a',a” (a) = {b,d} and the only other vertex adjacent to both b and d is c, but c' is a neighbour of c that is not adjacent to a. If d' has a twin, then that twin must be a, c or d. The vertex a has no twins, b is adjacent to c but not d', and c is adjacent to d but not d'. Thus, the vertex d' has no twins either. Therefore, the graph G_a',a” is twin-free. By the minimality of G, there exists a locating-dominating set S_a',a” of G_a',a” with cardinality at most n/2 - 1. Now the set S = S_a ∪{a'} is a locating-dominating set of G since a” is the only vertex with an I-set containing only a'. Since |S| ≤n/2, the claim holds. ◂◂ [The edges a'b' and b'c' are not present in G] Consider the graph G_ab,bc = G - ab - bc. The vertex b is a leaf, and b' is a support vertex in G_ab,bc (see Figure <ref>). [G_ab,bc is twin-free] There exists a locating-dominating set S_ab,bc of G_ab,bc such that b' ∈ S_ab,bc and b ∉ S_ab,bc (by Lemmas <ref> and <ref>), and |S_ab,bc| ≤n/2. Since b ∉ S_ab,bc, I(b) is the only I-set given by S_ab,bc that can be different in G when compared to G_ab,bc. Indeed, if a,c ∉ S_ab,bc, then all I-sets in G are identical to the I-sets in G_ab,bc. If a ∈ S_ab,bc or c ∈ S_ab,bc, then I_G(b) contains a or c, but the rest of the I-sets are identical to those of G_ab,bc. The only vertices whose I-sets could be the same as I_G(b) are a', c' and d, but b' ∈ I_G (b) and b' is not adjacent to a', c' or d (due to the edges a'b' and b'c' not being present and Lemma <ref>). Thus, I_G(b) is unique and S_ab,bc is a locating-dominating set of G with |S_ab,bc| ≤n/2. ◂◂◂ [G_ab,bc is not twin-free] Now at least one of a, b and c has a twin. Suppose that a has a twin. Since N_G_ab,bc (a) = {a',d}, d' and c are the only possible twins of a. If d' is a twin with a, then a a' d' d a is a cycle in G and the degree of d' is two in G. 
This contradicts our assumption that all vertices in a four cycle have degree 3 in G. If c is twins with a, then a' = c', but this is impossible due to Lemma <ref>. Thus, neither a nor c (by symmetry) have twins in G_ab,bc. Therefore, b has a twin in G_ab,bc. Now either b and b' form a P_2 component in G_ab,bc or b' has a leaf b” in G. These cases are handled somewhat similarly as Case <ref>. Assume that b and b' form a P_2-component in G_ab,bc. The graph G_b,b' = G - b - b' is twin-free, and thus there exists a locating-dominating set S_b,b' such that |S_b,b'| ≤n/2 - 1. The set S = S_b,b'∪{b} is clearly a locating-dominating set of G, and we have |S| ≤n/2. Assume then that b' has a leaf b” in G. Consider the graph G_bb' = G - bb'. Again, G_bb' is either twin-free or b' and b” form a P_2-component in G_bb'. As in the previous case, if b' and b” form a P_2-component in G_bb', we can easily construct a locating-dominating set S of G such that |S| ≤n/2 by considering a locating-dominating set of G - b' - b”. So assume that G_bb' is twin-free. There exists a locating-dominating set S_bb' of G_bb' such that |S_bb'| ≤n/2 and b' ∈ S_bb'. We claim that the set S_bb' is also a locating-dominating set of G. The only I-set that might differ between the two graphs is I(b) (assuming b ∉ S_bb'). Since S_bb' is a locating-dominating set of G_bb', we have a ∈ S_bb' or c ∈ S_bb'. Now, if I_G(b) is the same as the I-set of some other vertex, then that vertex must be a', d or c'. However, we also have b' ∈ I_G(b) and b' is not adjacent to a', d or c'. Thus, I_G(b) is unique, and the set S_bb' is a locating-dominating set of G with |S_bb'| ≤n/2. ◂◂◂ ◂◂ ◂ Therefore, (G) ≤n/2 holds for all triangle-free twin-free subcubic graphs with no isolated vertices. We then assume that G is not triangle-free. [G has triangles as induced subgraphs] Let us assume T = G[a,b,c] to be a triangle in G induced by the vertices a, b and c. If any two vertices of T are of degree 2 in G, then it implies that the said vertices are twins in G, a contradiction to our assumptions. Hence, we assume from here on that at most one vertex of T is of degree 2 in G. [T has a vertex of degree 2 in G] Without loss of generality, let us assume that _G(a)=2. This implies that the vertices b and c have neighbours, say b' and c', respectively, in G outside of T (see Figure <ref>). Observe that b' ≠ c', or else, the pair b and c would be twins in G, a contradiction to our assumption. Let G_ab = G - ab. [G_ab is twin-free] By our assumption on the minimality of the graph G, there exists an LD-set S_ab of G_ab such that |S_ab| ≤n/2. Moreover, by Lemmas <ref> and <ref>, since c is a support vertex and a is a leaf in G_ab, we can assume that c ∈ S_ab and a ∉ S_ab. We then show that the set S_ab is also an LD-set of G. If b ∉ S_ab, then I_G(x) = I_G_ab(x) for each x∈ V(G)∖ S_ab, and thus S_ab is an LD-set of G. Let us, therefore, assume next that b ∈ S_ab. Now, if S_ab is not an LD-set of G, it would imply that there exists a vertex x of G other than a and not in S_ab such that the pair a,x is separated in G_ab but not in G. Since I_G(a) = {b,c}, we must have I_G(x) = {b,c} which makes the vertices b and c twins in G, a contradiction to our assumption. Hence, S_ab is an LD-set of G also in the case that a ∉ S_ab and b ∈ S_ab. Thus, overall, S_ab is an LD-set of G with |S_ab| ≤n/2 and thus, the result follows in the case that the graph G_ab is twin-free. 
◂◂ Now, by symmetry, we may assume that, if the graph G_ac = G - ac is also twin-free, then the result holds as well. [Both G_ab and G_ac have twins] Let us first look at the graph G_ab. The twins in G_ab must either be a pair a,x or a pair b,y, where x and y are vertices of G different from a and b, respectively. [a,x are twins in G_ab for some x ∈ V(G) ∖{a}] In this case, since ac ∈ E(G), we must have x ∈ N_G(c) ∖{a} = {b,c'}. If x = b, that is, if a and b are twins in G_ab, we obtain a contradiction since deg_G_ab(a) = 1 ≠ 2 = deg_G_ab(b). Therefore, x ≠ b. In other words, we have x = c', that is, a and c' are twins in G_ab. Therefore, we must also have deg_G(c') = deg_G_ab(c') = deg_G_ab(a) = 1. We now look at the graph G_ac = G - ac. By our assumption on Case <ref>, the graph G_ac also has twins. However, G_ac cannot have twins of the form c,z for some vertex z≠ c of G, since the neighbour c' of c is of degree 1 in G_ac. Therefore, by analogy to the previous case when a and c' were twins in G_ab, the vertices a and b' must be twins in G_ac. Again, by the same analogy, we infer that deg_G(b') = 1. Therefore, the graph G is determined and it can be checked that the set S = {b,c} is an LD-set of G with |S| = 2 < 5/2 = n/2. Hence, the result follows. ◂◂◂ Again, by symmetry, we may assume that the result also holds in the case where a is a twin in the graph G_ac. Thus, we assume from here on that in the graphs G_ab and G_ac the vertex a does not belong to a pair of twins. [b,y are twins in G_ab for some vertex y ∈ V(G) ∖{a,b}] In this case, since bc ∈ E(G), we must have y ∈ N_G(c) ∖{a,b} = {c'}. Therefore, we must have y=c' with b'c' ∈ E(G). Moreover, deg_G(c') = deg_G_ab(c') = deg_G_ab(b) = 2. We again look at the graph G_ac which, by the assumption of Case <ref>, has twins other than the pair a,b'. Therefore, by symmetry with the graph G_ab, the vertices b' and c must be twins in G_ac with deg_G(b') = 2. Therefore, the graph G is again determined and it can be checked that the set S = {b,c} is an LD-set of G with |S| = 2 < 5/2 = n/2. Hence, the result holds in this case. ◂◂◂ Hence, the result follows in the case that both G_ab and G_ac have twins. ◂◂ In conclusion, therefore, the claim holds when T has a vertex of degree 2 in G. ◂ [Each vertex of T is of degree 3 in G] By assumption, we have deg_G(a) = deg_G(b) = deg_G(c) = 3. Let N_G(a) ∖{b,c} = {a'}, N_G(b) ∖{a,c} = {b'} and N_G(c) ∖{a,b} = {c'}. Notice that a', b' and c' must be pairwise distinct, or else G would have twins, a contradiction to our assumption. If a'b', a'c', b'c' ∈ E(G), then G is a cubic graph on six vertices in which the vertex subset S = {a,b,c} can be checked to be an LD-set of size |S| = 3 = 6/2 = n/2. Hence, in this case, the result holds. Let us, therefore, assume without loss of generality that a'b' ∉ E(G) (see Figure <ref>). We now consider the graph G_ab = G - ab. [G_ab is twin-free] By the minimality of G, let us assume that S_ab is an LD-set of G_ab such that |S_ab| ≤n/2. We show that the set S_ab is also an LD-set of G. Now, if either a,b ∈ S_ab or a,b ∉ S_ab, then the set S_ab is also an LD-set of G and we are done. Indeed, in these cases we have I_G(S_ab ; x) = I_G_ab(S_ab ; x) for each x∉S_ab. Hence, by symmetry and without loss of generality, let us assume that a ∈ S_ab and b ∉ S_ab. Let us next suppose on the contrary that S_ab is not an LD-set of G. Then, the only way that can happen is if there exists a vertex y of G other than b and not in S_ab such that the pair b,y is separated in G_ab but not in G. In particular, we must have y ∈ N_G_ab(a) = {a',c}.
Let us first assume that y = a', that is, the pair a',b is not separated by S_ab in G. We must have |{b',c}∩ S_ab| ≥ 1 in order for S_ab to dominate b. Now, if b' ∈ S_ab, it implies that a'b' ∈ E(G), contrary to our assumption. Therefore, b' ∉ S_ab. This implies that c ∈ S_ab. This further implies that a'c ∈ E(G), thus making the pair a,c twins in G, a contradiction to our assumption. Hence, we conclude that y ≠ a'. Let us therefore assume now that y = c, that is, the pair b,c is not separated by S_ab in G. Therefore, in particular, we have c ∉ S_ab. Therefore, in order for S_ab to dominate b, we must have b' ∈ S_ab, which implies that b'c ∈ E(G). This further implies that the vertices b and c are twins in G, a contradiction to our assumption. Therefore, y ≠ c either. In other words, this proves that S_ab is indeed an LD-set of G. Hence, in the case that the graph G_ab is twin-free, we find an LD-set S_ab of G such that |S_ab| ≤n/2 and thus, the result holds. ◂◂ [G_ab has twins] In this case, a twin pair in G_ab is either of the form a,x or of the form b,y, where x and y are vertices of G different from a and b, respectively. By symmetry, let us consider a pair a,x to be twins in G_ab. Now, since ac ∈ E(G), we must have x ∈ N_G(c) ∖{a} = {b,c'}. If x = b, that is, if a and b are twins in G_ab, then they are twins in G as well, a contradiction to our assumption. Therefore, x ≠ b and so we have x = c', that is, a and c' are twins in G_ab. This implies that a'c' ∈ E(G) (see Figure <ref>). Moreover, deg_G(c') = deg_G_ab(c') = deg_G_ab(a) = 2. We next consider the graph G_c,c' = G - c - c'. Observe first that G_c,c' is connected since N_G(c')={a',c} and N_G(c)={a,b,c'}. Furthermore, the graph G_c,c' is also twin-free. Indeed, if x∈ V(G_c,c') is a twin with some vertex y∈ V(G_c,c'), then we may assume, without loss of generality, that x∈{a,b,a'}. We have N_G_c,c'(a)={b,a'}. Thus, if x=a, then y=b' and the edge a'b'∈ E(G), a contradiction. By symmetry, we have x≠ b and y∉{a,b}, and hence x=a'. However, now y∈ N_G_c,c'(a). Thus, y=b, a contradiction. Therefore, the graph G_c,c' is twin-free and, by the minimality of G, it admits an LD-set S_c,c' of size at most n/2-1. Observe that for the set S_c,c' to dominate a in G_c,c', we have {a,b,a'}∩ S_c,c'≠∅. First assume that a'∈ S_c,c'. [a'∈ S_c,c'] We consider the set S=S_c,c'∪{c'}. In particular, the set S is dominating in G and it separates all vertex pairs x,y∈ V(G_c,c')∖ S_c,c' using the vertices in S_c,c'. Furthermore, the vertex c is the only vertex in V(G)∖ S_c,c' adjacent to c'∈ S. Thus, S separates all vertices in V(G)∖ S and it is an LD-set in the graph G with cardinality at most n/2. ◂◂◂ Assume next that a or b is in S_c,c'. [a∈ S_c,c' or b∈ S_c,c'] We consider the set S=S_c,c'∪{c}. Clearly, S is dominating in G. Furthermore, as in the previous case, the set S separates all vertices x,y∈ V(G_c,c')∖ S_c,c' using the vertices in S_c,c'. Finally, the vertex c' is adjacent to c but to neither a nor b. Thus, also c' is separated from all other vertices in V(G)∖ S. Hence, S separates all vertices in V(G)∖ S and it is an LD-set in the graph G with cardinality at most n/2. ◂◂◂ Therefore, the graph G admits an LD-set of the claimed cardinality in the case that G_ab has twins.◂◂ Thus, in the case that all three vertices of T are of degree 3 in G, the result follows. ◂ Therefore, the claimed bound holds if the graph G has triangles as induced subgraphs. Finally, Cases 1 and 2 together prove the proposition.
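As a sanity check of the bound just proven, one can verify it exhaustively on small graphs by computer. The sketch below is again only illustrative and not part of the proof; it assumes the networkx function graph_atlas_g(), which enumerates all graphs on at most seven vertices, and it tests the twin conditions directly from their definitions. By Proposition <ref>, no graph passing the filters should exceed the n/2 bound.

```python
import networkx as nx
from itertools import combinations

def is_ld_set(G, S):
    # S is locating-dominating if every vertex outside S has a non-empty
    # I-set N(v) & S and these I-sets are pairwise distinct.
    S = set(S)
    seen = set()
    for v in G:
        if v in S:
            continue
        code = frozenset(set(G[v]) & S)
        if not code or code in seen:
            return False
        seen.add(code)
    return True

def ld_number(G):
    nodes = list(G)
    for k in range(1, len(nodes) + 1):
        if any(is_ld_set(G, S) for S in combinations(nodes, k)):
            return k
    return len(nodes)

def twin_free(G):
    # No open twins (N(u) = N(v)) and no closed twins (N[u] = N[v]).
    for u, v in combinations(G, 2):
        Nu, Nv = set(G[u]), set(G[v])
        if Nu == Nv or Nu | {u} == Nv | {v}:
            return False
    return True

violations = 0
for G in nx.graph_atlas_g():          # all graphs on at most 7 vertices
    n = G.number_of_nodes()
    if n < 2:
        continue
    degrees = [d for _, d in G.degree()]
    if min(degrees) == 0 or max(degrees) > 3:   # isolated vertex / not subcubic
        continue
    if twin_free(G) and ld_number(G) > n / 2:
        violations += 1
print(violations)  # expected to be 0 by the proposition above
```

Checks on larger orders (such as the ten-vertex cubic graphs mentioned in the introduction) would require an external graph generator such as geng from the nauty package.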
Let G be a connected open-twin-free subcubic graph on n vertices without isolated vertices other than K_3 or K_4. We have (G)≤n/2. Let us assume on the contrary, that there exists an open-twin-free subcubic graph G on n vertices without isolated vertices other than K_3 or K_4 for which we have (G)> n/2. Furthermore, let us assume that G has the fewest number of closed twins among these graphs. By Proposition <ref>, there is at least one pair of closed twins in G. Notice that since G is not K_4, we cannot have a triple of pairwise closed twins. Moreover, notice that closed twins have degree 2 or 3. Let us first assume that there exist closed twins u,v of degree 2 in G and that they are adjacent to vertex w. Notice that now (w)=3. Consider graph G'=G-vw. In this graph, vertex v is a leaf, there are no open twins and the number of closed twins is smaller than in G. Hence, (G')≤n/2. Let us denote by S' an optimal locating-dominating set in G'. By Lemma <ref>, we may assume that u∈ S' and by Lemma <ref>, we may assume that v∉S'. Moreover, since I_G'(w)≠ I_G'(v)={u}, we have some vertex z∈ I_G'(w). However, now S' is a locating-dominating set in G. Indeed, we either have z≠ w and I_G(v)={u} while I_G(w)={z,u}. If on the other hand z=w, then v is separated from all other vertices in V(G)∖ S' since v is the only vertex in V(G)∖ S' which is adjacent to u. Let us next assume that u and v are closed twins of degree 3 in G. Let us denote N[u]=N[v]={u,v,w,w'}. Notice that if there is an edge between w and w', then G is the complete graph K_4. Moreover, if they are adjacent to the same vertex z≠ u,v, then w and w' are open twins. The same is true if w and w' have degree 2. Hence, we may assume that z∈ N(w)∖ N(w'). Assume next that (w')=2. First notice that if z is a leaf, then G has five vertices and set {w,v} is locating-dominating in G. Hence, we may assume that z is not a leaf. Consider graph G_v,w'=G-v-w'. Since G_v,w' has fewer closed twins than G, there exists a locating-dominating set S_v,w' of size at most n/2-1 which has w∈ S_v,w' and u∉S_v,w' by Lemmas <ref> and <ref>. Furthermore, now set S=S_v,w'∪{v} is locating-dominating in G. Indeed, w separates vertices u and w' while v separates w from all other vertices of G. Hence, (w')=3. Assume next that z∈ N(w)∖ N(w') and z'∈ N(w')∖ N(w). Consider graph G'=G-uw-uw'-vw. In this graph u is a leaf and v is its support vertex. Now also w and z are a leaf and a support vertex, respectively, or w and z form a P_2 component. Moreover, there is a possibility that we created a pair of open twins if z is a support vertex in G. Assume first that w and z form a P_2 component in G'. Now G_w,z' = G'-w-z contains fewer closed twins than G. Thus, there exists an optimal locating-dominating set S' of G_w,z' that contains v and |S'|≤n/2 - 1. Now, S = S' ∪{z} is a locating-dominating set of G' that contains both v and z such that |S|≤n/2. Moreover, the vertex w' is dominated by at least two vertices. When we consider the set S in G, the only possible vertices in V(G)∖ S which might not be separated are u, w and w'. However, w is only of these vertices adjacent to z and w' is dominated by at least two vertices. Hence, either w'∈ S or z'∈ S. In both cases, set S is locating-dominating in G. Assume then that z is a support vertex in G' but not in G. The number of closed twins in G' is smaller than in G. Thus, we have an optimal locating-dominating set S in G' which contains vertices z and v such that |S|≤n/2. 
By the same arguments as in the case above concerning the P_2-component, S is locating-dominating also in G. Let us next assume that z is a support vertex and ℓ_z is the adjacent leaf in G. In this case, we consider the subgraph G”=G-uw-vw'. Notice that G” does not contain any open twins and it has fewer closed twins than G. Hence, it admits an optimal locating-dominating set S' with |S'|≤n/2. By Lemma <ref> we may assume that z∈ S' and by Lemma <ref> that ℓ_z∉S'. Hence, we have w∈ S' or v∈ S' in order to separate ℓ_z and w. If w∈ S', then the set S_w=(S'∖{w})∪{v} is also a locating-dominating set in G” (and if v∈ S', we may simply take S_w=S'). Moreover, S_w is a locating-dominating set in G. Indeed, the only vertices in G which might not be separated are w, w' and u. However, w is the only one of these vertices adjacent to z. If w' and u are not separated in G, then w', u ∉ S_w. However, w' is dominated by S_w in G”, and thus z' ∈ S_w and it separates w' and u also in G. Therefore, S_w is a locating-dominating set of cardinality |S_w|≤n/2 in G. Now, the claim follows. § SUBCUBIC GRAPHS WITH OPEN TWINS OF DEGREE 3 Let G be any graph, F be a connected graph and U be a vertex subset of F. Then a subgraph H of G is called an (F;U)-subgraph of G if the following hold. * There exists an isomorphism j : V(F) → V(H). * N_G[j(u)] ⊆ V(H) and j(u)j(v)∈ E(H) for all u ∈ U and j(v)∈ N_G(j(u)). We note that an (F;U)-subgraph H of G can also be seen as a subgraph of G isomorphic to F such that the closed neighbourhood in G of any vertex j(u) for u∈ U, together with the edges of G incident with j(u), are also contained in the subgraph H. If U = {u_1,u_2, … , u_k} and H is an (F;U)-subgraph of G, then we may also refer to H as an (F;u_1,u_2, … , u_k)-subgraph of G. We now define a list of pairwise non-isomorphic graphs as follows. In the proof of Theorem <ref>, we show that a connected subcubic graph G with open twins of degree 3 and at least 7 edges contains F_0 and at least one of the graphs F_i, i∈ [1,6], as a subgraph. * Graph F_0: V(F_0) = {û,v̂,x̂,ŷ,ẑ} and E(F_0) = {ûx̂, ûŷ, ûẑ, v̂x̂, v̂ŷ, v̂ẑ}. See Figure <ref>. The vertices û and v̂ are open twins of degree 3 in F_0. Any graph that has a pair of open twins of degree 3 has an (F_0;û, v̂)-subgraph. * Graph F_1: V(F_1) = {û,v̂,x̂, ŷ, ẑ, ŵ, ŵ'} and E(F_1) = {ûx̂, ûŷ, ûẑ, v̂x̂, v̂ŷ, v̂ẑ, ŷŵ, ẑŵ, ŵŵ'}. See Figure <ref>. The pairs û, v̂ and ŷ, ẑ are open twins of degree 3 in F_1. * Graph F_2: V(F_2) = {û,v̂,x̂,ŷ,ŷ',ẑ,ẑ'} and E(F_2) = {ûx̂, ûŷ, ûẑ, v̂x̂, v̂ŷ, v̂ẑ, ŷŷ', ẑẑ'}. See Figure <ref>. * Graph F_3: V(F_3) = {û,v̂,x̂,x̂',ŷ,ẑ} and E(F_3) = {ûx̂, ûŷ, ûẑ, v̂x̂, v̂ŷ, v̂ẑ, ŷẑ, x̂x̂'}. See Figure <ref>. The pair û, v̂ is a pair of open twins of degree 3 in F_3 and the pair ŷ, ẑ is a pair of closed twins of degree 3 in F_3. * Graph F_4: V(F_4) = {û,v̂, ŵ, ŵ',x̂, x̂', ŷ, ẑ} and E(F_4) = {ûx̂, ûŷ, ûẑ, v̂x̂, v̂ŷ, v̂ẑ, ŵŷ, ŵẑ, ŵŵ', x̂x̂'}. See Figure <ref>. * Graph F_5: V(F_5) = {û,v̂, ŵ, x̂, x̂', ŷ, ẑ} and E(F_5) = {ûx̂, ûŷ, ûẑ, v̂x̂, v̂ŷ, v̂ẑ, ŵŷ, ŵẑ, x̂x̂', ŵx̂'}. See Figure <ref>. * Graph F_6: V(F_6) = {û,v̂, x̂, x̂', ŷ, ŷ', ẑ, ẑ'} and E(F_6) = {ûx̂, ûŷ, ûẑ, v̂x̂, v̂ŷ, v̂ẑ, x̂x̂', ŷŷ', ẑẑ'}. See Figure <ref>. Let G be a graph, i ∈ [0,6] and let U_i be a vertex subset of F_i. Moreover, let H_i be an (F_i;U_i)-subgraph of G under a homomorphism j_i : V(F_i) → V(H_i). Then we fix a certain naming convention for the vertices of H_i and F_i as follows. Firstly, we fix a set of 10 symbols, namely L = {u,v,w,w',x,x',y,y',z,z'}.
Then, as shown in Figure <ref>, any vertex of F_i will be denoted by a symbol â for some a ∈ L. In addition, any vertex j_i(â) of H_i, for some a ∈ L, will be denoted by the symbol a (that is, by dropping the hat on the symbol â). We shall call this the drop-hat naming convention on V(H_i). We hope that the conventions and namings will become clearer to the reader as we proceed with their use. Let G be a connected subcubic graph with m ≥ 7 edges, not isomorphic to K_3,3 and without open twins of degree 1 or 2. Then, we have γ^LD(G) ≤ n/2. Let n = n(G) be the number of vertices and m = m(G) be the number of edges of the graph G. Since m ≥ 7, the graph G is not isomorphic to either K_3 or K_4. Therefore, if G does not contain any open twins of degree 3, then the result holds by Theorem <ref>. Hence, we assume from now on that G has at least one pair of open twins of degree 3. In other words, G has an (F_0;û,v̂)-subgraph. Let 𝒢 denote the set of all connected subcubic graphs without open twins of degrees 1 or 2 and not isomorphic to K_3,3. Notice that in a subcubic graph not isomorphic to K_3,3, for each vertex u that is an open twin of degree 3, there exists exactly one other vertex v with the same open neighbourhood. Let us assume that G∈𝒢, γ^LD(G)>n/2 and, among such graphs, G has the smallest number of edges m≥7. We next consider the graph F'_3 = F_3 - {x̂'}∈𝒢, which has one pair of open twins of degree 3, namely û and v̂. Now, it can be verified that the set S'_3 = {v̂, ẑ} is an LD-set of F'_3 with |S'_3| = 2 < n(F'_3)/2. Notice that F'_3 is the smallest graph (with respect to the number of both vertices and edges) with open twins of degree 3 but without open twins of degree 1 or 2 other than K_3,3. Since, by assumption, G has an (F_0;û,v̂)-subgraph, say H_0, under an injective homomorphism j_0 : V(F_0) → V(G), applying the drop-hat naming convention, the vertices j_0(û), j_0(v̂), j_0(x̂), j_0(ŷ) and j_0(ẑ) of H_0 are called u, v, x, y and z. Since G does not have open twins of degree 2, at least two of the vertices in {x,y,z} must have degree 3 in G. Therefore, without loss of generality, let us assume that deg_G(y) = deg_G(z) = 3. Then, let x' (if it exists), y' and z' be the neighbours of x, y and z, respectively, in V(G) ∖{u,v}. Notice that we may possibly have y' = z'. Then the following two cases arise. [deg_G(x) = 2] In this case, if y' = z, or equivalently, z' = y, this implies that yz ∈ E(G) and therefore y and z are closed twins of degree 3 in G. Since deg_G(x) = 2, this implies that the graph G is determined on 5 vertices and G ≅ F'_3 = F_3 - {x̂'}. As we have seen, γ^LD(G)=2<n/2 in this case. Hence this possibility does not arise as it contradicts our assumption that γ^LD(G) > n/2. However, the following two other possibilities may arise. * y' = z' = w. In this case, if deg_G(w) = 2 as well, then the graph G is determined on 6 vertices and G ≅ F_1 - {ŵ'}. Now, it can be verified that the set S = {u,z,w} is an LD-set of G with |S| = n/2. This contradicts our assumption that γ^LD(G) > n/2. Hence, we must have deg_G(w) = 3. So, let w' be the neighbour of w in V(G) ∖{y,z}. Now, we cannot have w' = x, since deg_G(x) = 2. This implies that G has an (F_1;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraph. * y' ≠ z'. In this case, G contains an (F_2;û,v̂,x̂,ŷ,ẑ)-subgraph. [deg_G(x) = 3] In this case, if x'=y'=z', then the graph G is isomorphic to K_3,3, contradicting our assumption. Hence, this possibility cannot arise. However, any two of x', y' and z' could be equal. This implies the following possibilities. * {x,y,z}∩{x',y',z'} ≠ ∅.
Without loss of generality, let us assume that y'=z, or equivalently, y=z'. This implies that yz ∈ E(G) and hence, the graph G has an (F_3;û,v̂,x̂,ŷ,ẑ)-subgraph. * {x,y,z}∩{x',y',z'} = ∅ and |{x',y',z'}| =2. Without loss of generality, let us assume that y'=z'=w, say. Observe that now y and z are open twins of degree 3. Hence, if _G(w)=2, then by interchanging the names of the pairs w,x and by renaming x' as w' we end up in (F_1;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraph of G as in Case 1. Hence, (in the original notation) we let _G(w)=3 and let w' be the neighbour of w in V(G) ∖{u,v}. If x' = w, or equivalently, w'=x, it implies that xw ∈ E(G). In other words, contrary to our assumption, G is isomorphic to K_3,3. Hence, we must have {w,x}∩{w',x'} = ∅. Now, if x' w', then H is an (F_4;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraph; and if x'=w', then H is an (F_5;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraph. Hence, with this possibility, the graph G either has an (F_4;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraph or an (F_5;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraph. * {x,y,z}∩{x',y',z'} = ∅ and |{x',y',z'}| = 3. In this case, there is an (F_6;û,v̂,x̂,ŷ,ẑ)-subgraph in the graph G. We prove the theorem by showing in the next claims that none of the above possibilities can arise thus, arriving at a contradiction. First we show that Possibility 1 of Case 1 cannot arise. G has no (F_1;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraphs. Proof of Claim 1. On the contrary, let us suppose that G contains an (F_1;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraph, say H_1. Therefore, by applying the drop-hat naming convention on V(H_1), the vertex x is of degree 2 in G and the pairs u,v and y,z are open twins of degree 3 in G. Let D = {ux} and let G' = G - D. This implies that G' ∈. Moreover, we have 8 ≤ m(G') < m(G) and hence, by the minimality of G, there exists an LD-set S' of G' such that |S'| ≤n/2. We further notice that the vertex x is a leaf with its support vertex v in G'. Therefore, by Lemmas <ref> and <ref>, we assume that v ∈ S' and x ∉ S'. Now, if u ∉ S', then S' is also an LD-set of G. Let us, therefore, assume that u ∈ S'. If, on the contrary, S' is not an LD-set of G, it would mean that, in G, the vertex x is not separated by S' from some other vertex p ∈ N_G(u)∩ N_G(v)∖{S'∪ x'}⊆{y,z}. Now, since y and z are open twins in G (hence, also in G'), it implies that S' ∩{y,z}∅. Without loss of generality, therefore, let us assume that y ∈ S' and that the vertices x,z are not separated by S' in G. In this case, we claim that the set S = (S' ∖{u}) ∪{w} is an LD-set of G. The set S is a dominating set of G since each vertex in N_G[u] has a neighbour in {v,y}⊂ S. We therefore show that S is also a separating set of G. In particular, y separates u from other vertices in V(G)∖ S. While v separates x and z from other vertices in V(G)∖ S and w separates x and z. Since S' is an LD-set of G', set S is locating-dominating in G. Moreover, |S| = |S'| ≤n/2 contradicts our assumption that (G) > n/2. Hence, this proves that G cannot contain any (F_1;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraph. ▪ Next, we show that Possibility 2 of Case 1 cannot arise. G has no (F_2;û,v̂,x̂,ŷ,ẑ)-subgraphs. Proof of claim 2. On the contrary, suppose that G contains an (F_2;û,v̂,x̂,ŷ,ẑ)-subgraph, say H_2. Applying the drop-hat naming convention on V(H_2), the vertex x has degree 2 in G and the vertices u,v are open twins of degree 3 in G. Now, if _G(y') = _G(z') = 1, then the graph G is determined to be isomorphic to F_2 on n=7 vertices. It can be checked in this case that the set S = {v,y,z} is an LD-set of G such that |S| = 3 < n/2. This contradicts our assumption that (G) > n/2. 
Hence, we assume that at least one of y' and z' has degree of at least 2 in G. In other words, m ≥ 9. Let D = {ux} and let G' = G - D. We have G' ∈. Moreover, 8 ≤ m(G') < m and hence, by the minimality of G, there exists an LD-set S' of G' such that |S'| ≤n/2. We further notice that the vertex x is a leaf with its support vertex v in G'. Therefore, by Lemmas <ref> and <ref>, we assume that v ∈ S' and x ∉ S'. Now, if u ∉ S', then S' is also an LD-set of G. Let us assume that u ∈ S'. Now, if S' is not an LD-set of G, then, in G, the vertex x is not separated by S' from some other vertex p ∈ N_G(u)∩ N_G(v) ∖ (S' ∪{x})⊆{y,z}. Now, in order for S' to separate the pair y,z in G', we must have {y,y'}∩ S' ∅ or {z,z'}∩ S' ∅. Therefore, without loss of generality, let us assume that {y,y'}∩ S' ∅. This implies that p = z, that is, the vertices x,z are not separated by S' in G. In particular, therefore, we have z ∉ S'. We now claim that the set S = (S' ∖{u}) ∪{z} is an LD-set of G. The set S is a dominating set of G since each vertex in N_G[u] has a neighbour in {v,z}⊂ S. Therefore, we show that S is also a separating set of G. First of all, I_G(S;u)={z} is unique since v∈ S and z' is dominated by some vertex in S'. Furthermore, v separates x from all other vertices except y. However, we had {y,y'}∩ S'≠∅. Thus, y∈ S or y and x are separated by S. Since S' is locating-dominating in G', also all other vertices in V(G)∖ S are pairwise separated. Therefore, S is an LD-set of G with |S| = |S'| ≤n/2. This contradicts our assumption that (G) > n/2. Hence, G cannot not contain any (F_2;û,v̂,x̂,ŷ,ẑ)-subgraph. ▪ Observe that together Claims 1 and 2 imply that Case 1 above is not possible and in particular _G(x)=3. Therefore, graph G cannot have an (F_0;û,v̂,x̂)-subgraph. G has no (F_3;û,v̂,x̂,ŷ,ẑ)-subgraphs. Proof of Claim 3. On the contrary, suppose that G has an (F_3;û,v̂,x̂,ŷ,ẑ)-subgraph, say H_3. Applying the drop-hat naming convention on V(H_3), the vertices u and v are open twins of degree 3 and y and z are closed twins of degree 3 in G. We first show that either we can assume m ≥ 11, or else, we end up with a contradiction. Let G^* = G - {u,v,x,y,z}. If m ≤ 10, then m(G^*) ≤ 2. In other words, n(G^*) ≤ 3. If n(G^*) = 1, that is, V(G^*) = {x'}, then the graph G is determined on n=6 vertices to be isomorphic to F_3. Moreover, it can be verified that the set S = {x',v,z} is an LD-set of G such that |S| = 3 = n/2. This contradicts our assumption that (G) > n/2. Hence, let us now assume that n(G^*) = 2 and that V(G^*) = {x',x”}. Then, we must have x'x”∈ E(G) and, again, the graph G is determined on n=7 vertices. In this case too, it can again be verified that the set S = {x',v,z} is an LD-set of G such that |S| = 3 < n/2. This again contradicts our assumption that (G) > n/2. Hence, we now assume that n(G^*) = 3 and that V(G^*) = {x',x”,x”'}. If x'x”' ∈ E(G), then in order for x” and x”' to not be open twins of degree 1 (since G does not have open twins of degree 1), there must be an edge in G^* other than x'x” and x'x”'. Therefore, m(G) ≥ 11 in this case. Thus, let x'x”' ∉ E(G) which implies that x”x”' ∈ E(G). Then the graph G is determined on n=8 vertices and it can be verified that the set S = {x',x”,v,z} is an LD-set of G such that |S| = 4 = n/2. This again contradicts our assumption that (G) > n/2. Hence, we may assume that m ≥ 11. Now, let D = {ux,uz,yz} and G' = G-D. Then, we have G' ∈. Moreover, 8 ≤ m(G') < m and hence, by the minimality of G, there exists an LD-set S' of G' such that |S'| ≤n/2. 
Notice that the vertices u and z are leaves with support vertices y and v, respectively, in the graph G'. Therefore, by Lemmas <ref> and <ref>, we assume that v,y ∈ S' and that u,z ∉ S'. We now claim that S' is also an LD-set of G. To prove so, we only need to show that, in the graph G, the vertices u and z are separated by S' from all vertices in V(G) ∖ S'. The vertex y∈ S' separates u and z from other vertices in V(G) ∖ S' and vertex v∈ S' separates u and z from each other. Hence, S' is an LD-set of G with |S'| ≤n/2. This contradicts our assumption that (G) > n/2 and proves that G has no (F_3;û,v̂,x̂,ŷ,ẑ)-subgraphs. ▪ Following claim considers Case 2, Possibility 2. G has neither (F_4;û,v̂,ŵ,x̂,ŷ,ẑ)- nor (F_5;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraphs. Proof of Claim 4. On the contrary, suppose that G has an (F_4;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraph, say H_4. Then, in the drop-hat naming convention on V(H_4), the pairs u,v and y,z are open twins of degree 3 in G. Now, if _G(w') = _G(x') = 1, then the graph G is determined on n=8 vertices to be isomorphic to F_4. In this case, it can be verified that the set S = {v,w,x,y} is an LD-set of G with |S| = 4 = n/2. Hence, we may assume that the degree of w' or x' is at least 2 in G. Therefore, we have m ≥ 11. Now, let D = {ux,uy,wy} and G' = G-D. Then, we have G' ∈. Moreover, 8 ≤ m(G') < m and hence, by the minimality of G, there exists an LD-set S' of G' such that |S'| ≤n/2. We remark that the following arguments will also be used in the case of (F_5;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraph, that is when w'=x', in the following paragraph. Notice that the vertices u and y are leaves with support vertices z and v, respectively, in the graph G'. Hence, by Lemmas <ref> and <ref>, we assume that v,z ∈ S' and that u,y ∉ S'. We now claim that S' is also an LD-set of G. To prove so, we only need to show that, in G, the vertices u and y are separated by S' from all other vertices in V(G) ∖ S'. First of all, z separates u from each other vertex in V(G) ∖ S' except possibly w. However, since S' is locating-dominating in G', we have {w,w'}∩ S'≠∅ and w' separates u and w. Similarly, v separates y from V(G) ∖ S' except possibly x. However, again either x∈ S' or x' separates y and x. This implies that is S' an LD-set of G with |S'| ≤n/2. This contradicts our assumption that (G) > n/2. Hence, G cannot have an (F_4;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraph. Again, on the contrary, suppose that G has an (F_5;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraph, say H_5. Then, in the drop-hat naming convention on V(H_5), the pairs u,v and y,z are open twins of degree 3 in G. Now, if _G(x') = 2, then the graph G is determined on n=7 vertices to be isomorphic to F_5. In this case, it can be verified that the set S = {v,x',y} is an LD-set of G with |S| = 3 < n/2. Hence, we may assume that _G(x') = 3. Therefore, we have m ≥ 11. Now, again let D = {ux,uy,wy} and G' = G-D. Then again, G' ∈. Moreover, 8 ≤ m(G') < m and hence, (G') ≤n/2. Notice that in the preceding arguments we did not require the case w' = x' to be considered. This implies that by the exact same arguments as above, it can be shown that G cannot have an (F_5;û,v̂,ŵ,x̂,ŷ,ẑ)-subgraph. ▪ Finally, we are left only with Possibility 3 of Case 2. G has no (F_6;û,v̂,x̂,ŷ,ẑ)-subgraphs. Proof of Claim 5. On the contrary, suppose that G has an (F_6;û,v̂,x̂,ŷ,ẑ)-subgraph, say H_6. Then, applying the drop-hat naming convention on V(H_6), the vertices u and v are open twins of degree 3 in G. 
Now, if all three of x',y' and z' are leaves in G, then the graph G is determined on n=8 vertices and it can be verified that the set S = {u,x,y,z} is an LD-set of G with |S| = n/2. This contradicts our assumption that (G) > n/2. Therefore, without loss of generality, let us assume that _G(z') ≥ 2 and that N_G(z') ∖{z} = {z”} if _G(z') = 2 and N_G(z') ∖{z} = {z”,z”'} if _G(z') = 3. In what follows, we simply assume that _G(z')=3 and that N_G(z') ∖{z} = {z”,z”'}, as the arguments remain intact even when _G(z') = 2 and the vertex z”' is absent. [The graph G' = G-{zz'} does not contain open twins of degree 1 or 2] In this case, let D = {zz'} and G' = G-D. Let F'_z' be the component of G' to which the vertex z' belongs and let F'_z be the component of G' to which the vertex z belongs. Notice that we may possibly have F'_z' = F'_z. Furthermore, we have F'_z, F'_z'∈. Moreover, we have 8 ≤ m(F'_z) < m and hence, by the minimality of G, there exists an LD-set S'_z of the component F'_z such that |S'_z| ≤n'_z/2, where n'_z is the order of the component F'_z. For the component F'_z', we will denote by S'_z' its minimum-sized LD-set. [F'_z' F'_z] We show that there exists an LD-set S'_z' of the component F'_z' such that |S'_z'| ≤n'_z'/2, where n'_z' is the order of the component F'_z'. Let us first assume that F'_z' does not contain any open twins of degree 3. Here we show that F'_z' cannot be isomorphic to either K_3 or K_4. First of all, we notice that F'_z' is not isomorphic to K_4 since the latter is 3-regular and _F'_z'(z') ≤ 2. Let us, therefore, assume that F'_z'≅ K_3. Thus, let V(F'_z') = {z',z”,z”'}. Then, we take D' = {z'z”,z'z”'} and G” = G-D'. Then, the vertex z' is a leaf with support vertex z in a component, say F”, of G” which belongs to and with 9 ≤ m(F”) < m. Hence, by the minimality of G, there exists an LD-set S” of G” such that |S”| ≤n/2-1. By Lemmas <ref> and <ref>, we assume that z ∈ S” and z' ∉ S”. Then, it can be verified that the set S = S”∪{z”} is an LD-set of G with |S| ≤n/2. This implies a contradiction to our assumption that (G) > n/2. Hence, F'_z' is not isomorphic to K_3 either. This implies, by Theorem <ref>, that there exists an LD-set S'_z' of the component F'_z' such that |S'_z'| ≤n'_z'/2. Let us then assume that F'_z' has a pair of open twins of degree 3. Since we have restricted the possible (F;U)-subgraphs of G in previous claims, the component F'_z' together with vertex z and edge zz' must contain an (F_6;û,v̂,x̂,ŷ,ẑ)-subgraph. This implies that 8 ≤ m(F'_z') < m and hence, by the minimality of G, there exists an LD-set S'_z' of the component F'_z' such that |S'_z'| ≤n'_z'/2. We now claim that the set S = S'_z'∪ S'_z is an LD-set of G. It can be verified that S is a dominating set of G. To prove that S is also a separating set of G, we only need to show that the vertex z is separated by S from all vertices in {z',z”,z”'}∖ S and the vertex z' is separated by S from the vertices in {u,v}∖ S. However, the first of these claims is true due to the fact that {u,v}∩ S'_z ∅ since u and v are open twins in the component F'_z; and second one holds by the fact that {z',z”,z”'}∩ S'_z'∅ in order for S'_z' to dominate z'. Hence, S is, indeed, an LD-set of G. Moreover, |S| = |S'_z'| + |S'_z| ≤n'_z'/2 + n'_z/2 = n/2 contradicts our assumption that (G) > n/2. ◂ [F'_z' = F'_z = G'] Let S'_z' = S'_z = S'. In this case, the set S' is an LD-set of G if either both z,z' ∈ S' or both z,z' ∉ S'. Since u and v are open twins in G', it implies that {u,v}∩ S' ∅. 
Therefore, without loss of generality, let us assume that v ∈ S'. Let us first assume that z ∈ S' and z' ∉ S'. Now, if on the contrary, S' is not an LD-set of G, it implies that, the vertex z' is not separated by S from a vertex p ∈ N_G(z) ∖ (S' ∪{z'})={u} in the graph G. Therefore, we must have {z”,z”'}∩ S' ∅ in order for S' to dominate the vertex z'. Without loss of generality, let us assume that z”∈ S'. Since z' and u are not separated, we have z”∈{x,y}. Thus, z' is adjacent to x or y, contradicting the structure implied by (F_6;û,v̂,x̂,ŷ,ẑ)-subgraph. Thus, S separates the pair u,z' and is thus an LD-set of G if z ∈ S' and z' ∉ S'. Let us next assume that z' ∈ S' and z ∉ S'. Again, let us assume on the contrary that S' is not an LD-set of G. Recall that v∈ S'. This implies that S' does not separate the vertex z and another vertex p ∈ (N_G(v)∩ N_G(z')) ∖ (S' ∪{z}). However, since z' is not adjacent to x or y, we have N_G(v)∩ N_G(z')={z}. Hence, we cannot select p. This implies that S is an LD-set of G also if z' ∈ S' and z ∉ S'. Moreover, |S'| ≤n/2 contradicts our assumption that (G) > n/2. ◂ Hence, G has no (F_6;û,v̂,x̂,ŷ,ẑ)-subgraphs in this case. [The graph G' = G-{zz'} has open twins of degree 1 or 2] In this case, the vertex z' is an open twin of degree 1 or 2 with some vertex, say z^*, such that N_G(z') ∖{z} = N_G(z^*) ⊆{z”,z”'}. Therefore, we have 1 ≤_G(z^*) ≤ 2. Let D = {z'z”,z'z”'} and G^* = G-D. Moreover, let F'_z” be the component of G^* to which the vertex z” belongs and let F'_z be the component of G^* to which the vertex z (and also z') belongs. Notice that we may possibly have F'_z” = F'_z. Further notice that F'_z,F'_z”∈. Indeed, the only candidates for open twins of degrees 1 or 2 are z',z” and z”'. However, z' is a leaf adjacent to z which does not have other adjacent leaves. Moreover, z^* is a neighbour only to the vertices z” and z”' in F'_z'. This implies that neither z” nor z”' is an open twin with any vertices in V(F'_z”) ∖{z”,z”'}. Furthermore, z” and z”' cannot be open twins of degree 1 since then they would be open twins of degree two in G. Finally, they cannot be open twins of degree 2 in G^*, since then they would be open twins of degree 3 in G adjacent to vertex z^* of degree 2 contradicting Claim 1 or 2. We denote by S'_z and S”_z” a minimum-sized locating-dominating set of F'_z and F'_z”, respectively. [F'_z” F'_z] To begin with, the component F'_z of G^* belongs to . Moreover, we have 9 ≤ m(F'_z) < m and hence, by the minimality of G, we have |S'_z| ≤n'_z/2, where n'_z is the order of the component F'_z. We next show that |S'_z”| ≤n'_z”/2, where n'_z” is the order of the component F'_z”. First of all, the component F'_z”∈. Let us first assume that F'_z” does not contain any open twins of degree 3. Here we show that F'_z” cannot be isomorphic to either K_3 or K_4. We notice that F'_z” is not isomorphic to K_4 since the latter is 3-regular and _F'_z”(z”) ≤ 2. Let us, therefore, assume that F'_z”≅ K_3. Thus, V(F'_z”) = {z^*,z”,z”'} and z”z”' ∈ E(F'_z”). Then, we take D' = {z”z^*,z”z”'} and G” = G-D'. Then, the graph G”∈ with 12 ≤ m(G”) < m and hence, by the minimality of G, there exists an LD-set S” of G” such that |S”| ≤n/2. Moreover, notice that the vertices z” and z^* are leaves with support vertices z' and z”', respectively, in G”. Therefore, by Lemmas <ref> and <ref>, we assume that z',z”' ∈ S” and z”,z^* ∉ S”. Then, the set S” is also an LD-set of G since z” is the only vertex in V(G) ∖ S” with the I-set I_G(S”;z”)={z',z”'}. 
Moreover, |S”| ≤n/2 implies a contradiction to our assumption that (G) > n/2. Hence, F'_z” is not isomorphic to K_3 either. This implies, by Theorem <ref>, that there exists an LD-set S'_z” of the component F'_z” such that |S'_z”| ≤n'_z”/2. Let us then assume that F'_z” has a pair of open twins of degree 3. Then by the previous claims together with the fact that neither z”,z”' nor z^* can be open twins of degree 3 in G^*, the component F'_z” must contain an (F_6;û,v̂,x̂,ŷ,ẑ)-subgraph. This implies that 8 ≤ m(F'_z”) < m and hence, by the minimality of G, there exists an LD-set S'_z” of the component F'_z” such that |S'_z”| ≤n'_z”/2. We now claim that the set S = S'_z”∪ S'_z is an LD-set of G. It can be verified that S is a dominating set of G. We notice that z' is a leaf with support vertex z in the component F'_z. Therefore, by Lemmas <ref> and <ref>, we have z ∈ S and z' ∉ S. Thus, to show that S is also a separating set of G, we only need to show that the vertex z' is separated by S from all vertices in (N_G[z”] ∪ N_G[z”']) ∖ S. However, this is true due to the fact that z ∈ S. Hence, S is, indeed, an LD-set of G. Moreover, |S| = |S'_z”| + |S'_z| ≤n'_z”/2 + n'_z/2≤n/2 contradicts our assumption that (G) > n/2. ◂ [F'_z' = F'_z = G^*] Then, let S'_z” = S'_z = S'. By Lemmas <ref> and <ref>, we have z ∈ S and z' ∉ S. Hence, the set S' is an LD-set of G if both z”,z”' ∉ S'. Therefore, without loss of generality, let us assume that z”∈ S'. Now, if on the contrary, S' is not an LD-set of G, it implies that the vertex z' is not separated by S from a vertex p ∈ (N_G(z”)∩ N_G(z)) ∖{z'}=∅ in the graph G. Since p cannot exist, S' is an LD-set of G with |S'| ≤n/2 which contradicts our assumption that (G) > n/2. ◂ Therefore, G has no (F_6;û,v̂,x̂,ŷ,ẑ)-subgraphs in this case as well. This proves the claim that G has no (F_6;û,v̂,x̂,ŷ,ẑ)-subgraphs exhausting all possibilities in Cases 1 and 2. ▪ This concludes the proof. In particular, our results imply that we may extend Conjecture <ref> to all cubic graphs with the exception of K_4 and K_3,3. Let G be a connected cubic graph other than K_4 or K_3,3. Then, (G)≤n/2. § EXAMPLES In this section, we consider some constructions which show the tightness of our results. First we show that we cannot further relax the twin-conditions in Theorem <ref> for subcubic graphs. There exists an infinite family of connected subcubic graphs with * open twins of degree 1; * open twins of degree 2, which have location-domination number over half of their order. Consider first Claim 1). Let G be a connected subcubic graph on n=12k vertices, for k≥1, as in Figure <ref>. The graph consists of a path P_v on 3k vertices named v_1,…,v_3k. Furthermore, to each vertex v_i we attach a three vertex path on vertices a_i,b_i,c_i from the middle vertex b_i. Let us construct a minimum size locating-dominating set S for G. Since each a_i and c_i are open twins, we may assume without loss of generality that each c_i∈ S. Furthermore, since each a_i needs to be dominated, at least one of the vertices a_i,b_i∈ S. If b_i∈ S and a_i∉S, then one of the vertices v_i-1,v_i,v_i+1 is in S to separate v_i and a_i. If b_i∉S, then we still require one of the three vertices to dominate v_i. In other words, a minimum size locating-dominating set contains a dominating set of P_v which contains at least k vertices. Hence, we have γ^LD(G)=|S|≥ k+6k=7n/12. 
Note that this lower bound is actually tight for this family of graphs as we obtain the value with the shaded vertices in Figure <ref> which can be verified to form a locating-dominating set. Consider next Claim 2). Let G be a connected subcubic graph on n=60k vertices, for k≥1, as in Figure <ref>. The graph consists of a path P_v on 5k vertices named v_1,…,v_5k and to each vertex v_i, we connect an eleven vertex subgraph G_i as in Figure <ref>. Denote the vertex in G_i which has an edge to v_i by u_i. Figure <ref> contains a minimum-sized locating-dominating set in shaded vertices. Notice that each subgraph G_i has a pair of open twins of degree 2, we need to include one of them in any minimum-sized locating-dominating set S of G. Furthermore, by Lemma <ref>, we may assume that the support vertex belongs to the set S. However, the support vertex itself is not enough to separate the leaf and the open-twin outside of S. Hence, |S∩ (V(G_i)∖{u_i})|≥ 6. Furthermore, these vertices do not dominate the vertices in the path P_v which requires 2k vertices in any locating-dominating set (see <cit.>). Hence, we require at least 2k vertices in the set S∩ (V(P_v)∪⋃_i=1^5k{u_i}). Therefore, we have |S|≥ 6· 5k+2k=32k=8n/15>n/2. The following proposition shows that the conjecture is not true in general for r-regular graphs with twins. In other words, the result turns out to be a special property of cubic graphs. There exists an infinite family of connected r-regular graphs, for r>3, with closed twins which have location-domination number over half of their order. Consider an r-regular graph G_r on n=(3r+3)k vertices for r≥4 as in Figure <ref>. In particular, G_r contains 3k copies of (r-1)-vertex cliques (ones within the dashed line in Figure <ref>). Vertices in such cliques are closed twins. We denote these cliques by Q_1,…,Q_3k. Furthermore, each clique Q_i is adjacent to two vertices, let us denote these by a_i and b_i so that there is an edge a_ib_i-1 for 2 ≤ i ≤ 3k and a_1b_3k. Let us consider a minimum-sized locating-dominating set S for G_r. In particular, we have |S∩ Q_i|≥ r-2 for each i. Let us denote by c_i∈ Q_i the single vertex which might not be in Q_i∩ S. We note that set S∩ Q_i does not separate vertices a_i,b_i,c_i. Let us assume that |(Q_i∪{a_i,b_i})∩ S|=r-2 for some i. We observe that then we have b_i-1,a_i+1∈ S. Hence, we have |(Q_i-1∪{a_i-1,b_i-1})∪(Q_i∪{a_i,b_i})∪(Q_i+1∪{a_i+1,b_i+1})∩ S|≥ 3r-4. Note that if we had |(Q_i∪{a_i,b_i})∩ S|≥ r-1 for each i, then we would have more vertices in S. Hence, we have |S|≥3r-4/3r+3n. When r=4 this gives (G)≥8/15n>n/2. We note that this lower bound is attainable with the construction used in Figure <ref>. The following proposition shows that Proposition <ref> is tight for an infinite family of twin-free subcubic graphs. We remark that there exists also a simpler tight construction of a path which has a leaf attached to all of its vertices. Note that in the case of cubic graphs, no tight constructions are known. There exists an infinite family of connected twin-free subcubic graphs which have location-domination number equal to half of their order. Consider the graph G on n=8k+2 vertices as in Figure <ref>. The graph consists of a path on 4k+1 vertices. To each path vertex p_i, 1≤ i≤ 4k+1, we join a new vertex u_i and an edge from u_i to u_i+1 if i≡ 24 or i≡ 34. Note that in particular u_1 and u_4k+1 are leaves. Consider next a minimum-sized LD-set S in G. 
We note that for each pair {p_i,u_i} where u_i is a leaf, we have {p_i,u_i}∩ S≠∅ since u_i is dominated by S. Let us next show that for each set L_j={p_j-1,p_j,p_j+1,u_j-1,u_j,u_j+1} where the vertices contain a 6-cycle, we have |L_j∩ S|≥3. Suppose on the contrary that |L_j∩ S|≤2. Assume first that {p_j-1,u_j-1}∩ S=∅. To dominate u_j-1, we have u_j∈ S. The only single vertex that can separate both p_j and u_j+1 from u_j-1 is p_j+1. However, vertices u_j and p_j+1 cannot separate p_j and u_j+1. Hence, the assumption {p_j-1,u_j-1}∩ S=∅ leads to |L_j∩ S|≥3, a contradiction. Hence, by symmetry, we assume that |{p_j-1,u_j-1}∩ S|=1 and |{p_j+1,u_j+1}∩ S|=1 while |{p_j,u_j}∩ S|=0. Notice that to dominate both u_j and p_j we have L_j∩ S ={p_j-1,u_j+1} or L_j∩ S ={p_j+1,u_j-1}. However, the first of these options does not separate u_j-1 and p_j while the second option does not separate p_j and u_j+1. Hence, we have |L_j∩ S|≥3. This implies that (G)≥ n/2. By Proposition <ref>, we have (G)≤ n/2. Therefore, (G)=n/2. § CONCLUSION In this article, we have proven Conjecture <ref> for subcubic graphs and answered positively to Problems <ref> and <ref>. In particular, we show that for each connected subcubic graph G, other than K_1,K_4,K_3,3 and without open twins of degrees 1 or 2, we have (G)≤n/2. We have also shown that these restrictions on open twins are necessary and that similar relaxation of conditions in Conjecture <ref> is not possible for r-regular graphs in general. Furthermore, we have presented an infinite family of twin-free subcubic graphs for which this bound is tight. However, the only known tight examples for the n/2-bound over connected twin-free cubic graphs are on six and eight vertices. Moreover, we were unable to find any such tight example for the n/2-bound by going through all connected twin-free cubic graphs on ten vertices. On the other hand, the 10-vertex graph in Figure <ref> is an example of a cubic graph containing both open and closed twins for which the conjectured upper bound is tight. In <cit.>, Foucaud and Henning asked to characterize every twin-free cubic graph which attains the n/2-bound. We present a new open problem in the same vein: Does there exist an infinite family of connected (twin-free) cubic graphs which have LD-number equal to half of their order? Considering the previous open problem is interesting for both twin-free cubic graphs and graphs which allow twins. It would even be interesting if one could find a single connected twin-free cubic on at least ten vertices which has LD-number equal to half of its order. If there does not exist any such (twin-free) cubic graphs, that also prompts another open problem: What is the (asymptotically) tight upper bound for LD-number of connected (twin-free) cubic graphs on at least ten vertices? § ACKNOWLEDGEMENTS This project was partially funded by the Research Council of Finland grant number 338797 and the ANR project GRALMECO (ANR-21-CE48-0004). D. Chakraborty was partially funded by the French government IDEX-ISITE initiative CAP 20-25 (ANR-16-IDEX-0001) and the International Research Center “Innovation Transportation and Production Systems" of the I-SITE CAP 20-25. Some of the work was done during D. Chakraborty's visit to the Department of Mathematics and Statistics, University of Turku, Turku, Finland. A. Hakanen was partially funded by Turku Collegium for Science, Medicine and Technology. T. Lehtilä was partially funded by the Business Finland Project 6GNTF, funding decision 10769/31/2022. 
Some of the work of T. Lehtilä was done during his stays at Department of Computer Science, University of Helsinki, Helsinki, Finland and Université Clermont-Auvergne, LIMOS, Clermont-Ferrand, France.
http://arxiv.org/abs/2406.18404v1
20240626145626
Stochastic homogenization of HJ equations: a differential game approach
[ "Andrea Davini", "Raimundo Saona", "Bruno Ziliotto" ]
math.AP
[ "math.AP", "35B27, 35F21, 91A23" ]
Stochastic homogenization of HJ equations: a differential game approach]Stochastic homogenization of HJ equations: a differential game approach [ Andrea Davini Raimundo Saona Bruno Ziliotto June 18, 2024 =============================================== § ABSTRACT We prove stochastic homogenization for a class of non-convex and non-coercive first-order Hamilton-Jacobi equations in a finite-range of dependence environment for Hamiltonians that can be expressed by a max-min formula. We make use of the representation of the solution as a value function of a differential game to implement a game-theoretic approach to the homogenization problem. § INTRODUCTION In this paper we study the asymptotic behavior, as → 0^+, of the solutions of a stochastic Hamilton-Jacobi (HJ) equation of the form HJ_∂_t u^+H(x/, D_x u^,ω)=0 in , for every fixed T>0, where H ×Ω→ is a Lipschitz Hamiltonian which can be represented by a max-min formula. The dependence of the equation on the random environment (Ω, , ) enters through the Hamiltonian H(x,p,ω), whose law is assumed to be stationary, i.e., invariant by space translation, and ergodic, i.e., any event that is invariant by space translations has either probability 0 or 1. Under the additional assumptions that the random variables (H(·,p,·))_p ∈ satisfy a finite range of dependence condition and that the underlined dynamics is oriented, we show homogenization for equation (<ref>), see Theorem <ref>. Furthermore, by exploiting an argument taken from <cit.>, we are able to extend the homogenization result to a class of Lipschitz Hamiltonians that can be put in max-min form only locally in p, see Theorem <ref>. The full set of assumptions and the precise statements of our homogenization results are presented in Section <ref>. In what follows, we want to emphasize that the Hamiltonians we consider are non-coercive and non-convex in p. The coercivity of H in the momentum is a condition which is often assumed in the homogenization theory of first-order HJ equations. Its role is to provide uniform L^∞-bounds on the derivatives of the solutions to the equation (<ref>) and to an associated “cell” problem. The first homogenization results for equations of the form (<ref>) for coercive Hamiltonians were established in the periodic setting in the pioneering work <cit.> and later extended to the almost–periodic case in <cit.>. The generalization of these results in the stationary ergodic setting was established in <cit.> under the additional assumption that the Hamiltonian is convex in p. By exploiting the metric character of first order HJ equations, homogenization has been extended to the case of quasiconvex Hamiltonians in <cit.>. The question whether homogenization holds in the stationary ergodic setting for coercive Hamiltonians that are non-convex in the momentum remained an open problem for about fifteen years, until the third author provided in <cit.> the first counterexample to homogenization in dimensions greater than 1. Feldman and Souganidis generalized that example and showed in <cit.> that homogenization can fail for Hamiltonians of the form H(x, p, ω) G(p) + V(x, ω) whenever G has a strict saddle point. This has shut the door to the possibility of having a general qualitative homogenization theory in the stationary ergodic setting in dimension d ⩾ 2, at least without imposing further mixing conditions on the random environment. 
On the positive side, homogenization of equation (<ref>) for coercive and non-convex Hamiltonians of fairly general type has been established in dimension d=1 in <cit.>, and in any space dimension for Hamiltonians of the form H(x, p, ω) = ( |p|^2 - 1 )^2 + V(x, ω) in <cit.>. Further positive results in random environment satisfying a finite range of dependence condition have been obtained in <cit.> for Hamiltonians that are positively homogeneous of degree α⩾ 1, and for Hamiltonians with strictly star–shaped sublevel sets in <cit.>. Despite all these significant progresses, the general question of which equations of the form (<ref>) homogenize in the non-convex case remains largely uncharted. When the coercivity condition of H in p is dropped, one loses control on the derivatives of the solutions of equation (<ref>) and of the associated “cell” problem, which are no longer Lipschitz continuous in general. As a consequence, homogenization of (<ref>) is known to fail even in the periodic case, regardless to the fact that the Hamiltonian is or is not convex in p, see for instance the introductions in <cit.> and some examples in <cit.>. In this generality, supplementary conditions need to be assumed to compensate the lack of coercivity of the Hamiltonian. In the periodic and other compact settings, homogenization results in this vein have been obtained in <cit.> and more recently in some convex situations in <cit.>, for a class non-convex Hamiltonians in dimension d=2 in <cit.>, and in other non-convex cases in <cit.>. When H(x,p,ω) |p| + ⟨ V(x, ω), p ⟩, equation (<ref>) is known in the literature (up to a sign change) as the G–equation, for which homogenization has been established, both in the periodic in <cit.>, and in the stationary ergodic case in <cit.>, under a smallness condition on the divergence of V, but without imposing that |V| < 1, meaning that H is not assumed to be coercive in p. This paper furnishes a new and fairly general class of non-convex and non-coercive Hamiltonians for which the equation (<ref>) homogenizes. In our first result, corresponding to Theorem <ref>, we prove homogenization for a class of non-convex and Lipschitz Hamiltonians that arises from Differential Game Theory. More specifically, we will consider Hamiltonians of the form H(x,p,ω) max_b ∈ Bmin_a ∈ A{ -ℓ(x, a, b,ω) - ⟨ f(a,b), p ⟩} for all (x,p,ω) ∈×Ω,H where the main assumptions are that the law of ℓ has finite range of dependence, in the same spirit as <cit.>, and that there exists a direction e ∈^d-1 and δ>0 such that ⟨ f(a,b), e ⟩⩾δ for all a ∈ A, b∈ B.f This latter condition is inspired by the joint work <cit.> of the third author, addressed to the study of the asymptotic of a discrete-time game on Z^d. We refer the reader to <cit.> for a discussion on the relation between these discrete games and the topic of stochastic homogenization. Interestingly, the assumption (f) precludes a Hamiltonian of the form (<ref>) from being coercive, see Remark <ref>, and this is one significant originality that distinguishes our work from most of the contributions on the topic of stochastic homogenization. Another important novelty relies on the proof technique. Indeed, thanks to the form (<ref>) of the Hamiltonian, we can represent the solution of (<ref>) as the value function of a differential game, as explained in <cit.>, and implement a game-theoretic approach. Such an approach has been rarely used in the context of homogenization of non-convex HJ equations (see e.g. 
<cit.> in the periodic setting), and, up to our knowledge, this is the first time that it is considered to prove a positive result in the stochastic case. By considering optimal strategies, generated paths, and the dynamic programming principle, we manage to show that the solutions of (<ref>) exhibit a concentration behavior assymptotically and that their mean satisfies an approximate subadditive inequality. The homogenization result is finally derived by exploiting the local Lipschitz character of the solutions of (<ref>). The probabilistic arguments that we use are related to the works <cit.>, where the authors prove homogenization for several classes of first and second order nonconvex Hamilton-Jacobi equations. In order to do so, they consider an auxiliary stationary Hamilton-Jacobi equation, called the metric problem <cit.> (resp., <cit.>), whose solutions can be interpreted as the minimal cost of going from one point of the space to another point (resp., to a planar surface). By analogy with techniques employed in first-passage percolation <cit.>, they combine Azuma's inequality with a subadditive argument to prove homogenization of the metric problem, and provide convergence rates and concentration estimates. Then, they resort to a PDE approach to relate the metric problem and the original Hamilton-Jacobi equation. Compared to their technique, our proof presents several key differences. * The concentration and subadditive inequalities techniques are applied to the value of a zero-sum differential game, which is a fairly different problem than the metric problem. * Our arguments rely mainly on a game-theoretic approach, by exploiting the monotonicity property (in the preferred direction e) of the optimal trajectories, rather than on PDE arguments. * We deal with non-coercive Hamiltonians, while Hamiltonians in <cit.> are coercive. This yields several difficulties, including that the space-Lipschitz constant of the solutions of (<ref>) are not uniformly bounded with respect to ε. In our second result, corresponding to Theorem <ref>, we succeed to extend the homogenization result to a class of non-convex and non-coercive Lipschitz Hamiltonians that are not necessarily defined by a max-min formula as the ones considered above. This makes the game-theoretic approach even more notable, in the sense that we are able to deal with Hamiltonians that do not come a priori from a differential game. For such an extension, we exploit an argument introduced in <cit.> to put these Hamiltonians in the form (H) when p is constrained within a ball B_R, but its use to prove a homogenization result in presence of a non-coercive Hamiltonian is not trivial and, as far as we know, new. The difficulty relies on the fact that, due to the lack of coercivity of the Hamiltonian, the Lipschitz constants in x of the solutions to (<ref>) are not uniformly bounded with respect to > 0, but explode with rate 1 /. From the game-theoretic viewpoint, this corresponds to a set of controls for Player 2 of the kind B_R /. In view of this, we had to tailor the proof of Theorem <ref> in a form that is suited for this extension, by paying particular attention that the constants that come into play in the crucial estimates at the base of our arguments depend on parameters that can be controlled when goes to 0. The paper is organized as follows. In Section <ref> we present the notation, the standing assumptions and the statements of our homogenization results, namely Theorem <ref> and Theorem <ref>. 
In Section <ref> we present the reduction homogenization strategy we will follow to prove Theorem <ref>. Some proofs are deferred to Appendix <ref>. In Section <ref> we prove the probabilistic concentration result. Section <ref> is devoted to the proofs of Theorem <ref> and Theorem <ref>. Appendix <ref> contains the deterministic PDE results, along with their proofs, that we use in the paper. Acknowledgements. - AD is a member of the INdAM Research Group GNAMPA. He wishes to thank Guy Barles for a long and valuable email conversation occurred in the fall 2023 about possible generalizations of the comparison principle stated in Theorem <ref>. AD is particularly grateful to Guy Barles for his careful reading of AD's attempts to generalize Theorem <ref> and for sharing his expertise on the matter, including the details to turn the argument sketched in <cit.> into a full proof. BZ is very grateful to Scott Armstrong and Pierre Cardaliaguet for all the enlightening discussions on homogenization theory. This work was supported by Sapienza Università di Roma - Research Funds 2018 and 2019, by the French Agence Nationale de la Recherche (ANR) under reference ANR-21-CE40-0020 (CONVERGENCE project), and by the ERC CoG 863818 (ForM-SMArt) grant. It was partly done during a 1-year visit of BZ to the Center for Mathematical Modeling (CMM) at University of Chile in 2023, under the IRL program of CNRS. teoremasection equationsection § ASSUMPTIONS AND MAIN RESULT Throughout the paper, we will denote by d ∈ the dimension of the ambient space. We will denote either by B_r(x_0) or B(x_0,r) (respectively, B_r(x_0) or B(x_0,r)) the open (resp., closed) ball in ^d of radius r > 0 centered at x_0 ∈. When x_0 = 0, we will more simply write B_r (resp., B_r). The symbol | · | will denote the norm in ^k, for any k ⩾ 1. We will write φ_n φ in E ⊆^k to mean that the sequence of functions (φ_n)_n uniformly converge to φ on compact subsets of E. We will denote by (X), (X), (X), and (X) the space of continuous, uniformly continuous, bounded uniformly continuous, and Lipschitz continuous functions on a metric space X, respectively. We will denote by (Ω, , ) a probability space, where is a probability measure and ℱ is the σ-algebra of –measurable subsets of Ω. We will assume that is complete in the usual measure theoretic sense. We will denote by ℬ(^k) the Borel σ-algebra on ^k and equip the product space ×Ω and × A × B ×Ω with the product σ-algebras ℬ()⊗ℱ and ℬ()⊗ℬ(^m)⊗ℬ(^m)⊗ℱ, respectively. We will assume that is invariant under the action of a one-parameter group (τ_x)_x ∈ of transformations τ_x Ω→Ω. More precisely, we assume that: the mapping (x, ω) ↦τ_xω from ×Ω to Ω is measurable; τ_0 = id; τ_x + y = τ_x ∘τ_y for every x, y ∈; and ( τ_x (E) ) = ( E ), for every E ∈ℱ and x ∈. Lastly, we will assume that the action of (τ_x)_x ∈ is ergodic, i.e., any measurable function φΩ→ satisfying ( φ( τ_x ω ) = φ( ω ) ) = 1 for every fixed x ∈ is almost surely equal to a constant. A random process f ×Ω→ is said to be stationary with respect to (τ_x)_x ∈ if f(x, ω) = f(0, τ_x ω) for all (x, ω) ∈×Ω. Moreover, whenever the action of (τ_x)_x ∈ is ergodic, we refer to f as a stationary ergodic process. Let (X_i)_i ∈ be a (possibly uncountable) family of jointly measurable functions from ×Ω to . 
We will say that the random variables (X_i)_i ∈ exhibit long-range independence (or, equivalently, have finite range of dependence) if there exists ρ>0 such that, for all pair of sets S, S⊆ such that their Hausdorff distance d_H(S, S) > ρ, the generated σ–algebras σ({ X_i(x,·) : i ∈, x ∈ S }) and σ({ X_i(x,·) : i ∈, x ∈S}) are independent, in symbols, σ({ X_i(x,·) : i ∈, x ∈ S }) σ({ X_i(x,·) : i ∈, x ∈S}) whenever d_H(S, S) > ρ. FRD In this paper, we will be concerned with the Hamilton-Jacobi equation of the form ∂_t u+H(x,D_x u,ω)=0, in , where the Hamiltonian H ^d ×^d ×Ω→ is assumed to be stationary with respect to shifts in x variable, i.e., H(x + y, p, ω) = H(x, p, τ_y ω) for every x, y ∈^d, p ∈^d, and ω∈Ω, and to belong to the class defined as follows. A function H ^d×^d×Ω→ is said to be in the class if it is jointly measurable and it satisfies the following conditions, for some constant β>0: (H1) |H(x, p, ω)| ⩽β( 1 + |p| ) for all (x, p) ∈; (H2) |H(x, p, ω) - H(x, q, ω)| ⩽β |p - q| for all x, p, q∈; (H3) |H(x, p, ω) - H(y, p, ω)| ⩽β |x - y| for all x, y, p∈. Assumptions (H1)-(H3) guarantee well-posedness in (), for every fixed T>0, of the Cauchy problem associated with equation (<ref>) when the initial datum is in (). Furthermore, the solutions are actually in (). Solutions, subsolutions and supersolutions of (<ref>) will be always understood in the viscosity sense, see <cit.>, and implicitly assumed continuous, if not otherwise specified. The purpose of this paper is to prove a homogenization result for equation (<ref>) for a subclass of stationary Hamiltonians belonging to that arise from Differential Game Theory and that can be expressed in the following max-min form: H H(x,p,ω) max_b ∈ Bmin_a ∈ A{ -ℓ(x, a, b,ω) - ⟨ f(a, b), p ⟩} for all (x, p, ω) ∈×Ω. Here A, B are compact subsets of ^m, for some integer m, and the product space × A × B ×Ω is equipped with the product σ-algebra ℬ()⊗ℬ(^m) ⊗ℬ(^m) ⊗ℱ. The mapping f A × B →^d is a continuous vector field, and the running cost ℓ^d × A × B ×Ω→ is a jointly measurable function satisfying the following assumptions: (ℓ_1) ℓ(·, ·, ·, ω) ∈(^d × A × B) for every ω∈Ω; (ℓ_2) there exists a constant (ℓ) > 0 such that |ℓ(x, a, b, ω) - ℓ(y, a, b, ω)| ⩽(ℓ) |x - y| for all x, y ∈^d, a ∈ A, b ∈ B and ω∈Ω; (ℓ_3) ℓ is stationary with respect to x, i.e., ℓ(x, a, b, ω) = ℓ(0, a, b, τ_x ω) for all x ∈, a ∈ A, b ∈ B and ω∈Ω. Throughout the paper, we will denote by the subclass of Hamiltonians in that can be put in the form (<ref>) with f and ℓ satisfying assumptions (f) and (ℓ_1)–(ℓ_2), respectively. A Hamiltonian H belonging to will be furthermore termed stationary to mean that assumption (ℓ_3) is in force. The specific form (<ref>) of the Hamiltonian allows to represent solutions to equation (<ref>) via suitable formulae issued from Differential Games, see Appendix <ref> for more details. In the sequel, we shall denote by ℓ_∞ the L^∞–norm of ℓ on ^d × A × B ×Ω, which is finite due to (ℓ_1), (ℓ_3), and the ergodicity assumption on Ω. To prove our homogenization result, we will assume the following additional crucial assumptions: (ℓ_4) (long-range independence) the random variables ( ℓ(·, a, b, ·) )_(a, b) ∈ A × B from ×Ω to exhibit long-range independence, i.e., there exists ρ > 0 such that (<ref>) holds with A × B and X_i ℓ(·, a, b, ·) where i = (a, b); (f) (oriented dynamics) the dynamics given by f A × B → is oriented, i.e., there exists δ > 0 and a direction e ∈^d-1 such that ⟨ f(a, b), e ⟩⩾δ for all (a, b) ∈ A × B. Our main result reads as follows. 
Let H be a stationary Hamiltonian belonging to and satisfying hypotheses (ℓ_4) and (f). Then, the HJ equation (<ref>) homogenizes, i.e., there exists a continuous function →, called effective Hamiltonian, such that, for every uniformly continuous function g on , there exists a set Ω_g of probability 1 such that, for every ω∈Ω_g, the solutions u^ϵ(·, ·, ω) of (<ref>) satisfying u^ϵ(0, · , ω) = g converges, locally uniformly on as ϵ→ 0^+, to the unique solution u of ∂_t u +( D_xu) = 0 in u(0, · ) = g in . Furthermore, satisfies (H1) and (H2). We stress that a Hamiltonian of the form (<ref>) with f satisfying condition (f) is never coercive. Indeed, lim_t → -∞ H(x, t e, ω) = +∞, lim_t → +∞ H(x, t e, ω) = -∞ for every (x, ω) ∈×Ω. By exploiting suitable Lipschitz bounds for solutions to (<ref>) with Lipschitz initial data and a localization argument inspired by <cit.>, we are able to extend the homogenization result stated above to the subclass of Hamiltonians in described in the statement of the next theorem. This new subclass has nonempty intersection with , but it is not fully contained in it. Let G be a stationary Hamiltonian belonging to and satisfying the following assumption: (G1) the random variables ( G(·, p, ·) )_p ∈ from ×Ω to exhibit long-rate independence, i.e., there exists ρ > 0 such that (<ref>) holds with and X_i H(·, p, ·) where i = p. Then, the homogenization result stated in Theorem <ref> holds for any Hamiltonian H of the form H(x, p, ω) G(x, π(p), ω) + ⟨ p, v ⟩, where v is a non-null vector in and π→ is a linear map such that π(v) = 0. Examples of Hamiltonians G lying in and satisfying (G1) are the ones of the form G(x, p, ω) G_0(p) + V(x, ω), where G_0 belongs to and V ×Ω→ is a stationary function, globally bounded and Lipschitz on , which satisfies (<ref>) with { 0 } and X_0 V. § REDUCTION ARGUMENTS FOR HOMOGENIZATION In this section we describe the reduction strategy that we will follow to prove Theorem <ref>. The first step consists in noticing that, in order to prove homogenization for equation (<ref>), it is enough to restrict to linear initial data instead of any g∈(). The precise statement is the following. Let H satisfy hypotheses (H1)-(H3) and denote by ũ^_θ the unique continuous solution of equation (<ref>) satisfying ũ^_θ(0, x, ω) = ⟨θ, x ⟩ for all (x, ω) ∈×Ω and for every fixed θ∈ and > 0. Assume there exists a function H→ such that, for every θ∈, the following convergence takes place for every ω in a set Ω_θ of probability 1: ũ^_θ(t, x, ω) ⟨θ, x ⟩ - t H(θ) in as ϵ→ 0^+. Then, H satisfies condition (H1)-(H2). Furthermore, for every fixed g ∈UC(^d), there exists a set Ω_g of probability 1 such that the unique function u^(·, ·, ω) ∈() which solves (<ref>) with initial condition u^(0, ·, ω) = g in converges, locally uniformly in as → 0^+, to the unique solution u∈C() of ∂_t u + H(D_x u) = 0 in with the initial condition u(0, ·) = g, for every ω∈Ω_g. This reduction argument, which is of deterministic nature, was already contained in the pioneering work <cit.> on periodic homogenization, at least as far as first-order HJ equations are concerned. This holds, in fact, even in the case when equation (<ref>) presents an additional (vanishing) diffusive and possibly degenerate term. A proof of this can be found in <cit.> and is given for Hamiltonians that are coercive in the p-variable. Such a class does not include the kind of Hamiltonians we consider here, as pointed out in Remark <ref>. 
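For the reader's convenience, here is the short computation behind the non-coercivity pointed out in that remark; it is a sketch that uses only the orientation condition (f) and the finiteness of ℓ_∞. For every fixed (x, ω) and every t > 0, since ⟨ f(a,b), e ⟩ ⩾ δ for all (a,b) ∈ A × B, we have H(x, te, ω) = max_b ∈ B min_a ∈ A { -ℓ(x,a,b,ω) - t ⟨ f(a,b), e ⟩ } ⩽ ℓ_∞ - t δ, which tends to -∞ as t → +∞, while for t < 0 we have -t ⟨ f(a,b), e ⟩ ⩾ |t| δ, hence H(x, te, ω) ⩾ -ℓ_∞ + |t| δ → +∞ as t → -∞. In particular, H(x, ·, ω) is bounded from above along the ray { te : t ⩾ 0 }, so it cannot be coercive and the coercive theory recalled above is indeed not applicable.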
Yet, the extension follows by arguing as in <cit.>, with the only difference that one has to use a different Comparison Principle, namely Theorem <ref>, in place of <cit.>. We refer the reader to <Ref> for the detailed argument. Theorem <ref> yields, in particular, that the effective Hamiltonian H is identified by the following almost sure limit: H(θ) - lim_ϵ→ 0ũ^ϵ_θ(1, 0, ω) for every fixed θ∈. The second step in the reduction consists in observing that, in order to prove the local uniform convergence required to apply <Ref>, it is enough to prove it for a fixed value of the time variable, that we chose equal to 1. Let ω∈Ω and θ∈^d be fixed, and assume that lim sup_ϵ→ 0^+sup_y∈ B_R |ũ^ϵ_θ(1, y,ω) - ⟨θ, x ⟩ + H(θ)| = 0 for every R>0. Then, for every T > 0, ũ_θ^ϵ(t,x,ω) ⟨θ, x ⟩ - t H(θ) in . Since ω will remain fixed throughout the proof, we will omit it from our notation. Let us fix θ∈^d. We first take note of the following scaling relations ũ^ϵ_θ(t, x) = ũ^1_θ(t / ϵ, x / ϵ) = t (ϵ / t) ũ^1_θ(t / ϵ, x / ϵ) = t ũ^ϵ/t_θ(1, x / t) for all t > 0 and x ∈^d. Fix T > 0. Then, for every fixed r ∈ (0, T), we obtain sup_r ⩽ t ⩽ Tsup_y ∈ B_R | ũ^ϵ_θ(t, y) - ⟨θ, y ⟩ + t H(θ) | = sup_r ⩽ t ⩽ T sup_y ∈ B_R | t (ũ^ϵ / t_θ(1, y / t) - ⟨θ, y / t ⟩ + H(θ)) | ⩽ T sup_ϵ / T ⩽η⩽ϵ / r sup_z ∈ B_R / r | ũ^η_θ(1, z) - ⟨θ, z ⟩ + H(θ) | . By (<ref>), the right-hand side goes to 0 as ϵ→ 0^+. On the other hand, in view of Proposition <ref>-(i) and of the fact that ũ^ϵ_θ(0, x) = ⟨θ, x ⟩ for all x ∈, we have sup_0 ⩽ t ⩽ r sup_y ∈^d | ũ^ϵ_θ(t, y) - ⟨θ, y ⟩ + t H(θ) | ⩽ r |H(θ)| + sup_0 ⩽ t ⩽ r sup_y∈^d | ũ^ϵ_θ(t, y) - ⟨θ, y ⟩ | ⩽ r (|H(θ)| + ℓ_∞ + f_∞) . Assertion (<ref>) follows from this and (<ref>) by the arbitrariness of the choice of r ∈ (0, T). In order to simplify some arguments, we find convenient to work with solutions with zero initial datum. We can always reduce to this case, without any loss of generality, by setting u^_θ(t, x, ω) ũ^_θ(t, x, ω) - ⟨θ, x ⟩ for all (t, x, ω) ∈×Ω. The function u^_θ is the unique continuous function which solves equation (<ref>) with H_θ H(·, θ + ·, ω) in place of H and which satisfies the initial condition u^_θ(0, x, ω) = 0 for all (x, ω) ∈×Ω. Note that the Hamiltonian H_θ is still given by the max-min formula (<ref>) where ℓ is replaced by ℓ_θ(x, a, b, ω) ℓ(x, a, b, ω) + ⟨ f(a, b), θ⟩. Furthermore, ℓ_θ satisfies the same conditions (ℓ_1)–(ℓ_4). The last reduction remark consists in noticing that the following rescaling relation holds u^ϵ_θ(1, x, ω) = ϵ u_θ(1 / ϵ, x / ϵ, ω) for all x ∈^d and > 0, where we have denoted by u_θ the function u^_θ with = 1. In the light of all this, the proof of <Ref> is thus reduced to show that, for every fixed θ∈, there exists a set Ω_θ of probability 1 such that, for every ω∈Ω_θ, we have lim sup_ϵ→ 0^+ sup_y ∈ B_R | u^ϵ_θ(1, y, ω) + H(θ) | = lim sup_t → +∞ sup_y ∈ B_t R | u_θ(t, y, ω)/t + H(θ) | = 0 for all R > 0, for some deterministic function H→. § PROBABILISTIC CONCENTRATION In this section we shall prove, by making use of Azuma's martingale inequality, that u_θ(t, 0, ·) is concentrated. For this, we will take advantage of the fact that u_θ can be expressed via the Differential Game Theoretic formula (<ref>) with initial datum g ≡ 0 and running cost ℓ_θ(x, a, b, ω) ℓ(x, a, b, ω) + ⟨ f(a, b), θ⟩. Here is where assumptions (<ref>) and (f) play a crucial role, by ensuring altogether that the solution of (<ref>) is robust with respect to local perturbations of the cost function ℓ. 
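Before weakening the standing assumptions and starting the proof, we illustrate numerically the type of tail bound that Azuma's inequality (recalled below) delivers. The Python sketch that follows is a toy example only, a symmetric ±1 random walk, i.e. a martingale with increments bounded by c_m = 1, unrelated to the differential game; the function names and parameters are ours.

import math
import random

def empirical_tail(n_steps, M, n_trials=20000, seed=0):
    # empirical P(|X_n - X_0| >= M sqrt(n)) for a symmetric +-1 random walk
    rng = random.Random(seed)
    threshold = M * math.sqrt(n_steps)
    hits = sum(
        1
        for _ in range(n_trials)
        if abs(sum(rng.choice((-1, 1)) for _ in range(n_steps))) >= threshold
    )
    return hits / n_trials

n, M = 400, 2.0
azuma_bound = 2 * math.exp(-M ** 2 / 2)   # Azuma: 2 exp(-(M sqrt(n))^2 / (2 n)) since each c_m = 1
print(empirical_tail(n, M), "<=", azuma_bound)

With these parameters the empirical frequency is close to the Gaussian tail 2Φ(-2) ≈ 0.046, well below the Azuma bound 2exp(-2) ≈ 0.271; the concentration estimate proved in this section is of the same exponential-in-M^2 form.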
Throughout this section, we will weaken the conditions on the running cost ℓ^d × A × B ×Ω→ and we will assume ℓ to be only jointly measurable and such to satisfy conditions (ℓ_1)-(ℓ_2). In particular, no continuity and stationarity conditions with respect to x will be required. We start by recalling a classic theorem on concentration of martingales, also known as Azuma's inequality. Let (X_n)_n ∈ be a martingale and (c_n)_n ∈ a real sequence such that, for all n ∈, |X_n - X_n + 1| ⩽ c_n almost surely. Then, for all n ∈ and M > 0, ( |X_n - X_0| ⩾ M ) ⩽ 2 exp( -M^2/2 ∑_m = 0 ^n-1 c_m^2) . The probabilistic concentration result mentioned before is stated as follows. There exists a constant c=c(ρ,δ, (ℓ),f_∞)>0, only depending on ρ, δ, (ℓ) and f_∞, such that, for all M > 0 and t ⩾ 1, P( |u_θ(t, 0,·) - U_θ(t)| ⩾ M √(t)) ⩽exp( -c M^2 ) , where U_θ(t) [u_θ(t, 0, ·)] denotes the expectation of the random variable u_θ(t, 0, ·). We have tailored the proof of Proposition <ref> in such a way that the constant c appearing in the statement does not depend on ℓ_∞. This is crucial in view of the extension of the homogenization result provided in Theorem <ref>. To prove <Ref>, we need a technical lemma first. The result is deterministic, hence the dependence on ω will be omitted. It expresses that, thanks to the condition (f) on the dynamics, we can control the variation of the value function as we change the running cost on a strip that is orthogonal to the direction e. For any given pair R < R in , let us define the strip between R and R as follows: S_R, R{ x : ⟨ x, e⟩∈ [R, R] } . Let ℓ, ℓ^d × A × B → be Borel–measurable bounded running costs and let f A × B → be a continuous vector field satisfying condition (f). For every fixed θ∈^d, let us denote by u_θ(t, x), u_θ(t, x) the value functions defined via (<ref>) with initial datum g ≡ 0 and running cost ℓ_θ(x, a, b, ω) ℓ(x, a, b, ω) + ⟨ f(x, a, b), θ⟩ and ℓ_θ(x, a, b, ω) ℓ(x, a, b, ω) + ⟨ f(x, a, b), θ⟩, respectively. If ℓ = ℓ on ( ∖ S_R, R) × A × B, we have |u_θ(t, x) - u_θ(t, x) | ⩽R - R/δℓ - ℓ_∞ for all (t, x) ∈. We point out the generality of the previous statement: no continuity conditions on the running costs ℓ, ℓ are assumed in the above statement, and the functions u_θ, u_θ are still well defined. Fix t > 0 and x, θ∈. Let R < R be arbitrary in and consider the strip S_R, R. Fix controls α∈Γ(t) and b ∈(t) and consider the solution y_x [0, t] → of the ODE ẏ_x(s) = f(α[b](s), b(s)) in [0, t] y_x(0) = x. From the orientation of the game, the map s ↦⟨ y_x(s), e⟩ is strictly increasing in [0, t]. More precisely, dds⟨ y_x(s), e ⟩ = ⟨ẏ_x(s), e ⟩ = ⟨ f(α[b](s), b(s)), e ⟩⩾δ for all s ∈ [0, t]. If ⟨ y_x(0), e ⟩ = ⟨ x, e ⟩⩾R, we derive that the curve y_x always lies in ∖ S_R, R̂ and the assertion trivially follows since ℓ = ℓ on ( ∖ S_R, R) × A × B. Let us then assume that ⟨ y_x(0), e ⟩ < R and define two exit times t_1 and t_2 as follows: t_1 inf{ s ∈ [0, t] : ⟨ y_x(s), e ⟩ > R } , t_2 sup{ s ∈ [0, t] : R < ⟨ y_x(s), e ⟩ < R}, where we agree that t_1=t_2=t when the sets above are empty. Notice that t_2 - t_1 ⩽ (R - R) / δ. Indeed, if t_2 - t_1 > 0, then t_1 < t and, by continuity of y_x and (<ref>), we have ⟨ y_x(t_1), e ⟩ = R. From (<ref>), we infer R - R ⩾⟨ y_x(t_2) - y_x(t_1), e ⟩ = ∫_t_1^t_2⟨ f(α[b](s),b(s)), e ⟩ ds ⩾ (t_2 - t_1) δ , as it was claimed. Also, if t_2 < t, then, from (<ref>), we have that ⟨ y_x(t), e ⟩ > R for every t ∈ (t_2, t). Consider deterministic running costs ℓ, ℓ^d × A × B → such that ℓ = ℓ on ( ∖ S_R, R) × A × B. 
Then, in view of the previous remarks, we get | ∫_0^t ( ℓ_θ( y_x(t), α(t), β(t)) - ℓ_θ( y_x(t), α(t), β(t))) ds | = | ∫_t_1^t_2(ℓ( y_x(t), α(t), β(t)) - ℓ( y_x(t), α(t), β(t))) ds | ⩽ (t_2 - t_1) ℓ - ℓ_∞⩽R - R/δℓ - ℓ_∞ . The assertion easily follows from this by arbitrariness of the choice of the control b ∈(t) and of the strategy α∈Γ(t). In the proof of <Ref>, we look at the conditional expectation of u_θ(t, 0, ·) given the running cost in a neighborhood of zero. Taking a suitable sequence of increasing neighborhoods, we define the corresponding martingale and note that, by <Ref>, it has bounded differences. Then, applying Azuma's inequality, see <Ref>, we get the result. Fix t > 0. Recall the long-range independence parameter ρ. Denote n ⌈ t ⌉ and C ⌈ f _∞ / ρ⌉, where ⌈·⌉ stands for the upper integer part. Note that, for all α∈Γ(t) and b ∈(t), the solution y_0 [0, t] → of the ODE ẏ_0(s) = f(α[b](s), b(s)) in [0,t] y_0(0) = 0 satisfies that |y_0(s)| ⩽ρ C n for all s ∈ [0, t]. For r ∈{1, 2, …, C n }, let ℱ_r be the σ-algebra generated by the random variables {ℓ(y, a, b, ·) : a ∈ A, b ∈ B, ⟨ y, e ⟩⩽ρ r }, and let ℱ_0 be the trivial σ-algebra. We claim that, for all 0 ⩽ r < C n, we have that | [ u_θ(t, 0, ·) | ℱ_r + 1 ] - [ u_θ(t, 0, ·) | ℱ_r ] | ⩽8ρ^2/δ(ℓ) . Indeed, recall that S_r, r{ y ∈ : ⟨ y, e ⟩∈ [r, r] } and consider ℓ defined by ℓ(x, a, b, ω) = ℓ(x - 2 ρ e, a, b, ω) for (x, a, b, ω) ∈ S_ρ r, ρ(r + 2)× A × B ×Ω, and ℓ(x, a, b, ω) = ℓ(x, a, b, ω) otherwise. Then, by <Ref>, almost surely we have that | u_θ(t, 0, ω) - u_θ(t, 0, ω) | ⩽2 ρ/δ(ℓ) 2 ρ = 4ρ^2/δ(ℓ). where u_θ (resp. u_θ) is the value function associated via (<ref>) with the running cost ℓ_θℓ + ⟨ f(a, b), θ⟩ (resp. ℓ_θℓ+ ⟨ f(a, b), θ⟩) and initial datum g ≡ 0. By definition, u_θ is measurable with respect to the σ-field generated by {ℓ(x, a, b, ·) : a ∈ A, b ∈ B, x ∈}, which coincides with the σ-field generated by 𝒢{ℓ(x, a, b, ·) : a ∈ A, b ∈ B, x ∈∖ S_ρ r, ρ (r + 2)}. By long-range independence, the σ-field generated by 𝒢{ℓ(x, a, b, ·) : a ∈ A, b ∈ B, ⟨ x, e ⟩⩾ρ (r + 2) } is independent of the σ-field ℱ_r + 1. Therefore, [ ℓℱ_r + 1] = [ ℓℱ_r] , and so we obtain that [ u_θ(t, 0, ·) ℱ_r + 1] =[ u_θ(t, 0, ·) ℱ_r] . Finally, using successively (<ref>) and (<ref>), we prove the claim: | [ u_θ(t, 0, ·) | ℱ_r + 1] - [u_θ(t, 0, ·) | ℱ_r] | ⩽ |[u_θ(t, 0, ·) | ℱ_r + 1] - [u_θ(t, 0, ·) | ℱ_r] | + 8 ρ^2/δ(ℓ) = 8 ρ^2/δ(ℓ) . For r ⩽ C n, denote W_r [ u_θ(t, 0, ·) | ℱ_r ]. The process (W_r)_r ⩽ C n is a martingale, and W_0 = [ u_θ(t, 0, ·) ] = U_θ(t). Moreover, since u_θ(t, 0, ·) is ℱ_C n-measurable, we have that W_C n(·) = u_θ(t, 0, ·). By Azuma's inequality, see <Ref>, applied to the martingale (W_τ)_τ⩽ C n and (<ref>), for all M ⩾ 0, P(|u_θ(t, 0, ·) - U_θ(t)| ⩾ M)  = P(|W_C n - W_0| ⩾ M)  ⩽ 2 exp(- M^2/2 C n ( 8 ρ^2/δ(ℓ) )^2 ) . Therefore, there exists a constant c > 0, only depending on ρ, δ, (ℓ), and f_∞ (through C), but not on θ, such that, for all M > 0 and t ⩾ 1, we have that P(|u_θ(t, 0, ·) - U_θ(t)| ⩾ M √(t))  ⩽exp(- c M^2 ) , as it was asserted. § PROOF OF THE HOMOGENIZATION RESULTS This section is devoted to the proof of our homogenization results stated in Theorems <ref> and <ref>. We will denote by H a stationary Hamiltonian belonging to the class , and by u_θ(·, ·, ω) the solution of the equation (<ref>) with = 1, H_θ H(·, θ + · , ω) in place of H, and initial condition u_θ(0, x, ω) = 0 for all (x, ω) ∈×Ω. 
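We record for later use that H_θ satisfies conditions (H1')-(H3') of the appendix with constants β_1 = β(1+|θ|) and β_2 = β_3 = β: indeed, by (H1), |H_θ(x,p,ω)| = |H(x, θ + p, ω)| ⩽ β(1+|θ + p|) ⩽ β(1+|θ|)(1+|p|), while the Lipschitz bounds in p and in x are unaffected by the shift p ↦ θ + p. This is the form in which the Lipschitz estimates of the appendix will be applied to u_θ below.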
Note that, since the initial condition is zero, the function u_θ is stationary, i.e., for all (t, x, ω) ∈×Ω and y ∈ we have u_θ(t, x + y, ω) = u_θ(t, x, τ_y ω), –almost surely in Ω. We also recall that U_θ(t) denotes the expectation of the random variable u_θ(t, 0, ·). §.§ Proof of Theorem <ref> In this subsection, we will furthermore assume that H belongs to the class , so that u_θ(t, x, ω) can be represented via (<ref>) with initial datum g ≡ 0 and running cost ℓ_θ(x, a, b, ω) ℓ(x, a, b, ω) + ⟨ f(x, a, b), θ⟩. This allows us to make use of the results obtained in Section <ref>. According to Section <ref>, the proof of Theorem <ref> boils down to establishing the following result. For every fixed θ∈, there exists a set Ω_θ of probability 1 such that, for every ω∈Ω_θ, we have lim sup_ϵ→ 0sup_y∈ B_R |u^ϵ_θ(1, y, ω)+ H(θ)| = lim sup_t → +∞sup_y∈ B_tR |u_θ(t, y, ω)/t + H(θ)| = 0 for all R>0, for some deterministic function H→. The proof of Proposition <ref> is based on the following fact. For all θ∈, we have that U_θ(t) / t converges as t goes to infinity. The proof of Proposition <ref> consists in showing that U_θ(t), satisfies an approximated subadditive inequality. We postpone this proof and proceed to show how to exploit this fact to prove <Ref>. For this, we will also crucially exploit <Ref>. According to Proposition <ref>, we define H→ by setting H(θ) -lim_t→ +∞U_θ(t)/t = -lim_t→ +∞[u_θ(t, 0, ·)]/t for all θ∈. Consider n ∈ and a discretization Z_n of B_nR consisting of at most (2n^3R)^d points such that any point of B_nR is (√(d) / n^2)-close to Z_n. We claim that, almost surely, lim sup_n → +∞sup_z ∈ Z_n| u_θ(n, z,ω)/n + H(θ) | = 0 . Indeed, consider M > 0 arbitrary. Then, for n ∈, we focus on the event sup_z ∈ Z_n| u_θ(n, z,ω)/n + H(θ) | ⩾ M . Using the concentration given by <Ref> and the convergence stated in <Ref>, we will prove that the probability of this event is summable. By Borel-Cantelli, we deduce that the lim sup event has probability 0 and we conclude by noticing that the value of M is arbitrary. Using first the union bound, then <Ref>, and last stationarity of u_θ, we have that for n large enough, ( sup_z ∈ Z_n | u_θ(n, z,·)/n + H(θ) | ⩾ M ) ⩽∑_z ∈ Z_n( | u_θ(n, z,·)/n + H(θ) | ⩾ M ) ⩽∑_z ∈ Z_n( | u_θ(n, z,·)/n - U_θ(n)/n| ⩾M/2) = |Z_n| ( | u_θ(n, z,·)/n - U_θ(n)/n| ⩾M/2). Note that, by the choice of Z_n and <Ref>, |Z_n| ( | u_θ(n, z,·)/n - U_θ(n)/n| ⩾M/2) ⩽ (n^3R)^d exp( -c M^2/4 n ) . Therefore, the sequence of events (sup_z ∈ Z_n| u_θ(n, z, ·)/n + H(θ) | ⩾ M )_n ∈ is summable. By Borel-Cantelli, we obtain that ( lim sup_n →∞sup_z ∈ Z_n| u_θ(n, z,·)/n + H(θ) | ⩾ M )= 0 , which proves the claim since M > 0 is arbitrary. Let us denote by Ω_θ the set of probability 1 for which (<ref>) holds. By applying Theorem <ref> with H_θ in place of H we derive that u_θ(t, ·, ω) is (β t+|θ|)–Lipschitz in and u_θ(·, x, ω) is β(1+ |θ|)^2–Lipschitz in [0,+∞).[Since the Hamiltonian H_θ satisfies (H1')-(H3') with β_1 β(1+|θ|) and β_2=β_3 β.] We infer |u_θ(t, x, ω)| = |u_θ(t, x, ω)-u_θ(0, x, ω)| ⩽β t (1 + |θ|)^2 for all (t, x, ω) ∈×Ω. The latter implies sup_y∈ B_tR| u_θ(t, y,ω)/t + H(θ) | ⩽sup_y∈ B_⌈ t⌉ R| u_θ(t, y, ω)/t + H(θ) | ⩽sup_y∈ B_⌈ t⌉ R| u_θ(⌈ t⌉, y, ω)/t + H(θ) | + β(1+|θ|)^2/t ⩽sup_y∈ B_⌈ t⌉ R| u_θ(⌈ t⌉, y, ω)/⌈ t⌉ + H(θ) | + 2 β(1+|θ|)^2/t ⩽sup_z ∈ Z_⌈ t⌉| u_θ(⌈ t⌉, z,ω)/⌈ t⌉ + H(θ) | + √(d)/⌈ t ⌉ (h)+ 2 β(1 + |θ|)^2/t We deduce that lim sup_t → +∞sup_y ∈ B_t R| u_θ(t, y, ω)/t + H(θ) | = 0. Let us turn back to the proof of <Ref>. Fix θ∈ and t > 0. Denote 𝐁(t) B(0, t f _∞). 
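For the reader's convenience, we make explicit the tail bound used in the union-bound step above: it is the concentration estimate of the previous section applied with t = n and with M √(n)/2 in place of M (recall that, by stationarity, u_θ(n, z, ·) has the same law as u_θ(n, 0, ·)), namely P( | u_θ(n, z, ·)/n - U_θ(n)/n | ⩾ M/2 ) = P( | u_θ(n, z, ·) - U_θ(n) | ⩾ (M √(n)/2) √(n) ) ⩽ exp( -c M^2 n/4 ). Since |Z_n| grows only polynomially in n, the series ∑_n |Z_n| exp(-c M^2 n/4) converges, which is exactly what the Borel-Cantelli argument requires.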
Note that, for all α∈Γ(t) and b ∈(t), the solution of the ODE ẏ_0(s) = f(α[b](s),b(s)) in [0, t] y_0(0) = 0 is such that, for all s ∈ [0, t], we have that y_0(s) ∈𝐁(t). By Proposition <ref> we know that the solution u_θ(t, ·, ω) is (β t + |θ|)–Lipschitz in . We will discretize 𝐁(t) accordingly. Consider a finite set Z of size ⌈ 2 f _∞ (β t + |θ|) t ⌉^2d such that any point of 𝐁(t) is (β t + |θ|)^-1–close to a point in Z. Using the Lipschitz property of u_θ(t, ·, ω), the union bound, the fact that variables u_θ(t, x, ω) and u_θ(t, 0, ω) have the same distribution by stationarity and the concentration proven in <Ref>, we get that, for all M > 0, P( ∃ x ∈𝐁(t), |u_θ(t,x, ·) - U_θ(t)| ⩾ M ) ⩽P( ∃ x ∈ Z, |u_θ(t,x,·) - U_θ(t)| ⩾ M - 1 ) ⩽∑_z ∈ ZP( |u_θ(t, z, ·) - U_θ(t)| ⩾ M - 1 ) = ∑_z ∈ ZP( |u_θ(t, 0, ·) - U_θ(t)| ⩾ M - 1 ) ⩽⌈ 2 f _∞ (β t + |θ|) t ⌉^2dexp(-c (M - 1)^2 / t ), where c is a constant depending on ρ, δ, (ℓ), and f _∞, but not on θ. Taking M_t t^3/4 + 1 in place of M in the above inequality, we get that there exists a constant K only depending on β and |θ| such that P(∃ x ∈𝐁(t), |u_θ(t, x, ·) - U_θ(t)| ⩾ M_t) ⩽ K t^4dexp(-c t^1/2) . Hence, P(inf_x ∈𝐁(t)u_θ(t,x,·)⩽ U_θ(t) - M_t ) ⩽ K t^4dexp(-c t^1/2) . We now recall that |u_θ(t, x, ω)| ⩽β t (1 + |θ|)^2, see (<ref>), in particular |U_θ(t)| ⩽β t (1 + |θ|)^2. Then, [ inf_x ∈𝐁(t)u_θ(t, x, ·)] ⩾P(inf_x ∈𝐁(t)u_θ(t, x, ·)⩽ U_θ(t) - M_t ) ( -β t(1+|θ|)^2 ) + P(inf_x ∈𝐁(t)u_θ(t, x, ·) > U_θ(t) - M_t ) ( U_θ(t) - M_t ) = P(inf_x ∈𝐁(t)u_θ(t, x, ·)⩽ U_θ(t) - M_t ) ( - β t(1+|θ|)^2 - U_θ(t) + M_t ) + U_θ(t) - M_t ⩾ U_θ(t) - M_t - P(inf_x ∈𝐁(t)u_θ(t, x, ·)⩽ U_θ(t) - M_t )2β t(1+|θ|)^2. Combining with (<ref>), we deduce that there exists a constant K depending on β, |θ|,ρ,δ, (ℓ) and f_∞ such that, for all t ⩾ 1, [inf_x ∈𝐁(t)u_θ(t, x, ·)] ⩾ U_θ(t) - K t^3/4 . We will now show that the sequence (-U_θ(n))_n ∈ is almost subadditive and therefore it has a limit. Let n ⩾ 1 and m ⩽ n. Recall that, by Theorems <ref> and <ref>, we have the following formulae: u_θ(m,0,ω) = sup_α∈Γ(m) inf_b∈(m){∫_0^m ℓ_θ(y_0(s), α[b](s), b(s), ω) ds } , and u_θ(m+n,0,ω) = sup_α∈Γ(m) inf_b∈(m){∫_0^mℓ_θ(y_0(s), α[b](s), b(s), ω) ds + u_θ(n, y_0(m), ω) } . For each fixed strategy α, pick a control b that approximates the infimum in (<ref>) up to an error δ > 0. By definition, u_θ(m + n, 0, ω) +δ⩾ u_θ(m, 0, ω) + inf_x ∈𝐁(m) u_θ(n, x, ω) . By taking expectation, by the arbitrariness of δ > 0 and and using the inequality in (<ref>), we get U_θ(m + n) ⩾ U_θ(m) + U_θ(n) - K m^3/4 for all n∈. Set a_n -U_θ(n) for all n ∈, and z(h) K h^3/4 for all h ⩾ 1. By the previous inequality, the sequence (a_n) is subadditive with an error term z, i.e., a_m+n⩽ a_m + a_n + z(m + n) for all m,n ∈. Note that z is non-negative, non-decreasing and ∫_1^+∞ z(h) / h^2 dh < +∞. Furthermore, by Proposition <ref>, the function U_θ is Lipschitz in [0,+∞), in particular a_n/n is bounded since U_θ(0) = 0. By <cit.>, the sequence (a_n / n)_n ∈ converges, hence (U_θ(n)/n)_n ∈ converges. Since U_θ is Lipschitz, we deduce that U_θ(t) / t converges as t → +∞, as it was to be shown. We remark for further use what we have actually shown with (<ref>): there exists a constant K, only depending on β, |θ|, ρ, δ, (ℓ), and f _∞, such that U_θ(m + n) ⩾ U_θ(m) + U_θ(n) - K m^3/4 for all m,n∈. §.§ Proof of <Ref> Let G be a stationary Hamiltonian satisfying conditions (G1) and (H1)–(H3), for some β > 0. 
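We also recall, for orientation, the subadditivity lemma used in the proof above. In the classical lemma of Fekete (the case z ≡ 0), the inequality a_m+n ⩽ a_m + a_n for all m, n implies that a_n/n converges (possibly to -∞) to inf_n a_n/n. The generalization of <cit.> allows the additive error z(m+n), provided z is non-negative, non-decreasing, satisfies ∫_1^+∞ z(h)/h^2 dh < +∞ and a_n/n is bounded, and still yields convergence of a_n/n. The error term z(h) = K h^3/4 appearing above qualifies, since ∫_1^+∞ h^3/4/h^2 dh = ∫_1^+∞ h^-5/4 dh = 4 < +∞.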
According to <cit.>, for every fixed R > 0 the Hamiltonian G can be represented as G(x, p, ω) = max_b ∈ Bmin_a ∈ A {-ℓ(x, a, b, ω) - ⟨f̂(a), p ⟩} for all (x,p,ω)∈×B_R×Ω, where A B_1, B B_R, ℓ(x, a, b) -G(x, b, ω) + β⟨ a, b ⟩ and f̂(a) -β a. We derive that H(x, p, ω) =max_b ∈ Bmin_a ∈ A{ -ℓ(x, a, b, ω) - ⟨ f(a), p ⟩} for all (x, p, ω) ∈×B_R ×Ω, where f(a) π^⊤( f̂(a)) + v and π^⊤ denotes the transpose of the linear map π. It is clear that f satisfies condition (f) with e v / |v| and δ |v|, and ℓ satisfies conditions (ℓ_1)–(ℓ_4). Furthermore, f_∞⩽β and (ℓ) ⩽β, independently on the choice of R > 0 in the definition of the set B B_R. Now let us fix θ∈ and, for every fixed s⩾ 1, define H^s(x,p,ω) max_b ∈ B_smin_a ∈ A {-ℓ(x, a, b, ω) - ⟨ f(a), p ⟩}, (x, p, ω) ∈××Ω, where B_s B_Ds with D β+|θ|. Let us denote by u_θ^s(t, x, ω) the solution of equation (<ref>) with = 1 and H^s(·, θ + · , ·) in place of H satisfying the initial condition u_θ^s(0, x, ω) = 0 for all (x, ω) ∈×Ω. In view of <Ref> and the fact that H(x, p, ω) = H^s(x, p, ω) on ×B_Ds×Ω, we have that u^s_θ(t, ·, ·) ≡ u_θ(t, ·, ·) for all 0 ⩽ t ⩽ s. By Proposition <ref>, there exists a constant c > 0, only depending on β, ρ, and δ = |v|, such that, for all M > 0 and t ⩾ 1, P( |u_θ(t, 0,·) - U_θ(t)| ⩾ M √(t)) = P( |u^t_θ(t, 0,·) - U^t_θ(t)| ⩾ M √(t)) ⩽exp( -c M^2 ) . Moreover, as underlined in <Ref>, there exists a constant K, only depending on β, ρ, δ = |v|, and |θ|, such that U^s_θ(m + n) ⩾ U^s_θ(m) + U^s_θ(n) - K m^3/4 for all n∈. Applying this to s = m + n we get U_θ(m + n) ⩾ U_θ(m) + U_θ(n) - K m^3/4 for all n∈. By arguing as in the last paragraph of the proof of <Ref>, we conclude that U_θ(t) / t converges. The result of Proposition <ref> holds in the same way. We remark that its proof only relies on the validity of Proposition <ref> and <Ref> and on the fact that H is a stationary Hamiltonian belonging to . This concludes the proof of <Ref>. § PDE RESULTS In this appendix we collect the PDE results used in the paper, which are of deterministic nature and which follow from the ones herein stated and proved by regarding at ω as a fixed parameter. We will denote by H a continuous Hamiltonian defined on and satisfying further assumptions that will be specified case by case. Throughout this section, we will denote by LSC(X) and USC(X) the space of lower semicontinuous and upper semicontinuous real functions on a metric space X, respectively. Let T>0 be fixed and consider the following evolutive HJ equation HJ∂_t u + H(x, Du) = 0 in . We shall say that a function v∈USC() is an (upper semicontinuous) viscosity subsolution of (<ref>) if, for every ϕ∈C^1() such that v-ϕ attains a local maximum at (t_0,x_0)∈, we have ∂_t ϕ(t_0,x_0)+H(x_0, D_x ϕ(t_0,x_0))⩽ 0. Any such test function ϕ will be called supertangent to v at (t_0,x_0). We shall say that w∈LSC() is a (lower semicontinuous) viscosity supersolution of (<ref>) if, for every ϕ∈C^1() such that w-ϕ attains a local minimum at (t_0,x_0)∈, we have ∂_t ϕ(t_0,x_0)+H(x_0, D_x ϕ(t_0,x_0))⩾ 0. Any such test function ϕ will be called subtangent to w at (t_0,x_0). A continuous function on is a viscosity solution of (<ref>) if it is both a viscosity sub and supersolution. Solutions, subsolutions, and supersolutions will be always intended in the viscosity sense, hence the term viscosity will be omitted in the sequel. §.§ Comparison principles We start by stating and proving a comparison principle, which applies in particular to the case of bounded sub and supersolutions. 
The conditions assumed on H are far from being optimal, but general enough for our purposes. The proof is somewhat standard, we provide it below for the reader convenience. Let H be a Hamiltonian satisfying the following assumptions, for some continuity modulus ω: |H(x, p) - H(x, q)| ⩽ω(|p-q|) for all x, p, q ∈.H2' |H(x, p) - H(y, p)| ⩽ω(|x - y| (1 + |p|)) for all x, y, p ∈,H4 Let v∈USC([0,T]×) and w∈LSC([0,T]×) be, respectively, a bounded from above subsolution and a bounded from below supersolution of (<ref>). Then, for all (t,x)∈ (0,T)×, v(t,x)-w(t,x)⩽sup_(v(0,·)-w(0,·)) . Since v is bounded from above, up to adding to v a suitable constant, we assume without any loss of generality that sup_(v(0,·)-w(0,·)) = 0. The assertion is thus reduced to proving that v⩽ w in . We proceed by contradiction. Assume that v > w at some point of (0,T)×. Up to translations, we can assume without loss of generality that this point has the form (t, 0) for some t∈ (0,T). We will construct a point (x, y, p, q) where the continuity assumptions (<ref>)-(<ref>) do not hold, leading to a contradiction. For every η, , b > 0, consider the auxiliary function Φ()^2 → defined by Φ(t,x,s,y) v(t,x)-w(s,y)-|x-y|^2/2 -(t-s)^2/2-η(ϕ(x)+ϕ(y)) -b/T-t-b/T-s, where ϕ(x) √(1+ | x |^2). Define θ v(t, 0) - w(t, 0) > 0. Then, since t∈ (0,T), there exists δ > 0 small enough such that, for all η, b ∈(0, δ ), Φ(t,0,t,0) = v(t,0)-w(t,0)-2ηϕ(0)-2b/T-t> θ/2 . Notice that Φ(t,x,s,y) ⩽ M -η(ϕ(x)+ϕ(y))-b/T-t-b/T-s in ()^2 with M (v^+_L^∞()+w^-_L^∞()), where we have denoted by v^+(x) max{v(x),0} the positive part of v and by w^-(x)=max{-w(x),0} the negative part of w. We derive that, for every > 0 and η∈ (0, δ ), there exists (t_, x_, s_, y_) ∈()^2, which also depend on η, such that Φ(t_,x_,s_,y_)=sup_()^2Φ⩾Φ(t,0,t,0)> θ/2. In view of (<ref>) we infer η(ϕ(x_)+ϕ(y_)) +b/T-t_+b/T-s_⩽M̃ and |x_-y_| + |t_-s_|⩽√(2M̃) with M̃ M- θ / 2. From the first inequality in (<ref>) we derive that exist constants T_b∈ (0,T), depending on b ∈ (0, δ ), and ρ_η>0, depending on η, such that t_,s_∈ [0, T_b] and x_,y_∈ B_ρ_η. For every fixed η∈ (0, δ ), from <cit.> we derive that, up to subsequences, lim_→ 0 (t_,x_,s_,y_)=(t_0,x_0,t_0,x_0) and lim_→ 0|x_-y_|^2/ = 0 for some (t_0,x_0)∈ satisfying v(t_0,x_0)-w(t_0,x_0)-2ηϕ(x_0)-2b/T-t_0 = sup_(t,x)∈Φ(t,x,t,x) > θ/2. In particular, such a point (t_0,x_0) actually lies in , i.e., t_0>0, since (<ref>) implies v(t_0,x_0)-w(t_0,x_0)>0. For every fixed η∈(0,δ), choose _η>0 small enough so that (t_,x_) and (s_,y_) both belong to when ∈ (0,_η). The function φ(t,x) w(s_,y_)+|x-y_|^2/2 +η(ϕ(x)+ϕ(y_))+|t-s_|^2/2+b/T-t is a supertangent to v(t,x) at the point (t_,x_) and v is a subsolution of (<ref>), while the function ψ(s,y) v(t_,x_)-|x_-y|^2/2 -η(ϕ(x_)+ϕ(y))-|t_-s|^2/2-b/T-s is a subtangent to w(s,y) at the point (s_,y_) and w is a subsolution of (<ref>). We infer t_-s_ + H(x_, p_+η Dϕ(x_)) ⩽ -c, t_-s_ + H(y_, p_-η Dϕ(y_)) ⩾ c, where we have set c b/T^2 and p_x_-y_/. By subtracting (<ref>) from (<ref>), we get, according to assumptions (<ref>)-(<ref>), 0 < 2c ⩽ H(y_, p_-η Dϕ(y_))-H(x_, p_+η Dϕ(x_))⩽ω(|x_-y_|(1+ |p_|)) + 2 ω(η). Sending → 0^+, we have that ω(|x_-y_|(1+ |p_|)) vanishes in view of (<ref>). Sending η→ 0^+, we have that ω(η) vanishes. But this is a contradiction since 0 < c. Next, we provide a result which is usually used to prove the finite speed of propagation of first order HJ equations, see for instance <cit.>. Let H be a Hamiltonian satisfying (<ref>) and condition (H2), for some constant β>0. 
Let v∈USC([0,T]×) and w∈LSC([0,T]×) be, respectively, a sub and a supersolution of (<ref>). Then the function u v-w is an upper semicontinuous subsolution to ∂_t u -β|Du| = 0 in . Let φ∈ C^1() be supertangent to u in a point (t_0,x_0)∈ and let us assume that (t_0,x_0)∈ is a strict local maximum point of u-φ. Let us choose r>0 such that the open ball B B_r((t_0,x_0)) of radius r>0 centred at (t_0,x_0) is compactly contained in and (t_0,x_0) is the only maximum point of u-φ in B. Let us introduce the function Φ(t,x,s,y) v(t,x)-w(s,y)-|x-y|^2/2-|t-s|^2/2 - φ(t,x) for (t, x, s, y) ∈B×B. By uppersemicontinuity of Φ and compactness of the domain, the maximum of Φ on B×B is attained at (at least) a point (t_,x_,s_,y_)∈B×B. In view of <cit.>, we infer that lim_→ 0 (t_,x_,s_,y_)=(t_0,x_0,t_0,x_0) and lim_→ 0|x_-y_|^2/=0 Choose _0>0 small enough so that (x_,t_), (y_,s_) belong to B for every ∈ (0,_0). The function ψ_1(t,x) w(s_,y_)+|x-y_|^2/2+|t-s_|^2/2+φ(t,x) is a supertangent to v(t,x) at (t_,x_) and v is a subsolution to (<ref>), hence t_-s_+∂_t φ (t_,x_)+H(x_,p_+D_x φ (t_,x_))⩽ 0, where we have set p_x_-y_/. Analogously, ψ_2(s,y) v(t_,x_)-|x_-y|^2/2-|t_-s|^2/2-φ(t_,x_) is a subtangent to w(s,y) at the point (s_,y_) and w is a supersolution to (<ref>), hence t_-s_+H(y_,p_)⩾ 0, By subtracting (<ref>) from (<ref>) and by taking into account conditions (<ref>)-(<ref>), we get 0 ⩾ ∂_t φ (t_,x_)+H(x_,p_+D_x φ (t_,x_)) - H(y_,p_) ⩾ ∂_t φ (t_,x_)-β|D_x φ (t_,x_)|+H(x_,p_) - H(y_,p_) ⩾ ∂_t φ (t_,x_)-β|D_x φ (t_,x_)| -ω(|x_-y_|(1+|p_|)). Now we send → 0^+ to get, in view of (<ref>), 0⩾∂_t φ (t_0,x_0)-β|D_x φ (t_0,x_0)|, as it was to be shown. With the aid of the previous proposition, we can prove the following version of the Comparison Principle for unbounded sub and supersolutions. Let H be a Hamiltonian satisfying assumptions (H2) and (<ref>). Let v ∈USC([0, T] ×) and w ∈LSC([0, T] ×) be, respectively, a sub and a supersolution of (<ref>). Then, v(t,x)-w(t,x)⩽sup_(v(0,·)-w(0,·)) for every (t,x)∈ (0,T)×. Up to adding to v a suitable constant and to the trivial case sup_(v(0,·)-w(0,·))=+∞, we can assume, without any loss of generality, that sup_(v(0,·)-w(0,·))=0. In view of Proposition <ref>, the function u v-w is an upper semicontinuous subsolution of (<ref>) satisfying u(0,·)⩽ 0 in . Let Ψ→ be a C^1, strictly increasing and bounded function satisfying Ψ(0)=0. It is easily seen, due to the positive 1–homogeneity of equation (<ref>), that the function (Ψ∘ u)(t,x) is a bounded, upper semicontinuous subsolution to (<ref>). By applying Theorem <ref> we derive that (Ψ∘ u)⩽ 0=Ψ(0) in . The assertion follows by the strict monotonicity of Ψ. §.§ Existence results and Lipschitz estimates for solutions Throughout this subsection, we will assume that the (deterministic) Hamiltonian H satisfying the following assumptions for some constants β_1,β_2,β_3>0: (H1') |H(x,p,ω)|⩽β_1(1+|p|) for all (x, p) ∈; (H2') |H(x,p,ω) - H(x,q,ω)| ⩽β_2 |p - q| for all x, p, q∈; (H3') |H(x,p,ω) - H(y,p,ω)| ⩽β_3 |x - y| for all x, y, p∈. We begin with the following existence and uniqueness result, where the uniqueness part is guaranteed by Theorem <ref>. Let g ∈(). For every T>0, the following problem HJP{ ∂_t u + H(x, Du) = 0 in u(0, ·) = g(·) on ^d. has a unique viscosity solution in (). Furthermore, this solution belongs to (). The case g∈() is proved in <cit.>. Let us then assume g∈(). Pick ψ∈^1,1()∩() such that ψ-g_∞⩽ 1. 
In view of the previous step, the Cauchy problem (<ref>) with H̃(x,p) H(x,Dψ(x)+p) and g-ψ in place of H and g, respectively, admits a unique solution ũ(t,x) in (). Furthermore, ũ(t,x) in (). We derive that u(t,x) ũ(t,x)+ψ(x) belongs to () and is a solution of the original Cauchly problem (<ref>). We proceed to show suitable Lipschitz bounds for the solution of (<ref>) when the initial datum is Lipschitz. Let g∈() and let u be the unique continuous function in which solves the Cauchy problem (<ref>) for every T>0. Then u is Lipchitz in for every T>0. More precisely D_x u_L^∞()⩽(β_3 t+(g)), ∂_t u_L^∞()⩽β_1 (1+Dg_∞). (i) Let us fix ∈ and set v_(t,x) u(t,x+)-β_3|| t, w_(t,x) u(t,x+)+β_3|| t for every (t,x)∈. By exploiting assumption (H2'), it is easily seen that v_h and w_h are, respectively, a viscosity sub and supersolution to (<ref>). Indeed, the following inequalities hold in the viscosity sense: ∂_t v_h+H(x,D v_h) = ∂_t u(t,x+h)-β_3|h|+H(x,D u(t,x+h)) ⩽ ∂_t u(t,x+h)+H(x+h,D u(t,x+h)) ⩽ 0 in , thus showing that v_h is a subsolution of (<ref>). The assertion for w_h can be proved analogously. By the Comparison Principle, see Theorem <ref>, we infer that, for all (t,x)∈, u(t,x)-v_h(t,x) ⩽ u(0,x)-v_h(0,x)=g(x)-g(x+h)⩽(g)|h| w_h(t,x)-u(t,x) ⩽ w_h(0,x)-u(0,x)=g(x+h)-g(x)⩽(g)|h|, namely |u(t,x+h)-u(t,x)|⩽(β_3 t+(g)) |h| for all (t,x)∈, thus showing the first assertion. (ii) Let us first assume that g∈()∩^1(). By assumption (H1') we have |H(x,Dg(x))| ⩽β_1 (1+ |Dg(x)|) ⩽β_1 (1+ Dg_∞) for all x∈. For notational simplicity, let us denote by M the most right-hand side term in the above inequality. It is easily seen that the functions u(t,x) g(x)-Mt, u(t,x) g(x)+Mt, (t,x)∈, are, respectively, a sub and a supersolution of (<ref>) for every T>0. By the Comparison Principle, see Theorem <ref>, we infer that u(t,x) ⩽ u(t,x) ⩽u(t,x) for all (t,x)∈, i.e., u(t,·)-g_∞⩽ Mt for all t⩾ 0. By applying the Comparison Principle again we get u(t+h,·)-u(t,·)_∞⩽u(h,·)-u(0,·)_∞⩽ Mh = β_1 (1+ Dg_∞) h for all t,h⩾ 0, meaning that u is β_1 (1+ Dg_∞)–Lipschitz in t. The case g∈() can be obtained by approximation. Let us denote by g_n the mollification of g via a standard convolution kernel and by u_n the solution to the Cauchy problem (<ref>) with g_n in place of g. Since Dg_n_∞⩽Dg_∞ for every n∈, we deduce from what proved above that the functions u_n are β_1 (1+ Dg_∞)–Lipschitz in t and (β_3 t+(g))–Lipschitz in x, for every n∈. By the Ascoli–Arzelà Theorem and the stability of the notion of viscosity solution, we infer that the functions (u_n)_n are converging, locally uniformly in , to the solution u of the Cauchy problem (<ref>) with initial datum g. We derive that u satisfies (<ref>) as well, as it was to be shown. §.§ Differential Games Throughout this subsection, we will work with a deterministic Hamiltonian H of the form H(x,p) max_b ∈ Bmin_a ∈ A{ -ℓ(x, a, b) - ⟨ f(a,b), p ⟩} for all (x,p) ∈, where A, B are compact subsets of ^m, for some integer m, f A× B →^d is continuous vector field, and the running cost ℓ^d × A × B → is a bounded and continuous function satisfying assumption (ℓ_2) appearing in Section <ref>. We shall denote by ℓ_∞ the L^∞–norm of ℓ on ^d × A × B. The Hamiltonian H clearly satisfies the properties (H1)–(H3) listed in Section <ref>. Next, we provide a formula borrowed from Differential Game Theory to represent such solutions, see <cit.>. 
To this aim, we first observe that u(t,x) is a viscosity solution of the Cauchy problem (<ref>) if and only ǔ (t,x) -u(T-t,x) is a viscosity solution of equation (<ref>) with Ȟ(x,p) H(x,-p) in place of H in the sense adopted in <cit.>, see items (a)-(b) at the end of page 774, and satisfying the terminal condition ǔ(T,x)=-g(x), cf. <cit.>. Let us denote by Å(T) { a [0,T] → A : a measurable}, (T) { b [0,T] → B : b measurable}. The sets A and B are to be regarded as action sets for Player 1 and 2, respectively. A nonanticipating strategy for Player 1 is a function α(T) →Å(T) such that, for all b_1, b_2 ∈(T) and τ∈ [0, T], b_1(·) = b_2(·) in [0,τ] ⇒ α[b_1] (·) = α[b_2](·) in [0,τ] . We will denote by α∈Γ(T) the family of such nonanticipating strategies for Player 1. For every (t,x)∈, let us set v(t,x) sup_α∈Γ(t)inf_b ∈(t){∫_0^t ℓ(y_x(s),α[b](s),b(s)) ds + g(y_x(t)) }, where y_x [0,t] → is the solution of the ODE ODEẏ_x(s) = f(α[b](s),b(s)) in [0,t] y_x(0) = x. The function v defined by (<ref>) is usually called value function. (i) Let g∈ W^1,∞(). Then v∈ W^1,∞() for every fixed T>0. More precisely, for every x, x̂∈ and t, t̂∈ (0, T), we have |v(t,x)| ⩽ T ℓ_∞ + g_∞ , |v(t, x) - v(t̂, x̂)| ⩽( β t+|θ| + (g) ) |x - x̂| + ( ℓ_∞ + f _∞(g) ) |t - t̂| ; (ii) Let g∈(). Then v∈() for every fixed T>0. Item (i) follows directly from <cit.> after observing that v(t,x)=-V(T-t,x), where V is the function given by (<ref>) with -ℓ and -g in place of ℓ and g, respectively. Let us prove (ii). Let (g_n)_n be a sequence of functions in W^1,∞() such that g_n-g_L^∞()→ 0 as n→ +∞. For every fixed (t,x)∈, any fixed strategy α∈Γ(t) and every control b∈(t), let us set J[α,b,g](t,x) ∫_0^t ℓ(y_x(s),α[b](s),b(s)) ds+ g(y_x(t)), where y_x [0,t]→ is the solution of (<ref>). We have J[α,b,g_n](t,x)-g_n-g_∞⩽ J[α,b,g](t,x) ⩽ J[α,b,g_n](t,x)+g_n-g_∞. By taking the infimum with respect to b∈(t) and then the supremum with respect to α∈Γ(t), we infer sup_α∈Γ(t)inf_b∈(t) J[α,b̂,g_n](t,x)-g_n-g_∞ ⩽ sup_α∈Γ(t)inf_b∈(t) J[α,b̂,g](t,x) ⩽ sup_α∈Γ(t)inf_b∈(t) J[α,b,g_n](t,x)+g_n-g_∞. This means that |v_n(t,x)-v(t,x)|⩽g_n-g_∞ for all (t,x)∈, where v_n and v denote the value function associated to g_n and g, respectively. As a uniform limit of a sequence of equi–bounded Lipschitz functions, we conclude that v belongs to (). Via the same argument used in the proof of Proposition <ref>-(i), we derive from <cit.> the following fact, known as Dynamic Programming Principle. Let x ∈ and 0 < τ < T be fixed. Then, v(T, x) = sup_α∈Γ(T - τ)inf_b∈(T - τ){∫_0^T - τℓ(y_x(s), α[b](s), b(s)) ds + v(τ, y_x(T - τ)) } . The following holds. Let g ∈(). For every fixed T>0, the unique continuous solution of (<ref>) is given by (<ref>). When g is additionally assumed Lipschitz continuous, the assertion follows directly from <cit.> via the same argument used in the proof of Proposition <ref>-(i) and in view of what remarked at the beginning of this subsection. Let us assume g∈() and pick a sequence of functions g_n∈ W^1,∞() such that g_n-g_L^∞()→ 0 as n→ +∞. Let us denote by v and v_n the value functions associated via (<ref>) to g and g_n, respectively. Arguing as in the proof of Proposition <ref>-(ii), we derive |v_n(t,x)-v(t,x)|⩽g_n-g_∞ for all (t,x)∈. From the previous step we know that v_n solves (<ref>) with initial datum g_n. By the stability of the notion of viscosity solution, we conclude that v solves (<ref>). We now extend the previous result to the case of initial data that are not necessarily bounded. The result is the following. Let g ∈(). 
For every fixed T>0, the unique viscosity solution in () of the Cauchy problem (<ref>) is given by the representation formula (<ref>). Let us pick ψ∈^1,1()∩() such that ψ-g_∞⩽ 1. In view of Theorem <ref>, the unique solution ũ(t,x) in () of the Cauchy problem (<ref>) with H̃(x,p) H(x,Dψ(x)+p) and g-ψ in place of H and g, respectively, is given by the formula (<ref>) with ℓ(y_x(s),α[b](s),b(s))+⟨ f(α[b](s),b(s)),Dψ(y_x(s))⟩ and (g-ψ)(y_x(t) in place of g(y_x(t)). For every fixed strategy α∈Γ(t) and every control b∈(t), we have ∫_0^t ( ℓ(y_x(s),α[b](s),b(s)) + ⟨ f(α[b](s),b(s)),Dψ(y_x(s))⟩) ds = ∫_0^t ( ℓ(y_x(s),α[b](s),b(s)) + dd sψ(y_x(s)) ) ds = ∫_0^t ℓ(y_x(s),α[b](s),b(s)) ds + ψ(y_x(t))-ψ(x). We infer ũ(t,x) = sup_α∈Γ(t)inf_b ∈(t){∫_0^t ( ℓ(y_x(s),α[b](s),b(s)) + ⟨ f(α[b](s),b(s)),Dψ(y_x(s))⟩) ds + (g-ψ)(y_x(t)) } = sup_α∈Γ(t)inf_b ∈(t){∫_0^t ℓ(y_x(s),α[b](s),b(s)) ds + g(y_x(t)) }-ψ(x). The conclusion readily follows after observing that the function u(t,x) ũ(t,x)+ψ(x) is the continuous solution of the original Cauchly problem (<ref>). § PROOF OF THEOREM <REF> In this section we prove Theorem <ref>. The result is of deterministic nature, with ω treated as a fixed parameter. We will therefore omit it from our notation. We will need the following fact. Let us assume that all the hypotheses of Theorem <ref> are in force. Let g∈UC(^d) and, for every >0, let us denote by u^ the unique function in () that solves (<ref>) subject to the initial condition u^(0,·)=g. Set u^*(t,x) lim_r→ 0 sup{ u^(s,y) : (s,y)∈ (t-r,t+r)× B_r(x), 0<<r }, u_*(t,x) lim_r→ 0 inf{ u^(s,y) : (s,y)∈ (t-r,t+r)× B_r(x), 0<<r }. Let us assume that u^* and u_* are finite valued. Then (i) u^*∈USC() and it is a viscosity subsolution of (<ref>); (ii) u_*∈LSC() and it is a viscosity supersolution of (<ref>). We postpone the proof of Proposition <ref> and we first show how we employ it to prove Theorem <ref>. Let us prove that H satisfies condition (H1). To this aim, we remark that the functions u^+, u^- defined as u^±(t,x) ⟨θ, x⟩±β(1+ |θ|)t, (t,x)∈, are, respectively, a continuous super and subsolution of (<ref>) satisfying u^±(0,x)=⟨θ,x⟩, for every >0 in view of assumption (H1). By Theorem <ref> we infer u^-⩽ũ^_θ⩽ u^+ in for every >0. Hence |H(θ)| = lim_→ 0^+ |ũ^_θ(1,0)| ⩽β(1+ |θ|), as it was to be shown. Let us now show that H satisfies condition (H2). Fix θ_1,θ_2∈ and set u_θ_2^,±(t,x) ũ^_θ_2(t,x) +⟨θ_1-θ_2,x⟩±β |θ_2-θ_1|t, (t,x)∈. The function u^,+_θ_2 and u^,-_θ_2 are, respectively, a super and a subsolution of (<ref>), in view of assumption (H2), which satisfy u^,±_θ_2(0,x)=⟨θ_1,x⟩. By Theorem <ref> we derive that u^,-_θ_2⩽ũ^_θ_1⩽ u^,+_θ_2 in , hence |H(θ_1)-H(θ_2)| = lim_→ 0^+ |ũ^_θ_1(1,0)-ũ^_θ_2(1,0)| ⩽β |θ_2-θ_1|. We now proceed to prove the second part of the assertion. Let us first assume g∈C^1(^d)∩(^d). Take a constant M large enough so that M> H(x,D g(x))_∞. Then the functions u_-(t,x) g(x)-Mt and u_+(t,x) g(x)+Mt are, respectively, a Lipschitz continuous sub and supersolution of (<ref>) for every >0. By the Comparison Principle stated in Theorem <ref>, we get u_-⩽ u^⩽ u_+ in for every >0. By the definition of relaxed semilimits we infer u_-(t,x)⩽ u_*(t,x)⩽ u^*(t,x)⩽ u_+(t,x) for all (t,x)∈, in particular, u_*, u^* satisfy u_*(0,·)=u^*(0,·)=g on ^d. By Proposition <ref>, we know that u^* and u_* are, respectively, an upper semicontinuous subsolution and a lower semicontinuous supersolution of the effective equation (<ref>). We can therefore apply Theorem <ref> again to obtain u^*⩽ u_* on . 
Since the opposite inequality holds by the definition of upper and lower relaxed semilimits, we conclude that the function u(t,x) u_*(t,x)=u^*(t,x) for all (t,x)∈ is the unique continuous viscosity solution of (<ref>) such that u(0,·)=g on ^d. Furthermore, by Theorem <ref>, we also know that u belongs to (). The fact that the relaxed semilimits coincide implies that u^ converge locally uniformly in to u, see for instance <cit.>. When the initial datum g is just uniformly continuous on , the result easily follows from the above by approximating g with a sequence (g_n)_n of initial data belonging to C^1(^d)∩(^d). Indeed, if we denote by u^_n and u^ the solution of (<ref>) with initial datum g_n and g, respectively, we have, in view of Theorem <ref>, u^_n-u^_L^∞()⩽g_n-g_L^∞() for every >0. The assertion follows from this by the stability of the notion of viscosity solution. The fact that u^* and u_* are upper and lower semicontinuous on is an immediate consequence of their definition. Let us prove (i), i.e., that u^* is a subsolution of (<ref>). The proof of (ii) is analogous. We make use of Evans's perturbed test function method, see <cit.>. Let us assume, by contradiction, that u^* is not a subsolution of (<ref>). Then there exists a function ϕ∈C^1() that is a strict supertangent of u^* at some point (t_0,x_0)∈ and for which the subsolution test fails, i.e., ∂_tϕ(t_0,x_0)+H(D_xϕ(t_0,x_0))>2δ for some δ>0. For r>0 define V_r (t_0-r,t_0+r)× B_r(x_0). Choose r_0>0 to be small enough so that V_r_0 is compactly contained in and u^*-ϕ attains a strict local maximum at (t_0,x_0) in V_r_0. In particular, we have for every r ∈ (0, r_0) max_∂ V_r (u^*-ϕ)<max_V_r (u^*-ϕ)=(u^*-ϕ)(t_0,x_0). Let us set θ D_xϕ(t_0,x_0) and for every >0 denote by ũ^_θ the unique continuous function in that solves (<ref>) subject to the initial condition ũ^_θ(0,x)=⟨θ,x⟩. We claim that there is an r∈(0,r_0) such that the function ϕ^(t,x) ϕ(t,x)+ũ_θ^(t,x)-(⟨θ,x⟩-tH(θ)) is a supersolution of (<ref>) in V_r for every >0 small enough. Indeed, by a direct computation we first get ∂_tϕ^+H(x/,D_xϕ^) = ∂_tϕ+H(θ) + ∂_t ũ^_θ+H(x/,D_xũ^_θ+D_xϕ-θ) in the viscosity sense in V_r. Using (<ref>), the continuity of H and the fact that ϕ is of class C^1, we get that there is an r∈ (0,r_0) such that for all sufficiently small > 0 and all (t,x)∈ V_r ∂_tϕ(t, x) + H(θ) > 2 δ . Moreover, by taking into account (H2), we can further reduce r if necessary to get H(x/,Dũ^_θ+D_xϕ-θ) > H(x/,Dũ^_θ)-δ in V_r in the viscosity sense. Plugging these relations into (<ref>) and using the fact that ũ^_θ is a solution of (<ref>), we finally get ∂_tϕ^+H(x/,D_xϕ^) > δ+∂_t ũ^_θ+H(x/,Dũ^_θ) = δ > 0 in the viscosity sense in V_r, thus showing that ϕ^ is a supersolution of (<ref>) in V_r. Now we need a comparison principle for equation (<ref>) in V_r applied to ϕ^ and u^ to infer that sup_ V_r (u^-ϕ^)⩽max_∂ V_r (u^-ϕ^). Since condition (H2) is in force, the validity of this comparison principle is guaranteed by <cit.>. Now notice that, by the assumption (<ref>), ϕ^ϕ in V_r. Taking the limsup of the above inequality as → 0^+ we obtain sup_V_r (u^*-ϕ) ⩽lim sup_→ 0^+sup_ V_r (u^-ϕ^) ⩽lim sup_→ 0^+max_∂ V_r (u^-ϕ^) ⩽max_∂ V_r (u^*-ϕ), in contradiction with (<ref>). This proves that u^* is a subsolution of (<ref>). siam
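The convergence established above can also be observed numerically. The following sketch is only an illustration and is not part of the argument: it discretizes the oscillatory problem ∂_t u^ε + H(x/ε, Du^ε) = 0 on the one-dimensional torus with a monotone Lax-Friedrichs scheme, using the model Hamiltonian H(y,p) = |p| + cos(2π y), the initial datum g(x) = cos(2π x) and the grid parameters as assumptions made purely for this demonstration, and checks that the computed solutions u^ε stabilize as ε decreases, in line with the locally uniform convergence asserted by the theorem proved in this section.

```python
# Illustrative check of homogenization: solve u_t + H(x/eps, u_x) = 0 on the
# 1-D torus with a monotone Lax-Friedrichs scheme and watch the solutions
# settle down as eps -> 0.  The Hamiltonian H(y,p) = |p| + cos(2*pi*y), the
# initial datum g(x) = cos(2*pi*x) and all discretization parameters are
# illustrative assumptions, not data taken from the paper.
import numpy as np

def H(y, p):
    return np.abs(p) + np.cos(2 * np.pi * y)

def solve_hj(eps, T=0.5, pts_per_period=40):
    """Lax-Friedrichs scheme for u_t + H(x/eps, u_x) = 0, periodic in x."""
    dx = eps / pts_per_period            # resolve the fast oscillation x/eps
    n = int(round(1.0 / dx))
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.cos(2 * np.pi * x)            # Lipschitz, bounded initial datum g
    alpha = 1.0                          # bound on |dH/dp|, artificial viscosity
    dt = 0.5 * dx / alpha                # CFL condition ensuring monotonicity
    for _ in range(int(round(T / dt))):
        up, um = np.roll(u, -1), np.roll(u, 1)
        grad = (up - um) / (2 * dx)      # centered gradient
        u = u - dt * H(x / eps, grad) + 0.5 * alpha * dt / dx * (up - 2 * u + um)
    return x, u

x_ref = np.linspace(0.0, 1.0, 200, endpoint=False)
prev = None
for eps in [0.2, 0.1, 0.05, 0.025]:
    x, u = solve_hj(eps)
    u_ref = np.interp(x_ref, x, u, period=1.0)
    if prev is not None:
        print(f"eps={eps:6.3f}  max|u^eps - u^(2 eps)| = {np.max(np.abs(u_ref - prev)):.4f}")
    prev = u_ref
```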
http://arxiv.org/abs/2406.17922v1
20240625200940
On induced L-infinity action of diffeomorphisms on Cochains
[ "Andrey Losev", "Dmitrii Sheptunov", "Xin Geng" ]
math-ph
[ "math-ph", "math.MP" ]
§ ABSTRACT One of the approaches to quantum gravity is to formulate it in terms of the De Rham algebra, choose a triangulation of space-time, and replace differential forms by cochains (which form a finite-dimensional vector space). The key issue of general relativity is the action of diffeomorphisms of space-time on fields. In this paper, we induce the action of diffeomorphisms on cochains by homotopy transfer (or, equivalently, a BV integral), which leads to an L_∞ action. We explicitly compute this action for the space-time being an interval, a circle, and a square. § INTRODUCTION The quantum theory of gravity is one of the main goals of theoretical physics. The main problem is the ultraviolet divergences that are present due to the infinite dimension of the space of differential forms on a smooth manifold. Therefore, to quantize gravity in the spirit of the Feynman integral, we should modify the space of differential forms. In (<cit.>) we proposed replacing the differential forms by an A-infinity algebra, a structure that was called Feynman geometry. A particular class of Feynman geometries may be obtained by the following construction. Consider a triangulation of a manifold. Decompose the space of differential forms, as a complex, into a direct sum of an infinite-dimensional complex of differential forms that have zero integrals against all chains of the triangulation, and a finite-dimensional complex of cochains dual to the chains of the triangulation. The latter would be a replacement for the algebra of differential forms. In the works of Mnev (<cit.>) and Sullivan (<cit.>) it was shown how the wedge product on differential forms leads to higher products on cochains through the procedure known as homotopic transfer, which in mathematical physics is just a BV integral against a Lagrangian submanifold determined by the homotopy. Since we are planning to study gravity, the important issue is general covariance. In particular, we should know how diffeomorphisms act on cochains. The hint is that diffeomorphisms act on differential forms by the Lie derivative. Thus, in this paper, we apply the BV integral (homotopic transfer) to obtain the action of vector fields on a manifold on cochains. It is not surprising that we get not a representation of the Lie algebra but an L-infinity representation. This construction, in the context of transferring the action of supersymmetry to cohomology, was carried out earlier in (<cit.>). We believe that explicit formulas for such a representation would be an important stepping stone in the understanding of quantum gravity in this approach. The paper is organized as follows. 
In section 2 we review relation between L_∞ representations and solutions to classical master equations (CME) of the special form. We also recall in this section that BV integral leads to induction of L_∞ representation (also known as homotopic transfer). These results are not new, but we present them for completeness. In section 3 we proceed to explicit calculation of effective BV-action. To begin with, we consider an interval as a manifold. Further, in sections 4 and 5, explicit formulas for the action in the case of a circle and a square are obtained. § BV FORMALISM, L_∞ MODULE STRUCTURE, BV INTEGRAL AND HOMOTOPIC TRANSFER §.§ How to understand L_∞ module The concept of L_∞ module (<cit.>) could be easily understood by analogy with the concept of a module over a Lie algebra. The latter may be explained as follows. Take a Lie algebra L, a vector space M (considered as an abelian Lie algebra) and study L⊕ M. Now deform the Lie algebra structure on L ⊕ M considered as a vector space keeping the structure constants of L . This would lead to structure constants T : L ⊗ M → M and condition that T gives a structure of the Lie algebra module is equivalent to Jacobi identity for the deformed Lie algebra. If we apply the same procedure to L_∞ algebra, namely, extend it by a complex M and deform, we will get a set of maps T_p : ⋀ ^p L → End(M) subject to quadratic relations that follow from the L_∞ relations. §.§ BV formalism A convenient way to write down these relations and to operate with them is provided by BV formalism (see <cit.>). In this formalism we study a supermanifold equipped with the odd simplectic structure and a function S_BV on it, that is called BV action. S_BV should satisfy quantum master equation, but in this work we will restrict ourselves by its classical limit, that we will write explicitly below. For our purpose it is enough to consider as a supermanifold T^*[1]W, where W is a supervector space and simplectic structure is a canonical one. In BV language linear coordinates on W are called fields ( and we will denote them as η^a) and dual linear coordinates on the fiber are called antifields and we will denote them as η_a^*. The classical master equation (CME) takes the form ∑_a=1^dim V (-1)^|η^a|( ∂ S_BV/∂η^a∂ S_BV/∂η_a^* + ∂ S_BV/∂η_a^*∂ S_BV/∂η^a) = 0 where |η^a| is parity of η^a. Using BV formalism is useful because one can reformulate the concept of L_∞ algebra and the concept of L_∞ module as a particular solutions to CME. Moreover, the BV formalism contains the procedure of BV integral allowing to get new solutions of CME from the old ones. Namely, if W=W_1⊕ W_2, and ℒ is the Lagrangian submanifold in T^*[1]W_1, then the integral ∫_ℒexp( S_BV/ħ)= exp( S_BV^ind(η_2,η_2^*,ħ)/ħ) produces the induced BV action S_BV^ind(ħ) that satisfies quantum master equation. For our purpose it is enough to consider the value of S^ind at ħ=0 that gives a solution to CME on the space T^*[1]W_2. Note, that S_BV^ind(η_2, η_2^*,0)=extr_ℒ S_BV (η, η^*) §.§ L_∞ algebra and L_∞ module from the BV point of view Consider the Lie algebra L with a basis { t_a } and structure constants f_ab^e: [t_a, t_b]=f_ab^d t_d Consider L[1], it means that L[1] is an odd space with odd coordinates c^a. Correspondingly, dual coordinates on the fiber T^*[1] (L[1]) are even and according to our conventions we denote them as c_a^* It is easy to show that CME for the action S_L=f_ab^d c^a c^b c_d^* is equivalent to Jacobi identity for the structure constants f_ab^d. 
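For a concrete check, note that expanding the odd derivatives in the CME for S_L reduces it to the component form of the Jacobi identity, ∑_e (f_ab^e f_ec^d + f_bc^e f_ea^d + f_ca^e f_eb^d) = 0. The short script below verifies this identity numerically for the structure constants of so(3), f_ab^d = ε_abd; the choice of so(3), like the use of numpy, is an illustrative assumption and not an algebra appearing later in the paper.

```python
# The CME for S_L = (1/2) f_ab^d c^a c^b c*_d reduces, after expanding the
# odd derivatives, to the component Jacobi identity
#   sum_e ( f_ab^e f_ec^d + f_bc^e f_ea^d + f_ca^e f_eb^d ) = 0.
# Here we verify that identity numerically for the structure constants of
# so(3), f_ab^d = epsilon_abd (an illustrative choice, not used in the text).
import itertools
import numpy as np

def levi_civita(n=3):
    eps = np.zeros((n, n, n))
    for perm in itertools.permutations(range(n)):
        sign = np.linalg.det(np.eye(n)[list(perm)])   # +1 or -1 for the permutation
        eps[perm] = sign
    return eps

f = levi_civita()          # f[a, b, d] = f_ab^d

# jac[a, b, c, d] = sum_e f_ab^e f_ec^d + f_bc^e f_ea^d + f_ca^e f_eb^d
jac = (np.einsum('abe,ecd->abcd', f, f)
       + np.einsum('bce,ead->abcd', f, f)
       + np.einsum('cae,ebd->abcd', f, f))
print("max |Jacobi violation| =", np.abs(jac).max())   # ~ 0 up to float rounding
```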
Consider an (even) vector space M with the basis e_i and linear coordinates φ^i. The fiber T^*[1]M has odd dual coordinates φ_i^*. It is equally easy to check that M is a module over Lie algebra iff the map L → End(M), t_a ↦ T_aj^i is such that S_M= c^a T_aj^iφ_i^* φ^j + 1/2 f_ab^d c^a c^b c_d^* solves CME. Note, that the latter case may be considered as a particular case of the former one for the superalgebra obtained by extension of L by the odd module M. It is instructive to notice that the L_∞ algebra structure, i.e. the collection of maps ⋀ L → L, t_a_1∧…∧ t_a_p↦ f_a_1 … a_p^b t_b satisfying quadratic relations can be packed into a solution to CME for S_L_∞= ∑_p=1^n f_a_1 … a_p^b c^a_1… c^a_p c_b^* Now we are ready to explain the concept of L_∞ module in BV formalism. Let us replace a supervector space M by a complex with the differential Q and introduce the matrix Q_i^j as follows: Q(e_i)=Q_i^je_j, Q^2=0 Define a map ⋀ L → End(M), t_a_1∧…∧ t_a_p↦ (T_a_1… a_p)_i^j Then one can show that this map gives a stucture of L_∞ module iff the follwing action S_M_∞ solves CME: S_M_∞=φ^*_i Q^i_j φ^j + ∑_p=1^n1/p!c^a_1⋯ c^a_p(T_a_1⋯ a_p)^i_j φ^*_i φ^j+1/2f_ab^d c^ac^b c^*_d Like in the case of ordinary module over a Lie algebra, the structure of L_∞ module may be considered as a special case of L_∞ algebra on the space L ⊕ M. §.§ Induction of the L_∞ module by contraction of the acyclic subcomplex in the BV formalism Kadeishvili-like theorems (<cit.>) state that by contraction of the acyclic subcomplex we can pass from the Lie algebra to L_∞ algebra. Here we will describe how in works in the particular case of L modules and L_∞ modules. Let us apply the BV induction procedure for the Lie algebra acting on a complex M with the action commuting with the differential Q in the complex: [T_a, Q]=0 Let M be a direct sum of two subcomplexes: M=M_Z⊕ M_A, such that M_A is acyclic, and h is the contracting homotopy: hQ+Qh=Proj_M_A, h^2=0 , h M_Z=0 Take W_1=M_A, W_2=L[1] ⊕ M_Z and consider linear Lagrangian submanifold given by equations h w=0 ; h^* w^*=0 where h^* is a conjugated homotopy operator on the dual space. Since S_M is quadratic polynomial in w_A ∈ M_A and w_A^* ∈ M_A^*[1], it is easy to compute its critical value on the Lagrangian submanifold. It is given by S^ind= <w_Z^* , Q w_Z> + 1/2 f_ab^d c^a c^b c_d^* + ∑_k=1^∞ c^a_1… c^a_k <w_Z^* , T_a_1 h … h T_a_k w_Z> where w_Z ∈ M_Z and w_Z^* ∈ M_Z^*[1]. If we introduce an universal action operator T_c=c^a T_a the formula above takes the simple form S^ind= <w_Z^* , Q w_Z> + 1/2 f_ab^d c^a c^b c_d^* + ∑_k=1^∞ <w_Z^* , T_c h … h T_c_k times w_Z> Thus we have an induced structure of L_∞ module also known as homotopy transfer, see for example (<cit.>). §.§ Application to the action of diffeomorphisms on cochains In our approach to discretized gravity, we take as a module M the space Ω_X of differential forms on a manifold X, as a differential - De Rham operator d on these forms, as a Lie algebra - the algebra of diffeomorphisms on X. Now we proceed to discretization. We consider a triangulation of X, and take as M_A the subspace Ω_A of differential forms ω having zero integrals against all the chains z of the triangulation: ∫_z ω=0 As a second summand we take differential forms Ω_Z dual to chains of the triangulation. Such choice is not unique. Whitney considered polynomial forms. However, in this paper we allow more general choices,restricted by condition that d Ω_Z ∈Ω_Z Applying the construction described above, we get L_∞ module structure on Ω_Z. 
In the following sections we will study explicit examples. § INTERVAL In this section we consider the manifold X = [0, 1] with the coordinate t, 0 ≤ t ≤ 1. The corresponding Lie algebra L = Vect([0, 1]) is spanned by v_k = t^k d/dt, k ∈ℤ, k ≥ 0 The structure constants can be written in this basis as [v_k, v_l] = (l - k) t^k + l - 1d/dt = (l - k) v_k+l-1 f_kl^m = (l - k) δ_k+l-1^m We will also use the notation like in (<ref>) v_c = ∑_k c^k v_k = ∑_k c^k t^k d/dt = ν_c(t) d/dt The action of v_c ∈ L[1] on Ω_X is given by the Lie derivative which we denote ℒ_v_c. So the action (<ref>) takes the form S = ⟨φ^*, dφ⟩ + (-1)^|φ^*|⟨φ^*, ℒ_v_cφ⟩ + 1/2 f_ab^d c^a c^b c_d^* We construct the triangulation of [0, 1] as follows. Choose n + 1 points t_0 < t_1 <...< t_n such that t_0 = 0, t_n = 1. Using this we set the space of chains Z = Span ⟨α^0 = [t_0],..., α^n = [t_n], β^0 = [t_0, t_1],..., β^n-1 = [t_n-1, t_n] ⟩ Further consider n functions α_1,..., α_n with the condition α_j(t_i) = δ_j^i and define α_0 = 1 - ∑_k=1^n α_k. 1-forms are introduced as β_j = d ( ∑_k=j+1^n α_k ), 0 ≤ j ≤ n-1 Note that: ∫_t_i^t_i+1β_j = ∫_t_i^t_i+1 d ( ∑_k=j+1^n α_k ) = ∑_k=j+1^n α_k(t_i+1) - α_k(t_i) = ∑_k=j+1^n δ_k^i+1 - δ_k^i = δ_j^i Hence we get ⟨α^i, α_j ⟩ = ⟨β^i, β_j ⟩ = δ_j^i, ⟨α^i, β_j ⟩ = ⟨β^i, α_j ⟩ = 0, where ⟨ , ⟩ is the pairing between chains and differential forms given by integration, and the space of forms dual to chains can be written as: Ω_Z = Span(α_0,..., α_n, β_0,...,β_n-1) Therefore, fields and antifields are φ_Z = ∑_i=0^n α_i φ^i + ∑_j=0^n-1β_j ψ^j φ_Z^* = ∑_i=0^n φ_i^* α^i + ∑_j=0^n-1ψ_j^* β^j To construct the L_∞ module structure on Ω_Z, we need to provide a homotopy h satisfying (<ref>). We write this condition in the form dh + hd = id - P_Z, where the projector P_Z: Ω_X →Ω_Z is defined as P_Z(g(t)) = ∑_j g(t_j) α_j for a 0-form g(t) and P_Z(f(t) dt) = ∑_j ( ∫_t_j^t_j+1 f(τ) dτ) β_j for a 1-form f(t)dt. Let us set now h(f(t) dt) = ∫_0^t f(τ) dτ - ∑_j ( ∫_t_j^t_j+1 f(τ) dτ) ∫_0^t β_j and check the relation dh + hd = id - P_Z for 0-forms: (dh + hd)g(t) = h(g'(t)dt) = ∫_0^t g'(τ) dτ - ∑_j ( ∫_t_j^t_j+1 g'(τ) dτ) ∫_0^t β_j = = g(t) - g(0) - ∑_j ( g(t_j+1) - g(t_j) ) ∫_0^t d ( ∑_k=j+1^n α_k ) = = g(t) - g(0) - ∑_j ( g(t_j+1) - g(t_j) ) ∑_k=j+1^n α_k(t) = = g(t) - g(0) - ∑_k=1^n α_k(t) ∑_j=0^k-1( g(t_j+1) - g(t_j) ) = = g(t) - g(0) - ∑_k=1^n α_k(t) ( g(t_k) - g(t_0) ) = = g(t) - g(t_0) (1 - ∑_k=1^n α_k) - ∑_k=1^n g(t_k) α_k = (id - P_Z) g(t) and for 1-forms: (dh + hd) f(t)dt = d [ ∫_0^t f(τ) dτ - ∑_j ( ∫_t_j^t_j+1 f(τ) dτ) ∫_0^t β_j ] = = f(t)dt - ∑_j ( ∫_t_j^t_j+1 f(τ) dτ) β_j = (id - P_Z) f(t)dt Finally, the induced action (<ref>) becomes S^ind = ⟨φ^*_Z, dφ_Z ⟩ + (-1)^|φ_Z^*|⟨φ^*_Z, ℒ_v_cφ_Z ⟩ + ⟨φ^*_Z, ℒ_v_c h ℒ_v_cφ_Z ⟩ + 1/2 f_ab^d c^a c^b c_d^* All the terms containing more than one operator h vanish since h lowers form's degree by 1. 
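Since the relation dh + hd = id - P_Z is central to the construction, it may be useful to check it concretely for a specific choice of the α_j. The snippet below verifies the 0-form identity numerically for the simplest admissible choice, piecewise-linear hat functions with α_j(t_i) = δ_j^i; this choice, the particular partition points and the test function g are assumptions made only for the illustration, while the verification given above is completely general.

```python
# Numerical sanity check of dh + hd = id - P_Z on 0-forms for the interval,
# using piecewise-linear "hat" functions with alpha_j(t_i) = delta_ij.
# The hat functions, the partition and the test function g are assumptions
# made for this illustration only.
import numpy as np

t_nodes = np.array([0.0, 0.3, 0.7, 1.0])       # t_0 < ... < t_n,  n = 3
n = len(t_nodes) - 1

def alpha(j, t):
    """Piecewise-linear hat function equal to 1 at t_j and 0 at the other nodes."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    if j > 0:
        m = (t >= t_nodes[j - 1]) & (t <= t_nodes[j])
        out[m] = (t[m] - t_nodes[j - 1]) / (t_nodes[j] - t_nodes[j - 1])
    if j < n:
        m = (t >= t_nodes[j]) & (t <= t_nodes[j + 1])
        out[m] = (t_nodes[j + 1] - t[m]) / (t_nodes[j + 1] - t_nodes[j])
    return out

def g(t):                                       # an arbitrary smooth 0-form
    return np.sin(3 * t) + t ** 2

ts = np.linspace(0.0, 1.0, 501)

# (dh + hd) g = h(g' dt) = g(t) - g(0) - sum_j (g(t_{j+1}) - g(t_j)) sum_{k>j} alpha_k(t)
lhs = g(ts) - g(t_nodes[0]) - sum(
    (g(t_nodes[j + 1]) - g(t_nodes[j])) * sum(alpha(k, ts) for k in range(j + 1, n + 1))
    for j in range(n)
)

# (id - P_Z) g = g(t) - sum_j g(t_j) alpha_j(t)
rhs = g(ts) - sum(g(t_nodes[j]) * alpha(j, ts) for j in range(n + 1))

print("max |(dh+hd)g - (id-P_Z)g| =", np.max(np.abs(lhs - rhs)))   # ~ 1e-16
```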
To compute the induced action we introduce notations β_j^i = ∑_k=j+1^n α'_k(t_i) α^j ν_c = ν_c(t_j) = ν_c^j α^j ν'_c = ν'_c(t_j) = ν'^j_c What remains is to calculate the distinct components of the expression for the induced action ⟨β^j, dα_i ⟩ = ∫_t_j^t_j+1 dα_i = α_i(t_j+1) - α_i(t_j) = δ_i^j+1 - δ_i^j ⟨α^i, ℒ_v_cα_j ⟩ = α^i ( ν_c(t) d α_j/dt) = ν_c(t_i) α'_j(t_i) = ν_c^i (β_j-1^i - β_j^i) ⟨β^i, ℒ_v_cβ_j ⟩ = β^i d ( ν_c(t) ∑_k=j+1^n α'_k ) = ∫_t_i^t_i+1 d ( ν_c(t) ∑_k=j+1^n α'_k ) = ∑_k=j+1^n ν_c(t_i+1) α'_k(t_i+1) - ν_c(t_i) α'_k(t_i) = ν_c^i+1β_j^i+1 - ν_c^i β_j^i ⟨α^i, ℒ_v_c h ℒ_v_cβ_j ⟩ = α^i ℒ_v_c h d ( ν_c(t) ∑_k=j+1^n α'_k ) = α^i ℒ_v_c (id - P_Z) ( ν_c(t) ∑_k=j+1^n α'_k ) = = α^i ℒ_v_c( ν_c(t) ∑_k=j+1^n α'_k - ∑_m ν_c^m β_j^m α_m ) = = ν_c^i ν'^i_c β_j^i - ν_c^i ∑_m ν_c^m β_j^m (β_m-1^i - β_m^i) Thus, the resulting L_∞ module structure is S^ind = ψ_i^* (δ_j^i+1 - δ_j^i) φ^j - φ_i^* ν_c^i (β_j-1^i - β_j^i) φ^j + ψ_i^* (ν_c^i+1β_j^i+1 - ν_c^i β_j^i) ψ^j + + φ_i^* ( ν_c^i ν'^i_c β_j^i - ν_c^i ∑_m ν_c^m β_j^m (β_m-1^i - β_m^i) ) ψ^j + 1/2 f_ab^d c^a c^b c_d^* § CIRCLE In this section we consider the manifold X = S^1 with the coordinate 0 ≤ t < 2π and the Lie algebra L = Vect(S^1). Let v_k = e^i k td/dt, k ∈ℤ be a basis in L = Vect(S^1). Then, the corresponding structure constants are [v_k, v_l] = i (l - k) e^i(m+n)td/dt = i (l - k) v_k+l f_kl^m = i (l - k) δ_k+l^m Let us use the following notation for the subsequent computations v_c = ∑_k c^k v_k = ∑_k c^k e^iktd/dt = ν_c(t) d/dt The action (<ref>) takes the form analogous to the interval case S = ⟨φ^*, dφ⟩ + (-1)^|φ^*|⟨φ^*, ℒ_v_cφ⟩ + 1/2 f_ab^d c^a c^b c_d^* The triangulation of S^1 is constructed as follows. Choose n points t_0=0,..., t_n-1 and 1-forms β_0,..., β_n-1 such that ∫_t_i^t_i+1β_j = δ_j^i. Denote β_j^i := ϑ_j(t_i), where β_j = ϑ_j(t) dt. Define the 0-forms: α_0(t) = 1 + ∫_0^t(β_n-1 - β_0) α_j(t) = ∫_0^t(β_j-1 - β_j), j 0 One can check that α_j(t_i) = δ_j^i. Then, the spaces of chains and cochains are Z = Span ⟨α^0 = [t_0],..., α^n-1 = [t_n-1], β^0 = [t_0, t_1],..., β^n-1 = [t_n-1, t_0] ⟩ Ω^∙_Z = Span ⟨α_0,..., α_n-1, β_0,..., β_n-1⟩ The corresponding fields and antifields are φ_Z^* = ∑_i=0^n-1φ_i^* α^i + ∑_j=0^n-1ψ_j^* β^j φ_Z = ∑_i=0^n-1α_i φ^i + ∑_j=0^n-1β_j ψ^j As mentioned earlier in section 3, it is necessary to provide h satisfying the condition dh + hd = id - P_Z, where the projector P_Z: Ω_X →Ω_Z is defined in this case as P_Z(g(t)) = ∑_j g(t_j) α_j for a 0-form g(t) and P_Z(f(t)dt) = ∑_j ( ∫_t_j^t_j+1 f(τ) dτ) β_j for a 1-form f(t)dt. 
Let us set the homotopy h h(f(t)dt) = ∫_0^t f(τ) dτ - ∑_j ( ∫_t_j^t_j+1 f(τ) dτ) ∫_0^t β_j and check the the relation dh + hd = id - P_Z for 1-forms: (dh + hd)(f(t)dt) = d [ ∫_0^t f(τ) dτ - ∑_j ( ∫_t_j^t_j+1 f(τ) dτ) ∫_0^t β_j ] = = f(t)dt - ∑_j ( ∫_t_j^t_j+1 f(τ) dτ) β_j = (id - P_Z) f(t)dt and for 0-forms: (dh + hd)(g(t)) = ∫_0^t g'(τ) dτ - ∑_j ( ∫_t_j^t_j+1 g'(τ) dτ) ∫_0^t β_j = = g(t) - g(0) - ∑_j (g(t_j+1) - g(t_j)) ∫_0^t β_j = = g(t) - g(0) - g(0) ( ∫_0^t(β_n-1 - β_0) ) - - ∑_j 0 g(t_j) ( ∫_0^t(β_j-1 - β_j) ) = = g(t) - ∑_j g(t_j) α_j = (id - P_Z)g(t) The induced action (<ref>) is analogous to the case of the interval S^ind = ⟨φ^*_Z, dφ_Z ⟩ + (-1)^|φ_Z^*|⟨φ^*_Z, ℒ_v_cφ_Z ⟩ + ⟨φ^*_Z, ℒ_v_c h ℒ_v_cφ_Z ⟩ + 1/2 f_ab^d c^a c^b c_d^* We introduce notations ν_c(t_j) = ν_c^j ν'_c(t_j) = ν'^j_c Finally, let us compute the action dα_j = β_j-1 - β_j ⟨β^i, dα_j ⟩ = δ_j-1^i - δ_j^i ⟨α^i, ℒ_v_cα_j ⟩ = ν_c^i (β_j-1^i - β_j^i) ⟨β^i, ℒ_v_cβ_j ⟩ = ∫_t_i^t_i+1 d ( ν_c(t) ϑ_j(t) ) = ν_c^i+1β_j^i+1 - ν_c^i β_j^i h ℒ_v_cβ_j = ∫_0^t d ( ν_c ϑ_j ) - ∑_m ( ∫_t_m^t_m+1 d ( ν_c ϑ_j ) ) ∫_0^t β_m = = ν_c(t) ϑ_j(t) - ν_c^0 β_j^0 - ∑_m ( ν_c^m+1β_j^m+1 - ν_c^m β_j^m ) ∫_0^t β_m = = ν_c(t) ϑ_j(t) - ∑_m ν_c^m β_j^m α_m(t) ℒ_v_c h ℒ_v_cβ_j = ν_c(t) ∂/∂ t( ν_c(t) ϑ_j(t) - ∑_m ν_c^m β_j^m α_m(t) ) = = ν_c(t) ν'_c(t) ϑ_j(t) + ν_c(t) ν_c(t) ϑ'_j(t) - - ν_c(t) ∑_m ν_c^m β_j^m (ϑ_m-1(t) - ϑ_m(t)) ⟨α^i, ℒ_v_c h ℒ_v_cβ_j ⟩ = ν_c^i ν'^i_c β_j^i - ν_c^i ∑_m ν_c^m β_j^m (β_m-1^i - β_m^i) Thus, we get the L_∞ module structure S^ind = ψ_i^* (δ_j-1^i - δ_j^i) φ^j - φ_i^* ν_c^i (β_j-1^i - β_j^i) φ^j + ψ_i^* (ν_c^i+1β_j^i+1 - ν_c^i β_j^i) ψ^j + + φ_i^* ( ν_c^i ν'^i_c β_j^i - ν_c^i ∑_m ν_c^m β_j^m (β_m-1^i - β_m^i) ) ψ^j + 1/2 f_ab^d c^a c^b c_d^* § SQUARE In this section we consider the manifold X = [0, 1] × [0, 1] with the coordinates x, y. The vector field v_c takes the form v_c(x,y) = ν_c^x(x,y) ∂/∂ x + ν_c^y(x,y) ∂/∂ y, ν_c^x = ∑_n,m c^x,nm x^n y^m, ν_c^y = ∑_n,m c^y,nm x^n y^m The triangulation is given by γ^* = [0, 1] × [0, 1] β^0 = [(0,0), (1,0)], β^1 = [(1,0), (1,1)], β^2 = [(1,1), (0,1)], β^3 = [(0,1), (0,0)] α^0 = (0,0), α^1 = (1,0), α^2 = (1,1), α^3 = (0,1) Z = Span(α^0,..., α^3, β^0,..., β^3, γ^*) Let us define the constructions that willl be used in further calculations: 2-form γ = dx dy; 1-forms: β_0 = ( ∫_y^1 dy) dx = (1 - y) dx β_1 = ( ∫_0^x dx) dy = x dy β_2 = -( ∫_0^y dy) dx = -y dx β_3 = -( ∫_x^1 dx) dy = -(1 - x) dy; and functions: α_0 = 1 + ∫ (β_3 - β_0) = (1 - x)(1 - y) α_1 = ∫ (β_0 - β_1) = x(1 - y) α_2 = ∫ (β_1 - β_2) = xy α_3 = ∫ (β_2 - β_3) = y(1 - x). Since dβ_j = γ, forms β_i - β_j are closed, and the functions α_i are defined correctly. Note that ⟨γ^*, γ⟩ = 1, ⟨β^i, β_j ⟩ = δ_j^i, ⟨α^i, α_j ⟩ = δ_j^i. 
Then the space dual to the space of chains is Ω_Z = Span(α_0,..., α^3, β_0,..., β^3, γ) The fields under consideration are φ_Z = ∑_i=0^3 α_i φ^i + ∑_j=0^3 β_j ψ^j + γω φ_Z^* = ∑_i=0^3 φ_i^* α^i + ∑_j=0^3 ψ_j^* β^j + ω^* γ^* We introduce the projector P_Z (g, f and e are 0, 1 and 2 -forms respectively) analogously to the previous sections P_Z(g) = ∑_j ⟨α^j, g ⟩α_j P_Z(f) = ∑_j ⟨β^j, f ⟩β_j P_Z(e) = ⟨γ^*, e ⟩γ To define the homotopy consider the operator I:Ω_X →Ω_X; for 2-forms: I(e(x,y) dx dy) = 1/2( ∫_0^x e(x, y) dx) dy - 1/2( ∫_0^y e(x, y) dy) dx for 1-forms: I(f_x(x,y) dx + f_y(x,y) dy) = 1/2( ∫_0^x f_x(x, 0) dx + ∫_0^y f_y(x, y) dy + + ∫_0^y f_y(0, y) dy + ∫_0^x f_x(x, y) dx) The last expression is simply the sum of the integrals of an 1-form f along the edges of the rectangle with vertices (0,0), (0,x), (0,y), (x,y). For 0-forms g(t) I(g) = 0. Using I, we can now define the homotopy (e and f are 2- and 1- forms respectively): h(e) = I(e) - ∑_i ⟨β^i, I(e) ⟩β_i h(f) = I ( f - ∑_i ⟨β^i, f ⟩β_i ) Let us check the condition (<ref>) in the form dh + hd = id - P_Z: (dh + hd) g(x,y) = h(dg) = I ( dg - ∑_i ⟨β^i, dg ⟩β_i ) = = g(x,y) - g(0,0) - I ( ∑_i ( ⟨α^i+1, g ⟩ - ⟨α^i, g ⟩) β_i ) = = g(x,y) - g(0,0) - ∑_i ⟨α^i, g ⟩ I(β_i-1 - β_i ) = = g(x,y) - ∑_i ⟨α^i, g ⟩α_i = (id - P_Z) g (dh + hd) e = d(h(e)) = d ( I(e) - ∑_i ⟨β^i, I(e) ⟩β_i ) = = e - ( ∑_i ⟨β^i, I(e) ⟩) γ = e - ⟨γ^*, e ⟩γ = (id - P_Z) e h d f = h d (f_x dx + f_y dy) = h(df) = I(df) - ∑_i ⟨β^i, I(df) ⟩β_i 2 I(df) = ( f_x(x, y) - f_x(x, 0) - ∫_0^y ∂ f_y/∂ x dỹ) dx + + ( f_y(x, y) - f_y(0, y) - ∫_0^x ∂ f_x/∂ y dx̃) dy ∑_i ⟨β^i, I(df) ⟩β_i = ( ∑_i ⟨β^i, f ⟩) β_1 + β_2/2 = ⟨γ^*, df ⟩ I(γ) 2 d I(f) = f_x(x, y) dx + f_y(x, y) dy + f_x(x, 0) dx + f_y(0, y) dy + + ( ∫_0^y ∂ f_y/∂ x dỹ) dx + ( ∫_0^x ∂ f_x/∂ y dx̃) dy I(df) + d I(f) = f (hd + dh) f = I(df) + d I(f) - ∑_i ⟨β^i, I(df) ⟩β_i - ∑_i ⟨β^i, f ⟩ d I(β_i) = = f - ∑_i ⟨β^i, f ⟩β_i - ⟨γ^*, df ⟩ I(γ) + ∑_i ⟨β^i, f ⟩ I(dβ_i) = = (1 - P_Z) f + ∑_i ⟨β^i, f ⟩ I(γ) - ⟨γ^*, df ⟩ I(γ) = (1 - P_Z) f Let us compute the components of action dα_j = β_j-1 - β_j ⟨β^i, dα_j ⟩ = δ_j-1^i - δ_j^i dβ_j = γ ⟨γ^*, dβ_j ⟩ = 1 ⟨α^i, ℒ_v_cα_j ⟩ = ⟨α^i, ν_c^x ∂/∂ x∫ (β_j-1 - β_j) + ν_c^y ∂/∂ y∫ (β_j-1 - β_j) ⟩ = ν_i (β_j-1^i - β_j^i) ⟨β^i, ℒ_v_cβ_j ⟩ = ⟨β^i, d (v_c β_j) + i_v_c dβ_j ⟩ = ν_i+1β_j^i+1 - ν_i β_j^i + + ∫_β^iν_c^x dy - ν_c^y dx ⟨γ^*, ℒ_v_cγ⟩ = ⟨γ^*, d (ν_c^x dy - ν_c^y dx) ⟩ = ∫_[0, 1]^2𝐝𝐢𝐯(v_c) dx dy ⟨α^i, ℒ_v_c h ℒ_v_cβ_j ⟩ = ⟨α^i, ℒ_v_c h ( d (v_c β_j) + i_v_c dβ_j ) ⟩ = ⟨α^i, ℒ_v_c (hd + dh) (v_c β_j) ⟩ + + ⟨α^i, ℒ_v_c h ( ν_c^x dy - ν_c^y dx ) ⟩ = ⟨α^i, ℒ_v_c (v_c β_j - ∑_m ν_m β_j^m α_m) ⟩ + + ⟨α^i, ℒ_v_c h ( ν_c^x dy - ν_c^y dx ) ⟩ = ν_x,i∂ (ν_i β_j)/∂ x + ν_y,i∂ (ν_i β_j)/∂ y - ∑_m ν_i (β_m-1^i - β_m^i) ν_m β_j^m + + 1/2⟨α^i, ν_c^x(x, y) ν_c^x(x, 0) - ν_c^x ∫_0^y ∂ν_c^y/∂ x(x, ỹ) d⟩ + + 1/2⟨α^i, -ν_c^y(x, y) ν_c^y(0, y) + ν_c^y ∫_0^y ∂ν_c^x/∂ x(x̃, y) d⟩ - - ⟨α^i, ∑_k ( ν_c^x ∂ I(β_k)/∂ x + ν_c^y ∂ I(β_k)/∂ y) ∫_β^kν_c^x dy - ν_c^y dx ⟩ Denote ξ := ℒ_v_c h ℒ_v_cγ = ℒ_v_c h d(ν_c^x dy - ν_c^y dx) = ℒ_v_c h ( 𝐝𝐢𝐯(v_c) dx dy ) = 1/2( d ( ν_c^y (ν_c^x(x, y) - ν_c^x(0, y) + ∫_0^x ∂ν_c^y/∂ y dx̃) - - ν_c^x ( ν_c^y(x, y) - ν_c^y(x, 0) + ∫_0^y ∂ν_c^x/∂ x d) ) + 𝐝𝐢𝐯(v_c) ( ν_c^x dy - ν_c^y dx ) - - ( ∫_[0, 1]^2𝐝𝐢𝐯(v_c) dx dy ) ( d ( v_c (β_1 + β_2) ) + 2 (ν_c^x dy - ν_c^y dx) ) ) The most complicated term (cubic in c^a) has the form ⟨α^i, ℒ_v_c h ℒ_v_c h ℒ_v_cγ⟩ = ⟨α^i, ℒ_v_c h ξ⟩ = ⟨α^i, ℒ_v_c( I(ξ) - ∑_i I(β_i) ∫_β^iξ) ⟩ Thus, from (<ref>)-(<ref>) we get the L_∞ module structure S^ind = ψ_i^* ⟨β^i, dα_j 
⟩φ^j + ω^* ⟨γ^*, dβ_j ⟩ψ^j - φ_i^* ⟨α^i, ℒ_v_cα_j ⟩φ^j + ψ_i^* ⟨β^i, ℒ_v_cβ_j ⟩ψ^j - ω^* ⟨γ^*, ℒ_v_cγ⟩ω + φ_i^* ⟨α^i, ℒ_v_c h ℒ_v_cβ_j ⟩ψ^j + ψ_i^* ⟨β^i, ℒ_v_c h ℒ_v_cγ⟩ω + φ_i^* ⟨α^i, ℒ_v_c h ℒ_v_c h ℒ_v_cγ⟩ω + 1/2 f_ab^d c^a c^b c_d^*
http://arxiv.org/abs/2406.19244v1
20240627151056
Improving the Expressiveness of $K$-hop Message-Passing GNNs by Injecting Contextualized Substructure Information
[ "Tianjun Yao", "Yiongxu Wang", "Kun Zhang", "Shangsong Liang" ]
cs.LG
[ "cs.LG", "I.2.6" ]
tianjun.yao@mbzuai.ac.ae 0009-0006-0553-2809 Mohamed bin Zayed University of Artificial Intelligence Abu Dhabi UAE yingxv.wang@gmail.com 0000-0003-3284-1464 Mohamed bin Zayed University of Artificial Intelligence Abu Dhabi UAE kunz1@cmu.edu 0000-0002-0738-9958 Carnegie Mellon University & Mohamed bin Zayed University of Artificial Intelligence Pittsburgh PA USA liangshangsong@gmail.com 0000-0003-1625-2168 Mohamed bin Zayed University of Artificial Intelligence Abu Dhabi UAE § ABSTRACT Graph neural networks (GNNs) have become the de facto standard for representation learning on graphs and have achieved state-of-the-art performance in many graph-related tasks; however, it has been shown that the expressive power of standard GNNs is at most equivalent to the 1-dimensional Weisfeiler-Lehman (1-WL) test. Recently, a line of works has aimed to enhance the expressive power of graph neural networks. One such line develops K-hop message-passing GNNs, where a node representation is updated by aggregating information not only from direct neighbors but from all neighbors within K hops of the node. Another line of works leverages subgraph information to enhance the expressive power, which is proven to be strictly more powerful than the 1-WL test. In this work, we discuss the limitations of K-hop message-passing GNNs and propose a substructure encoding function to uplift the expressive power of any K-hop message-passing GNN. We further inject contextualized substructure information to enhance the expressiveness of K-hop message-passing GNNs. Our method is provably more powerful than previous works on K-hop graph neural networks and 1-WL subgraph GNNs, a specific type of subgraph-based GNN model, and is not less powerful than 3-WL. Empirically, our proposed method sets new state-of-the-art performance or achieves comparable performance on a variety of datasets. Our code is available at <https://github.com/tianyao-aka/Expresive_K_hop_GNNs>. Improving the Expressiveness of K-hop Message-Passing GNNs by Injecting Contextualized Substructure Information Shangsong Liang =============================================================================================================== § INTRODUCTION Graph-structured data is ubiquitous in many real-world applications ranging from social network analysis <cit.> and drug discovery <cit.> to personalized recommendation <cit.> and bioinformatics <cit.>. In recent years, graph neural networks (GNNs) have attracted increasing attention due to their powerful expressiveness and have become the dominant approach for graph-related tasks. Message-passing graph neural networks (MPGNNs) are the most common type of GNNs, thanks to their efficiency and expressivity. 
MPGNNs can be viewed as a neural version of the 1-Weisfeiler-Lehman (1-WL) algorithm <cit.>, where colors are replaced by continuous feature vectors and neural networks are used to aggregate over node neighborhoods <cit.>. By iteratively aggregating neighboring node features to the center node, MPGNNs learn node representations that encode their local structures and feature information. A graph readout function can be further leveraged to pool a whole-graph representation for downstream tasks such as graph classification. Despite the success of MPGNNs, it is proved in recent developments that the expressive power of MPGNNs is bounded by 1-WL isomorphism test <cit.>, i.e, standard MPGNNs or 1-WL GNNs cannot distinguish any (sub-)graph structure that 1-WL cannot distinguish; for instance, for any two n-node r-regular graphs, standard MPGNNs will output the same node representations. Since then, a few works have been proposed to enhance the expressivity of MPGNNs. Methods proposed in  <cit.> aimed at approximating high-dimensional WL tests. However, these methods are computationally expensive and not able to scale well to large-scale graphs. In light of this, some other recent works sought to develop new GNN architectures with improved expressiveness while still being efficient in terms of time and space complexity. One line of works aimed to achieve this by leveraging subgraph information<cit.>, among which Nested GNN<cit.> and GIN-AK<cit.> developed GNN models that hash neighboring subgraphs instead of direct neighbors, effectively representing nodes by means of GNNs applied to their enclosing ego-networks followed by a graph readout function ℛ, which we term as 1-WL subgraph GNN. Another line of works leverages the notion of K-hop message passing<cit.>, where node representation is updated by iteratively aggregating information from not only direct neighbors but also neighbors within K-hop of the node, which implicitly leverage subgraph information (K-hop ego-networks) to perform message passing. In this work, we establish the connection between K-hop message-passing GNNs and 1-WL subgraph GNNs by viewing their message-passing schemes as two different attention patterns. As shown in Figure <ref>, K-hop message-passing GNN induces a uni-directional hub-spoke attention pattern, while 1-WL subgraph GNN induces a pairwise bidirectional attention pattern. Assuming node features for G_1 and G_2 are identical from a countable set, a 2-hop 1-layer GNN is not able to distinguish the structural disparity for the 2-hop ego-network induced by node 1 due to the same resulting attention patterns. However, a 1-layer 1-WL subgraph GNN with a 2-layer base GNN encoder is able to achieve it, thanks to its pairwise bidirectional attention pattern, assuming all functions in the GNN model are injective. Intuitively, the expressive power of 1-WL subgraph GNN is stronger than K-hop message-passing GNNs with the same receptive field as 1-WL subgraph GNN is able to learn representation that reflects the internal substructure due to its pairwise bidirectional attention pattern. It is worth noting that although the shortest path distance is leveraged to perform K-hop neighbor extraction in this example, several works<cit.> leverage graph diffusion kernel to extract K-hop neighbors following similar uni-direction hub-spoke attention pattern, and therefore still lacking the ability to learn representations that reflect the internal substructure. 
In this work, we pose the following question: How to design appropriate mechanisms to enhance the ability of K-hop message-passing GNN to learn node representations that can reflect its internal substructure, and improve its expressive power? Our main contribution is summarized as follows: * We propose a general notion of substructure encoding function by establishing connections between K-hop message-passing GNNs and 1-WL subgraph GNNs, which may inspire future works to design more powerful and efficient structural encoding schemes. * We propose an instantiation of the substructure encoding function by leveraging multi-step random walk. Our proposed structural encoding scheme is effective, parallelizable with theoretical guarantees to extract structural information of subgraphs. The proposed structural encoding scheme is also orthogonal to previous methods in the literatures. * We propose SEK 1-WL test, which is provably more powerful than Subgraph 1-WL test and K-hop 1-WL test, and is not less powerful than 3-WL test. * We propose an implementation of SEK 1-WL test, namely SEK-GNN, which is easy to implement, and can be naturally incorporated into message passing framework with parallelizability and effectiveness. Furthermore, SEK-GNN is able to achieve SOTA performance in a variety of datasets with significantly lower space complexity than 1-WL subgraph GNNs. § PRELIMINARY §.§ Notation We use {} to denote sets and {{ }} to denote multisets. The index set is denoted as [n]:={1, ⋯, n}. Throughout this paper, we consider simple undirected graphs G=(𝒱, ℰ), where 𝒱= {v_1, …, v_n} is the node set and ℰ⊆𝒱×𝒱 is the edge set. For a node u, denote its neighbors as 𝒩_G(u):={v ∈𝒱:{u, v}∈ℰ}. The K-hop neighbors of node u in graph G are defined as 𝒩_G^k(u):={v ∈𝒱:dis(u,v)=k}, where dis is the distance metric to extract k-hop neighbors, e.g., the shortest path distance or random walk steps from node u to node v. Let us further define a node set 𝒮_u^K:={u ∪𝒩_G^k(u): k ∈ [K]}, G_u^K:=G[𝒮_u^K] is the node-induced subgraph where the central node is u, or namely a K-hop ego-network rooted at node u. Finally, we use p to denote a scalar value, p to denote a column vector, and P to denote a matrix. §.§ Weisfeiler-Lehman Test WL test is a family of very successful algorithmic heuristics used in graph isomorphism problems. 1-WL test, being the simplest one in the family, works as follows–each node is assigned the same color initially, and gets refined in each iteration by aggregating information from their neighbors' states. The refinement stabilizes after a few iterations and the algorithm outputs a representation of the graph. Two graphs with different representations are not isomorphic. The test can uniquely identify a large set of graphs up to isomorphism <cit.>, but there are simple examples where the test tragically fails–for instance, two regular graphs with the same number of nodes and same degrees cannot be distinguished by the test. As a result, a natural extension to 1-WL test is k-WL test which provides a hierarchical testing process by keeping the state of k-tuples of nodes. The procedure of 1-dimensional Weisfeiler-Lehman Test algorithm is described in Algorithm 1, where 𝒞 denotes the node color space. §.§ More expressive GNNs Despite of the success of MPGNNs, it has been proved in recent literatures that the expressive power of MPGNNs is upper bounded by 1-WL test <cit.>. 
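Since the refinement of Algorithm 1 recurs throughout the rest of the paper, a minimal sketch may help fix ideas. The adjacency-list representation, the relabelling of colour signatures by sorting, and the stopping rule are implementation choices of this sketch only; the final example illustrates the limitation just noted, namely that two regular graphs with the same number of nodes and the same degree (a hexagon versus two disjoint triangles) receive identical colour histograms.

```python
# Minimal sketch of the 1-WL colour refinement of Algorithm 1: every node
# starts with the same colour and is iteratively re-coloured by hashing its
# own colour together with the multiset of its neighbours' colours.
from collections import Counter

def wl_1(adj, num_iters=None):
    """adj: dict mapping node -> iterable of neighbours. Returns final colours."""
    colors = {v: 0 for v in adj}                 # identical initial colours
    num_iters = num_iters or len(adj)
    for _ in range(num_iters):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # injectively relabel the colour signatures as fresh integer colours
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in adj}
        stable = len(set(new_colors.values())) == len(set(colors.values()))
        colors = new_colors
        if stable:                               # partition no longer refines
            break
    return colors

def wl_histogram(adj):
    return Counter(wl_1(adj).values())

# two 6-node 2-regular graphs: a hexagon versus two disjoint triangles;
# 1-WL assigns both the same colour histogram and cannot tell them apart
hexagon   = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_histogram(hexagon), wl_histogram(triangles))
```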
Since then numerous approaches for more expressive GNNs have been proposed, including positional and structural encodings <cit.>, higher-order message-passing schemes <cit.>, equivariant models <cit.> Subgraph GNNs. Recently a collection of methods exploit similar ideas to utilize subgraphs through the application of GNNs. <cit.> explicitly formulated the concept of bags of subgraphs generated by a predefined policy and studied layers to process them in an equivariant manner: the same GNN can encode each subgraph independently (DS-GNN), or information can be shared between these computations in view of the alignment of nodes across the bag <cit.>. Reconstruction GNNs <cit.> obtain node-deleted subgraphs, process them with a GNN, and then aggregate the resulting representations by means of a set model. Nested GNN <cit.> and GNN-As-Kernel <cit.> propose to hash rooted subgraphs instead of rooted subtrees to perform color refinement. ID-GNN<cit.> also leverage ego-network subgraphs with root nodes marked so as to specifically alter the exchange of messages involving them. K-hop message-passing GNNs. Several works utilize K-hop message passing to improve the expressive power of GNNs. <cit.> utilizes graph diffusion kernel to extract multi-hop neighbors and calculates the final representation from neighbors from each hop. <cit.> also proposes a K-hop GNN model that iteratively updates node representations by aggregating information from K-hop neighbors, which is shown to be able to identify fundamental graph properties such as connectivity and bipartiteness. <cit.> proposes to learn attention-based edge coefficients by incorporating information from farther away nodes by means of their shortest path. § HOW TO DESIGN THE SUBSTRUCTURE ENCODING FUNCTION In the previous section, we show that intuitively K-hop message-passing GNNs are less powerful than subgraph 1-WL GNN due to its lacking the ability to capture the internal substructure of the induced subgraph G_u^K. We first give a formal definition of internal substructure. Definition 1. (Internal substructure) Given a graph G and node u ∈ G, the internal substructure of a K-hop node-induced subgraph G_u^K can be represented as a edge set ℐ_G_u^K and the edge induced subgraph G[ℐ_G_u^K], where ℐ_G_u^K:={{i,j}: (i,j) ∈ℰ, i ∈ G_u^K, j ∈ G_u^K, i ≠ u, j ≠ u }. Here ℐ_G_u^K refers to set of edges in G_u^K that excludes node u. As K-hop message-passing GNNs are not able to be aware of the internal substructure, we propose to use a substructure encoding function to encode the information of node u's internal substructure of G_u^K. Next, we give a formal definition of substructure encoding function. Definition 2. (Substructure encoding function) A substructure encoding function f: G_u^K × G →ℛ^d takes as input a K-hop subgraph rooted at node u and graph G, and outputs an encoded d-dimensional features which can reflect the internal substructure of G_u^K. By designing a proper substructure encoding function f(·) and incorporating the encoded information, the expressive power of K-hop message-passing GNNs will get uplifted. One core question in this work is how to design a proper substructure encoding function f to enhance the expressive power of a K-hop message-passing GNNs. f(·) is expected to be able to encode the internal substructure with computational efficiency in terms of space and time complexity. One way to design such a function is to exploit random walk to calculate the self-return probability for every node u ∈𝒱. 
Intuitively, two nodes with different internal substructures would lead to different random walk patterns given a sufficient number of random walk steps, and the self-return probabilities (p_u^1,p_u^2,⋯, p_u^l) ≠ (p_v^1,p_v^2,⋯, p_v^l) for node u in graph G and node v in graph H if the internal substructure of G_u^K is not identical to that of H_v^K. Here (p_u^1,p_u^2,⋯, p_u^l) is the l-step self-return probability for node u. But how many steps are sufficient for nodes u and v to distinguish the internal substructures of G_u^K and H_v^K? We now propose the following theorem, which bounds the number of steps needed to encode the internal substructure of a K-hop node-induced subgraph G_u^K. Given two n-node r-regular graphs G and H, let 3 ≤ r<(2 log 2 n)^1 / 2 and ϵ be a fixed constant. For two K-hop ego-networks G_u^K and H_v^K with K being at most ⌈(1/2+ϵ) log 2 n/log (r-1) + 1 ⌉, 2K steps of random walk are sufficient to discriminate the internal substructures of G_u^K and H_v^K. The proof is included in Appendix <ref>. Theorem <ref> demonstrates the advantage of using the self-return probability of a random walk as the substructure encoding function f(·), thanks to its computational efficiency: it only requires 2K steps to encode the internal substructure of a K-hop ego-network. Although Theorem <ref> only applies to regular graphs, empirically we demonstrate that {p_u^i}_i=1^l leads to good model performance with a moderately small integer l. Furthermore, the computation of the self-return probability is easy to parallelize using matrix computations. Let Â= A+I be the adjacency matrix with self-loops added, and D̂ be the corresponding diagonal degree matrix. Then 𝐩_u^t=D̂^-1Â𝐩_u^t-1, where 𝐩_u^t is a column vector representing node u's landing probability on every node v ∈𝒱 at time step t, denoted by 𝐩_u^t[v]. This equation can be expressed in matrix form as H^(t+1)=D̂^-1ÂH^(t), where H^(0)=I. {H^(t)}_t=1^l then contains the self-return probability for every node u ∈𝒱, which can be obtained as {H_uu^(t)}_t=1^l for l steps of random walk starting from node u. Injecting more substructure information. Although the self-return probability {H_uu^(t)}_t=1^l is able to encode the internal substructure of a K-hop ego-network G_u^K for node u in graph G, {H^(t)}_t=1^l actually contains more structural information that can be utilized. Therefore, we propose another two functions to encode more substructure information. (i) Central node to k-hop neighbors landing probability calculates an aggregated statistic of the landing probability from the central node u to the k-hop neighboring nodes 𝒩_G^k(u), k ∈ [K], for each time step t. (ii) Landing probability across k-hop neighbors calculates an aggregated statistic over the nodes that are equally distant from the root node u. Let f_1(·) denote the function that encodes the self-return probability, f_2(·) the function that encodes the aggregated landing probability from the central node to k-hop neighbors, and f_3(·) the function that encodes the aggregated landing probability across k-hop neighbors, for all k ∈ [K]. Formally, we have the following substructure encoding function: f_1(G_u^K,G) := {H_uu^(t)}_t=1^l, f_2(G_u^K,G) := AGG_k({H^(t)_ui:dis(u,i)=k,k ∈ [K] }_t=1^l), f_3(G_u^K,G) := AGG_k({H^(t)_i,j:dis(i,u)=dis(j,u)=k,k ∈ [K] }_t=1^l), f(G_u^K,G) := COMBINE(f_i(G_u^K,G),i ∈{1,2,3}). For an l-step random walk, the encoding function f(·) outputs l-dimensional features that encode the internal substructure of node u ∈𝒱, extracted from {H^(t)}_t=1^l. 
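The whole encoding pipeline, i.e. the recursion H^(t+1)=D̂^-1ÂH^(t) followed by f_1, f_2 and f_3, can be sketched in a few lines. In the snippet below the shortest-path distance is used for dis(·,·), the mean is used as the AGG_k aggregator and concatenation as COMBINE; these concrete choices, as well as the dense matrix representation, are assumptions made for the illustration, since AGG and COMBINE are left abstract above.

```python
# Sketch of the substructure encoding function f = COMBINE(f_1, f_2, f_3)
# built from the random-walk matrices H^(t+1) = D^-1 (A + I) H^(t).
# Shortest-path distance for dis, mean for AGG_k and concatenation for
# COMBINE are illustrative assumptions, not choices fixed by the paper.
import numpy as np
from collections import deque

def bfs_distances(adj_list, source):
    """Hop distances from source, used as dis(.,.) in this sketch."""
    dist = {source: 0}
    q = deque([source])
    while q:
        v = q.popleft()
        for u in adj_list[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

def random_walk_mats(A, l):
    """H^(1), ..., H^(l) with H^(t+1) = D^-1 (A + I) H^(t) and H^(0) = I."""
    A_hat = A + np.eye(len(A))
    P = A_hat / A_hat.sum(axis=1, keepdims=True)   # row-normalised transition matrix
    H, mats = np.eye(len(A)), []
    for _ in range(l):
        H = P @ H
        mats.append(H.copy())
    return mats

def substructure_features(A, K=2, l=4):
    """Per-node features [f_1 | f_2 | f_3], each l-dimensional."""
    n = len(A)
    adj_list = [list(np.nonzero(A[v])[0]) for v in range(n)]
    mats = random_walk_mats(A, l)
    feats = np.zeros((n, 3 * l))
    for u in range(n):
        dist = bfs_distances(adj_list, u)
        shells = [[v for v, d in dist.items() if d == k] for k in range(1, K + 1)]
        ring = [v for shell in shells for v in shell]
        for t, H in enumerate(mats):
            feats[u, t] = H[u, u]                                    # f_1: self-return
            feats[u, l + t] = np.mean(H[u, ring]) if ring else 0.0   # f_2: root to shells
            same = [H[i, j] for sh in shells for i in sh for j in sh]
            feats[u, 2 * l + t] = np.mean(same) if same else 0.0     # f_3: within-shell
    return feats

# toy example: a 5-cycle with one chord; nodes adjacent to the chord obtain
# substructure features different from the remaining nodes
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]:
    A[i, j] = A[j, i] = 1
print(np.round(substructure_features(A), 3))
```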
Injecting contextualized substructure information. Thanks to the substructure encoding function f(·), we can encode the internal substructure of G_u^K for every node u ∈𝒱. Node representation with its enclosing K-hop ego-network h_v := h_G_v^K can be further enriched by incorporating the substructure information of node v, which is {{h_u: u ∈ G_v^K, u ≠ v}}. This can be easily achieved via f(G_u^K,G) as node features. § SUBSTRUCTURE ENHANCED K-HOP 1-WL ALGORITHM In this section, we first generalize previous works on K-hop message-passing GNNs by presenting a formal definition of the K-hop 1-WL color refinement algorithm, as well as subgraph 1-WL color refinement algorithm which was proposed in previous works<cit.>. Then we propose our novel color refinement algorithm, namely the Substructure Enhanced K-hop 1-WL Algorithm (SEK 1-WL), which is more expressive than K-hop 1-WL and subgraph 1-WL. Finally, we give a practical implementation of SEK 1-WL which is efficient and easy to implement. Definition 3. (K-hop 1-WL Test) A K-hop 1-WL color refinement algorithm iteratively refines node colors using all k-hops neighbors, where k ∈ [K]. Formally, the color refinement algorithm is χ_G^t(v) := hash(χ_G^t-1(v), {{χ_G^t-1(u) : u ∈𝒩_G(v)}}, {{χ_G^t-1(u) : dis_G(v, u) = 2}}, ⋯, {{χ_G^t-1(u) : dis_G(v, u) = K}}). The only difference between 1-WL test and K-hop 1-WL test lies in line 4 of Algorithm 1, where the color refinement update procedure is replaced by Eq. <ref>. The K-hop 1-WL algorithm generalizes previous works on K-hop message-passing GNNs where dis can be the shortest path distance using shortest path kernel<cit.> or random walk steps using graph diffusion kernel<cit.>. Next, we introduce the definition of the Subgraph 1-WL test which has been realized in previous works<cit.>. Definition 4. (Subgraph 1-WL Test) A subgraph 1-WL color refinement algorithm hashes the node-induced subgraph instead of direct neighbors. Formally, the color refinement algorithm is as follows: χ_G^t(v):=hash(χ_G^t-1(v),γ_G^t-1(G_v^K)), where γ_G: G_v^K →𝒞 is a function that maps a (sub)graph to a color c ∈𝒞. As the color map for graphs is as hard as graph isomorphism, previous works realize Subgraph 1-WL test by replacing the hashing function γ with a GNN model as a wrapper followed by a graph readout function to obtain the subgraph representation of G_v^K, and then the whole graph representation is then obtained by pooling these subgraph representations <cit.>. Definition 5. (SEK 1-WL Test) Finally our proposed color refinement algorithm SEK 1-WL updates node colors using both K-hop neighbors as well as the contextualized internal substructure information, and the color refinement procedure to replace line 4 of Algorithm 1 is shown as follows: χ_G^t(v) := hash( χ_G^t-1(v), f(G_v^K, G), {{f(G_u^K, G) : u ∈ G_v^K, u ≠ v}}, {{χ_G^t-1(u) : u ∈𝒩_G(v)}}, {{χ_G^t-1(u) : dis_G(v, u) = 2}}, ⋯, {{χ_G^t-1(u) : dis_G(v, u) = K}}). In Eq. <ref>, f(·) is the substructure encoding function proposed in Eq. <ref>. As shown in Eq. <ref>, SEK 1-WL utilizes K-hop neighborhood information, similarly to K-hop 1-WL test and also f(G_v^K,G), the encoded features of internal substructure of node v. Furthermore, SEK 1-WL also inject contextualized substructure information {{f(G_u^K,G): u ∈ G_v^K, u ≠ v }} for node v to enhance the expressive power. We now propose a theorem regarding the expressive power of SEK 1-WL. SEK 1-WL test is strictly more powerful than K-hop 1-WL test and Subgraph 1-WL test. Proof. 
It is easy to see that SEK 1-WL is more powerful than K-hop 1-WL test as SEK 1-WL incorporates additional substructure information using f(·), and both of them perform node color refinement using K-hop neighbors. Secondly, we show that SEK 1-WL test is more powerful than Subgraph 1-WL test based on the observation of the attention patterns induced by K-hop message-passing GNN and 1-WL subgraph GNN discussed in section <ref>: hashing a subgraph G_v^K can be equivalently expressed as hashing its K-hop neighbors as well as its internal substructure of root node v, and then the color refinement algorithm of subgraph 1-WL test can be reformulated as follows: χ_G^t(v) := hash( χ_G^t-1(v), f(G_v^K, G), {{χ_G^t-1(u) : u ∈𝒩_G(v)}}, {{χ_G^t-1(u) : dis_G(v, u) = 2}}, ⋯, {{χ_G^t-1(u) : dis_G(v, u) = K}}). The extra expressive power of SEK 1-WL over subgraph 1-WL stems from the contextualized substructure information {{f(G_u^K,G): u ∈ G_v^K, u ≠ v }} injected in the color refinement procedure of SEK 1-WL. Finally, it is easy to see that SEK 1-WL is more powerful than 1-WL test. This concludes the proof of Theorem <ref>. Next we give a practical implementation of SEK 1-WL. Practical Implementation. The SEK 1-WL color refinement algorithm can be easily implemented using the message-passing GNN framework. We call our framework SEK-GNN, with the following message-passing and update step for one iteration: m_v^l, k=MESSAGE_k^l(h_v^l-1,f(G_v^K, G),{{(h_u^l-1,f(G_u^K, G)): u ∈𝒩_G^k(v)}}), h_v^l, k=UPDATE_k^l(m_v^l, k), h_v^l=COMBINE^l({{h_v^l, k: k ∈[K]}}). SEK-GNN follows a similar message passing and update procedure as the previous K-hop message-passing GNN, where h_v^l, k is the representation of node v at l^th layer and k^th hop. Once h_v^l, k is calculated using the MESSAGE and UPDATE functions, a COMBINE function is leveraged to combine the node v's representation from k hop, for all k ∈ [K]. However, the difference between our work and the previous ones is that for each node v ∈𝒱, both f(G_u^K, G) and the contextualized encoded substructure features {{f(G_u^K,G): u ∈ G_v^K, u ≠ v }} are involved in the MESSAGE function. Some design choices for the MESSAGE function include GCN<cit.> and GIN<cit.>; for a different k-hop message passing, MESSAGE function can share the same model parameters or use different parameters for each k. UPDATE function can be a MLP or non-linear activation function such as RELU. We also leverage jumping knowledge network<cit.> to increase the model capacity, and the pooling function can be summation, concatenation, or attention-based aggregation. Finally, for COMBINE operation, summation or aggregation following geometric distribution are considered. For geometric, the weight of hop k is calculated based on θ_k=α(1-α)^k, where α∈(0,1]. The final representation for all k hop nodes is obtained via weighted sum. SEK-GNN, as an instance of SEK 1-WL, is more expressive than K-hop 1-WL test and Subgraph 1-WL test. SEK-GNN is not less powerful than 3-WL test. <cit.> has shown that all subgraph-based GNNs are bounded by 3-WL test by proving that all such methods can be implemented by 3-IGN<cit.>, and <cit.> shows that K-hop message-passing GNNs can be implemented by 3-IGN; therefore SEK-GNN, as a K-hop message-passing GNN framework, is also upper bounded by 3-WL test. We say that A is not less powerful than B if there exists a pair of non-isomorphic graphs that cannot be distinguished by B but can be distinguished by A<cit.>. 
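Before turning to the example of Figure <ref>, the message-passing and update equations above can be summarised in a short sketch. The version below is a simplified dense PyTorch implementation and not the released SEK-GNN code: MESSAGE_k is taken to be a per-hop linear map followed by sum aggregation over hop-k neighbours, UPDATE is a ReLU, COMBINE uses the geometric weights θ_k=α(1-α)^k discussed above, and the contextualized substructure features f(G_v^K,G) are assumed to be pre-concatenated to the node features; all of these are assumptions of the sketch.

```python
# Simplified dense sketch of one SEK-GNN iteration (an illustration only):
# per-hop linear MESSAGE_k with sum aggregation, ReLU as UPDATE and a
# geometric COMBINE with theta_k = alpha * (1 - alpha)**k.
import torch
import torch.nn as nn

class SEKLayer(nn.Module):
    def __init__(self, in_dim, out_dim, K, alpha=0.5):
        super().__init__()
        self.K = K
        self.msg = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(K)])
        self.self_loop = nn.Linear(in_dim, out_dim)
        # geometric combination weights theta_k = alpha * (1 - alpha)**k (normalised)
        theta = torch.tensor([alpha * (1 - alpha) ** k for k in range(1, K + 1)])
        self.register_buffer("theta", theta / theta.sum())

    def forward(self, x, hop_adjs):
        """
        x:        [n, in_dim] float tensor; node features with f(G_v^K, G) concatenated
        hop_adjs: list of K dense float 0/1 matrices, hop_adjs[k-1][v, u] = 1 iff
                  node u is exactly k hops away from node v
        """
        out = self.self_loop(x)
        for k in range(self.K):
            m_k = hop_adjs[k] @ self.msg[k](x)            # MESSAGE_k + sum aggregation
            out = out + self.theta[k] * torch.relu(m_k)   # UPDATE_k, geometric COMBINE
        return torch.relu(out)

# usage sketch, reusing the toy graph and encoding from the previous snippet:
# x = torch.tensor(substructure_features(A), dtype=torch.float32)
# hop_adjs = [torch.tensor(M, dtype=torch.float32) for M in build_hop_masks(A, K=2)]
# layer = SEKLayer(in_dim=x.shape[1], out_dim=16, K=2)
# h = layer(x, hop_adjs)
```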
Figure <ref> illustrates two well-known non-isomorphic distance-regular graphs: the 4 × 4 rook's graph and the Shrikhande graph, both of which have the same intersection array {6,3;1,2} (the definition of distance-regular graph and intersection array is introduced in Appendix <ref>). As both graphs are strongly regular, 3-WL test fails to discriminate them; however, SEK-GNN is able to discriminate them as for the two red nodes, we can see that the 1-hop induced subgraphs of the green nodes (both are 2-hop neighbors of red nodes) have different internal substructures: for the 4 × 4 rook's graph, there are two circles in the subgraph induced by blue nodes; however, for the Shrikhande graph, there is no circle involved in the subgraph induced by blue nodes. This demonstrates the effectiveness of incorporating contextualized substructures. One could also verify that the 1-hop induced subgraphs for the two red nodes also have different internal substructures. § SPACE AND TIME COMPLEXITY As SEK-GNN involves a preprocessing stage and a training (inference) stage, we discuss space and time complexity for both stages separately. For a graph G=(𝒱,ℰ) with n nodes and m edges, in the preprocessing stage, to compute {H^(t)}_t=1^l for a l-step random walk, SEK-GNN only requires 𝒪(2m) space since the calculation of H^(t) only depends on H^(t-1), which is on par with standard GNN models such as GCN and GIN. In the training and inference stage, SEK-GNN only requires 𝒪(n) space as the substructure encoding function outputs f(G_v^K,G) for every node v as the node features. The space complexity of SEK-GNN is similar to that of previous works on K-hop GNNs, and is significantly less than 1-WL subgraph GNN models, which requires 𝒪(n^2) due to the independent subgraph extraction. For time complexity, SEK-GNN requires 𝒪(lm) time to preprocess {H^(t)}_t=1^l for a l-step random walk. As we show in section 3, l is typically a small integer independent of graph size and the preprocessing of {H^(t)}_t=1^l only performs once, hence it is still affordable. For the training and inference stage, SEK-GNN requires 𝒪(n^2) time in the worst case. However, empirically, we only randomly sample a fixed number of neighbors from 𝒩_G^K(v) for every node v, and do not see a drop in model performance. With neighbor sampling, SEK-GNN only requires 𝒪(cn) time, where c is a constant only dependent on the number of neighbors to sample for each hop k ∈ [K]. The time complexity of SEK-GNN is also clearly less than 1-WL subgraph GNN models whose time complexity is 𝒪(nm), note that typically c ≪ m. § RELATED WORK Several works also utilize random walk to enhance the expressivity of graph neural networks <cit.>. <cit.> made the observation that the range of "neighboring" nodes that a node’s representation draws from strongly depends on the graph structure, analogous to the spread of a random walk. Motivated by this observation, they proposed a new aggregation scheme for node representational learning that can adapt neighborhood ranges to nodes individually. <cit.> established connections between graph convolutional networks (GCN) and PageRank <cit.> to derive an improved propagation scheme based on personalized PageRank. Random walk GNN <cit.> uses a number of trainable "hidden graphs" which are compared against the input graphs using a random walk kernel to produce graph representations. 
Although these works leverage random walk to build new GNN architectures, SEK-GNN is inherently different from these works, which leverages the sequential information {H^(t)}_t=1^l of a l-step random walk to encode the internal substructure of a induced subgraph to enhance the representative power of K-hop message-passing GNNs. Another line of research tries to enhance the expressive power of GNNs by augmenting node features. <cit.> add one-hot or random features into node labels, although being scalable and easy to implement, these approaches may not preserve permutation invariant property, thus would hurt the generalization performance outside of the training set. Recent work also proposes to use distance features to uplift GNN’s expressivity <cit.> where a distance vector w.r.t the target node set is calculated for each node as its additional feature. <cit.> proposes labelling trick to perform node feature augmentation by distinguishing target node set. <cit.> introduces a hierarchy of local isomorphism and proposes structural coefficients as additional features to identify such local isomorphism. Our proposed substructure encoding function also belongs to this line of works, where self-return probability, landing probability across k-hop neighbors and central node to k-hop neighbors landing probability are calculated to enrich node features, which can extract structural disparity with theoretical guarantees. Our approach is orthogonal to previous structure encoding methods, and provides an easy way to incorporate contextualized substructure information. Finally, <cit.> also aims to improve the expressive power of K-hop message-passing GNNs. However, they leverage peripheral edges to enrich the representation of a K-hop message-passing GNNs, which is equivalent of extracting the internal substructure of G_u^K explicitly while SEK-GNN leverages substructure encoding function to encode the internal substructure implicitly. However, as SEK-GNN also injects contextualized substructure information {{f(G_v^K,G): v ∈ G_u^K, v ≠ u }}, the expressive power can get further improved. § EXPERIMENTS In this section, we empirically evaluate the proposed method on synthetic datasets and real-world datasets with both graph classification and graph regression tasks. We demonstrate that SEK-GNN can achieve state-of-the-art performance over multiple datasets. We make use of Pytorch Geometric <cit.> to implement our proposed framework, and our code is available at <https://github.com/tianyao-aka/Expresive_K_hop_GNNs>. The statistics of the datasets used in the experiment are shown in Appendix <ref>. §.§ Synthetic datasets Datasets. We use two synthetic datasets to evaluate the effectiveness of SEK-GNN: i) Graph property regression, where the goal is to regress the number of three graph properties on random graphs, namely connectedness, diameter, radius<cit.>; ii) Graph substructure counting, where the goal is to regress the number of four substructures: triangle, tailed triangle, star, and 4-cycle on randomly generated graphs<cit.>. Baseline methods. We use GIN <cit.> as a baseline method as it has the same expressive power as 1-WL test. For more powerful baselines, we use GIN-AK+ <cit.>, PNA <cit.>, PPGN <cit.>, and KP-GIN+ <cit.>. Experiment setting. 
For both graph property dataset and graph substructure counting dataset, GINE+ <cit.> is used as the base encoder for SEK-GNN, and we also concatenate the hidden representation from a GIN model with SEK-GNN, followed by a linear transformation to get the final prediction. We call it SEK-GIN. Similarly SEK-PPGN leverages representation from PPGN and SEK-GNN where GIN is the base encoder for SEK-GNN. For both datasets, we follow the standard data splitting method as in <cit.>. For graph property dataset, we use a learning rate of 8e-3, weight decay rate of 5e-7 for SEK-GIN, and (1e-3, 0.0) for SEK-PPGN. For SEK-GIN, the model is trained for 350 epochs and for SEK-PPGN, the model is trained for 800 epochs. The batch sizes for SEK-GIN and SEK-PPGN are tuned over {64,128}. We use a hidden dim of 64 for both models. For SEK-GIN, the number of hops k in GINE+ encoder is searched over {5,6}, and the number of layers is searched over {5,6,7}. For SEK-PPGN, we use a 1-layer GIN encoder for SEK-GNN, where the number of hops k is searched over {3,5}. For SEK-GIN, we use ReduceLROnPlateau learning rate scheduler with a patience of 10 and a reduction factor of 0.5. The pooling method used for jumping knowledge in SEK-GIN is attention-based pooling and summation for SEK-PPGN. Finally, we use summation as the COMBINE function in SEK-PPGN and geometric in SEK-GIN. For the graph substructure counting dataset, we use a learning rate of 1e-3, and a weight decay rate of 1e-6 for SEK-GIN, and 1e-3 and 0.0 for SEK-PPGN. For SEK-GNN, the model is trained for 350 epochs, and for SEK-PPGN, the model is trained for 1000 epochs. The batch size for SEK-GIN and SEK-PPGN are tuned over {32,64}. We use a hidden dimension of 64 for both models. For SEK-GIN, the layer number of GINE+ encoder and the number of hops k is searched over {(5,3),(5,5),(5,7),(6,3),(6,5),(6,7)}. For SEK-PPGN, we use a 1-layer GIN encoder for SEK-GNN, where the number of hops k is searched over {3,5}. For SEK-GIN, we use ReduceLROnPlateau learning rate scheduler with a patience of 15 and a reduction factor of 0.75. The pooling methods for jumping knowlwdge in SEK-GIN is attention-based pooling, and summation for SEK-PPGN. Finally, we use summation as COMBINE function in SEK-PPGN and geometric in SEK-GIN. The graph readout function is summation in both SEK-PPGN and SEK-GIN. Results. The evaluation metric for graph property dataset is log_10(MSE), and for graph substructure counting dataset we use MAE. Each model is trained three times and we report the average performance. As shown in Table <ref>, SEK-GIN or SEK-PPGN outperform the baseline methods on these two datasets for most tasks. Previously, we show that SEK-GNN can discriminate some non-isomorphic graphs where 3-WL fails, in these two synthetic datasets, we can see that a 1-layer 3-hop or 5-hop SEK-PPGN is able to uplift the expressive power of PPGN significantly, and sets new state-of-the-art performance in several tasks by a large margin. The most competitive baseline method to SEK-GNN is KP-GIN+, which encodes the internal substructures explicitly by leveraging the peripheral edges and induced subgraph, while SEK-GIN uses the substructure encoding function to encode the internal substructure implicitly. §.§ Real-world datasets §.§.§ Graph classification Datasets. We use eight datasets to evaluate the effectiveness of SEK-GNN, among which MUTAG, PROTEINS, ENZYMES, PTC-MR, BZR, and COX2 are bioinformatics datasets<cit.>, and IMDB-B and IMDB-M are social network datasets<cit.>. 
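As a convenience, the eight benchmarks above are all available through PyTorch Geometric's TUDataset wrapper; the snippet below (ours, not part of the released code) uses the repository identifiers, which differ slightly from the abbreviations in the text (PTC_MR, IMDB-BINARY, IMDB-MULTI).

from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader

names = ["MUTAG", "PROTEINS", "ENZYMES", "PTC_MR", "BZR", "COX2",
         "IMDB-BINARY", "IMDB-MULTI"]
datasets = {name: TUDataset(root="data/TU", name=name) for name in names}
print({k: (len(v), v.num_classes) for k, v in datasets.items()})

# a loader for one training fold is then built in the usual way
loader = DataLoader(datasets["MUTAG"], batch_size=32, shuffle=True)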
Baseline methods. We compare against twelve baselines: (1) Graph kernel methods: WL subtree kernel<cit.>, RetGK<cit.>, GNTK<cit.>, WWL <cit.>, and FGW <cit.>. (2) GNN based methods: DGCNN<cit.>, CapsGNN<cit.>, GIN<cit.>, GraphSAGE <cit.>, GraphSNN<cit.>, GIN-AK+<cit.>, and KP-GNN <cit.>. Experiment setting. We use 10-fold cross-validation, and the evaluation on validation set follows both settings and we report results for two settings<cit.>. For the first setting<cit.>, we use 9 folds for training and 1 fold for testing in each fold. After training, we average the test accuracy across all the folds. Then a single epoch with the best mean accuracy and the standard deviation is reported. For the second setting<cit.>, we use 9 folds for training and 1 fold for testing in each fold but we directly report the mean best test results. We use SEK-GIN where the base GNN encoder for SEK-GNN is GINE+<cit.>, we also use a GIN model and concatenate the graph representations followed by a linear transformation to get the final prediction logits. For all datasets, we search over: (1) the number of hops: {1,2,3,4}; (2) number of layers: {1,2,3,4}; (3) COMBINE function: {sum,geometric}; (4)For JKNet, the pooling function is searched over {sum,concat}, and the graph readout function is summation. the learning rate is 8e-3 and weight decay rate is 1e-6, the hidden size is max(int(120/K),40), where K is the number of hops. Result. We report the model performance for both settings in Table <ref>. We can see that SEK-GIN achieves state-of-the-art performance for both settings. SEK-GIN outperforms GIN consistently across all the datasets, demonstrating that SEK-GNN can go beyond the expressive power of 1-WL test. Compared to other more powerful GNN models such as GIN-AK+, GraphSNN, and KP-GIN+, SEK-GNN can also achieve better performance or comparable performance. §.§.§ Graph regression Datasets. we use QM9 dataset to verify graph regression tasks. QM9 contains 130K small molecules. The task here is to perform regression on twelve targets representing energetic, electronic, geometric, and thermodynamic properties, based on the graph structure and node/edge features. Baseline methods. We compare against six baseline methods including DTNN and MPNN <cit.>, PPGN<cit.>, Nested 1-2-3-GNN <cit.> and KP-GNN <cit.>. Experiment setting. We run three times and report the average results for every target. We use SEK-GIN where the base GNN encoder for SEK-GNN is GINE+, we also use a GIN model and concatenate the graph representations followed by a linear transformation to get the final prediction. We train a model separately for every target, we search over: (1) the number of hops: {5,6,7}, (2) number of layers: {4,6,8}, and (3) COMBINE function: {sum,geometric}. For JKNet, the pooling function is attention-based aggregation, and the graph readout function is summation. The learning rate is 1e-3 and we don't use l2 weight decay and dropout, the hidden size is set to 128 for all targets. Result. As we can see in Table <ref>, SEK-GIN also achieves state-of-the-art performance for many targets, where SEK-GIN is able to outperform PPGN and KP-GIN+ which are both upper bounded by 3-WL test. For other targets, SEK-GIN also achieves comparable performance with SOTA models, which demonstrates the effectiveness of the SEK 1-WL color refinement algorithm. 
§ CONCLUSION In this paper, we establish connections between previous works on K-hop message-passing GNNs and 1-WL subgraph GNNs and find out that K-hop message-passing GNNs lack the ability to distinguish the internal substruture of subgraphs, we therefore propose to use substructure encoding function as one possible mechanism to enhance the expressive power of K-hop message-passing GNNs. We further propose to utilize contextualized substructure information to uplift the expressiveness of K-hop message-passing GNNs. We then introduce SEK 1-WL test which is more expressive than K-hop 1-WL test and Subgraph 1-WL test. Finally, we provide a practical implementation, namely SEK-GNN which enjoys efficiency and parallelizability. Experiments on both synthetic datasets and real-world datasets demonstrate the effectiveness of SEK-GNN. There are still other promising directions to explore. First, how to design other mechanisms to make K-hop message-passing GNNs to be able to distinguish the internal substructure of subgraphs with less computational cost compared with the proposed substructure encoding function. Second, how to establish connection between K-hop message-passing GNNs and other subgraph based GNN methods such as Equivariant Subgraph Aggregation Network (ESAN) <cit.> which is proven to be expressive for graph biconnectivity <cit.>. This may provide a completely different view parallel to this work and guide us to design even more powerful K-hop message-passing GNN models. Finally, one can also exploit incorporating other structural encoding methods into K-hop message-passing GNNs to uplift the representative power. § ACKNOWLEDGEMENTS This work is partially supported by the NSF-Convergence Accelerator Track-D award #2134901, by the National Institutes of Health (NIH) under Contract R01HL159805, by grants from Apple Inc., KDDI Research, Quris AI, IBT, and by generous gifts from Amazon, Microsoft Research, and Salesforce. ACM-Reference-Format § APPENDIX §.§ PROOF OF THEOREM 1 Our proof is inspired by the previous theoretical characterization on random regular graphs in <cit.> and <cit.>. We first outline our proof here: as K-hop message-passing GNNs also extract a node-induced subgraph implicitly, we can focus on the node-induced subgraph G_u^K for any node u ∈𝒱, then we show that 2K steps of randm walk is sufficiently large by restricting the random walk path inside G_u^K and prune those paths outside G_u^K. Finally, we proof 2K steps of random walk is a sufficient condition to discriminate the internal substructure of G_u^K by extending previous theoretical outcome from <cit.>. First, we define a path P = (u_0,u_1,⋯, u_d) is a tuple of nodes satisfying {u_i-1,u_i}∈ℰ, ∀ i ∈ [d]. We denote a path P_st to be a path starting from node s and end at node t, a self-return path can be denoted as P_uu that start and end at node u. A path P is said to be simple if it does not go through a node more than once, i.e., u_i ≠ u_j for i ≠ j. The path P induced by random walk is not necessarily a simple path. Next, we give a formal definition of edge configuration. Definition 6. (Edge configuration) The edge configuration between 𝒩_G^k(u) and 𝒩_G^k+1(u) for node u ∈𝒱 is a list C_v, G^k=(a_v, G^1, k, a_v, G^2, k, …), where a_v, G^i, k denotes the number of nodes in 𝒩_G^k+1(u) of which each has exactly i edges from 𝒩_G^k(u). For two nodes v_1 and v_2, in G_1 and G_2, the two edge configurations C_v_1, G_1^k equals to C_v_2, G_2^k if and only if C_v_1, G_1^k are component-wise equal to C_v_2, G_2^k. 
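For readers who prefer code to notation, Definition 6 can be computed directly. The sketch below (our own, using networkx) tallies, for each node at distance k+1 from v, how many edges it sends back into the k-th neighbourhood of v, and evaluates it on the 4×4 rook's graph used earlier, built as the Cartesian product K_4 □ K_4.

import networkx as nx
from collections import Counter

def edge_configuration(G: nx.Graph, v, k: int):
    """C_{v,G}^k: entry i counts the nodes at distance k+1 from v having
    exactly i edges into the k-th neighbourhood of v."""
    dist = nx.single_source_shortest_path_length(G, v)
    ring_k  = {u for u, d in dist.items() if d == k}
    ring_k1 = {u for u, d in dist.items() if d == k + 1}
    tally = Counter(sum(1 for nb in G.neighbors(w) if nb in ring_k) for w in ring_k1)
    return [tally[i] for i in range(1, max(tally, default=1) + 1)]

rook = nx.cartesian_product(nx.complete_graph(4), nx.complete_graph(4))  # 4x4 rook's graph
print(edge_configuration(rook, (0, 0), 1))   # [0, 9]: every distance-2 node sends 2 edges back

The output is consistent with c_2 = 2 in the intersection array {6,3;1,2} quoted earlier for this graph.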
Next, we propose the first lemma. For two graphs G_1 and G_2 that are uniformly independently sampled from all n-node r-regular graphs, where 3 ≤ r<√(2 log n), we pick any two nodes, each from one graph, denoted by v_1 and v_2 respectively. Then, there is at least one i that is greater than 1/2log n/log (r-1-ϵ) and less than (1/2+ϵ) log n/log (r-1-ϵ) with probability 1-o(n^-1) such that C_v_1, G_1^i≠ C_v_2, G_2^i. Moreover, with at least the same probability, for all i ∈(1/2log n/log (r-1-ϵ),(2/3-ϵ) log n/log (r-1)), the number of edges between 𝒩_G_j^i(v_j) and 𝒩_G_j^i+1(v_j) are at least (r-1-ϵ)|𝒩_G_j^i(v_j)| for j ∈{1,2}. Proof. This lemma can be obtained by following step 1-3 in the proof of Theorem 3.3 in <cit.>. Using lemma <ref>, we now set K=⌈(1/2+ϵ) log n/log (r-1-ϵ)+1⌉, then for the two induced subgraphs G_v_1^K and G_v_2^K, we can show that the with probability 1-o(n^-1), the total number of self-return paths |P_v_1v_1| ≠|P_v_2v_2| where the length of path P_v_1v_1 and P_v_2v_2 are at most 2K. For two graphs G_1 and G_2 that are uniformly independently sampled from all n-node r-regular graphs, where 3 ≤ r<√(2 log n), we pick any two nodes, each from one graph, denoted by v_1 and v_2 respectively. For the two node-induced subgraphs G_v_1^K and G_v_2^K with K=⌈(1/2+ϵ) log n/log (r-1-ϵ)+1⌉, the total number of self-return paths P_v_1v_1 will not be equal to that of P_v_2v_2, i.e., |P_v_1v_1| ≠|P_v_2v_2|,with probability roughly 1-o(n^-1). Proof. Using lemma <ref>, we know there exists some k<K, with probability 1-o(n^-1) such that C_v_1, G_1^k≠ C_v_2, G_2^k, then if we focus on the paths P reaching to 𝒩_G_1^k+1(v_1) and 𝒩_G_2^k+1(v_2) but not further, then the total number of paths |P_v_1v_1| and |P_v_2v_2| is likely to be different, due to the fact that C_v_1, G_1^k≠ C_v_2, G_2^k, with probability 1-o(n^-1). However, we notice that there is some chance that ∑_i=1^r i × a_v_1, G_1^i, k = ∑_i=1^r i × a_v_2, G_2^i, k, i.e., although C_v_1, G_1^k≠ C_v_2, G_2^k component-wise, the total number of edges going from 𝒩_G_1^k+1(v_1) to 𝒩_G_1^k(v_1) and 𝒩_G_2^k+1(v_2) to 𝒩_G_2^k(v_2) is the same. Hence, we say that |P_v_1v_1| ≠|P_v_2v_2| with probability roughly 1-o(n^-1). Lemma <ref> also suggests that 2K steps is sufficient to lead to |P_v_1 v_1| ≠|P_v_2 v_2| as k is strictly less than K. Now we show that |P_v_1 v_1| ≠|P_v_2 v_2| is a sufficient condition for a self-return probability vector of 2K steps {H_uu^t}_t=1^2K to differentiate the internal substructure of G_v_1^K and G_v_2^K. First, for some k<K leading to C_v_1, G_1^k≠ C_v_2, G_2^k with probability 1-o(n^-1), the total sample space (total number of paths) for v_1 and v_2 as G_1 and G_2 are both n-node r-regular graphs, and the total sample space is independent of the subgraph G_v_1^K and G_v_2^K. Next, we consider three types of paths: i) The paths touching {𝒩_G_1^i(v_1)}_i=1^k and {𝒩_G_2^i(v_2)}_i=1^k but not further. ii) The paths touching {𝒩_G_1^k+1(v_1)} and {𝒩_G_2^k+1(v_2)} but not further. iii) The paths touching {𝒩_G_1^i(v_1)}_i=k+2^K and {𝒩_G_2^i(v_2)}_i=k+2^K but not further, if k+2<K. For type 1 paths, the total number of self-return paths to v_1 and v_2 remains the same due to the symmetry of the subgraph. For type 2 paths, using Lemma <ref>, we can see that the total number of paths for v_1 and v_2 is different with probability roughly 1-o(n^-1). For type 3 paths, the total number of self-return paths is also dependent on C_v_1, G_1^k and C_v_2, G_2^k. 
Finally, there are paths that touch {𝒩_G_1^i(v_1)}_i=K+1^2K and {𝒩_G_2^i(v_2)}_i=K+1^2K; however, these paths don't get back to v_1 and v_2, hence they don't count as valid paths. Based on the above observations, we can see that the self-return probability vector starts to make a difference at timestep 2(k+1) and onwards. This concludes the proof that a self-return probability vector of length 2K is sufficient for distinguishing the internal substructure of two induced subgraphs. Finally we make some notes on the cases when C_v_1, G_1^k≠ C_v_2, G_2^k, but ∑_i=1^r i × a_v_1, G_1^i, k = ∑_i=1^r i × a_v_2, G_2^i, k, which will make the self-return probability vector indistinguishable. We call it collision rate, and let it be a scalar value p, however as we inject contextualized substructures, to make it indistinguishable for G_v_1^K and G_v_2^K, it is required that all the nodes i ∈ G_v_1^K and j ∈ G_v_2^K collides and |G_v_1^K| = |G_v_2^K|. Therefore, the collision rate now reduces to p^|G_v_1^K|, which reduces exponentially as |G_v_1^K|, the total number of nodes in G_v_1^K grows. The collision rate becomes negligible as |G_v_1^K| increases. This demonstrates the benefits to incorporate contextualized substructure information. §.§ Ablation Study We further perform a detailed ablation study on our proposed framework on both synthetic datasets and real-world datasets to demonstrate the effectiveness of our proposed method. One core contribution in our work is the proposal of a general notion of substructure encoding function and one instantiation of it which achieves parallelizability and theoretical guarantee to encode the substructural information. Hence, we perform ablation study to evaluate the effect of various step sizes of random-walk, in addition to that without injecting any substructure information(#steps=0). For synthetic datasets, we fix the SEK-GIN to be 6-layer 5-hop with summation as combine function and graph readout function. The model is trained for 400 epochs and three times for each target and we report the average result. For TU dataset, the model is fixed to be a 2-layer 3-hop SEK-GIN, where the combine function is summation and the graph readout function is also summation. All models are trained for 350 epochs. We follow the second evaluation setting<cit.> to report the experiment result across 10 folds. The result is illustrated in Table <ref> and Table <ref>. As we can see, the performance gains for SEK-GIN are significant when the contextualized structural information is injected, which demonstrates the effectiveness of our proposed method. §.§ Definition of distance-regular graph and intersection array In this section, we give a formal definition of distance-regular graph and intersection array. Definition 7. (Distance-regular graph) Given a graph G=(𝒱, ℰ) and let D(G):=max _u, v ∈𝒱dis_G(u, v) be the diameter of G. We say G is distance-regular if for all i, j ∈[D(G)] and all nodes u,v,w,x ∈𝒱 with dis_G(u,v)= dis_G(w,x), we have |𝒩_G^i(u) ∩𝒩_G^j(v)|=|𝒩_G^i(w) ∩𝒩_G^j(x)|. Definition 8. (Intersection array) The intersection array of a distance-regular graph G is denoted as: ι(G)={b_0, ⋯, b_D(G)-1 ; c_1, ⋯, c_D(G)}, where b_i=|𝒩_G(u) ∩𝒩_G^i+1(v)| and c_i=|𝒩_G(u) ∩𝒩_G^i-1(v)| with dis_G(u, v)=i. §.§ Datasets Statistics
http://arxiv.org/abs/2406.17726v1
20240625171346
Extensions of Panjer's recursion for mixed compound distributions
[ "Spyridon M. Tzaninis", "Apostolos Bozikas" ]
math.PR
[ "math.PR", "Primary 91G05, secondary 60E05, 28A50" ]
§ ABSTRACT In actuarial practice, the usual independence assumptions for the collective risk model are often violated, implying a growing need for considering more general models that incorporate dependence. To this purpose, the present paper studies the mixed counterpart of the classical Panjer family of claim number distributions and their compound version, by allowing the parameters of the distributions to be viewed as random variables. Under the assumptions that the claim size process is conditionally i.i.d. and (conditionally) mutually independent of the claim counts, we provide a recursive algorithm for the computation of the probability mass function of the aggregate claim sizes. The case of a compound Panjer distribution with exchangeable claim sizes is also studied. For the sake of completeness, our results are illustrated by various numerical examples. Re-examination of the role of displacement and photon catalysis operation in continuous variable measurement device-independent quantum key distribution Arvind July 1, 2024 ======================================================================================================================================================== § INTRODUCTION Compound distributions are widely used in actuarial science for modelling the aggregate claims amount (denoted as S) paid by an insurance company over a fixed period of time. The computation of the probability distribution P_S of S is one of the main topics in the field. More formally, the random variable S is defined as the random sum S:=∑_n=1^N X_n, where X:={X_n}_n∈ represents the individual claim sizes and N is the random number of claims. Usually, the sequence X is assumed to be independent and identically distributed and independent of N. Even under these independence assumptions, the determination of P_S can be a difficult task, since one usually has to compute higher order convolutions. Consequently, recursive algorithms are often used for the computation of P_S. One of the most prominent algorithms in actuarial literature is Panjer's recursion (see <cit.>, Theorem, p. 24), which in the case of discrete claims size distribution simplifies the computation of the probability mass function (abbreviated as pmf) of S. Recall that the Panjer class of order k∈_0, denoted by Panjer(a,b;k), is a collection of claim number distributions P_N={p_n}_n∈_0, satisfying p_n=0 for any n≤ k-1 and p_n=(a+b/n)· p_n-1 for all n≥ k+1 (see <cit.>, p. 283). The members of the aforementioned class have been characterized by <cit.> for k=0, <cit.> for k=1 and <cit.> for a general k ≥ 2. Recently, <cit.> proposed a reparametrization for the members of the Panjer(a,b;k) class for k≥ 0 resulting in a more practical and unified representation of the corresponding probability distributions. More general recursive expressions than (<ref>) have also been studied by <cit.>, <cit.>, <cit.> and <cit.>. For an extensive review on recursive formulas for actuarial applications, we refer to <cit.> and the references therein. In the case of inhomogeneous insurance portfolios, mixtures of claim number distributions are considered for modelling the claim counts. <cit.>, <cit.>, <cit.>, <cit.> and <cit.> proposed some (compound) mixed negative binomial distributions with different parametrizations using various mixing distributions. 
On the other hand, the literature for mixed Poisson and compound mixed Poisson distributions is much more enriched, encompassing, among others, the works of <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. In particular, recursive formulas have been obtained by <cit.> under the additional assumption that the logarithm of the induced (by the mixing distribution) probability density function can be written as the ratio of two polynomials. Moreover, by adopting the usual independence assumptions, <cit.> obtained a recursive formula for the computation of P_S. In actuarial practice, the assumptions of independence among the random variables of X and the mutual independence between X and N are often violated (see among others <cit.>, <cit.> and <cit.>). In particular, the mutual independence assumption seems to be unrealistic, especially when considering inhomogeneous portfolios. This motivates us to consider a dependence structure among the random variables of X and between N and X. In contrast to the existing notion of compound (mixed) counting distributions, where the sequence X is considered to be i.i.d. and mutually independent of N (see <cit.>, Chapter 8, and <cit.>, p. 4), in our case X is only conditionally i.i.d. and conditionally mutually independent of N. This leads to the concept of mixed compound distributions (see Section <ref>), which constitute a strong generalization of compound (mixed) distributions. Under the above framework, this work studies the mixed counterpart of the original Panjer(a,b;0) class and the corresponding compound distributions, by allowing the parameters of the claim number distribution P_N to be measurable functions of a random vector, the claim size process X to be conditionally i.i.d. and conditionally mutually independent of N. The above assumptions allow for a (possible) correlation (positive or negative) between N and X, as well as correlation among the random variables of X. In addition, we consider the case of compound Panjer distributions with exchangeable claims. Henceforth, we denote by Panjer(a(),b();0) the mixed Panjer class, where is a d-dimensional (d ∈) random vector (also referred as structural parameter). The rest of this paper is organized as follows. Section <ref> provides a characterization for the members of Panjer(a(),b();0) class in terms of regular conditional probabilities, and a recursive algorithm for the computation of their pmf Section <ref> studies the probabilistic aspects of mixed compound distributions. Section <ref> presents a recursive algorithm for the computation of P_S in the case that P_N∈ Panjer(a(),b();0) and the claim size distribution is concentrated on _0. Numerical applications are also given in Section <ref>, whereas Section <ref> concludes the paper. § THE CLASS OF MIXED PANJER CLAIM NUMBER DISTRIBUTIONS and stand for the natural and the real numbers, respectively, while _0:=∪{0} and _+:={x∈: x≥0}. If d∈, then ^d denotes the Euclidean space of dimension d. Given a topology 𝔗 on a set , write () for its Borel σ-algebra on , i.e., the σ-algebra generated by 𝔗. Also :=() and (α,β):=((α,β)), where α,β∈ with α<β, are the Borel σ-algebras of subsets of and (α,β), respectively. For a non-empty set A, denote by _A and by 𝒫(A) its indicator function and its power set, respectively. Also, for a map f:A→E and a non-empty set B⊆A, write f B to denote the restriction of f to B. Given two measurable spaces (,) and (,T), as well as a -T-measurable map Z:→, denote by σ(Z):={Z^-1[B] : B∈ T} the σ-algebra generated by Z. 
Throughout what follows (,,P) is an arbitrary probability space and N is a random variable on taking values in a subset R_N of _0 containing 0. <cit.> proved that the only non-degenerate members of the original Panjer class of claim number distributions Panjer(a,b;0) are the Poisson, the binomial and the negative binomial distributions (see <cit.>, Theorem 1). These distributions will be referred as basic claim number distributions. Since the members of Panjer(a,b;0) satisfy the recursive condition (<ref>), an interesting question is whether their mixtures also satisfy a recursive relation. The previous question traces back to <cit.>, but for a more general family of claim number distributions, where it seems not to have a definite answer. A way to model the mixtures of claim number distributions is to assume the existence of a random variable (or more generally of a random vector) on with values in D⊆ [0,∞), such that the conditional distribution of N given satisfies condition P_N|= K() Pσ()-almost surely (written a.s. for short), where K() is a conditional distribution (cf., e.g., <cit.>, p. 455, for the definition of a conditional distribution). A claim number distribution P_N that satisfies the previous condition is said to be a mixed claim number distribution. This motivates us to provide the following definitions within the class of mixed claim number distributions. Let be a d-dimensional random vector with values in D⊆^d. A random variable N is distributed according to: (a) the mixed Poisson distribution with structural parameter ξ() (written MP(ξ()) for short), where ξ is a (D)-(_+)-measurable function, if p_n():=P(N=n|)=e^-ξ()·(ξ())^n/n! Pσ()-a.s. for any n∈_0; (b) the mixed binomial distribution with structural parameters z_1() and z_2() (denoted by MB(z_1(),z_2()) for short), where z_1 and z_2 are (D)-𝒫(_0)- and (D)-(0,1)-measurable functions, respectively, if p_n()=z_1()n·(z_2())^n·(1-z_2())^z_1()-n Pσ()-a.s. for any n∈{0,1,…,z_1()} Pσ()-a.s.. In particular, if z_1() or z_2() is degenerate, simply write MB(m,z_2()) (m∈) or MB(z_1(),p) (p∈(0,1)), respectively; (c) the mixed negative binomial distribution with structural parameters ρ_1() and ρ_2() (written MNB(ρ_1(),ρ_2()) for short), where ρ_1 and ρ_2 are (D)-(_+)- and (D)-(0,1)-measurable functions, respectively, if p_n()=Γ(ρ_1()+n)/n!·Γ(ρ_1())·(ρ_2())^ρ_1()·(1-ρ_2())^n Pσ()-a.s. for any n∈_0. In particular, if ρ_1() or ρ_2() is degenerate, simply write MNB(r,ρ_2()) (r>0) or MNB(ρ_1(),p) (p∈(0,1)), respectively. In what follows, unless stated otherwise, is a d-dimensional random vector with values in D⊆^d and ξ, z_1,z_2,ρ_1 and ρ_2 are as in Definitions <ref>. A claim number distribution P_N belongs to the class Panjer(a(),b();0) if, the conditional distribution of N given satisfies for every n∈ R_N∖{0} the condition p_n()=(a()+b()/n)· p_n-1() Pσ()-a.s., where a and b are real-valued (D)-measurable functions. (a) In the special case P_=δ_θ_0, where δ_θ_0 denotes the Dirac measure concentrated on θ_0∈ D, the class Panjer(a(θ_0),b(θ_0);0) coincides with the original Panjer class of claim number distributions. (b) Recursive formulas for mixed claim number distributions can be found also in <cit.>, Lemma 5.7, but in a different context from condition (<ref>). More precisely, these recursive formulas arise from a change of distributions technique for the mixing distribution. 
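Sampling from these mixed families is straightforward and is often useful as a sanity check for the recursions derived below. The sketch that follows is a minimal illustration of ours: the callables xi, z1, z2, rho1 and rho2 are placeholders for the measurable functions of the definitions, and note that numpy's gamma generator is parametrized by scale = 1/β.

import numpy as np

rng = np.random.default_rng(0)

def sample_mixed_poisson(theta, xi=lambda t: t):
    return rng.poisson(xi(theta))                       # N | Theta ~ P(xi(Theta))

def sample_mixed_binomial(theta, z1, z2):
    return rng.binomial(z1(theta), z2(theta))           # N | Theta ~ Bin(z1(Theta), z2(Theta))

def sample_mixed_negative_binomial(theta, rho1, rho2):
    return rng.negative_binomial(rho1(theta), rho2(theta))

# example: MP(Theta) with Theta ~ Ga(2, 3); then E[N] = E[Theta] = 2/3
theta = rng.gamma(shape=2.0, scale=1.0 / 3.0, size=100_000)
N = sample_mixed_poisson(theta)
print(N.mean())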
Since the definition of Panjer(a(),b();0) involves conditioning, it is natural to expect that the notion of regular conditional probabilities (also known as disintegrations) will play a crucial role in order to avoid trivialities and to treat conditioning in a rigorous way. The following definition is a special instance of <cit.>, Definition 452E, appropriately adapted for our purposes (see also <cit.>, Definition 3.1). Let (,T,Q) be a probability space. A family {P_y}_y∈ of probability measures on is called a regular conditional probability (written rcp for short) of P over Q if (d1) for each E∈ the map y↦ P_y(E) is T-measurable; (d2) ∫ P_y(E) Q(dy)=P(E) for each E∈. If f:→ is an inverse-measure-preserving function (i.e., P(f^-1[B])=Q(B) for each B∈ T), an rcp {P_y}_y∈ of P over Q is called consistent with f if, for each B∈ T, the equality P_y(f^-1[B])=1 holds for Q-almost all (written a.a. for short) y∈ B. The following result is taken from <cit.>, and serves as a basic tool for the proofs of the upcoming results. In order to present it, denote by ∘ the function composition operator. (<cit.>, Lemma 3.5) Let (,T,Q) be a probability space and f:→ be an inverse-measure-preserving function. Put σ(f):={f^-1[B] : B∈ T} and suppose that an rcp {P_y}_y∈ of P over Q consistent with f exists. Then, for each A∈ and B∈ T the following hold true: * _P[g|σ(f)] =_P_∙[g]∘ f Pσ(f)-a.s., where g is a -T-measurable function such that ∫ g dP is defined on ∪{-∞,∞}; * ∫_B P_y(A) Q(dy)=∫_f^-1[B]_P[_A|σ(f)] dP=P(A∩ f^-1[B]). Henceforth, {P_θ}_θ∈ D is an rcp of P over P_ consistent with . For any n∈ R_N and θ∈ D, set p_n(θ):=P_θ(N=n). (a) The use of rcp's in applied probability has been criticized mainly due to the fact that their existence is not always guaranteed (cf., e.g., <cit.>, Subsection 2.4). However, if is countably generated and P is perfect (see <cit.>, 451A(d), for the definition of a perfect measure), then there always exists an rcp {P_y}_y∈ of P over Q consistent with every inverse-measure-preserving function f from into , provided that T is also countably generated (see <cit.>, Theorems 6 and 3). Note that the most important applications in actuarial science are still rooted in the case that is a Polish space (cf., e.g., <cit.>, p. 239, for its definition), where such rcp's always exist. In particular, ^d and ^ are typical examples of Polish spaces appearing in applications (see also <cit.>, Remark 3.2). For an excellent review on rcp's and their applications, we refer the interested reader to <cit.>.(b) Each of the probability measures P_θ of the corresponding rcp can be interpreted as a version of the conditional probability P(∙|=θ), which is well-defined, up to an a.s. equivalence, as a function of θ, while the consistency of {P_θ}_θ∈ D with can be interpreted as the concentrating property P(≠θ|=θ)=0 for any θ∈ D of conditional probabilities.(c) Assume that is a positive random variable and let P_N be a mixed claim number distribution. If the map θ↦_P_θ[N] is increasing (resp. decreasing) for P_-a.a. θ>0, then the random variables N and are P-positively (resp. negatively) correlated.In fact, assume first that the map θ↦_P_θ[N] is increasing for P_-a.a. θ>0. By applying Lemma <ref>, we get that _P[N·]=_P_[θ·_P_θ[N]], implying together with <cit.>, Theorem 2.2, and Lemma <ref>, that _P[N·]≥_P[N]·_P[]; hence N and are positively correlated. The proof for a decreasing map θ↦_P_θ[N] is similar. 
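Part (c) of the remark above is easy to verify numerically; in the simplest increasing case, where E_{P_θ}[N]=θ, one even has Cov_P(N,Θ)=Var_P(Θ), which the following sketch (ours, with an arbitrary gamma mixing distribution) reproduces by simulation.

import numpy as np

rng = np.random.default_rng(1)
theta = rng.gamma(shape=2.0, scale=0.5, size=200_000)   # Theta ~ Ga(2, 2), Var(Theta) = 0.5
N = rng.poisson(theta)                                  # E_{P_theta}[N] = theta (increasing)
print(np.cov(N, theta)[0, 1], theta.var())              # both close to 0.5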
It is natural to ask which are the members of Panjer(a(),b();0), and if they can be characterized in terms of the basic claim number distributions. By applying Lemma <ref>, the next result yields a characterization for the members of the Panjer(a(),b();0) class via rcp's, connecting in this way the defining property of the original Panjer class and of its mixed counterpart. This characterization also serves as a useful tool for proving the results presented throughout the paper. The following statements are equivalent: * P_N∈ Panjer(a(),b();0); * (P_θ)_N∈ Panjer(a(θ),b(θ);0) for P_-a.a. θ∈ D. Fix on arbitrary n∈ R_N∖{0} and F∈(D). Statement (i) is equivalent to _P[_^-1[F]· p_n()] dP=_P[_^-1[F]·(a()+b()/n)· p_n-1()] or, according to Lemma <ref>(i), to _P_[_F· p_n(θ)]=_P_[_F·(a(θ)+b(θ)/n)· p_n-1(θ)]; hence we equivalently get condition p_n(θ)= (a(θ)+b(θ)/n)· p_n-1(θ) for P_-a.a. θ∈ D, which is equivalent to statement (ii). This completes the proof. In other words, Proposition <ref> demonstrates that a claim number distribution P_N is an element of Panjer(a(),b();0) if and only if it is a mixture of a P_θ-basic claim number distribution. The next table summarizes the members of the mixed Panjer family. Recall that the so-called Neyman Type A distribution arises as a mixed Poisson distribution with Poisson distributed structure parameter (cf., e.g., <cit.>, Section 9.6 for more details). The next example presents a recursive formula for such a mixture of distributions (see also <cit.>, condition (4)) by applying the characterization appearing in Proposition <ref>. Let D=_0 and take P_N= MP() with P_= P(a) (a>0). Fix an arbitrary n∈. Applying Proposition <ref>, along with Lemma <ref>, yields p_n=a· e^-1/n·∑_k=0^n-11/k!·(∑_θ=0^∞ e^-θ·θ^n-1-k/(n-1-k)!· e^-a·a^θ/θ!) which implies p_n=a· e^-1/n·∑_k=0^n-11/k!· p_n-1-k. The initial value for the recursion is given by the formula p_0=_P[e^-]=e^-a·(1-e^-1). In the case that N is mixed Poisson distributed with structure distribution U, i.e., p_n=∫_0^∞ e^-θ·θ^n/n! U(dθ) for any n∈_0 (cf., e.g., <cit.>, Definitions 3.1(b)), <cit.> and <cit.> have provided some important results concerning recursive formulas, under the additional assumption that the logarithm of the mixing density can be written as the ratio of two polynomials. However, these results cannot, in general, be transferred to the class of MP(), as it is not always possible given an arbitrary probability space (,,P) and a probability distribution U to construct a random variable on , such that P_=U (see <cit.>, Remark 4.5). Even if one assumes the existence of , it is not, in general, possible to construct an rcp of P over U consistent with (see <cit.>, Examples 4 and 5). Nevertheless, for the most interesting cases appearing in applications, there exist a random variable such that P_=U and an rcp of P over U consistent with (see <cit.>, Theorem 3.1, and <cit.>, Examples 1, 2 and 3). It is evident that if P_N∈ Panjer(a(),b();0), then by applying Proposition <ref> we get p_n=∫_D (a(θ)+b(θ)/n)· p_n-1(θ) P_(dθ) for any n∈ R_N∖{0}. For various choices of the functions a and b (see Table <ref>), we can obtain recursive formulas for the computation of the probabilities p_n, see e.g., <cit.>, <cit.> and <cit.>. However a unified formula for these recursions does not exist in the literature; hence a question that naturally arises is whether we can obtain a recursion of the form p_n=C_n· p_n-1 for any n∈ R_N∖{0}, where C_n∈(0,∞), for the members of Panjer(a(),b();0). 
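For completeness, the Neyman Type A recursion of the example above is readily implemented (a sketch of ours; the truncation point n_max is arbitrary). Note that, unlike the one-term relation asked for in the preceding paragraph, it involves all previously computed probabilities.

import numpy as np
from math import exp, factorial

def neyman_type_a_pmf(a: float, n_max: int) -> np.ndarray:
    """p_n for N ~ MP(Theta) with Theta ~ P(a), via the recursion above."""
    p = np.zeros(n_max + 1)
    p[0] = exp(-a * (1.0 - exp(-1.0)))
    for n in range(1, n_max + 1):
        p[n] = (a * exp(-1.0) / n) * sum(p[n - 1 - k] / factorial(k) for k in range(n))
    return p

p = neyman_type_a_pmf(a=2.0, n_max=60)
print(p.sum(), (np.arange(61) * p).sum())   # ~ 1 and ~ E[N] = E[Theta] = 2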
The next theorem introduces an appropriate rcp of P over P_N consistent with N in order to obtain such a recursive formula. Let P_N∈ Panjer(a(),b();0). There exists a family {μ_n}_n∈R_N of probability measures on defined by means of μ_n(A):=_P[_A·_{N=n}]/_P[_{N=n}] for any A∈ being an rcp of P over P_N consistent with N, such that for any n∈R_N∖{0} and F∈(D), the condition p_n=C_n· p_n-1 holds true, where C_n:=_μ_n-1[a()+b()/n]. Fix on an arbitrary n∈ R_N and define the set-function μ_n:→(0,∞) by μ_n(A):=_P[_A·_{N=n}]/_P[_{N=n}] for any A∈. A standard computation justifies that μ_n is a probability measure on . Let A∈ be fixed, but arbitrary. First note that condition (d1) holds since the map n↦μ_n(A) is clearly 𝒫(R_N)-measurable while condition (d2) follows from P(A)=_P[_P[_A| N]]=∫_P[_A·_{N=n}]/_P[_{N=n}] dP_N=∫μ_n(A) dP_N; hence the family {μ_n}_n∈R_N of probability measures is an rcp of P over P_N. The consistency of {μ_n}_n∈ R_N follows immediately by the definition of μ_n. As P_N∈ Panjer(a(),b();0) and {P_θ}_θ∈ D is an rcp of P over P_ consistent with , we can apply Proposition <ref>, together with Lemma <ref>, to get that p_n+1=_P_[(a(θ)+b(θ)/n+1)· p_n(θ)]. But since _μ_n[_^-1[F]]=_P[_^-1[F]·_{N=n}]/_P[_{N=n}]=_P_[_F· p_n(θ)]/p_n for every F∈(D), where the second equality follows by Lemma <ref>, we obtain _P_[(a(θ)+b(θ)/n+1)· p_n(θ)]=_μ_n[(a()+b()/n+1)]· p_n. The latter together with condition (<ref>) completes the proof. In the next set of examples, we apply Theorem <ref> in order to obtain recursive formulas for the pmf of a mixed counting distribution that belongs to Panjer(a(),b();0). In our first example, we revisit the case of the mixed Poisson distribution. Fix an arbitrary n∈ and let P_N= MP(ξ()). Since P_N is an element of Panjer(a(),b();0) with a()=0 and b()=ξ() (see Table <ref>), we apply Theorem <ref> to get p_n=_μ_n-1[ξ()/n]· p_n-1=_P[(ξ())^n· e^-ξ()]/n·_P[(ξ())^n-1· e^-ξ()]· p_n-1. Setting u_ξ(t):=_P[e^-t·ξ()] (t≥ 0), the latter equality becomes p_n = - 1/n·u_ξ^(n)(1)/u_ξ^(n-1)(1)· p_n-1, where notation f^(n) stands for the n-th order derivative of a real-valued function f. Note that the initial value for the recursion is p_0=u_ξ(1). In particular, let D=(0,∞), ξ=id_D, where by id_D is denoted the identity map on D, and consider the gamma distribution with parameters ,>0 (written Ga(,) for short) defined as Ga(,)(B)=∫_B ^/Γ()· x^-1· e^-· x λ(dx) for any B∈(0,∞), where λ denotes the restriction to (0,∞) of the Lebesgue measure on . Take P_=w_1· Exp()+w_2· Ga(,) (,>0), where w_1,w_2∈ [0,1] with w_1+w_2=1. Since u(t):=u_id_D(t)=w_1·/+t+w_2·(/+t)^ for any t≥ 0 it follows easily that u^(n)(t)=(-1)^n/(+t)^+n·(w_1· n!··(+t)^-1+w_2·()_n·^) for any t≥ 0, where (x)_k, x∈ and k∈ denotes the ascending factorial (cf., e.g., <cit.>, p. 2); hence condition (<ref>) yields the recursive formula p_n=1/n·(+1)·w_1· n!·(+1)^-1+w_2·()_n·^-1/w_1· (n-1)!·(+1)^-1+w_2·()_n-1·^-1· p_n-1 with initial value p_0=w_1·/+1+w_2·(/+1)^. In the special case w_1=0 and w_2=1, conditions (<ref>) and (<ref>) yield p_n=+n-1/n·(+1)· p_n-1 with p_0=(/+1)^, which is the usual negative binomial recursion (see <cit.>, p. 23).Now take w_1=/+1, w_2=1/+1 and =2. For this particular case we get P_(B)=∫_B ^2/+1·(θ+1)· e^-·θ λ(dθ) for any B∈(0,∞) implying that the structural parameter is distributed according to the Lindley distribution (cf., e.g., <cit.> for the definition and its basic properties). Conditions (<ref>) and (<ref>) give p_n=+n+2/(+1)· (+1+n)· p_n-1 with p_0=^2·(+2)/(+1)^3. 
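As a quick numerical check of the special case w_1=0, w_2=1 above (a sketch of ours using scipy), the one-term recursion should reproduce the classical gamma–Poisson, i.e. negative binomial, probabilities exactly.

import numpy as np
from scipy.stats import nbinom

def gamma_mixed_poisson_pmf(alpha: float, beta: float, n_max: int) -> np.ndarray:
    """p_0 = (beta/(beta+1))**alpha and p_n = (alpha+n-1)/(n*(beta+1)) * p_{n-1}."""
    p = np.zeros(n_max + 1)
    p[0] = (beta / (beta + 1.0)) ** alpha
    for n in range(1, n_max + 1):
        p[n] = (alpha + n - 1.0) / (n * (beta + 1.0)) * p[n - 1]
    return p

alpha, beta = 2.5, 1.5
p = gamma_mixed_poisson_pmf(alpha, beta, n_max=30)
q = nbinom.pmf(np.arange(31), alpha, beta / (beta + 1.0))
print(np.abs(p - q).max())   # agreement up to floating-point error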
The next example presents a general recursive formula for the MB(m,z_2()) (m∈) distribution. As a special case, we obtain a well-known recursive formula for the negative hypergeometric distribution (cf., e.g., <cit.>, Section 6.2.2, for the definition and its properties) appearing in <cit.>, Example 3. Letting P_N= MB(m,z_2()) (m∈), we fix on an arbitrary n∈{1,…, m}. Since P_N∈ Panjer(a(),b();0) with a()=-z_2()/1-z_2() and b()=(m+1)·z_2()/1-z_2() (see Table <ref>), we can apply Theorem <ref> to get p_n =m-n+1/n·_μ_n-1[z_2()/1-z_2()]· p_n-1 =m-n+1/n·_P[(z_2())^n·(1-z_2())^m-n]/_P[(z_2())^n-1·(1-z_2())^m-n+1]· p_n-1. In this case, the initial value is given by p_0=_P[(1-z_2())^m]. In particular, let D=(0,1), let z=id_D and take P_= Beta(α,β) for α,β>0 (cf., e.g., <cit.>, p. 179, for the definition of the beta distribution). An easy computation yields _P[^n·(1-)^m-n]/_P[^n-1·(1-)^m-n+1]=+n-1/+m-n, implying, together with condition (<ref>), that p_n=(m-n+1)·(+n-1)/n· (+m-n)· p_n-1 with initial value p_0=∏_k=0^m-1+k/++k. Similarly to the previous example, in the next one we treat the case P_N= MNB(r,ρ_2()) (r>0). As a special instance, a well-known recursion for the generalized Waring distribution (cf., e.g., <cit.>, Section 6.2.3, for the definition and its properties) is rediscovered (see <cit.>, Example 4). Let P_N= MNB(r,ρ_2()), where r>0, and fix on arbitrary n∈. As P_N∈ Panjer(a(),b();0) with a()=1-ρ_2() and b()=(r-1)·(1-ρ_2()) (see Table <ref>), apply Theorem <ref> to get p_n =r+n-1/n·_μ_n-1[(1-ρ_2())]· p_n-1 =r+n-1/n·_P[(ρ_2())^r·(1-ρ_2())^n]/_P[(ρ_2())^r·(1-ρ_2())^n-1]· p_n-1, while the initial value is given by condition p_0=_P[(ρ_2())^r].In particular, if D=(0,1), ρ_2=id_D and P_= Beta(α,β) for α,β>0, then _P[^r·(1-)^n]=B(α+r,β+n)/B(α,β) implying, together with condition (<ref>), that p_n=(r+n-1)·(+n-1) /n·(++r+n-1)· p_n-1 with initial value p_0=B(α+r,β)/B(α,β). § MIXED COMPOUND DISTRIBUTIONS Let X:={X_n}_n∈ be a sequence of random variables on with values in R_X⊆_+, and consider the random variable S on taking values in R_S⊆_+, defined by S:=∑_j=1^N X_j , if N≥ 1 0 , if N=0 . The sequence X and the random variable S are known as the claim size process and the aggregate claims size, respectively. Recall that a sequence {Z_n}_n∈ of random variables on is: * P-conditionally (stochastically) independent given , if for each n∈ with n≥ 2 P(Z_i_1≤ z_i_1, Z_i_2≤ z_i_2,…, Z_i_n≤ z_i_n|)=∏^n_k=1 P(Z_i_k≤ z_i_k|) Pσ() whenever i_1,…, i_n are distinct members of and (z_i_1,…,z_i_n)∈^n; * P-conditionally identically distributed given , if P(F∩ Z_k^-1[B])=P(F∩ Z_m^-1[B]) whenever k,m∈, F∈σ() and B∈. For the rest of the paper, we simply write “conditionally" in the place of “conditionally given ” whenever conditioning refers to , and, unless otherwise stated, X is a sequence of P-conditionally i.i.d random variables, and P-conditionally mutually independent of N. Denote by μ either the Lebesgue measure λ on (0,∞) or the counting measure on _0, and for any θ∈ D let (P_θ)_X_1(B)=∫_B f_θ(x) μ(dx) for any B∈(R_X), where f_θ:=(f_θ)_X_1 is the probability (density) function of X_1 with respect to P_θ. Since {P_θ}_θ∈ D is an rcp of P over P_ consistent with , it follows by <cit.>, Lemma 3.2, that the latter is equivalent to P_X_1|(B)=∫_B f_X_1|(x) μ(dx) Pσ()-a.s.. 
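Both of the recursions above are equally easy to code. As an illustration (our own sketch), the beta-mixed negative binomial recursion can be checked against a direct two-stage simulation of the mixture; numpy's negative_binomial generator uses the same number-of-failures parametrization as the definition of the MNB distribution above.

import numpy as np
from scipy.special import betaln

rng = np.random.default_rng(2)

def generalized_waring_pmf(r, alpha, beta, n_max):
    """Recursion for MNB(r, Theta) with Theta ~ Beta(alpha, beta)."""
    p = np.zeros(n_max + 1)
    p[0] = np.exp(betaln(alpha + r, beta) - betaln(alpha, beta))  # B(alpha+r,beta)/B(alpha,beta)
    for n in range(1, n_max + 1):
        p[n] = (r + n - 1) * (beta + n - 1) / (n * (alpha + beta + r + n - 1)) * p[n - 1]
    return p

r, alpha, beta = 3.0, 6.0, 2.0
p = generalized_waring_pmf(r, alpha, beta, n_max=50)
theta = rng.beta(alpha, beta, size=300_000)
N = rng.negative_binomial(r, theta)                      # N | Theta ~ NB(r, Theta)
emp = np.bincount(N, minlength=51)[:51] / N.size
print(np.abs(p - emp).max())                             # small Monte Carlo error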
If is non-degenerate and if ∫^2_P[X_1|] dP<∞, then the random variables X_n, X_m with n≠ m are P-positively correlated.In fact, for any n,m∈ such that n≠ m, we get Cov_P(X_n,X_m)=_P[_P[X_n· X_m|]] -^2_P[X_1]=_P[^2_P[X_1|] ]-^2_P[_P[X_1|] ]>0, where the first equality follows by <cit.>, Remark 2.1, while the second one follows since X is P-conditionally i.i.d.; hence Cov_P(X_n,X_m)=Var_P(_P[X_1|])> 0. The next proposition provides an expression for the probability distribution of a sum of P-conditionally i.i.d. random variables. If S_n:=∑_j=1^nX_j for any n∈, then P_S_n(B)=∫_B(∫_D f^∗ n_θ(x) P_(dθ)) μ(dx) for all B∈(R_S), where f_θ^∗ n denotes the n-th convolution of f_θ with itself. Fix on arbitrary n∈. For any B∈(R_S) and F∈(D), the consistency of {P_θ}_θ∈ D with , along with Lemma <ref>, yields P(S^-1_n[B]∩^-1[F])=∫_F (P_θ)_S_n (B) P_(dθ). But since X is P-conditionally i.i.d., it follows by <cit.>, Lemma 3.4(ii), that X is P_θ-i.i.d. for P_-a.a. θ∈ D, implying that (P_θ)_S_n (B)=∫_B f_θ^∗ n(x) μ(dx) ; hence for F=D, we get P_S_n(B)=∫_B(∫_D f^∗ n_θ(x) P_(dθ)) μ(dx) which completes the proof. The next example shows how Proposition <ref> can be utilized in order to compute convolutions in the case of P-conditionally i.i.d. random variables. Let D:=(0,∞) and fix on arbitrary n∈. Assume that P_X_1|= Exp() Pσ()-a.s. and P_= Ga(,) with α,β∈ (0,∞). Since by <cit.>, Lemma 3.4(ii), the claim size process X is P_θ-independent and (P_θ)_X_1= Exp(θ) for P_-a.a. θ∈ D, we deduce that (P_θ)_S_n= Ga(n, θ) for P_-a.a. θ>0. Now, by applying Proposition <ref>, we get P_S_n(B)=∫_Bβ^α· x^n-1/Γ(n) ·Γ(α)·(∫_Dθ^α+n-1· e^-(β+x)·θ λ(dθ)) λ(dx) for any B∈(0,∞), implying P_ S_n (B)= ∫_Bβ^α/B(α,n)·x^n-1/(β+x)^α+n λ(dx), i.e., S_n is distributed according to the generalized Pareto distribution (cf., e.g., <cit.>, p. 118). Note that since each X_n is P-conditionally exponentially distributed and is gamma distributed, it follows easily that each X_n is Pareto distributed with parameters α and β (cf., e.g., <cit.>, p. 180 for the definition of the Pareto distribution). The following definition is in accordance with the definition of mixed compound Poisson processes in <cit.>, p. 780 and Lemma 3.3. The aggregate claims distribution P_S is a mixed compound distribution, written MC(P_N,P_X_1) for brevity, if P_N is a mixed claim number distribution and the sequence X is P-conditionally i.i.d. and P-conditionally mutually independent of N. In particular, if P_=δ_θ_0 for some θ_0∈ D, then P_S reduces to a compound distribution, written C((P_θ_0)_N,(P_θ_0)_X_1) for short (cf., e.g., <cit.>, p. 109). Assume that is a positive random variable and let P_S= MC(P_N,P_X_1). If the maps θ↦_P_θ[N] and θ↦_P_θ[X_1] have the same (resp. different) monotonicity for P_-a.a. θ>0, then the random variables N and X_1 are P-positively (resp. negatively) correlated. In fact, assume first that the maps θ↦_P_θ[N] and θ↦_P_θ[X_1] have the same monotonicity for P_-a.a. θ>0. By applying Lemma <ref> we get that _P[N· X_1]=_P_[_P_θ[N]·_P_θ[X_1]], implying together with <cit.>, Theorem 2.2, and Lemma <ref>, that _P[N· X_1]≥_P[N]·_P[X_1]; hence N and X_1 are positively correlated. The proof when the maps θ↦_P_θ[N] and θ↦_P_θ[X_1] have different monotonicity, for P_-a.a. θ>0, is similar. 
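The example above lends itself to a simple simulation check (our own sketch): the claims are conditionally i.i.d. yet positively correlated, in line with the first remark of this section, and after rescaling by β the stated density of S_n coincides with a beta-prime law, an identification we make only for the purpose of the test below.

import numpy as np
from scipy.stats import betaprime, kstest

rng = np.random.default_rng(3)
alpha, beta, n, m = 4.0, 2.0, 5, 200_000

theta = rng.gamma(shape=alpha, scale=1.0 / beta, size=m)        # Theta ~ Ga(alpha, beta)
X = rng.exponential(scale=1.0 / theta[:, None], size=(m, n))    # X_i | Theta i.i.d. Exp(Theta)

print(np.corrcoef(X[:, 0], X[:, 1])[0, 1])   # positive (about 0.25 for these parameters)
S_n = X.sum(axis=1)
print(kstest(S_n / beta, betaprime(n, alpha).cdf))   # consistent with the density above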
The following characterization of mixed compound distributions in terms of consistent rcp's allows us to eliminate conditioning via a suitable change of measures, and to convert a mixed compound distribution into a compound one with respect to the probability measures of the corresponding rcp. The following statements are equivalent: * P_S= MC(P_N,P_X_1); * (P_θ)_S= C((P_θ)_N, (P_θ)_X_1) for P_-a.a. θ∈ D. Statement (i) is equivalent to the facts P_N|= K() Pσ()-a.s., X is P-conditionally i.i.d. and P-conditionally mutually independent of N. Due to <cit.>, Lemmas 3.2, 3.3 and 3.4(ii), we equivalently get that (P_θ)_N= K(θ), that N and X are P_θ-mutually independent and that X is P_θ-i.i.d., respectively, for P_-a.a. θ∈ D, which is equivalent to (ii). The characterization appearing in Proposition <ref> allows the extension of well-known results for compound distributions to their mixed counterpart. In the next result, we present a formula for the probability distribution of S in the case P_S= MC(P_N,P_X_1). If P_S= MC(P_N,P_X_1), then P_S(B)=∑_n=0^∞(∫_Dp_n(θ)·∫_Bf_θ^∗ n(x) μ(dx) P_(dθ)) for any B∈(R_S) Since P_S= MC(P_N,P_X_1), apply Proposition <ref> to get (P_θ)_S= C((P_θ)_N, (P_θ)_X_1) for P_-a.a. θ∈ D; hence by <cit.>, Lemma 5.1.1, we obtain (P_θ)_S(B)=∑_n=0^∞ p_n(θ)·∫_Bf_θ^∗ n(x) μ(dx) for any B∈(R_S). The latter, together with Lemma <ref>, completes the proof. Denote by g the probability (density) function of the random variable S with respect to P. The next example presents an explicit formula for the tail probability P(S>u) in the case of a mixed compound geometric distribution with conditionally exponentially distributed claim sizes. Let P_S= MC(P_N,P_X_1) with P_N=MNB(1,ρ_2()) and P_X_1|=Exp(v()) Pσ()-a.s., where v is a (D)-(0,∞)-measurable function. By Proposition <ref>, we get that (P_θ)_S= C((P_θ)_N, (P_θ)_X_1) with (P_θ)_N=NB(1, ρ_2(θ)) and (P_θ)_X_1=Exp(v(θ)) for P_-a.a. θ∈ D. An easy computation yields g(x)=_P[ρ_2()] , if x=0 _P[(1-ρ_2())·ρ_2()· v()· e^-ρ_2()· v()· x] , if x>0 ; hence P(S>u)=_P[(1-ρ_2())· e^-ρ_2()· v()· u] for any u>0. In particular,(a) if D=(0,∞), ρ_2() = 1/1+ and v()=^2, then condition (<ref>) can be rewritten as P(S>u)=_P[ /1+· e^-^2/1+· u] for any u > 0. Figure <ref> displays the tail probability P(S>u) in the case that the random variable is distributed according to the Inverse Gaussian distribution with parameters μ, φ>0 (written IG(μ,φ) for short), i.e., P_(B)=∫_B √(%s/%s)φ2 π x^3· e^-φ (x-μ)^2/2 μ^2 x λ(dx) for any B∈(0,∞) Note that this particular selection for ρ_2() and P_ has also been adopted in a regression context by <cit.>; (b) Analogously, if ρ_2()=e^-, v()=^3, then condition (<ref>) becomes P(S>u)=_P[(1-e^-)· e^-e^-·^3 · u] for any u > 0. Figure <ref> depicts the tail probability P(S>u) in the case P_=Ga(,), where ,>0. Note that a similar mixture for N has been studied by <cit.>; (c) Now, let D=(0,1), ρ_2()= and v()=-ln^2. In this case, condition (<ref>) can be rewritten as P(S>u)=_P[(1-)·^2· u·] for any u > 0. Figure <ref> illustrates the tail probability P(S>u) in the case P_=Be(,), where ,>0. Note that such a mixture for N has been considered by <cit.>. § RECURSIVE FORMULAS FOR MIXED COMPOUND DISTRIBUTIONS Throughout what follows in this section, unless stated otherwise, P_S= MC(P_N,P_X_1) with P_N∈ Panjer(a(),b();0) and P_X_1(_0)=1. Since in the collective risk model the random variable S represents the total claim amount paid out from an insurance company over a fixed period of time, the determination of its pmf is of great importance. 
However, even in the classical case where X is P-i.i.d. and N, X are P-mutually independent, it is difficult to obtain a closed form for the pmf of S, which usually involves the computation of higher-order convolutions, a task that may be proven difficult, or at least time consuming. As a consequence, actuaries usually resort to recursive expressions, which seem to be more convenient and computationally efficient. In the next result, we obtain a recursive formula for the pmf of S by extending the famous Panjer recursion (cf., e.g., <cit.>, Theorem 5.4.2) to the class Panjer(a(),b();0). In order to present it, we denote by m_N| the conditional probability generating functions of N, i.e., m_N|(t)=_P[t^N|] for any t with |t|≤1. There exists a family {ρ_x}_x∈R_S of probability measures on defined by ρ_x(A):=_P[_A·_{S=x}]/_P[_{S=x}] for any A∈ being an rcp of P over P_S consistent with S, such that g(x)=_P[m_N|(f_X_1|(0))] , if x=0 ∑_y=1^xD_x,y· g(x-y) , if x∈ R_S∖{0}, where D_x,y:=_ρ_x-y[( a()+b()· y/x)·f_X_1|(y)/1-a()· f_X_1|(0)] for any x∈ R_S∖{0} and y∈{1,…,x}.In particular, if f_X_1|(0)=0 Pσ()-a.s. then g(x)= p_0 , if x=0 ∑_y=1^xD_x,y· g(x-y) , if x∈ R_S∖{0}, where D_x,y=_ρ_x-y[( a()+b()· y/x)· f_X_1|(y)] for any x∈ R_S∖{0} and y∈{1,…,x}. Fix on arbitrary x∈ R_S and define the set-function ρ_x:→(0,∞) by ρ_x(A):=_P[_A·_{S=x}]/_P[_{S=x}] for any A∈. By applying similar arguments to those appearing in the proof of Theorem <ref>, it follows easily that the family {ρ_y}_y∈R_S is an rcp of P over P_S consistent with S. Furthermore, first note that condition g(0)=_P[m_N|(f_X_1|(0))] is an immediate consequence of Corollary <ref> for B={0}. By <cit.>, Lemmas 3.4(ii) and 3.3, the sequence X is P_θ-i.i.d. and (N,X) are P_θ-mutually independent for P_-a.a. θ∈ D, respectively. Since P_N∈ Panjer(a(),b();0), Proposition <ref> implies that (P_θ)_N∈ Panjer(a(θ),b(θ);0) for P_-a.a. θ∈ D; hence we may apply <cit.>, Theorem 5.4.2, to get g_θ(x)= 1/1-a(θ)· f_θ(0)·∑_y=1^x( a(θ)+b(θ)· y/x)· f_θ(y)· g_θ(x-y), where g_θ denotes the pmf of S with respect to P_θ. The latter, together with Corollary <ref> and Lemma <ref>, yields g(x)=∑_y=1^x_P[( a()+b()· y/x)·f_X_1|(y)/1-a()· f_X_1|(0)· g_(x-y)]. But since _ρ_x[_^-1[F]]=_P[_^-1[F]·_{S=x}]/_P[_{S=x}]=_P_[_F· g_θ(x)]/g(x) for each F∈(D), where the second equality follows by Lemma <ref>, we easily get that _P[( a()+b()· y/x)·f_X_1|(y)/1-a()· f_X_1|(0)· g_(x-y)] =_ρ_x-y[( a()+b()· y/x)·f_X_1|(y)/1-a()· f_X_1|(0)]· g(x-y) for any y∈{1,…,x}. Conditions (<ref>) and (<ref>) complete the first part of the proof. In particular, first note that if f_X_1|(0)=0 Pσ()-a.s., then m_N|(f_X_1|(0))=p_0() Pσ()-a.s.. The latter, along with conditions (<ref>) and (<ref>), completes the proof. Assume that P_X_1 is absolutely continuous with respect to the Lebesgue measure λ on (0,∞). By applying the characterization appearing on Proposition <ref> in conjunction with <cit.>, Theorem on p. 24, we can easily prove the equality g(x) =_P[p_1()· f_X_1|(x)] + ∫_0^x_P[(α()+b()· y/x)· f_X_1|(y)· g_(x-y)] λ(dy) for any x>0. The integral equation (<ref>) has to be solved numerically, however its implementation presents some serious difficulties (see <cit.>, p. 42). The proof of a recursion for g in the spirit of Theorem <ref> and its implementation is the subject of a forthcoming paper. If X and are P-independent, it follows by Lemma <ref> that P_X_1|=P_X_1 Pσ()-a.s.. 
Hence Theorem <ref> gives g(x)=_P[m_N|(f_X_1(0))] , if x=0 ∑_y=1^xD_x,y· g(x-y) , if x∈ R_S∖{0}, where D_x,y=_ρ_x-y[x· a()+y· b()/x·(1-a()· f_X_1(0))]· f_X_1(y) for any x∈ R_S∖{0} and y∈{1,…,x}. Recall that a sequence Z:={Z_n}_n∈ of random variables on is called exchangeable, if the joint distribution of the sequence is invariant under the permutation of the indices (cf., e.g., <cit.>, 459C). In non-classical Risk Theory, the concept of exchangeability seems to be an appealing way to introduce a dependence structure between the claim sizes. Actually, exchangeability can be a natural assumption in the risk model context (see <cit.>, Remark 2.7). Note that by <cit.>, Proposition 4.1, we get that a sequence Z is exchangeable under P if and only if there exists a random vector such that Z is P-conditionally i.i.d. over σ(). Let {ρ_x}_x∈R_S be the family of probability measures constructed in Theorem <ref>. If P_N∈ Panjer(a,b;0) and N, are P-independent then g(x)=_P[m_N (f_X_1|(0))] , if x=0 ∑_y=1^x D_x,y· g(x-y) , if x∈ R_S∖{0}, where D_x,y=( a +b ·y/x)·_ρ_x-y[f_X_1|(y)/1-a · f_X_1|(0)] for any x∈ R_S∖{0} and y∈{1,…,x}. The proof follows immediately by Theorem <ref> and therefore it is omitted. The following example shows that if P_N∈ Panjer(a(),b();0), then the claim number distribution, for example in the case of an Excess of Loss reinsurance, remains in the same class, but with different parameters (cf., e.g., <cit.>, p. 108). Let h be a (D)-(0,∞)-measurable function and set v(θ):=P_θ(X_1>h(θ))∈(0,1) for any θ∈ D. Fix an arbitrary n∈ and define the function z_n:× D→{0,1} by z_n(ω,θ):=_{X_n>h(θ)}(ω) for any (ω,θ)∈× D. For an arbitrary, but fixed θ∈ D, set z(θ):={z_k(θ)}_k∈. Define the real-valued function Z_n() on by Z_n():=z_n∘ (id_×) and the sequence Z():={Z_k()}_k∈. Clearly Z() is P-conditionally i.i.d. and P-conditionally independent of N with P_Z_n()= MB(1,v()). Consider the random sum M:=∑_k=1^N Z_k(). It then follows by Theorem <ref> that g(x)=_P[m_N|(1-v())] , if x=0 D_x,1· g(x-1) , if x∈ R_M∖{0}, where D_x,1=_ρ_x-1[(a()· v(θ)/1-a()·(1-v()) +1/x·b()· v()/1-a()·(1-v()))]. The latter, along with the fact that P_M is a claim number distribution (cf., e.g., <cit.>, Theorem 5.1.4), implies that P_M∈ Panjer(a(),b();0), where a():=a()· v()/1-a()·(1-v()) and b():= b()· v()/1-a()·(1-v()). The following numerical examples demonstrate the behaviour of the pmf of the aggregate claims S. For illustrative purposes, we focus on three widely used mixing distributions in actuarial literature, namely the Inverse Gaussian (IG), the the Gamma (Ga) and the Beta (Be) distribution. Let D⊆(0,∞) and P_N=MP(ξ()), where ξ is a (D)-(0,∞)-measurable function. Since P_N∈ Panjer(a(),b();0), we get ()=0 and b()=ξ() (see Table <ref>). Let v be a (D)-(0,1)-measurable function. (a) Take P_X_1=MNB(1,v()). Since P_X_1(_0)=1, we may apply Theorem <ref> to get g(x)=_P[e^ξ()· (v()-1)] , if x=0 ∑_y=1^xD_x,y· g(x-y) , if x∈, where D_x,y=y/x·_ρ_x-y[ξ()· v()·(1-v())^y] for any x∈ and y∈{1,…,x}. Figure <ref> depicts the behaviour of g for ξ()= and v()=/1+. (b) Assume now that the claim sizes are distributed according to a mixed zero truncated geometric distribution with parameter v() (written MZTG(v()) for short), i.e., f_X_1|(x) = v()·(1-v())^x-1 Pσ()-a.s. for any x∈. Since P_X_1(_0)=1 with f_X_1|(0)=0 Pσ()-a.s., we may apply Theorem <ref> to get g(x)=_P[e^-ξ()] , if x=0 ∑_y=1^xD_x,y· g(x-y) , if x∈, where D_x,y=y/x·_ρ_x-y[ξ()· v()·(1-v())^y-1] for any x∈ and y∈{1,…,x}. 
Figure <ref> illustrates the behaviour of g for the same choices of ξ() and v() as in (a). (c) Assume that X and are P-independent and that v() is degenerate at some point p∈(0,1). If P_X_1=ZTG(p), then we can apply Theorem <ref>, along with Remark <ref>(a), to get g(x)=_P[e^-ξ()] , if x=0 ∑_y=1^xD_x,y· g(x-y) , if x∈, where D_x,y=y/x· p· (1-p)^y-1·_ρ_x-y[ξ()] for any x∈ and y∈{1,…,x}. Figure <ref> depicts g for p=3/5 and ξ() as in (a). (d) Assume that N and are P-independent and that ξ() is degenerate at some point θ_0∈ D. If P_X_1=MZTG(v()), we may apply Corollary <ref> to get g(x)= e^-θ_0 , if x=0 ∑_y=1^x D_x,y· g(x-y) , if x∈, where D_x,y=y·θ_0/x·_ρ_x-y[v()·(1-v())^y-1] for any x∈ and y∈{1,… x}. Figure <ref> exhibits g for θ_0=3 and v() as in (a). Let D⊆ (0,∞), P_N=MNB(ρ_1(),ρ_2()), where ρ_1, ρ_2 are (D)-(0,∞)- and (D)-(0,1)-measurable functions, respectively. Since P_N∈ Panjer(a(),b();0), we get ()=1-ρ_2() and b()=(ρ_1()-1)·(1-ρ_2()) (see Table <ref>). Take P_X_1=MZTG(/1+). (a) Let (ρ_1(θ),ρ_2(θ))=(θ,e^-θ) for any θ∈ D. Since ()=1-e^- and b()=(-1)·(1-e^-), we may apply Theorem <ref> to get g(x)=_P[e^-^2] , if x=0 ∑_y=1^xD_x,y· g(x-y) , if x∈, where D_x,y= _ρ_x-y[(1+(-1)·y/x)·(1-e^-)·/(1+)^y] for any x∈ and y∈{1,…, x} (see Figure <ref>). (b) Now take (ρ_1(θ),ρ_2(θ))=(2 θ,1/1+θ) for any θ∈ D. As ()=/1+ and b()=(2-1)·/1+, we can apply Theorem <ref> to obtain g(x)=_P[(1/1+)^2] , if x=0 ∑_y=1^xD_x,y· g(x-y) , if x∈, where D_x,y= _ρ_x-y[(1+(2-1)·y/x)·^2/(1+)^y+1] for any x∈ and y∈{1,…, x} (see Figure <ref>). (c) Let D=(0,1) and take (ρ_1(θ),ρ_2(θ))=(r,θ) for any θ∈ D, where r>0. Since ()=1- and b()=(r-1)·(1-), we can apply Theorem <ref> to obtain g(x)=_P[^r] , if x=0 ∑_y=1^xD_x,y· g(x-y) , if x∈, where D_x,y=(1+(r-1)·y/x)·_ρ_x-y[(1-)·/(1+)^y] for any x∈ and y∈{1,…, x}. Figure <ref> illustrates g for P_=Be(,), where ,>0, and different values of r. (d) Let D=(0,∞) and take (ρ_1(θ),ρ_2(θ))=(θ,p) for any θ∈ D, where p∈(0,1). As ()=1-p and b()=(-1)·(1-p), we can apply Theorem <ref> to obtain g(x)=_P[p^] , if x=0 ∑_y=1^xD_x,y· g(x-y) , if x∈, where D_x,y= (1-p)·_ρ_x-y[(1+(-1)·y/x)·/(1+)^y] for any x∈ and y∈{1,…, x}. Figure <ref> depicts g for P_=Ga(,), where ,>0, and various values of p. (e) Assume that N and are P-independent and let (ρ_1(θ),ρ_2(θ))=(r,p) for any θ∈ D, where r>0 and p∈(0,1). Since =1-p and b=(r-1)·(1-p), we can apply Corollary <ref> to obtain g(x)= p^r , if x=0 ∑_y=1^xD_x,y(F)· g(x-y) , if x∈, where D_x,y=(1-p)·(1+(r-1)·y/x)·_ρ_x-y[/(1+)^y] for any x∈ and y∈{1,…, x} (see Figure <ref>). Let D⊆ (0,∞), P_N=MB(n,z_2()), where n∈ and z_2 is a (D)-(0,1)-measurable function. As P_N∈ Panjer(a(),b();0), we get ()=-z_2()/1-z_2() and b()=(n+1)·z_2()/1-z_2() (see Table <ref>). Take P_X_1=MZTG(/1+).(a) Let D=(0,1) and take z_2(θ)=1-θ for any θ∈ D. Since ()=-1-/ and b()=(n+1)·1-/, we may apply Theorem <ref> to get g(x)=_P[^n] , if x=0 ∑_y=1^xD_x,y· g(x-y) , if x∈, where D_x,y=(-1+(n+1)·y/x)·_ρ_x-y[1-/(1+)^y] for any x∈ and y∈{1,…, x}. Figure <ref> depicts g for P_=Be(,), where ,>0, and different values of n. (b) Let D=(0,∞) and take z_2(θ)=e^-θ for any θ∈ D. As ()=-e^-/1-e^- and b()=(n+1)·e^-/1-e^-, we may apply Theorem <ref>, together with Remark <ref>(a), to get g(x)=_P[(1-e^-)^n] , if x=0 ∑_y=1^xD_x,y· g(x-y) , if x∈, where D_x,y=(-1+(n+1)·y/x)·_ρ_x-y[· e^-/(1-e^-)·(1+)^y] for any x∈ and y∈{1,…, x}. Figure <ref> illustrates g for P_=Ga(,), where ,>0, and various values of n. (c) Let D=(0,∞) and take z_2(θ)=1/1+θ for any θ∈ D. 
Since ()=-1/ and b()=(n+1)/, we may apply Theorem <ref> to get g(x)=_P[(/1+)^n] , if x=0 ∑_y=1^xD_x,y· g(x-y) , if x∈, where D_x,y=(-1+(n+1)·y/x)·_ρ_x-y[1/(1+)^y] for any x∈ and y∈{1,…, x}. Figure <ref> depicts g for P_=IG(μ,φ), where μ,φ>0, and different values of n. § CONCLUDING REMARKS When dealing with inhomogeneous insurance portfolios, mixtures of claim number distributions are frequently used to model the claim counts. In actuarial practice, the usual independence assumptions seem to be unrealistic. In particular, the mutual independence between claims sizes and counts appears to be implausible, especially when considering inhomogeneous portfolios. The above led us to the development of this work. This paper aims to investigate the mixed counterpart of the original Panjer(a,b;0) class and the corresponding compound distributions by relaxing the usual independence assumptions in two ways. More specifically, we allow the parameters of the claim number and the claim size distributions to be random, and we assume that the claim size process is conditionally i.i.d. and conditionally mutually independent of the claim counts. Based on the above assumptions and the fundamental concept of regular conditional probabilities, this work provides a characterization for the members of Panjer(a(),b();0) class, as well as two recursive algorithms for P_N and P_S, respectively, where the recursion for P_S also covers the case of a compound Panjer distribution with exchangeable claims. Our results are accompanied by various numerical examples to highlight the practical relevance of this work. It is worth mentioning that the members of the Panjer(a(),b();0) class may be of particular interest in non-life insurance problems, for example, when insurance data exhibit overdispersion. Finally, this research allows us to consider some possible extensions of actuarial interest beyond the scopes of this paper, such as the study of the mixed counterpart of Panjer(a,b;k) for k≥ 1 and the derivation of De Pril's recursion for the moments of S (see <cit.>, Theorem, p. 118). 99 [Albrecher & Boxma2004]albo Albrecher, H. & Boxma, O. J. (2004). A ruin model with dependence between claim sizes and claim intervals. Insurance: Mathematics and Economics, 35(2), 245–254. [Albrecher & Teugels2006]alte Albrecher, H. & Teugels, J. L. (2006). Exponential behavior in the presence of dependence in risk theory. Journal of Applied Probability, 43(1), 257–273. [Albrecher et al.2011]acl Albrecher, H., Constantinescu, C. & Loisel, S. (2011). Explicit ruin formulas for models with dependence among risks. Insurance: Mathematics and Economics, 48(2), 265–270. [Antzoulakos & Chadjiconstantinidis2004]anch Antzoulakos, D. L. & Chadjiconstantinidis, S. (2004). On mixed and compound mixed Poisson distributions. Scandinavian Actuarial Journal, 2004(3), 161–188. [Beal1940]beal Beal, G. (1940). The fit and significance of contagious distributions when applied to observations on larval insects. Ecology, 21(4), 460–474. [Chang & Pollard1997]chpo Chang, J. T. & Pollard, D. (1997). Conditioning as disintegration. Statistica Neerlandica, 51(3), 287–317. [Cohn2013]Co Cohn, D. L. (2013). Measure Theory, 2nd ed. Birkhäuser Advanced Texts. [De Pril1986]dp De Pril, N. (1986). Moments of a class of compound distributions . Scandinavian Actuarial Journal, 1986(2), 117–120. [Fackler2023]fac Fackler M. (2023). Panjer class revisited: one formula for the distributions of the Panjer (a,b,n) class. Annals of Actuarial Science, 17(1), 145–169. 
[Faden1985]faden Faden, A. M. (1985). The existence of regular conditional probabilities: Necessary and sufficient conditions. Annals of Probability, 13(1), 288–298. [Fremlin2013]fr4 Fremlin, D. H. (2013). Measure theory (Vol. 4). Torres Fremlin (Ed.). [Gençtürk & Yiğiter2016]gy Gençtürk, Y. & Yiğiter, A. (2016). Modelling claim number using a new mixture model: negative binomial gamma distribution. Journal of Statistical Computation and Simulation, 86(10), 1829–1839. [Gerhold et al.2010]gsw Gerhold, S., Schmock, U. & Warnung, R. (2010). A generalization of Panjer’s recursion and numerically stable risk aggregation. Finance and Stochastics, 14(1), 81–128. [Ghitany et al.2008]gan Ghitany, M. E., Atieh, B. & Nadarajah, S. (2008). Lindley distribution and its application. Mathematics and computers in simulation, 78(4), 493–506. [Gómez-Déniz et al.2004]gdsco Gómez-Déniz, E., Sarabia, J. M. & Calderín-Ojeda, E. (2008). Univariate and multivariate versions of the negative binomial-inverse Gaussian distributions with applications. Insurance: Mathematics and Economics, 42(1), 39–49. [Grandell1997]gr Grandell, J. (1997). Mixed Poisson Processes, Chapman & Hall. [Johnson et al.2005]jkk Johnson, N. L., Kotz, S. & Kemp, A. W. (2005). Univariate Discrete Distributions, 3rd ed. John Wiley and Sons, New York. [Hess et al.2002]hls Hess, K. T., Liewald, A. & Schmidt, K. D. (2002) An extension of Panjer's recursion. ASTIN Bulletin: The Journal of the IAA, 32(2), 61–80. [Hesselager1994]hessa Hesselager, O. (1994). A recursive procedure for calculation of some compound distributions. ASTIN Bulletin: The Journal of the IAA, 24(1), 19–32. [Hesselager1996]hessb Hesselager, O. (1996). A recursive procedure for calculation of some mixed compound Poisson distributions. Scandinavian Actuarial Journal 1996(1), 54–63. [Karlis & Xekalaki2005]kaxe Karlis, D. & Xekalaki, E. (2005). Mixed poisson distributions. International Statistical Review/Revue Internationale de Statistique, 73(1), 35–58. [Kolev & Paiva2008]kopa Kolev, N. & Paiva, D. (2008). Random sums of exchangeable variables and actuarial applications. Insurance: Mathematics and Economics, 42(1), 147–153. [Lyberopoulos & Macheras2012]lm1v Lyberopoulos, D. P. & Macheras, N. D. (2012). Some characterizations of mixed Poisson processes. Sankhyā A, 74(1), 57–79. [Lyberopoulos & Macheras2013]lm5 Lyberopoulos, D. P. & Macheras, N. D. (2013). A construction of mixed Poisson processes via disintegrations. Mathematica Slovaca, 63(1), 167–182. [Lyberopoulos & Macheras2019]lm3arLyberopoulos, D. P. & Macheras, N. D. (2019). A characterization of martingale-equivalent compound mixed Poisson process. arXiv preprint arXiv:1905.07629. [Lyberopoulos & Macheras2021]lm3 Lyberopoulos, D. P. & Macheras, N. D. (2021). A characterization of martingale-equivalent mixed compound Poisson processes. Annals of Applied Probabability, 31(2), 778–805. [Lyberopoulos & Macheras2022]lm6z3 Lyberopoulos, D. P. & Macheras, N. D. (2022). Some characterizations of mixed renewal processes. Mathematica Slovaca 72(1), 197–216. [Lyberopoulos et al.2019]lmt1 Lyberopoulos, D. P., Macheras, N. D. & Tzaninis S. M. (2019). On the equivalence of various definitions of mixed Poisson processes. Mathematica Slovaca, 69(2), 453–468. [Mansoor et al.2020]matal Mansoor, M., Tahir, M. H., Cordeiro, G. M., Ali, S. & Alzaatreh, A. (2020). The Lindley negative-binomial distribution: Properties, estimation and applications to lifetime data. Mathematica Slovaca, 70(4), 917–934. [Nadarajah & Kotz2006a]nakp1 Nadarajah, S. 
& Kotz, S. (2006a). Compound mixed Poisson distributions I. Scandinavian Actuarial Journal, 2006(3), 141–162. [Nadarajah & Kotz2006b]nakp2 Nadarajah, S. & Kotz, S. (2006b) Compound mixed Poisson distributions II. Scandinavian Actuarial Journal, 2006(3), 163–181. [Panjer1981]pa Panjer, H. H. (1981). Recursive evaluation of a family of compound distributions. ASTIN Bulletin: The Journal of the IAA, 12(1), 22–26. [Schmidt2012]Sc Schmidt, K. D. (2012). Lectures on Risk Theory. Springer Science & Business Media. [Schmidt2014]Sc2014 Schmidt, K. D. (2014). On inequalities for moments and the covariance of monotone functions. Insurance: Mathematics and Economics, 55, 91–95. [Schröter1990]sch Schröter, K. (1990). On a class of counting distributions and recursions for related compound distributions. Scandinavian Actuarial Journal, 1990(2-3), 161–175. [Stoyanov2013]sto Stoyanov, J. M. (2013). Counterexamples in Probability, 3rd ed.. Dover Publications. [Sundt1992]su Sundt, B. (1992). On some extensions of Panjer's class of counting distributions. ASTIN Bulletin: The Journal of the IAA, 22(1), 61–80. [Sundt & Jewell1981]sj Sundt, B. & Jewell, W. S. (1981). Further results on recursive evaluation of compound distributions. ASTIN Bulletin: The Journal of the IAA, 12(1), 27–39. [Sundt & Vernic2004]sv2004 Sundt, B. & Vernic, R. (2004). Recursions for compound mixed multivariate Poisson distributions. Blätter der DGVFM, 26(4), 665–691. [Sundt & Vernic2009]sv Sundt, B. & Vernic, R. (2009). Recursions for convolutions and compound distributions with insurance applications. Springer Science & Business Media. [Tzaninis & Macheras2023]mt2 Tzaninis, S. M. & Macheras, N. D. (2023). A characterization of progressively equivalent probability measures preserving the structure of a compound mixed renewal process. ALEA, Latin American Journal of Probability and Mathematical Statistics, 20, 225–247. [Tzougas et al.2019]tzougas Tzougas, G., Hoon, W. L.& Lim, J. M. (2019). The negative binomial-inverse Gaussian regression model with an application to insurance ratemaking. European Actuarial Journal, 9, 323–344. [Tzougas et al.2021]tzougas2 Tzougas, G., Hong, N. & Ho, R. (2021). Mixed poisson regression models with varying dispersion arising from non-conjugate mixing distributions. Algorithms, 15(1), 16. [Wang2011]zwang Wang, Z. (2011). One mixed negative binomial distribution with application. Journal of Statistical Planning and Inference, 141(3), 1153–1160. [Wang & Sobrero1994]wsWang, S. & Sobrero, M. (1994). Further results on Hesselager’s recursive procedure for calculation of some compound distributions. ASTIN Bulletin: The Journal of the IAA, 16(2), 161–166. [Willmot1986]wil1986 Willmot, G. E. (1986). Mixed compound Poisson distributions. ASTIN Bulletin: The Journal of the IAA, 16(1), 59–79. [Willmot1987]wil1987 Willmot, G. E. (1987). The Poisson-Inverse Gaussian distribution as an alternative to the negative binomial. Scandinavian Actuarial Journal, 1987(3-4), 113–127. [Willmot1988]wil88 Willmot, G. E. (1988). Sundt and Jewell’s family of discrete distributions. ASTIN Bulletin: The Journal of the IAA, 18(1), 17–29. [Willmot1993]wil Willmot, G. E. (1993). On recursive evaluation of mixed Poisson probabilities and related quantities. Scandinavian Actuarial Journal, 1993(2), 114–133.
http://arxiv.org/abs/2406.18984v1
20240627082620
Amplify Graph Learning for Recommendation via Sparsity Completion
[ "Peng Yuan", "Haojie Li", "Minying Fang", "Xu Yu", "Yongjing Hao", "Junwei Du" ]
cs.IR
[ "cs.IR" ]
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE Shell et al.: Amplify Graph Learning framework based on Sparsity Completion 0000–0000/00$00.00  2024 IEEE Amplify Graph Learning for Recommendation via Sparsity Completion Peng Yuan, Haojie Li, Minying Fang, Xu Yu, Yongjing Hao, and Junwei Du July 1, 2024 ===================================================================================== § ABSTRACT Graph learning models have been widely deployed in collaborative filtering (CF) based recommendation systems. Due to the issue of data sparsity, the graph structure of the original input lacks potential positive preference edges, which significantly reduces the performance of recommendations. In this paper, we study how to enhance the graph structure for CF more effectively, thereby optimizing the representation of graph nodes. Previous works introduced matrix completion techniques into CF, proposing the use of either stochastic completion methods or superficial structure completion to address this issue. However, most of these approaches employ random numerical filling that lack control over noise perturbations and limit the in-depth exploration of higher-order interaction features of nodes, resulting in biased graph representations. In this paper, we propose an Amplify Graph Learning framework based on Sparsity Completion (called AGL-SC). First, we utilize graph neural network to mine direct interaction features between user and item nodes, which are used as the inputs of the encoder. Second, we design a factorization-based method to mine higher-order interaction features. These features serve as perturbation factors in the latent space of the hidden layer to facilitate generative enhancement. Finally, by employing the variational inference, the above multi-order features are integrated to implement the completion and enhancement of missing graph structures. We conducted benchmark and strategy experiments on four real-world datasets related to recommendation tasks. The experimental results demonstrate that AGL-SC significantly outperforms the state-of-the-art methods. The code of AGL-SC is available at <https://github.com/yp8976/AGL_SC>. Graph Learning, Matrix completion, Variational autoencoder, Recommendation Systems, Multi-order interaction features. § INTRODUCTION As an effective solution to the issue of information overload, recommendation systems have been widely implemented in various domains by predicting personalized user preferences. However, in these systems, the predominant issue affecting the accuracy of suggestions stems from the inherent sparsity of user interactions, as it is impractical for users to rate every item. Consequently, researchers have proposed various methods, including content-based and CF-based recommendations <cit.>, aimed at enhancing the model's proficiency in handling sparse data. CF-based recommendations predict personalized user preferences by assuming that users with similar behaviors exhibit similar preferences towards items. This approach relies on the observed user-item interactions to learn user and item embeddings. A common paradigm is to parameterize users and items for reconstructing historical interactions, and then predict user preference based on the pairwise similarity between user and item embeddings. Since interactions between users and items can naturally transformed as a user-item bipartite graph, drawing on the success of Graph Neural Networks (GNNs), graph-based CF models have been extensively studied. 
These models often generate user and item embeddings through recursive message passing operations on the graph structure, which integrate multi-hop neighbors into node representation learning and achieve state-of-the-art recommendation performance. Fig.<ref> shows the existing graph collaborative filtering methods, most of graph-based CF models treat all unobserved actions as negative feedback when transforming bipartite graphs. When learning user representations, these models often study propagate interactions among neighbors by stacking GNN layers, utilizing the graph structure and message passing mechanisms. However, this approach causes the lack of potential positive preference links and overlooks the relevance of high-order neighbors among users and items. Coupled with the excessively sparse interaction data and substantial noise in practice, these issues become more pronounced. Such conditions lead to significant biases in graph node representations <cit.>, ultimately leading to poor performance. Matrix completion has been widely applied in recommendations based on CF as a technique used to reconstruct lost data in sparse matrices <cit.>. Early matrix completion technologies, such as Singular Value Thresholding (SVT), treated user-item implicit feedback as a low-dimensional user-item similarity matrix for learning. Due to the limitations of linear transformations, these early approaches faced relatively high computational costs when dealing with large datasets <cit.>. With the development of deep learning, advanced recommendation models have adopted the matrix completion concept, further employing graph neural networks for collaborative filtering to enhance recommendation performance. Such models transform recommendation tasks into issues of graph node representation and edge prediction, by propagating information in the form of graph between users and items <cit.>. For instance, IGMC proposed an inductive matrix completion approach, achieving edge prediction based on subgraph extraction methods <cit.>. EGLN integrates the concept of mutual information maximization, iteratively propagates information between nodes in graph to realize edge prediction <cit.>. These methods mitigate data sparsity and enhance the graph representation, thus improving the quality of recommendations. However, the incorporation of noise is random, which may lead to poor local-global consistency in the graph structure, and could potentially lead to implicit changes in the model's representations in latent space, such as posterior collapse <cit.>. Despite matrix completion techniques have been effective in dealing with the sparse data in recommendation systems, graph-based CF models still grapple with the issue of bias in long-tail distributions. Inspired by the success of generative self-supervised learning in CV, the generative models can well reconstruct the input graph without information distortion. Among them, VAE was proposed as a perturbation data augmentation method introduced into the graph representation learning for recommendation systems. It has significant potential in modeling user and item features, as well as debiasing <cit.>. In this paper, we leverage variational inference to reconstruct the graph structure without information bias. To this end, in this paper, we propose the AGL-SC strategy based on sparse completion for enhanced graph learning. 
Addressing the issues of representation bias caused by data sparsity and noise, we generate feature representations of user and item nodes through graph neural networks. excavating direct interactions of user and item nodes as superficial structural features. Simultaneously, by focusing on similar user and item groups, we employ an improved factorization method to mine high-order interaction features, thereby enhancing the model's ability to distinguish similar user representations <cit.>. Finally, these representations are fed into a variational generative function for multi-order feature collaborative inference, completing the complex nonlinear interaction features of users. This approach results in more precise and comprehensive user recommendations, and offers strategies for mitigating representation bias in various model nodes. The primary contributions are as follows: * We propose a sparse completion framework based on VAE that captures the latent multi-order interaction distribution features of user-item pairs. This framework generates comprehensive distributions of user preferences, thereby enhancing graph representation. * We present an update mechanism that integrates high-order interaction features of user-item pairs. This mechanism effectively merges the characteristics of high-order interactions between users and items with generated latent vectors. * Extensive experiments conducted on four datasets, compared with other baseline models. The proposed AGL-SC model demonstrated significant performance improvements. § RELATED WORK §.§ Graph Learning for Recommendation To enhance the efficiency of recommendation systems, existing research has inclined towards using graph learning to process unstructured graph data. The primary objective is to efficiently transform graph data into low-dimensional, dense vectors, ensuring that the graph's informational and structural properties are precisely mapped in the vector space. Initially, graph learning based on node embedding through matrix decomposition techniques like Neural Matrix Factorization (NMF) <cit.> and Singular Value Decomposition (SVD), converting nodes into a low-dimensional vector space. However, these methods were characterized by high time and space complexity. Subsequently, with the success of word vector methodologies in Natural Language Processing (NLP), some approaches, such as DeepWalk <cit.> and Node2Vec <cit.>, facilitated large-scale graph learning by converting graphs into sequences. Yet, this method also led to a suboptimal utilization of the inherent structural information of the graphs. In recent years, drawing on the principles of CNN, RNN, and autoencoders, researchers have developed Graph Neural Network architectures,which have demonstrated outstanding performance in structural information extraction and node representation. Contemporary research has largely focused on space-based Graph Neural Networks for recommendations <cit.>. PinSage <cit.>, a content recommendation model based on Graph Neural Networks, propagates item features through an item-item association graph. NGCF <cit.> employs Graph Neural Networks to model high-order collaborative signals between users and items during the embedding learning process. LightGCN <cit.> introduces neighborhood aggregation, propagating embeddings across user-item interaction graphs and simplifies the model by removing feature transformation and non-linear activation. 
UltraGCN <cit.> adjusts the relative importance of different types of relationships flexibly using edge weight allocation in a constraint loss. SVD-GCN <cit.>, replacing part of the Graph Neural Network methods with SVD, utilizes the top k singular vectors for recommendations. These methods optimize information transmission and node representation through various strategies, thereby improving recommendation efficiency and precision. However, the information transmission mechanism of Graph Neural Networks may slow down network convergence, inherit and amplify noise within the network <cit.>, leading to bias in the embedding of graph structure nodes and over-smoothing issues. §.§ Matrix Completion Matrix completion, based on the learning of graph-structured data, interprets users' ratings or purchasing behaviors towards items as links in a graph. By filling in the missing edges in the interaction graph, it facilitates the recommendation tasks <cit.>. In recommendation systems, users often express their preferences indirectly through implicit feedback, such as browsing product interfaces, rather than through direct actions like ratings or likes. These behaviors, serving as implicit feedback in recommendation systems, can more accurately reflect user preferences. Such actions are crucial for modeling user preferences and enhancing the effectiveness of recommendations <cit.>. Therefore, how to complete user preferences from implicit feedback has become an important academic topic. Recent research tends to incorporate user-item interaction features as auxiliary information for completion. Monti et al. introduced user and item neighborhood networks MGCNN to extract latent features of users and items <cit.>, while Berg et al. utilized one-hot encoding of node IDs for feature extraction by GC-MC <cit.>. Nguyen et al. implemented a deep model-based Conditional Random Field algorithm DCMC to handle large-scale <cit.>, highly incomplete data. Wu et al. proposed a completion model AGCN <cit.>, that integrates user-item attribute information. Zhang et al. introduced MC2G <cit.>, which combines social networks and item similarity graphs to enhance recognition of user preferences. However, each of these methods has its limitations. MGCNN relies on the graph Laplacian operator, limiting its generalization capabilities on new graphs. GC-MC and DCMC use linear decoders or simple neural networks for completion prediction, might oversimplify actual user behavior patterns. These models are based on optimizing discriminative strategies with given user interaction information that overly simplify user behavior. Introducing additional signals, such as attribute description text, can increase model complexity and computational costs, potentially introducing more noise and irrelevant information <cit.>. Moreover, these models, typically based on fixed-length vectors, lack the capacity to represent the diversity and uncertainty of user interests <cit.>. Our work employs generative models, which infer the interaction probabilities for items that users have not interacted with by sampling from learned probability distributions <cit.>. In practical applications, the smoothness of VAEs in the distribution helps to avoid drastic fluctuations in model parameters due to minor differences in samples, even when user behavior undergoes subtle changes <cit.>. This capability enables us to better reconstruct missing node features, explore, and complete user preferences. 
§ THE PROPOSED MODEL We proposed Amplify Graph Learning framework based on Sparsity Completion (AGL-SC), which comprises three main components: 1) The graph learning module maps user-item bipartite graphs into low-dimensional feature vectors for both user and item nodes; 2) The high-order constraint module derives high-order interaction feature vectors for nodes through factorization, transforming them into perturbation noise for VAE, thereby facilitating improved graph structure completion and node embedding learning; 3) In the VAE module, a multinomial likelihood function is incorporated to utilize multi-order features for variational inference and prediction of the probabilistic distribution of user preferences, aiming to generate a more comprehensive understanding of user preferences and enhance recommendation performance. For a better illustration, we show the overall framework of AGL-SC in Fig.<ref>. §.§ User-Item Feature Learning The graph learning module accomplishes the embedding of node features within the graph by modeling the direct interaction structures between user and item nodes. In recommendation systems, there are two primary entities: a set of users U(|U|=M) and a set of items V(V|=N). We represent the implicit feedback of user-item interactions using an interaction matrix R∈ℝ^M× N where if user u interacts with item i, the corresponding element is 𝐫_ui=1 otherwise 𝐫_ui=0. Most neural graph recommendation models define the user-item graph as 𝒢={U∪V,A}, wherein the adjacency matrix A is an unweighted matrix, defined as follows: A=[ [ 0 R; R^T 0 ]]. Utilizing initialization, all vertices in the network are mapped into a low-dimensional embedding space 𝐞_u∈ℝ^M× d and 𝐞_v∈ℝ^N× d, transforming them into vector matrices to obtain the initial embedding representations of users and items. D is a diagonal node degree matrix, E∈ℝ^(M+N)× d represents the embedding matrix. Subsequently, these initialized embeddings of users and items are fed into a graph neural network to learn model parameters. By stacking l layers of embedding propagation, users (and items) can receive messages H propagated from their l order neighbors: [ H^l=A^lE ,; A=D^-1/2AD^-1/2. ] The feature representations of users and items are calculated comprehensively, and the node embeddings of both at each layer are accumulated and aggregated through a pooling function. Subsequently, an inner product operation is utilized to derive the latent rating matrix P : 𝐞=p o o l i n g_AVG(H^l|l=0,1,...,L). P=𝐞_u^T·𝐞_v. §.§ Implicit High-Order Feature Constraint Accurately mining the structural constraints of a bipartite graph is a pivotal approach to enhancing bipartite graph learning <cit.>. Homogeneous node high-order constraints are employed to capture the inherent similarities among users or items, while heterogeneous node interaction high-order constraints are utilized to gauge the extent of a user's preference for an item. The high-order constraint module, drawing upon factor decomposition methods, extracts high-order features of nodes and amalgamates both types of node constraints to enhance the accuracy of the model's node representation. Building on this, we initially derive the user feature matrix E_U∈ℝ^M× d and the item feature matrix E_V∈ℝ^N× d by transforming the embeddings into dense vector representations. Then for each user-item instance (i,j), a mapping function is applied as: ϕ(E_ui, E_vj) = E_ui⊙ E_vj, where ⊙ denotes the element-wise product of vectors. 
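A compact sketch of the propagation, pooling, and inner-product scoring steps above is given next. It uses a dense normalized adjacency and hypothetical tensor names purely for illustration, not the released implementation, and the factorization-based projection of the high-order module continues immediately after it.

```python
import torch

def propagate_and_score(R, emb_u, emb_v, num_layers=2):
    """Propagation over the bipartite graph, layer-wise average pooling, and P = e_u e_v^T."""
    M, N = R.shape
    A = torch.zeros(M + N, M + N)
    A[:M, M:] = R
    A[M:, :M] = R.T
    deg = A.sum(dim=1).clamp(min=1.0)
    d_inv_sqrt = deg.pow(-0.5)
    A_norm = d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)   # D^{-1/2} A D^{-1/2}

    E = torch.cat([emb_u, emb_v], dim=0)        # (M+N) x d embedding matrix
    layers = [E]
    H = E
    for _ in range(num_layers):
        H = A_norm @ H                           # message passing from l-order neighbors
        layers.append(H)
    e = torch.stack(layers, dim=0).mean(dim=0)   # average pooling over layers
    e_u, e_v = e[:M], e[M:]
    return e_u @ e_v.T                           # latent rating matrix P

# toy usage with random embeddings
M, N, d = 4, 6, 8
R = (torch.rand(M, N) < 0.3).float()
P = propagate_and_score(R, torch.randn(M, d), torch.randn(N, d))
```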
We then project the vector to the next layer: R_ij = sigmoid ( h^T (E_ui⊙ E_vj) ), where 𝐡 denotes edge weights of the output layer. The calculated interaction strengths of users to various items are utilized as weights to derive an aggregated feature matrix that expresses the similarity of interactions among users. Similarly, an aggregated feature vector matrix is constructed to represent the similarity between items: W_U = E_ U(RE_V)^T, W_V = E_ V(RE_U)^T. Furthermore, the similarity feature vector matrix is utilized to identify groups of users and items with similar preferences. we derives the relationship matrices between users W_U and between items W_V through the interaction matrix R: W_U=RR^T , W_V=R^TR. By minimizing the Kullback-Leibler Divergence between these distributions, the embedding vector representations of the nodes are optimized. Consequently, the high-order constraint loss function for the nodes is defined as follows: ℒ _ MC = D_KL(W_U||W_U) + D_KL(W_V||W_V). ℒ_MC = ∑_i,j=1^M W_Uij( logW_Uij/W_Uij) + ∑_i,j∈V^N W_Vij( logW_Vij/W_Vij). §.§ Structure Generative Learning In recommendation systems, VAEs predominantly utilize user distributions for generation, adeptly capturing the latent relationships between users and items <cit.>, especially in handling sparse data. We design a generative module to enhance the completion of missing graph structures, thereby strengthening the model's understanding and prediction of complex user behavior patterns. VAEs exhibit remarkable performance in handling sparse data, a particularly pertinent feature for addressing the common issue of data sparsity in recommendation tasks. In recommendation systems, VAEs predominantly generate recommendations based on user distributions, effectively capturing the latent relationships between users and items, thereby providing more precise and personalized recommendations. Numerous studies input the missing graph structures directly into VAEs to tackle recommendation issues under data sparsity conditions. However, this approach lead to an increase in the neural network's capacity, making it challenging for the VAE to capture detailed feature information. Consequently, this generative module can adjusting the intensity of noise to enhance the missing graph structures, thereby improving the relevance and diversity of recommendations. In Section <ref>, we have already fitted the feature representations of nodes using graph neural networks and obtained the predicted matrix R. In order to better construct the distribution of latent variables in the generative module, the latent matrix P of nodes are optimized using Wasserstein distance <cit.>: 𝒲[R,P]=γ∈Γ(X∼R,Y∼ P)inf𝔼_(X,Y)∼γ[||X-Y||], where X and Y represent random variables drawn from two probability distributions R and P, respectively. VAE perceives the latent vector as a probabilistic distribution, with the output of its hidden layer encompassing two dimensions, μ and σ, which represent the mean and variance of the normal distribution parameters that the encoding 𝐳 adheres to. This facilitates a smooth transition of vectors among similar samples. We proceed with computations utilizing the optimized distribution P for user u: [ q_ϕ(𝐳|P) = ∏_u = 0^M q_ϕ(𝐳_u|P_u) ,; q_ϕ(𝐳_u|P_u) = N(𝐳_u;μ _ϕ(u),diag(σ _ϕ ^2(u)), ] where 𝐳_u and P_u respectively represent the user's low dimensional structural embedding and interaction prediction. 
μ and σ denote the mean and variance of the latent Gaussian distribution generated by the encoder, with ϕ symbolizing the training parameters of the encoder, estimated by a parametric encoder network. To ensure that the distribution closely approximates the true posterior distribution, a parametric encoder network is employed to minimize the Kullback-Leibler Divergence between the two distributions: ϕ^*=min_ϕD_KL(q_ϕ(𝐳_u|P_u),p_θ(𝐳_u)). Subsequently, we have devised a method through gated units to filter and integrate input features, thereby controlling the impact of the latent high-order constraints computed in Section <ref> on the generation of the graph structure. We denote by 𝐱_u the latent vector of user u, h refers to the representation of the hidden state. The gating signal for each user can be expressed as follows: [ 𝐡_μ_u= tanh(W_μ_u·𝐱_μ_u),; 𝐡_σ_u= tanh(W_σ_u·𝐱_σ_u),; 𝐳=sigmoid(W_z·[𝐱_μ_u,𝐱_σ_u]), ] where W· is a learnable transformation matrix8. Calculate the enhanced distribution representation for each user u by the following method: [ μ_u=𝐳·𝐡_μ_u,; σ_u=(1-𝐳)·𝐡_σ_u,; p(P_u)=𝒩(μ_u,d i a g(σ_u^2)). ] The decoder reconstructs the interaction matrix through conditional probability. Consequently, the Kullback-Leibler Divergence between this and the posterior distribution can be calculated by the following formula: D_KL(q_ϕ(𝐳_u|P_u)||p_θ(𝐳_u)) = ∫q_ϕ(𝐳_u)(logp_θ(𝐳_u) - logq_ϕ(𝐳_u))dz. Finally, to obtain targeted enhanced feature representations, sampling from the posterior distribution yields a new latent layer encoding 𝐳, which is then inputted into the generative network. This approach allows us to preserve the inherent structure in the process of generating the graph structure, while also enhancing the model's generalization capabilities: ℒ_VAE = 𝔼_q_ϕ(𝐳_u|P_u)[ logp_θ(𝐳_u|P_u)] - D_KL(q_ϕ(𝐳_u|P_u)||p_θ(𝐳_u)). §.§ Loss Function In this paper, we employ the Bayesian Personalized Ranking (BPR) loss for recommendations, aiming to optimize recommendations by maximizing the predicted ranking of items with which a user has already interacted (positive samples) compared to those they have not (negative samples). Specifically, this objective is achieved by optimizing the following loss function: ℒ_REC= ∑_u∈ U,i,j∈ V - ln sigmoid(P_u i-P_u j). Under the influence of the VAE, the model's capacity to extract implicit feedback from users is adjusted through the decoder. An additional loss term is introduced to control the gated units' ability to capture the high-order connectivity of user preferences, thereby modeling the complex relationships between users and items more precisely and enhancing the performance of the model's recommendations. We fine-tune the parameters to balance recommendation loss and generative loss for optimal results. The final loss function can be expressed as follows: ℒ_AGL-SC = ℒ_REC+λℒ_MC+βℒ_VAE. § EXPERIMENTS To demonstrate the effectiveness of AGL-SC, we have conducted extensive experiments to address the following research questions: * RQ1: Does our proposed method enhance recommendation accuracy more effectively than other baseline methods? * RQ2: What impact do the various modules of the model have on its final performance? * RQ3: How do various hyperparameter settings affect the model's performance? * RQ4: Does our method effectively improve the model's extraction of implicit feedback, thereby enhancing recommendation performance? 
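Before turning to the experimental settings, the sketch below illustrates how the composite objective above is assembled during training. The BPR term is computed from sampled (user, positive item, negative item) triples, while the high-order and variational terms are passed in as scalars produced by the corresponding modules (with their sign conventions handled there); the names, toy batch, and weights are placeholders rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def bpr_loss(P, users, pos_items, neg_items):
    """L_REC: -sum log sigmoid(P_ui - P_uj) over sampled (u, i+, j-) triples."""
    diff = P[users, pos_items] - P[users, neg_items]
    return -F.logsigmoid(diff).sum()

def total_loss(P, users, pos, neg, loss_mc, loss_vae, lam=0.1, beta=0.1):
    """L = L_REC + lambda * L_MC + beta * L_VAE, with the last two terms precomputed."""
    return bpr_loss(P, users, pos, neg) + lam * loss_mc + beta * loss_vae

# toy usage: 4 users x 6 items, one sampled triple per user
P = torch.randn(4, 6, requires_grad=True)
users = torch.arange(4)
pos = torch.tensor([0, 1, 2, 3])
neg = torch.tensor([5, 4, 5, 4])
loss = total_loss(P, users, pos, neg,
                  loss_mc=torch.tensor(0.2), loss_vae=torch.tensor(0.5))
loss.backward()
```

The weights lam and beta correspond to the similarity-constraint and variational-inference coefficients whose sensitivity is examined in the hyperparameter study below.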
§.§ Experimental Settings §.§.§ Datasets To evaluate the effectiveness of our model, we use four publicly available datasets of different scales, including MovieLens100K, MovieLens1M, Amazon Electronics, and Yelp. Table <ref> presents detailed information corresponding to these datasets. §.§.§ Evaluation Metric To evaluate the performance of various methods, we employed the rank-based metric Normalized Discounted Cumulative Gain NDCG@K and the relevance-based measure Recall@K. NDCG@K: [ DCG = ∑_i = 1^K rel_i/log_2(i + 1),; IDCG = ∑_i = 1^K rel_i^ * /log_2(i + 1),; ] where rel_i represents the relevance of item i for the target user, and is the relevance of the item ideally ranked at position i. DCG and IDCG respectively denote the computed Discounted Cumulative Gain and the ideal Discounted Cumulative Gain. Recall@K: Re call = TP/TP + FN, where TP signifies the true positives, accurately predicted as such, where the actual value is positive, and the predicted value is also positive; FN represents the false negatives, incorrectly predicted as such, where the actual value is positive, but the prediction erroneously indicates a negative. §.§.§ Baselines To validate the efficacy of our proposed recommendation method, we compared it with several comparable state-of-the-art model approaches. These include graph embedding methods (DeepWalk, Node2Vec), and graph collaborative filtering-based methods (NGCF, LightGCN, DGCF, UltraGCN, SVD-GCN). For each baseline method, we adjusted their parameters, on the same dataset, reported their best-performing parameters for comparison with our model. Graph embedding-based methods: * DeepWalk <cit.>: Transforms graph structure data into a natural language processing task using a random walk strategy to generate node embeddings in the graph. * Node2Vec <cit.>: Combines DeepWalk's random walk approach with a custom exploration-exploitation strategy to more flexibly capture homophily and structural equivalence in networks. Graph collaborative filtering-based methods: * NGCF <cit.>: Leveraging graph neural networks to enhance collaborative filtering, it refines user and item embeddings by incorporating high-order connectivity relations and aggregates neighbor embeddings from different propagation layers to learn representations of users and items. * LightGCN <cit.>: Simplifies the design of GCN to make it more concise and suitable for recommendations, learning user and item embeddings through linear propagation on the user-item interaction graph, and employing a weighted sum of embeddings learned across all layers as the final embedding. We set three layers as the baseline. * DGCF <cit.>: Pays particular attention to the fine granularity of user-item relationships in terms of user intent. Models the intent distribution for each user-item interaction and iteratively refines intent-aware interaction graphs and representations. We set four intents as the baseline. * UltraGCN <cit.>: An extension of LightGCN, it builds a loss function by setting a convergence mode to avoid problems with multi-layer stacking. We set gamma=1e-4, lambda=1e-3 as the baseline. * SVD-GCN <cit.>: Replaces the core design in the GCN method with a flexible truncated SVD, utilizing the top K singular vectors for recommendation purposes. We set alpha=3, beta=2 as the baseline. * VGCL <cit.>: Integrates variational graph generation with contrastive learning, predicting user preferences using a contrastive loss based on variational graph reconstruction. 
We set t=0.25, lambda=0.1, gamma=0.5 as the baseline. §.§.§ Parameter Settings We implemented the AGL-SC model and all baselines using PyTorch, employing Kaiming initialization to process user-item embeddings, resulting in 128-dimensional initial embeddings. Within the model, the number of GCN layers is set to 2, and the VAE hidden state vector dimension is 200. During training, a dropout rate of 0.2 is used to mitigate the likelihood of overfitting. The network optimization employs the Adam optimizer, with an initial learning rate of 0.001. The training is set for a maximum of 100 epochs. For evaluation, if there is no improvement over 10 consecutive training epochs, the training is terminated. Moreover, we reproduce the results on MovieLens1M dataset with MindSpore framework (Huawei 2024). §.§ Overall Performance Comparisons Table <ref> showcase the overall performance of various baseline models across four datasets. It is observable that our proposed AGL-SC model outperforms the other baseline models in all metrics.We found that methods based on graph neural networks achieved better performance than graph embedding methods, as the two graph embedding models solely use the user-item interaction bipartite graph as input. This highlights the ability of graph learning to capture user preferences by modeling high-order user-item graph structures, compensating for the models' capability to handle implicit feedback and alleviating the issue of over-smoothing. This is particularly evident when dealing with users or items of high similarity, where the model can more accurately identify and emphasize important connections while reducing the impact of noise. Among the existing graph recommendation methods, VGCL <cit.> achieved the best baseline performance, followed by SVD-GCN <cit.> and UltraGCN <cit.>. VGCL, by utilizing variational graph reconstruction, addresses the limitations of current contrastive learning view enhancement methods, thus improving recommendation performance. This underscores the effectiveness of using generative models in recommendation systems to yield richer and more diverse recommendations, with variational graph reconstruction offering a better graph structure than simple data enhancement. SVD-GCN and UltraGCN, building upon the LightGCN <cit.> framework, proposed simplification strategies for graph learning to denoise user interactions. UltraGCN introduced item similarity into graph recommendations, while SVD-GCN added both item and user similarity loss. These comparisons demonstrate that incorporating high-order feature constraints in graph recommendations can effectively enhance recommendation performance. Beyond the aforementioned factors, our model through a combination of graph neural networks and a sparse completion strategy of generative models. In processing sparse data, the generative module effectively infers missing interactions, enhancing the predictive accuracy of the model. Additionally, the high-order constraint module strengthens the filtration of irrelevant data noise, ensuring the relevance of recommendations, thereby achieving optimal performance. Furthermore, when comparing our model with graph neural network-based models like LightGCN and UltraGCN, we observed that AGL-SC improved metrics across all four datasets, although the specific improvements varied per dataset. 
We believe these differences reflect the model's effectiveness in completing missing graph structures, a capability not effectively addressed by other baseline models in situations of missing interactions. §.§ Ablation Study We introduced multiple components to enhance the model's performance in recommendation tasks. To assess the importance and contribution of each component, we sequentially removed components and compared the performance of AGL-SC and its corresponding variants in terms of the recommendation performance metrics Recall@20 and NDCG@20, as shown in Table <ref> and Fig.<ref> for a better illustration. w/o VAE: This variant removes the generative module of the model, using a combination of graph neural networks and factorization machines to train embeddings, in order to evaluate the role of the generative module in model performance. w/o FM: This variant removes the model's high-order constraint module, relying solely on embeddings trained using graph neural networks, to explore the effect of introducing high-order features of user-item interactions on enhancing prediction accuracy. w/o both: This is the model's basic version, retaining only the GNN module of our model. After removing the generative module, the overall performance of the model declined. However, compared to general models like NGCF, the metrics were largely on par or even slightly improved. We attribute this to the high-order constraint module's capability for deep feature mining of bipartite graph data, an aspect often overlooked in standard graph neural network architectures, especially in adequately modeling the complex relationships between user-user and item-item pairs. After removing the high-order constraint module, the model still exhibited superior performance compared to DGCF. This underscores the efficacy of using VAE generative models for structural enhancement and completion in recommendation systems. Unlike models that merely execute encoding compression, VAEs ensure the integrity of information. This characteristic is particularly crucial for recommendation systems, as it guarantees that the diversity and complexity of users and items are fully represented. Furthermore, models without integrated high-order constraints showed a decline in metric performance compared to the complete model, indicating that adding high-order constraint information optimizes the generative process guided solely by incomplete bipartite graph data. Also, during the model training process, variants of the model experienced convergence difficulties, suggesting that AGL-SC also has significant advantages in enhancing the model's generalization capabilities. Ultimately, AGL-SC consistently outperforms three variants, proving the effectiveness of combining variational graph reconstruction strategies with high-order feature constraints. §.§ Robustness Analysis §.§.§ Data Sparsity In recommendation systems, making recommendations to inactive users with few available interactions poses a significant challenge. Therefore, our aim is to demonstrate the effectiveness of our model in mitigating data sparsity. Specifically, we categorized test users into five groups based on sparsity, that is, the number of interactions under the target behavior (such as purchases). We then calculated the average NDCG@20 for each user group. We presented the comparative results with representative baselines on the MovieLens1M public dataset. 
In Fig.<ref>, the x-axis represents different levels of data sparsity, the left side of the y-axis quantifies the number of test users in the corresponding group using bars, while the right side of the y-axis quantifies the average metric values using a line. Based on the results, we made the following observations: * Compared to other methods, our model achieved better performance, especially demonstrating commendable recommendation capabilities for inactive users, who constitute a significant portion of the user base. * As the number of interactions increases, the evaluation metrics slightly decline, which might be due to the increase in interaction data causing a reduction in the optimization performance of the model's objective function. §.§.§ Data Distribution In this subsection, we aim to explore how the characteristics of isotropy and anisotropy affect node representations in graph learning. recommendation systems based on graph learning strive to transform nodes within a graph into low-dimensional vector representations. Such representations are instrumental in revealing complex relationships between users and items, thereby enhancing the accuracy and efficiency of recommendations. In graph learning, isotropy refers to the scenario where a node's vector representation has a similar distribution across all directions, implying that the model generates more balanced and consistent node representations to augment the capture of global information. Conversely, anisotropy denotes the presence of differences in characteristics or measurements in various directions. This allows node representations to capture a richer array of information and complex structural features. Given the phenomenon of anisotropy in graph structures, preserving the original graph structure and its anisotropic characteristics is a primary task for the model. However, the distribution of anisotropic vector representations learned during the model's training process can lead to representation degeneration. Current algorithms counteract this by introducing isotropic noise distributions, enhancing the model's generalization capabilities, preventing overfitting, and aiding in the capture of broader contextual information. Consequently, this paper employs high-order interaction features of nodes as directed isotropic noise for perturbation, achieving a synthesis of anisotropic graph structures with enhanced isotropic structures, balancing global consistency and local differentiation in node representations. We randomly select users and items that had interactions in the original dataset and showcase their vector matrices, where a colour gradient illustrates the intensity of the interactions, with darker shades indicating a stronger preference for the items by the users. We demonstrate comparative results with representative baselines on the MovieLens1M dataset, as shown in Fig.<ref>. The results show that, our model maintains clear boundary distinctions in terms of vector distribution similarity, while ensuring sharp contrasts between color blocks compared to other models. This indicates that our model follows an isotropic distribution, preserving structural features between different users, thereby offering sharper discrimination. §.§ Hyper-Parameter Sensitivities We conducted extensive experiments to examine the impact of several key hyperparameters on AGL-SC. 
As illustrated in Fig.<ref>, we used the NDCG@20 metric to demonstrate the variations of three hyperparameters: the similarity constraint coefficient λ and the variational inference coefficient β. We observed that, on the MovieLens1M dataset, AGL-SC achieved optimal performance at similarity constraint coefficient = 0.1 and variational inference coefficient = 0.1. Regarding the similarity constraint coefficient, we noticed an improvement in performance when it increased from 0 to 0.001, but a rapid decline in performance when it was set to 0.5. This indicates that appropriate similarity constraints can effectively prevent overfitting issues, but overly strong similarity constraints may limit model optimization. Similarly, for the variational inference parameter, selecting a suitable value is crucial for the overall objective optimization. §.§ Strategy Enhancements To validate the effectiveness of the proposed graph-enhanced representation learning strategy, we conducted an experiment covering two distinct datasets: MovieLens1M and Yelp. In this experiment, we utilized four classic graph neural network-based recommendation models, enhancing them with high-order constraint and generative modules to verify their performance under the NDCG@20 metric. The specific results are shown in Table <ref>. It is evident that across both datasets, the performance of all four recommendation models improved upon the addition of the graph-enhanced representation learning strategy modules. We attribute this improvement to the fact that in recommendation systems, the latent vector of VAE is a distribution rather than fixed. This allows for a smoother differentiation in the latent space among similar samples. It also models high-order feature interactions in an implicit manner thereby enhancing the model's ability to fit user preferences. The LightGCN model showed a relatively higher degree of improvement, which we believe is due to LightGCN's simplified design, enabling it to better adapt to the integration of high-order constraint information. § CONCLUSIONS In this work, we introduced a graph-enhanced representation learning strategy based on sparse completion for structural enhancement and completion, aimed at reducing bias in graph node representations and addressing the issue of data sparsity in bipartite graphs within recommendation scenarios. Our method leverages the flexibility of the VAE latent space and its ability to model complex data distributions to complete missing node structures. During the generative process, high-order interaction constraints of nodes are introduced as guidance, ultimately facilitating the prediction of user preferences and item recommendations. This approach has been proven effective and superior to various advanced recommendation models across four real-world datasets. In the future, we plan to explore advanced GNN variants, such as attention-based or heterogeneous GNNs, to better adapt to the integration of multi-order constraint information. § ACKNOWLEDGEMENTS Thanks for the support provided by MindSpore Community. IEEEtran
http://arxiv.org/abs/2406.18699v1
20240626190535
From Pixels to Torques with Linear Feedback
[ "Jeong Hun Lee", "Sam Schoedel", "Aditya Bhardwaj", "Zachary Manchester" ]
cs.RO
[ "cs.RO" ]
J. Lee et al. Robotics Institute, Carnegie Mellon University, Pittsburgh 15213, USA {jeonghunlee, sschoedel, zacm}@cmu.edu University of Chicago, Chicago 60637, USA a7b@uchicago.edu From Pixels to Torques with Linear Feedback Jeong Hun Lee1 Sam Schoedel 1 Aditya Bhardwaj2 Zachary Manchester 1 July 1, 2024 ======================================================================= § ABSTRACT We demonstrate the effectiveness of simple observer-based linear feedback policies for “pixels-to-torques” control of robotic systems using only a robot-facing camera. Specifically, we show that the matrices of an image-based Luenberger observer (linear state estimator) for a “student” output-feedback policy can be learned from demonstration data provided by a “teacher” state-feedback policy via simple linear-least-squares regression. The resulting linear output-feedback controller maps directly from high-dimensional raw images to torques while being amenable to the rich set of analytical tools from linear systems theory, allowing us to enforce closed-loop stability constraints in the learning problem. We also investigate a nonlinear extension of the method via the Koopman embedding. Finally, we demonstrate the surprising effectiveness of linear pixels-to-torques policies on a cartpole system, both in simulation and on real-world hardware. The policy successfully executes both stabilizing and swing-up trajectory tracking tasks using only camera feedback while subject to model mismatch, process and sensor noise, perturbations, and occlusions. § INTRODUCTION Both model-based <cit.> and learning-based <cit.> policies have demonstrated impressive control results in robotics using vision-based sensory feedback. However, both suffer from inherent drawbacks: neural-network policies trained from images can require large amounts of data, which often must be generated in simulation before being transferred to a real robot. Meanwhile, model-based policies require separate machine-vision algorithms to extract features for downstream state estimation or model learning. Recently, data-driven switched linear control policies have been applied directly to image feedback with promising results <cit.>. However, this was limited to a quasi-static setting. We aim to apply data-driven linear feedback policies to the control of dynamic robotic systems directly from images as shown in Fig. <ref>. We demonstrate that it is possible to perform “pixels-to-torques” control of robotic systems with simple observer-based linear feedback policies in both simulation and on real hardware. Specifically, we show that an image-based linear output-feedback policy can be designed by learning a Luenberger observer (i.e., a linear state estimator) via linear-least-squares (LLS) regression over supervised demonstration data. The result is a pixels-to-torques policy that is interpretable and can be analyzed using the rich set of tools from classical linear systems theory, all while inferring states directly from high-dimensional images. We leverage this interpretability to promote stability of the closed-loop policy via a convex extension of the LLS problem. The resulting linear output-feedback policy can achieve good performance while learning from small amounts of data, which we demonstrate on a classic cartpole system. Specifically, the policy is able to avoid sim-to-real transfer by learning directly from hardware data and successfully stabilizes the system while being robust to perturbations, unmodeled dynamics, and even visual occlusions. 
To summarize, our contributions are: * A data-efficient “student-teacher” methodology for learning linear visual output-feedback policies. * A convex formulation of the least-squares observer-learning problem that includes closed-loop stability constraints. * A nonlinear extension of our method based on the Koopman embedding. * Demonstration of linear “pixels-to-torques” control on a real-world cartpole system. The remainder of the paper is organized as follows: In Section <ref>, we provide a broad survey of related works on image-based control and estimation in robotics. In Section <ref>, we provide an overview of topics from linear control, state estimation, and Koopman-operator theory. In Section <ref>, we describe our linear-policy framework as well as the student-teacher-based method developed to learn a Luenberger observer. We also introduce a Koopman-based extension for nonlinear control settings. Section <ref> presents experimental results demonstrating the method's performance on a cartpole system both in simulation and on real-world hardware. Lastly, we summarize our conclusions in Section <ref>. § RELATED WORKS §.§ Learning Dynamics Models from Images Over the last decade, the deep-learning community developed dynamics-model learning from images for model-based reinforcement learning (RL) and model-predictive control (MPC) <cit.>. To avoid the high-dimensionality of pixel space, the dynamics were initially learned over lower-dimensional, object-centric states to explicitly track objects in the image scene <cit.>. While the states in these works were explicitly predefined or incorporated into a known problem structure, subsequent work also learned the lower-dimensional latent state space for robotics settings, which include simulated gym environments <cit.> and real-world object manipulation <cit.>. Specifically, Finn et. al. <cit.> and Watter et. al. <cit.> learned latent spaces where the learned dynamics were also locally linear and time varying. This allowed for simple stochastic policies to be effectively used for downstream control tasks. In contrast to object-centric models, Finn et. al. <cit.> also learned image-to-image dynamics (termed “visual foresight”) and coupled it with a stochastic MPC controller for pushing manipulation tasks. Recently, “deep” Koopman methods have used deep neural networks to lift observations into a higher-dimensional latent space where the dynamics behave linearly <cit.>. Laferriere et. al <cit.> and Xiao et. al <cit.> extend deep Koopman to pixel-to-control tasks for robotics, including cartpole stabilization. Specifically, images were first embedded to lower-dimensional latent spaces using autoencoders before being lifted via multi-layer perceptrons (MLPs). Lyu et. al. <cit.> expands upon this further to use constrastive encoders for cartpole-swingup tasks. We note the similarities of deep Koopman to time-varying-latent-space-dynamics learning by Finn et. al. <cit.> and Watter et. al. <cit.>. While both methods aim to learn dynamics models for downstream controllers, our method aims to learn a linear state estimator for which images are observations. §.§ Learning Policies with Image Feedback Learning control policies directly from visual inputs has been a widely used practice in the deep-RL community  <cit.>. 
While initially performed within virtual environments <cit.>, deep-neural-network image-feedback policies have quickly been extended to real-world robotics tasks, including manipulation via diffusion policies <cit.> and quadruped locomotion over various terrains with egocentric vision <cit.>. These methods, while demonstrating impressive results, are limited by the need for large amounts of training data. To scale to learning on real-world hardware, Levine et. al. <cit.> proposed a guided policy search method for efficiently learning an end-to-end pixels-to-torques policy for manipulation tasks. Specifically, “guiding” linear-Gaussian controllers with privileged full-state feedback are optimized and used to provide supervised training data for learning the pixels-to-torques policy. Rather than learning a deep end-to-end policy, our method learns a simple Luenberger observer from the guided supervised data to achieve image-to-state estimation for a downstream state-feedback controller. §.§ Vision-Based Pose and State Estimation Deep learning has been widely applied to vision-based pose and state estimation across many robotics applications <cit.>. Applications include both external feature detection for a downstream controller to track <cit.> and estimation of the robot's own pose or state <cit.>. A commonality across these learning-based methods is the need for large amounts of data, necessitating data synthesis and sim-to-real transfer. As a result, Lu et. al. looked to differentiable rendering for both robotic pose and state estimation from images <cit.>. Specifically, Lu et. al. rendered predicted images of the robot and calculated the corresponding Jacobians for an extended Kalman filter, resulting in online robot-state estimation from a single camera <cit.>. However, this method requires a differentiable rendering model of the robot to generate predicted observations. Therefore, we aim to determine if it's possible to directly learn an image-based Luenberger observer instead of relying on an explicit observation model. §.§ Effectiveness of Linearization Over many decades, linear controllers and state estimators have proven to be powerful tools within the controls community thanks to their simplicity, ease of analysis, and surprising robustness <cit.>. Recently, under the label of state-space models (SSMs), linear models have also become popular in the deep-learning community <cit.>. Specifically, Gu et. al. <cit.> demonstrated that structured linear models in the latent spaces of deep networks can model long-range dependencies found in many applications like speech recognition, language processing, and genomics <cit.>. The most recent implementations have produced performance matching or exceeding that of widely-adopted transformers while being much more computationally efficient <cit.>. We note the strong similarity of SSMs to deep-Koopman methods <cit.>, specifically in their learning of linear dynamics in network latent spaces. In addition to deep-learning methods, non-neural linear alternatives have also been shown to be surprisingly effective in robotics <cit.> and vision-based applications <cit.>. By applying approximate Koopman operators, both Haggerty et. al. <cit.> and Bruder et. al. <cit.> were able to model highly nonlinear, high-dimensional soft manipulators with no neural components. Within the context of quasi-static pile manipulation, Suh et. al. 
<cit.> showed that switched linear visual-foresight models can outperform deep counterparts in both prediction error and closed-loop control performance. However, applying these model-learning methods directly to image-based observations of open-loop unstable dynamical systems can be difficult. Koopman liftings of high-dimensional images is computationally expensive, and linear visual-foresight models has been restricted to quasi-static systems. Therefore, we aim to determine if it is possible to avoid the development of a Koopman dynamics model by directly learning an observer-based linear feedback policy. § BACKGROUND This section provides a brief review of linear control, state estimation, and Koopman-operator theory. We refer the reader to the substantial existing literature on these topics for further details <cit.>. §.§ Linear Control We aim to describe our controlled robot with a discrete-time linear time-invariant dynamics model, x_k+1 = Ax_k + Bu_k, where x_k+1, x_k ∈^n are the robot states at time steps k+1 and k respectively; u_k ∈^m are the control inputs (i.e., actions) at time step k; and A ∈^n × n, B ∈^n × m are the dynamics coefficient matrices. Using the dynamics described by (<ref>), we can design a linear state-feedback controller to drive the robot to the origin: u_k = -Kx̂_k, where K ∈^m × n is the controller gain matrix. As a particular example, K could be determined by solving an LQR problem using the Ricatti equation <cit.>. To evaluate the stability of the controller, we substitute (<ref>) into (<ref>) to obtain an expression for the closed-loop dynamics: x_k+1 = (A-BK)x_k, where we can see that x_k will converge to the origin if A-BK is stable. This stability condition for the closed-loop dynamics can be expressed as, |eigvals[A-BK]| < 1, where |eigvals[A-BK]| are the magnitudes of the eigenvalues of the closed-loop dynamics. §.§ Linear State Estimation As seen in (<ref>), the implementation of a linear state-feedback controller requires knowledge of the full state, x_k. However, this is rarely the case; we usually only have access to information from the robot's sensors. For now, we will assume that the sensors only provide partial state information described by the following observation model: y_k = Cx_k, where y_k∈^l are the observations (e.g., sensor readings, images, etc.) at time step k and C ∈^l × n is the observation matrix. An observer can then be designed to first predict states using the dynamics, (<ref>), before correcting the predictions using sensor observations to provide the final full-state estimates. This is the core idea behind a Luenberger observer (i.e. linear state estimator): x̂_k+1 = Ax̂_k + Bu_k_prediction + L(y_k+1 - ŷ_k+1)_correction, where x̂_k+1 are the estimated state output by the observer at time step k+1; ŷ_k+1 are the corresponding predicted observations; y_k+1 are the corresponding real-world observations (i.e., sensor readings on hardware); and L ∈^n × l is the observer gain. As an example, a Kalman filter can be used to design L to properly estimate x̂_k+1 under Gaussian noise. We refer to the combined controller-and-observer system as an output-feedback policy. Similarly to K, L can also be calculated to drive the state estimation errors, x_k - x̂_k, asymptotically to zero for a given linear feedback controller. We first account for the controller by substituting (<ref>) into (<ref>): x̂_k+1 = (A-BK)x̂_k + L(y_k+1 - ŷ_k+1). 
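Before this substitution is carried one step further below, the linear machinery reviewed so far can be made concrete with a short numerical sketch: a discrete-time LQR gain obtained by iterating the Riccati recursion, followed by the closed-loop eigenvalue check of the stability condition above. The double-integrator model, the 60 Hz time step, and the cost weights are illustrative assumptions, not the cartpole model used later in the paper.

import numpy as np

def dlqr(A, B, Q, R, iters=500):
    # Infinite-horizon discrete LQR gain via fixed-point iteration of the Riccati recursion.
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Illustrative double-integrator dynamics (dt = 1/60 s, matching a 60 Hz camera loop).
dt = 1.0 / 60.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

K = dlqr(A, B, Q, R)
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print("K =", K)
print("|eig(A - BK)| =", np.abs(closed_loop_eigs))  # all magnitudes < 1 => stable closed loop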
Then, we substitute (<ref>) and (<ref>) into (<ref>) to further simplify the Luenberger observer, x̂_k+1 = (I-LC)(A-BK)x̂_k + Ly_k+1. This allows us to analyze the stability of the Luenberger observer in a similar manner to the linear state-feedback controller as described by (<ref>). Specifically, we can express the condition for asymptotically-converging state estimation errors as: |eigvals[(I-LC)(A-BK)]| < 1. We will define the matrix A^LK as: A^LK = (I-LC)(A-BK). §.§ Koopman-Operator Theory Koopman-operator theory has seen increased adoption in robotics in recent years <cit.>. This is due to its ability to apply model-based linear (or bilinear) control directly to nonlinear systems, which we denote as x_k+1 = f(x_k, u_k). Assuming these dynamics are control affine, the nonlinear system can be represented exactly by an infinite-dimensional bilinear system <cit.>. This bilinear Koopman model takes the form, z_k+1 = A z_k + B u_k + ∑_i=1^m u_k^i C^i z_k, where z is the infinite-dimensional lifting of the robot states, x. In practice, this is approximated by a finite-dimensional nonlinear mapping over candidate basis functions <cit.> or deep neural networks <cit.>. We represent this finite-dimensional embedding as, z = ϕ(x), where z ∈^N_z. Likewise, ϕ is constructed in such a way that the “unlifting” is linear: x = G z. § METHODOLOGY For a robot controlled via image-based feedback, the development of an appropriate observation model, (<ref>), requires a differentiable renderer <cit.> that we may not have access to. In addition, performing a gain-calculation procedure, such as solving the Ricatti equation, may be too computationally expensive to perform using dynamics models in pixel space <cit.>. Therefore, we directly learn the parameters of a linear feedback policy's Luenberger observer, which uses images from a robot-facing camera as sensory feedback. An overview of our method is shown in Fig. <ref>. §.§ Pixel-Based Luenberger Observer We begin by restating the Luenberger observer. Specifically, we substitute (<ref>) and (<ref>) into (<ref>) to express the Luenberger observer in terms of just the predicted robot states, x̂; robot control inputs, u; and output observations in the form of high-dimensional, raw pixel values from a robot-facing camera, y^p: x̂_k+1 = (I-LC)A_A^Lx̂_k + (I-LC)B_B^Lu_k + Ly^p_k+1, where A^L, B^L denote new coefficient matrices. We additionally introduce an affine term, d ∈^n, to handle linearizations about non-zero goal points: x̂_k+1 = A^Lx̂_k + B^Lu_k + Ly^p_k+1 + d, This yields the form of a pixel-based Luenberger observer whose coefficient matrices we will learn from subsequent supervised data. For simplification, we concatenate the matrices into θ = [ A^L B^L L d ]. §.§ Learning a Pixel-Based Linear Output-Feedback Policy To gather supervised data, we first leverage a predesigned “teacher” policy as shown in Fig. <ref>. In our case, the teacher is a privileged linear-quadratic regulator (LQR) policy that utilizes the robot's built-in encoders. The teacher is used to collect supervised data in the form of demonstration trajectories of the predicted robot state, X̂_1:N; control inputs, U_1:N-1; and pixel values from robot-facing-camera images, Y^p_2:N, which are used as observations by a subsequently learned linear output-feedback policy for pixels-to-torques control. 
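Once the observer matrices have been learned, running the resulting pixels-to-torques policy online reduces to one matrix-vector update per camera frame, roughly as sketched below. The state dimension, image size, get_frame(), and all matrices here are placeholders standing in for the learned and measured quantities; the update line is a direct transcription of the pixel-based Luenberger observer above.

import numpy as np

n, m, n_pix = 4, 1, 64 * 48          # state, input, flattened-image sizes (assumed)
rng = np.random.default_rng(0)

# Placeholders for quantities obtained offline: teacher LQR gain and learned observer matrices.
K   = rng.standard_normal((m, n)) * 0.1
A_L = np.eye(n) * 0.9
B_L = rng.standard_normal((n, m)) * 0.01
L   = rng.standard_normal((n, n_pix)) * 1e-3
d   = np.zeros(n)

def get_frame():
    # Placeholder for a preprocessed (blurred, thresholded, flattened) camera frame.
    return rng.random(n_pix)

x_hat = np.zeros(n)                   # initial state estimate
for k in range(120):                  # two seconds at 60 Hz
    u = -K @ x_hat                    # state-feedback controller (teacher's gain)
    y_pix = get_frame()               # raw pixel observation at step k+1
    # Pixel-based Luenberger update: predict with (A_L, B_L), correct with L @ pixels, offset by d.
    x_hat = A_L @ x_hat + B_L @ u + L @ y_pix + d
print("final estimate:", x_hat)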
We refer to this learned policy as the “student.” The collected supervised trajectories are then concatenated and stored in the form of data matrices, W = [ X̂_1:N; U_1:N-1; Y^p_2:N; 1 ] = [ x̂_1 x̂_2 … x̂_N-1; u_1 u_2 … u_N-1; y^p_2 y^p_3 … y^p_N; 1 1 … 1 ], X̂_2:N = [ x̂_2 x̂_3 … x̂_N ]. This allows us to formulate the learning of the student's pixel-based Luenberger observer described by (<ref>) as the following linear-least-squares (LLS) problem: θminimize θ W - X̂_2:N_2^2. We additionally introduce sparsity-promoting L_1 regularization to reduce overfitting, θminimize θ W - X̂_2:N_2^2 + λ ||L||_1. By combining the teacher's state-feedback controller with the newly designed Luenberger observer, the student output-feedback policy can then perform closed-loop pixels-to-torques control as shown in Fig. <ref>. We note that regression can be separately performed over states and control histories to also learn a gain, K, for a new student controller. However, if the teacher is a linear feedback policy, this is equivalent to using the teacher's controller. §.§ Enforcing Stability with a Linear-Matrix-Inequality By leveraging linear systems theory, we introduce a convex constraint to the observer-learning problem to ensure that state-estimation errors are guaranteed not to diverge. Specifically, we note that the stability condition described by (<ref>) is equivalent to the following spectral-norm condition, ||A^LK||_2 < 1, which can be rewritten as the following linear-matrix-inequality (LMI) <cit.>: [ I A^LK; A^LK I ]≻ 0, where ≻ indicates positive definiteness. Enforcing this constraint in the LLS problem requires privileged information of the teacher's linear controller gain, K, which we assume access to. Combining (<ref>) with (<ref>) extends the original LLS problem into the following convex program, which we solve using the semidefinite programming solver COSMO <cit.>: |l| θθW - X̂_2:N_2^2 + λ||L||_1, [ I A^LK; A^LK I ] ≻0, where θ and W are now defined as W = [ X̂_1:N; Y^p_2:N; 1 ], θ = [ A^LK L d ]. §.§ Extension to Nonlinear Systems via Koopman Embedding For nonlinear control and estimation, the nonlinear dynamics can be linearized about a reference trajectory before being tracked by a time-varying linear controller (e.g., time-varying LQR) and estimator. However, doing so requires learning gains for every time step, K_k and L_k, which can become computationally expensive and data-inefficient with high-dimensional image-based observations. Therefore, we introduce a Koopman-based extension to the pixel-based linear output-feedback policy that only requires learning a single Luenberger gain, L. To do so, we start with a pre-specified choice of Koopman embedding and corresponding unlifting, z_k = ϕ(x_k), x_k = Gz_k. We then replace the linear dynamics (<ref>) with the bilinear Koopman dynamics (<ref>) in our Luenberger Observer (<ref>). Doing so yields the following Luenberger observer w.r.t. z: z_k+1 = A z_k + B u_k + ∑_i=1^m u_k^i C^i z_k + L(y_k - ŷ_k), which can be further simplified in a similar manner to (<ref>). An additional affine term can also be introduced, yielding: z_k+1 = A^L z_k + B^L u_k + ∑_i=1^m u_k^i C^L, i z_k + Ly_k+1 + d. 
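Before the analogous regression over lifted states is written down, the unconstrained core of the original observer-learning problem can be sketched directly with ordinary least squares. The L1 penalty and the stability LMI are omitted here (they would require an iterative or semidefinite solver such as COSMO or an equivalent), and the demonstration data are random stand-ins, so the sketch only illustrates how W and the target matrix are assembled and how theta = [A^L B^L L d] is recovered.

import numpy as np

rng = np.random.default_rng(1)
n, m, n_pix, N = 4, 1, 256, 400       # sizes are illustrative assumptions

# Stand-ins for teacher demonstration data: states X_hat, inputs U, pixel observations Yp.
X_hat = rng.standard_normal((n, N))
U     = rng.standard_normal((m, N - 1))
Yp    = rng.standard_normal((n_pix, N - 1))      # frames y^p_2 ... y^p_N

# Data matrices of the regression: theta @ W should reproduce X_hat[:, 1:N].
W = np.vstack([X_hat[:, :-1], U, Yp, np.ones((1, N - 1))])
target = X_hat[:, 1:]

# Ordinary least squares for theta = [A_L  B_L  L  d]: solve W^T theta^T = target^T column-wise.
theta, *_ = np.linalg.lstsq(W.T, target.T, rcond=None)
theta = theta.T

A_L = theta[:, :n]
B_L = theta[:, n:n + m]
L   = theta[:, n + m:n + m + n_pix]
d   = theta[:, -1]
print("shapes:", A_L.shape, B_L.shape, L.shape, d.shape)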
We may now formulate a new LLS problem in the same manner as (<ref>): θminimize θ W - Ẑ_2:N_2^2 + λ ||θ||_1, where Ẑ_2:N and W are now defined as Ẑ_2:N = [ ϕ(x̂_2) ϕ(x̂_3) … ϕ(x̂_N) ], W = [ Ẑ_1:N-1; U_1:N-1; (U^1Ẑ)_1:N-1; ⋮; (U^mẐ)_1:N-1; Y^p_2:N; 1 ] = [ ẑ_1 ẑ_2 … ẑ_N-1; u_1 u_2 … u_N-1; u^1_1ẑ_1 u^1_2ẑ_2 … u^1_N-1ẑ_N-1; ⋮ ⋮ ⋱ ⋮; u^m_1ẑ_1 u^m_2ẑ_2 … u^m_N-1ẑ_N-1; y^p_2 y^p_3 … y^p_N; 1 1 … 1 ], and θ represents the concatenated matrices of the Koopman Luenberger: θ = [ A^L B^L C^L, 1 … C^L, m L d ]. As before, this Koopman-based extension of the pixel-based Luenberger observer can be combined with the teacher's state-feedback controller (e.g., time-varying LQR) or a newly designed Koopman-based MPC controller <cit.>. The lifting and unlifting operations, (<ref>)-(<ref>), can be used to pass state information between a state-feedback controller and the Koopman-based Luenberger observer. This creates a pixels-to-torques policy that can now track trajectories for nonlinear systems. § EXPERIMENTAL RESULTS §.§ Simulation Setup We evaluate the closed-loop performance of our pixel-based linear output-feedback policy in a simulated cartpole environment. We specify two models: a nominal model, which is simplified and contains up to 5% parametric model error, and a true model, which is used exclusively for simulating the system in a “real-world” environment to evaluate the algorithm's performance. We additionally introduce Gaussian process noise applied to the control inputs of the true model. The nominal model is used to design the teacher LQG policy, which uses encoder measurements of the cartpole's configuration as observations for its state estimator. The encoder quantization is simulated as observation noise in the true model. To simulate a camera sensor for the pixels-to-torques student policy, we use Makie <cit.> to render greyscale images of the simulated cartpole. To discern these image-based observations from the true, “real-world” system, we visualize the true system in Meshcat <cit.>. < g r a p h i c s > Overview of the cartpole hardware used for real-world demonstrations of pixels-to-torques control performed by linear output-feedback policies. The cartpole hardware is actuated by a brushless motor while an opposing global-shutter, Arducam OV2311 camera captures images of the cartpole. A Jetson Nano acts as the primary computer tasked with processing the images and executing the pixel-based student policy (or corresponding teacher) before sending the control inputs to the cartpole motor. This operation is performed at 60 Hz to match the camera's maximum frame rate. Arrows indicate positive directions for the cart position, pole angle, and corresponding velocities. §.§ Simulation Results We evaluate the data-efficiency of learning the pixel-based linear output-feedback policy on cartpole-stabilization tasks as shown in Fig. <ref>. Specifically, we evaluate the policy's stabilization performance across 50 initial conditions chosen from the same distribution as the training demonstrations. As shown in Fig. <ref>, the learned pixels-to-torques policy converges to a small stabilizing error with only 20 training trajectories. In addition, the student policy achieves a 100% success rate when trained with as little as 50 trajectories as shown in Fig. <ref>. Success is defined as the cart position being within 17.6 (half the pole length) of the origin with a pole angle within 2 of upright. As shown in Fig. <ref>, we also investigate the observer gain of the pixel-based linear feedback policy. 
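One simple way to produce the per-state gain maps examined next is to reshape each row of L back to the image resolution and normalize it symmetrically, roughly as in the sketch below; the 48x64 resolution, the state names, and the random stand-in gain matrix are assumptions made only for illustration.

import numpy as np
import matplotlib.pyplot as plt

h, w, n = 48, 64, 4                      # assumed image resolution and state dimension
L = np.random.default_rng(2).standard_normal((n, h * w))   # stand-in observer gain

state_names = ["cart position", "pole angle", "cart velocity", "pole velocity"]
fig, axes = plt.subplots(1, n, figsize=(3 * n, 3))
for i, ax in enumerate(axes):
    row = L[i].reshape(h, w)
    # Normalize symmetrically so positive and negative gains are comparable across states.
    scale = max(np.max(np.abs(row)), 1e-12)
    ax.imshow(row / scale, cmap="coolwarm", vmin=-1.0, vmax=1.0)
    ax.set_title(state_names[i])
    ax.axis("off")
fig.tight_layout()
fig.savefig("observer_gain_rows.png")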
Specifically, we visualize each row of the policy's observer gain, L, as a normalized heat map to discern the pixels that contribute most to the respective state variable's estimation. Interestingly, doing so yields identifiable, best-corresponding visual features: Pixels that correspond to the cart contribute most to the cart-position estimation while the pole-tip pixels additionally contribute to the pole-angle estimation. In addition, the sign of the gain correctly corresponds to the physical location of the respective pixel. The respective velocities also have similar visual patterns. We additionally showcase the Koopman-based extension of the pixel-based linear output-feedback policy for tracking a nonlinear system as shown in Fig. <ref>. Specifically, we demonstrate the Koopman-based policy's ability to track a reference swing-up trajectory on the cartpole. The Koopman lifting consists of the cartpole states, 4th-order Fourier bases, and 6th-order Chebyshev polynomials. The reference trajectory was designed using iterative LQR before being tracked by a subsequent time-varying-LQG teacher policy for data collection. The Koopman-based policy is trained on 20 teacher demonstrations. As shown in Fig. <ref>, the policy is able to properly track the swing-up trajectory despite model mismatch and process noise. §.§ Hardware Setup We develop a physical cartpole system, shown in Fig. <ref>, to evaluate the efficacy of the pixel-based linear output-feedback policy on real-world hardware. The cartpole itself consists of a single brushless motor, a cart that rides along an aluminum track, and a pole laser cut from white acrylic. A black background is added to aid in image segmentation, and reflective components are also covered with a black cloth during runtime. Opposing the cartpole is a Jetson Nano and Arducam OV2311 camera mounted on a tripod. The camera has a global shutter to avoid rolling shutter distortion. The Jetson Nano acts as the main computer to perform online tasks, which consists of basic image processing (Gaussian blurring and Otsu thresholding) before executing the pixel-based linear output-feedback policy for state estimation and control. The system then sends control inputs to the motor at 60 Hz to match the frame rate of the camera. Image capture and processing is performed with OpenCV. §.§ Hardware Results As shown in Fig. <ref>, the pixels-to-torques policy is able to successful recover from perturbations and stabilize the real-world cartpole. Surprisingly, the policy is also robust to major occlusions, as shown in Fig. <ref>. While the occlusions do cause decreased state-estimation performance, as shown in Fig. <ref>, the policy's state-feedback controller is still able to overcome the disturbance to stabilize the cartpole. § CONCLUSIONS We have shown that it is possible to perform “pixels-to-torques” control of a highly dynamic robotic system with simple linear output-feedback policies. These pixel-based linear policies are amenable to analysis via linear systems theory while offering surprisingly effective control in the presence of model mismatch, disturbances, and visual occlusions, as demonstrated on a real-world cartpole system. We additionally introduced a Koopman-based extension of the pixels-to-torques policy for nonlinear systems. Several interesting directions for future work remain: It should be possible to apply linear visual feedback to systems with egocentric and/or multiple cameras. 
There are also several ways of combining our method with adaptive control or online learning techniques. Finally, we also plan to further explore the rich connection between pixel-based output feedback, Koopman operators, and diffusion policies. §.§.§ This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE2140739. splncs04
There are also several ways of combining our method with adaptive control or online learning techniques. Finally, we also plan to further explore the rich connection between pixel-based output feedback, Koopman operators, and diffusion policies. §.§.§ Acknowledgments. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE2140739.
http://arxiv.org/abs/2406.18883v1
20240627042827
Galaxies Ages with Redshift z=2 to 4: Stellar Population Synthesis for Candidates in FourStar Galaxy Evolution Survey
[ "Chong-yu Gao", "Martin Lopez-Corredoira", "Jun-jie Wei" ]
astro-ph.GA
[ "astro-ph.GA" ]
Martín López-Corredoira martin@lopez-corredoira.com Jun-Jie Wei jjwei@pmo.ac.cn 0000-0003-0593-936X]Chong-Yu Gao Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210023, China School of Astronomy and Space Sciences, University of Science and Technology of China, Hefei 230026, China 0000-0001-6128-6274]Martín López-Corredoira Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife, Spain PIFI-Visiting Scientist 2023 of China Academy of Sciences at Purple Mountain Observatory, Nanjing 210023 and National Astronomical Observatories, Beijing 100012, China Departamento de Astrofísica, Universidad de La Laguna, E-38206 La Laguna, Tenerife, Spain 0000-0003-0162-2488]Jun-Jie Wei Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210023, China School of Astronomy and Space Sciences, University of Science and Technology of China, Hefei 230026, China § ABSTRACT Observations of large amount of massive galaxies with relatively old populations found at high redshifts are challenging galaxy formation scenarios within standard cosmology. Precise determinations of the average age of these galaxies would be useful for the discussion of this problem. Here we carry out a better constraint of the age of 200 V-shaped SED non-AGN galaxies at redshifts 2<z<4 of the catalog of FourStar Galaxy Evolution Survey, identified by V-shape in their spectral energy distribution (SED) with a Lyman and a Balmer break. SED fitting include a main stellar population in addition to a residual younger population and extinction. The galaxies are younger at higher redshift on average. However, for the galaxies with z>2.5, we do not see a significant evolution of their average age, with all average ages of galaxies mostly remaining between 1 and 2 Gyr. Our research find that most massive galaxies (∼ 10^10 M_⊙ ) are older (typically >∼ 1 Gyr old) and formed earlier than less massive galaxies in our sample. § INTRODUCTION Massive bright galaxies with relatively old populations have been found at high redshifts <cit.>. The formation of a galaxy requires a considerable amount of time. This fact and other observations, such as the existence of distant quasi-stellar objects (QSOs) at z=7.6 with black hole mass larger than 10^9 M_⊙ <cit.>, or the presence of high metallicity at high redshifts <cit.>, or large amounts of molecular gas <cit.>, challenge the model of galaxy formation within standard cosmological model. The determination of the age of the galaxies is very important to evaluate how challenging are these high redshift (z≳ 2) massive galaxies, either with analysis of spectra or the spectral energy distribution (SED) from photometry of galaxies in several filters. Spectral analyses may give more accurate measurements, but require very large exposure times, so they are available only for few galaxies, and they are importantly affected by age-metallicity degeneracy in low signal/noise cases <cit.>. Nonetheless, there are useful studies in this direction. For instance, <cit.> used the breaks of 2640 Å and 2900 Å and the fit of the whole spectrum to the models to determine the age of a galaxy at z=1.55 to be larger than 3.5 Gyr. <cit.> used the strength of H_δ for z∼ 0.9 galaxies and the fit of the whole spectrum to the models. Their method seems also appropriate for our galaxies with redshift up to 1.2. A more accurate method of age determination, such as the use of H_γ <cit.> would need a very high signal/noise in the spectra which is not reachable for high redshift galaxies. 
However, <cit.> were able to analyze the spectra of quiescent galaxies from FourStar Galaxy Evolution Survey (ZFOURGE) using combined H_β+H_γ+H_δ absorption lines at 3<z<4, getting stellar ages ≲ 1 Gyr. Stellar population synthesis (SPS) method has been used in exploring the age and other properties from the SED of galaxies. There are several examples in the literature with remarkably high ages in comparison with the age of the Universe at the corresponding redshift. <cit.> found three old galaxies with luminosity-weighted age of 2.6 Gyr at z=2.7, 3.5 Gyr at z=2.3, and another 3.5 Gyr old galaxies at z=2.3; <cit.> used the SPS method to obtain the red compact galaxies with ages of 5.5, 3.5, and 1.7 Gyr for redshifts 1.21, 1.9, and 3.4. At much higher redshift, z≈ 6, in 13 spectroscopically confirmed star-forming galaxies, the median best-fitting age of the SED was 200–300 Myr <cit.>; a galaxy with photometric redshift 6.6–6.8 with an age ≳ 50 Myr and most likely value of a few hundred Myr <cit.>; or a galaxy at z=8.6 with 450–500 Myr and a galaxy at z=11.1 with ∼ 160 Myr <cit.>. <cit.> provided a compilation of ages of 61 galaxies with redshifts up to z≲ 4, using Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) photometry for the most distant sources. These SED fits are mostly carried out assuming one single stellar population (SSP) in the SPS, that is, assuming a passive evolving population; and constraining the age to be lower than the age of the Universe at a given redshift. <cit.> claimed that by assuming only one SSP the age of galaxies would be underestimated. This is a problem both for SED fitting or by deriving ages from spectral features such as a breaks or some lines. <cit.> proposed two SSP fitting method (it contains two stellar population components) to fit the SED of galaxies at z>2.5. The young component of this double SSP, less than 5% in mass, is consistent with a residual star formation that can also be observed at lower redshift galaxies. Their results indeed show that two SSP fitting method can describe the SED of their candidates much better than one SSP, but the uncertainties of the ages of older stellar components are high, so many galaxies should be analyzed to get a good constraint in the average age for the observed redshift. In this paper, we want to contribute to a better constraint of the age of red galaxies at redshifts 2<z<4 by analyzing more than 200 galaxies of ZFOURGE with SED fitting using two SSP method to derive the ages of young stellar components, ages of old stellar components, metallicities, redshift, the ratio of old/young population, and extinction parameters. In our fit, we do not force the fit to be younger than the age of the Universe at their corresponding redshift. rThe objective of this analysis is characterizing galaxies with V-shaped photometric SED at intermediate redshift, which will be useful in the comparison with other analyses of other higher redshift galaxies selected with the same criteria that are being observed with James Webb Space Telescope (JWST) <cit.>. Once we have the fit for each galaxy, we can calculate the average age of the galaxies in the same bin, with a much lower error bar. This article is organized as follows: In section <ref> we describe the data provided by ZFOURGE catalog, and the selection criteria we used. In section <ref>, the fitting model and the method to calculate the average age for each redshift bin are introduced. Our results are showed in section <ref>. 
Finally, a brief discussion and conclusion are made in section <ref>. § DATA §.§ ZFOURGE ZFOURGE provided a photometric catalog containing more than 70,000 galaxies, which covers the cosmic fields CDFS, COSMOS, and UDS. The observation of the galaxies in the ZFOURGE catalog involves numerous bandpass filters, with 40 filters for the CDFS field, 37 for the COSMOS field, and 26 for the UDS field. The wavelength extends from ∼0.4 μm to ∼7.9 μm <cit.>. In this catalog, the information of each source include the identification number, Right Ascension, Declination, flux (in units of μJy) for each filter along with its uncertainty and weight, signal-to-noise ratio (SNR), photometric redshift, spectroscopic redshift, and active galactic nucleus (AGN) status. The photometric redshift is obtained by fitting the spectral energy densities with EAZY package <cit.>. In this work, we mainly use the flux for each filter, SNR, and AGN status to select our candidates, and do the further analysis. §.§ Sample selection <cit.> applied the empirical selection criteria to select high-redshift galaxies based on JWST-NIRCam photometry. They selected on a `double-break' SED: no detection in HST-ACS imaging (with SNR(B_435,V_606,I_814)<2), blue in the NIRCam short-wavelength filters and red in the NIRCam long-wavelength filters, which is expected for those sources at z≥7 with Lyman break and with red UV-optical colours. The adopted colour selection criteria were [ F150W-F277W<0.7; F277W-F444W>1.0 . ] To make sure good SNR, they required SNR(F444W)>8 and limited their sample to F444W<27 AB magnitude and F150W<29 AB magnitude. However, they are designed for 7 ≤ z ≤ 9, we thus cannot apply them directly. With the photometric redshifts provided by the ZFOURGE catalog, we first calculate the rest-frame wavelength associated to each filter. After that, we apply the colour selection criteria [ FLyman - Fmed<0.7; Fmed-FBalmer>1.0 , ] where `FLyman', `Fmed', and `FBalmer' are the AB magnitudes of the rest-frame central wavelengths of the filters, λ_f_c, such that |λ_f_c-0.1216 μ m|<0.05 μm, |λ_f_c-0.26 μm|<0.05 μm, and |λ_f_c-0.4 μm|<0.05 μm, respectively. We set the average flux of filters with central wavelength <0.5 μm to be lower than the flux of FLyman, which is an approximation of the criterion SNR(B_435,V_606,I_814)<2 in <cit.>. In addition, we also require the candidates have SNR>8 to ensure good quality. Note that the AGN effects can be excluded by using the information of catalog. In Figure <ref>, we show three representative examples of SEDs, one each for the CDFS, COSMOS, and UDS fields. In this plot, the rest-frame wavelength (λ_ rest) is calculated by λ_ rest=λ_ obs/(1+z_ fitted), where z_ fitted is the redshift inferred from the fitting (see below) and λ_ obs is the observer-frame wavelength. We have 15 candidates for the CDFS field, 84 candidates for the COSMOS field, and 101 candidates for the UDS field. The total number is 200 candidates. All of them present an `V-shape' in F_λ versus λ_ obs as they have `double-break' features (i.e., Lyman and Balmer breaks). § METHODS §.§ Model fitting The physical properties of a galaxy can be obtained by fitting its SED with stellar population synthesis models. Most of the studies applied only one SSP, with one single average age, which is lower than the age of the oldest stellar population according to <cit.>. 
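Returning for a moment to the sample selection above, the filter matching and colour cuts can be summarised in a short sketch. The AB zero point for fluxes in microJansky is the standard convention; averaging all filters that fall in each rest-frame window is a simplification of the single-filter definition used in the text, and the SNR>8 and blue-side non-detection conditions are omitted, so this only illustrates the double-break criterion.

import numpy as np

def ab_mag(flux_ujy):
    # AB magnitude from flux density in microJansky.
    return 23.9 - 2.5 * np.log10(flux_ujy)

def v_shape_selected(filt_um, flux_ujy, z_phot):
    # Apply the Lyman/Balmer double-break colour cuts to one source (arrays over filters).
    rest_um = filt_um / (1.0 + z_phot)                 # rest-frame central wavelengths
    def band_mag(center, half_width=0.05):
        sel = np.abs(rest_um - center) < half_width
        if not np.any(sel) or np.any(flux_ujy[sel] <= 0):
            return None
        return ab_mag(np.mean(flux_ujy[sel]))
    m_lyman, m_med, m_balmer = band_mag(0.1216), band_mag(0.26), band_mag(0.40)
    if None in (m_lyman, m_med, m_balmer):
        return False
    return (m_lyman - m_med < 0.7) and (m_med - m_balmer > 1.0)

# Toy source: a few filters with made-up central wavelengths (micron) and fluxes (microJy).
filters = np.array([0.43, 0.55, 0.90, 1.25, 1.60, 2.15])
fluxes  = np.array([0.05, 0.30, 0.28, 0.40, 2.50, 3.00])
print(v_shape_selected(filters, fluxes, z_phot=2.8))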
It has been proved that the two SSP method (it contains a young and a relative old stellar population components) gives better analysis for the V-shaped candidates <cit.>. Thus, in our analysis we use the two SSP method to fit the SEDs of the candidates. The stellar population synthesis model we used is GALAXEV <cit.>, which computes the spectral evolution of stellar populations from 91 Å to 160 μ m at rest. GALAXEV contains ten ages (0.005, 0.025, 0.10, 0.29, 0.64, 0.90, 1.4, 2.5, 5.0, and 11 Gyr) and three metallicities ([M/H]=-0.4, 0, and +0.4), so there are 30 instantaneous-burst templates that can be used to fit the galaxy spectra <cit.>. In this work, we use Calzetti's law to consider the extinction effects. Calzetti's law was empirically derived from a nearby starburst galaxies containing small Mgellanic cloud-like grains <cit.>. This law is suitable for galaxies at high redshifts, in which stars are formed within the central regions. Here the adopted extinction parameter is R_V=4.05. Our fitting model can be expressed by the following equation: F_ theo(λ)= L_0/4π d_L(z)^2(1+z)× [⟨ L_ SSP( age _ old,[M/H], A_V;λ /(1+z))⟩_T +A_2⟨ L_ SSP( age _ young,[M/H], A_V;λ /(1+z)) ⟩_T ] , where d_L(z) is the luminosity distance, age_old, age_young, [M/H], A_v, A_2, and z, respectively, represent the age of old stellar population, the age of young stellar population, metallicity, the extinction in V-filter, the ratio of young/old population, and redshift. The flux provided by the catalog is in units of μ Jy, so we need to convert it into erg s^-1 cm^-2 Å^-1. The conversion is carried out by means of F_λ( erg s^-1 cm^-2 Å^-1)=2.998 × 10^-14 F_ν(μ Jy) ×Δν/Δλ, where Δλ=[∫ d λ T(λ)]^2/∫ d λ T^2(λ), Δν=[∫ d ν T(ν)]^2/∫ d ν T^2(ν), and T(λ) is the transmission curve for the corresponding filter. The model fitting is carried out by minimizing the χ^2, χ_red ^2=1/N_dof ∑_i=1^N [F_obs(λ_i)-F_theo (λ_i)]^2/σ^2(λ_i). Here N_ dof=N-f, where N is the number of data points and f is the number of free parameters (f=7 in our case). Also, σ _i is the error of the observed flux F_obs(λ_i). Although the distance luminosity d_L is estimated in flat Λ CDM model with cosmological parameters H_0=70 km Mpc^-1 s^-1 and Ω _m=0.3, the SED fitting is totally independent of the cosmological model, because d_L only affects the amplitude of the SED, not its shape. After obtaining the best-fit parameters through the minimum χ^2 statistic, the error bars of the parameters can be derived through χ_red ^2<χ_red,min^2[1+f(N_dof ,CL)/N_dof ] . where f(N_dof ,CL) is a function of N_ dof and the confidence level (CL) given in the corresponding table of χ ^2 statistics. In cases with the minimum χ^2_ red, min 1, the errors would be underestimated or overestimated. We thus add a factor χ^2_ red, min in the right-hand side of Equation (<ref>). This is equivalent to assume that our fit is a “good fit” (with χ^2_ red, min≈ 1) and multiplying the error bars by some factor to get it. Figure <ref> displays the comparison between the redshifts inferred from our model fitting (z_ fit) and those provided by the catalog (z_ cat). One can see from this plot that they are in good agreement within the error bars. §.§ Stellar Mass Estimation In our model, there are two stellar components in the galaxies. In order to estimate the total mass of a galaxy, each component should be considered, though we are dominated by the mass of the old population. 
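A minimal numerical sketch of the fit statistic and the unit conversion used above is given below. For simplicity the band-averaged ratio Delta_nu/Delta_lambda is approximated by c/lambda_c^2 at the filter centre instead of integrating a transmission curve, so the constant differs in form (though not in meaning) from the expression quoted above; the toy fluxes and the 5% model perturbation are assumptions made only for the example.

import numpy as np

def ujy_to_flam(f_ujy, lam_angstrom):
    # microJansky (per Hz) -> erg s^-1 cm^-2 A^-1 (per Angstrom), via F_lambda = F_nu * c / lambda^2.
    c_angstrom = 2.998e18                       # speed of light in Angstrom/s
    f_nu_cgs = f_ujy * 1.0e-29                  # 1 microJy = 1e-29 erg s^-1 cm^-2 Hz^-1
    return f_nu_cgs * c_angstrom / lam_angstrom**2

def reduced_chi2(f_obs, f_model, sigma, n_free=7):
    # Reduced chi-square of an SED fit with n_free free parameters (7 in the two-SSP model).
    n_dof = len(f_obs) - n_free
    return np.sum(((f_obs - f_model) / sigma) ** 2) / n_dof

# Toy check: 1 microJy at 1.6 micron is ~1.2e-19 erg s^-1 cm^-2 A^-1.
print(ujy_to_flam(1.0, 16000.0))

# Toy fit: 20 synthetic bands, model perturbed by 5%, 10% flux errors.
lam = np.linspace(4000.0, 45000.0, 20)
obs = ujy_to_flam(np.linspace(0.1, 3.0, 20), lam)
model = obs * (1.0 + 0.05 * np.random.default_rng(3).standard_normal(20))
print("chi2_red =", reduced_chi2(obs, model, sigma=0.1 * obs))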
We estimate the stellar mass of a galaxy by means of the following procedure: First, we calculate the rest-frame V-band luminosity L_V by the rest-frame V-band flux (corrected of extinction). Then, according to equation <ref>, the young population contributes L_V,young = L_V × A_2/(1+A_2), and the old population contributes L_V,old = L_V × 1/(1+A_2). <cit.> calculated the stellar mass-to-light ratios with different ages by GALAXEV assuming the solar metallicities Υ _V(age), which are used here to calculate the mass of the young and old population of a galaxy, neglecting the metallicity dependence: M=M_young+M_old, where M_young and M_old represent the young and old components masses. That is, M_x=L_V,x×Υ _V(age_x), x could be old or young. The mass distribution of our datasets is shown in Fig. <ref>. We note that the dispersion of masses shown in Fig. <ref> is larger than real, because there is a larger error in the mass determination due to the errors of ages. The average mass of our sample of 200 galaxies is 1.5 × 10 ^9 M_⊙, with range 10^7 to 10^11 M_⊙. §.§ Average and median ages of galaxies at different redshifts After obtaining the best-fit parameters age_old and their corresponding uncertainties for 727 galaxies, We choose those galaxies with inferred redshift z_ fit>2, and end up with 200 galaxies. Firstly, we divide our selected galaxies into 6 groups with redshifts from low to high, with each group containing the similar number of galaxies. The redshift bins are 2.00–2.05, 2.05–2.11, 2.11–2.23, 2.33–2.37, 2.37–2.66, and 2.66–4.14, respectively. Assuming the positive and negative errors are Gaussian (an asymmetrical Gaussian when the positive and negative error are different), the probability P of each galaxy to have age x is P_i(x)=√(2/π)/σ_l+σ_r×{[ exp[-(x-τ _i)^2/2 σ_l^2] if x ⩽τ _i;; exp[-(x-τ _i)^2/2 σ_r^2] if x>τ _i. ]. where τ _i means the ith age_old, σ_l and σ _r, respectively, represents the negative and positive 1 σ errors. By taking the cumulative product over all distributions, we can obtain the distribution of average age: P_aver(x)=K×∏ _i=1^nP_i(x) , where n is the number of galaxies in each redshift bin and K is a normalization constant. Afterwards, by identifying the 0.16, 0.5, and 0.84 quantiles of the overall distribution, we can obtain the lower limit, median, and upper limit of its 1 σ confidence interval. The above calculation assumed that the galaxies in the same bins have similar ages, this underestimates the uncertainties of average ages. Considering the differences of the ages of galaxies in the same bin, the uncertainties should be multiplied a factor √(χ^2_i/n_i-1), with χ ^2_i = ∑ _k=1^n_i(log(age)_k-log(age)_i,aver)^2/σ _log(age)_k^2, where i stands for the ith bin, n_i is the number of galaxies in the bin, log(age)_i,aver is the average age of ith bin calculated above, and σ _log(age)_k represents the log(age) uncertainties of kth galaxy in ith bin. Median values are effected by the outliers more slightly, so we also calculate the median ages for each redshift bin and mass bin. The statistical error bars are calculated in 68% C.L. (1 σ uncertainties): the upper or lower limits correspond to the positions 0.5N ± 0.47√(N) of the ordered set of N data. Presenting results through average and median values could avoid anomalies caused by systematic errors in the fitting method. 
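The binning procedure just described can be sketched as follows: each galaxy contributes an asymmetric-Gaussian age likelihood, the bin distribution is the normalised product of these likelihoods, and the 0.16/0.5/0.84 quantiles give the average age and its 1-sigma interval. The age grid, the toy measurements, and the omission of the sqrt(chi^2/(n-1)) inflation factor are simplifying assumptions of this illustration.

import numpy as np

def asym_gauss(x, tau, sig_lo, sig_hi):
    # Piecewise (asymmetric) Gaussian probability of age x for one galaxy, as defined above.
    norm = np.sqrt(2.0 / np.pi) / (sig_lo + sig_hi)
    sig = np.where(x <= tau, sig_lo, sig_hi)
    return norm * np.exp(-((x - tau) ** 2) / (2.0 * sig**2))

def bin_average_age(tau, sig_lo, sig_hi, grid=np.linspace(0.01, 13.0, 5000)):
    # Product of the per-galaxy age likelihoods in one redshift bin,
    # returning the 0.16 / 0.5 / 0.84 quantiles of the resulting distribution.
    log_p = np.zeros_like(grid)
    for t, sl, sh in zip(tau, sig_lo, sig_hi):
        log_p += np.log(asym_gauss(grid, t, sl, sh) + 1e-300)   # product -> sum of logs
    p = np.exp(log_p - log_p.max())
    cdf = np.cumsum(p)
    cdf /= cdf[-1]
    return [grid[np.searchsorted(cdf, q)] for q in (0.16, 0.5, 0.84)]

# Toy bin: best-fit old-population ages (Gyr) with asymmetric 1-sigma errors.
tau    = np.array([1.2, 1.8, 0.9, 2.5, 1.5])
sig_lo = np.array([0.4, 0.6, 0.3, 0.9, 0.5])
sig_hi = np.array([0.6, 0.9, 0.5, 1.2, 0.7])
lo, med, hi = bin_average_age(tau, sig_lo, sig_hi)
print(f"bin average age = {med:.2f} (-{med - lo:.2f} / +{hi - med:.2f}) Gyr")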
§ RESULTS With the inferred ages of old stellar populations in 200 galaxies having z_ fit>2, we calculate the average age and median age of the galaxies for each redshift bin by using the method described in Section <ref>. Figure <ref> shows the age-redshift diagram for 6 redshift bins. In this plot, we also illustrate the age of the Universe as a function of redshift (estimated using flat ΛCDM; solid curve). It is obvious that the average and median galaxy ages is younger than the age of the Universe at the same redshift. One could find that the median and average galaxy ages are very similar in the figure <ref> to figure <ref>. In order to study the correlation between the average age and z_ fit, we perform a linear fit, t_ gal=A+B× z_ fit, to the data points. Here t_ gal denotes the average age of the galaxies.The best-fit coefficients are A=0.62_-0.42^+0.42 Gyr and B=0.10_-0.17^+0.18. The slope B is consistent with 0 at the 1σ confidence level, implying that there is no significant correlation between t_ gal and z_ fit. There exist some selection effects such as the Malmquist bias, which leads to the preferential detection of galaxies with higher luminosity. In order to check the impact of the Malmquist bias on our results, we will only consider those sufficiently bright galaxy sources with rest frame V-band luminosity L>5.75× 10^42 erg s^-1. Figure <ref> shows the average ages of 130 galaxies with L>5.75× 10^42 erg s^-1 as a function of redshift. We can see that high-L galaxies are observable even at high redshifts between 3 and 3.5. Since there is no significant difference between Figures <ref> and <ref>, we conclude that the selection effect is not important in this research. Similarly, a simple linear-fit analysis for the data points in Figure <ref> gives A=1.06_-0.52^+0.52 Gyr and B=-0.06_-0.19^+0.19. The slope B is also consistent with 0 at the 1σ confidence level. If we focus on 70 highest-redshift galaxies in our sample and divide them into 7 redshift bins, the range of z_ fit is from 2.35 to 4.14. Their average galaxy ages are presented in Figure <ref>. We also perform a linear fit to the average age-redshift measurements, and obtain the results of A=0.07^+1.12_-1.12 Gyr and B=0.28_-0.42^+0.40. Again, the slope B is in line with 0 at the 1σ confidence level. In Table <ref>, we summary the average galaxy ages in different redshift bins for the samples of 200 galaxies, 130 galaixes with L>5.75× 10^42 erg s^-1, and 70 highest-redshift galaxies, respectively. In addition, the corresponding cosmic ages are also listed in Table <ref> for comparisons. We define the median formation time of the galaxy as t_ form=t(z)-t_ gal, where t(z) is the age of the Universe at the corresponding redshift bin. Figure <ref> shows the median galaxy formation times as a function of cosmic time (or redshift) for the samples of 200 galaxies (orange dots), 130 galaxies with L>5.75× 10^42 erg s^-1 (green dots), and 70 highest-redshift galaxies (blue dots), respectively. Except for three abnormal points, the values of t_ form range from ∼ 1 Gyr to ∼2.5 Gyr, which are consistent with the results of <cit.>. Also, one can see from Figure <ref> that for different galaxy samples, the average galaxy formation times in similar redshift bins are similar, which means that the selection effects have minimal influences on our results. Galaxy with stellar age of Y and stellar mass of X means the galaxy formed X mass in Y time. So the star formation rate of it is X/Y. 
For our result of figure <ref> to figure <ref> we could estimate the average SFR for each bins by SFR=<mass>/<age>, where <mass> and <age> are the average mass and average age of the galaxies in the correspond bin.The average SFR for each bin are calculated for Fig. <ref> to <ref>, which fall in the range from 10^-0.02 to 10^0.7 M_⊙ / Yr, consist with the result from <cit.>. A similar analysis is carried out to investigate the correlation between the ages and stellar masses of galaxies. The 200 galaxies are divided into 6 bins according to their mass, with 33 or 34 galaxies in each bin. Then, the median age of each bin is calculated by the method mentioned above. The result is presented in Fig. <ref>. Also, a similar linear fit is carried out to test the relation between the stellar mass and age. The linear fitting result of the type t=A+Blog M is A=10.23_-1.71^+1.92, B=-0.95_-0.19^+0.22, the slope B<0 within 10σ confidence level, indicating a negative correlation between stellar mass and formation time (t_form). This means that the more massive galaxies tend to be formed earlier, but our samples are not complete samples, we could not obtain a stronger conclusion. § DISCUSSION AND CONCLUSION <cit.> proposed a method to estimate the ages of galaxies by using two SSP fitting, especially indicated for SEDs with V-shape typical of massive galaxies with Lyman break and Balmer break, as those found by <cit.> at z≳ 6. In this work, we applied the selection criteria of <cit.> into the sources provided by ZFOURGE catalog at redshifts between 2 and 4, excluding AGNs. We calculated the average ages of the galaxies in bin of different redshifts. We obtained ages for 200 candidate galaxies, which we divided our galaxies into 6 reshift bins, and calculated the average ages for each bin by the method introduced in sect. <ref>. They are compatible to be lower than the age of the Universe in the standard ΛCDM model. In addition, we only considered the candidates with rest R-band luminosity larger than 5.75× 10^42 erg s^-1 (observable at any redshift, that is, free from Malmquist bias) and we got a similar result, which means that the luminosity selection does not have significant effects on our research. When we focused on the highest-redshift candidates in our samples, a similar result is obtained. In general, our average ages of the galaxies are between 0.52_-0.29^+0.35 Gyr and 2.69_-0.69^+0.94 Gyr (at redshifts 2.39 and 2.75, respectively). Although the ages of galaxies obtained in our sample are roughly consistent with the age of the Universe, there are bins with significantly high average ages. For example, the fourth bin in Figure <ref> has the average age of 2.69_-0.64^+0.96 Gyr, while the corresponding cosmic age is 2.75 Gyr; the fourth bin in Fig.  <ref> has the average age of 2.10_-0.46^+0.60 Gyr, while the corresponding cosmic age is 2.80 Gyr. This means that the galaxies were formed when the age of the Universe was only 500 Myr. Also, in the paper, a negative correlation between the galaxies' stellar masses and formation time is obtained in our sample. Note however that the systematical error caused by the two SSP method may correspond to the irregular relation between the redshifts and ages obtained by our results <cit.>. Some previous works have studied the ages of distant red galaxies in the redshift range between z=1.2 and z=3.4, the obtained ages of candidates are within 1.7 Gyr to 5.5 Gyr <cit.>, which are consistent with our results, though the selection criteria may vary the result. 
Average ages larger than one Gyr are also obtained if extremely red quiescent and massive galaxies at 2.5<z<3.8 are selected <cit.>. These ages of oldest stellar population in galaxies are useful to constrain the cosmological model <cit.>. In general, from both Figures <ref> and <ref>, we could see that the oldest galaxies in the range 2<z<4 occur at redshift near 2, that is, the galaxies are younger at higher redshift, as expected. However, we do not see a clear variation of age of galaxies proportional to the variation of age of the universe for z>2.5: In Figure <ref>, we do not see a correlation of age with redshift, with all average ages of galaxies remaining between 0.5 and 2 Gyr. Since the launch of JWST, more and more massive galaxies in the high-z Universe are being discovered. <cit.> have applied this methodology of two SSP fitting to 13 JWST galaxies with the same SED characteristics at ⟨ z⟩=8 and got an average age >0.9 Gyr (95% CL), which presents a tension with the age of the Universe under ΛCDM cosmology, and it is within the same range of ages than the galaxies at 2.5<z<4.0. Within the samples selected by the color criterion of <cit.>, there are no significant age evolution of their average or median ages between the redshifts, it is reasonable to suppose that the ages of double peak V-shaped galaxies do not have evolution in higher redshift, which means that if the galaxies with similar properties are found in high redshift (such as higher than 8), there may be a challenge to the present-day galaxy formation model. The data used in this article were downloaded from URL <https://cdsarc.cds.unistra.fr/viz-bin/cat/J/ApJ/830/51#/browse>. ZFOURGE is a 45 night imaging program taken with the FourStar near-infrared camera on Magellan, which combines deep imaging in J1, J2, J3, Hs, Hl, and Ks with public legacy surveys to fit Spectral Energy Distributions for >60,000 galaxies <cit.>. MLC’s research is supported by the Chinese Academy of Sciences President’s International Fellowship Initiative grant number 2023VMB0001. JJW’s research is supported by the Natural Science Foundation of China (grant No. 12373053), the Key Research Program of Frontier Sciences (grant No. ZDBS-LY-7014) of Chinese Academy of Sciences, and the Natural Science Foundation of Jiangsu Province (grant No. BK20221562). aasjournal
http://arxiv.org/abs/2406.18869v1
20240627034955
Chemical Continuous Time Random Walks under Anomalous Diffusion
[ "Hong Zhang", "Guohua Li" ]
physics.chem-ph
[ "physics.chem-ph" ]
Tunneling valley Hall effect induced by coherent geometric phase W. Zeng July 1, 2024 ================================================================ § ABSTRACT Chemical master equation plays an important role to describe the time evolution of homogeneous chemical system. In addition to the reaction process, it is also accompanied by physical diffusion of the reactants in complex system that is generally not homogeneous, which will result in non-exponential waiting times for particle reactions and diffusion. In this paper we shall introduce a chemical continuous time random walk under anomalous diffusion model based on renewal process to describe the general reaction-diffusion process in the heterogeneous system, where the waiting times are arbitrary distributed. According to this model, we will develop the systematic stochastic theory including generalizing the chemical diffusion master equation, deriving the corresponding mass action law, and extending the Gillespie algorithm. As an example, we analyze the monomolecular A↔ B reaction-diffusion system for exponential and power-law waiting times respectively, and show the strong fractional memory effect of the concentration of the reactants on the history of the concentration in power-law case. § INTRODUCTION Anomalous diffusion behaviors have attracted great interest in various fields of physics, biological, chemical, and environmental sciences. <cit.> The main characteristics of anomalous diffusion is that the mean square displacement scales as a nonlinear power law in time, i.e., <Δ x^2>∼ t^β. Faster than linear scaling (β>1) is corresponding to as superdiffusion, slower than linear scaling 0<β<1 is referred to as subdiffusion, and when β=1 it is normal diffusion. The continuous time random walk (CTRW) with long-tailed waiting time has been often evoked as a suitable description of anomalous diffusion. In this model, the particle begins its jump at time t = 0 and then traps in a position for a random waiting time until it jumps away. In recent years there has been considerable effort to investigate the chemical reactions (e.g., reversible or irreversible conversion to a different species <cit.>, spontaneous evanescence <cit.>, fluorescence recovery after photobleaching <cit.>, or linear reaction dynamics <cit.>, etc.) under anomalous diffusion using different continuous time random walks models. In 2010, Fedotov considered three CTRW models: model A, B, C with nonlinear reactions under varying assumptions for the effect of reaction on random waiting time <cit.>. In 2013, Angstmann et al. generalized the model B where it is assumed that the created particles will draw a new waiting time to be in space- and time-dependent force fields, and applied this generalized model with space-dependent traps to obtain a fractional Fokker-Plank equation (FFPE) with space-dependent anomalous exponential for an ensemble of particles <cit.>. However, for the complex reaction-diffusion system, such as the system including more than three reactions each of which has many reactants and products performing anomalous diffusion, the CTRW approach is out of work due to the technical difficulty that it is quite hard for the balance equation to remember all the effects of so many complex reactions on the probability evolution for every particle and of coupling relations between the reactions and anomalous diffusion. 
Fortunately, there is a chemical master equation (CME) which can provide a stochastic approach to capture the time behavior of a spatially homogeneous chemical system in particle number space. The advantage of the CME is that it can deal with the system including many reactions, and has a firmer physical basis than the deterministic reaction-rate equations. Besides, the Gillespie algorithm based on the spatially homogeneous master equation is proved to be straightforward to implement on a digital computer. <cit.> Recently a generalized chemical diffusion master equation (CDME) is developed to describe the normal diffusion process for a given n-particle probability density coupled by the reaction dynamics similar in form to a chemical master equation by using creation and annihilation operators.<cit.> But up to now it is still a challenge to involve anomalous diffusion into CME. On the other side, the system that both CME and CDME consider is the well-mixed chemical reaction system in which each reaction is generated by Poisson process, that corresponds to the exponential distributed waiting time. <cit.> However, in various real-world systems the inter-reaction times typically obey non-Poissonian distribution. <cit.> In particular, inter-reaction times typically obey long-tailed distributions. Examples of long-tailed distributions of inter-reaction times include population dynamics, epidemic processes, finance, and so on. The non-Poissonian chemical process defines an inhomogeneous chemical continuous time random walk (CCTW) in particle number space. <cit.> Based on the CCTW Zhang et. al. proposed a stationary generalized chemical master equation which can model intracellular non-Markovian intracellular processes with molecular memory. [17-20] In this paper we will consider the complex heterogeneous system involving both the arbitrary non-Poisson chemical process and the anomalous diffusion process of the particles. We shall overcome the technical difficulties to develop a systematic stochastic approach for the generalized CDME, the generalized mass law action, and the generalized Gillespie algorithm based on renewal processes used in CCTW, where inter-event times are independently generated from a given distribution <cit.>, and on the continuous time random walk model, where it is assumed that the reactants can move from one space to the other, and at each space or site the reactants can react according to the chemical reaction law, but do not react in the process of jumping. Moreover, we will take the anomalous monomolecular A↔ B reaction-diffusion system as an example, and obtain the corresponding mass action law for exponential and power-law distributed waiting times respectively, and show the strong fractional memory effect of the concentration of the reactants on the history of the concentration in the case of power-law distribution. § THE RENEWAL PROCESS We assume that there are γ different species that participate in m different reactions and can diffuse in N positions. The chemical species are denoted by S_j, where j = 1,...,γ; the υ positions are denoted by x,y,z,..., the corresponding particle numbers at x at time t are denoted by k_xj. The state vector of particle numbers is a random υγ-dimensional vector k=(k_x1,k_x2,...,k_xγ,...,k_z1,k_z2,...,k_zγ)^T, where the superscript T denotes the transpose. A single event of reaction and diffusion are characterized by the reaction and diffusion waiting times. 
We denote the probability density function (PDF) for the waiting time τ_ix of reaction i at x by p_ix(t) and denote the PDF of the waiting time τ_dlx for the particle l diffusing away from x by p_dlx(t), respectively. We now consider a special renewal process where N(t)=sup{n∈ N: T_n≤ t} is the number of renewal in [0,t] with N(0)=0 if X_1<t, where T_n is the time after n renewal step and is the sum T_n=τ_1+τ_2+...+τ_n, n≥ 1 where τ_1, τ_2,...,τ_n are a sequence of nonnegative independent distributions random variables. For each τ_n, n≥ 1 is the waiting time for the n-th update. It is assumed to be the minimum time of all waiting times of reactions and diffusions, that is, τ_n=min{τ_ix,τ_dlx} (i=1,2,...,m, l=1,2,...,γ, x=1,2,...,υ). One can find N(t)=N_r(t)+N_d(t) where N_r(t) is the number of reaction renewals in [0,t], and N_d(t) is the number of diffusion renewals in [0,t]. The state vector will change for each reaction renewal and each diffusion renewal. During a reaction i at x the loss (gain) in particle number k_xj is denoted by r_ixj∈ N (p_ixj∈ N). These coefficients are typically, but need not be, given by the law of mass action. Thus, the impact of reaction i on the state space can be expressed as <cit.> ∑_jr_ixjS_j→∑_jp_ixjS_j The stoichiometric coefficients s_ixj = p_ixj-r_ixj denote the net change in each species j due to each reaction i at space x, and form the reaction transforming vector for the n-th step s_ix=(0,...,0,s_ix1,s_ix2,...,s_ixγ,0,...,0). For each particle j, its space at time t is Z_j(t)=Z_j(0)+∑_n=1^N_d_jξ_n where ξ_n is the jumping length for the n-th jumping step of particle j, N_d_j(t) is the number of diffusion renewals of particle j in [0,t] and the relation with N_d(t) is ∑_jN_d_j(t)=N_d(t). If in one step an i reactant moves from x to y, then after this step the number k_xl of the reactant l reduces 1 at x and k_yl adds 1 at y, so we can find the diffusion transforming vector s_dlxy=0,...,-1,0,...,1,0,....0 xl,......,yl (all except (x-1)γ +l-th and (y-1)γ +l-th components are 0). §.§ Distribution of the renewal waiting time We now investigate the distribution of the renewal waiting time τ_n. Recall that p_ix(t) is the waiting time PDF for one group of reaction i taking place at one site x, so the survival probability for such group does not react for time interval [0,t] is ∫_t^∞p_ix(t')dt'. Denoting the whole combinatorial number of groups of reactants for reaction i at space x by h_ix(k) <cit.>, the PDF for only one group of reactants occurring at space x after waiting time t is as following ψ_ix^r(t,k)=h_ix(k)p_ix(t)(∫_t^∞p_ix(t')dt')^h_ix(k)-1, while the probability that the reaction i at space x does not occur after waiting t is Ψ_ix^r(t,k)=(∫_t^∞p_ix(t')dt')^h_ix(k). Analogously, ψ_lx^d(t,k)=k_xl p_dlx(t)(∫_t^∞p_dlx(t')dt')^k_xl-1 is the pdf that reactant l at space x occurs after waiting t, and Ψ_lx^d(t,k)=(∫_t^∞p_dlx(t')dt')^k_xl is the probability that all the reactants l at space x does not diffuse away after waiting time t. Note that in the above descriptions we assume that all random waiting times are independent, but once one reaction or diffusion occurs all waiting times in the system will be renewed. If τ_n is for a reaction i at x, then its PDF is ϕ_ix^r(t,k) = ψ_ix^r(t)Π_j≠ iΨ_jx^r(t,k)Π_y≠ x;i=1,2,...,mΨ_iy^r(t,k) ·Π_x=1,2,...,υ;l=1,2,...,γΨ_lx^d(t,k). 
which denotes that when the system waits t the reaction i at space x first occurs but none of other reactions and diffusions have happened before this reaction, that is, the waiting time of this reaction in this renewal step is the minimum waiting time. Note that when the diffusion is not considered, and the chemical process focuses one space x, ϕ_ix^r(t,k) becomes ϕ_ix^r(t,k)=ψ_ix^r(t)Π_j≠ iΨ_jx^r(t,k), which is the non-Markovian case in Refs.<cit.>. But if τ_n is the waiting time of a diffusion i from x, then its PDF is ϕ_lx^d(t,k) = ψ_lx^d(t,k)Π_l'≠ lΨ_l'x^d(t,k) Π_y≠ x;l=1,2,...,γΨ_ly^d(t,k) ·Π_i=1,2,...,m;x=1,2,...,υΨ_ix^r(t,k) which denotes that when the system waits t the diffusion of i from x first occurs but none of other reactions and diffusion have happened before this diffusion, that is, the waiting time of this event in this renewal step is the minimum waiting time. Besides, we also introduce Φ(t,k) = Π_x=1,2,...,υ;i=1,2,...,mΨ_ix^r(t,k) ·Π_x=1,2,...,υ;l=1,2,...,γΨ_lx^d(t,k), which is the probability that there is no diffusion or reaction in the system in the time interval [0, t]. One can find that -∂Φ(t,k)/∂ t = ∑_x=1^υ∑_i=1^mϕ_ix^r(t,k)+∑_x=1^υ∑_l=1^γϕ_lx^d(t,k), which means Φ(t,k) = ∫_t^∞∑_x=1^υ∑_i=1^mϕ_ix^r(t',k)+∑_x=1^υ∑_l=1^γϕ_lx^d(t',k)dt', and then Φ(u,k) = 1/u[1-(∑_x=1^υ∑_i=1^mϕ_ix^r(u,k) +∑_x=1^υ∑_l=1^γϕ_lx^d(u,k))], where Φ(u,k), ϕ_ix^r(u,k) and ϕ_xl^d(u,k) are the Laplace t→ u transform of Φ(t,k), ϕ_ix^r(t,k) and ϕ_xl^d(t,k), respectively. § MASTER EQUATION We then want to investigate the evolution of the state vector k at time t in this renewal process, that is, to obtain the master equation of the chemical continuous time random walks under anomalous diffusion model (see Figure.1). Let P(k, t) denote the probability distribution Prob[k(t)=k]. Firstly, we have the balance equation P(k,t)=∫_0^tdt'∑_n=0^∞R_n(k,t')Φ(t-t',k) where R_n(k,t) is the joint density of arriving at k at time t after n steps, and satisfying R_n+1(k,u) = ∑_i=1^m∑_x=1^υR_n(k-s_ix,u)ϕ_ix^r(u,k-s_ix) +∑_l=1^γ∑_x=1^υ∑_y≠ x[R_n(k-s_dlxy,u) ·ϕ_lx^d(u,k-s_dlxy)λ(y-x)], for nonnegative integer n in Laplace space, and R_0(k,t)=P(k,0)δ(t). Here, R_n(k,u) is the Laplace t→ u transform of R_n(k,t), P(k,0) is the initial distribution and λ(y-x) represents the transition probability for the reactant jumping from x to y. Let R(k,u)=∑_n=0^∞ R_n(k,u). Then R(k,u) = ∑_i=1^m∑_x=1^υR(k-s_ix,u)ϕ_ix^r(u,k-s_ix) +∑_l=1^γ∑_x=1^υ∑_y≠ x[R(k-s_dlxy,u) ·ϕ_lx^d(u,k-s_dlxy)λ(y-x)] +P(k,0). Taking the Laplace transform of (14), and combining (10) we get P(k,u) = ∑_n=0^∞R_n(k,u)Φ(u,k)=R(k,u)Φ(u,k) =R(k,u)1/u[1-(∑_x=1^υ∑_i=1^mϕ_ix^r(u,k) +∑_x=1^υ∑_l=1^γϕ_xl^d(u,k))], and so R(k,u)=P(k,u)1/Φ(u,k), where R(k,u) and P(k,u) are the Laplace transforms of R(k,t) and P(k,t), respectively. From Eq.(15) we finds uP(k,u) = R(k,u)-R(k,u)[∑_x=1^υ∑_i=1^mϕ_ix^r(u,k) +∑_x=1^υ∑_l=1^γϕ_lx^d(u,k)] Substituting (14) and (16) into the above equation (17), and taking some algebraic operations, we finally obtain the master equation for the chemical continuous time random walks with anomalous diffusion in Laplace space uP(k,u)-P(k,0) = ∑_i=1^m∑_x=1^υP(k-s_ix,u)ϕ_ix^r(u,k-s_ix)/Φ(u,k-s_ix) +∑_l=1^γ∑_x=1^υ∑_y≠ x[P(k-s_dlxy,u) ·ϕ_lx^d(u,k-s_dlxy)/Φ(u,k-s_dlxy)λ(y-x)] -∑_x=1^υ∑_i=1^mP(k,u)ϕ_ix^r(u,k)/Φ(u,k) -∑_l=1^γ∑_x=1^υP(k,u)ϕ_xl^d(u,k)/Φ(u,k). Note that the first item on the right side is just the gain flux and the last two items is the loss flux. 
Inverting it into time space we obtain the master equation in time space as following ∂ P(k,t)/∂ t =∑_i=1^m∑_x=1^υ∫_0^tP(k-s_ix,t')Θ_ix^r(t-t',k-s_ix)dt' +∑_l=1^γ∑_x=1^υ∑_y≠ x∫_0^t[P(k-s_dlxy,t') ·Θ_lx^d(t-t',k-s_dlxy)λ(y-x)]dt' -∑_x=1^υ∑_i=1^m∫_0^tP(k,t')Θ_ix^r(t-t',k)dt' -∑_l=1^γ∑_x=1^υ∫_0^tP(k,t')Θ_xl^d(t-t',k)dt', where Θ_ix^r(t,k) and Θ_lx^d(t,k) are respectively the inverse Laplace transforms u→ t of Θ_ix^r(u,k)=ϕ_ix^r(u,k)/Φ(u,k), and Θ_lx^d(u,k)=ϕ_xl^d(u,k)/Φ(u,k). Note that when the diffusion process disappears, the above master equation is just the CME for the original non-markovian reaction system in <cit.>, and Θ_ix^r(t,k) is just the memory kernel <cit.>. Note also that in the steady state we get ∑_i=1^m∑_x=1^υ∫_0^tP(k-s_ix,t')Θ_ix^r(t-t',k-s_ix)dt' +∑_l=1^γ∑_x=1^υ∑_y≠ x∫_0^t[P(k-s_dlxy,t') ·Θ_lx^d(t-t',k-s_dlxy)λ(y-x)]dt' =∑_x=1^υ∑_i=1^m∫_0^tP(k,t')Θ_ix^r(t-t',k)dt' +∑_l=1^γ∑_x=1^υ∫_0^tP(k,t')Θ_lx^d(t-t',k)dt'. § MASS ACTION LAW We will consider the rate law to describe the macroscopic behavior of reaction-diffusion system. We assume that the whole number ∑_x=1^υ∑_l=1^γk_xl of the particles in this system is an invariable constant n_0. Let n_xl represent the number for the particle l at position x, and let P_l(c,x,t) represent the probability of the concentration c=n_xl/n_0 for l at x at time t. Then we find P_l(c,x,t)=∑_k: k_xl=n_xl P(k,t) which satisfies ∫_0^1P_l(c,x,t)dc=∑_k P(k,t)=1. Thus, ∂ P_l(c,x,t)/∂ t =∑_k:k_xl=n_xl∂ P(k,t)/∂ t. Combining with Eq.(<ref>), we obtain ∂ P_l(c,x,t)/∂ t =∑_i:s_ixl≠ 0∑_k:k_xl=n_xl[P(k-S_ix,t') ·Θ^r_ix(t-t',k-S_ix)-P(k,t') Θ^r_ix(t-t',k)] +∑_y≠ x∑_k:k_xl=n_xl[P(k-S_dlxy,t')Θ^d_lx(t-t',k-S_dlxy) ·λ(y-x)+P(k-S_dlyx,t') Θ^d_ly(t-t',k-S_dlyx)λ(x-y) -P(k,t')Θ^d_lx(t-t',k)λ(y-x) -P(k,t')Θ^d_ly(t-t',k)λ(x-y)]. Let a_ixl=s_ixl/n_0. Then we get ∂ P_l(c,x,t)/∂ t =∫_0^t{∑_i:a_ixl≠ 0[P_l(c-a_ixl,x,t') ·Θ^r_ix(t-t',c-a_ixl)-P_l(c,x,t') Θ^r_ix(t-t',c)] + [P_l(c+1/n_0,x, t')Θ^d_lx(t-t',c+1/n_0) +∑_y≠ xP(c-1/n_0,x,t') Θ^d_ly(t-t',c-1/n_0)λ(x-y) -P_l(c,x,t')Θ^d_lx(t-t',c) -∑_y≠ xP_l(c,x,t')Θ^d_ly(t-t',c)λ(x-y)]}dt'. Since ⟨ C_l(x,t)⟩=∑_n_xl=0^n_0n_xl/n_0P_l(c,x,t), then ⟨∂ C_l(x,t)/∂ t⟩ =∫_0^1 c∂ P_l(c,x,t)/∂ tdc =∫_0^1cdc∫_0^t{∑_i:a_ixl≠ 0[P_l(c-a_ixl,x,t') ·Θ^r_ix(t-t',c-a_ixl)-P_l(c,x,t') Θ^r_ix(t-t',c)] + [P_l(c+1/n_0,x, t')Θ^d_lx(t-t',c+1/n_0) +∑_y≠ xP(c-1/n_0,x,t') Θ^d_ly(t-t',c-1/n_0)λ(x-y) -P_l(c,x,t')Θ^d_lx(t-t',c) -∑_y≠ xP_l(c,x,t')Θ^d_ly(t-t',c)λ(x-y)]}dt', which can be written as ⟨∂ C_l(x,t)/∂ t⟩ =∫_0^tdt'∫_0^1{[∑_i:a_ixl≠ 0 (c-a_ixl+a_ixl)P_l(c-a_ixl,x,t') ·Θ^r_ix(t-t',c-a_ixl)-cP_l(c,x,t') Θ^r_ix(t-t',c)] + (c+1/n_0-1/n_0)P_l(c+1/n_0,x, t')Θ^d_lx(t-t',c+1/n_0) -cP_l(c,x, t')Θ^d_lx(t-t',c) +∑_y≠ x[(c-1/n_0+1/n_0)P_l(c-1/n_0,x,t') Θ^d_ly(t-t',c-1/n_0)λ(x-y) -cP_l(c,x,t')Θ^d_ly(t-t')λ(x-y))]}dc. We simplify the above equation, and get ∂⟨ C_l(x,t)⟩/∂ t⟩ = ∫_0^t[∑_i:a_ixl≠ 0 a_ixl⟨Θ^r_ix(t',t-t')⟩ -1/n_0⟨Θ^d_lx(t',t-t')⟩+∑_y≠ x1/n_0⟨Θ^d_ly(t',t-t')⟩λ(x-y))]dt'. Here, ⟨Θ^r_ix(t',t-t')⟩=∫_0^1 Θ^r_ix(t-t',c)P_l(c,x,t')dc, and ⟨Θ^d_lx(t',t-t')⟩=∫_0^1 Θ^d_lx(t-t',c)P_l(c,x,t')dc. This equation is the mass action law for the one-dimensional reaction-diffusion system. If there is no diffusion in the system, the above equation recovers the mass action law in Ref.<cit.>. § AN EXAMPLE: MONOMOLECULAR A↔ B REACTION-DIFFUSION SYSTEM As an example, we shall consider the monomolecular A↔ B reaction-diffusion system in which the A and B particles can move in one-dimensional finite lattice. 
There are two reactions which make the change of the concentrations of A and B. The first reaction, we denote by 1, is the reaction from A to B, and the second we denote by 2 is the reaction from B to A. Then the one-dimensional mass action rate (<ref>) becomes ∂ < C_A(x,t)>/∂ t =1/n_0∫_0^t[-⟨Θ_1x^r(t',t-t')⟩ +⟨Θ_2x^r(t',t-t')⟩-⟨Θ_Ax^d(t',t-t')⟩ +∑_y≠ x∫_0^t⟨Θ_Ay^d(t',t-t')⟩λ(x-y)] dt', and ∂ < C_B(x,t)>/∂ t =1/n_0∫_0^t[-⟨Θ_2x^r(t',t-t')⟩ +⟨Θ_1x^r(t',t-t')⟩-⟨Θ_Bx^d(t',t-t')⟩ +∑_y≠ x⟨Θ_By^d(t',t-t')⟩λ(x-y)]dt', where C_A(x,t)=n_xA/n_0 and C_B(x,t)=n_xB/n_0. §.§ Exponential case For exponent waiting time, that is, p_1x(t)=α_rAxe^-α_rAxt and p_2x(t)=α_rBxe^-α_rBxt, and p_dAx(t)=α_dAxe^-α_dAxt and p_dBx(t)=α_dBxe^-α_dBxt, then we have ψ_1x^r(t,k)=n_xAα_rAxexp(-α_rAxn_xAt) ψ_2x^r(t,k)=n_xBα_rBxexp(-α_rBxn_xBt) Ψ_1x^r(t,k)=exp(-α_rAxn_xAt) Ψ_2x^r(t,k)=exp(-α_rBxn_xBt) ψ_lx^d(t,k)=n_xAα_dAxexp(-α_dAxn_xAt) ψ_2x^d(t,k)=n_xBα_dBxexp(-α_dBxn_xBt) Ψ_1x^d(t,k)=exp(-α_dAxn_xAt) Ψ_2x^d(t,k)=exp(-α_dBxn_xBt) ϕ_1x^r(t,k) = n_xAα_rAxexp{-∑_x=1,2,...,υ[α_rAxn_xA+α_rBxn_xB +α_dAxn_xA+α_dBxn_xB]t} ϕ_2x^r(t,k) = n_xBα_rBxexp{-∑_x=1,2,...,υ[α_rAxn_xA+α_rBxn_xB +α_dAxn_xA+α_dBxn_xB]t} and ϕ_lx^d(t,k) = n_xAα_dAxexp{-∑_x=1,2,...,υ[α_rAxn_xA+α_rBxn_xB +α_dAxn_xA+α_dBxn_xB]t} ϕ_2x^d(t,k) = n_xBα_dBxexp{-∑_x=1,2,...,υ[α_rAxn_xA+α_rBxn_xB +α_dAxn_xA+α_dBxn_xB]t} Φ(t,k) = exp{-∑_x=1,2,...,υ[α_rAxn_xA+α_rBxn_xB +α_dAxn_xA+α_dBxn_xB]t}, and then Θ_1x^r(u,k)=n_xAα_rAx, Θ_2x^r(u,k)=n_xBα_rBx, and Θ_Ax^d(u,k)=n_xAα_dAx, Θ_Bx^d(u,k)=n_xBα_dBx. Inverting them to time space yields Θ_1x^r(t,k)=n_xAα_rAxδ(t), Θ_2x^r(t,k)=n_xBα_rBxδ(t), and Θ_Ax^d(t,k)=n_xAα_dAxδ(t), Θ_Bx^d(t,k)=n_xBα_dBxδ(t). Thus, we get 1/n_0∫_0^t⟨Θ_1x^r(t',t-t')⟩ dt'=α_rAx⟨ C_A(x,t)⟩, 1/n_0∫_0^t⟨Θ_2x^r(t',t-t')⟩ dt'=α_rBx⟨ C_B(x,t)⟩ 1/n_0∫_0^t⟨Θ_Ax^d(t',t-t')⟩ dt'=α_dAx⟨ C_A(x,t)⟩, 1/n_0∫_0^t⟨Θ_Ay^d(t',t-t')⟩ dt'=α_dAy⟨ C_A(y,t)⟩, 1/n_0∫_0^t⟨Θ_Bx^d(t',t-t')⟩ dt'=α_dBx⟨ C_B(x,t)⟩, and 1/n_0∫_0^t⟨Θ_By^d(t',t-t')⟩ dt'=α_dBy⟨ C_B(y,t)⟩. We substitute them into the mass action law (<ref>) and (<ref>), and obtain ∂ < C_A(x,t)>/∂ t =-α_rAx<C_A(x,t))> +α_rBx<C_B(x,t))>-α_dAx<C_A(x,t)) +∑_y≠ xα_dAy<C_A(y,t)>λ(x-y), and ∂ < C_B(x,t)>/∂ t =-α_rBx<C_B(x,t))> +α_rAx<C_A(x,t))>-α_dBx<C_B(x,t)) +∑_y≠ xα_dBy<C_B(y,t)>λ(x-y), which are the classical mass action laws for the A↔ B reaction-diffusion system. §.§ Power-law case In this case we consider the PDFs for waiting times obeying power-law distribution, that is, p_1r(t)∼τ_0^β_1rβ_1r/Γ(1-β_1r)1/t^1+β_1r, p_2r(t)∼τ_0^β_2rβ_2r/Γ(1-β_2r)1/t^1+β_2r, p_dA(t)=τ_0^β_dAβ_dA/Γ(1-β_dA)1/t^1+β_dA and p_dB(t)=τ_0^β_dBβ_dB/Γ(1-β_dB)1/t^1+β_dB where 0<β_1r,β_2r,β_dA, β_dB<1. Then we have ϕ_1x^r(t,k)=n_xAβ_1rτ_0^H/F1/t^1+H Taking the Laplace transform t→ u, we obtain ϕ_1x^r(u,k)∼n_xAβ_1r/H[1-Γ(1-H)/F(τ_0 u)^H]. Analogously, one can get Φ(u,k)∼Γ(1-H)/F(τ_0 u)^H-1, Therefore, Θ_1x^r(u)∼ n_xAβ_1rMu^1-H. Taking the Laplace transform of 1/n_0∫_0^t⟨Θ_1x^r(t',t-t')⟩, we find L{1/n_0∫_0^t⟨Θ_1x^r(t',t-t')⟩ dt'} =1/n_0∫_0^1p(c,x,u)Θ_1x^r(u,c)dc =∫_0^1p(c,x,u)n_xA/n_0Mu^1-Hβ_1rdc =β_1rMu^1-H⟨ C_A(x,u)⟩, whose inverse Laplace transform is 1/n_0∫_0^t⟨Θ_1x^r(t',t-t')⟩ dt'=β_1rM𝒟_t^1-H⟨ C_A(x,t)⟩, 1/n_0∫_0^t⟨Θ_2x^r(t',t-t')⟩ dt'=β_2rM𝒟_t^1-H⟨ C_B(x,t)⟩, 1/n_0∫_0^t⟨Θ_Ax^d(t',t-t')⟩ dt'=β_dAM𝒟_t^1-H⟨ C_A(x,t)⟩, 1/n_0∫_0^t⟨Θ_Bx^r(t',t-t')⟩ dt'=β_dBM𝒟_t^1-H⟨ C_B(x,t)⟩, where M=1/H1/τ_0^HF/Γ(1-H), with F=Γ^n_A(1-β_1r)Γ^n_B(1-β_2r)Γ^n_A(1-β_dA)Γ^n_B(1-β_dB), and H=β_1rn_A+β_2rn_B+β_dAn_A+β_dBn_B, 0<H<1. 
We substitute them into the mass action laws (<ref>) and (<ref>), and obtain ∂ < C_A(x,t)>/∂ t =-β_1rM𝒟_t^1-H⟨ C_A(x,t)⟩ +β_2rM𝒟_t^1-H⟨ C_B(x,t)⟩-β_dAM𝒟_t^1-H⟨ C_A(x,t)⟩ +∑_y≠ xβ_dAM𝒟_t^1-H⟨ C_A(y,t)⟩λ(x-y), and ∂ < C_B(x,t)>/∂ t =-β_2rM𝒟_t^1-H⟨ C_B(x,t)⟩ +β_1rM𝒟_t^1-H⟨ C_A(x,t)⟩-β_dBM𝒟_t^1-H⟨ C_B(x,t)⟩ +∑_y≠ xβ_dBM𝒟_t^1-H⟨ C_B(y,t)⟩λ(x-y). Here, 𝒟_t^1-H is the Riemann-Liouville fractional differential operator. This means that the concentration depends on the history evolution for A and B particles. § GENERALIZED GILLESPIE ALGORITHM Based on the chemical continuous time random walks under anomalous diffusion model, we can get the generalized Gillespie algorithm as follows. (i) In the initial state n_0 reactants are randomly placed at υ places by uniform distribution. (ii) The random waiting time τ_ix as the internal clock to react for each reaction i at x is chosen from a series of values distributed according to p_ix(t) and the random waiting time τ_dlx as the internal clock to make next jump for each reactant l at x is chosen from a series of values distributed according to p_dlx(t) [23-25]. If p_ix(t) is exponential distribution p_ix (t)=α_ixe^-α_ixt, then we get τ_ix=-ln u/α_ix, where u is a random variate drawn from the uniform density on the interval [0,1]. If the p_ix(t) is power-law distribution p_ix (t)=α_ixτ_0^α_ixt^-(1+α_ix), then we get τ_ix=τ_0 u^-1/α_ix. The generation of τ_dlx is as same as that of τ_ix. (iii) Find the minimum clock time min{τ_ix,τ_dlx}(i=1,2,...,m; l=1,2,...,γ; x=1,2,...,υ). If the minimum waiting time is for one reaction i at x then we change the system according to this reaction <cit.>, or the minimum waiting time is for one diffusion process for a reactant l at x, then we let l diffuse from x to one place y with the probability λ(y-x), after which the internal clocks for reactions and diffusion are all reset by the distributions and repeat the procedure. Note that this algorithm is different from the non-Markovian Gillespie algorithm for reactive system where the reaction i is chosen by the instantaneous probability after the time Δ t for the next event drawn in <cit.>. § COMPARISON WITH THE MASS ACTION LAW DERIVED FROM CONTINUOUS TIME RANDOM WALK (CTRW) MODELS. In this section we shall compare our mass action law with that derived from continuous time random walk (CTRW) for the state A to B and backward under anomalous transport by Campos e.t. in <cit.>. The corresponding master equations in <cit.> is as following dP_1(x,t)/dt =∫_-∞^+∞∫_0^tP_1(x',t')Θ_1j(t-t')λ_1(x-x')dt'dx' +∫_0^tP_2(x,t)Θ_2r(t-t')dt'-∫_0^tP_1(x,t')Θ_1j(t-t')dt' -∫_0^tP_1(x,t')Θ_1r(t-t')dt', dP_2(x,t)/dt =∫_-∞^+∞∫_0^tP_2(x',t')Θ_2j(t-t')λ_2(x-x')dt'dx' +∫_0^tP_1(x,t')Θ_1r(t-t')dt'-∫_0^tP_2(x,t')Θ_2j(t-t')dt' -∫_0^tP_2(x,t')Θ_2r(t-t')dt'. Here, P_1,2(x,t)=N_1,2(x,t)/N respectively denote the PDFs for a particle being in state A and B at point x at time t where N_1,2(x,t) are the the densities of A and B, and N is the sum of N_1(x,t) and N_2(x,t) over the whole space. One can see that ∫_-∞^+∞(P_1(x,t)+P_2(x,t))dx=1. λ_1,2(Δ x) denote the jump length PDFs for A particle and B particle, respectively. ψ_1j,2j(t) denote the PDFs of random diffusion waiting times, ψ_1r,2r(t) denote the PDFs of random reaction waiting time of A→ B and B→ A, Ψ_1j,2j(t)=1-∫_0^tψ_1j(τ)dτ and Ψ_1r,2r(t)=1-∫_0^tψ_2j(τ)dτ are the corresponding survival probabilities for ψ_1j,2j(t) and ψ_1j,2j(t). Finally, Θ_1j(u)=L(Ψ_1r(t)ψ_1j(t))/L(Ψ_1r(t)Ψ_1j(t)) and Θ_1r(u)=L(Ψ_1j(t)ψ_1r(t))/L(Ψ_1j(t)Ψ_1r(t)) are the kernels. 
When the random jump length is Gaussian satisfying λ(k)∼ 1-σ^2k^2/2 and random waiting times for reaction and diffusion are both power-law distributed satisfying ψ_1r,2r(t)=β_1r,2rτ ^β_1r,2r1/t^1+β_1r,2r and ψ_1j,2j(t)=β_1j,2jτ ^β_1j,2j1/t^1+β_1j,2j, where σ^2 is the jump length variance, one gets Θ_hj(u)∼β_hj/β_hr+β_hju^1-β_hr-β_hj/τ^β_hr+β_hjΓ(1-β_hr-β_hj) and Θ_hr(u)∼β_hr/β_hr+β_hju^1-β_hj-β_hr/τ^β_hj+β_hrΓ(1-β_hj-β_hr) for 0<β_1r+β_1j,β_2r+β_2j<1. Taking the Fourier-Laplace transform of the generalized master equations (<ref>) and (<ref>) yields u P_1(k,u)-P_1(k,0)=P_1(k,u)Θ_1j(u)(-σ^2k^2/2) +P_2(k,u)Θ_2r(u)-P_1(k,u)Θ_1r(u), and u P_2(k,u)-P_2(k,0)=P_2(k,u)Θ_2j(u)(-σ^2k^2/2) +P_1(k,u)Θ_1r(u)-P_2(k,u)Θ_2r(u). By substituting Θ_hj(u) and Θ_hr(u) into (<ref>) and (<ref>) and taking the inverse Fourier-Laplace transform, we finally obtain the generalized fractional reaction-diffusion equations dP_1(x,t)/dt = C_11∂^2 D_t^1-β_1r-β_1j P_1(x,t)/∂ x^2 +C_2D_t^1-β_2j-β_2rP_2(x,t) -C_3 D_t^1-β_1j-β1rP_1(x,t), dP_2(x,t)/dt = C_21∂^2 D_t^1-β_2r-β_2j P_2(x,t)/∂ x^2 -C_2D_t^1-β_2j-β_2rP_2(x,t) +C_3 D_t^1-β_1j-β1rP_1(x,t), with C_11=σ^2/2β_1j/β_1r+β_1j1/τ^β_1r+β_1jΓ(1-β_1r-β_1j), C_21=σ^2/2β_2j/β_2r+β_2j1/τ^β_2r+β_2jΓ(1-β_2r-β_2j), C_2=β_2r/β_2r+β_2j1/τ^β_2r+β_2jΓ(1-β_2r-β_2j), C_3=β_1r/β_1r+β_1j1/τ^β_1r+β_1jΓ(1-β_1r-β_1j). When we assume that there is only one particle in the reaction-diffusion system, the jump length is Gaussian whose PDF obeys (<ref>) and the PDFs of reaction and diffusion waiting times is space-independent satisfying (<ref>) and (<ref>), we find that our mass action law (<ref>) and (<ref>) can recover (<ref>) and (<ref>). § COMPARISON WITH THE CHEMICAL CONTINUOUS TIME RANDOM WALK (CTRW) MODELS. In Ref.<cit.> it defines an inhomogeneous continuous time random walk in particle number space, from which a generalized chemical master equation is derived as following ∂ P(n,t)/∂ t =∑_i ∫_0^t dt'(∏_j E_j^-s_ij-1) × P(n,t')M_i(t-t',n), where the step operator E_j^z acts on a function f(n) by incrementing the particle number n_j of species S_j. M_i is the memory function. This equation is the special case of the mass action law (<ref>) without diffusion in the system. Based on the derived master equation (<ref>), the authors determine the modified chemical rate laws ∂⟨ C⟩/∂ t =∑_i s_i∫_0^t dt'M_i^C(t-t',C), where M_i^C(t-t',C)=M_i^C(t-t',n_0 C)/n_0. Note that when the diffusion is ignored, namely, λ (y-x)=0 in our rate laws (<ref>) we can recover (<ref>). Furthermore, a generalization of the Gillespie algorithm is proposed for the reaction system in <cit.>. Comparing with their algorithm, we find that their algorithm is specially for exponential reactive waiting times, while our algorithm is for arbitrary distributed and more general. Beside, we also take account of the diffusion for generalization while it is not considered in <cit.>. But they further consider the global delay that is added to inter-reaction waiting time. § CONCLUSION To sum up, in order to describe the coupled non-Poissonian reaction and anomalous diffusion process in heterogeneous system, where the reactive and diffusive waiting times are both arbitrary distributions, we develop a chemical continuous time random walk under anomalous diffusion model. We firstly use the renewal process to describe the numbers of reaction and diffusion renewals, and also define the distribution of the minimum renewal waiting times. 
Based on the renewal process we obtain the generalized chemical diffusion master equation (19) for the probability distribution of the reactant numbers, and then derive the corresponding mass action law (26) for the average concentrations of the reactants from the CDME. Moreover, we generalize the Gillespie algorithm to the non-Markovian reaction-diffusion system. Finally, we treat the A↔ B reaction-diffusion system, in which the A and B particles react and diffuse with exponential and power-law waiting times respectively. We recover the classical mass action law in the exponential case, and show the strong fractional memory effect of the concentration of the reactants on the history of the concentration in the power-law case. § ACKNOWLEDGEMENTS This work was supported by the Natural Science Foundation of Sichuan Province (Grant No. 2022NSFSC1752). § CONFLICT OF INTEREST STATEMENT The authors declare that they have no conflicts of interest. [1] C. Tsallis and D. J. Bukman, Phys. Rev. E 54, R2197 (1996). [2] R. Metzler and J. Klafter, Phys. Rep. 339, 1 (2000). [3] I. M. Sokolov, M. G. W. Schmidt, and F. Sagués, Phys. Rev. E 73, 031102 (2006). [4] T. A. M. Langlands, B. I. Henry, and S. L. Wearne, Phys. Rev. E 77, 021111 (2008). [5] M. G. W. Schmidt, F. Sagués, and I. M. Sokolov, J. Phys.: Condens. Matter 19, 065118 (2007). [6] E. Abad, S. B. Yuste, and K. Lindenberg, Phys. Rev. E 81, 031115 (2010). [7] S. B. Yuste, E. Abad, and K. Lindenberg, J. Stat. Mech.: Theory Exp. 11, 014 (2014). [8] B. I. Henry, T. A. M. Langlands, and S. L. Wearne, Phys. Rev. E 74, 031116 (2006). [9] S. Fedotov, Phys. Rev. E 81, 011117 (2010). [10] C. N. Angstmann, I. C. Donnelly, and B. I. Henry, Math. Model. Nat. Phenom. 8, 17 (2013). [11] D. T. Gillespie, J. Phys. Chem. 81, 2340 (1977). [12] M. J. del Razo, D. Frömberg, A. V. Straube, C. Schütte, F. Höfling, and S. Winkelmann, Lett. Math. Phys. 112, P1-59 (2022). [13] M. J. del Razo, S. Winkelmann, R. Klein, and F. Höfling, J. Math. Phys. 64, 013304 (2023). [14] H. Ge and H. Q., Applications of Mathematical Dynamics Models in Biophysics and Biochemistry, Peking University Press (2017). [15] N. Masuda and L. E. C. Rocha, SIAM Rev. 60, 95 (2018). [16] T. Aquino and M. Dentz, Phys. Rev. Lett. 119, 230601 (2017). [17] J. J. Zhang and T. S. Zhou, Proc. Natl. Acad. Sci. USA 116, 23542 (2019). [18] L. Y. Chen, Y. Wan, J. R. Liu, and H. H. Wang, Biosystems 214, 104648 (2022). [19] M. L. Chen, S. H. Luo, M. F. Gao, C. J. Guo, T. S. Zhou, and J. J. Zhang, Phys. Rev. E 105, 014405 (2022). [20] H. W. Yin, S. Q. Liu, and X. Q. Wen, Phys. Rev. E 103, 022409 (2021). [21] D. R. Cox, Renewal Theory, Methuen & Co. Ltd. (1962). [22] W. Feller, An Introduction to Probability Theory and Its Applications, John Wiley & Sons (1971). [23] S. J. Ni and W. G. Weng, Phys. Rev. E 79, 016111 (2009). [24] H. H. Schmidt-Martens, D. Froemberg, I. M. Sokolov, and F. Sagués, Phys. Rev. E 79, 041135 (2009). [25] H. Zhang and G. H. Li, Phys. Rev. E 102, 012315 (2020). [26] D. Campos, S. Fedotov, and V. Méndez, Phys. Rev. E 77, 061130 (2008).
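As a concrete illustration of the generalized Gillespie algorithm described above, the following self-contained Python sketch simulates the monomolecular A↔B reaction-diffusion system on a one-dimensional ring of sites with either exponential or power-law waiting times. It is an illustration added here rather than the authors' code: the lattice size, rates, exponents, and nearest-neighbour jump rule are arbitrary assumptions, a single (beta, tau0) pair is used for all channels in the power-law case, and each channel's clock is drawn as the minimum over its particles' independent waiting times, consistent with the renewal construction in which every clock is reset after each event.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameters, not values from the paper
n_sites = 8
n_events = 5000
alpha = {"r1": 0.5, "r2": 0.3, "dA": 1.0, "dB": 1.0}   # exponential rates per particle
beta, tau0 = 0.7, 0.01                                  # power-law exponent and cutoff
heavy_tailed = True                                     # False -> exponential waiting times

def channel_wait(kind, k):
    """Waiting time of a channel containing k particles: the minimum of k
    independent single-particle waiting times (inf for an empty channel)."""
    if k == 0:
        return np.inf
    if heavy_tailed:
        samples = tau0 * (1.0 - rng.random(k)) ** (-1.0 / beta)   # Pareto tail ~ t^-(1+beta)
    else:
        samples = rng.exponential(1.0 / alpha[kind], size=k)
    return samples.min()

# initial condition: 20 A particles per site, no B
nA = np.full(n_sites, 20)
nB = np.zeros(n_sites, dtype=int)

t = 0.0
for _ in range(n_events):
    # draw a fresh clock for every reaction and diffusion channel (renewal step)
    waits = []
    for x in range(n_sites):
        waits.append((channel_wait("r1", nA[x]), "r1", x))   # A -> B at site x
        waits.append((channel_wait("r2", nB[x]), "r2", x))   # B -> A at site x
        waits.append((channel_wait("dA", nA[x]), "dA", x))   # an A particle leaves x
        waits.append((channel_wait("dB", nB[x]), "dB", x))   # a B particle leaves x

    tau, kind, x = min(waits, key=lambda w: w[0])
    t += tau
    y = (x + rng.choice((-1, 1))) % n_sites                  # nearest-neighbour jump on a ring
    if kind == "r1":
        nA[x] -= 1; nB[x] += 1
    elif kind == "r2":
        nB[x] -= 1; nA[x] += 1
    elif kind == "dA":
        nA[x] -= 1; nA[y] += 1
    else:
        nB[x] -= 1; nB[y] += 1

print(f"time = {t:.3f}, global A fraction = {nA.sum() / (nA.sum() + nB.sum()):.3f}")
```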
http://arxiv.org/abs/2406.18111v1
20240626065726
Automatic Tracing in Task-Based Runtime Systems
[ "Rohan Yadav", "Michael Bauer", "David Broman", "Michael Garland", "Alex Aiken", "Fredrik Kjolstad" ]
cs.DC
[ "cs.DC" ]
In Situ In Transit Hybrid Analysis with Catalyst-ADIOS2 François MazenLouis GombertLucas GivordCharles Gueunet F. Mazen et al. Kitware Europe {name.surname}@kitware.com ===================================================================================================================== plain § INTRODUCTION Implicitly parallel programming systems <cit.> automatically extract parallelism from a sequential source program through different forms of dynamic dependence analysis. Automatic parallelization and communication inference has enabled composable high-level libraries <cit.> to be built on top of implicitly parallel task-based runtime systems. However, the cost of the dependence analysis affects the performance of implicitly parallel systems at scale and places a floor on the minimum problem size that can be executed efficiently <cit.>. Applications with tasks that are too small to amortize the cost of dependence analysis is dominated by it and run at low efficiency. To improve the performance of implicitly parallel task-based runtime systems, researchers have proposed techniques <cit.> to memoize, or trace, the dependence analysis. Tracing records the results of the dependence analysis for an issued program fragment, and then replays the results of the analysis the next time an identical program fragment is issued. Tracing has been shown to yield significant speedups by eliminating the cost of the dependence analysis on iterative programs. For example, tracing can reduce the per-task overhead in the Legion <cit.> runtime system from ∼1ms to ∼100μs <cit.>, widening the scope of applications for which task-based runtime systems can be effective. A significant limitation of existing tracing techniques is that they require the programmer to annotate repeatedly issued program fragments with stop/start markers for the runtime system. Programmer inserted annotations derail an important feature of implicitly parallel programming systems—their correctness under program composition. As users develop modular programs that pass data from one component to another, the runtime system ensures that computations launched by different modules maintain sequential semantics by implicitly inserting the necessary data movement and synchronization. However, programmer introduced trace annotations do not obey these composition principles, and the correct placement of trace annotations when composing complex software becomes unclear. Functions defined in a third-party library may contain operations that cannot by traced by a practical tracing implementation, or may issue a different sequence of operations on each invocation. Each of these cases result in runtime errors, due to the incorrect trace annotations constructing an ill-formed sequence of operations. Furthermore, even simple programs using high-level implicitly parallel libraries can have traces that do not correspond to syntactic loop structures in the source program, making it difficult to correctly place tracing annotations. We elaborate on such an example program in <Ref>. In order to improve programmer productivity and to enable the tracing of modular high-level programs, implicitly parallel task-based systems should automatically identify repeated sequences of operations, memoize their analysis results and cheaply replay the analysis as needed. We call this the problem of automatic trace identification, which is similar to Just-In-Time (JIT) compilation in the context of dynamic language implementations <cit.>. 
JIT compilers for dynamic languages interpret bytecode during program startup, and compile bytecode to native instructions as repeatedly invoked program fragments become hot. Following this architecture, implicitly parallel task-based runtimes should interpret issued operations with a dynamic dependence analysis, and switch to an analysis-free compiled execution once repeated sequences of operations are encountered. We introduce our system [ is the tendency to notice patterns between unrelated things.], that acts as a JIT compiler for the dependence analysis of an implicitly parallel task-based runtime system. The key challenge that faces is the identification of repeated sequences of operations produced by the target program. Unlike JIT compilers, the input to a task-based runtime system is a stream of tasks that lacks information about control flow such as basic block labels or function definitions. As such, cannot rely on these code landmarks or predictable execution flow to identify repeated sequences of operations. Instead, analyzes the input stream of operations to find repetitions by solving a series of online string analysis problems. To demonstrate , we develop an implementation within the Legion <cit.> runtime system as a front-end component that sits between the application and Legion's dependence analysis engine. As operations are issued to Legion, performs a series of dynamic analyses to identify repeatedly issued sequences of operations, and correspondingly invokes Legion's tracing engine <cit.> to memoize and replay dependence analysis on these sequences. While our prototype targets Legion, we believe that the ideas in can be directly applied to other task-based runtime systems that perform a dynamic dependence analysis. The specific contributions of this work are: * A formulation of the desirable properties of traces to identify (<Ref>). * Algorithms to dynamically identify traces in an application's stream of operations (<Ref>). * An implementation of that targets the Legion <cit.> runtime system. To evaluate , we apply it to the largest and most complex Legion applications written to this date, including production-grade scientific simulations and machine learning applications. We show that on up to 64 GPUs of the Perlmutter and Eos supercomputers, is able to achieve between 0.92x–1.03x the performance of manually traced code, and is able to effectively trace previously untraced code built from the composition of high-level components to yield end-to-end speedups of between 0.91x–2.82x. As such, is able to insulate programmers against the overheads of task-based runtime systems on varying applications and problem sizes, transparently and without programmer intervention. § MOTIVATING EXAMPLE We now show an example of high-level implicitly parallel code where it is difficult for a programmer place tracing annotations. As part of developing the example, we provide necessary background on the Legion <cit.> runtime system. <Ref> performs Jacobi iteration using  <cit.>, a distributed drop-in replacement for . distributes through a dynamic translation to Legion. implements operations by issuing one or more Legion tasks, which are designated functions registered with the runtime system. Each array is mapped to a Legion region, which is a multi-dimensional array tracked by Legion. Each task takes a list of regions as arguments. The stream of tasks launched by the main loop of the program is in <Ref>. 
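The referenced figure is not reproduced in this text, but a Jacobi loop of the kind described might look roughly like the following sketch; the variable names, and the use of a NumPy-style drop-in replacement imported as np, are assumptions for illustration rather than the exact code of <Ref>.

```python
import numpy as np   # in the paper's setting, this would be the distributed drop-in replacement

def jacobi(A, b, iters):
    # Split A into its diagonal d and off-diagonal remainder R.
    d = np.diag(A)
    R = A - np.diag(d)
    x = np.zeros_like(b)
    for _ in range(iters):
        # The line below is carried out by several tasks on distributed arrays,
        # and `x` is rebound to a freshly allocated array on every iteration,
        # which is what complicates manual trace placement.
        x = (b - R @ x) / d
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(jacobi(A, b, 25))   # converges toward the solution of A x = b
```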
For each task, the first two arguments denote the inputs, while the third argument is the output. Legion extracts parallelism from the issued stream of tasks by analyzing the data dependencies between tasks and the usage of their region arguments <cit.>. To trace a program fragment, the programmer issues a call (standing for “trace begin”) before and a call after the fragment. The first time Legion executes a trace with a particular , it records the results of the dependence analysis, and then replays the results when executing the same trace again <cit.>. For a trace to be valid, the sequence of tasks and their region arguments encapsulated by and calls must be exactly the same for a given . The same region arguments must be used across trace invocations as the dependence analysis is affected by the usages of the regions and how they are partitioned. While we consider regions for a Legion implementation of , this restriction generalizes to any form of argument that affects the dependence analysis. A natural attempt to trace the program in <Ref> would place the and around the body of the main loop. However, this annotation results in an invalid trace (raising a runtime error), for a subtle reason that requires knowledge of the internals of . The problem with this natural annotation is the loop-carried use of the Python variable , which is bound to different arrays (regions) at different points of execution. Upon entering loop iteration i, is bound to a region arbitrarily named , which is used as an argument for the first operation. As execution proceeds, allocates a new region for the result of the division with , and binds the Python variable to the region . Therefore, the next iteration i+1 issues a on , causing iteration i+1 to issue a different sequence of tasks than iteration i! This program illustrates a real-world case where abstraction and composition make it difficult to apply the low-level tracing technique. To correctly trace the program in <Ref>, a programmer must either add trace annotations around every two iterations of the main loop, or use two different trace ID's for each different iteration's repetition pattern. This steady state of groups of two iterations is achieved because when is assigned, the region it refers to can be collected and immediately reused by . Relying on this steady state is brittle, as the addition of more operations in the main loop or a change in 's region allocation policy could perturb the way in which the necessary steady state for tracing is achieved. Instead, dynamically analyzes the stream of tasks and automatically discovers what fragments of the application should be traced, removing this concern from the programmer. § WHAT ARE GOOD TRACES? The overarching goal of is to reduce the amount of time the runtime spends performing dynamic dependence analysis by selecting traces to replay. A simple model of a tasking runtime system's dependence analysis is that the runtime spends time α analyzing each task. The first time a trace is issued, the dependence analysis results are memoized, so the runtime spends time α_m (memoization time) on each task in the trace, where α_m is slightly larger than α. Then, on subsequent executions of the trace, there is some constant c amount of overhead to replaying the trace, but every task in the trace only incurs an analysis cost of α_r (replaying time), where α_r ≪α. Using this model of the runtime system, we derive several properties of traces that should find. 
First, the selected traces should maximize the number of traced operations to minimize the number of tasks that contribute an α to the overall analysis cost. Next, the selected traces should be relatively long so that the constant replay cost c does not accumulate. Finally, the set of selected traces should be small, so that does not continually memoize new traces and pay α_m per task in each new trace. Intuitively, the ideal set of traces corresponds to the loops in the target program. We now concretize the good traces that should find as the solutions of a concrete optimization problem. Consider the sequence of tasks S constructed from a complete execution of the target program. A system for automatic trace identification must construct from S * A set of traces T, containing sub-strings of S, * A function f : T →, mapping each t ∈ T to a set of intervals in S that are matched by t, that maximizes the coverage of f, defined by (T, f) = ∑_t∈ T∑_i ∈ f(t) |i|, subject to the constraints * ∀ t ∈ T, t is longer than a minimum length, * ⋃_t ∈ T f(t) is a disjoint set of intervals. Multiple solutions exist for this problem, so we prefer solutions that first maximize the number of matched intervals (∑_t ∈ T |f(t)|), and then minimize the total number of selected traces (|T|). Maximizing (T, f) directly minimizes the number of untraced tasks, and selecting a small set of traces that repeats many times minimizes the memoization cost of α_m per task. The minimum-length constraint ensures that the selected traces are large enough to amortize the constant replay cost c, while the disjointness constraint ensures that there are no overlapping matches chosen by f, so that each index in S is counted at most once; any task can be part of at most one trace replay. Note that constructing the set of traces T alone is not a sufficient target for the optimization problem, as overlapping traces in T lead to ambiguity about which sub-strings of S a trace in T should match; the matching function f performs the necessary disambiguation. We present a concrete problem instance and example solutions in <Ref>. The presented optimization problem precisely defines the properties of traces that a system like should attempt to find, but it does not directly yield an algorithm to discover good solutions. Additionally, the optimization problem is structured in a post-hoc formulation, where an optimal solution is constructed from the results of the entire program execution. In practice, a system like must construct the solution (T, f) in an online manner, using the currently visible prefix of the sequence of tasks launched by the application. In the next section, we discuss algorithms for dynamically finding good solutions to this optimization problem through a set of string processing algorithms. § TRACE IDENTIFICATION Dynamically finding good traces requires processing information about the tasks seen so far, and then using that information to record and replay traces in the future. An overview of 's dynamic analysis procedure is sketched in <Ref>. has two components that correspond to the targets of the optimization problem in <Ref>. The trace finder constructs the candidate set of traces T by accumulating the tasks issued by the application into a buffer, and asynchronously mining the buffer to find candidate traces. The trace replayer then constructs the matching function f by ingesting the candidate traces into a trie, and identifying candidate traces in the application stream by maintaining pointers into the trie that represent potential matches. A concrete example of how identifies a trace in an application is shown in <Ref>. We now describe each of these components in detail. §.§ A Stream of Tokens An insight of our work is that automatic trace identification is inherently an online string analysis problem of finding repeated sub-sequences in the application's task stream. As seen in <Ref>, the task stream is not just a list of identifiers—tasks have arguments that must also be the same across iterations to be used in a trace. To capture all aspects of a task that can affect the dependence analysis, constructs a hash from each task and its region arguments. Converting the input stream of tasks into a stream of hash tokens enables more direct application of string processing techniques, and straightforward handling of traceable operations that are not tasks. §.§ Finding Traces With High Coverage 's trace finder records tasks as they are issued by the application into a buffer (we describe a refinement to this scheme in <Ref>). Once the buffer fills up, launches an asynchronous analysis of the buffer to find a set of traces within the buffer that maximize the coverage of the buffer.
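As a concrete illustration of the coverage objective defined in the previous section, the short sketch below (illustrative Python, not part of the system's implementation) scores a candidate set of traces over a toy stream of hashed task tokens by greedily assigning non-overlapping matches and summing the lengths of the covered intervals.

```python
def coverage(tokens, traces, min_len=2):
    """Greedy evaluation of the coverage objective: assign non-overlapping,
    left-to-right matches of each candidate trace and sum their lengths."""
    traces = [t for t in traces if len(t) >= min_len]
    covered = [False] * len(tokens)
    matches = {tuple(t): [] for t in traces}
    # longer traces first, mirroring the preference for long replays
    for trace in sorted(traces, key=len, reverse=True):
        n, i = len(trace), 0
        while i + n <= len(tokens):
            window = tokens[i:i + n]
            if window == list(trace) and not any(covered[i:i + n]):
                covered[i:i + n] = [True] * n
                matches[tuple(trace)].append((i, i + n))
                i += n
            else:
                i += 1
    return sum(covered), matches

# toy task stream: a 3-token loop body repeated, with an occasional extra task
stream = list("abcabcabcXabcabc")
total, assignment = coverage(stream, [list("abc"), list("bca")])
print(total, assignment)   # 15 tokens covered, all by the "abc" trace
```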
We discuss previous ideas that are related to this goal, and then describe the solution used in .[We discuss more related works in <Ref>.] Existing Techniques The Lempel-Ziv family of algorithms use repeated sub-strings for compression. Algorithms like LZ77 <cit.> maintain a sliding window of previous tokens to search for repeats in when encoding upcoming tokens. The LZW <cit.> algorithm avoids the use of a sliding window by only increasing the length of any candidate repeat by a single token at a time. While not directly finding a set of repeats with high coverage, similar algorithms that use a sliding window would need to maintain and search in a window the size of the analyzed buffer, resulting in quadratic time complexity. In order to recognize a trace of length n, an LZW-style algorithm would also need to encounter the trace n-1 times. We wanted an algorithm that is sub-quadratic in order to scale to large buffer sizes. Real-world applications we discuss in <Ref> have traces that contain more than 2000 tasks, requiring token buffers of at least twice that size to detect a single repeat. Within the programming languages community, recent work by Sisco et al. <cit.> used a technique called tandem repeat analysis <cit.> to find loops in the netlists that result from compiling hardware description languages. A tandem repeat is a sub-string α that repeats contiguously within a larger string S, such that α^k is a sub-string of S, for some k. Despite the success that Sisco et al. had using tandem repeat analysis, we found that even simple real world programs did not contain enough tandem repeats for the analysis to reliably identify a trace set with high coverage. The reason is that while these real-world programs tended to have repetitive main loops, there would often be irregularly appearing computations such as convergence checks or statistics calculations that occur infrequently between loop iterations. As such, the strings that represented these programs would not contain tandem repeats, but instead repeated sub-strings separated by other tokens. A relaxation of tandem repeat analysis is to search for non-overlapping repeated sub-strings, which removes the contiguity requirement on the repeats. Concretely, given the string ababab, abab is an overlapping repeat, while ab is non-overlapping. We could use non-overlapping repeated sub-strings to assemble a set of traces T and a disjoint mapping f that achieves high coverage. While there exist standard suffix-tree algorithms to find repeated sub-strings, we found that the natural extensions of these algorithms to detect non-overlapping repeated sub-strings also resulted in quadratic runtime complexity. Our Algorithm In this work, we design a repeat finding algorithm that is directly aware of the optimization problem in <Ref> and runs in O(nlog(n)), where n is the size of the token history buffer. At a high level, our algorithm makes a pass through a suffix array constructed from the input buffer to collect a set of candidate repeats. It then greedily selects the largest repeated sub-strings that do not overlap with any previously chosen sub-strings. Psuedocode for our algorithm is in <Ref>, which takes a string S and returns a set of sub-strings that achieve high coverage of S. We assume that the reader is knowledgeable about suffix arrays and their structural properties. However, understanding the algorithm in <Ref> is not required to understand its usage in , as discussed in <Ref> and <Ref>. 
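For concreteness, the following Python sketch captures the greedy idea described in the rest of this subsection: build a suffix array, collect repeated sub-strings from adjacent suffix-array entries, and greedily keep the longest occurrences that do not overlap previously chosen ones. It is a deliberately naive illustration (quadratic suffix construction, no LCP array, no constant-time overlap checks), not the O(nlog(n)) implementation used in the actual system.

```python
def find_repeats(s, min_len=2):
    """Greedy non-overlapping repeat finder (naive illustration)."""
    n = len(s)
    sa = sorted(range(n), key=lambda i: s[i:])            # naive suffix array

    def lcp(i, j):                                        # longest common prefix length
        k = 0
        while i + k < n and j + k < n and s[i + k] == s[j + k]:
            k += 1
        return k

    # candidate repeats from adjacent suffix-array entries: (length, substring, start)
    candidates = []
    for a, b in zip(sa, sa[1:]):
        l = min(lcp(a, b), abs(a - b))    # trim so the two occurrences do not overlap
        if l >= min_len:
            candidates.append((l, s[a:a + l], a))
            candidates.append((l, s[b:b + l], b))

    taken = [False] * n
    chosen = set()
    # prefer long repeats; keep an occurrence only if it is disjoint from earlier picks
    for l, sub, start in sorted(candidates, key=lambda c: (-c[0], c[2])):
        if not any(taken[start:start + l]):
            taken[start:start + l] = [True] * l
            chosen.add(sub)
    return chosen

print(find_repeats("abcabcabc"))   # {'abc'} on this toy input
```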
As a first step, we construct a suffix array and longest common prefix array from the input buffer of tokens. We then iterate through adjacent pairs of suffixes to construct a set of candidate repeats, which are tuples of sub-strings defined by their length, the repeated sub-string, and its starting position in S. These candidates are constructed based on whether or not the shared prefix between adjacent suffix array entries overlap. Once all of the candidates have been constructed, we sort the candidates to greedily select candidates in order of length, and select as many occurrences of a particular sub-string as possible. We only select candidates that do not overlap with any previously selected candidates, and then deduplicate the chosen set of candidates as the result. A sample execution of <Ref> is shown in <Ref>. Our algorithm can be implemented with time complexity O(nlog(n)). Linear time algorithms exist for suffix array and LCP array construction <cit.>. Two candidates are generated for each entry in the suffix array, so sorting the candidates takes O(nlog(n)) time. The interval intersection step can be reduced to constant time by leveraging the candidate iteration order, so the entire loop executes in O(n) time. In particular, an array of length |S| can be maintained, and as each candidate is selected, all positions covered by the candidate are marked. Then, as candidates are iterated over in decreasing length and increasing start position order, interval intersection can be checked by checking if the start or end of an interval is marked. Finally, the deduplication can be done by generating a unique ID for each candidate sub-string in the candidate generation phase, and adjusting the candidate representation to be a tuple of length, ID and starting position; using this sort order allows deduplication to be done at each iteration of the candidate selection loop. Our algorithm aims to find good solutions to the optimization problem in <Ref> by identifying long repeated sub-strings and selecting as many as possible that do not overlap with each other. We trade off between an optimal solution to the optimization problem to instead find good solutions and maintain a lower asymptotic runtime. There are two such heuristics in our algorithm. First, when adjacent suffix array entries have a repetition, we consider only the maximal length repetition instead of all sub-strings of the repetition. Second, when we select which candidates to keep, we greedily choose the largest candidates instead of performing a bin-packing computation. While we do not provide theoretical guarantees on the optimality of our algorithm, we show in <Ref> that using our algorithm is able to identify good traces in complex, real-world applications. §.§ Recognizing and Replaying Candidate Traces 's trace replayer uses <Ref> to find candidate traces from the application's history of tasks. In this section, we discuss how 's trace replayer identifies and selects these candidate traces from the task stream to record and replay. Our design of the trace replayer has two major goals. First, the per-task overhead must be low, as it is imperative for performance for the application to issue as many tasks into the runtime as possible so that the runtime can either replay traces or perform dependence analysis ahead of execution. Slowing down the task launch rate would result in exposed latency from various sources in the runtime. Second, must balance exploration and exploitation when selecting traces. 
As more information about the application is gained, should switch to better traces as it finds them. However, should not leave a steady state until it is confident that performance can be improved, as recording new traces has a cost. As discussed previously, accumulates a history of tasks launched by the application and asynchronously uses <Ref> to select candidate traces. Asynchronous analysis of task histories is important to avoid stalling the application by waiting for the analysis to finish before accepting the next task from the application. When an asynchronous analysis completes, ingests the results into a trie that maintains the current set of candidate traces. Along with this trie, maintains a set of pointers into the trie that represent potential matched traces. As tasks are issued, updates the set of pointers by creating new pointers for each new task, stepping any existing pointers down the trie if possible, and removing any pointers that are made invalid. Once a pointer reaches a leaf of the trie and has matched a trace, has the option to forward the trace to the tasking runtime, wrapped by and calls. uses a scoring function to select which matched trace to replay when faced with multiple valid choices. The scoring function is based on the length of the candidate trace multiplied by a count of the number of times the trace has appeared. In calculation of the score, we impose a maximum value of the count that can be used, and exponentially decay the value of the count by how many tasks have been encountered since the trace last appeared. Finally, we increase the score slightly if a trace has already been replayed. Our scoring function encodes heuristics about trace selection and aims to balance exploration and exploitation. We naturally prefer long traces over shorter ones, as longer traces have the potential to eliminate more runtime overhead. The capping of the appearance count allows for to eventually switch from a trace that appeared early during program execution to a better trace that appears later in the execution. Next, decaying the appearance count ensures that a seemingly promising trace that occurs infrequently, does not eventually hit a threshold, and disrupts a steady state. Finally, since recording new traces has a cost, when faced with traces of a similar score, we bias towards a trace it has already replayed. §.§ Achieving Responsiveness and Quality 's trace finder accumulates tasks into a buffer and mines the buffer for traces using <Ref>. An important question is what should the size of that buffer be? The size of this buffer trades off between responsiveness of the 's trace identification and the quality of traces is able to find. With a small buffer, can identify traces early but will not be able to identify traces in programs with large loops. Meanwhile, a large buffer allows to identify long traces in complex applications but introduces significant startup delay in smaller applications. We did not want end users to be required to continually adjust the buffer size parameter as their application changes. As such, some strategy to adapt the buffer size along this tradeoff space is necessary. We found that a strategy that attempts to dynamically resize the buffer based on what traces to find is unsatisfactory, as the system is unable to differentiate between an application currently not repeating operations versus an application repeating a sequence of operations larger than the buffer size. 
Instead, we propose a strategy that selects a large fixed buffer size upfront, and then samples smaller pieces of the buffer in a principled manner to be responsive to the occurrence of short traces. samples from the buffer guided by the ruler function sequence <cit.>, which provides a practically useful sampling strategy with provable guarantees. The ruler function counts the number of times a number can be evenly divided by two. Applying it to the sequence 1, 2, 3, 4, … yields the sequence 0, 1, 0, 2, …. Raising two to the power of each element yields 1, 2, 1, 4, …, which we can interpret as how many of the most recently issued tasks to analyze. For example, with a buffer size of four, as tasks arrive would first analyze the first task, then the first two tasks, then the third task, and finally all four tasks. A visualization of this sampling policy is in <Ref>. This sampling policy lets quickly react to changes in the application by analyzing recent pieces of the buffer, while still allowing larger traces to be found by infrequently analyzing longer components of the buffer. For example, sampling the full buffer in <Ref> is required to find a trace that repeats in positions H2-H4 and H5-H7. In practice, we multiply the exponentiated ruler function by a larger constant (such as 250) to determine the sizes of the buffer pieces to sample. Finally, given that our algorithm in <Ref> runs in O(nlog(n)), we show that our sampling strategy increases the total runtime complexity of processing the buffer by only an extra log factor, yielding a total of O(nlog^2(n)). This technique enables all of the experiments in <Ref> to be run with the same buffer size configuration parameter. § IMPLEMENTATION DISCUSSION We now discuss important aspects of a realistic implementation of : the specifics of implementing in a distributed context, the decision not to perform speculation when replaying traces, and the necessary extensions to Legion's underlying tracing engine. §.§ Distributing the Analysis 's analysis as presented in <Ref> is sequential, processing tasks as they are issued by the application. In a distributed setting, leverages Legion's dynamic control replication <cit.> to act as a sequential analysis, except for one component, which we discuss next. With control replication, the application executes on each node and Legion shards the dependence analysis and execution across nodes. The main restriction of control replication is that the application must issue the same sequence of tasks on every node. We implement as a layer between the application and Legion, so inherits the control replication requirements of the application. In particular, each node must agree on which traces to replay and when during program execution to record and replay the traces. The only source of non-determinism in that may result in divergent decisions between nodes is the asynchronous processing of token buffers described in <Ref>. The asynchronous analysis may complete earlier on one node than another, resulting in that node replaying a trace before another node has identified that trace as a candidate. However, making the analysis synchronous would result in stalling the application until analyses complete. We resolve this tension by having each node agree on a count of processed operations to issue before ingesting the results of an asynchronous analysis.
§ IMPLEMENTATION DISCUSSION We now discuss important aspects of a realistic implementation of . In particular, we discuss the specifics of implementing in a distributed context, the decision not to perform speculation when replaying traces, and the necessary extensions to Legion's underlying tracing engine. §.§ Distributing the Analysis 's analysis as presented in <Ref> is sequential, processing tasks as they are issued by the application. In a distributed setting, leverages Legion's dynamic control replication <cit.> to act as a sequential analysis, except for one component, which we discuss next. With control replication, the application executes on each node and Legion shards the dependence analysis and execution across nodes. The main restriction of control replication is that the application must issue the same sequence of tasks on every node. We implement as a layer between the application and Legion, so inherits the control replication requirements of the application. In particular, each node must agree on which traces to replay and when during program execution to record and replay them. The only source of non-determinism in that may result in divergent decisions between nodes is the asynchronous processing of token buffers described in <Ref>. The asynchronous analysis may complete earlier on one node than on another, resulting in that node replaying a trace before another node has identified that trace as a candidate. However, making the analysis synchronous would stall the application until each analysis completes. We resolve this tension by having each node agree on a count of operations to issue before ingesting the results of an asynchronous analysis. If any node had to wait on an asynchronous analysis to complete, all nodes increase their count of operations to wait on for the next analysis. This strategy reaches a steady state where analysis results are ingested in a deterministic manner without stalling the application.
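A minimal sketch of this agreed-ingestion-point idea, under our own assumptions, is shown below. The `all_reduce_or` collective, the doubling back-off, and the initial gap of 1000 operations are illustrative choices and not details of the actual implementation, which coordinates shards through Legion's control replication machinery.

```python
class IngestionSchedule:
    """Decides, identically on every shard, when to ingest asynchronous analysis results."""

    def __init__(self, initial_gap=1000):
        self.gap = initial_gap            # operations to issue between ingestion points
        self.next_ingest_at = initial_gap

    def on_operation(self, op_index, analysis_ready, all_reduce_or):
        """Called for every issued operation; returns True when results should be ingested.

        `analysis_ready` is whether this shard's asynchronous analysis has finished;
        `all_reduce_or` is a collective OR across shards (e.g., backed by an allreduce).
        """
        if op_index < self.next_ingest_at:
            return False
        # Every shard reaches this point at the same op_index, so the collective is
        # issued deterministically and all shards make the same decision.
        someone_waited = all_reduce_or(not analysis_ready)
        if someone_waited:
            self.gap *= 2                 # give future analyses more time to finish
        self.next_ingest_at = op_index + self.gap
        return True
```

Because every shard evaluates the schedule at the same operation indices, ingestion remains deterministic even though the analyses themselves complete at different times on different nodes.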
§.§ (The Lack of) Speculation Speculation is a common technique in computer architecture to efficiently execute programs with data-dependent control flow. As has similarities to speculative components in architecture like trace caches (<Ref>), a natural design question was whether should speculate that a trace is about to be issued by the application. Our implementation of does not speculate, and instead waits for the entirety of a trace to arrive before issuing the trace to Legion. Committing to a speculation-based approach requires undertaking several complex components, the most complex of which is rolling back after misspeculation. In order to roll back from misspeculation, would either need to make backups of all regions modified by a trace, or track the changes made to regions during the execution of a trace, and then restore the original state of the program upon speculation failure. These approaches lead to unexpected costs in either memory usage or execution time. Since recovery from failed speculation has an additional cost, further complexity would arise in the algorithms used to make good predictions about which traces to replay. At the same time, we believe that the relative costs of different components within the Legion runtime system limit the potential benefits of speculation, making it not worth the additional complexity. In particular, Legion employs a pipelined architecture in which, for simplicity's sake, a task flows through three stages: 1) the application stage, where the task is issued by the application and made visible to , 2) the analysis stage, where the task is analyzed (or replayed as part of a trace), and 3) the execution stage, where the task is actually executed. The potential benefits of speculation arise from the cost ratio of the application stage and the analysis stage: since waits for an entire trace to accumulate in the application stage before dispatching it to the analysis stage, overheads in the application stage may accumulate and become exposed without speculation. However, in Legion the application stage of a task is extremely cheap, taking O(10us) per task, while the analysis stage with tracing takes O(100us) per task. These relative costs mean that the application stage is always far ahead of the analysis stage, and we find that in practice the latency of waiting for tasks to accumulate in the application stage is not exposed in the overall runtime. §.§ Non-Idempotent Traces An important improvement to Legion that was needed for was support for non-idempotent traces; previously, Legion only supported idempotent traces. A trace is idempotent when its preconditions imply its postconditions. The preconditions of a trace capture the state of the dependence analysis when the trace was recorded, and the postconditions capture the state of the dependence analysis after the trace is executed. Intuitively, a trace may only be replayed if its preconditions are satisfied, i.e. at the start of the trace the dependence analysis is in the same state as when the trace was recorded.[We refer readers to Lee et al. <cit.> for more information about trace conditions and idempotency.] The impact of idempotency is that for back-to-back replays of an idempotent trace, the trace's preconditions are known to hold and do not need to be verified. For several technical reasons, such as implementation complexity and the costs of checking trace preconditions, Legion previously supported only idempotent traces. We found that it was necessary to relax this restriction and support non-idempotent traces. A consequence of modeling trace identification as a string analysis problem is that the conversion of task sequences into strings loses information about the pre- and post-conditions of a trace. As such, it is difficult to bias the string algorithms towards sub-strings that have semantic properties such as idempotency. Supporting non-idempotent traces involves recording multiple instances of the trace, one for each set of preconditions it may have (as the postconditions do not imply the preconditions), selecting the instance that is applicable in the dependence analysis state when replay is requested, and eagerly applying the postconditions to the dependence analysis state after replay. § EVALUATION Overview We evaluate on the largest and most complex Legion applications written to date, including production scientific simulations and a distributed deep learning framework. Our results show that is able to effectively find traces in complex programs with lower overhead, enabling programmers to experience the benefits of tracing without manual effort and allowing a more general set of applications to be traced. Experimental Setup We evaluated on the Eos and Perlmutter supercomputers. Each node of Eos is an NVIDIA DGX H100, containing 8 H100 GPUs with 80 GB of memory and a 112-core Intel Xeon Platinum. Each node of Perlmutter contains 4 NVIDIA A100 GPUs with 40 GB of memory and a 64-core AMD EPYC 7763. Nodes of Eos are connected with an Infiniband interconnect, while Perlmutter uses a Slingshot interconnect. We compile Legion on Eos with the UCX networking module, and use the GASNet-EX <cit.> networking module on Perlmutter. We do not execute each application on both Perlmutter and Eos due to differences between the local environments on each machine. In our experiments, we evaluate the relative performance differences between traced and untraced programs, and comparisons between machines are not significant. §.§ Weak Scaling In this section, we discuss weak scaling results of applications using , as shown in <Ref> and <Ref>.
In a weak scaling study, we increase the problem size as the size of the target machine grows to keep the problem size per processor constant. For each application, we perform a sweep over different sizes of the problem to vary the task granularity, thus affecting the impact of runtime overhead. These different problem sizes are denoted in the graph by the “-s”, “-m” and “-l” label suffixes which stand for small, medium and large. At smaller problem sizes, more runtime overhead can be exposed, while larger problem sizes make it easier to hide runtime overhead. In each weak-scaling plot, we report the steady-state throughput of each configuration and problem size after a number of warmup iterations (discussed in <Ref>). We report throughput in iterations per second achieved by each configuration, so within a particular problem size, higher is better; across problem sizes, the smaller problem sizes will achieve a higher iterations per second than the larger problem sizes. S3D S3D <cit.> is a production combustion chemistry simulation code that has been developed over the course of many years by different scientists and engineers. The Legion port of S3D implements the right-hand-side function of the Runge-Kutta scheme, and interoperates with the legacy Fortran+MPI driver of the simulation. The integration between Legion and the legacy Fortran+MPI code leads to various constraints that the manual trace annotations interact with. For example, during the first 10 iterations, a hand-off between Legion and Fortran+MPI must occur every iteration, while after the first 10 iterations a hand-off is required only every 10 iterations. While not unmanageable, these interactions have led to relatively complicated logic to manually trace the main loop. We scale S3D on Perlmutter, and compare the performance of to manually traced and untraced versions of S3D. The results are shown in <Ref>. Even on a single node, tracing has a noticeable performance impact on the smaller problem sizes and affects the scalability of S3D. achieves within 0.92x–1.03x of the performance of the manually traced version, and between 0.98x–1.82x speedups over the untraced version. HTR HTR <cit.> is a production hypersonic aerothermodynamics application. HTR performs multi-physics simulations of hypersonic flows at high enthalpies and Mach numbers, such as for simulations of the reentry of spacecraft into the atmosphere. Like S3D, we evaluate 's performance on HTR on Perlmutter, and compare it against a manually traced version and an untraced version. While HTR without tracing performs competitively to the traced version at small GPU counts, <Ref> shows that tracing is necessary for performance at scale. achieves within 0.99x–1.01x of the performance of the manually traced version, and between 0.96x–1.21x speedups over the untraced version. CFD CFD is a application that solves the Navier-Stokes equations for 2D channel flow <cit.>. Unlike S3D and HTR, there is not a manually traced version of CFD, due to the difficulties around composition discussed in <Ref>. Developing a manually traced implementation of CFD would require either rewriting the application to remove any dynamic region allocation, or manual examination of allocator logs to find the number of iterations in the steady state. As a result, we compare CFD with to the standard untraced version on different problem sizes, which is the performance that users are able to achieve today. <Ref> shows weak scaling results for CFD on Eos. 
These results are similar to HTR, where leveraging tracing is necessary for performance at scale. On the smallest problem size, even though the tracing removes a large amount of runtime overhead, the tasks are too small to hide the communication latency at larger scales, leading to the observed fall off in performance. On larger problems, CFD with is able to maintain high performance while the untraced version falls off, yielding between 0.92x–2.64x speedups. TorchSWE TorchSWE is a port of the MPI-based TorchSWE <cit.> shallow-water equation solver, and is the largest application developed so far. Similarly to CFD, there is no manually traced version to compare to. However, unlike CFD, performing a rewrite of TorchSWE to enable manual tracing would be difficult, as TorchSWE contains an order of magnitude more lines of code. Weak scaling results for TorchSWE on Eos are shown in <Ref>. These results demonstrate that there does not exist a problem size for TorchSWE on Eos that can hide Legion's runtime overhead without tracing. Even the large problem size, which nearly reaches the GPU's memory capacity, exposes Legion runtime overhead at 8 GPUs. The reason for this is that TorchSWE maintains a large number of fields for each simulated point, and issues different array operations on each field. The amount of data needed for each element in the simulation does not allow the task granularity to be easily increased, as each new element added increases the memory footprint more than it increases the average task granularity. For such applications, leveraging tracing is a requirement, and enables complex applications like TorchSWE to do so automatically. TorchSWE itself contains enough task parallelism to hide communication latencies, but needs tracing to first lower runtime overhead. With , we are able to achieve between 0.91x–2.82x speedup on TorchSWE, achieving nearly perfect scalability on 64 GPUs. §.§ Strong-Scaling We now move from scientific simulation codes to distributed deep neural network training with FlexFlow <cit.>. FlexFlow is a deep neural network framework that searches for hybrid parallelization strategies for different layers of the network. We perform a strong-scaling experiment with FlexFlow on Eos to train the largest () network from the CANDLE <cit.> initiative[Due to engineering limitations in FlexFlow at the time of writing, the network was parallelized only with data parallelism.]. A strong-scaling study fixes the problem size on a single processor, and increases the number of processors while keeping total problem size constant. To strong scale the training, we fix the batch size for single GPU, and then increases the number of GPUs available. We compare the performance of FlexFlow with manual trace annotations, two configurations of (discussed next), and no tracing. As seen in <Ref>, as FlexFlow scales up, the tasks become smaller and begin to expose Legion runtime overhead without tracing, leading to slowdowns when scaling up. The two configurations of differ in the maximum trace length to be replayed ('s history buffer is the same, but recorded traces are broken into pieces of a given maximum size). The first (auto-5000) is the standard configuration with no maximum, as used in all other experiments, and the second (auto-200) has a maximum length of 200 tasks, which is similar to the length of the manually annotated trace. 
As FlexFlow strong scales, the cost of Legion issuing the trace replay starts to become exposed as the execution time of the trace decreases, so shorter traces expose less of this latency and thus perform better[The Legion team is aware of this shortcoming and plans to address it in the future.]. On 32 GPUs, the configuration of with a maximum trace length of 200 achieves 0.97x the performance of the manually traced FlexFlow, and achieves a 1.5x speedup over the untraced FlexFlow. §.§ Overheads of We now discuss the overheads that imposes over standard execution with Legion. While we inherit the overheads of Legion's existing tracing infrastructure <cit.> (the cost of memoizing traces), imposes two new sources of overhead to measure: 1) the overhead on task launches and 2) the time taken until a steady state is reached. As discussed in <Ref>, intercepts the application's task launches and performs some analysis work before forwarding the task launches to Legion. This analysis work includes launching asynchronous token buffer processing jobs and manipulating traversals of the trie data structures used for online trace identification. To quantify this overhead, we ran a two-node experiment on Perlmutter and measured the time it took to launch (not analyze or execute) Legion tasks with and without enabled; two nodes were used to ensure that the coordination logic discussed in <Ref> was included in the timing. We found that task launching took on average 7μs without , and on average 12μs with . While increases the task launch overhead, this overhead is still significantly lower than the amount of time it takes to replay a task as part of a trace, which is roughly 100μs. As such, the task launching cost of can still be effectively hidden by the asynchronous runtime architecture. The asynchronous analysis jobs that launches to process task histories do not affect the critical path, and utilize Legion's background worker threads. While in theory these jobs could compete for the resources necessary for Legion's dependence analysis, we have not yet encountered an application where they caused a detriment in performance. To measure the time taken until reaches a steady state of replaying traces on our iterative applications, we report the number of iterations until a steady state is reached. <Ref> contains the iteration counts needed for each application in <Ref> and <Ref>, which range from 30 to 300. These simulation and machine learning workloads would be run in production for a significantly larger number of iterations. We note that the applications have a larger number of required warmup iterations due to the dynamic behavior discussed in <Ref>, where a single application-level iteration of the program does not necessarily correspond to a repeated sequence of tasks. §.§ Trace Search To give intuition about the search process that performs, we constructed a visualization of the amount of runtime overhead that is removing over time. <Ref> is a visualization of S3D over time (for 70 iterations), where for each task launched by S3D, we display how many of the previous 5000 tasks were traced. For iterative computations, this procedure yields the expected result, where spends time during program startup discovering new traces, and then settles into a steady state. The number of traced operations increases slightly by the end of the execution, as finds a better set of traces that lowers the number of untraced operations.
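The per-task curve in this visualization can be recomputed from a log recording, for each launched task, whether it was covered by a replayed trace; the sliding-window count below is a straightforward way to do so, with the function name and arguments being our own.

```python
from collections import deque

def traced_window_counts(was_traced, window=5000):
    """For each launched task, the number of the previous `window` tasks that were traced.

    `was_traced` is an iterable of booleans, one per task, in launch order."""
    recent = deque(maxlen=window)
    counts = []
    running = 0
    for flag in was_traced:
        if len(recent) == window:
            running -= recent[0]   # value about to fall out of the window
        recent.append(int(flag))
        running += int(flag)
        counts.append(running)
    return counts
```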
§ RELATED WORK Just-In-Time Compilers Just-In-Time (JIT) compilers <cit.> for dynamic languages have a tiered execution system, where the target language is first translated to bytecode, which is executed by an interpreter. Frequently executed program fragments are then compiled into native instructions for significantly faster execution. employs a similar architecture, where a task-based runtime system's dynamic analysis acts as the slow but general interpreter, and uses a tracing engine as the fast but specialized compiler. JIT compilers rely on code landmarks like function definitions and basic block addresses to maintain counters of frequently executed program fragments. Since views an unrolled stream of tasks, it must employ novel techniques for identifying traceable program fragments. Trace Caches Trace caches <cit.> have been used in processors to improve instruction fetching bandwidth. At a high level, trace caches record the common jump paths taken through basic blocks, and pre-fetch those paths when revisiting the same basic blocks. shares a similar architecture to trace caches, which also use patterns in running programs to improve the performance of a slower dynamic component (in this case, the control-dependent instruction fetching). Similarly to JIT compilers, trace caches also use landmarks in executing programs to guide their decisions, which is not able to exploit. Also, by virtue of being implemented in hardware, the mechanisms that trace caches use must be simpler than the kinds of analyses can use, which are implemented in software. String Analysis <Ref> contains a partial discussion of related string analysis work; we continue the discussion here. The most relevant string processing problem in the bio-informatics community is motif finding <cit.>, which is the problem of finding short (5–20 token long), fixed-length repeated strings in a larger corpus. The focus on a short, fixed sub-string length and the tendency to use genomic information to guide the search make these techniques not applicable to our problem. Algorithms for document fingerprinting such as Moss <cit.> have been developed that accurately identify copies between documents. In particular, these techniques are guaranteed to detect whether repetitions of at least a minimum size exist across documents. Fingerprinting techniques are useful for detecting whether repeated sub-strings exist, but do not directly aid in finding the sub-strings themselves that have high coverage. Inspector-Executor Frameworks is similar in spirit to Inspector-Executor (I/E) frameworks that dynamically analyze program behavior and then perform optimizations <cit.>. I/E frameworks generally focus on recording information related to array accesses and use knowledge of these accesses to perform compiler optimizations that parallelize or distribute loops. In contrast, observes a dynamic sequence of tasks and searches for repeated sub-sequences of tasks to record as traces. Task-Based Runtime Systems Several task-based runtime systems have been developed for high performance computing <cit.>, data science <cit.>, and machine learning <cit.>. One axis of runtime overhead that these different systems impose on applications is the cost of dependence analysis. The cost of dependence analysis is directly related to the expressivity and flexibility of the runtime system's programming model.
Legion has an expressive data model that supports content-based coherence <cit.>, leading to a relatively expensive dependence analysis. As a result, tracing <cit.> was developed to reduce the costs of the dependence analysis. A tracing-like technique called Execution Templates <cit.> was also developed to cache control plane decisions in runtime systems for cloud-based environments. § CONCLUSION In this work, we introduce , a system and framework for task-based runtime systems to automatically trace the dependence analysis for repeated program fragments. By automatically detecting traces, is able to improve programmer productivity by insulating programmers against changing task granularity, and enables new applications to take advantage of tracing. We develop an implementation of that targets the Legion runtime system and show that, on the most complex Legion applications written to date, is able to match the performance of manually traced code and to effectively optimize currently untraceable programs, improving performance at scale by up to 2.82x. § ACKNOWLEDGEMENTS We thank Wonchan, Manolis, and Shriram for help with Legate; Seshu for help with S3D; Elliot for general help and for help with Regent; Mario and Caetano for help setting up HTR; Zhihao and Colin for help with FlexFlow; Fred's group for feedback along the way; Danny Sleator and Sam Westrick for pointers on the string analysis problem; Roshni for help with setting up the optimization problems; and James, AJ, Chris, Rubens, and Shiv for the writers' workshop.
http://arxiv.org/abs/2406.18277v1
20240626120121
Hemispheric analysis of the magnetic flux in regular and irregular solar active regions
[ "A. Zhukova" ]
astro-ph.SR
[ "astro-ph.SR" ]
§ ABSTRACT Studying the hemispheric distribution of active regions (ARs) with different magnetic morphology may clarify the features of the dynamo process that is hidden under the photospheric level. The magnetic flux data for 3047 ARs from the CrAO catalog between May 1996 and December 2021 (cycles 23 and 24) were used to study the cyclic variations of ARs and to perform a correlation analysis. According to the magneto-morphological classification (MMC) of ARs proposed earlier, subsets of the regular (obeying empirical rules for sunspots) and irregular (violating these rules) ARs were considered separately. Our analysis shows the following. For ARs of each MMC type, in each of the hemispheres, time profiles demonstrate a multi-peak structure. The double-peak structure of a cycle is formed by ARs of both MMC types in both hemispheres. For the irregular ARs, the pronounced peaks occur in the second maxima (close to the polar field reversal). Their significant hemispheric imbalance might be caused by a weakening of the toroidal field in one of the hemispheres due to the interaction between the dipolar and quadrupolar components of the global field, which facilitates the manifestation of the turbulent component of the dynamo. The similarity of the irregular ARs activity that was found in adjacent cycles in different hemispheres also hints at realization of the mixed-parity dynamo solution. For the quadrupolar-like component of the flux (compiled in the simple axisymmetric approximation), signs of oscillations with a period of about 15 years are found, and they are pronounced specifically for the irregular groups. ARs of this MMC type might also contribute to α-quenching.
dynamo – Sun: activity – Sun: magnetic fields § INTRODUCTION Active regions (ARs) are the most famous manifestation of the solar activity, reflecting its cyclical nature, which still harbors unclear aspects. <cit.>. For instance, despite decades of research, the origin of the North-South (N-S) asymmetry of the solar activity is not fully understood. The noticeable hemispheric imbalance is confirmed by numerous observations <cit.>. This phenomenon is constantly in the focus of modern research on sunspot number, areas <cit.>, and other indicies of the global activity <cit.>. The understanding of the N-S asymmetry mechanisms is essential for the prediction models <cit.>. As it is follows from the pioneer magnetic cycle models <cit.>, the activity of the two hemispheres should not differ. ARs, the source of which is understood as a toroidal component of the global magnetic field, should occur approximately equally in both hemispheres. When the magnetic field lines of the global dipole are stretched along the equator in the convection zone due to differential rotation, neither hemisphere has an advantage. On the other hand, the mean-field dynamo theory requires the destruction of the mirror symmetry of turbulence in convection zone <cit.>, which is a necessary condition for overcoming the limitations of anti-dynamo theorems <cit.>. Certain theoretical studies assume that the α-effect (which is responsible for restoring the poloidal component of the global field) has a different distribution in the two hemispheres <cit.>. Observational studies allows for different interpretations of the interaction of the two hemispheres. Some authors suggest that the interdependence of magnetic field systems originating in the hemispheres is weak, and the hemispheres are rather independent of each other <cit.>. However, the noticeable interaction between the hemispheres was reported by <cit.>. The relationship between the characteristics of adjacent cycles (such as the lag between the activity of the hemispheres) and the possible memory of the cycles are also discussed <cit.>. The hemispheric distribution of ARs with different individual properties (observed or simulated) also show peculiarities. For instance, <cit.> found that the time profiles of the asymmetry index vary for sunspot groups of different sizes. <cit.> showed that the long-term behavior of solar activity (including significant hemispheric asymmetry) considerably depends on the presence of large individual ARs with atypical properties (`rogue' regions). By atypical properties they mean violations of the Hale's polarity law and Joy's law <cit.>. Recall that anti-Hale ARs demonstrate uncharacteristic (for certain cycle and hemisphere) polarity of the leading spot, while non-Joy groups have unusual tilt, i.e. inclination of the magnetic axis relative to the East-West direction. <cit.> discussed the possibility of a difference in tilts during the Maunder Minimum and in modern cycles. <cit.> reported that the tilt randomness is the most crucial element (among diverse components) of the Babcock-Leighton mechanism in resulting hemispheric irregularities in the evolution of polar field. <cit.> revealed the role of anti-Hale ARs in the weakening of the polar field in certain hemisphere, although they found the effect of a single sunspot pair as not very dramatic. 
<cit.> shown that the decay of anti-Hale and non-Joy ARs results in the remnant flux surges that are directed towards the pole and transform the conventional order in magnetic flux transport in corresponding hemisphere. Distinguishing ARs with atypical properties also underlies the recent magneto-morphological classification (MMC) of ARs <cit.>. The idea is that, along with a set of the regular ARs (obeying empirical rules for sunspot groups), a special set of the irregular ARs (violating one or more rules) can be considered (see Section <ref> for more details). Although both the regular and irregular ARs follow the cycle and supposed to be generated by the global dynamo, the strongest fluxes of the irregular ARs are observed in the second maximum, which may indicate intervention of the turbulent component of the dynamo <cit.>. As one of the reasons for the irregular ARs hemispheric imbalance, the interplay between the dipolar and quadrupolar components of the global magnetic field was assumed <cit.>. Please note that the role of the quadrupolar component in the solar activity was widely discussed <cit.>. For the large-scale magnetic field, the quadrupole mode was assumed as one of the reasons for the N-S asymmetry <cit.>. Grand minima of the solar activity may also be associated with violations of the dipolar parity <cit.>. In addition, focusing on the totality of ARs-`violators', one may be interested in whether such groups are the result of random deviations that fluctuate the cycle provided by the regular groups, or whether they are an inherent part of the dynamo process that have a special functionality. Although the terminology <cit.> hints at the former, the peculiar hemispheric asymmetry <cit.> encourage us to study this issue in more detail. Here we analyze the magnetic fluxes of ARs to identify the features of the hemispheric distribution of groups with different magnetic morphology and to reveal their involvement in the dynamo process. Possible signs of the interplay between the dipolar and quadrupolar components of the magnetic field, which might be expressed in the irregular ARs flux profiles, are also considered. The study encompasses two completed solar cycles (SCs), namely, SCs 23 and 24 (from May 1996 to December 2021). § DATA AND METHOD In this study, unlike our previous works on the N-S asymmetry of the number of sunspot groups <cit.>, we based on the data on the magnetic fluxes of ARs. Magnetic fluxes of ARs are widely associated with the subphotospheric toroidal magnetic field produced by the global dynamo. The magnetic flux data (used for each AR once) can be considered as a `generative' activity index and allows us to make assumptions about the features of the dynamo process <cit.>. As a source of the magnetic flux data we used the catalog of the magneto-morphological classes of ARs of the Crimean Astrophysical Observatory (MMC ARs CrAO catalog). The catalog was created in 2017 <cit.>, and signeficantly redesigned in 2022 in accordance with the approach outlined in <cit.>. The catalog is available at the CrAO web site (https://sun.crao.ru/databases/catalog-mmc-ars). Recall that, in accordance with a technique of independent snapshots of full-disk magnetograms <cit.>, the MMC ARs catalog includes data on ARs that appeared on the disk every 9th day in the range of 60 degrees from the central meridian. 
Such selection parameters allow satisfying three conditions: i) independence of snapshots <cit.> and accounting of each unique AR once; ii) minimizing outcome of the projection effect (ARs with inversion of the magnetic field near the limb were discarded); iii) three 9-day snapshots cover the Carrington rotation (which facilitates subsequent data processing). For each of the 3047 ARs (that are recorded in the MMC ARs catalog for the period of the study from May 1996 to December 2021), the catalog contains the calculated unsigned magnetic flux, identifier (NOAA number), coordinates and other specific data. For the SC 23, the magnetic fluxes of ARs were calculated from the magnetic field data of the Michelson Doppler Imager <cit.> on board the Solar and Heliospheric Observatory. For the SC 24, the magnetic fluxes were obtained by means of the Helioseismic and Magnetic Imager <cit.> aboard the Solar Dynamics Observatory. More specifically, the Space-weather HMI Active Region Patches <cit.> were used for the HMI period. To take into account the systematic difference between the MDI and HMI instruments <cit.>, during processing of the different cycles data, we applied to them a correction coefficient <cit.>. In addition, only those ARs whose magnetic flux exceeding the threshold of 10^21 Mx were considered. More details regarding the magnetic flux calculation procedure and the features of the current version the MMC ARs catalog can be found in <cit.>. When analyzing the data, we distributed ARs between two main MMC types. According to the MMC <cit.>, all studied ARs, except for unipolar spots, were sorted out between the following categories: regular – bipolar ARs obeying empirical laws (rules) for sunspot groups; irregular ARs – all the rest. By classical rules we mean the Hale polarity law (implying a certain polarity of the leading sunspot depending on whether the AR belongs to the even/odd cycle and N-/S-hemisphere), the Joy law (latitudinal dependence for the tilt of ARs) and the prevalence of the leading sunspot rule <cit.>. It is quite obvious that the regular ARs are compatible with classical magnetic cycle models <cit.> and the mean-field dynamo theory <cit.>. Their existence is completely determined by the global dynamo action <cit.>. The class of the irregular ARs, on contrary, represents a violation of the clear pattern and may indicate the interference of other mechanisms of the magnetic field excitation. In addition, as we interested in the N-S asymmetry of ARs, we distributed ARs between N- and S-hemispheres. As the result, along with the data for all the studied ARs (total unsigned magnetic flux), we were dealing with four sets of ARs, depending on their magnetic morphology and location in different hemispheres. We also compiled two additional quantities (conventionally named by us `semi-sum' and `semi-difference' parts of the flux, see Section <ref>) from the hemispheric data, and that added two more subsets for us to study. We suppose that these parts of the flux might be related to its dipolar-like and quadrupolar-like components, as it is discussed in Subsection <ref>. Please note that the flux of each set was calculated separately. Final time series consist of the cumulative magnetic flux data per rotation. § TEMPORAL VARIATIONS OF THE REGULAR AND IRREGULAR ARS IN DIFFERENT HEMISPHERES Temporal variations of the total unsigned magnetic flux (for all studied ARs) are presented in Fig. <ref> (top panel, black line). 
The curve for the total flux illustrates the cycle progress and has the noticed double-peak structure, which is known since <cit.>. In hemispheric data for the total flux (middle and bottom panels, grey fill), in the SC 23, the double-peak structure can also be traced. However, the depth of the gap between two main maxima of the cycle varies in different hemispheres (in the N-hemisphere, the time profile looks almost like a plateau, whereas, in the S-hemisphere, the flux decreases significantly). In the SC 24, the pattern is different. The two main maxima are formed by ARs in different hemispheres (the N-hemisphere dominate in the first maximum, while the south fluxes more pronounced in the second maximum). It is consistent with findings by <cit.>, who showed that the double peaks may occur in one of the hemispheres without having any counterpart of the same in the other hemisphere. Temporal profiles for regular and irregular ARs (Fig. <ref>, middle and bottom panels) are shown with blue and red lines, respectively. In each of the hemispheres, each of the profiles demonstrates rather a multi-peak structure. Some of the peaks coincide with certain main maximum (maxima) of the cycle. In different hemispheres, the peaks occur sometimes in-phase, and sometimes – out-of-phase. For different MMC-type ARs, the temporal profiles differ (Fig. <ref>). The regular ARs dominance is observed in the first maximum of the SCs 23 (S-hemisphere) and SC 24 (N-hemisphere). The irregular groups profiles demonstrate more peculiarities in the second maxima. Although the number of the irregular ARs is half the number of regular groups <cit.>, their fluxes exceed the regular ARs fluxes in this temporal interval <cit.>. As it follows from Fig. <ref>, the dominance of the irregular ARs fluxes in the second maximum is provided by groups in the S-hemisphere. Thus, the double-peak structure of a cycle as a whole is formed by ARs of both MMC types in both hemispheres. The trends for the fluxes are more pronounced than that for the number of ARs <cit.>. The second main maximum of the cycle occurs due to ARs with irregularities in their magnetic configuration, and the increased fluxes of the irregular ARs during this temporal interval are thought to be influenced by the turbulent component of the solar dynamo <cit.>. The fact that the regular and irregular ARs fluxes are comparable to each other may also be interesting in terms of the surface polar field evolution, which eventually impact the predictability of the next solar cycle <cit.>. Specific polar surges are supposed to be formed due to the presence of large anti-Hale and non-Joy ARs at the solar surface, and this is confirmed by observations <cit.>. Since the significant contribution of the irregular ARs in the solar cycle progress is found here, a comprehensive analysis of the relationship between characteristics of polar surges and the irregular ARs presence may be interesting as a topic for future studies. § TEMPORAL VARIATION OF THE ASYMMETRY INDICES FOR THE REGULAR AND IRREGULAR ARS To quantify the discrepancy in activity between the N- and S-hemispheres, we used several asymmetry indices. Along with the traditional normalized asymmetry index, (N - S)/(N + S) <cit.>, we also used the absolute asymmetry index, (N - S), which is supposed to reproduce variations in activity better than the normalized index <cit.>. 
The point is that the absolute index shows the real imbalance between the hemispheres, whereas the normalized index `spreads' it across the overall activity. Usually, the relative asymmetry index demonstrates pronounced values in the minimum, while the absolute asymmetry is strong in maximum of the cycle <cit.>. Thus, these two indices complement each other and we used both of them. Note that when the index value is greater than zero, solar activity is dominant in the N-hemisphere, otherwise, the opposite is true. As an additional way to calculate the hemispheric imbalance, we used the normalized index, (N - S)^2/(N + S). The choice was determined by the fact that this expression coincides with the equation for chi-square statistics for sunspot hemispheric data <cit.>. Strictly speaking, for the magnetic fluxes, the result of calculations by this formula cannot be accepted as a χ-square statistics (this data are non-integer, and the sets of ARs are too small in most of rotations). However, some faint hint of the significance of the N-S asymmetry can be obtained using this index. Temporal variations of the different asymmetry indices for the magnetic fluxes of total ARs (grey fill), the regular (blue line) and irregular (red line) ARs are presented in Fig. <ref>. The top panel is the same as in Fig. <ref> and provided to illustrate the cycle development. The absolute asymmetry index, N - S, normalized indices, (N - S)/(N + S) and (N - S)^2/(N + S), are shown in the second, third and bottom panels, respectively. For total ARs, in the ascending phase and first maximum of the SC 23, one can see multiple transitions of activity from hemisphere to hemisphere. During other time intervals, the asymmetry index retains its sign for several years. In the SC 24, the sign change is observed twice, which is consistent with other studies <cit.>. The main points of the absolute asymmetry index sign changing (Fig. <ref>, second panel) are marked by the dashed vertical lines of different colors (blue for minima and green for maxima). For the regular and irregular ARs, in the beginning of the SC23, the time profiles show considerable difference. After 2002, the activity of both MMC type ARs follows the overall progress. It is worth mentioning that, in some time intervals, the hemispheric asymmetry of the irregular ARs even more pronounced than that for the regular groups. The absolute asymmetry index for the irregular ARs reaches its maximum values in intervals that can be called `extreme' for convenience (the SC 23, first maximum, N-hemisphere and SC 24, second maximum, S-hemisphere) (Fig. <ref>, second panel). The values of the index in these intervals are greater than the highest value for the regular groups (that is observed in the SC 24 in the first maximum). Nevertheless, the values of the normalized index, (N - S)/(N + S), are comparable for ARs of different magnetic morphology in the extreme intervals (third panel). This may be due to the fact that these intervals fall at the maximum, where (as it was mentioned above) the normalized index (N - S)/(N + S) is not very pronounced. In addition, the index (N - S)^2/(N + S) allows us to assume the relevance of the strong hemispheric imbalance for the irregular ARs in the extreme intervals (bottom panel). In the minima of the cycles, as the variations of the normalized index (N - S)/(N + S) shows, a more noticeable N-S asymmetry is observed for the regular groups. 
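For reference, the three indices can be computed per Carrington rotation directly from the hemispheric flux sums; the sketch below is a straightforward rendering of the formulas above, where the array handling and the NaN treatment of empty rotations are our own choices.

```python
import numpy as np

def asymmetry_indices(flux_north, flux_south):
    """Hemispheric asymmetry indices from per-rotation unsigned flux sums.

    Returns the absolute index N - S, the normalized index (N - S)/(N + S),
    and the chi-square-like index (N - S)**2/(N + S)."""
    n = np.asarray(flux_north, dtype=float)
    s = np.asarray(flux_south, dtype=float)
    total = n + s
    absolute = n - s
    with np.errstate(divide="ignore", invalid="ignore"):
        normalized = np.where(total > 0, absolute / total, np.nan)
        chi_like = np.where(total > 0, absolute**2 / total, np.nan)
    return absolute, normalized, chi_like
```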
§ TEMPORAL VARIATIONS OF THE `SEMI-SUM' AND `SEMI-DIFFERENCE' PARTS OF THE FLUX We also considered additional quantities that were conventionally called by us `semi-sum', F_SS, and `semi-difference', F_SD, parts of the magnetic flux. These quantities were obtained from the hemispheric data (using even and odd functions) as F_SS = (F_N + F_S)/2, F_SD = (F_N - F_S)/2, where F_N and F_S are the parts of the flux observed in the N- and S-hemispheres, respectively. In a simple approximation (as discussed below in Section <ref>), these quantities might be associated with the dipolar-like and quadrupolar-like components of the flux. Since we operate with an unsigned magnetic flux (see Section <ref>), F_N and F_S must be assigned a certain sign. In accordance with the Hale polarity law and approach by <cit.>, in the SC 24, the flux was accepted as a positive (negative) in the N-hemisphere (S-hemisphere). The sign was reversed in the adjacent minima. For the minimum between SCs 23 and 24, 2008.03 (in the N-hemisphere) and 2009.02 (in the S-hemisphere) were adopted as the dates of sign reversal <cit.>. For the minimum between SCs 23 and 24, the dates 2020.02 (in the N-hemisphere) and 2019.04 (in the S-hemisphere) were fitted by the MMC ARs CrAO catalog data. Temporal variations of the semi-difference (middle panel) and semi-sum (bottom panel) parts of the flux are shown in Fig. <ref>. The top panel is the same as in all previous Figs. The data for the total ARs are represented by grey fill, time profiles for the regular and irregular ARs are shown by blue and red lines, respectively. To more accurately distinguish the peaks, these data are smoothed by a Gaussian kernel. The full width at half maximum (FWHM) value for the kernel is accepted for 1.2 years. Dashed vertical lines (associated with the moments of changing the sign of the absolute asymmetry index) are adopted from Fig. <ref>. These blue (for cycle minima) and green (for cycle maxima) lines almost coincide with the moments of changing the sign of the semi-difference and semi-sum parts of the flux, respectively. For total ARs (Fig. <ref>, grey fill), the temporal profile for the semi-difference part of the flux evidently reflects the course of the cycle (taking into account the sign changes). The semi-sum part is significantly inferior in strength to the dominant semi-difference part (note that the scales in the middle and bottom panels are not the same). The pattern for the semi-sum part is less regular and does not show typical cyclic variations. In 1999-2001, the regular and irregular ARs profiles varies in the out-of-phase manner. For the regular and irregular ARs, the semi-difference part time profiles are pretty close (Fig. <ref>, middle panel), however, the semi-sum part profiles (bottom panel) differ considerably. For the semi-difference part of the flux, the regular ARs activity is somewhat more noticeable in the first maximum, whereas the irregular groups manifestation occurs in the second maximum. And this is true for both cycles. For the semi-sum part of the flux, the irregular ARs activity is most pronounced in the SC 23 (first maximum) and SC 24 (second maximum). Wakening of the irregular ARs can also be guessed at the beginning of the SC 25. Nevertheless, irregular ARs do not play a substantial role during the declining phase of SC 23 and the ascending phase of SC 24. In general, irregular ARs show more dramatic changes in the semi-sum part of the flux than the regular groups. 
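The construction of these two quantities can be sketched as follows: the unsigned hemispheric fluxes are first given a sign that flips at the adopted reversal dates, and the semi-sum and semi-difference parts then follow from Eqs. (1)-(2). The function names and the decimal-year date handling are illustrative; the reversal dates in the usage comment are the ones quoted above.

```python
import numpy as np

def signed_hemispheric_flux(unsigned_flux, dates, reversal_dates, initial_sign=1.0):
    """Assign a sign to an unsigned hemispheric flux series, flipping at each reversal date."""
    dates = np.asarray(dates, dtype=float)
    flips = np.sum(dates[:, None] >= np.asarray(reversal_dates, dtype=float)[None, :], axis=1)
    return initial_sign * (-1.0) ** flips * np.asarray(unsigned_flux, dtype=float)

def semi_parts(f_north_signed, f_south_signed):
    """Eqs. (1)-(2): the semi-sum and semi-difference parts of the signed hemispheric fluxes."""
    f_ss = 0.5 * (f_north_signed + f_south_signed)   # 'semi-sum', quadrupolar-like
    f_sd = 0.5 * (f_north_signed - f_south_signed)   # 'semi-difference', dipolar-like
    return f_ss, f_sd

# Example with the reversal dates quoted in the text: the N-hemisphere flux is taken
# negative in SC 23 and flips at 2008.03 and 2020.02; the S-hemisphere flips at
# 2009.02 and 2019.04.
# f_n = signed_hemispheric_flux(flux_n, dates, [2008.03, 2020.02], initial_sign=-1.0)
# f_s = signed_hemispheric_flux(flux_s, dates, [2009.02, 2019.04], initial_sign=+1.0)
# f_ss, f_sd = semi_parts(f_n, f_s)
```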
Possible interpretation is discussed below (see Sections <ref>, <ref>). § CORRELATION FUNCTIONS FOR THE REGULAR AND IRREGULAR ARS §.§ Auto-correlation for the semi-sum and semi-difference parts of the flux Our next test was aimed at finding signs of oscillations of the semi-sum and semi-difference parts of the flux produced by different MMC-type ARs. The evidence of quasi-periodic variations in a time dependent variable F(t) can be obtained using standart auto-correlation function C_F(nδ t) = C_F(-nδ t) = ∑_i=1^N-n(F(t_i) - F)(F(t_i+n) - F)/∑_i=1^N(F(t_i) - F)^2 , where δ t is the time increment (one rotation in our case), N is the total number of data points (341 rotations for the period of the study), n is the time lag. The results of calculations according to Eq. 3 for the regular (blue lines) and irregular (red lines) ARs are shown in Fig. <ref>. For the total flux, for the dominant semi-difference part (top panel, black line), the correlation function show a presence of cyclic variations with a period of 12 years. This is slightly different from the expected well-known value of 11 years for the solar cycle <cit.> and may be explained by the prolonged declining phase of the SC 24. For groups of different MMC types, the period of 12 years is also clearly pronounced. For the semi-sum part of the flux (Fig. <ref>, bottom panel), the the correlation function is found to be more variable. For total ARs (black line), a period between 14 and 15 years can be detected. For the irregular ARs, the semi-sum part of the flux also demonstrates the close period and this is especially interesting, since for the regular ARs, a pronounced periodicity in the semi-sum part variations is not revealed. §.§ Cross-correlation between the hemispheric fluxes We also found the correlation between the fluxes of ARs in different hemispheres as C_NS(nδ t) = 1/N-n∑_i=1^N-|n|F_N(t_i+|n|)F_S(t_i), n < 0 1/N-n∑_i=1^N-nF_N(t_i)F_S(t_i+n), n ≥ 0 , where time dependent variables F_N(t) and F_S(t) represent ARs in the N-hemisphere and S-hemispheres, respectively. Other notations are the same as for Eq. 3. Note that in the Eq. 4, we deal with the convolution of fluxes in different hemispheres. Thus, we used non-normalized dimensional data. In Fig. <ref>, the results of calculations according to Eq. 4 for the regular (top panel, blue line) and irregular (bottom panel, red line) ARs are presented. The right part of each of the graphs represents the case when the second series F_S(t) is delayed relative to the first series F_N(t). In the left part, the time lag between the series is opposite. Thus, in each of the graphs, in the right part, the main side peak shows the relationship between the activity in the SC 23 (N-hemisphere) and SC 24 (S-hemisphere). The side peaks in the left parts correlate fluxes of ARs in the SC 23 (S-hemisphere) and SC 24 (N-hemisphere). For total ARs (Fig. <ref>, grey fill), a regular correlation pattern is observed. The two main side peaks are slightly lower than the central one and indicate oscillations with a period between 11 and 12 years. The left and right sides of the graph are almost symmetrical. For the regular ARs (Fig. <ref>, top panel), in the right part, the profile is approximately the same as for the total groups. The blurring of the left side peak might be due to the mismatch of the time profiles for total and regular ARs in the SC 24 (N-hemisphere) and in the declining phase of the SC 23 (S-hemisphere) (Fig. <ref>). For the irregular ARs (Fig. 
<ref>, bottom panel), on the left side of the panel, the blurring is less pronounced. However, in both parts of the graph, there is a significant difference in the height of the central and side peaks. This represents a meaningful contrast between the graph and the usual pattern. Note that the phenomenon is expressed specifically for the irregular groups. A high right side peak implies a significant correspondence between the N-hemisphere (SC 23) and S-hemisphere (SC 24). A small left side peak, on contrary, shows a weakened correlation between the N-hemisphere (SC 24) and S-hemisphere (SC 23). Thus, for the irregular ARs, we deal with two different cases of the strengthened/weakened relationship between the two hemispheres in the adjacent cycles. According to the theoretical concepts, a special symmetry between the adjacent cycles may be realized as the result of the mixed-parity solutions for the dynamo models <cit.>. The magnetic flux for the n-th SC in the N-hemisphere is predicted to be similar to the flux for the (n+1)-st SC in S-hemisphere and vice versa. As an example of such an interaction, the observed strong correlation between the hemispheres (N23–S24) could be considered. Although, it should be noted that the case of weakening of correlation between the hemispheres (S23–N24) is also observed. The interaction of the dipolar and quadrupolar components of the global magnetic field was recently assumed as the reason for the N-S asymmetry expressed in features of the regular and irregular ARs statistics <cit.>. We consider the possibility to clarify this issue in the next Section <ref>. § THE RELATIONSHIP BETWEEN THE DIPOLAR AND QUADRUPOLAR COMPONENTS OF THE LARGE-SCALE MAGNETIC FIELD AND OCCURRENCE OF THE REGULAR AND IRREGULAR ARS §.§ Separation of the the dipolar-like and quadrupolar-like components from available flux data for the regular and irregular ARs As it follows from the previous Subsection <ref>, in terms of distinguishing between the regular and irregular ARs, it is interesting to infer the dipolar and quadrupolar components of the flux. In the current section, we consider such a possibility. It is known that the large-scale magnetic field data are relevant for the harmonic analysis <cit.>. Synoptic maps of the magnetic field are widely used for this purpose. Although the difference in the measurements of different instruments can lead to discrepancies that affect the energy recovered in each spherical harmonic mode <cit.>, the advantage of this data type is sufficient number of harmonics, which is limited only by the map resolution. However, the synoptic maps do not allow us to identify the parts of magnetic flux originating from the regular and irregular ARs. Unfortunately, the sunspot data (suitable for this purpose) does not give the entire spatial pattern needed for the harmonic analysis. The synoptic-style mapping on the base of sunspot data <cit.> has limitations and requires additional justification for the regular and irregular ARs. The currently available data for different MMC-type ARs imply a presence of a single signal from each of the hemispheres, which leads to the use of simplified expressions. An example of simplification is presented in <cit.>, in their theoretical study of the relationship of the hemispheric asymmetry and parity flips in the dynamo mechanism, where the global solar magnetic field was considered as the axisymmetric structure without omitting the non-axisymmetric components (see details in Appendix <ref>). 
We will refer to this assumption as the simple axisymmetric approximation further in the text. The similar method was applied by <cit.>, who used the dipolar and quadrupolar moments to improve solar cycle predictions based on the polar magnetic fields. In this approach, only axisymmetric terms (m=0) are taken into account (see Eq. <ref>). This implies that the axial dipole and quadrupole assumed to be the main contributors to the magnetic field. Meanwhile, observational studies convincingly indicate the presence of significant non-axisymmetric components in the solar magnetic field <cit.>; the surface magnetic field appears to be complex and multipolar in the cycle maxima. When simplified approach is used, the influence of non-axial and other higher harmonics seemed to be blurred between the dipolar (Eq. <ref>) and quadrupolar (Eq. <ref>) parts without ability to distinguish the contribution of each term in Eq. <ref>. Nevertheless, even some observational studies based on the synoptic maps data also did not divide the orders into axisymmetric zonal and non-axisymmetric modes and considered that all degrees m of a given order l are a whole entity <cit.>. Please note that, in terms of the dynamo theory, the separation of individual harmonics is a kind of abstraction. The dynamo process is characterized by significant non-linearity <cit.>. Dipolar and quadrupolar modes are known to be in continuous nonlinear interaction <cit.>. In addition, observations can be interpreted in different ways. For instance, <cit.> considered the observed periods in the sunspot hemispheric asymmetry as the direct manifestation of the dipolar and quadrupolar modes. The expressions given by <cit.> are similar to Eqs. <ref>, <ref>, which implies the axial symmetry of the dipolar and quadrupolar components and all related restrictions (discussed above). Another meaningful way is to present the hemispheric asymmetry as the result of superposition of the anti-symmetric and symmetric dynamo modes <cit.>. The assumption of this approach is the equality of the antisymmetric and symmetric modes, which, apparently, can be met only for critical dynamo modes <cit.> and for some time intervals during the maxima <cit.>. For the regular and irregular ARs, the approach by <cit.> cannot be applied because it requires more than a half of a century long observations of sunspot activity <cit.>. We have in hands only two cycles long observations. The approach by <cit.> has a chance to be tried. The compiled semi-difference (Eq. <ref>) and semi-sum (Eq. <ref>) parts of the flux (Section <ref>) can be used as a proxy for the dipolar and quadrupolar components, respectively. However, doing that we should keep in mind all accompanying restrictions (see Appendix <ref> for further discussion). The possibility of using such proxies for ARs of different magnetic morphology might be indirectly supported by the following signs. The temporal variations of the component of the dipolar parity (semi-difference part of the flux, Section <ref>, Fig <ref>, middle panel) show the typical cycle progress for all studied ARs, as well as for the regular and irregular groups. The correlation analysis also shows the period similar to 11 years for all types of ARs (see Section <ref>, Fig. <ref>). Thus, the dipolar-parity part might be associated with the dipolar-like component of the flux. Unfortunately, we cannot to make the similar firm inference about the quadrupole in the moment. 
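For completeness, we note that Eqs. (1)-(2) are simply the unique decomposition of the hemispheric pair into parts that are symmetric and antisymmetric under the exchange of hemispheres, and they invert as

```latex
F_N = F_{SS} + F_{SD}, \qquad F_S = F_{SS} - F_{SD},
```

so that, within the simple axisymmetric approximation adopted here, the antisymmetric part F_SD serves as the proxy for the dipolar-parity (odd) component of the flux and the symmetric part F_SS as the proxy for the quadrupolar-parity (even) component.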
§.§ Interaction between the dipolar and quadrupolar components of the global magnetic field As a next step, we used the data on the asymmetry indices (Section <ref>) to develop the ideas on the interaction between the dipolar and quadrupolar components of the global magnetic field proposed in <cit.>. In short, the global dynamo generates both the regular and irregular ARs <cit.>. Irregularities occur due to the distortion of the magnetic flux tubes of ARs during their ascent through the convection zone <cit.>. A turbulent component of the dynamo may be suggested as the reason for such distortions <cit.>. Its manifestations, expressed in an increase in the fraction of the irregular ARs, may be associated with the phase of the cycle when the toroidal field (produced by the global dynamo) weakens and/or loses its regularity. An increase in the number and fluxes of the irregular ARs in the second maximum of the cycle was found in our previous research <cit.>. A similar phenomenon is observed for the magnetic fluxes of the irregular ARs in this study (especially in the S-hemisphere, Fig. <ref>, bottom panel). The hemispheric imbalance for the irregular groups may be explained by an additional weakening of the toroidal field due to the interaction between the dipolar and quadrupolar components of the magnetic field. Simple sketches of the component relationship for even and odd SCs can be found in <cit.>. The idea <cit.> is that the global dipole changes its orientation once per 11-year cycle, whereas, in each SC, the quadrupole magnetic field lines can be oriented in two opposite directions (and switch according to the quadrupole oscillation). As a result, in each SC, in a given hemisphere, the total magnetic field is strengthened or weakened due to the superposition of the dipolar and quadrupolar components. This also applies to the toroidal field, since the quadrupole magnetic field lines are frozen into the plasma and are stretched along the equator by the differential rotation, just as the dipole field lines are. Please note that, although the global dipole reversal occurs near the solar maximum, at the photospheric level the change of the toroidal field (manifested in the sunspot data) is observed with a delay, in the oncoming cycle minimum. In this study, we use the information both about the dipolar component orientation and about the absolute asymmetry index sign to define a possible orientation of the quadrupolar component of the field. In Fig. <ref>, as in the previous figures, the top panel shows the cycle progress. The middle panel demonstrates the orientation of the dipolar component of the field for a given cycle (blue lines). The moments of the absolute asymmetry index sign reversal are adopted from Fig. <ref> (dashed vertical lines). Blue dashed lines correspond to the moments of change of the dipolar field orientation in the cycle minima. The other dashed lines (representing the cases of sign reversal corresponding to the cycle maxima) seem to be associated with a change in the orientation of the quadrupolar component and are shown in green. The bottom panel shows the results of our fitting of the quadrupolar component orientation (green lines). The fitting was carried out as follows. First, we defined the total toroidal field (its strength and orientation). The sign of the absolute asymmetry index (Fig. <ref>, second panel) indicates in which hemisphere the irregular ARs dominate in a given time interval. The total field (Fig.
<ref>, bottom panel, red arrows) is supposed to be weakened in the corresponding hemisphere. In addition, as the dipolar component exceeds the quadrupolar component in magnitude <cit.>, the total field is assumed to be co-directed with the dipolar component. The next step was to find out the direction of the quadrupolar component of the field. For this purpose, we considered both the direction of the dipolar component of the field and the strengthening/weakening of the total field. In a given hemisphere, the dipolar and quadrupolar components are co-directional in the case of amplification of the total field; otherwise, the opposite is true. According to this simple phenomenological model <cit.>, in the minima, when the direction of the dipolar component changes, the quadrupolar orientation should remain unchanged. The moments when the quadrupolar component orientation changes are not determined by the 11-year cycle. §.§ Observational features for the dipolar-like and quadrupolar-like components of the flux To compare the assumptions from the previous Subsection <ref> with the observational results, the interpretation from Subsection <ref> was used. In the simple axisymmetric approximation, the semi-difference and semi-sum parts of the flux were considered as its dipolar-like and quadrupolar-like components, respectively. In terms of this approach, the fitted orientation of the quadrupolar component of the field in the sketch (Fig. <ref>, bottom panel) appears to be in agreement with the observed temporal variations of the quadrupolar-like component of the flux (Fig. <ref>, bottom panel). Changes in the direction of the magnetic field lines in the sketch occur almost simultaneously with changes of sign in the graph. Besides, the dipolar-like component reversals occur in the cycle minima, whereas the quadrupolar-like component direction varies at other times (near the maxima), which is consistent with the model assumptions (Subsection <ref>). Please note that the quadrupolar component orientation (Fig. <ref>) was fitted only for the time intervals associated with the maxima of the studied cycles. In addition, the representation in terms of dipolar-like and quadrupolar-like components allows us to continue the discussion of Figs. <ref> and <ref> from a new perspective. As follows from Fig. <ref>, in the second maxima, the irregular ARs show an enhancement of both the dipolar-like and quadrupolar-like components of the flux (although in SC 23 the quadrupolar peak is slightly shifted towards the first maximum). For the quadrupolar-like component, the peaks are more pronounced. This is also consistent with the proposed simple phenomenological model. Our findings also find indirect support in the results of other authors obtained for the global magnetic field at the photospheric level and in the corona <cit.>. For example, in 1999, the peak pronounced both for all ARs and for the irregular ARs (Fig. <ref>) could appear due to an increase in the amplitude of the quadrupolar-like component. Indeed, at this time, the dipole and quadrupole strengths become comparable <cit.>. In 2012, we can also notice both the peaks in the quadrupolar-like part of the flux (Fig. <ref>) and the immense growth of the non-axisymmetric quadrupole component of the global field according to the results by <cit.>. Apart from that, the moments when the quadrupolar-like component peaks coincide with peaks in the higher zonal harmonics l=3,5 <cit.>.
It is also interesting that the increase of the irregular ARs in the maxima, found for both the dipolar-like and quadrupolar-like components, coincides with the moments of manifestation of the equatorial (non-axisymmetric) global dipole <cit.>. Rotational shearing may also convert the equatorial dipole into higher-order multipoles l=3, 5, 7, ... <cit.>. The relationship between the local fields of sunspots and the non-axisymmetric nature of the large-scale magnetic field might also be associated with the irregularities in the magnetic configuration of ARs. It should also be noted that the compiled quadrupolar-like component of the flux demonstrates a period of about 15 years (Fig. <ref>), which is close to the theoretical estimates of <cit.> and <cit.>. It is especially interesting that this periodicity is provided by the irregular ARs and is not detected for the regular groups. This might also indicate a special role of the irregular ARs in the dynamo process. § CONCLUDING REMARKS The magnetic flux data for 3047 ARs from the MMC ARs CrAO catalog were used to study cyclic variations from May 1996 to December 2021 (complete SCs 23 and 24). Along with the total flux, we analyzed the fluxes of subsets of ARs, depending on their MMC type (the regular and irregular groups) and location relative to the equator (in the N- and S-hemispheres). We also considered the total flux of ARs as a proxy for the subsurface toroidal flux and compiled its dipolar-like and quadrupolar-like components from the hemispheric AR data in the simple axisymmetric approximation. The aim of this work was to study the hemispheric distribution of the regular and irregular ARs and to clarify the involvement of ARs of different MMC types in the dynamo process. The features of the quadrupolar-like component of the flux expressed in the irregular ARs data were also considered. As a result, we found the following. (i) For ARs of each MMC type, in each of the hemispheres, the time profiles demonstrate a multi-peak structure. The most pronounced peaks are observed for the irregular ARs. For groups of each type, in different hemispheres, the peaks occur sometimes in phase and sometimes out of phase. The double-peak structure <cit.> is formed by ARs of both MMC types in both hemispheres. (ii) For both studied cycles, although the number of the irregular ARs is about half that of the regular groups <cit.>, the irregular AR fluxes are comparable with those of the regular groups (N-hemisphere) or exceed them (S-hemisphere, second main maximum). As mentioned in the Introduction, the increase in the total irregular AR fluxes in the second maximum is supposed to be due to the turbulent component of the dynamo <cit.>. The pronounced hemispheric flux imbalance found here supports the previous hypothesis about the weakening of the toroidal field and the appearance of ARs-`violators' in one of the hemispheres due to the interaction between the dipolar and quadrupolar components of the magnetic field <cit.>. (iii) Cyclic variations of the asymmetry indices show that the N-S asymmetry of the irregular ARs is even more pronounced than that of the regular groups. In accordance with our simple phenomenological model, the absolute asymmetry index sign reversal occurs when the mutual orientation of the dipolar and quadrupolar components of the magnetic field changes. For the dominant dipolar component of the flux, as follows from the classical magnetic cycle models <cit.>, the sign reversal falls at the minima.
In other cases, when the direction of the dipolar magnetic field lines remains unchanged, the asymmetry index sign reversal might occur due to changes in the direction of the quadrupolar component. The time profiles of the dipolar-like and quadrupolar-like components of the flux (inferred from observations in the simple axisymmetric approximation) are consistent with these assumptions. (iv) For the basic dipolar-like component of the total flux (as well as for the fluxes of ARs of different types), the auto-correlation function shows the presence of cyclic variations with a period of 12 years. The discrepancy with the expected value of 11 years <cit.> may be explained by the prolonged declining phase of SC 24. For the quadrupolar-like component, a period of about 15 years was found. This is close to the theoretical estimates in <cit.> (between 13 and 15 years) and in <cit.> (16 years). Interestingly, the 15-year period is found only for the irregular ARs, whereas for the regular groups no pronounced periodicity is revealed. (v) For all ARs and for the regular groups, a comparison of the hemispheric data for the adjacent cycles using the cross-correlation function shows a standard correlation pattern with two side peaks (slightly inferior to the central peak in height). For the irregular ARs, this pattern is violated. For the groups of this MMC type, the high right side peak implies a strong correlation between the N-hemisphere (SC 23) and the S-hemisphere (SC 24). Since the mixed-parity solutions of dynamo models predict a special symmetry <cit.>, this is consistent with the theory. However, the low left side peak shows a weak conformity between the S-hemisphere (SC 23) and the N-hemisphere (SC 24). Thus, for the adjacent cycles, during the transition of activity from the N- to the S-hemisphere (and for the opposite case), we observe a strong (weak) correlation. In summary, the overall pattern for the regular and irregular ARs in the cycle allows us to attribute their origin to the global dynamo action <cit.>. The well-known phase lag between the hemispheric activity <cit.> was found for ARs of both MMC types. However, for the irregular ARs, the N-S asymmetry shows distinctive features. Their increased activity, as well as the considerable hemispheric imbalance, occurs in the second main maximum of the cycle, which is especially important. The point is that the dynamo process involves two stages. The first part of the cycle implies the transformation of the poloidal component of the global magnetic field into the toroidal one (Ω-effect), and the differential rotation is generally accepted as the trigger of this process. During the second part of the cycle, the poloidal component of the field is restored from the toroidal one, and the features of this process (α-effect) are still being discussed. It might be assumed that the irregular ARs (pronounced in the second maximum) contribute to the α-quenching, which could be different in different hemispheres. Our findings also confirm the crucial role of anti-Hale and non-Joy ARs for the polar field reversal, as discussed, e.g., in <cit.>. The irregular groups can modify the polar cap flux asymmetry and affect the amplitude of the ongoing cycle, which is essential for prediction models <cit.>. Thus, despite their denomination (irregular, rogue, anomalous, etc.), ARs-`violators' are an integral part of the mechanism of solar activity and have a special functionality in closing the dynamo loop.
Apart from that, the irregular ARs are the source of strong flares and geoeffective events <cit.>, which strongly impact the solar-terrestrial system <cit.>. Please note that, on the one hand, the irregular ARs show significant fluxes comparable to those of the regular groups. On the other hand, present-day dynamo models are focused on the regular groups only. Thus, it would be interesting to include the irregular ARs in model design to complement our understanding of the solar cycle. The irregular ARs also demonstrate a number of features in the quadrupolar-like component of the flux (compiled in the simplest axisymmetric approximation). Pronounced peaks in the maxima and evidence of oscillations are found for them. The specific symmetry pattern (similarity of activity in different hemispheres in adjacent cycles) is also revealed for ARs of this MMC type. Although we could mention the possibility of a mixed-parity dynamo solution <cit.>, the simplicity of the approximation used restricts the degree of our certainty. The same applies to the possible relationship between the detected properties of the AR profiles and other observational features, such as evidence of the equatorial dipole <cit.> and the close amplitudes of the dipolar and quadrupolar components of the global field in certain temporal intervals in the maxima <cit.>. Nevertheless, we emphasize that all the peculiar observational effects were found specifically for the irregular ARs. In addition, a superposition of the dipolar and quadrupolar modes with approximate equality of the components may lead to grand minima, although other reasons are also possible <cit.>. As <cit.> showed, the solar cycle might have resided in quadrupolar parity states in the past, which provides a possible pathway for predicting parity flips in the future. It is quite possible that, with the dominance of the quadrupolar component, all ARs will appear in only one hemisphere <cit.>, and they will mostly be irregular. We thus speculate that a significant increase in the fraction of the irregular ARs during future cycles may warn us about critical changes in the level of solar activity. In any case, all aspects of interpretation are the subject of theoretical research, which is beyond the scope of this article. We present here only observational results about the magnetic fluxes of ARs with different magnetic morphology. § ACKNOWLEDGEMENTS The author is grateful to the anonymous reviewer for his/her meaningful comments that made it possible to improve the article. The author is thankful to V.I.Abramenko for her valuable remarks and to R.A.Suleymanova for the data on SC 23 in the MMC ARs CrAO catalog. The author would also like to thank V.N.Obridko, D.D.Sokoloff, L.L.Kitchatinov, M.S.Butuzova, S.A.Korotin, K.N.Grankin, and S.M.Andrievsky for fruitful discussions. The study was financially supported by the Russian Ministry of Science and Higher Education, agreement №122022400224-7. § DATA AVAILABILITY The MMC AR CrAO catalog is available at the CrAO web site (https://sun.crao.ru/databases/catalog-mmc-ars). Additional comments can be received from the author on request. § DEFINING THE DIPOLAR AND QUADRUPOLAR COMPONENTS OF THE MAGNETIC FIELD FROM OBSERVATIONS In general, the radial magnetic field on the solar surface B(θ,φ) can be represented in terms of spherical harmonics Y_l^m(θ,φ) as B(θ,φ, t) = ∑_l=0^∞∑_m=-l^l B_l^m(t)Y_l^m(θ,φ), where θ and φ are the polar (co-latitudinal) and azimuthal (longitudinal) coordinates, respectively <cit.>.
Time-dependent complex coefficients B_l^m(t) can be found by projecting the field onto the spherical harmonics, using their orthogonality over the surface of a sphere, ∫Y_l'^m'*(θ,φ)Y_l^m(θ,φ)dΩ = δ_ll'δ_mm', which gives B_l^m(t) = ∫B(θ,φ,t)Y_l^m*(θ,φ)dΩ. The spherical harmonics Y_l^m(θ,φ), which represent the angular portion of the solution to Laplace's equation in spherical coordinates, can be expressed as Y_l^m(θ,φ) = C_l^m P_l^m(cosθ)e^imφ, where P_l^m(cosθ) are the associated Legendre polynomials of degree l and order m. The coefficients C_l^m are defined as C_l^m = (-1)^m[(2l+1)/(4π)·(l-m)!/(l+m)!]^1/2. In practice, the number of terms in Eq. <ref> is finite and depends on the type of the observational data. Synoptic maps are widely used as the basis for calculations, and the truncation limit l_max depends on the map characteristics <cit.>. Note that the coefficients B_l^m(t) are complex, and the amplitudes of the spherical harmonic modes appear in the real part (for m>0) and in the imaginary part (for m<0) <cit.>. Thus, one can take into account the symmetry between spherical harmonics with m and -m (for a given value of l) and start the sum at m=0 (instead of m=-l). Therefore, Eq. <ref> takes the form B(θ,φ, t) = ∑_l=0^l_max∑_m=0^l B_l^m(t)Y_l^m(θ,φ). Keeping only the dipolar and quadrupolar components of the field (the lower-order terms, l=1 and l=2, respectively) and rewriting the expression in more detail (to examine its structure), one obtains from Eq. <ref> the following expression: B(θ,φ,t) = [√(3/4π)B_1^0(t)cosθ - √(3/8π)B_1^1(t)√(1-cos^2θ)e^iφ]_first-order terms + [√(5/4π)B_2^0(t)·1/2(3cos^2θ-1) - √(5/24π)B_2^1(t)·3√(1-cos^2θ)·cosθ·e^iφ + √(5/96π)B_2^2(t)·3(1-cos^2θ)·e^2iφ]_second-order terms. In Eq. <ref>, the part for each order consists of both axisymmetric (m=0) and non-axisymmetric (m>0) terms. In the first-order part (l=1), the first term represents the axial global dipole, whose axis approximately coincides with the rotation axis of the Sun, while the second term can be associated with a horizontal (equatorial) dipole <cit.>. The second-order part contains the quadrupolar (l=2) terms. The first (axisymmetric, m=0) term is responsible for a zonal harmonic, while the non-axisymmetric terms, m=l=2 and m ≠ l, are related to the sectorial and tesseral structures, respectively <cit.>. In addition, the terms of both orders in Eq. <ref> can be classified as antisymmetric (odd l+m) and symmetric (even l+m) with respect to the equator. In other nomenclatures, these families of harmonic modes are also referred to as `primary-family' (`dipolar') and `secondary-family' (`quadrupolar') modes, respectively. It is important that the dipolar/quadrupolar nomenclature (used in the sense of equatorial symmetry) can lead to confusion: the equatorial dipole (l=1, m=1) can formally be assigned to the `quadrupolar' family <cit.>. Please note that the axisymmetric terms (m=0) in Eq. <ref> do not depend on the azimuthal angle and have a simple form that facilitates further transformations. As shown in <cit.>, in the case of axial symmetry (assuming the axial dipolar and quadrupolar moments to be the main determinants of the field), one obtains at a particular latitude, for the field in the N-hemisphere, B_N = C_1·DM·cosθ + C_2·QM·1/2(3cos^2θ-1), and for the field in the S-hemisphere, B_S = -C_1·DM·cosθ + C_2·QM·1/2(3cos^2θ-1), where the coefficients B_1^0 and B_2^0 are expressed through the dipolar (DM) and quadrupolar (QM) moments, respectively. The coefficients C_1 and C_2 are defined as C_1≡ C_1^0=√(3/4π); C_2≡ C_2^0=√(5/4π).
In addition, in Eqs. <ref>, <ref>, different signs are assigned to the antisymmetric component of the field in the different hemispheres, whereas the symmetric component has the same sign on both sides of the equator. With respect to the dipolar and quadrupolar moments, the combination of Eqs. <ref> and <ref> is a simple linear system. It provides the following solutions: DM = (B_N-B_S)/(2C_1cosθ), and QM = (B_N+B_S)/(C_2(3cos^2θ-1)). Thus, in the axisymmetric approximation, the dipolar and quadrupolar components of the field turn out to be proportional, respectively, to the difference and the sum of the hemispheric signals. <cit.> used Eqs. <ref> and <ref> to define a parity function in terms of the dipolar and quadrupolar moments, in order to reveal the relationship between the solar parity reversal and the hemispheric asymmetry. A similar approach to the calculation of the dipolar and quadrupolar moments was used by <cit.> when improving solar cycle predictions based on the northern and southern polar magnetic field data. Such a method proved to be useful when working with observational data limited to a single signal from each of the hemispheres at any given temporal interval <cit.>.
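For illustration only (this numerical sketch is ours and not part of the article; the variable names and test values are assumptions), the relations above can be evaluated as follows, with a round-trip check that the moments are recovered from the two hemispheric signals:

import numpy as np

C1 = np.sqrt(3.0 / (4.0 * np.pi))   # normalisation of the axial dipole term
C2 = np.sqrt(5.0 / (4.0 * np.pi))   # normalisation of the axial quadrupole term

def moments_from_hemispheres(B_N, B_S, theta):
    """Axial dipolar (DM) and quadrupolar (QM) moments from single hemispheric signals
    taken at colatitude theta (radians), in the axisymmetric approximation.
    Avoid colatitudes where cos(theta) = 0 or 3*cos(theta)**2 - 1 = 0."""
    c = np.cos(theta)
    DM = (B_N - B_S) / (2.0 * C1 * c)
    QM = (B_N + B_S) / (C2 * (3.0 * c**2 - 1.0))
    return DM, QM

# Round-trip check with arbitrary test values DM0 = 1.0, QM0 = 0.3 at 30 deg colatitude:
DM0, QM0, theta = 1.0, 0.3, np.deg2rad(30.0)
B_N = C1 * DM0 * np.cos(theta) + C2 * QM0 * 0.5 * (3.0 * np.cos(theta)**2 - 1.0)
B_S = -C1 * DM0 * np.cos(theta) + C2 * QM0 * 0.5 * (3.0 * np.cos(theta)**2 - 1.0)
print(moments_from_hemispheres(B_N, B_S, theta))   # should recover DM0 = 1.0 and QM0 = 0.3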
http://arxiv.org/abs/2406.18293v1
20240626122354
Combining Automated Optimisation of Hyperparameters and Reward Shape
[ "Julian Dierkes", "Emma Cramer", "Holger H. Hoos", "Sebastian Trimpe" ]
cs.LG
[ "cs.LG", "cs.AI" ]
§ ABSTRACT There has been significant progress in deep reinforcement learning (RL) in recent years. Nevertheless, finding suitable hyperparameter configurations and reward functions remains challenging even for experts, and performance heavily relies on these design choices. Also, most RL research is conducted on known benchmarks where knowledge about these choices already exists. However, novel practical applications often pose complex tasks for which no prior knowledge about good hyperparameters and reward functions is available, thus necessitating their derivation from scratch. Prior work has examined automatically tuning either hyperparameters or reward functions individually. We demonstrate empirically that an RL algorithm's hyperparameter configurations and reward function are often mutually dependent, meaning neither can be fully optimised without appropriate values for the other. We then propose a methodology for the combined optimisation of hyperparameters and the reward function. Furthermore, we include a variance penalty as an optimisation objective to improve the stability of learned policies. We conducted extensive experiments using Proximal Policy Optimisation and Soft Actor-Critic on four environments. Our results show that combined optimisation significantly improves over baseline performance in half of the environments and achieves competitive performance in the others, with only a minor increase in computational costs. This suggests that combined optimisation should be best practice. § INTRODUCTION Deep reinforcement learning (RL) has successfully been applied to various domains, including <cit.>. Despite successes in these and other challenging applications, configuring RL algorithms remains difficult. This is due to the algorithms typically having several hyperparameter and reward configurations, critically determining learning speed and the general outcome of the training process. For each task, there usually is a final objective one wants to achieve.
Defining the RL rewards in terms of this objective is typically insufficient; instead, augmenting the reward with additional intermediate rewards, sub-goals, and constraints is necessary for effective training. This augmentation of a reward signal is referred to as reward shaping, and performance and learning speed can crucially depend on it <cit.>. Next, RL algorithms require the optimisation of hyperparameters, such as the learning rate or discount factor. Effective hyperparameter tuning requires an effective reward signal, and effective reward shaping depends on good hyperparameter configurations. This circular dependency becomes particularly relevant when applying RL to novel environments beyond commonly used benchmarks, for which neither effective reward shapes nor good hyperparameter settings are available. In the area of Automatic RL (AutoRL) <cit.>, different data-driven approaches have been developed in recent years to automatically approach hyperparameter optimisation <cit.> and reward shaping <cit.>. However, these methods approach each problem individually without considering their interdependency. Therefore, they require the availability of high-performing configurations of the non-optimised component. To the best of our knowledge, ours is the first study to thoroughly investigate the effectiveness and broader applicability of jointly optimising hyperparameters and reward shape, using multiple, diverse environments and systematically evaluating the benefit thus obtained. We examine the combined optimisation of hyperparameters and reward shape using two state-of-the-art RL algorithms: Proximal Policy Optimisation (PPO) <cit.> and Soft Actor-Critic (SAC) <cit.>. We performed experiments on Gymnasium LunarLander <cit.>, Google Brax Ant and Humanoid <cit.>, and Robosuite Wipe <cit.>. The Wipe environment is a robot task representing contact-rich interactions inspired by modern production tasks, which has not been well studied in the literature yet. We compare the combined optimisation results against baselines from the literature and, in particular, against the individual optimisation of only the hyperparameters or only the reward shape. We employ the state-of-the-art black-box hyperparameter optimisation algorithm DEHB <cit.> for our experiments, which was recently shown to outperform other optimisation methods in RL <cit.>. Our key contributions can be summarised as follows: * We illustrate the advantage of joint optimisation by showing complex dependencies between hyperparameters and reward signals in the LunarLander environment. We use an existing hyperparameter optimisation framework and extend it with additional hyperparameters that control reward shaping. We show that combined optimisation can match the performance of individual optimisation with the same compute budget despite the larger search space; furthermore, we show that it can yield significant improvement in challenging environments, such as Humanoid and Wipe. * We demonstrate that including a variance penalty for multi-objective optimisation can obtain hyperparameter settings and reward shapes that substantially reduce the performance variance of a trained policy while achieving similar or better expected performance. § BACKGROUND We begin with some background on RL, define the optimisation of hyperparameters and reward shape, and present the selected algorithm applicable to these optimisation problems.
§.§ Reinforcement Learning and Reward-Shaping In RL, an agent learns to optimise a task objective through interaction with an environment <cit.>. The environment is represented as a discounted Markov Decision Process (MDP) ℳ := (𝒮, 𝒜, p, r, ρ_0, γ), with state space 𝒮, action space 𝒜, an unknown transition probability distribution p: 𝒮×𝒜×𝒮→ℝ, reward function r: 𝒮×𝒜→ℝ, distribution of the initial state ρ: 𝒮→ℝ, and discount rate γ∈ (0, 1). A policy π: 𝒮×𝒜→ℝ selects an action with a certain probability for a given state. The agent interacts with the MDP to collect episodes τ = (s_0, a_0, r_1, s_1, …, s_T), i.e., sequences of states, actions, and rewards over time steps t=0, …, T. In applications, RL algorithms are very sensitive to the rewards of a given MDP when inferring policies that achieve the desired objective. To ease the policy search, reward shaping is the practice of designing a reward function r̃^α, w := α· (r + f^w) based on the original reward r of ℳ, where the reward shaping function f^w: 𝒮×𝒜→ℝ denotes the change in reward <cit.>, parameterised by reward weights w ∈ℝ^n and scaled by α∈ℝ^+. The shaped reward essentially yields the modified MDP ℳ_α, w := (𝒮, 𝒜, p, r̃^α, w, ρ_0, γ). The function f^w is commonly designed by identifying key terms or events that should be rewarded or penalised and combining these as a weighted sum. To obtain policies, there are now two hierarchical objectives for measuring performance. The outer task objective measures success in terms of the overall goal one wants to solve, and the inner objective in terms of maximising the collected shaped rewards when interacting with ℳ_α, w. We formalise the overall task objective as o_goal, measuring the success of a task in the trajectory τ by assigning it a score o_goal(τ) ∈ℝ. Examples of such goals are achieving a certain objective or minimising the time to perform a task. In addition to this outer task objective, we have the inner objective of maximising the expected return of the shaped rewards given by J(π) = 𝔼_τ∼π [ ∑_t=1^T γ^t ·r̃_t ]. The common approach of RL is to maximise performance with regard to the task's objective o_goal by finding the policy π that maximises the expected return J(π). This typically involves tuning the parameters α and w of the shaped reward to obtain reward signals that facilitate finding policies in RL training that perform well with regard to o_goal. The task objective o_goal is not used during RL training and only measures success for a full trajectory τ. This allows success to be measured much more sparsely than with the shaped reward. Such sparse task objectives are commonly straightforward to define. §.§ Combined Hyperparameter and Reward Shaping Optimisation [Figure <ref>: Illustration of the two-level optimisation process. Outer loop: hyper- and reward parameter optimisation; inner loop: RL training. In each iteration, the parameter optimiser chooses parameters and receives their performance measured by 𝒪_goal(π).] In practical RL applications, both the algorithm's hyperparameters and the environment's reward shape require tuning. For an environment ℳ_α, w with task objective o_goal, we can approach the refinement of hyper- and reward parameters as a two-level optimisation process. In the outer loop, an optimisation algorithm selects hyper- and reward parameters for the algorithm and environment. In the inner loop, these parameters are used for RL training, yielding a policy π.
This policy is then assessed against a task performance metric 𝒪_goal based on o_goal, and its score is returned to the optimisation algorithm to determine the next parameter configuration. To evaluate a policy's performance with regard to o_goal, different metrics can be used. The single-objective performance metric 𝒪^so_goal(π):= 𝔼_τ∼π[o_goal(τ)] is exclusively concerned with optimising the average task score. The multi-objective metric 𝒪^mo_goal(π):= 𝔼_τ∼π[o_goal(τ)] - σ_τ∼π[o_goal(τ)] includes an additional variance-based penalty, as described by <cit.>, preferring policies with low-performance variance and therefore consistent outcomes. Figure <ref> illustrates the two-level optimisation process. To formally introduce the optimisation problems, we adopt the definition of algorithm configuration by <cit.> and adapt it to our RL context. Consequently, our focus is on optimising the hyperparameters of the RL algorithm, represented by θ, as well as the reward shaping, represented by α and w. Consider an environment ℳ_α, w := (𝒮, 𝒜, p, r̃^α, w, ρ_0, γ) with reward parameters consisting of reward scaling α∈ A and reward weights w ∈ W. Further, given an RL algorithm A_θ(ℳ_α, w) parametrised by hyperparameters θ∈Θ. This algorithm interacts with the environment ℳ_α, w and returns a policy π. For performance metric 𝒪_goal(π), we define the following optimisation problems: Hyperparameter optimisation: For fixed reward parameters α̂ and ŵ, find θ^* ∈Θ, s.t. θ^* ∈θ∈Θarg max 𝒪_goal(A_θ(ℳ_α̂,ŵ)). Reward parameter optimisation: For fixed hyperparameters θ̂, find (α^*, w^*) ∈ A × W, s.t. (α^*, w^*) ∈(α, w) ∈ A × Warg max 𝒪_goal(A_θ̂(ℳ_α, w)). Combined optimisation: Find (θ^*, w^*, α^*) ∈Θ× W × A, s.t. (θ^*, w^*, α^*) ∈(θ, α, w) ∈Θ× A × Warg max 𝒪_goal(A_θ(ℳ_α, w)). §.§ DEHB Among the many optimisation methods for RL, DEHB has recently demonstrated superior performance <cit.> and can be utilised for all three optimisation problems introduced in Section <ref>. DEHB is a black-box, multi-fidelity hyperparameter optimisation method combining differential evolution <cit.> and HyperBand <cit.>. Its multi-fidelity approach involves running numerous parameter configurations with a limited budget (e.g., a fraction of training steps) and advancing promising configurations to the next higher budget. This strategy allows for efficient exploration of the parameter space by testing a large number of configurations while avoiding wasteful evaluations on suboptimal configurations. The best-performing parameter configuration observed during optimisation is called the incumbent configuration. § RELATED WORK Our work aims to optimise RL algorithms by focusing jointly on hyperparameters and reward shapes to consistently obtain policies with high performance. The critical importance of hyperparameter tuning in deep RL is well-established <cit.>. Similarly, reward shaping is recognised as important for fast and stable training <cit.>. The development of stable and reliable policies has been explored in risk-averse, multi-objective RL <cit.>, employing a straightforward variance-based performance penalty among many possible methods. For black-box hyperparameter and reward shape optimisation, several methods have already been developed in the framework of AutoRL <cit.>, a data-driven approach for systematically optimising RL algorithms through automated machine learning. However, these methods only target either the hyperparameter or reward-shape optimisation problem. 
Black-box methods optimising hyperparameters comprise population-based <cit.> and multi-fidelity methods <cit.>. A recent study <cit.> highlights the effectiveness of the multi-fidelity DEHB procedure for RL. Black-box methods for optimising reward shapes have been studied using evolutionary methods <cit.>. None of the mentioned works for AutoRL consider joint optimisation of hyperparameters and reward parameters. The differences in performance achieved by separate hyperparameter optimisation and reward weight optimisations have been discussed by <cit.>, showing that reward parameters alone can improve performance and search efficiency compared to hyperparameter tuning. To the best of our knowledge, no comprehensive investigation has been conducted into whether combined reward and hyperparameter optimisation is generally feasible, nor has its potential been examined in depth. Moving beyond this, <cit.> provided initial evidence that joint optimisation of hyperparameters and reward shape can outperform standard hyperparameter optimisation with manual reward shaping. However, their findings were limited to a single environment and presented as a custom solution focused solely on solving that specific task. § SETUP OF EXPERIMENTS In this section, we describe the setup of our experiments, the results of which will be discussed in Section <ref>. The experiments detailed in Section <ref> aim to examine the relationship between specific hyperparameters and reward weights to better understand their interdependencies and the necessity of joint optimisation. Subsequently, the experiments in Section <ref> empirically investigate the performance of joint optimisation compared to individual optimisation to analyse differences in performance and cost. We trained PPO and SAC agents in four environments, each with a specific task objective: in Gymnasium's continuous LunarLander <cit.>, a probe aims to minimise landing time; in Google Brax Ant and Humanoid <cit.>, a walking robot aims to maximise travel distance; and in Robosuite Wipe <cit.>, a simulated robot arm seeks to maximise the amount of dirt wiped from a table. All environments were chosen for non-trivial reward structures and for posing difficult hyperparameter optimisation problems. Specifically, Humanoid is notoriously difficult to solve, and the Wipe environment has a large reward parameter space that needs to be optimised. The Wipe environment represents a task that has not yet been extensively studied in the literature and is closely related to real-world applications. This allows us to test the applicability of our combined optimisation approach to environments that are less well-established in the field, yet of high practical interest. More information about the environments and their reward structure can be found in Appendix <ref>. For training with LunarLander and Wipe, we employed the stable-baselines Jax PPO and SAC implementations <cit.>, while for the Google Brax environments, we utilised the Google Brax GPU implementations. Implementation details can be found in the supplementary code repository <https://github.com/ADA-research/combined_hpo_and_reward_shaping>.
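As an illustration of the reward structures just described, the following schematic Python sketch (ours, not the authors' implementation; the component names and numerical values are invented placeholders) shows the general form of the shaped reward r̃^α,w = α·(r + f^w) as a scaled, weighted sum of per-step terms:

from typing import Dict

def shaped_reward(base_reward: float,
                  components: Dict[str, float],
                  weights: Dict[str, float],
                  alpha: float = 1.0) -> float:
    """Combine the original (sparse) reward with weighted per-step shaping terms."""
    f_w = sum(weights[name] * value for name, value in components.items())
    return alpha * (base_reward + f_w)

# Example with made-up LunarLander-style shaping terms (signs folded into the values):
r_tilde = shaped_reward(
    base_reward=0.0,                                         # sparse reward, 0 mid-episode
    components={"distance": 0.4, "velocity": -0.2, "tilting": -0.05},
    weights={"distance": 1.0, "velocity": 1.0, "tilting": 1.0},
)
print(r_tilde)

In the joint optimisation, the entries of the weights dictionary (and, where applicable, alpha) are exactly the reward parameters exposed to the outer optimisation loop.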
§.§ Interdependency of Hyperparameters and Reward Parameters We conducted an exhaustive landscape analysis for PPO training on LunarLander, exploring pairwise combinations of a hyper- and reward parameter to better understand their interdependencies and substantiate the intuition that both components should be optimised jointly. The parameters not considered in each pair were fixed to their baseline value. A resolution of 100 values per parameter was applied, and the training performance of each pair was measured by the single-objective task performance, averaged over 10 seeds. In terms of hyperparameters, we considered the discount factor γ, generalised advantage estimation λ, and learning rate η, and in terms of reward parameters, the tilting, distance, and velocity weights. A logarithmic grid was applied to the discount factor and learning rate, with points positioned at equidistant logarithms. A uniform grid of equidistant points was applied to all other hyper- and reward parameters. Both choices were also used in our later optimisation experiments. §.§ Optimisation of Hyperparameters and Reward Parameters We conducted optimisation experiments to empirically compare the performance of joint optimisation with individual optimisation of hyperparameters and reward parameters; our goal was to understand the practicality of joint optimisation in finding well-performing hyperparameters and reward parameters without requiring any manual tuning. We used the black-box algorithm DEHB for the three optimisation problems introduced in Section <ref>. The hyperparameter search spaces for PPO and SAC consist of four and seven parameters, respectively, that are commonly optimised and known to impact performance significantly. In particular, learning rate and discount factor were optimised for PPO and SAC. The hyperparameters not included in the search space were fixed at the baseline values of each training. For the reward function, we adjusted the weight parameters of each environment's reward shape. LunarLander has four reward parameters, Ant and Humanoid three, and Wipe seven. The hyperparameters not optimised in reward-weight-only optimisation were set to the algorithm's training baseline values for the respective environment. The reward parameters not optimised in the case of hyperparameter-only optimisation were set to the default values of the respective environments. In the combined optimisation approaches, all hyperparameters and reward parameters in the search space were optimised from scratch. The search spaces and baseline values for hyperparameters are detailed in Appendix <ref>, while the search spaces and default values for reward parameters are provided in Appendix <ref>. DEHB has been demonstrated to outperform random search for hyperparameter optimisation <cit.>. To analyse its effect on the optimisation of reward parameters, we used a random search approach for the combined optimisation task, where the hyperparameters are optimised with DEHB, but the reward parameters are chosen randomly in each optimisation step. In our setup, we set the fidelity of DEHB to equal the number of RL training steps. DEHB evaluates parameter configurations during the optimisation using three training step budgets, each increasing by a factor of three, with the largest matching the baseline's training steps. The fitness of each configuration is determined by the average performance metric after training with the designated steps over three random seeds.
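The following minimal Python sketch (ours, not the authors' code; all function names, signatures, and values are assumptions, and the exact DEHB interface is not reproduced) illustrates the kind of outer-loop objective such a multi-fidelity optimiser would call: train with a given hyperparameter and reward-weight configuration for the budgeted number of steps, and return the task metric averaged over three seeds, with the rollout-level variance penalty applied when the multi-objective metric is used:

import numpy as np

rng = np.random.default_rng(0)

# --- stand-ins for real RL training and task evaluation (placeholders only) ---
def train_rl_agent(hyperparams, reward_weights, total_steps, seed):
    """Placeholder: would run PPO/SAC on the shaped environment and return a policy."""
    return {"hyperparams": hyperparams, "reward_weights": reward_weights,
            "total_steps": total_steps, "seed": seed}

def rollout_task_score(policy):
    """Placeholder: would run one evaluation episode and return o_goal(tau)."""
    return rng.normal(loc=policy["total_steps"] * 1e-5, scale=1.0)
# -------------------------------------------------------------------------------

def task_metric(policy, n_episodes=20, variance_penalty=False):
    """Single-objective metric E[o_goal]; the multi-objective variant subtracts sigma[o_goal]."""
    scores = np.array([rollout_task_score(policy) for _ in range(n_episodes)])
    return scores.mean() - (scores.std() if variance_penalty else 0.0)

def outer_objective(config, budget_steps, n_seeds=3, variance_penalty=False):
    """Fitness of one configuration: task metric averaged over a few training seeds."""
    per_seed = [task_metric(train_rl_agent(config["hyperparams"], config["reward_weights"],
                                           total_steps=budget_steps, seed=s),
                            variance_penalty=variance_penalty)
                for s in range(n_seeds)]
    return float(np.mean(per_seed))

example_config = {"hyperparams": {"learning_rate": 3e-4, "discount": 0.99},
                  "reward_weights": {"distance": 1.0, "velocity": 0.5}}
print(outer_objective(example_config, budget_steps=100_000))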
We performed experiments using the single- and multi-objective task performance metrics introduced in Section <ref>. Each optimisation experiment was conducted with five random seeds, and each resulting final incumbent configuration was evaluated by training using ten additional random seeds and evaluating performance on the corresponding task objective. SAC is particularly sensitive to the scaling of the reward signal, since it influences the agent's exploration behaviour <cit.>. The reward scale α has only been optimised as part of the SAC baseline for Humanoid and Ant. Thus, we separately optimised the reward scale α only for Humanoid and Ant and kept the reward scale fixed to α = 1 for the other environments. Details on how reward scaling was performed can be found in Appendix <ref>. The overall optimisation budget for DEHB with PPO and SAC equals 133 and 80 full training step budgets, respectively. Due to its computational demands, for the Wipe environment, we only considered SAC. The wall-clock times for the PPO and SAC optimisations are about 4 and 60 for LunarLander, 12 and 15 for Ant, 36 and 60 for Humanoid, and 120 for Wipe. An overview of our execution environment and the overall computational cost can be found in Appendix <ref>. § EMPIRICAL RESULTS We now present the results from our experiments. First, we show the complex interdependencies between hyperparameters and reward weights. Second, we demonstrate that joint optimisation can match or outperform individual optimisation and produce policies with substantially lower variance. §.§ Interdependency between Hyperparameters and Reward Parameters [Figure <ref>: Landscapes depicting the average return on LunarLander for pairwise hyperparameters and reward weights over ten PPO trainings. Lower values (lighter) correspond to faster landing (better performance). Yellow lines mark each parameter's default value. Blue lines denote the best-performing reward weights for each hyperparameter value. The black dots mark the incumbent configurations found in the joint optimisation experiments in Section <ref>.] The parameter landscapes for LunarLander with PPO are shown in Figure <ref>. We observe an interdependency of varying strength between the hyperparameters and reward parameters. In all cases, the behaviour of specific reward parameters changes with different hyperparameter values. The ranges for reward parameters that lead to successful training vary depending on the hyperparameter and exhibit sharp boundaries in some cases. In particular, we observed ranges of reward parameters where performance deteriorates across all possible hyperparameter values. Regarding the relation between hyperparameters and best-performing reward parameters (indicated by the blue lines in the plot), we observed a strong dependency for the distance weight and weaker dependencies for the velocity weights. In particular, we see a non-convex region of successful training parameters for the distance weight. Furthermore, we see large variations of the best-performing distance weight across its optimisation space. The tilting weight shows almost no dependency on the hyperparameters. Finally, our landscapes suggest that the optimal value of the tilting weight is mostly near zero, indicating that it is largely irrelevant to RL training.
In addition, we observed that the incumbent configurations in the joint optimisation experiments for LunarLander, presented in Section <ref> (indicated by the black dots in the plot), are often not fully located in high-performing regions. We believe this is due to the larger search space during optimisation, which introduces additional dependencies on other parameters that impact performance in these regions. This further highlights the interdependencies of the parameters within the context of the full search space. In Appendix <ref>, we report landscapes showing the optimal hyperparameters with respect to the reward parameters, showing similar dependencies. Overall, our results indicate that hyperparameters and reward parameters are interdependent and that finding high-performing hyperparameters necessitates well-chosen reward parameters and vice versa. This confirms the intuition this work is based on: optimising the hyperparameters and reward shape should not be considered independently but instead approached jointly. §.§ Joint Optimisation Performance Table <ref> reports the results of our optimisation experiments. Performance is shown in terms of single-objective task performance and the coefficient of variation (in percent). As outlined in Section <ref>, each experiment consists of five optimisation runs, with the incumbent parameter configuration of each run evaluated through ten RL training runs. The performance results in Table <ref> are derived by calculating the median performance for each optimisation run across its ten evaluations and then computing the median of these five values for each experiment. The median coefficients of variation are calculated analogously. We chose the median over the mean to present our results, as it is more robust to outliers. To gain further insights into the statistical differences between the optimisation experiments, we employed linear mixed-effects model analysis <cit.>. For each combination of environment and algorithm, we performed pairwise comparisons of the aggregated 50 evaluation runs of the best-performing experiment with those of the related optimisation experiments. The mixed-effects model allows us to test for statistically significant differences in the results of two optimisation experiments, using all available data, while correctly handling the dependencies between optimisation runs. We show the best performance and all results that show no statistically significant differences (at significance level 0.05) to it in boldface. Details on how the test was conducted and its assumptions can be found in Appendix <ref>. Boxplots of the median performance from the five optimisation runs for each experiment as well as boxplots of the 50 aggregated evaluations across all optimisation and training runs are presented in Appendix <ref>. Our results show that simultaneously optimising hyperparameters and reward parameters consistently matches or outperforms individual optimisation, without depending on tuned baseline parameters for non-optimised components. The only outlier is the single-objective PPO Ant optimisation. Significant performance gains are observed in the complex Humanoid and Wipe environments, while the simpler Ant and LunarLander environments, which are mostly solved using baseline parameter settings, generally show no additional improvements from joint optimisation. 
However, even if performance only matches the baseline parameters, joint optimisation still offers the advantage of not requiring hand-tuning, while addressing the mutual dependencies of hyperparameter and reward parameters. In LunarLander, Ant, and Humanoid, the optimised incumbent of our joint optimisation on the respective default reward function generally achieves performance considered to solve the environment. For Robosuite Wipe, the average objective score comes close to the maximum of 142. Especially in our experiments with LunarLander, Humanoid and Wipe, the policy improvements could be seen not only in the improved average objective score, but also in qualitative improvements in the agents' behaviour (representative videos can be found in the supplementary material). Therefore, combined optimisation shows competitive performance for already well-studied environments as well as the less-studied Wipe environment. We do not observe a clear pattern in Table <ref> that indicates whether optimising solely hyperparameters or reward parameters consistently outperforms the other. This underscores the necessity of joint optimisation to automatically determine which component requires more optimisation, especially for novel environments, where prior knowledge about the dynamics is lacking. Unsurprisingly, DEHB outperforms random search in almost all our experiments. We report the best hyperparameter and reward parameter values for each environment and algorithm in Appendix <ref>. Appendix <ref> provides the results of policies obtained for each configuration when evaluated using the default shaped reward function of the respective environment. In Figure <ref>, we show median incumbent performance during our SAC experiments at each optimisation time step. The speed of the combined optimisation is comparable to that of the individual optimisation approaches, despite involving much larger search spaces. In all environments except multi-objective Wipe, combined optimisation already matches the performance of the best-performing individual optimisation after roughly a third of the total optimisation steps and continues to improve. For multi-objective Wipe, this is achieved after two-thirds of the total optimisation steps. Similar trends are observed for the PPO results, shown in Appendix <ref>. This indicates that combined optimisation, despite the larger search space, requires minimal additional computational effort in terms of optimisation time. §.§ Single- vs Multi-objective Optimisation From Table <ref>, we conclude that multi-objective optimisation can improve policy stability by including a penalty for large standard deviation in performance. These improvements come with only marginal performance loss and sometimes even achieve slight gains; this is the case for Ant, Humanoid and LunarLander, where, in particular, for Humanoid and PPO LunarLander, improved performance is achieved. Only for Wipe, we observed that stability is not further improved compared to the single-objective training. RL is notoriously sensitive to hyperparameter settings. Therefore, optimising hyperparameters using a variance penalty for newly developed algorithms or novel scenarios can lead to increased stability and thus greatly facilitate research and applications. § CONCLUSIONS AND FUTURE WORK In this work, we have demonstrated the importance of jointly optimising hyperparameters and reward parameters. 
We illustrated dependencies in a simple environment, highlighting the circular dependency encountered in optimising hyperparameters and reward parameters and underscoring the need for simultaneous optimisation. Our empirical results indicate that this joint optimisation is feasible and can match or surpass the performance of individual optimisation approaches without requiring separate parameter tuning for the non-optimised component. Additionally, we demonstrated that this approach requires minimal extra computational effort and is applicable to practical environments that have not yet been studied extensively. We conclude that combined optimisation should be the best practice for RL optimisation. While we have focused on optimising specific reward parameters within a predefined reward structure, future work should explore a broader range of reward function combinations. Such an extension could consider further aspects of the reward function, including metrics, exponentiation, or specific functional choices, such as nonlinear transformations. Our results further indicate that including a variance penalty in a multi-objective optimisation can substantially reduce the performance variance of a given policy, with little or no reduction in performance. This emphasises the value of combined optimisation in achieving a good balance between a high average objective score and achieving this performance consistently. This improvement in stability is often a crucial requirement in reinforcement learning, enhancing reproducibility and the reliability of results in varying environments. Future research should investigate more sophisticated risk-averse metrics and thoroughly assess the trade-off between a policy's performance and stability. §.§.§ Acknowledgments The authors would like to thank Theresa Eimer for helpful discussions regarding the search space design, Anna Münz for help with statistical tests, and Anja Jankovic for helpful feedback. We gratefully acknowledge computing resources provided by the NHR Center NHR4CES at RWTH Aachen University (p0021208), funded by the Federal Ministry of Education and Research, and the state governments participating on the basis of the resolutions of the GWK for national high performance computing at universities. This research was supported in part by an Alexander von Humboldt Professorship in AI held by HH and by the “Demonstrations- und Transfernetzwerk KI in der Produktion (ProKI-Netz)” initiative, funded by the German Federal Ministry of Education and Research (BMBF, grant number 02P22A010). § ENVIRONMENTS AND REWARD PARAMETER SEARCH SPACES In this section, we give a detailed overview of the environments used in our experiments and the reward parameters we are optimising in each environment's reward function. The default values and respective search spaces of the reward parameters can be found in Table <ref>. We opted not to optimise the terminal rewards r for the LunarLander and Wipe environments, as these constitute the sparse rewards that are addressed through the optimisation of the reward shape. For mapping a reward weight w_i to its search space, we always used the mapping w_i↦ [0, 10^n], where n ∈ℕ_0 is the smallest integer such that w_i < 10^n; we chose this approach, since it preserves the general magnitude of the reward parameters, while relying less on their initial ratios. We believe that practitioners typically have a rough idea about the importance of different components but find it difficult to obtain the exact ratios between them.
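For illustration, a short Python sketch of this search-space construction (ours; the example default weights are illustrative values, not the environments' actual defaults):

def weight_search_space(default_weight: float) -> tuple:
    """Map a non-negative default reward weight w_i to the interval [0, 10^n],
    with n the smallest non-negative integer such that w_i < 10^n."""
    n = 0
    while default_weight >= 10 ** n:
        n += 1
    return (0.0, float(10 ** n))

print(weight_search_space(0.3))    # -> (0.0, 1.0)
print(weight_search_space(2.5))    # -> (0.0, 10.0)
print(weight_search_space(140.0))  # -> (0.0, 1000.0)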
§.§ Gymnasium LunarLander: The objective of the environment is to navigate a probe to a designated landing platform safely. We considered the environment's variant with continuous control inputs. In the shaped reward, positive rewards are given for moving closer to the landing platform and negative rewards for moving further away. A positive reward is granted for making successful contact with the platform using the probe's legs. Negative rewards are imposed for high velocities and tilting the probe excessively. Fuel consumption by the probe's engine, when activated, results in negative rewards. However, we considered fuel consumption as a constant physical attribute of the probe and did not consider it in our optimisations. The overall shaped reward function is given by r̃^α, w := α·(w_dist·r̃_dist + w_contact·r̃_contact - w_vel·r̃_vel - w_tilting·r̃_tilting - r̃_fuel + r_terminal), with r_terminal being the environment's sparse reward signal of successful landing or crashing. §.§ Google Brax Ant and Humanoid: The task in both environments is to train a robot to walk forward in a specified direction. In Ant, the robot is designed to resemble a four-legged ant and is human-like in Humanoid. The environment's rewards consist of positive rewards for staying healthy (being able to continue walking) and a reward for the distance travelled in each timestep. A negative reward is obtained for exerting large forces on the robot's joints. The overall shaped reward function is given by r̃^α, w := α·(w_dist·r̃_dist + w_healthy·r̃_healthy - w_force·r̃_force). §.§ Robosuite Wipe: The task is to wipe a table of dirt pegs with a simulated robot arm equipped with a sponge. Positive rewards are obtained for the sponge's proximity to the dirt pegs, having contact with the table, and exerting appropriate pressure on the table. Negative rewards are obtained for applying excessive force on the table, large accelerations while moving, or arm-limit collisions resulting in an unhealthy state. The overall shaped reward function is given by r̃^α, w := α·(w_wiped·r̃_wiped + w_dist·(1 - tanh(w_dist_th·r̃_dist)) + w_contact·r̃_contact - w_force·r̃_force - w_vel·r̃_vel + w_unhealthy·r̃_unhealthy + r_terminal), with r_terminal being the environment's sparse reward for wiping the table clean or not. § INTERDEPENDENCY BETWEEN HYPERPARAMETERS AND REWARD PARAMETERS In Figure <ref>, we present the same landscapes as in Figure <ref> but mark the best-performing hyperparameter value for each reward weight (shown as the blue lines). We can again observe strong dependencies and large changes in the best-performing hyperparameters for the discount factor and the generalised advantage estimate regarding the distance reward parameter. Only for the learning rate, we see almost no dependency on the reward parameters. § HYPERPARAMETER BASELINES AND SEARCH SPACES In Table <ref> and Table <ref>, we present the baselines and search spaces for PPO and SAC, respectively. We reproduced all baselines in our setup and, in some cases, made slight modifications to improve their performance when possible. The baselines for continuous LunarLander PPO and SAC have both been obtained from the stable-baselines 3 Zoo <cit.>. The Google Brax Ant and Humanoid baselines are obtained from Brax's GitHub repository. For PPO, hyperparameters have been shared via a Google Colab notebook in the Google Brax GitHub repository. For SAC, we utilise the performance results of a published hyperparameter sweep.
In the case of Humanoid, we made small adjustments to the reported best-performing parameters in order to obtain the best results in our setup. Our adjustments align with the considered search space of their hyperparameter sweep. We utilised the published baseline in <cit.> for Robosuite Wipe with small adjustments to the training frequency and gradient steps to obtain better performance in our setup. § ADDITIONAL OPTIMISATION RESULTS AND VISUALISATIONS In the following sections, we present additional plots and tables on the performance distributions of our different optimisation experiments. §.§ Default Shaped Reward Function Evaluation In Table <ref>, we present the returns of the obtained policies for each optimisation experiment, evaluated on the corresponding environment's default-shaped reward function. Our observations indicate that for LunarLander and Humanoid, the combined optimisation consistently matches or outperforms the best performance, except for multi-objective SAC training for LunarLander. This suggests that the benefits on the task performance effectively transfer to the default shaped reward function, even though the policies were not specifically optimised for it. In the case of Ant, the performance is slightly lower than that achieved through hyperparameter-only optimisation, yet qualitative analysis shows that the environment is still clearly solved. For the Robosuite Wipe environment, however, the combined optimisation performs significantly worse than the hyperparameter-only optimisation, which starkly contrasts with the evaluation of the environment's task performance. Further analysis reveals that this discrepancy is due to the default shaped reward function's inadequate representation of the environment's task objective. Specifically, policies that do not completely clean the table but maintain contact with it until the end of an episode can accumulate a higher overall return compared to those that quickly complete the cleaning task. Consequently, the policies resulting from combined optimisation, which prioritise rapid table cleaning, achieve lower returns despite better performance in wiping the table. §.§ Optimisation Performance Boxplots In addition to the results presented in Table <ref>, we provide an overview of the full dataset as boxplots. Figures <ref> and <ref> display boxplots for the median performances of each experiment's five optimisation runs. For each experiment's optimisation run, we calculate the median performance across its ten evaluation trainings and present boxplots for the resulting five values per experiment. Figures <ref> and <ref> showcase boxplots for the combined 50 evaluation training performances, obtained by aggregating all ten evaluation training performances for each of the five optimisation runs per experiment. In each boxplot, the baseline's median performance is marked with a red line. Consistent with our analysis in Section <ref>, we observe that combined optimisation can match or surpass the individual optimisations of hyperparameters and reward parameters. Furthermore, multi-objective optimisation substantially enhances stability with minimal or no reductions in performance. §.§ Incumbent Performance during Optimisation Figure <ref> depicts the median incumbent performance during each PPO optimisation experiment. We note that the optimisation steps necessary to surpass the baselines generally occur much earlier than the complete duration of the optimisation, similar to the findings of <cit.>. 
However, we also observe that continuous improvement is still achieved after exceeding the baseline performance. Moreover, the combined optimisation does not seem to be significantly slower than optimising hyperparameters or reward parameters alone, suggesting that the combined optimisation can enhance results without additional costs, similar to the SAC results presented in Figure <ref>. § REWARD SCALING We examined two alternative methods to perform reward scaling for the Google Brax experiments denoted as explicit and implicit scaling: Explicit reward scaling aims to disentangle effects between the chosen scaling α and the reward weights w during the optimisation by normalising the reward weights. Formally, the optimiser's selected weights w are normalised by w' = ‖ŵ‖_1 ·w/w_1, where ŵ are the default reward weights of the given environment. The resulting reward function r̃^α, w' was then used for RL training as described in Section <ref>. In contrast, implicit reward scaling is done by keeping the reward scale fixed as α = 1 and instead optimising reward parameters with search spaces multiplied by the upper bound of the explicit scaling. The detailed search spaces are given in Table <ref>. In Table <ref>, we report the results of the two scaling approaches. Explicit scaling statistically significantly outperforms implicit scaling for Ant, whereas, for Humanoid, the median performance is better but not significant. Overall, this hints to explicit scaling being a better choice in cases where scaling matters to the algorithm. Due to its better performance, we used the explicit scaling method for performing experiments in Section <ref>. § BEST PERFORMING HYPERPARAMETER AND REWARD WEIGHT CONFIGURATIONS PER ALGORITHM AND ENVIRONMENT We present the best-performing configurations we found for each training algorithm and environment based on median external objective performance across ten evaluation trainings. Table <ref> and Table <ref> display the parameters for PPO and SAC, respectively, and additionally report the performance on the external objective and default shaped reward. We hope the configurations can help as baselines and for future research. The hyperparameter configuration and reward parameters must be used together in training for each environment to achieve the best performance. § EXECUTION ENVIRONMENT All experiments were conducted on a high-performance cluster running the Rocky Linux operating system, release 8.9. The Gymnasium LunarLander and Robosuite Wipe optimisations were executed on CPU nodes, while the Google Brax optimisations utilised GPU nodes. The CPU-based optimisations were carried out on nodes equipped with Intel Xeon Platinum 8160 2.1 GHz processors, each equipped with 24 cores each and 33 792 KB cache, with approximately 3.75 GB of RAM per core. The GPU-based optimisations utilised NVIDIA Volta 100 GPUs (V100-SXM2) with 16 GB of memory. During the optimisation process, the LunarLander and Wipe environments, on average, employed 25 and 32 CPU cores in parallel, respectively. For LunarLander, each RL training run used 4 cores, while Wipe training utilised 5 cores per run. During optimisation, the Google Brax environments required an average of 6 parallel GPUs, with each RL training run conducted on a single GPU. § LINEAR MIXED EFFECTS REGRESSION ANALYSIS To thoroughly analyse the differences between our optimisation experiments, we employed a linear mixed effects regression with a Wald test to analyse the difference in performance between experiments. 
We conducted the test based on the introduction of <cit.>, using the Wald test as a commonly used approximation of the likelihood-ratio test. The linear mixed-effects model analysis enables us to compare the performance of two optimisation experiments across their respective 50 evaluation runs while accounting for the dependencies induced by the seed of an optimisation run to which an evaluation belongs. Therefore, we can compare the full extent of our data and avoid collapsing it by summarising each optimisation run's performance by the median performance of its 10 evaluation trainings. For each environment and algorithm, we always pick the best-performing optimisation experiment based on its median performance presented in Table <ref>. We then compare this optimisation experiment pairwise to the other corresponding optimisation experiments and test whether the performance is statistically significantly different. Hence, the value to be predicted by the linear mixed-effects model is evaluation performance, with the fixed effect being the two compared experiments. The different evaluations are grouped by their corresponding optimisation seed as the model's random effect. Using the Wald test, we then check if removing the fixed experiment effect from the model would substantially harm the prediction performance of the model. Hence, small p-values of the test indicate that the model with the fixed experiment effect provides a better fit, and therefore, the experiments' performances are statistically significantly different. We applied a commonly used significance level of 0.05 to test for significance. For preprocessing the 100 evaluation data points of two experiments, we normalised mean performance to 0 and standard deviation to 1. Afterwards, we fit a mixed-effects model on the data and remove all points as outliers with residuals deviating more than two times the standard deviation from the mean. We then fit a model on the cleaned data and perform the Wald test to check for significance in the fixed effect, hence the difference between the two experiments. The assumptions underlying the test are (a) independence of the random effects and (b) homoskedasticity of the residuals of the fitted linear mixed-effects model. The normality of the residuals is an assumption of minor importance, as mixed-effect models have been shown to be robust to violations of this distributional assumption <cit.>; <cit.> even suggest not to test for normality of the residuals. The independence of the optimisation runs as the random effect is ensured by using different random seeds. To check the homoskedasticity assumption, we performed White’s Lagrange multiplier test on the residuals. The null hypothesis of White's test is homoskedasticity, and hence, large p-values suggest that the assumption of homoskedasticity is fulfilled. Further, we can reasonably assume that the evaluations of our experiments follow a normal distribution, and we further tested normality of the residuals using the Shapiro-Wilk test. In Table <ref>, we present the p-values of the Wald and White tests, as well as the number of outliers removed for the pairwise comparisons detailed in Table <ref>. The values correspond to comparisons against the cell identified as the best performer. Entries that do not exhibit a significant difference from the best-performing cells are highlighted. The assumption of homoskedasticity is satisfied in the majority of cases, as indicated by p-values greater than 0.05. 
This is particularly true in scenarios where the performance of the experiment is not significantly different from that of the best optimisation. Moreover, the p-values from the Wald test are generally well above the significance threshold of 0.05 when there is no significant difference. In our comparisons, the normality assumption was met in 18 out of 35 cases. Given the mixed-effect models' resilience to deviations from the normality assumption and the large number of cases where this assumption was satisfied, we conclude that our test results are reliable for comparing the outcomes of our experiments.
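To make the test procedure easier to reproduce, the following Python sketch outlines one way to implement it with statsmodels and scipy. It is a schematic illustration under our own assumptions rather than the exact analysis script: the column names performance, experiment and seed are placeholders for our evaluation data, and the Wald test of the single fixed-effect coefficient is read off from the z-statistic reported by statsmodels, which is equivalent for a one-coefficient contrast.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_white
from scipy.stats import shapiro

def compare_experiments(df: pd.DataFrame):
    # df: 100 rows (2 experiments x 5 optimisation seeds x 10 evaluations) with
    # columns "performance" (float), "experiment" (label) and "seed" (grouping).
    df = df.copy()
    df["performance"] = (df["performance"] - df["performance"].mean()) / df["performance"].std()

    def fit_model(data):
        # Fixed effect: experiment; random (grouping) effect: the optimisation seed.
        return smf.mixedlm("performance ~ experiment", data, groups=data["seed"]).fit()

    fit = fit_model(df)
    resid = np.asarray(fit.resid)
    keep = np.abs(resid - resid.mean()) <= 2.0 * resid.std()   # drop outliers beyond 2 sigma
    df_clean = df[keep]
    fit = fit_model(df_clean)

    # Wald-type test of the fixed experiment effect.
    coef = [c for c in fit.params.index if c.startswith("experiment")][0]
    p_wald = float(fit.pvalues[coef])

    # Assumption checks: White's LM test for homoskedasticity, Shapiro-Wilk for normality.
    _, p_white, _, _ = het_white(np.asarray(fit.resid), fit.model.exog)
    _, p_shapiro = shapiro(np.asarray(fit.resid))
    return p_wald, p_white, p_shapiro, int((~keep).sum())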
http://arxiv.org/abs/2406.17703v1
20240625164147
trARPES and optical transport properties of irradiated twisted bilayer graphene in steady-state
[ "Ashutosh Dubey", "Ritajit Kundu", "Arijit Kundu" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
These authors contributed equally to this work. Department of Physics, Indian Institute of Technology Kanpur, Kanpur 208016, India These authors contributed equally to this work. Department of Physics, Indian Institute of Technology Kanpur, Kanpur 208016, India Department of Physics, Indian Institute of Technology Kanpur, Kanpur 208016, India § ABSTRACT We theoretically investigate the trARPES spectrum and optical Hall conductivity in periodically driven twisted bilayer graphene, considering both steady-state and “projected" occupations of the Floquet state. In periodically driven pre-thermalized systems, steady-state occupation of Floquet states is predicted to occur when coupled to a bath, while these states have projected occupation instantaneously after the driving starts. We study how these two regimes can give markedly different responses in optical transport properties. In particular, our results show that steady-state occupation leads to near-quantized optical Hall conductivity for a range of driving parameters in twisted bilayer graphene, whereas projected occupation leads to non-quantized values. We discuss the experimental feasibility of probing such non-equilibrium states in twisted bilayer graphene. trARPES and optical transport properties of irradiated twisted bilayer graphene in steady-state Arijit Kundu June 25, 2024 =============================================================================================== *These authors contributed equally to this work § INTRODUCTION The discovery of correlated insulators and superconductivity in twisted bilayer graphene (TBG) has sparked considerable theoretical and experimental interest in moiré materials <cit.>. Moiré materials lie at the intersection of topological and electronic correlation physics, promoting novel phases that do not exist within each paradigm individually <cit.>. Near magic angles twisted bilayer graphene feature flat bands where electron correlation effects prevail, leading to various symmetry-broken phases <cit.>. Additionally, the valley and spin degeneracies can result in competing orders, with the quantum metric and Berry curvature potentially playing a crucial role in stabilizing a specific order <cit.>. Time-periodic fields can drive materials into exotic non-equilibrium phases <cit.>, allowing external control to tune their band-geometric, topological <cit.>, and transport properties <cit.>. In particular, periodic driving can effectively make the bands flat in TBG even away from the magic angles and may induce finite Chern numbers and large Berry curvatures <cit.>, suggesting quantized anomalous Hall transport signatures. However, unlike equilibrium systems, the nontrivial band topology in a many-body system does not always result in robust quantized transport and depends delicately on the occupation of Floquet states <cit.>. Floquet states offer a convenient basis for describing the time evolution of periodically driven systems, similar to the Hamiltonian basis for static systems. However, the thermodynamics governing level occupation in static systems does not directly translate to driven quantum systems. The driving field induces heating and breaks `reversibility' condition, which results in the Boltzmann distribution for equilibrium systems <cit.>. Time-resolved ARPES (trARPES) <cit.> and Andreev spectroscopy experiments <cit.> reveal the occupation of the filled Floquet band, which qualitatively differs from the Fermi-Dirac distribution observed in static systems, corroborating this distinction. 
Recently, several theoretical studies have investigated the occupation of Floquet states <cit.>, that depends on the system-bath coupling as well as drive parameters. In a periodically driven closed system, the occupation of Floquet states is determined by projecting the thermal density matrix onto the Floquet states. This typically results in a finite density of excitations, with no relaxation to lower energy levels through energy emission <cit.>. We refer to this as the “projected" occupation of the Floquet states. In realistic experiments, isolating the system perfectly from the environment is challenging, and observing this kind of Floquet state occupation in practice is unlikely. On the other hand, a closed-form expression for Floquet occupation can be obtained if the system is not fully closed, the system-bath coupling is negligible, and the system reaches a steady-state on timescales longer than that of the system-bath coupling <cit.>. Generally, the excitation density is lower here than in the closed system as the system can emit energy to the bath and relax to lower levels. We refer to this occupation as the “steady-state" occupation of the Floquet states. In this work, we theoretically investigate the trARPES spectrum of TBG, highlighting the contrasting signatures of projected and steady-state occupations when driven by a circularly polarized pump pulse. Several studies have explored band engineering in TBG and other moiré systems driven by periodic fields <cit.>. Others have examined the transport properties of periodically driven TBG, either using a semiclassical framework <cit.> or assuming a Fermi-Dirac occupation of Floquet states <cit.>. However, these studies often lack a connection between band topology and robust quantized Hall transport. We compute the optical Hall conductivity in TBG and find that when steady-state occupation is realized, the optical Hall conductivity approaches near-quantization for certain values of the drive parameters. In contrast, with projected occupation, it is never quantized. Additionally, for a generic two-band model, we find that the steady-state occupation is zero (one) for the upper (lower) quasienergy level (at zero temperature) around a gap due to a lifted degeneracy of static bands under the influence of a periodic drive. On the other hand, the projected occupation is half for both levels. Additionally, other gaps resulting from hybridization between the quasienergy bands and Floquet sidebands lead to half-occupations of the levels in both steady-state and projected occupations. The rest of the paper is organized as follows: in <ref>, we describe the Floquet preliminaries (<ref>), occupation of the Floquet states (<ref>), trARPES spectrum (<ref>), optical Hall conductivity (<ref>) and the low energy effective model of TBG with periodic driving field (<ref>). In <ref>, we discuss the numerical results of occupation (<ref>), trARPES spectrum (<ref>), and optical Hall conductivity (<ref>) of periodically driven TBG. We conclude and summarize in <ref>. § FORMALISM §.§ Floquet preliminaries For a periodically driven quantum system, the Hamiltonian follows ℋ(k, t) = ℋ(k, t + T), where k represents the Bloch momentum and T is the drive period. In this case, according to Floquet's theorem, the Schrödinger equation yields a complete set of orthogonal solutions of the form |Ψ_α(k, t)⟩ = e^-i ε_kα t |ϕ_α(k, t)⟩. Here, ε_ kα is the quasienergy of the Floquet state |Ψ_α(k, t)⟩. 
|ϕ_α( k,t)⟩=|ϕ_α( k,t+T)⟩ is the time-periodic part of the Floquet state, dubbed the Floquet mode. Quasienergies lying within the Floquet Brillouin zone, i.e., ε_ kα∈(-Ω/2,Ω/2], correspond to unique solutions of the Schrödinger equation, where Ω = 2π/ T is the frequency of the periodic drive. Due to the periodicity of ℋ( k,t) and |ϕ_α( k,t)⟩, they can be expressed in a Fourier series: ℋ( k,t) = ∑_p ∈ℤ e^-i p Ω tℋ^(p)( k), |ϕ_α( k,t)⟩ = ∑_p ∈ℤ e^-i p Ω t|ϕ_α^(p)( k)⟩. Here we use a slight abuse of notation, representing the p-th Fourier coefficient of ℋ(k, t) and |ϕ_α(k, t)⟩ as ℋ^(p)(k) and |ϕ^(p)_α(k)⟩, respectively. Substituting Eq. (<ref>) and Eq. (<ref>) into the Schrödinger equation yields the following eigenvalue equation: ∑_n [ℋ^(m-n)( k)- mΩδ_m,n]|ϕ_α^(n)( k)⟩ = ε_ kα|ϕ_α^(m)( k)⟩. This defines the extended-zone Hamiltonian ℍ, the blocks of which are given by: ℍ_m,n = [ℋ^(m-n)( k)- mΩδ_m,n]. Upon diagonalizing ℍ, one finds the quasienergies (ε_ kα) and the Fourier components of the Floquet mode |ϕ^(p)_α( k)⟩. Note that the Fourier coefficients of the Floquet modes obey the following normalization condition: ∑_p ⟨ϕ^(p)_α( k)|ϕ^(p)_α( k)⟩=1. §.§ Occupation of Floquet states For a driven system, the `reversibility' condition that gives rise to the Boltzmann distribution (and, for a non-interacting static system, the Fermi-Dirac distribution) breaks down. Consequently, the thermodynamic principles applied to a static system cannot be directly translated to periodically driven systems. The occupation of Floquet states depends on the specifics of the driving protocol and the system-bath coupling, which provides a relaxation mechanism for the system. For a closed, periodically driven, non-interacting system, the occupation of the αth Floquet state (with Floquet mode |ϕ_α( k,t)⟩) is obtained by projecting the density matrix onto the Floquet state, n^ pr_α( k,t) = ⟨ϕ_α( k,t)| ρ^ static( k) |ϕ_α( k,t)⟩. This is referred to as the projected occupation throughout the remainder of the paper. Here, ρ^static( k) is the thermal density matrix of the static system, given by ρ^ static( k) = ∑_n f(E_ k n)|ψ_n( k)⟩⟨ψ_n( k)|, where |ψ_n( k)⟩ and E_ k n are the eigenstate and corresponding eigenvalue for the nth band, respectively. f(x) = 1/(e^β (x-μ) + 1) is the Fermi-Dirac distribution function. β and μ are the inverse temperature and the chemical potential, respectively. The periodic drive gives rise to a finite excitation density, and the system cannot relax to a lower energy level through emission as it is closed from the environment. A closed-form expression for the Floquet occupation can be obtained if the system is not fully isolated, the system-bath coupling is negligible, and the system reaches a steady-state on timescales longer than that of the system-bath coupling. In this scenario, the occupation of the αth Floquet state follows a staircase Fermi-Dirac distribution <cit.> given by n^ ss_α( k) = ∑_p∈ℤf(ε_ kα + pΩ - μ)⟨ϕ_α^(p)( k)| ϕ_α^(p)( k)⟩. This is referred to as the steady-state occupation. In this case, the excitation density is generally lower, as the system can relax to lower energy levels. In the following sections, we demonstrate that the realization of either projected or steady-state occupation in a periodically driven system leads to contrasting trARPES spectroscopy (<ref>) and optical Hall conductivity (<ref>) in twisted bilayer graphene (TBG), as well as in Dirac and semi-Dirac systems (see <ref>). 
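As a concrete illustration of the two occupation rules, the following self-contained Python sketch builds the truncated extended-zone Hamiltonian for a circularly driven two-band (Dirac-like) model at a single momentum and evaluates both the projected and the steady-state occupations. It is our own minimal example with illustrative parameter values, not the production code behind the TBG results below, but it follows the equations above directly.

import numpy as np

Omega, A0 = 3.0, 0.4       # drive frequency and amplitude (illustrative units)
kx, ky = 0.05, 0.0         # Bloch momentum of interest
mu, beta = 0.0, 1.0e3      # chemical potential and inverse temperature
N_F = 7                    # number of retained Fourier blocks (odd)
dim = 2

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Fourier components of H(k,t) = (k + A(t)).sigma with A(t) = A0 (cos Omega t, sin Omega t):
H0 = kx * sx + ky * sy                 # H^(0)
Hp1 = 0.5 * A0 * (sx + 1j * sy)        # H^(+1), coefficient of e^{-i Omega t}
Hm1 = Hp1.conj().T                     # H^(-1)

# Extended-zone Hamiltonian: blocks H_{m,n} = H^(m-n) - m Omega delta_{m,n}.
ps = np.arange(N_F) - N_F // 2
HF = np.zeros((N_F * dim, N_F * dim), dtype=complex)
for i, m in enumerate(ps):
    for j, n in enumerate(ps):
        if m == n:
            HF[i*dim:(i+1)*dim, j*dim:(j+1)*dim] = H0 - m * Omega * np.eye(dim)
        elif m - n == 1:
            HF[i*dim:(i+1)*dim, j*dim:(j+1)*dim] = Hp1
        elif m - n == -1:
            HF[i*dim:(i+1)*dim, j*dim:(j+1)*dim] = Hm1

eps, vecs = np.linalg.eigh(HF)
sel = np.where((eps > -Omega / 2) & (eps <= Omega / 2))[0]   # first Floquet zone

def fermi(E):
    # Overflow-safe Fermi-Dirac function 1/(exp(beta (E - mu)) + 1).
    return 0.5 * (1.0 - np.tanh(0.5 * beta * (E - mu)))

# Thermal density matrix of the undriven system.
E_static, psi_static = np.linalg.eigh(H0)
rho_static = sum(fermi(E) * np.outer(v, v.conj()) for E, v in zip(E_static, psi_static.T))

for a in sel:
    phi_p = vecs[:, a].reshape(N_F, dim)       # Fourier components phi^(p)
    phi_t0 = phi_p.sum(axis=0)                 # Floquet mode at t = 0
    n_pr = float(np.real(phi_t0.conj() @ rho_static @ phi_t0))           # projected
    n_ss = float(sum(fermi(eps[a] + p * Omega) * np.vdot(phi_p[i], phi_p[i]).real
                     for i, p in enumerate(ps)))                         # steady state
    print(f"quasienergy {eps[a]:+.4f}: n_pr = {n_pr:.3f}, n_ss = {n_ss:.3f}")

The eigenvectors of the extended-zone matrix directly provide the Fourier components phi^(p), so both occupation rules reduce to simple sums over blocks of the same eigenvector.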
We analytically calculate both the projected and steady-state occupation probabilities for weak driving at various drive-induced gap opening points (for more details, see <ref>). §.§ trARPES spectrum In trARPES spectroscopy, the system undergoes illumination by two ultrafast laser pulses: the pump and probe pulses, with a tunable delay time between them <cit.>. The pump pulse drives the system out of equilibrium. The probe pulse then forces the system to eject photoelectrons, which are captured by a photodetector with angular resolution. From the trARPES spectrum, one can directly observe the occupied states in the quasienergy spectrum of the driven system. The pump pulse is modeled by a vector potential, A_pump(t) = A_0 s_pump(t)(cos(Ω t), sin(Ω t)), which is incorporated in the Hamiltonian by Peierls substitution. Here s_pump(t) = e^-(t-t_p)^2 / 2 σ^2_pump is a Gaussian envelope of the pump pulse with a width of σ_pump, centered around time t_p. The trARPES spectrum represents the photoemission intensity ℐ( k,ω) at momentum k and frequency ω, given by <cit.> ℐ( k,ω) = 1/(2t_0)^2∫_-t_0^t_0∫_-t_0^t_0 d t_1 d t_2 Tr[𝒢^<( k, t_1, t_2)] × s_probe(t_1) s_probe(t_2)e^i ω(t_1 - t_2), where s_probe(t) = e^-(t-t_pr)^2 / 2 σ^2_probe accounts for the probe pulse centered around time t_pr with width σ_probe. The probe is applied within the time interval [-t_0,t_0]. 𝒢_αβ^<( k, t,t') = i ⟨ c^†_ kβ(t') c_ kα(t) ⟩ is the “lesser" Green's function <cit.>, where c_ kα is the fermionic annihilation operator at momentum k for orbital α. The evolution of the annihilation operator c_ kα is given by c_ kα(t) = ∑_γ [𝒰_ k(t,0)]_αγc_ kγ(0), 𝒰_ k(t,0) = 𝒯exp(-i ∫_0^t d t' ℋ( k, t')), where 𝒰_ k(t,0) is the evolution operator in the presence of the pump pulse and 𝒯 is the time-ordering operator. The pump-probe delay time is given by Δ t = t_pr - t_p. The probe pulse overlaps maximally with the peak field strength of the pump pulse when Δ t = 0, which is the case we assume throughout this work. To observe well-defined Floquet band structures in the trARPES spectrum, it was shown in Ref. Sentef_2015 that one has to follow the time-scale hierarchy between the duration of the pump pulse, the duration of the probe pulse, and the period of the drive: σ_pump≫σ_probe≫ T. §.§ Optical Hall conductivity When subjected to periodic driving, a system's physical properties can differ significantly from those of its static counterpart. Studying the properties of such systems often involves analyzing their response to weak external perturbations. Linear response theory provides a natural framework for this. Ref. <cit.> extends linear response theory from static systems to strongly driven Floquet systems under the influence of a weak external probe, which we employ to compute the optical Hall conductivity. trARPES spectroscopy exclusively captures filled quasienergy states, lacking information about unfilled states. However, introducing a weak perturbation typically induces optical transitions between filled and unfilled states. Consequently, optical conductivity can probe the unoccupied quasienergy states. The optical Hall conductivity σ_xy(ω), probed at frequency ω, for a generic periodically driven system with driving frequency Ω reads: σ_xy(ω) = (i/ω V)∑_α,β ,m, k[D^(m)_α xβ( k)D^(-m)_β yα( k)(n_α( k) - n_β( k))/(ω + ε_α( k) - ε_β( k) + mΩ + iδ)]. 
Here D^(m)_α j β( k) = 1/ T∫_0^T e^i m Ω tϕ_ k α(t)∂ℋ( k, t)/∂ k_j(t)ϕ_ k β represents the Fourier component of the matrix element of the current operator in j-direction. ε_α( k) and n_α( k) are the quasienergy and occupation probability of α-th Floquet state, respectively, δ is an infinitesimally small positive number, and V is the total area of the system. We refer the reader to <ref> for the details of the derivation of <ref>. Note that <ref> is valid for only insulators; for metals, there will be additional corrections corresponding to Drude conductivity <cit.>. Physically, σ_xy(ω) encapsulates all optical transitions from a filled to an unfilled quasienergy band, including sidebands separated by energy ω. In a periodically driven system, circularly polarized light explicitly breaks time-reversal symmetry and induces non-trivial Chern number. However, in periodically driven systems, this does not always result in quantized Hall conductivity. The occupation probability can deviate significantly from the static Fermi-Dirac distribution, causing the correspondence between the Chern number and the DC (i.e., ω→ 0) Hall conductivity to break down. Nevertheless, if the occupation probability closely approximates the Fermi-Dirac distribution, near-quantized Hall conductivity can be achieved. §.§ Model Hamiltonian Twisted bilayer graphene (TBG) consists of two graphene layers rotated relative to each other, inducing a moiré potential due to the rotational misalignment. The low energy effective Hamiltonian is given by <cit.>: ℋ_0 = [ h_θ / 2 U( x); U( x)^† h_-θ / 2 ], where h_±θ / 2= -i v_F ∇·σ_±θ / 2. Here, v_F is the Fermi velocity of an electron in a single graphene layer, and σ_θ = e^-i σ_z θ / 2σ e^i σ_z θ / 2 are the rotated Pauli matrices. U( x) is the moiré potential, given by U( x) = U_0 + U_1 e^i G_1 · x + U_2 e^i G_2 · x, U_n = w_AAσ_0 + w_AB [σ_x cos(n ϕ) +σ_y sin(n ϕ) ], where ϕ = 2π / 3 and G_1,2 = k_θ(√(3) / 2, ± 3 / 2) are the reciprocal lattice vectors, and k_θ = 8πsin(θ / 2) / 3 a, with a as the lattice constant of graphene. Here, w_AA and w_AB are the tunneling amplitudes between AA and AB-stacked regions of TBG, respectively. The Hamiltonian in the Eq. (<ref>) acts on the spinor space (ψ_1,χ_1,ψ_2,χ_2), where 1,2 refers to two graphene layers and ψ_i, χ_i corresponds two sublattices of graphene. We periodically drive TBG with a circularly polarized light, represented by the vector potential A(t) = A_0(cos(Ω t), sin(Ω t)), where Ω is the driving frequency. This enters in the Hamiltonian by a minimal substitution, i.e., h_±θ / 2→ h_±θ / 2(t) = v_F [-i ∇ + A(t) ]·σ_±θ / 2. We treat the moiré potential in an effective long-wavelength approximation <cit.>. The time-dependent Hamiltonian reads ℋ(t) = ℋ_0 + h(t), where the time-dependent part h(t) is given by h(t) = ℋ^(+1) e^-i Ω t + ℋ^(-1)e^i Ω t, Here, ℋ^(±1) are the Fourier components of ℋ(t), given by [ℋ^(-1)] = [ℋ^(+1)]^† = v_F A_0 ( [ 0 0 0 0; e^i θ / 2 0 0 0; 0 0 0 0; 0 0 e^-i θ / 2 0; ]), and the zeroth Fourier component is given by the static Hamiltonian, i.e., ℋ^(0) = ℋ_0. As the low energy Hamiltonian is constructed from two Dirac Hamiltonian and does not incorporate the non-linear dispersion of graphene at higher energies, only the Fourier harmonics ℋ^(0), ℋ^(± 1) of ℋ(t) are non-zero. Now, to find the quasienergy and the Fourier components of the Floquet mode, we solve the eigenvalue problem of extended-zone Hamiltonian Eq. (<ref>). 
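Evaluating the conductivity formula for the full moiré model is numerically demanding, so the following self-contained Python sketch illustrates how its ingredients (quasienergies, Fourier components, current matrix elements D^(m), occupations, and the momentum sum) are assembled in practice for the simplest building block, a single circularly driven Dirac cone (one rotated-layer Dirac Hamiltonian without the moiré potential). It is our own illustration with assumed parameter values, grid size and broadening δ, not the code used for the figures, and it reports σ_xy at a small probe frequency in units of e^2/h (with e = ħ = 1, e^2/h = 1/2π).

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

Omega, A0, delta = 3.0, 0.4, 1e-3      # drive frequency, amplitude, broadening (assumed)
mu, beta = 0.0, 1.0e3                  # chemical potential, inverse temperature
N_F = 7                                # Fourier truncation
ps = np.arange(N_F) - N_F // 2

def fermi(E):
    return 0.5 * (1.0 - np.tanh(0.5 * beta * (E - mu)))

def floquet(kx, ky):
    """Quasienergies in the first Floquet zone and their Fourier components phi^(p)."""
    H0 = kx * sx + ky * sy
    Hp1 = 0.5 * A0 * (sx + 1j * sy)                      # coefficient of e^{-i Omega t}
    HF = np.zeros((2 * N_F, 2 * N_F), dtype=complex)
    for i, m in enumerate(ps):
        HF[2*i:2*i+2, 2*i:2*i+2] = H0 - m * Omega * np.eye(2)
        if i + 1 < N_F:
            HF[2*(i+1):2*(i+1)+2, 2*i:2*i+2] = Hp1           # block with m - n = +1
            HF[2*i:2*i+2, 2*(i+1):2*(i+1)+2] = Hp1.conj().T  # block with m - n = -1
    eps, vecs = np.linalg.eigh(HF)
    sel = np.where((eps > -Omega / 2) & (eps <= Omega / 2))[0]
    return eps[sel], [vecs[:, a].reshape(N_F, 2) for a in sel], H0

def D_elem(phi_a, phi_b, m, sigma):
    """D^(m): with a k-independent drive only dH^(0)/dk = sigma survives, so the
    double Fourier sum reduces to sum_p <phi_a^(p)| sigma |phi_b^(p+m)>."""
    out = 0.0 + 0.0j
    for p in range(N_F):
        if 0 <= p + m < N_F:
            out += phi_a[p].conj() @ sigma @ phi_b[p + m]
    return out

def occupations(eps, phis, H0, rule):
    E0, V0 = np.linalg.eigh(H0)
    rho = sum(fermi(E) * np.outer(v, v.conj()) for E, v in zip(E0, V0.T))
    ns = []
    for e, ph in zip(eps, phis):
        if rule == "ss":                        # steady-state occupation
            ns.append(sum(fermi(e + p * Omega) * np.vdot(ph[i], ph[i]).real
                          for i, p in enumerate(ps)))
        else:                                   # projected occupation
            phi0 = ph.sum(axis=0)
            ns.append(float(np.real(phi0.conj() @ rho @ phi0)))
    return ns

def sigma_xy(omega, rule="ss", kc=1.0, nk=31):
    """Optical Hall conductivity of one driven Dirac cone, in units of e^2/h."""
    ks = np.linspace(-kc, kc, nk)
    dk2 = (ks[1] - ks[0]) ** 2
    total = 0.0 + 0.0j
    for kx in ks:
        for ky in ks:
            eps, phis, H0 = floquet(kx, ky)
            ns = occupations(eps, phis, H0, rule)
            for a in range(len(eps)):
                for b in range(len(eps)):
                    for m in range(-(N_F - 1), N_F):
                        num = D_elem(phis[a], phis[b], m, sx) * D_elem(phis[b], phis[a], -m, sy)
                        total += num * (ns[a] - ns[b]) / (omega + eps[a] - eps[b] + m * Omega + 1j * delta)
    # (1/V) sum_k -> integral d^2k/(2 pi)^2; divide by e^2/h = 1/(2 pi) for units of e^2/h.
    return float(np.real(1j / omega * total) * dk2 / (2 * np.pi) ** 2 * 2 * np.pi)

print("sigma_xy(omega=0.05), steady-state occupations:", round(sigma_xy(0.05, "ss"), 3))
print("sigma_xy(omega=0.05), projected occupations:   ", round(sigma_xy(0.05, "pr"), 3))

The momentum cutoff kc is kept below Omega/2 here so that the grid avoids the drive-induced resonance ring; for the TBG calculation the sum instead runs over the moiré Brillouin zone.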
In this study, we adopt natural units e = ħ = 1, with additional parameters specified as follows: a = 2.4 Å, v_F / a = 2.425 eV, w_AB = 112 meV, and w_AA = 0 meV (chiral limit). For the construction of the Hamiltonian in the plane-wave basis, the reciprocal lattice vectors G_n,m = n G_1 + m G_2 are restricted within -3 ≤ m,n ≤ 3. The number of Fourier components used for the construction of the extended-zone Hamiltonian is restricted to seven, enough for the convergence of the low-energy quasienergy bands. § RESULTS AND DISCUSSION §.§ Occupation of periodically driven TBG When a static system is periodically driven (here, with circularly polarized light) at a frequency much higher than the relevant bandwidth, the quasienergy spectrum resembles the real energy spectrum of the static system, except for gap openings at zero quasienergy wherever the static bands are degenerate at zero energy. These gap openings are attributed to the broken time-reversal symmetry of the system in the presence of the periodic drive. At these gap openings, interestingly, the steady-state occupation predicts complete filling of the lower quasienergy band (assuming zero temperature and a chemical potential at zero quasienergy), with the upper quasienergy band remaining empty. This starkly contrasts with the projected occupation's prediction that both the upper and lower quasienergy bands will be exactly half-filled around the gap opening. A derivation of the above is presented in <ref>. On the other hand, at lower driving frequencies (lower than the bandwidth of the static system) and for very weak driving amplitude, the quasienergy spectrum resembles a folded static energy spectrum within a Floquet Brillouin zone (-Ω/2,Ω/2], except for gap openings at zero quasienergy (in addition to the gaps that form out of broken time-reversal symmetry) as well as at quasienergy ±Ω/2. This is due to the mixing of bands as they fold. Interestingly, whenever a gap opens because folded bands mix, both SS and PR occupation predict exactly half occupation of the quasienergy bands (see <ref>). The above analysis holds only for weak driving amplitude; at arbitrary driving intensity, SS and PR occupations can differ significantly in all parts of the spectrum. In the case of twisted bilayer graphene (TBG), around twist angle θ = 1.1^∘, in the chiral limit, the two central bands are almost flat (bandwidth ∼ 3.5 meV). These bands are degenerate at the two Dirac points in the moiré Brillouin zone (mBZ), i.e., K_M and K'_M (see <ref>(a)), due to the C_2 𝒯 symmetry present in graphene, where C_2 is a 180^∘ rotation and 𝒯 is the spinless time-reversal symmetry operator. Time-reversal symmetry is broken when circularly polarized light is applied, and the quasienergy spectrum gaps out at the K_M and K'_M points in the mBZ. This results in two gapped central quasienergy bands that are even flatter, with a bandwidth of approximately 0.28 meV, as shown in Fig. <ref>(b). Consequently, the occupation in the quasienergy bands is redistributed, differing from that of the static bands. For these calculations, we set μ = 0 to determine the occupation of the quasienergy bands. At zero temperature, the value of the PR occupation is nearly half for the two flat bands (above and below μ = 0), as shown in Fig. <ref>(c), whereas the value of the SS occupation is nearly one and zero for the flat bands below and above μ = 0, respectively, as shown in Fig. <ref>(d). Additionally, we present the PR and SS occupations for Dirac and semi-Dirac systems in <ref>. 
§.§ trARPES spectrum of periodically driven TBG In this section, we compute the trARPES spectrum of the periodically driven TBG. For analytical calculations we assume σ_pump≫ T and drop the Gaussian part from A_pump(t). In this limit, the vector potential for the pump pulse becomes periodic, and one can use tools from Floquet theory to simplify Eq. (<ref>). The evolution operator in <ref>, in the Floquet basis, is given by 𝒰_ k(t,0)= ∑_α e^-i ε_ k α t |ϕ_α( k , t)⟩⟨ϕ_α( k ,0)|, where ε_ k α is the αth quasienergy band and |ϕ_α( k,t)⟩ is the corresponding Floquet mode. Following Eq. (<ref>) and Eq. (<ref>), one can simplify the trace of the lesser Green's function in the Floquet basis as Tr[𝒢^<( k ,t_1, t_2)] = ∑_α,β∑_m,n e^i(ε_ k α + mΩ)t_2 e^-i(ε_ k β + nΩ)t_1 ×⟨ϕ_β( k,0)|𝒢^<( k ,0,0) |ϕ_α( k , 0)⟩ ×⟨ϕ_α^(m)( k)|ϕ_β^(n)( k)⟩. For analytical simplicity, we assume t_pr = t_p = 0 and t_0 = NT, where N ≫ 1 is a large number. Substituting Eq. (<ref>) into Eq. (<ref>) and performing the temporal integrals, one obtains the photoemission intensity ℐ( k, ω) as: ℐ( k,ω) = πσ_probe^2/4N^2 T^2∑_α,β,m,n[⟨ϕ_α( k , 0)|𝒢^<( k,0,0)|ϕ_β( k ,0)⟩⟨ϕ^(m)_β( k )|ϕ^(n)_α( k )⟩] ×exp(- σ^2_probe(ω - ε_ kα - n Ω)^2 / 2) exp(- σ^2_probe(ω - ε_ kβ - m Ω)^2 / 2) × erf(NT/√(2)σ_probe, i σ_probe(ε_ kα - ω + n Ω)/√(2)) erf(NT/√(2)σ_probe, i σ_probe(ε_ kβ - ω + m Ω)/√(2)), where erf(a,b) = [erf(a+b)+erf(a-b)]/√(2) and erf(x)=2 ∫_0^x exp(-t^2) d t/√(π) is the error function. The energy resolution of the trARPES spectrum is inversely proportional to σ_probe. The photoemission intensity ℐ(k, ω) of the occupied band depends on the overlap of the Fourier components of the Floquet mode, given by ⟨ϕ^(m)_β(k) | ϕ^(n)_α(k) ⟩. This intensity is maximized when m = n and α = β, corresponding to ω = ε_kα + n Ω. Otherwise, it is suppressed. The expression simplifies further if we make the physically motivated assumption that the probing interval is much larger than the width of the probe pulse, i.e., NT ≫σ_probe. Under this assumption, the error functions become approximately equal to 1, resulting in: ℐ( k, ω) ≈πσ^2_probe/8 N^2 T^2∑_α,n[⟨ϕ_α( k , 0)|𝒢^<( k,0,0)|ϕ_β( k ,0)⟩⟨ϕ^(n)_β( k )|ϕ^(n)_α( k )⟩] exp(- σ^2_probe(ω - ε_ kα - n Ω)^2). Depending on whether SS or PR occupation is realized in the system, 𝒢^<(k, 0, 0) has distinct forms. For the theoretical calculation of the trARPES spectrum with SS occupation, we assume that a pump pulse was applied in the distant past to a system weakly coupled to a bath. Subsequently, the system reaches a steady-state on timescales longer than that of the system-bath coupling. In this case, the lesser Green's function is diagonal in the Floquet basis at a reference time t = t' = 0, i.e., 𝒢^<_αβ(k,0,0) = i n^ss_αδ_αβ, where n^ss_α is the steady-state occupation of the α-th Floquet state, given by Eq. (<ref>). Fig. <ref>(b) shows the trARPES spectrum of TBG, which closely resembles the steady-state occupation shown in Fig. <ref>(d). For the case of PR occupation, assume that the system was in thermal equilibrium before the pump pulse was applied. Subsequently, the system is completely disconnected from the bath and becomes effectively closed. Thereafter, the pump pulse is applied to the system. In this case, the lesser Green's function at a reference time t = t' = 0 becomes identical to the thermal density matrix of the system ρ^static( k) defined in <ref>, i.e., 𝒢^<(k, 0, 0) = ρ^static(k). Fig. <ref>(a) shows the theoretical trARPES spectrum, which closely matches the projected occupation shown in Fig. <ref>(c). 
Additionally, we present the theoretical trARPES spectrum of Dirac and semi-Dirac systems showing PR and SS occupation in <ref>. §.§ Optical Hall conductivity of periodically driven TBG In this section, we study the optical Hall conductivity of TBG driven with a circluar polarized light. We see several features in the optical conductivity that are different between the PR and SS occupations. In a static insulating electronic system with broken time-reversal symmetry, one typically expects quantized DC Hall conductivity at zero temperature, where the Fermi-Dirac distribution governs the occupations of the bands. When the system is driven out of equilibrium, the occupancy of the band is no longer a Fermi-Dirac distribution, and quantization becomes less apparent. Nevertheless, in cases where the occupancy resembles that of a band insulator, the Hall conductivity tends towards quantization <cit.>. Our study confirms this assertion, particularly under large and moderate driving frequency with SS occupation. For the case of SS occupation, we observe a nearly quantized DC optical Hall conductivity (see Fig. <ref>(a)) for a driving frequency Ω = 4.5 eV and vector potential amplitude A_0 = 1.5 k_θ. These drive parameters correspond to an electric field of the order of 10^4 kV/cm, which is achievable with current laser technology <cit.>. In this case, we obtain a DC Hall conductivity of ≈ 4 e^2 / h (accounting for both valleys and spins), that corresponds to Chern number of 4, as was previously reported in Ref. <cit.>. This can be understood from the SS occupation Fig. <ref>(d), where the occupation of the flat conduction and valance bands in the chiral limit mirrors that of a band insulator. Circularly polarized light breaks the time-reversal symmetry, inducing mass terms with opposite signs in each valley of graphene. Consequently, the combined Chern numbers stemming from the valleys and spin-flavors in graphene sum up, yielding a total Chern number of ± 4. On the other hand, this does not hold true for the PR occupation. In this case, both flat bands exhibit equal occupations (≈ 1/2 for each band), as depicted in Fig. <ref>(c). The optical Hall response differs markedly between SS and PR occupations. With SS occupation, we observe a kink at low energy, around 0.05 eV, stemming from optical transitions occurring between the split flat bands (with a gap of the order of ∼𝒪(A_0^2/Ω)) induced by circularly polarized light. Conversely, such a kink is absent in the Hall response when PR occupation is realized, as optical transitions between the flat bands are Pauli blocked. As we increase the frequency (ω), we observe peaks in the optical Hall response attributed to optical transitions involving higher energy bands. In the case of SS occupation, the first peak in σ_xy signifies transitions from the valence band to the second conduction band. Similarly, a comparable peak is noted at a slightly lower frequency for PR occupation, reflecting transitions between the first and second conduction bands of the driven TBG. Further increasing the energy reveals multiple peaks, indicative of transitions to additional higher energy bands. By changing the driving frequency, the system can undergo topological phase transitions. In Fig. <ref>(b), we illustrate the Ω dependence of Hall conductivity with SS occupation for some representative values of Ω. 
Transitioning from high-frequency driving (Ω=4.5 eV) to moderate driving frequencies (Ω=2.5 eV), the system undergoes a topological phase transition from σ_xy≈ 4 e^2/h to σ_xy≈ 8 e^2 / h. However, high-energy features remain qualitatively similar, albeit occurring at different ω values depending on Ω. In Fig. <ref>(c), we show the dependence of optical Hall conductivity on the driving amplitude for Ω = 4.5 eV. With increasing driving amplitude, the separation between flat bands increases, and corresponding kinks in the plot of optical Hall conductivity shift towards a higher value of ω. However, with all values of driving amplitude, we get a nearly quantized DC Hall conductivity for the SS occupation. Additionally, we present the optical Hall conductivity of Dirac and semi-Dirac systems with SS and PR occupation, as shown in <ref>. § SUMMARY We theoretically studied the trARPS spectrum and optical conductivity of driven twisted bilayer graphene (TBG), near the flat-band limit. The periodic drive can break time-reversal symmetry, giving rise to net Chern numbers of quasienergy bands. If the periodic drive, which we call the pump field, is for a relatively long time, such that the system is in a pre-thermal steady-state, one assumes a steady-state (SS) occupation of the quasienergy bands, then the optical conductivity gives rise to near-quantized signatures, reminiscent of non-trivial band topology. On the other hand, when the response is not from a steady-state, the quasienergy bands follow a projected (PR) occupation, and the responses differ from quantized values. These two possible occupations of the quasienergy bands have contrasting responses in the trARPES spectrum as well. Apart from TBG, we compute the same in Dirac as well as semi-Dirac semi-metallic systems, pointing out differences in response for SS and PR-occupied states. Our findings can be verified through various existing experimental techniques. The occupation of the Floquet bands can be seen in the pump-probe experiment <cit.>. The DC limit of optical Hall conductivity can be measured in the multiterminal experiments <cit.> as well as in the time of flight measurement in the optical lattice <cit.>. RK acknowledges funding under the PMRF scheme (Govt. of India). AK acknowledges support from the SERB (Government of India) via Sanction No. ECR/2018/001443 and CRG/2020/001803, DAE (Government of India) via Sanction No. 58/20/15/2019-BRNS, as well as MHRD (Government of India) via Sanction No. SPARC/2018-2019/P538/SL. apsrev4-1 equationsection § OPTICAL CONDUCTIVITY OF PERIODICALLY DRIVEN TBG According to the linear response theory, the real part of optical conductivity is related to the imaginary part of the current-current correlation. The retarder current-current correlation function is given by <cit.>, χ_ij( q,t,t')= -iθ(t- t')⟨[J_ qI^i(t),J_- q I^j(t')]⟩, where J_ q I^i(t) is current operator in direction î in interaction picture. In the above expression ⟨·⟩≡Tr(ρ̂·), where ρ̂ is the density matrix of the system. The current operator in the direction î in the interaction picture is given by, J_ q I^i(t)= 1/√(V)∑_ kc_ k + q/2^†(t)∂ℋ( k, t)/∂ k_ic_ k - q/2(t). Here, V is the total area of the system. 
The correlation function in the momentum-frequency space reads <cit.>: χ_ij( q,ω) = 1/V∑_ k∑_αβ∑_nD^(n)_α i β( k + q/2)D^(-n)_β j α( k - q/2)(n_ k + q/2α - n_ k - q/2β)/ω + iδ + ε_ k + q/2α- ε_ k + q/2β- n Ω, where D^(n)_α i β( k )= ∑_p, qϕ^(p)_ k α∂ℋ^(n+ p-q)( k)/∂ k_iϕ^(q)_ k β, and n_ k α is the occupation of the αth Floquet state. In this formula n_ k α represents either the projected (Eq. (<ref>)) or steady-state occupation (Eq. (<ref>)). If the quasi-energy bands are gapped at the chemical potential, the above equation in the thermodynamic limit q → 0 reduces to, χ_ij(ω) = 1/N∑_ k∑_αβ∑_nD^(n)_α i β( k )D^(-n)_β j α( k )(n_ k α - n_ k β)/ω + iδ + ε_ k α- ε_ k β- n Ω. The real part of optical Hall conductivity is then given by, σ_xy(ω)= -χ_xy(ω)/ω. § ADDITIONAL RESULTS FOR TRARPES AND OPTICAL HALL CONDUCTIVITY OF DIRAC AND SEMI-DIRAC SEMIMETAL We consider a tight-binding model for the honeycomb lattice with spatially anisotropic hopping. The Bloch Hamiltonian for the tight binding model is given by, ℋ( k)= [ 0 g( k); g( k)^* 0 ], where g( k)= t_1e^i k · a_1 + t_2e^i k · a_2 + t_3 with t_i being the anisotropic hopping amplitudes to nearest neighbors, as shown in the Fig. <ref>. Here, the primitive lattice vectors of the hexagonal lattice are given by: a_1= a(3/2, √(3)/2), a_2= a(3/2, -√(3)/2) with a being the lattice constant. We drive the system using a circularly polarized light. The vector potential for the same is given by A(t)= A_0(cos(Ω t), sin(Ω t)). In the presence of circularly polarized light, the time-dependent Bloch Hamiltonian is given by, ℋ( k, t) = [ 0 g( k + A(t)); g( k + A(t))^* 0 ], where g( k + A(t))= t_1e^i k · a_1 + i A(t) ·δ_1 + t_2e^i k · a_2 + i A(t) ·δ_2 + t_3e^ i A(t) ·δ_3, with δ_1= a(1/2, √(3)/2), δ_2= a(1/2, -√(3)/2), δ_3= a(-1,0). The Fourier components of the Hamiltonian reads, ℋ^(n)( k)= [ 0 g^(n)( k); g^(n)( k)^* 0 ], where g^(n)( k) is given by, g^(n)( k) = 1/T∫_0^Te^i n Ω t g( k + A(t) ) t = t_1e^i(3 kx/2+ √(3) ky/2)e^i n π/6𝒥_n(A_0) + t_2e^i(3 kx/2- √(3) ky/2)e^-i n π/6(-1)^n𝒥_n(A_0) + t_3i^n(-1)^n𝒥_n(A_0), where 𝒥_n(.) is the nth Bessel function of first kind. For numerical calculations, we choose: i) t_1 = t_2 = t_3= λ for Dirac and ii) t_1 = t_2 = λ,  t_3= 2λ for a semi-Dirac semimetal. We set the energy scale as λ and the length scale as a. Here, we discuss the steady-state (SS) occupation and projected (PR) occupation of the Dirac and semi-Dirac models. Subsequently, we present the theoretical trARPES spectrum and optical Hall conductivity, showing the signature of either occupation for each of the model. The static band structure of the tight binding Dirac model is gapless at high symmetry points K and K^' whereas the quasienergy band structure shows gaps at these points, as shown in Fig. <ref>(a). The opening of gaps in the quasienergy band structure is due to the broken time-reversal symmetry of the system by periodic drive (circularly polarized light). This leads to the redistribution of occupation in the quasienergy bands around these gaps. Fig. <ref>(b) and Fig. <ref>(c) show the PR and SS occupation for the Dirac model, respectively. One can easily note that the SS occupation is nearly one and zero for the quasienergy band below and above the gap, respectively, whereas the PR occupation is nearly half for either band around the gap. These generic features are also true for the semi-Dirac model as well (see Fig. <ref>). For both of the models, we present the trARPES spectrum when PR or SS occupation is realized. Fig. 
<ref> and <ref> show the trARPES spectrum, illustrating the influence of PR and SS occupation for the Dirac and semi-Dirac models, respectively. For both of the models, the DC optical Hall conductivity is nearly quantized with SS occupation, whereas it is not quantized with PR occupation, as shown in Fig. <ref>. The kinks appearing at higher energies are indicative of interband transitions. For more details, refer to <ref> of the main text. § DEGENRATE PERTURBATION THEORY TO COMPUTE THE VALUE OF STEADY-STATE (SS) AND PROJECTED (PR) OCCUPATION IN THE QUASIENERGY BANDS AROUND GAPS In this section, we apply the degenerate perturbation theory to compute the value of SS and PR occupations of the quasienergy band around the drive-induced band gaps. Such gap opening can be of two kinds. If the static bands have degeneracies protected by symmetry, then the driven can break symmetry and lift the degenracies. An example of this kind is the topological gap opening at the Dirac point of irradiated graphene with circularly polarized light at high frequency. At zero temperature, we show below that if such gap opening takes place at the chemical potential, the lower quasienergy band remain occupied leaving the upper quasienergy band unoccupied. At relatively lower frequency, the static bands go through `band-folding', and folds in the Floquet zone to form quasienergies. As the bands fold on themselves, there are degeracies and opening of gaps at these points. At zero temperature, we show that whenever such gap opening takes place by mixing an occupied and an unpccupied band of the static system, it results in equal distribution of the occupation at the point of gap opening. §.§.§ Gap opening by breaking symmetry Consider a degenerate eigenvalue (which we also take to be at the chemical potential, μ = 0) with eigenstates |ψ_±⟩ at some momentum point. If the driving frequency is high with a small amplitude, the effective Hamiltonian at this point is given by high-frequency approximation as <cit.> ℋ_eff = ℋ_0 + δℋ+ …, where ℋ_0 is the static part of the Hamiltonian with a symmetry that protects the degeneracy. δℋ= [ℋ^(+1),ℋ^(-1)]/Ω can be treated as a perturbation where we neglect higher-order terms in 1/Ω. If the degeneracy is lifted by this perturbation, at high frequency, the corrected energies, ϵ_± = |⟨ψ_-| δℋ| ψ_+⟩ |, are also the quasienergies. The corresponding eigenstates are the zeroth Fourier components of the corresponding Floquet states. Further, for weak driving, non-zero Fourier components of these Floquet states can also be neglected as they are expected to only grow polynomially with the driving amplitude <cit.>. The resulting Floquet states can thus be written as |ϕ_±(t)⟩≈(|ψ_+⟩+e^iθ|ψ_-⟩)/√(2), which is static in the limit of small driving amplitude <cit.>. Here θ = arg(⟨ψ_-| δℋ| ψ_+⟩). The SS occupation of these states are n_±^ ss = ∑_lf(ε_±+ lΩ)⟨ϕ_±^(l)|ϕ_±^(l)⟩. As only the l =0 Fourier component survives, at zero temperature n_-^ ss = 1 and n_+^ ss= 0 as f(ε_-) = 1 and f(ε_+) = 0. The density matrix before driving can be written as ρ^static = 1/2(|ψ_-⟩⟨ψ_-| + |ψ_+⟩⟨ψ_+|). It is easy to deduce that the PR occupations, n_±^ pr= ⟨ϕ_±|ρ^static|ϕ_±⟩ = 1/2. §.§.§ Gap opening at band-folded degeneracies At lower frequencies, there can be degeneracies as the folded quasienergy bands overlap within the Floquet zone. Let us consider that such overlaps are between folded static bands that were originally occupied and unoccupied. In the absence of driving, an extended zone picture (see Eq. 
(<ref>) of main text) of a static Hamiltonian consists of only diagonal blocks. This leads to band-folding within a Floquet zone (-Ω/2,Ω/2] with level crossings. If E_± are the static energies of a system with E_- and E_+ being occupied and unoccupied, respectively, then the band-foldings results in a degeneracy at the quasienergy ε if E_++pΩ = E_-+qΩ = ε, where ε∈ (-Ω/2,Ω/2], for integers p and q. The corresponding Floquet states are |ϕ_+(t)⟩=e^ipΩ t|ψ_+⟩ and |ϕ_-(t)⟩=e^iqΩ t|ψ_-⟩, i.e., they have only (-p)th and (-q)th Fourier components. In the extended-zone basis, the eigenstates are large column vectors with only (-p)the and (-q)th row being non-zero, with elements of |ψ_+⟩ and |ψ_-⟩, respectively. Let us represent these large column vectors by |Φ_+⟩⟩ and |Φ_-⟩⟩. In the presence of driving, the extended-zone Hamiltonian is modified with off-diagonal terms, which we denote as ℋ^O. Such off-diagonal elements lead to avoided crossings at degeneracies. Degenerate perturbation theory at eigenvalues ε of the extended-zone Hamiltonian yields correction to the eigenvalues ε_± = ε± |z|, where z = ⟨⟨Φ_+| ℋ^O |Φ_+⟩⟩. The modified eigenstates in the extended-zone picture are given by |Φ_-⟩⟩ = -e^iθ/√(2)|Φ_- ⟩⟩ + 1/√(2)|Φ_+⟩⟩,   |Φ_+⟩⟩ = e^iθ/√(2)|Φ_-⟩⟩ + 1/√(2)|Φ_+⟩⟩, where θ = arg(z). In terms of the time-dependent representation, the Floquet states are given by |ϕ_-(t) ⟩ = -e^iθ + iqΩ t/√(2)|ψ_- ⟩ + e^ipΩ t/√(2)|ψ_+⟩,   |ϕ_+(t) ⟩ = e^iθ + iqΩ t/√(2)|ψ_-⟩+ e^ipΩ t/√(2)|ψ_+⟩. The steady-state (SS) occupation of these states are: n_±^ ss = ∑_lf(ε_±+ lΩ)⟨ϕ_±^(l)|ϕ_±^(l)⟩. In the above summation only the (-q) and (-p) Fourier components contribute, hence n_+^ ss = f(ε_+-pΩ )⟨ϕ_+^(-p)|ϕ_+^(-p)⟩ + f(ε_+-qΩ )⟨ϕ_+^(-q)|ϕ_+^(-q)⟩ , = 1/2f(E_++ |z| )+ 1/2f(E_++ |z| + (p-q)Ω ). As E_+ - E_- = (q-p)Ω, we obtain n_+^ ss = 1/2f(E_++ |z| )+ 1/2f(E_-+ |z|). As |z| is small, one can assume that at zero temperature, f(E_+ + |z|) = 0, and f(E_- + |z|) = 1, yielding n_+^ ss = 1/2. A similar analysis also shows n_-^ ss = 1/2. Considering ρ^ static = |ψ_-⟩⟨ψ_-|, it is easy to arrive at n^ pr_± = ⟨ϕ_±(0)|ρ^ static |ϕ_±(0)⟩ = 1/2. Our analysis is agnostic to the details of any model. In Fig. <ref>, we have shown the SS and PR occupations of the quasienergy bands when the system is driven with a low-frequency circularly polarized light. We see that around gaps ε =0, Ω/2 for the Dirac model (around Dirac point K), which is consistent with our calculation.
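As a practical cross-check of the harmonic expansion used in Appendix B, the short Python sketch below evaluates the Fourier components g^(n)(k) of the driven honeycomb model by direct numerical quadrature rather than the analytic Bessel-function expression; the hopping amplitudes, drive parameters and the chosen k are illustrative values of ours, and the sketch is not the authors' code.

import numpy as np

a = 1.0
t1, t2, t3 = 1.0, 1.0, 2.0          # semi-Dirac choice t3 = 2*lambda (assumed)
A0, Omega = 0.5, 6.0
T = 2 * np.pi / Omega

a1 = a * np.array([1.5,  np.sqrt(3) / 2])
a2 = a * np.array([1.5, -np.sqrt(3) / 2])
d1 = a * np.array([0.5,  np.sqrt(3) / 2])
d2 = a * np.array([0.5, -np.sqrt(3) / 2])
d3 = a * np.array([-1.0, 0.0])

def g(k, A):
    # Off-diagonal Bloch element g(k + A(t)) with the Peierls phases of Appendix B.
    return (t1 * np.exp(1j * (k @ a1 + A @ d1))
            + t2 * np.exp(1j * (k @ a2 + A @ d2))
            + t3 * np.exp(1j * (A @ d3)))

def g_n(k, n, n_t=512):
    # n-th Fourier harmonic g^(n)(k) = (1/T) * integral_0^T e^{i n Omega t} g(k + A(t)) dt.
    ts = np.linspace(0.0, T, n_t, endpoint=False)
    A_t = A0 * np.stack([np.cos(Omega * ts), np.sin(Omega * ts)], axis=1)
    vals = np.array([g(k, A) for A in A_t])
    return np.mean(np.exp(1j * n * Omega * ts) * vals)

k = np.array([0.3, 0.2])
for n in range(-2, 3):
    print(f"g^({n:+d})(k) = {complex(g_n(k, n)):.4f}")
# These harmonics are the entries of the Fourier blocks H^(n)(k) defined in
# Appendix B and feed directly into the extended-zone Hamiltonian.

Computing the harmonics numerically in this way is a convenient consistency check of the analytic phases and Bessel-function amplitudes quoted above.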
http://arxiv.org/abs/2406.18268v1
20240626114156
Exploring the Stellar Rotation of Early-type Stars in the LAMOST Medium-resolution Survey. III. Evolution
[ "Weijia Sun", "Cristina Chiappini" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.GA" ]
Rotational evolution of early-type stars W. Sun & C. Chiappini Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, 14482 Potsdam, Germany Laboratório Interinstitucional de e-Astronomia - LIneA, Rua Gal. José Cristino 77, Rio de Janeiro, RJ - 20921-400, Brazil Stellar rotation significantly shapes the evolution of massive stars, yet the interplay of mass and metallicity remains elusive, limiting our capacity to construct accurate stellar evolution models and to better estimate the impact of rotation in chemical evolution of galaxies. Our goal is to investigate how mass and metallicity influence the rotational evolution of A-type stars on the main sequence (MS). We seek to identify deviations in rotational behaviors that could serve as new constraints to existing stellar models. Using the LAMOST median-resolution survey Data Release 9, we derived stellar parameters for a population of 104,752 A-type stars. Our study focused on the evolution of surface rotational velocities and their dependence on mass and metallicity in 84,683 `normal' stars. Normalizing surface rotational velocities to Zero Age Main Sequence (ZAMS) values revealed a prevailing evolutionary profile from 1.7 to [4.0]. This profile features an initial rapid acceleration until t/ = 0.25±0.1, potentially a second acceleration peak near t/ = 0.55±0.1 for stars heavier than [2.5], followed by a steady decline and a `hook' feature at the end. Surpassing theoretical expectations, the initial acceleration likely stems from a concentrated distribution of angular momentum at ZAMS, resulting in a prolonged increase in speed. A transition phase for stars with 2.0<M/<2.3 emerged, a region where evolutionary tracks remain uncertain. Stellar expansion primarily drives the spin down in the latter half of the MS, accompanied by significant influence by inverse meridional circulation. The inverse circulation becomes more efficient at lower metallicity, explaining the correlation of the slope of this deceleration phase with metallicity from [-0.3]dex up to [0.1]dex. Starting with lower velocities at ZAMS, the metal-poor ([-0.3]dex < [M/H] < [-0.1]dex) subsample suggests a mechanism dependent on metallicity for removing angular momentum during star formation. The proportion of fast rotators decreases with an increase in metallicity, up to log(Z/)∼ -0.2, a trend consistent with observations of OB-type stars found in the Small and Large Magellanic Clouds. Exploring the Stellar Rotation of Early-type Stars in the LAMOST Medium-resolution Survey. III. Evolution. Weijia Sunhttps://orcid.org/0000-0002-3279-02331E-mail: wsun@aip.de Cristina Chiappinihttps://orcid.org/0000-0003-1269-72821,2 Received ; accepted ================================================================================================================================== § INTRODUCTION Stellar rotation is a crucial factor in the evolution of massive stars, significantly affecting their structure through geometrical distortion <cit.>, extra-mixing <cit.>, and enhanced mass loss rates <cit.>. Hydrodynamical instabilities induced by meridional circulation <cit.> lead to turbulent motions, which facilitate the mixing of chemical compositions <cit.> and the redistribution of internal angular momentum <cit.>. While extensive efforts have been devoted to modeling the effects of rotation <cit.>, the internal transport of angular momentum remains a significant challenge <cit.>. 
This issue is central to our understanding of stellar evolution, as the distribution of angular momentum directly influences the evolutionary outcomes. The literature reflects a diversity of approaches to diffusive and advective transport mechanisms <cit.>, compounded by assumptions on the initial rotation law at the Zero Age Main Sequence (ZAMS), which carries important consequences for the further evolution of stars on the main sequence (MS). Despite considerable progress over the last decade, the complexity of this research is further amplified by the consideration of magnetic fields <cit.>, early accretion history <cit.>, and binarity <cit.>. Addressing these pivotal challenges has led to the development of innovative methodologies. Since the proposal by <cit.> to estimate the inner rotational angular velocity based on the rotational frequency splitting of non-radial pulsations, asteroseismic data has significantly propelled the observation of internal rotation in stars beyond the Sun <cit.>. This method has provided a unique opportunity to probe the interior, albeit its application is confined to specific types of hot stars such as slowly pulsating B stars <cit.> and γ Doradus stars <cit.>. In parallel, the analysis of surface rotational velocities derived from spectroscopic data has shed light on the complex processes of global angular momentum loss and redistribution within stars, illuminating the respective efficiencies of these competing processes <cit.>. <cit.> suggested that Maxwellian distributions of rotational velocities could indicate a state of rigid rotation established at the ZAMS. Contrary to this expectation, studies, including <cit.>, discovered a bimodal distribution of rotational velocities among A-type MS stars, arguing that slower rotations might predispose stars to develop chemical peculiar (CP) stars. Further examination by <cit.> confirmed the presence of genuine bimodal distributions in the equatorial rotational velocities by excluding CP stars in their sample, with variations reflecting differences among spectral types. Expanding on this framework, <cit.> established rotational distribution models and evolutionary maps for stars with masses around [2-3], based on data from two thousand early-type field stars. This analysis revealed a pronounced acceleration in rotational velocity within the first third of the MS evolutionary phase, a phenomenon that existing rotation models did not predict. Validating theoretical models requires extensive samples across a broad spectrum of stellar atmospheric parameters, achievable only through large spectroscopic surveys like the Large Sky Area Multi-Object Fiber Spectroscopic Telescope <cit.> and the upcoming 4-metre Multi-Object Spectroscopic Telescope <cit.>. While the Gaia <cit.> mission has conducted the largest survey for rotational line-broadening measurements to date in its third data release (DR), it suffers from Radial Velocity Spectrometer's narrow wavelength range, affecting sensitivity to line broadening in hot stars <cit.>. A modified version of The Payne was used by <cit.> to derive stellar atmospheric parameters, including the projected rotational velocity vsin i, for 330,000 OBA-type stars in LAMOST low-resolution (R∼ 1800) survey (LRS) DR 6. Furthermore, <cit.> leveraged LAMOST's median-resolution (R∼ 7500) survey (MRS) DR 7 to catalog over 40,000 late-B and A-type MS stars, uncovering a bimodal rotation distribution that varies with stellar mass and metallicity. 
Specifically, metal-poor stars ([-1.3]dex<[M/H] < [-0.2]dex, with an average value around [-0.36]dex) predominantly show slow rotation, whereas metal-rich stars ([-0.2]dex<[M/H]<[0.69]dex, with an average value around [0.32]dex) exhibit both slow and rapid rotational branches <cit.>. This study expands on the investigations by and , utilizing the significantly larger dataset from LAMOST MRS DR 9. By doubling the sample size, we delve deeper into the evolution of stellar rotation on the MS, particularly how it varies with mass and metallicity. Our analysis includes a critical comparison with theoretical rotation models to dissect the processes of angular momentum's initial distribution and its subsequent redistribution. Moreover, we investigate the fraction of rapid rotators across different metallicity levels and draw comparisons with analogous phenomena in more massive stars within dwarf galaxies to explore how metallicity affects rotational evolution. This article unfolds as follows. Section <ref> outlines our methodology for data reduction and contamination cleaning. In Section <ref>, we examine the evolution of stellar rotation, analyzing its dependence on mass and metallicity, and compare these findings with observations of rapid rotators in dwarf galaxies of the Magellanic Clouds. Section <ref> examines the mechanisms influencing rotational evolution, exploring the observed phenomena relative to existing stellar rotation models, and discusses metallicity's role and the study's limitations. Section <ref> concludes with a summary of our key findings and implications.

§ DATA

We followed the candidate selection and data reduction procedures for MRS in LAMOST DR9 as established in . Specifically, we selected coadded spectra from the LAMOST MRS General Catalog, excluding objects with effective temperatures below [5000]K (as determined by the LAMOST Stellar Parameter pipeline, LASP) or those with a median pixel signal-to-noise ratio (S/N) less than 15. The average temperature uncertainty reported by LASP for the remaining sample is around [40]K. We then cross-matched our candidates with the Two Micron All Sky Survey <cit.> for extinction correction and with Gaia DR3 <cit.> for photometric determination, adhering to <cit.>'s guidelines on Gaia astrometric and photometric parameters. We adopted the photogeometric distance estimates of <cit.>. We did not adopt the StarHorse distances because most of these stars are not included in the LAMOST Stellar Parameter Catalog (they lack LASP temperatures) and are therefore also missing from the StarHorse catalog. To further refine our selection, we used line indices of Hα ([6548.0-6578.0]Å) and Mg i b ([5160.12–5192.62]Å), which are robust against extinction correction and flux calibration issues <cit.>. Only MRS samples with line-index-based temperatures above [7000]K were considered. Spectral normalization and transformation to the rest frame were performed using the package <cit.>. The Stellar LAbel Machine <cit.>, trained on ATLAS12 atmospheric models <cit.>, was used to infer the stellar labels of T_eff, log g, [M/H] and vsin i. Our model grid spanned a range of T_eff from [6000]K to [15,000]K, log g from [3.5]dex to [4.5]dex, [M/H] from [-1.0]dex to [1.0]dex, and vsin i from [10]km,s^-1 to [500]km,s^-1. We confirmed that there is no systematic bias as a function of T_eff. In Figure <ref>, we present the Hertzsprung–Russell (HR) diagram for our complete dataset.
To mitigate artifacts from our model's temperature limitations, we implemented an upper temperature threshold of [14500]K (indicated by black vertical lines), thereby excluding an anomalous concentration of stars beyond or close to our training grids of temperature. Similarly, a lower temperature limit of [7000]K was applied to filter out low-mass stars from our analysis. Additionally, stars that have evolved beyond the MS were excluded from this selection process. To enhance the reliability of our projected rotational velocity (vsin i) measurements, we further removed any star with a relative uncertainty σ_vsin i/vsin i > 0.05. With a median S/N around 60, the cross validated scatter for T_eff, log g, [M/H], and vsin i are ∼[75]K, [0.06]dex, [0.05]dex, and ∼[3.5]km s^-1, respectively . Following these criteria, our refined catalog comprises 104,752 stars, whose spatial distribution is shown in Figure <ref>. This distribution in the disk largely follows the subset described in , as shown in Figure 1 of <cit.>. Moreover, <cit.> looked into the radial metallicity gradients and azimuthal metallicity distributions on the Galactocentric X–Y plane using mono-temperature stellar populations of , and found negative metallicity gradient ranges decreasing as the effective temperature decreases. §.§ Contamination To ensure our sample reflects intrinsic stellar rotation unaffected by external factors, we identified four primary contamination types: CP stars, binaries, cluster members, and periodic variables. CP stars are early-type MS stars with unusual chemical compositions, often linked to slower rotation speeds under [120]km,s^-1 <cit.>. Close binaries can experience altered rotation due to tidal forces, mass transfer, or mergers, affecting the rotational characteristics of both components <cit.>. Cluster members may exhibit rotational velocity distributions that diverge from field stars, reflecting their unique formation and evolutionary histories <cit.>. In alignment with , we assessed the relative variation in radial velocity from single-exposure spectra via cross-correlation, excluding stars with a variation exceeding [4]σ and an absolute change greater than [16]km,s^-1. This criterion identified 15,527 stars as spectroscopic binaries, which represent approximately 15% of our sample. A tentative trend is found between binary fraction and metallicity, wherein the fraction decreases from 18% to 13% as the mean metallicity increases from approximately [-0.5]dex to [0.05]dex, similar to the close binary fraction of solar-type stars <cit.>. For additional contamination types, we cross-matched our data with pre-compiled catalogs from <cit.>, <cit.>, <cit.>, and <cit.> to pinpoint CP stars, and with <cit.> for periodic variables. We also utilized an updated catalog <cit.> based on Gaia DR3 to identify cluster members. This process resulted in the identification of 2,765 CP stars, 886 periodic variables, and 1,952 cluster members. Among the periodic variables with type classification, over 85% are labeled as EW-type, EA-type eclipsing binaries, and RS Canum Venaticorum–type systems. Non-binary systems like δ Scuti account for less than 10%. This indicates that the majority of these variable contaminants are likely in multi-star systems, which need to be excluded from further analysis. 
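One way to implement the radial-velocity variability criterion described above is sketched below in Python. The function name and the pairwise comparison scheme are illustrative assumptions rather than the exact pipeline implementation, but the thresholds correspond to the [4]σ and [16]km,s^-1 cuts quoted in the text.

import numpy as np

def flag_rv_variable(rv, rv_err, n_sigma=4.0, dv_min=16.0):
    """Flag a star as a candidate spectroscopic binary from epoch radial velocities.

    rv, rv_err : arrays of single-exposure radial velocities and uncertainties (km/s).
    A star is flagged when at least one pair of epochs differs by more than
    n_sigma times the combined uncertainty AND by more than dv_min km/s.
    """
    rv = np.asarray(rv, dtype=float)
    rv_err = np.asarray(rv_err, dtype=float)
    dv = np.abs(rv[:, None] - rv[None, :])                       # pairwise RV differences
    sig = np.sqrt(rv_err[:, None] ** 2 + rv_err[None, :] ** 2)   # combined uncertainties
    return bool(np.any((dv > n_sigma * sig) & (dv > dv_min)))

# example with hypothetical epochs (km/s): flag_rv_variable([10.2, -8.5, 11.0], [2.0, 2.5, 1.8]) -> True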
While δ Scuti might be considered `normal' in this sense, these detected variables may exhibit variability in luminosity and spectral features, showing an average amplitude variation of [0.15]mag in the Zwicky Transient Facility g band. This level of variability indicates that single-epoch observations without phase information are insufficient to accurately characterize these stars, thereby affecting the precision of parameter measurements. Consequently, we determined that their exclusion is warranted. After these exclusions, we categorized the remaining 84,683 stars as our `normal' sample, comprising single, non-CP, non-variable stars not affiliated with clusters. The numbers of different contaminants are tabulated in Table <ref>. CP stars of different subgroups are also tabulated: metallic line (Am), magnetically peculiar (mAp), stars with enhanced Hg ii and Mn ii (HgMn), and He-weak stars. While we attempted to identify and exclude contaminations, it is possible that some undetected contamination remains. The detection of cluster members is considered complete, yet other methods face challenges. Specifically, our binary-detection technique lacks sensitivity to wide binaries, with effectiveness further constrained by the time cadence and observation frequency. For CP stars, detection rates reported by <cit.> are significantly lower than those found in previous studies <cit.>, indicating potential underestimation. Additionally, the selection in <cit.>, based on LAMOST DR5's Low-Resolution Survey, covered approximately 70% of LRS targets and about 25% of the MRS targets in DR9, suggesting a coverage gap that could influence detection completeness. §.§ Stellar mass and age We estimated the mass and age of stars in our sample using the Stellar Parameters INferred Systematically (SPInS) tool <cit.>, a Bayesian framework designed to derive the probability distribution functions of stellar parameters. This analysis incorporated stellar evolution models from the Bag of Stellar Tracks and Isochrones <cit.> and adopted the Kroupa initial mass function <cit.> as a prior. Inputs for SPInS included: (1) effective temperature as determined by SLAM; (2) bolometric luminosity, calculated from Gaia parallaxes and 2MASS K-band magnitudes, corrected for extinction and bolometric corrections; and (3) metallicity as derived by SLAM. The inference of stellar mass, age, and radius was performed using a Markov chain Monte Carlo method. We translated stellar ages into their respective evolutionary phases on the MS, represented as the ratio t/, where t denotes the age of a star since the ZAMS, and signifies the duration from ZAMS to the Terminal Age Main Sequence (TAMS). Figure <ref> illustrates the distribution across the t/–M (mass) plane (top panel) and t/–[M/H] (metallicity) plane (bottom panel) for stars categorized as `normal'. Our sample's distribution within the mass–age diagram aligns closely with the findings of , illustrating consistent profiles across studies. The noticeable absence of younger stars with masses exceeding [3.5] can be attributed to the implemented temperature cut at T_eff=[14500]K, as marked by the vertical line in Figure <ref>, which systematically excludes these higher mass, potentially hotter stars from our analysis. Furthermore, an analysis of metallicity trends reveals a gradual shift towards an older stellar population as metallicity decreases, suggesting that metal-poor stars in our sample tend to be further along in their evolutionary phases. 
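For illustration, the kind of grid-based Bayesian estimate underlying this mass and age determination can be sketched as follows. This toy example is not the actual SPInS code, which samples interpolated BaSTI tracks with MCMC; the dictionary keys, the Gaussian likelihood, and the posterior-mean summary are assumptions made for the sketch, while the Kroupa prior slope of -2.3 applies above roughly [0.5].

import numpy as np

def grid_mass_age(grid, obs, sigma):
    """Posterior-mean mass and age on a precomputed track grid (toy version).

    grid  : dict of 1-D arrays over grid points, e.g. 'mass', 'age', 'teff', 'logL', 'mh'
    obs   : dict of observed 'teff', 'logL', 'mh'
    sigma : dict of the corresponding observational uncertainties
    """
    chi2 = sum(((grid[k] - obs[k]) / sigma[k]) ** 2 for k in ('teff', 'logL', 'mh'))
    prior = grid['mass'] ** -2.3                         # Kroupa IMF slope for M > ~0.5 Msun
    post = prior * np.exp(-0.5 * (chi2 - chi2.min()))    # subtract min(chi2) for numerical stability
    post /= post.sum()
    return np.sum(post * grid['mass']), np.sum(post * grid['age'])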
The median uncertainties of the mass and age are [0.03] and [0.03], respectively.

§ RESULT

In this section, we undertake a detailed investigation into the dynamics of stellar rotation, analyzing how it evolves and its dependence on stellar mass and metallicity. Recognizing the challenge posed by the projection effect in measuring spectroscopic rotational velocity, we employed a binning method. Within each bin, we calculated the average projected rotational velocity, ⟨ vsin i⟩, subsequently converting these averages to true rotational velocities, ⟨ v⟩. This conversion relies on the assumption of stars having randomly oriented rotational axes, a premise supported by the statistical framework outlined by <cit.>. Note that average values of vsin i might not accurately reflect a population's rotational evolution if the underlying distribution is complicated (see discussion in Section <ref>). We initiate our analysis by examining the evolution of rotational velocity as a function of stellar mass, focusing on the ratio ⟨ v/v_ZAMS⟩ across different mass categories. This first step allows us to isolate the effect of mass on rotational dynamics, setting a foundation for further exploration into how metallicity influences these patterns. We then compared this ratio against the predictions from models, ensuring that both values begin at unity at the onset of the MS and evolve on the same scale. While the absolute rotational velocity is largely determined by the initial angular momentum, the adopted initial rotation rates in the models have a minor impact on this ratio. Consequently, the corresponding evolutionary profile exhibits a similar behavior that remains relatively stable, as we will see in Section <ref>. By categorizing our sample into distinct metallicity bins, we then explore how metallicity affects stellar rotation, providing insight into the complex interplay between mass, age, and metallicity. This gradual refinement from mass to metallicity is followed by a broader comparison with stellar populations in different galaxies. We analyze the prevalence of rapid rotators within these populations as a function of metallicity, offering a holistic view of the varied forces influencing stellar rotation across diverse galactic environments. For the majority of our analysis, we focus on stars with metallicity [M/H] less than [0.1]dex, except for discussions in Section <ref> where we extend our considerations to include a broader range of metallicity. This decision is informed by the complexity of modeling stars with super-solar metallicity, where the increased abundance of heavy elements leads to enhanced opacity, affecting stellar winds, mass loss, and convective overshooting <cit.>, as highlighted by observations in metal-rich environments like the Galactic center or metal-rich globular clusters <cit.>. We also introduce a lower metallicity limit of [-0.3]dex for specific analyses in Sections <ref> and <ref>, ensuring a focused examination of stellar rotation within a well-defined parameter space. The numbers of stars within the metallicity bins used in this work are tabulated in Table <ref>.

§.§ Evolution of rotation as a function of mass

In Figure <ref>, we explore the equatorial velocity ratio v/v_ZAMS across the MS lifespan for stars within mass ranges from 1.75 to [3.25]. ⟨ v/v_ZAMS⟩ is calculated using a sliding average across t/, with a bin width of 0.1. We confirm that the choice of sliding window size and mass binning has only a minor influence on the observed trends.
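A minimal Python sketch of this binning and deprojection step is given below; the array names are hypothetical, and the only physics it encodes is the random-orientation average ⟨sin i⟩ = π/4, so that ⟨v⟩ = (4/π)⟨vsin i⟩.

import numpy as np

def mean_equatorial_velocity(t_norm, vsini, centers, width=0.1, min_stars=20):
    """Sliding average of v sin i over t/t_MS, deprojected to a mean equatorial velocity.

    Assumes randomly oriented rotation axes, for which <sin i> = pi/4 and
    therefore <v> = (4/pi) <v sin i>.
    """
    t_norm = np.asarray(t_norm)
    vsini = np.asarray(vsini)
    v_eq = np.full(len(centers), np.nan)
    for k, c in enumerate(centers):
        sel = np.abs(t_norm - c) < width / 2.0       # stars inside the sliding window
        if sel.sum() >= min_stars:
            v_eq[k] = (4.0 / np.pi) * vsini[sel].mean()
    return v_eq

# hypothetical usage for one mass bin:
# centers = np.arange(0.05, 1.0, 0.05)
# v_mean  = mean_equatorial_velocity(t_over_tms, vsini_kms, centers, width=0.1)
# v_ratio = v_mean / v_mean[0]      # normalization to the ZAMS value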
We overplot the observed velocities against theoretical expectations for two distinct rotational models: stars evolving as rigid rotators (gray), characterized by ⟨ v/v_ZAMS⟩>1, and those evolving as differential rotators (purple), which do not redistribute angular momentum and have ⟨ v/v_ZAMS⟩<1. The shaded areas depict theoretical evolutionary paths for mass regimes ranging from 1.5 to [3.0], as discussed in <cit.>. According to these models, rigid rotators exhibit a gradual increase in rotational velocity with age, remaining relatively constant through the initial two-thirds of the MS phase before increasing significantly in the final third. This acceleration is attributed to a mass-compensation effect, as described by <cit.>, where rotation enhances central density and facilitates the transfer of angular momentum from the core to the surface. Conversely, for differential rotators, changes in surface velocity result from an expanding radius as stars progress from the ZAMS to the TAMS. The observed evolution of rotational velocity presents notable discrepancies from theoretical models, particularly in stars with masses below approximately [2] and above [2.2]. Initially, their rotational velocities, ⟨ v/v_ZAMS⟩, exhibit an increase during the early stages of the MS, followed by a gradual decrease as they approach the end of the MS phase. This pattern of acceleration peaks at around [0.25] with a window size of [0.1], as indicated by the first vertical dashed gray line in the figure, a trend consistent across these mass ranges. Interestingly, in the more massive subset (⟨ M⟩>[2.8]), a second acceleration peak appears at [0.55], denoted by the second vertical dashed gray line, although this secondary increase is substantially less pronounced in less massive stars. Meanwhile, stars with masses between ∼[2.0] and ∼[2.2] demonstrate a modest increase in rotational velocity during the first half of their MS lifespan. This characteristic closely mirrors observations by <cit.>, who collected vsin i for a sample of 2014 B6- to F2-type stars and compared the observed evolution of rotational velocities with different theoretical models. Their analysis is the most directly comparable to ours, so we discuss the differences in detail. They reported rapid velocity increases in the initial third of the MS phase, peaking around [0.3-0.4] for the considered masses of 2, 2.5, and [3]. However, our findings diverge in several respects. Firstly, the early acceleration observed in stars of approximately [2] is less pronounced in our analysis than in <cit.>, where the peak velocity increased by a factor of around 1.3. In our dataset, such significant increases in rotational velocity, exceeding ⟨ v/v_ZAMS⟩>1.3, are only seen in stars around [1.8], diminishing as mass increases. Notably, even as acceleration becomes apparent again for stars exceeding [2.35], it does not surpass the magnitude observed in lower-mass stars. Secondly, while <cit.> identified the maximum ⟨ v/v_ZAMS⟩ at t/ roughly between 0.3 and 0.4, our analysis suggests a slightly earlier peak at 0.25. Lastly, the potential second acceleration phase around t/∼ 0.55, also noted by <cit.>, occurred later in their analysis, within the 0.6 < t/ < 0.7 interval. The last two discrepancies might stem from different age binning between our study and theirs. In the latter half of the MS lifespan, the rotational velocity curves are characterized by gradual deceleration, interspersed with minor fluctuations.
A notable exception occurs within the mass range of approximately [2.0] to [2.1], where the dominant peak in velocity is observed at t/=0.7, a feature unique to this mass bin. For stars more massive than [2.5], a consistent and monotonic decline in rotation is observed until the MS ends. By analyzing the phase between t/=0.6 and t/=0.9, we found that the deceleration rates exceed those predicted by the differential rotation model (approximately -0.6), reaching a steepest value of -1.50±0.16 for stars around [2.8]. While this linear relationship between ⟨ v/v_ZAMS⟩ and age is most pronounced in more massive stars, the deceleration in stars with M<[2] during their late MS phase can also be approximated by a gradual slowing. Moreover, the slopes between t/=0.25 and 0.55 for various mass bins (excluding the 2.0 < M/ < 2.1 range) are gentler compared to later stages (0.6 < t/<0.9), hinting at the presence of a second acceleration phase concluding at t/ = 0.55. In every mass bin, we observed a distinctive `hook'-like feature in the rotational velocity curves at the terminal phase of the MS, lasting for a short period of Δ t ≈[0.05]. This temporary increase in rotation likely reflects the star's overall contraction when it evolves off the MS, as detailed by <cit.>. This contraction phase is triggered when the hydrogen within the convective core dwindles, no longer sufficient to sustain the star's equilibrium structure. Consequently, the star contracts until hydrogen burning recommences in the surrounding shell, leading to a brief spike in rotational speed.

§.§ Evolution of rotation's dependence on metallicity

Figure <ref> is extended to analyze metallicity's influence on rotational velocity, categorized into three metallicity ranges: two near solar, [0.0]dex < [M/H] < [0.1]dex (blue) and [-0.1]dex < [M/H] < [0.0]dex (green), and one for metal-poor stars, [-0.3]dex < [M/H] < [-0.1]dex (orange). This selection is guided by two main considerations: firstly, ensuring each metallicity bin contains a sufficiently large sample for robust statistical analysis; and secondly, avoiding the super-solar metallicity range where models become less reliable due to increased opacity from heavy elements. These metallicity intervals allow for a comparison across similarly sized samples, although disparities may arise in certain mass bins. The average metallicity values for these bins are [0.05]dex, [-0.05]dex, and [-0.18]dex, respectively. The observations detailed in Figure <ref> largely remain unchanged in Figure <ref>, demonstrating that: (1) early acceleration reaching a peak at t/∼ 0.25 occurs independently of metallicity within the studied range; (2) beyond the mid-point of the MS lifespan, rotational velocity generally decreases gently, except for the mass bin 2.0 < M/ < 2.1 which deviates from this pattern, and a noticeable shift in the location of the second peak for stars with M/ < 2.7 across different metallicity ranges; (3) the `hook' feature is consistently present. These consistent observations across our analyses suggest that the identified features are not mere artifacts of sample selection but likely reflect inherent characteristics of A-type stars. In the first row of Figure <ref>, the deceleration phase in the second half of the MS could be approximated by a linear decline (dashed lines).
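The deceleration rates quoted here and below correspond to simple least-squares slopes of ⟨v/v_ZAMS⟩ against t/ over the late-MS interval; whether the fit is weighted and how the quoted uncertainties were derived are assumptions in the following minimal sketch, which reuses the binned arrays from the example above.

import numpy as np

def deceleration_slope(t_centers, v_ratio, t_min=0.6, t_max=0.9):
    """Least-squares slope of <v/v_ZAMS> versus t/t_MS over [t_min, t_max]."""
    t_centers = np.asarray(t_centers)
    v_ratio = np.asarray(v_ratio)
    sel = (t_centers >= t_min) & (t_centers <= t_max) & np.isfinite(v_ratio)
    slope, _intercept = np.polyfit(t_centers[sel], v_ratio[sel], 1)
    return slope

# a value near -1.5 for the ~2.8 Msun bin would correspond to the rate quoted above;
# bootstrap resampling of the stars in each bin is one way to obtain an uncertainty.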
Based on the slope of this approximation, we notice a metallicity-dependent deceleration pattern, where the metal-poor subsample exhibits a more gradual slowdown compared to its metal-rich counterparts, with deceleration rates intensifying as average metallicity increases. Specifically, the deceleration slopes for the metal-rich subsample, steeper than those predicted by the differential rotation model, reach their steepest value, -1.80±0.24, for stars in the least massive range (1.7<M/ < 1.8). This trend of varying slopes with stellar mass is consistent across all metallicity bins, showcasing less steep declines as stellar mass increases. For example, in the intermediate metallicity range ([-0.1]dex < [M/H] < [0.0]dex), slopes shift from -1.10±0.12 to a less steep -0.44±0.10 as stars get more massive. Remarkably, for the most metal-poor subsample, the curve even becomes almost flat in 1.9 < M/ < 2.0 with a slope of -0.00± 0.13. It is noted that for metal-rich stars in the mass range 1.7<M/ < 1.8, the evolutionary curve is truncated before the MS's end, primarily due to a temperature cutoff at [7000]K, which excludes late MS phase data for low-mass, metal-rich stars. Yet, this distinct metallicity dependence is less pronounced in stars exceeding ∼[2.0] in mass, especially within the range of 2 to [2.5]. Here, the rotational evolution does not exhibit a consistent pattern across different metallicity levels, as illustrated in the second row of Figure <ref>. It is only when the average stellar mass reaches approximately [2.6] that more predictable patterns emerge once again. Contrary to expectations, the evolution profile for the slightly metal-poor subgroup, particularly around ⟨ M ⟩∼[2.7], significantly diverges from the anticipated intermediate behavior between the most metal-poor and metal-rich groups. This slightly metal-poor subgroup experiences notably sharp acceleration in the early MS phase and pronounced deceleration towards its conclusion. This discrepancy might be attributed to stochastic effects, exacerbated by the limited number of young, massive stars in our dataset. This limitation, as highlighted in the top panel of Figure <ref>, introduces significant uncertainty in the initial velocity estimates (⟨ v_ZAMS⟩), affecting the observed evolutionary trends for ⟨ v/v_ZAMS⟩. Such deviations suggest that inaccuracies in determining ⟨ v_ZAMS⟩ for the slightly metal-poor subgroup may lead to an underestimation of their true evolutionary dynamics, which, if corrected, would likely position their trajectory between those observed for the metal-rich and metal-poor groups. To alleviate the reliance on ⟨ v_ZAMS⟩, we introduce an analogous figure showcasing the average equatorial velocity ⟨ v ⟩ in Figure <ref>. This approach synthesizes evolutionary effects with the initial rotational states, presenting evolution profiles that start from actual ⟨ v_ZAMS⟩ values rather than a normalized starting point. As a result, this method substantially reduces the associated uncertainties in the profiles. The first row of the figure illustrates deceleration slopes that continue to emphasize metallicity's influence, indicating that the more metal-rich samples may experience greater angular momentum loss at the surface. However, the consistency in evolutionary order observed for less massive stars does not hold for those up to [2.5].
In contrast, for stars more massive than this threshold, clear metallicity-driven patterns reappear, with the evolution curve for [-0.1]dex < [M/H] < [0.0]dex fitting neatly between those of the other two metallicity groups, a distinction not as apparent in Figure <ref>. This discrepancy suggests that the scaling in the previous figure might be influenced by the precision of the ⟨ v_ZAMS⟩ estimates. Intriguingly, for stars averaging more than [2.7] in mass, a similar metallicity relationship emerges, where the [0.0]dex < [M/H] < [0.1]dex and [-0.1]dex < [M/H] < [0.0]dex bins show much steeper slopes than the most metal-poor subgroup. These patterns suggest a metallicity-correlated angular momentum loss rate, a phenomenon that appears to be consistent across different stellar masses, even as deceleration rates vary according to both mass and metallicity. We now move our focus to ⟨ v_ZAMS⟩, which represents the initial state of rotation. For the first three mass bins, where ⟨ M ⟩ is below [2], ⟨ v_ZAMS⟩ increases with metallicity. When the mass is in the transition domain between 2 and [2.5], it is hard to confirm any difference between these samples, and for more massive subsamples, the conclusion is hindered by the limited sample size at the ZAMS itself. As mentioned, an underestimation of ⟨ v_ZAMS⟩ for [-0.1]dex < [M/H] < [0.0]dex could explain the outlier behavior in the last row of Figure <ref>. It is worth noting that, while the ZAMS velocities of the various metallicity bins may be consistent within the errors, metal-rich stars generally show larger rotational velocities over the MS lifetime than the slightly metal-poor and metal-poor subsamples for ⟨ M ⟩>[2.5]. This is most obvious after the possible second peak in Figure <ref>, where the blue evolution track is positioned above the green track, while the green track, in turn, lies above the orange track. This behavior differs from that of the less massive stars, for which the metal-rich stars initially rotate faster than the metal-poor ones but undergo a remarkable reversal beyond the second peak, slowing down more swiftly. This rapid deceleration, associated with higher metallicity, as delineated in Figure <ref>, suggests that increased metallicity significantly influences the surface angular momentum loss, either through angular momentum redistribution or mass loss, leading to a quicker reduction in rotational speed.

§.§ Rapid rotators as a function of metallicity

In the previous section, we discussed the evolution of rotation as a function of MS lifetime and its dependence on stellar mass and metallicity. While such an approach can simplify the analysis by focusing on how these specific factors influence stellar rotation, it requires a significant number of stars and thus is only possible for stars around solar metallicity in the Milky Way (MW). To extend our understanding to lower metallicity regimes, we adopt an alternative approach: comparing rotation rates of a given population within the MW to those in the Small Magellanic Cloud (SMC) and Large Magellanic Cloud (LMC) dwarf galaxies <cit.>. These dwarf galaxies enable us to explore metallicity effects down to [0.14] ([M/H]∼[-0.82]dex) for the SMC and [0.5] ([M/H]∼[-0.30]dex) for the LMC. Such comparisons, however, often grapple with the complexities of analyzing mixed populations or cluster members whose formation environment might be unique.
These challenges are compounded by typically small sample sizes of less than a thousand, hindering the segregation of stars into distinct bins for a close examination of variations within specific subsets. In our study, we leverage our extensive catalog, categorizing stars by mass, to assess how the proportion of rapid rotators varies with metallicity, extending to [M/H]=[-0.65]dex, thereby providing a more detailed picture of rotational behaviors across different metallicity levels. We define fast-rotating stars as those with vsin i exceeding [200]km,s^-1, following the criteria used by <cit.> and <cit.>. Our analysis divides stars into three mass bins, based on their behavior seen in Figure <ref>: low-mass stars (1.7 < M/ < 2.0, blue) experiencing a metallicity-dependent decrease in rotation rates; intermediate-mass stars (2.0 < M/ < 2.5, green) in a transitional phase; and high-mass stars (2.5 < M/ < 4.0, orange) that exhibit a monotonic decrease after the second peak. In Figure <ref>, we observe linear relationships between the fraction of fast rotators and metallicity for stars with [M/H]<[-0.2]dex, noting substantial variability across mass bins at super-solar metallicity levels. This study follows the methodology of <cit.>, who compared OB stars' rotational velocities across the MW, LMC, and SMC. Our findings for metal-poor A-type stars align closely with these OB stars' rotational trends. Interestingly, A-type stars with solar metallicity exhibit a higher fraction of rapid rotators than their OB counterparts in the MW. This aligns with <cit.> and is supported by the velocity trends across spectral types described by <cit.>, according to which field and cluster stars have mean projected rotational velocities increasing from below [20]km s^-1 at spectral type G0 to around [200]km s^-1 at spectral type A, and then gently decreasing to [150]km s^-1 at spectral types earlier than B0. Additionally, the observed correlation between lower metallicity and higher rotational speeds among OB stars, potentially with a steeper slope for A-type stars, suggests a universal pattern in stellar rotational velocity evolution across different metallicity environments. It is noteworthy that while variations among A-type stars' mass bins are minimal, the dataset for OB stars exhibits greater variability <cit.>, highlighting the diverse dynamics of stellar rotation in different galactic contexts.

§ DISCUSSION

§.§ Mechanisms controlling the evolution of rotation

Upon settling on the MS with a given angular momentum, a star's rotational evolution is predominantly influenced by a trio of physical processes[Interaction with a companion by tides or mergers could affect surface rotation as well; e.g., the orbital decay of a planet due to tidal forces could transfer angular momentum to the red giant star <cit.>.]:
* local conservation of angular momentum,
* internal transport mechanisms, and
* stellar winds.
Local angular momentum conservation leads to adjustments in rotational velocity as the star undergoes radial expansion. Internal transport mechanisms can redistribute angular momentum throughout the star via meridional circulation <cit.>; in particular, the Gratton–Öpik cell in the outer envelope carries angular momentum from the inner part of the star to the surface, tending to accelerate it <cit.>. Stellar winds, in contrast, can decelerate the star by stripping angular momentum from the surface through mass loss.
The interplay between these processes, whereby the first and last tend to decelerate surface rotation while internal transport can accelerate it, dictates the trajectory of a star's rotational velocity on the MS. While stellar winds are pivotal in the angular momentum evolution of massive stars, affecting their ultimate fate <cit.>, they are anticipated to have a negligible impact on the angular momentum evolution of the A-type stars considered in this study. Observations of the Hα line profile in A-type MS stars suggest an upper mass-loss limit of [1-2×10^-10],yr^-1 <cit.>, with typical values in the range of [10^-12] to [10^-10],yr^-1 <cit.>. Even for stars rotating near their critical velocity, the rotationally induced enhancement in mass loss remains modest, less than a fivefold increase, not exceeding [10^-11],yr^-1 <cit.>. Therefore, the principal drivers of rotational evolution in this context are local angular momentum conservation and internal transport mechanisms, not stellar winds. Models from <cit.> indicate a balance between inefficient internal transport and minimal mass loss, leading stars to evolve with nearly constant rotational velocities for masses up to [10]. However, as seen in Figure <ref>, the observed acceleration in the early MS phase exceeds predictions from both solid-body and differential rotation scenarios. This also contrasts with the findings of <cit.> that stars in the [6-12] range show no significant rotational speed variation through the MS from t∼[1] to [15]Myr, challenging the expectation of a uniform rotational behavior across different stellar mass ranges. Since the gradual expansion of stellar radii would inherently slow down the surface rotational velocity, any observed acceleration in rotation at early phases of the MS must stem from internal redistribution of angular momentum, barring external influences like close binary interactions. The amplitude of the meridional circulation's radial component undergoes a sign change near the stellar surface, enabling the transport of angular momentum from the star's inner regions to its surface <cit.>. Although this mechanism is theoretically capable of increasing surface rotation <cit.>, such acceleration typically occurs later in the MS phase (t/ > 0.6), which is difficult to reconcile with the early and pronounced speed-up depicted in Figure <ref>. Another aspect to consider is the initial adjustment phase, which depends on the model's assumed initial rotation. A brief period of adjustment occurs at the start, as shear turbulence becomes active, eroding gradients established by meridional circulation and leading the rotation rate (Ω) profile toward an equilibrium configuration <cit.>. Since models typically assume uniform rotation at the ZAMS, meridional circulation shifts angular momentum from the star's outer regions to its core, noticeably decelerating the surface rotation for a tiny portion of the MS lifetime <cit.>. This assumption of initial rigid rotation, which minimizes the total angular momentum constrained by the ratio of rotational kinetic energy to the gravitational potential, finds justification in (a) dynamic stability against axisymmetric perturbations, advocated for solid-body stellar rotators <cit.>, and (b) the promotion of angular momentum redistribution by turbulent viscosity <cit.>. Thus, it is generally believed that stars exhibit uniform rotation throughout the fully convective pre-MS stage. Despite its widespread application, direct evidence for this assumption remains elusive.
The relaxation timescale from initial conditions to a steady state in stellar rotation shows a significant dependence on surface rotational velocity, as delineated by <cit.>. The formula τ_rel ∝ τ_KH Ω_S^-2 (M^2/R^3) reveals that τ_rel, the relaxation timescale, scales with the Kelvin-Helmholtz timescale τ_KH, where Ω_S represents the angular velocity at the star's surface. This relationship suggests that for a differentially rotating star at the ZAMS with very low angular velocities, reaching a steady state could take significantly longer, potentially matching the star's lifespan. For stars initiating rotation at high velocities (v_s > [100]km s^-1), the relaxation process is too short to be observable. Conversely, for stars beginning with rotational velocities around [10]km s^-1, permissible under differential rotation models without minimizing total angular momentum, this could imply a prolonged adjustment period and observable surface acceleration in the early MS phase. This might challenge the assumption of uniform rotation at the ZAMS, particularly for intermediate-mass A-type stars, suggesting a slower initial surface rotation. This aligns with the findings of <cit.> that stars do not maintain solid-body rotation during their transition from convective to radiative phases in the Orion star-forming region, encompassing stars ranging from 0.4 to over [10]. The tentative second peak in rotational velocity observed around t/∼ 0.55 is hard to explain by this relaxation process, as a steady-state rotation profile would presumably have been reached after the initial peak. This suggests that any significant readjustment phase at this later stage, especially one that results in acceleration, would necessitate some form of internal angular momentum shift not accounted for in the initial relaxation. Given that the surface rotation has already been accelerated, the duration of this secondary acceleration phase should be shorter than that of the initial peak. Therefore, this later acceleration could be linked to the Gratton–Öpik term, indicating a more intense deepening than expected of the outer zone where inverse circulation occurs during MS evolution <cit.>. Such a scenario hints at the absence of a truly stationary state in stellar rotation during the MS phase, challenging the simplifications used in some stellar evolution models. In the latter stages of the MS, the evolutionary path of stars is generally characterized by a monotonic deceleration, with the notable exception of those in the mass range 2.0 < M/ < 2.1. This observation suggests that the dominant factor influencing this phase is the loss of angular momentum at the stellar surface, attributable more to the star's radial expansion than to stellar wind-induced mass loss. The observed steepening of the deceleration slope with increasing stellar mass, although potentially exceeding theoretical predictions, can possibly be reconciled with the model for two main reasons. Firstly, the measured value of ⟨ v/v_ZAMS⟩, and consequently its slope, could be inaccurately determined due to erroneous estimations of ⟨ v_ZAMS⟩, particularly for higher-mass stars. Secondly, the differential rotation model employed by <cit.> is predicated on solar metallicity assumptions. In contrast, our study imposes a metallicity range from -0.3 to [0.1]dex, with an average slightly below solar ([-0.07]dex). This discrepancy in metallicity could account for the observed variation in slope.
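As a quick numerical illustration of the relaxation-timescale scaling discussed earlier in this subsection, the small sketch below compares two otherwise identical stars with different initial surface velocities; the proportionality constant cancels in the ratio, τ_KH is held fixed, and the specific numbers are purely illustrative.

def relative_relaxation_time(v_surf, v_ref, mass_ratio=1.0, radius_ratio=1.0):
    """Ratio tau_rel(v_surf) / tau_rel(v_ref) from tau_rel ~ tau_KH * Omega_S**-2 * M**2 / R**3.

    Velocities in km/s; tau_KH is assumed unchanged between the two cases.
    mass_ratio = M/M_ref and radius_ratio = R/R_ref allow scaling between different stars.
    """
    omega_ratio = (v_surf / v_ref) / radius_ratio     # Omega_S = v_surf / R
    return omega_ratio ** -2 * mass_ratio ** 2 * radius_ratio ** -3

# a star starting at 10 km/s relaxes ~100 times more slowly than one starting at 100 km/s:
# relative_relaxation_time(10.0, 100.0)  ->  100.0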
§.§ Anomaly between [2.0] and [2.1]

Given the complexity of stellar evolution and rotational dynamics, the phenomena observed in the mass range of 2.0 to [2.3] stand out as particularly intriguing. Within this mass range, a transition occurs from a unimodal to a bimodal distribution of rotational velocities, as identified by . A notable deviation in rotational velocity is observed at 2.0 < M/ < 2.1, where the anticipated continuous decline in rotational velocity following the first or second peak is disrupted by a significant peak at t/∼ 0.7, a feature absent in other mass bins. This anomaly persists across all three examined metallicity bins and does not vary with the choice of ⟨ v/v_ZAMS⟩ or ⟨ v ⟩, suggesting it could be an intrinsic characteristic or result from a systematic bias within this specific subset. If the emergence of this delayed peak results from enhanced angular momentum transfer to the surface due to inverse circulation, it is difficult to explain why its appearance is delayed by approximately δ t∼ 0.15 of the MS lifetime. Furthermore, the magnitude of this acceleration does not match that of the second peak observed around t/∼0.55 in other mass ranges. The absence of this delayed peak in prior studies casts doubt on its authenticity, suggesting the influence of a bias unique to certain data subsets. One potential source of bias could relate to the effective temperature determination around [9500]K <cit.>, where the spectral gradients of hydrogen lines in A0 stars diminish, potentially leading to skewed interpretations for stars with masses between [2.0] and [2.2]. Additional investigations are warranted to clarify the origins and implications of this anomalous behavior, highlighting the need for careful consideration of potential biases in stellar rotational studies.

§.§ Influence of metallicity

The role of metallicity in shaping the rotational characteristics of stars, particularly throughout the MS phase, is intricately linked to the three fundamental mechanisms outlined in Section <ref>:
* Under similar distributions of initial angular momentum, stars having lower metallicity rotate faster because they undergo less violent nuclear reactions, making their radii at the ZAMS smaller than those of metal-rich stars, extending down to the metallicity levels observed in the Magellanic Clouds <cit.>. However, the subsequent rapid expansion of metal-poor stars leads to a slightly more pronounced decrease in rotational velocity over time.
* The Gratton–Öpik cell is proportional to 1/ρ, which becomes more pronounced in the outer envelope and at lower Z. This makes the transfer of angular momentum by this inverse meridional circulation more efficient with decreasing metallicity <cit.>.
* The mass loss due to radiation-driven stellar winds is linked with the metallicity of the star, as higher metallicity increases the number of lines available for the absorption and scattering of radiation, thus driving the stellar outflow more effectively. This relationship between mass-loss rate and metallicity follows a power-law distribution of line strengths, as detailed by <cit.>. In contrast, <cit.> highlighted the relevance of mechanical winds in very metal-poor (Z⩽[10^-5]) environments, where the efficacy of radiation-driven winds diminishes. Moreover, rotational mixing can lead to enhanced opacity in the outer layers of a star, potentially reinvigorating line-driven winds.
This dynamic interplay between different types of stellar winds and metallicity underscores the complex mechanisms governing stellar mass loss and its consequent impact on stellar rotation and evolution. For the range of stellar masses analyzed here, the effect of stellar wind on surface rotation velocity is minimal. Even when accounting for mechanical winds, <cit.> reported a maximum mass loss rate below [10^-11] yr^-1 across the MS for stars lighter than [3]. Hence, the variations in rotation velocity as a function of metallicity, as observed in Figures <ref> and <ref>, are likely not due to mass loss but rather to internal angular momentum redistribution. A discernible pattern emerges, where metal-rich samples exhibit a faster decline in both ⟨ v/v_ZAMS⟩ and ⟨ v⟩ for masses below [2.0]. While this correlation fades in the transitional mass range between [2.0] and [2.3], it reappears, albeit tentatively, for heavier stars, particularly in terms of ⟨ v⟩. This behavior aligns with predictions related to meridional circulation, whereby the Gratton–Öpik cell, more effective at lower metallicities, mitigates the reduction in rotational velocity caused by stellar expansion. The trend between the deceleration phase's slope and metallicity aligns with the predictions made by <cit.>, though a direct comparison is challenging due to our analysis involving mixed subpopulations with varied initial rotation velocities. In Figure <ref>, a noticeable discrepancy in ⟨ v_ZAMS⟩ across different metallicity groups was observed for stars with lower masses. This trend, where metal-rich populations exhibit higher initial rotational velocities, contradicts the expectation that metal-poor stars should rotate faster due to their comparatively smaller sizes at the ZAMS. It is important to recognize that this assumption presumes the same angular momentum across stars of varying metallicity at the onset of the MS. The discrepancy observed could imply a more efficient process of angular momentum dispersion during the stellar formation phase for metal-poor stars, potentially influenced by stronger fossil magnetic fields, which are theorized to be more intense in stars with lower metallicity <cit.>. Should this be verified, it would suggest a more significant role for magnetic braking in the evolution of metal-poor star rotation velocities. §.§ Caveats The analysis of rotation within this study primarily relies on the average values of ⟨ vsin i⟩ or ⟨ v ⟩. This approach, while useful for generalizing rotational trends, introduces potential caveats. Firstly, the extensive size of the sample in this study presents significant challenges in removing contaminants such as close binaries and chemically peculiar stars. Although efforts were made to identify these objects through radial velocity variations and cross-referencing with existing catalogs, the process is inherently incomplete. The complexity and diversity of stellar phenomena mean that some contaminants inevitably remain, potentially influencing the overall analysis. A more detailed and refined approach to identifying and excluding binary systems and chemically peculiar stars could further enhance the reliability of the findings. This aspect underscores the need for caution in interpreting the results and highlights the potential for future work to refine and corroborate these observations <cit.>. 
Secondly, our model primarily considers surface rotational velocity as a function of stellar mass, age, metallicity, and initial rotation velocity, while potentially overlooking other influential factors. Notably, magnetic fields could significantly impact internal stellar rotation, structure, and evolution <cit.>. Magnetic activity, stemming from dynamo-driven fields in stars with convective envelopes <cit.> or fossil fields in early-type stars <cit.>, is known to affect rotational dynamics. Although detections remain relatively rare, magnetic fields are found in about 10% of early-type stars <cit.>, which typically rotate more slowly than their non-magnetic counterparts due to angular momentum loss via magnetically confined winds <cit.>. This relationship between magnetic fields and rotational behavior, supported by studies of the rotational modulation of eight CP stars observed by the Transiting Exoplanet Survey Satellite <cit.>, suggests magnetic braking as a critical factor in stellar rotation. The limited detection of magnetic fields in early-type stars, however, underscores the need for further investigation into the broader implications of magnetic activity on stellar rotation across different stellar types and evolutionary stages. Finally, employing the average values of vsin i or v to characterize a population's rotational behavior may complicate direct comparisons with models, particularly if the underlying distribution is not singularly peaked. For instance, bimodal distributions of vsin i have been observed in certain star clusters <cit.>, highlighting the potential complexity of stellar rotational velocities. Such distributions, despite converging towards a unified mean value in our analysis, necessitate sophisticated deconvolution methods for accurate delineation, akin to approaches utilized in <cit.> or . However, it is crucial to acknowledge that average rotational velocities remain valid indicators for populations exhibiting multi-peak distributions. Additionally, the phenomenon of split MSs observed in young star clusters <cit.>, which might suggest divergent rotational speeds, does not significantly affect our age determinations. The bimodal MS clearly seen in the color-magnitude diagram <cit.> is predominantly attributed to gravity-darkening effects and does not distinctly manifest within the HR diagram when effective temperatures are mainly determined through normalized Hα line profiles. Consequently, this bifurcation does not appear as two branches in the HR diagram and is therefore not misleadingly categorized as a separate fast-rotating population at t/t_MS∼ 0.1.

§ SUMMARY

In this work, we built upon the foundational studies of and , assembling a comprehensive catalog of 104,752 A-type stars, derived from LAMOST MRS DR 9 and spanning effective temperatures between [7000]K and [14500]K. Our analysis focused on a subset of 84,683 `normal' stars, those classified as single, non-CP, non-variable, and not belonging to star clusters. The major insights gained from our study are summarized as follows.
* By examining the evolution of surface rotational velocity, categorized by stellar mass, we observed diverse evolutionary profiles.
The velocities were normalized to their initial values at the ZAMS, and ages were normalized to the total MS lifetime, allowing a consistent comparison across different stellar masses. The majority of these profiles exhibited a common pattern: an initial rapid acceleration up to t/ = 0.25±0.1, a possible second acceleration peak around t/ = 0.55±0.1 for stars with ⟨ M ⟩ > [2.5], followed by a gradual decline, and a distinctive `hook' feature towards the end of the MS phase.
* The emergence of the initial acceleration peak surpasses the expectations derived from two extreme theoretical models: the rigid-body rotator and the differential rotator. This phenomenon is ascribed to the relaxation phase that commences immediately after the ZAMS. Contrary to the flat angular velocity profile typically assumed in models, a more concentrated profile could result in a significant speed increase. This acceleration phase may persist longer before settling into a steady state.
* Stars within the mass range of 2.0<M/<2.1 exhibit an anomalous acceleration peak at t/ = 0.7. This behavior does not align neatly with theories of internal angular momentum redistribution. Should external factors not account for this phenomenon, it could suggest an artifact stemming from inaccuracies in the effective temperature (T_eff) estimations around [9500]K.
* The monotonic decline during the second half of the MS is regulated by the conservation of angular momentum due to the expansion of stellar radii, accompanied by a significant contribution from inverse meridional circulation. The slope of the deceleration is steeper for more massive stars, manifesting a more pronounced spin-down trend than predicted by a purely differential rotation model with no internal angular momentum redistribution or external angular momentum loss.
* We categorized our subsample based on metallicity into three bins around solar: [0.0]dex < [M/H] < [0.1]dex, [-0.1]dex < [M/H] < [0.0]dex, and [-0.3]dex < [M/H] < [-0.1]dex, enabling an analysis of metallicity's impact. The slope of the deceleration phase correlates with metallicity, showing that the metal-poor ([-0.3]dex < [M/H] < [-0.1]dex) sample experiences a shallower spin-down. This observation can be linked to the Gratton–Öpik cell's increased efficiency at lower densities and metallicity. As this mechanism transports more angular momentum from the star's interior to its surface, it moderates the decline of the surface rotational velocity, resulting in a gentler slope for metal-poor samples compared to their metal-rich counterparts.
* The slightly metal-poor ([-0.1]dex < [M/H] < [0.0]dex) and metal-poor ([-0.3]dex < [M/H] < [-0.1]dex) subsamples begin with lower rotational velocities at the ZAMS, possibly indicating a metallicity-dependent mechanism for angular momentum removal during the formation of MS stars. This observation suggests that the production of magnetic fields, which varies with metallicity, could play a part in this process.
* We observed that the proportion of fast rotators (vsin i > [200]km s^-1) declines with increasing metallicity, up to log(Z/Z_⊙)∼ -0.2. This pattern aligns with findings related to OB-type stars in the SMC and LMC <cit.>. However, this correlation diverges at and above solar metallicity, suggesting a dependence on stellar mass, or it could be attributed to uncertainties arising from mass loss, which become increasingly significant at higher metallicities.
We thank the anonymous referee for their valuable comments.
The Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope; LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC; <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. Facility: LAMOST Software: PARSEC <cit.>, astropy <cit.>, IPython <cit.>, SPInS <cit.>, laspec <cit.>, SLAM <cit.>, matplotlib <cit.> [Abt(1981)]1981ApJS...45..437AAbt, H. A. 1981, , 45, 437 [Abt & Morrell(1995)]1995ApJS...99..135AAbt, H. A. & Morrell, N. I. 1995, , 99, 135 [Aerts et al.(2019)]2019ARA A..57...35AAerts, C., Mathis, S., & Rogers, T. M. 2019, , 57, 35 [Ando(1980)]1980Ap SS..73..159AAndo, H. 1980, , 73, 159 [Astropy Collaboration et al.(2013)]2013A A...558A..33AAstropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, , 558, A33 [Bailer-Jones et al.(2021)]2021AJ....161..147BBailer-Jones, C. A. L., Rybizki, J., Fouesneau, M., et al. 2021, , 161, 147 [Bragança et al.(2012)]2012AJ....144..130BBragança, G. A., Daflon, S., Cunha, K., et al. 2012, , 144, 130 [Buldgen et al.(2015)]2015A A...583A..62BBuldgen, G., Reese, D. R., & Dupret, M. A. 2015, , 583, A62 [Bétrisey et al.(2023)]2023A A...673L..11BBétrisey, J., Eggenberger, P., Buldgen, G., et al. 2023, , 673, L11 [Castelli(2005)]2005MSAIS...8...25CCastelli, F. 2005, Memorie della Societa Astronomica Italiana Supplementi, 8, 25 [Chandrasekhar & Münch(1950)]1950ApJ...111..142CChandrasekhar, S. & Münch, G. 1950, , 111, 142 [Charbonnel & Lagarde(2010)]2010A A...522A..10CCharbonnel, C. & Lagarde, N. 2010, , 522, A10 [Chen et al.(2020)]2020ApJS..249...18CChen, X., Wang, S., Deng, L., et al. 2020, , 249, 18 [Correnti et al.(2017)]2017MNRAS.467.3628CCorrenti, M., Goudfrooij, P., Bellini, A., et al. 2017, , 467, 3628 [Cui et al.(2012)]2012RAA....12.1197CCui, X.-Q., Zhao, Y.-H., Chu, Y.-Q., et al. 2012, Research in Astronomy and Astrophysics, 12, 1197 [D'Antona et al.(2015)]2015MNRAS.453.2637DD'Antona, F., Di Criscienzo, M., Decressin, T., et al. 2015, , 453, 2637 [de Jong et al.(2019)]2019Msngr.175....3Dde Jong, R. S., Agertz, O., Berbel, A. A., et al. 2019, The Messenger, 175, 3 [de Mink et al.(2013)]2013ApJ...764..166Dde Mink, S. E., Langer, N., Izzard, R. G., et al. 2013, , 764, 166 [Denissenkov et al.(1999)]1999A A...341..181DDenissenkov, P. A., Ivanova, N. S., & Weiss, A. 1999, , 341, 181 [Deutsch(1970)]1970stro.coll..207DDeutsch, A. J. 1970, IAU Colloq. 4: Stellar Rotation, 207 [Dufton et al.(2013)]2013A A...550A.109DDufton, P. L., Langer, N., Dunstall, P. R., et al. 2013, , 550, A109 [Dufton et al.(2006)]2006A A...457..265DDufton, P. L., Smartt, S. J., Lee, J. K., et al. 2006, , 457, 265 [Ekström et al.(2012)]2012A A...537A.146EEkström, S., Georgy, C., Eggenberger, P., et al. 2012, , 537, A146 [Ekström et al.(2008)]2008A A...478..467EEkström, S., Meynet, G., Maeder, A., et al. 2008, , 478, 467 [Ferraro(1937)]1937MNRAS..97..458FFerraro, V. C. A. 1937, , 97, 458 [Frémat et al.(2023)]2023A A...674A...8FFrémat, Y., Royer, F., Marchal, O., et al. 
2023, , 674, A8 [Fujimoto(1987)]1987A A...176...53FFujimoto, M. Y. 1987, , 176, 53 [Gaia Collaboration et al.(2016)]2016A A...595A...1GGaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, , 595, A1 [Gaia Collaboration et al.(2023)]2023A A...674A...1GGaia Collaboration, Vallenari, A., Brown, A. G. A., et al. 2023, , 674, A1 [Georgy et al.(2013)]2013A A...553A..24GGeorgy, C., Ekström, S., Granada, A., et al. 2013, , 553, A24 [Gratton(1945)]1945MmSAI..17....5GGratton, L. 1945, , 17, 5 [Haemmerlé et al.(2017)]2017A A...602A..17HHaemmerlé, L., Eggenberger, P., Meynet, G., et al. 2017, , 602, A17 [Hastings et al.(2021)]2021A A...653A.144HHastings, B., Langer, N., Wang, C., et al. 2021, , 653, A144 [Heger & Langer(2000)]2000ApJ...544.1016HHeger, A. & Langer, N. 2000, , 544, 1016 [Hirschi et al.(2009)]2009IAUS..256..337HHirschi, R., Ekström, S., Georgy, C., et al. 2009, The Magellanic System: Stars, Gas, and Galaxies, 256, 337 [Howarth & Smith(2001)]2001MNRAS.327..353HHowarth, I. D. & Smith, K. C. 2001, , 327, 353 [Huang & Gies(2006)]2006ApJ...648..580HHuang, W. & Gies, D. R. 2006, , 648, 580 [Hunt & Reffert(2023)]2023A A...673A.114HHunt, E. L. & Reffert, S. 2023, , 673, A114 [Hunter et al.(2008)]2008A A...479..541HHunter, I., Lennon, D. J., Dufton, P. L., et al. 2008, , 479, 541 [Hunter(2007)]2007CSE.....9...90HHunter, J. D. 2007, Computing in Science and Engineering, 9, 90 [Hümmerich et al.(2020)]2020A A...640A..40HHümmerich, S., Paunzen, E., & Bernhard, K. 2020, , 640, A40 [Kamann et al.(2020)]2020MNRAS.492.2177KKamann, S., Bastian, N., Gossage, S., et al. 2020, , 492, 2177 [Kamann et al.(2023)]2023MNRAS.518.1505KKamann, S., Saracino, S., Bastian, N., et al. 2023, , 518, 1505 [Keszthelyi et al.(2022)]2022MNRAS.517.2028KKeszthelyi, Z., de Koter, A., Götberg, Y., et al. 2022, , 517, 2028 [Kobzar et al.(2022)]2022MNRAS.517.5340KKobzar, O., Khalack, V., Bohlender, D., et al. 2022, , 517, 5340 [Kroupa(2001)]2001MNRAS.322..231KKroupa, P. 2001, , 322, 231 [Kroupa et al.(2013)]2013pss5.book..115KKroupa, P., Weidner, C., Pflamm-Altenburg, J., et al. 2013, Planets, Stars and Stellar Systems. Volume 5: Galactic Structure and Stellar Populations, 115 [Lamers & Cassinelli(1999)]1999isw..book.....LLamers, H. J. G. L. M. & Cassinelli, J. P. 1999, Introduction to Stellar Winds, by Henny J. G. L. M. Lamers and Joseph P. Cassinelli, pp. 452. ISBN 0521593980. Cambridge, UK: Cambridge University Press, June 1999., 452 [Lanz & Catala(1992)]1992A A...257..663LLanz, T. & Catala, C. 1992, , 257, 663 [Lebreton & Reese(2020)]2020A A...642A..88LLebreton, Y. & Reese, D. R. 2020, , 642, A88 [Li et al.(2017)]2017ApJ...844..119LLi, C., de Grijs, R., Deng, L., et al. 2017, , 844, 119 [Li et al.(2019)]2019MNRAS.487..782LLi, G., Van Reeth, T., Bedding, T. R., et al. 2019, , 487, 782 [Limongi & Chieffi(2018)]2018ApJS..237...13LLimongi, M. & Chieffi, A. 2018, , 237, 13. [Liu et al.(2015)]2015RAA....15.1137LLiu, C., Cui, W.-Y., Zhang, B., et al. 2015, Research in Astronomy and Astrophysics, 15, 1137 [Liu et al.(2020)]2020arXiv200507210LLiu, C., Fu, J., Shi, J., et al. 2020, arXiv:2005.07210 [Maeder(2009)]2009pfer.book.....MMaeder, A. 2009, Physics, Formation and Evolution of Rotating Stars: , Astronomy and Astrophysics Library. ISBN 978-3-540-76948-4. Springer Berlin Heidelberg, 2009 [Maeder et al.(2008)]2008A A...479L..37MMaeder, A., Georgy, C., & Meynet, G. 2008, , 479, L37 [Maeder et al.(1999)]1999A A...346..459MMaeder, A., Grebel, E. K., & Mermilliod, J.-C. 
Maeder, A., & Meynet, G. 2000, ARA&A, 38, 143
Martayan, C., Floquet, M., Hubert, A.-M., et al. 2007a, A&A, 472, 577
Martayan, C., Frémat, Y., Hubert, A.-M., et al. 2006, A&A, 452, 273
Martayan, C., Frémat, Y., Hubert, A.-M., et al. 2007b, A&A, 462, 683
Meynet, G., & Maeder, A. 2000, A&A, 361, 101
Meynet, G., Maeder, A., Hirschi, R., et al. 2006a, International Symposium on Nuclear Astrophysics - Nuclei in the Cosmos, 15.1
Meynet, G., Mowlavi, N., & Maeder, A. 2006b, arXiv:astro-ph/0611261
Milone, A. P., Bedin, L. R., Piotto, G., et al. 2015, MNRAS, 450, 3750
Milone, A. P., Marino, A. F., D'Antona, F., et al. 2016, MNRAS, 458, 4368
Milone, A. P., Marino, A. F., D'Antona, F., et al. 2017, MNRAS, 465, 4363
Milone, A. P., Marino, A. F., Di Criscienzo, M., et al. 2018, MNRAS, 477, 2640
Moe, M., Kratter, K. M., & Badenes, C. 2019, ApJ, 875, 61
Nguyen, C. T., Costa, G., Girardi, L., et al. 2022, A&A, 665, A126
Oksala, M. E., Wade, G. A., Townsend, R. H. D., et al. 2012, MNRAS, 419, 959
Paunzen, E., Hümmerich, S., & Bernhard, K. 2021, A&A, 645, A34
Pedersen, M. G. 2022, ApJ, 940, 49
Perez, F., & Granger, B. E. 2007, Computing in Science and Engineering, 9, 21
Pietrinferni, A., Cassisi, S., Salaris, M., et al. 2004, ApJ, 612, 168
Pietrinferni, A., Cassisi, S., Salaris, M., et al. 2006, ApJ, 642, 797
Potter, A. T., Tout, C. A., & Eldridge, J. J. 2012, MNRAS, 419, 748
Preston, G. W. 1974, ARA&A, 12, 257
Privitera, G., Meynet, G., Eggenberger, P., et al. 2016, A&A, 591, A45
Puls, J., Vink, J. S., & Najarro, F. 2008, A&ARv, 16, 209
Qin, L., Luo, A.-L., Hou, W., et al. 2019, ApJS, 242, 13
Quentin, L. G., & Tout, C. A. 2018, MNRAS, 477, 2298
Raghavan, D., McAlister, H. A., Henry, T. J., et al. 2010, ApJS, 190, 1
Ramachandran, V., Hainich, R., Hamann, W.-R., et al. 2018, A&A, 609, A7
Ramachandran, V., Hamann, W.-R., Oskinova, L. M., et al. 2019, A&A, 625, A104
Ramírez-Agudelo, O. H., Simón-Díaz, S., Sana, H., et al. 2013, A&A, 560, A29
Reese, D. R., Marques, J. P., Goupil, M. J., et al. 2012, A&A, 539, A63
Riello, M., De Angeli, F., Evans, D. W., et al. 2021, A&A, 649, A3
Rosen, A. L., Krumholz, M. R., & Ramirez-Ruiz, E. 2012, ApJ, 748, 97
Royer, F., Zorec, J., & Gómez, A. E. 2007, A&A, 463, 671
Sackmann, I. J. 1970, A&A, 8, 76
Sana, H., de Mink, S. E., de Koter, A., et al. 2012, Science, 337, 444
Santos, A. R. G., Breton, S. N., Mathur, S., et al. 2021, ApJS, 255, 17
Sanyal, D., Langer, N., Szécsi, D., et al. 2017, A&A, 597, A71
Simón-Díaz, S., & Herrero, A. 2014, A&A, 562, A135
Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, AJ, 131, 1163
Spruit, H. C. 1999, A&A, 349, 189
Stauffer, J. B., & Hartmann, L. W. 1986, PASP, 98, 1233
Sun, W., Duan, X.-W., Deng, L., et al. 2021a, ApJS, 257, 22
Sun, W., Duan, X.-W., Deng, L., et al. 2021b, ApJ, 921, 145
Sun, W., Li, C., Deng, L., et al. 2019, ApJ, 883, 182
Takahashi, K., & Langer, N. 2021, A&A, 646, A19
Tian, X.-M., Wang, Z.-H., Zhu, L.-Y., et al. 2023, ApJS, 266, 14
Ud-Doula, A., Owocki, S. P., & Townsend, R. H. D. 2009, MNRAS, 392, 1022
van Belle, G. T. 2012, A&ARv, 20, 51
Vink, J. S., de Koter, A., & Lamers, H. J. G. L. M. 2001, A&A, 369, 574
Wang, C., Yuan, H., Xiang, M., et al. 2023, A&A, 674, A129
Weber, E. J., & Davis, L. 1967, ApJ, 148, 217
Wolff, S. C., Strom, S. E., Dror, D., et al. 2007, AJ, 133, 1092
Wolff, S. C., Strom, S. E., & Hillenbrand, L. A. 2004, ApJ, 601, 979
Xiang, M., Rix, H.-W., Ting, Y.-S., et al. 2022, A&A, 662, A66
Yusof, N., Hirschi, R., Eggenberger, P., et al. 2022, MNRAS, 511, 2814
Zahn, J.-P. 1992, A&A, 265, 115
Zhang, B., Li, J., Yang, F., et al. 2021, ApJS, 256, 14
Zhang, B., Liu, C., & Deng, L.-C. 2020, ApJS, 246, 9
Zorec, J. 2023, Galaxies, 11, 54
Zorec, J., & Royer, F. 2012, A&A, 537, A120
Öpik, E. J. 1951, MNRAS, 111, 278