text | label
---|---
We present new results for classical-particle propagation subject to Lorentz violation. Our analysis is dedicated to spin-nondegenerate operators of arbitrary mass dimension provided by the fermion sector of the Standard-Model Extension. In particular, classical Lagrangians are obtained for the operators $\hat{b}_{\mu}$ and $\hat{H}_{\mu\nu}$ as perturbative expansions in Lorentz violation. The functional dependence of the higher-order contributions on the background fields is found to be quite peculiar, which can probably be attributed to particle spin playing an essential role in these cases. This paper closes one of the last gaps in understanding classical-particle propagation in the presence of Lorentz violation. Lagrangians of the kind presented will turn out to be valuable for describing particle propagation in curved backgrounds with diffeomorphism invariance and/or local Lorentz symmetry explicitly violated. | high energy physics theory |
We present Chandra imaging and spectral observations of the Seyfert 2 galaxy NGC 4388. Three extended X-ray structures around the nucleus on kpc scales are well imaged, allowing an in-depth spatially resolved study. Both the extended hard continuum and the Fe K$\alpha$ line show similar morphology, consistent with a scenario where the ionizing emission from the nucleus is reprocessed by circumnuclear cold gas, resulting in a weak reflection continuum and an associated neutral Fe K$\alpha$ line. This has been seen in other Compton-thick active galactic nuclei (AGN), but NGC 4388 is one of the rare cases with a lower column density ($N_{\rm H} < 1.25\times 10^{24}$ cm$^{-2}$) along the line of sight. Significant differences in the equivalent width of the Fe K$\alpha$ emission line are found for the nuclear and extended regions, which could be ascribed to different column densities or scattering angles with respect to the line of sight, rather than to variations in iron abundance. The north-east and west extended structures are aligned with the galactic disk and dust lane in the HST $V-H$ map, and located at the peak of the molecular gas distribution. The morphology implies that the kpc-scale radio outflow may have compressed the interstellar gas and produced clumps working as the reflector to enhance line emission. Using [OIV] emission as a proxy for the AGN intrinsic luminosity, we find that both the extended Fe K$\alpha$ emission and the reflection continuum are linearly correlated with the [OIV] luminosity, indicating a connection between the AGN and the extended emission. | astrophysics |
We prove a Jensen formula for slice-regular functions of one quaternionic variable. The formula relates the value of the function and of its first two derivatives at a point with its integral mean on a three-dimensional sphere centred at that point and with the disposition of its zeros. The formula can be extended to semiregular slice functions. | mathematics |
Recently, deep learning has become much more popular in the computer vision area. The Convolutional Neural Network (CNN) has brought a breakthrough in image segmentation, especially for medical images. In this regard, U-Net is the predominant approach to the medical image segmentation task. U-Net not only performs well in segmenting multimodal medical images in general, but also in some tough cases among them. However, we found that the classical U-Net architecture has limitations in several aspects. Therefore, we applied modifications: 1) designed an efficient CNN architecture to replace the encoder and decoder, 2) applied a residual module to replace the skip connection between encoder and decoder, to improve on the state-of-the-art U-Net model. Following these modifications, we designed a novel architecture--DC-UNet, as a potential successor to the U-Net architecture. We created a new effective CNN architecture and built DC-UNet based on this CNN. We have evaluated our model on three datasets with tough cases and have obtained relative improvements in performance of 2.90%, 1.49% and 11.42% respectively compared with the classical U-Net. In addition, we used the Tanimoto similarity instead of the Jaccard similarity for gray-to-gray image comparisons. | electrical engineering and systems science |
Leakage is a particularly damaging error that occurs when a qubit state falls out of its two-level computational subspace. Compared to independent depolarizing noise, leaked qubits may produce many more configurations of harmful correlated errors during error-correction. In this work, we investigate different local codes in the low-error regime of a leakage gate error model. When restricting to bare-ancilla extraction, we observe that subsystem codes are good candidates for handling leakage, as their locality can limit damaging correlated errors. As a case study, we compare subspace surface codes to the subsystem surface codes introduced by Bravyi et al. In contrast to depolarizing noise, subsystem surface codes outperform same-distance subspace surface codes below error rates as high as $\lessapprox 7.5 \times 10^{-4}$ while offering better per-qubit distance protection. Furthermore, we show that at low to intermediate distances, Bacon-Shor codes offer better per-qubit error protection against leakage in an ion-trap motivated error model below error rates as high as $\lessapprox 1.2 \times 10^{-3}$. For restricted leakage models, this advantage can be extended to higher distances by relaxing to unverified two-qubit cat state extraction in the surface code. These results highlight an intrinsic benefit of subsystem code locality to error-corrective performance. | quantum physics |
Many characteristics of dwarf carbon stars are broadly consistent with a binary origin, including mass transfer from an evolved companion. While the population overall appears to have old-disc or halo kinematics, roughly 2$\,$per cent of these stars exhibit H$\alpha$ emission, which in low-mass main-sequence stars is generally associated with rotation and relative youth. Its presence in an older population therefore suggests either irradiation or spin-up. This study presents time-series analyses of photometric and radial-velocity data for seven dwarf carbon stars with H$\alpha$ emission. All are shown to have photometric periods in the range 0.2--5.2$\,$d, and orbital periods of similar length, consistent with tidal synchronisation. It is hypothesised that dwarf carbon stars with emission lines are the result of close-binary evolution, indicating that low-mass, metal-weak or metal-poor stars can accrete substantial material prior to entering a common-envelope phase. | astrophysics |
The most direct approach for characterizing the quantum dynamics of a strongly-interacting system is to measure the time-evolution of its full many-body state. Despite the conceptual simplicity of this approach, it quickly becomes intractable as the system size grows. An alternate framework is to think of the many-body dynamics as generating noise, which can be measured by the decoherence of a probe qubit. Our work centers on the following question: What can the decoherence dynamics of such a probe tell us about the many-body system? In particular, we utilize optically addressable probe spins to experimentally characterize both static and dynamical properties of strongly-interacting magnetic dipoles. Our experimental platform consists of two types of spin defects in diamond: nitrogen-vacancy (NV) color centers (probe spins) and substitutional nitrogen impurities (many-body system). We demonstrate that signatures of the many-body system's dimensionality, dynamics, and disorder are naturally encoded in the functional form of the NV's decoherence profile. Leveraging these insights, we directly characterize the two-dimensional nature of a nitrogen delta-doped diamond sample. In addition, we explore two distinct facets of the many-body dynamics: First, we address a persistent debate about the microscopic nature of spin dynamics in strongly-interacting dipolar systems. Second, we demonstrate direct control over the spectral properties of the many-body system, including its correlation time. Our work opens the door to new directions in both quantum sensing and simulation. | quantum physics |
Multiple and combined endpoints involving also non-normal outcomes appear in many clinical trials in various areas of medicine. In some cases, the outcome can be observed only on an ordinal or dichotomous scale. Then the success of two therapies is assessed by comparing the outcome of two randomly selected patients from the two therapy groups by 'better', 'equal' or 'worse'. These outcomes can be described by the probabilities $p^-=P(X<Y)$, $p_0=P(X=Y)$, and $p^+=P(X>Y)$. For a clinician, however, these quantities are less intuitive. Therefore, Noether (1987) introduced the quantity $\lambda=p^+ / p^-$ assuming continuous distributions. The same quantity was used by Pocock et al. (2012) and by Wang and Pocock (2016) also for general non-normal outcomes and has been called the 'win-ratio' $\lambda_{WR}$. Unlike Noether (1987), Wang and Pocock (2016) explicitly allowed for ties in the data. It is the aim of this manuscript to investigate the properties of $\lambda_{WR}$ in the case of ties. It turns out that it has the strange property of becoming larger if the data are observed less accurately, i.e. include more ties. Thus, in the case of ties, the win-ratio loses its appealing property of describing and quantifying an intuitive and well-interpretable treatment effect. Therefore, a slight modification is suggested, namely the so-called 'success-odds' $\lambda_{SO} = \theta / (1-\theta)$, where $\theta=p^+ + \frac12 p_0$ and a therapy is called successful if $\theta>\frac12$. In the case of no ties, $\lambda_{SO}$ is identical to $\lambda_{WR}$. A test for the hypothesis $\lambda_{SO}=1$ and range-preserving confidence intervals for $\lambda_{SO}$ are derived. By two counterexamples it is demonstrated that generalizations of both the win-ratio and the success-odds to more than two treatments or to stratified designs are not straightforward and need more detailed considerations. | statistics |
The $Z_2\times Z_2$ heterotic string orbifold gives rise to a large space of phenomenological three generation models that serves as a testing ground to explore how the Standard Model of particle physics may be incorporated in a theory of quantum gravity. Recently, we demonstrated the existence of type 0 $Z_2\times Z_2$ heterotic string orbifolds in which there are no massless fermionic states. In this paper we demonstrate the existence of non--supersymmetric tachyon--free $Z_2\times Z_2$ heterotic string orbifolds that do not contain any massless bosonic states from the twisted sectors. We dub these configurations type ${\bar 0}$ models. They necessarily contain untwisted bosonic states, producing the gravitational, gauge and scalar moduli degrees of freedom, but possess an excess of massless fermionic states over bosonic ones, hence producing a positive cosmological constant. Such configurations may be instrumental when trying to understand the string dynamics in the early universe. | high energy physics theory |
We discuss the spin, the angular momentum, and the magnetic moment of rotating chiral fermions using a kinetic theory. We find that, in addition to the chiral vortical contribution along the rotation axis, finite circular spin polarization is induced by the spin-momentum correlation of chiral fermions, which is canceled by a change in the orbital angular momentum. We point out that the eddy magnetic moment is nonvanishing due to the $g$-factors, exhibiting the chiral Barnett effect. | high energy physics phenomenology |
This study examines the impact of turbulent mixing on horizontal density compensation in the upper ocean. A series of simulations model the role of mixing in scenarios initialized with geostrophically-adjusted compensated and uncompensated thermohaline gradients. Numerical experiments isolate the influence of mixing on these gradients using idealized conditions with zero surface heat and momentum flux. Prompted by theoretical considerations and observed consequences of mixing, a new non-linear horizontal diffusion scheme is introduced as an alternative to the standard Laplacian diffusion. Results suggest that when horizontal mixing is parameterized using constant diffusivities, horizontal density compensation is substantially unchanged as the gradients erode. Simulations using the new scheme, which parameterizes mixing with horizontal diffusivities scaled by squared buoyancy gradient, suggest that horizontal mixing can produce compensated gradients during this decay, but only at scales of 10 km and less. Reducing vertical mixing to small background values has a similar effect, increasing the degree of compensation at submesoscales. Reproducing observed compensated thermohaline variability in the mixed layer at scales greater than 10 km requires external forcing. These results show important influences of mixing on density compensation within the ageostrophic submesoscale regime. In the transition from a horizontally compensated mixed layer to a partially compensated thermocline, advection must play an important role; mixing alone is insufficient. | physics |
The British party system is known for its discipline and cohesion, but it remains wedged on one issue: European integration. This was observed both in the days of the EEC in the 1970s and of the EU-Maastricht treaty in the 1990s; this work aims to investigate whether it holds true in the Brexit era. We utilise social network analysis to unpack the patterns of dissent and rebellion among pairs of MPs. Using data from Hansard, we compute similarity scores between pairs of MPs from June 2017 until April 2019 and visualise them in a force-directed network. Comparing Brexit and non-Brexit divisions, we analyse whether patterns of voting similarity and polarity differ among pairs of MPs. Our results show that Brexit causes a wedge in party politics, consistent with what is observed in history. | physics |
Calibration error is commonly adopted for evaluating the quality of uncertainty estimators in deep neural networks. In this paper, we argue that such a metric is highly beneficial for training predictive models, even when we do not explicitly measure the uncertainties. This is conceptually similar to heteroscedastic neural networks that produce variance estimates for each prediction, with the key difference that we do not place a Gaussian prior on the predictions. We propose a novel algorithm that performs simultaneous interval estimation for different calibration levels and effectively leverages the intervals to refine the mean estimates. Our results show that our approach is consistently superior to existing regularization strategies in deep regression models. Finally, we propose to augment partial dependence plots, a model-agnostic interpretability tool, with expected prediction intervals to reveal interesting dependencies between data and the target. | statistics |
The electric quadrupole-quadrupole ($\mathcal{E}_{qq}$) interaction is believed to play an important role in the broken symmetry transition from Phase I to II in solid hydrogen. To evaluate this, we study structures adopted by purely classical quadrupoles using Markov Chain Monte Carlo simulations of fcc and hcp quadrupolar lattices. Both undergo first-order phase transitions from rotationally ordered to disordered structures, as indicated by a discontinuity in both quadrupole interaction energy ($\mathcal{E}_{qq}$) and its heat capacity. Cooling fcc reliably induced a transition to the P$a3$ structure, whereas cooling hcp gave inconsistent, frustrated and $c/a$-ratio-dependent broken symmetry states. Analysing the lowest-energy hcp states using simulated annealing, we found P$6_3/m$ and P$ca2_1$ structures found previously as minimum-energy structures in full electronic structure calculations. The candidate structures for hydrogen Phases III-V were not observed. This demonstrates that $\mathcal{E}_{qq}$ is the dominant interaction determining the symmetry breaking in Phase II. The disorder transition occurs at significantly lower temperature in hcp than fcc, showing that the $\mathcal{E}_{qq}$ cannot be responsible for hydrogen Phase II being based on hcp. | condensed matter |
The classical NP-hard feedback arc set problem (FASP) and feedback vertex set problem (FVSP) ask for a minimum set of arcs $\varepsilon \subseteq E$ or vertices $\nu \subseteq V$ whose removal $G\setminus \varepsilon$, $G\setminus \nu$ makes a given multi-digraph $G=(V,E)$ acyclic, respectively. Though both problems are known to be APX-hard, approximation algorithms or proofs of inapproximability are unknown. We propose a new $\mathcal{O}(|V||E|^4)$-heuristic for the directed FASP. While a ratio of $r \approx 1.3606$ is known to be a lower bound for the APX-hardness, at least by empirical validation we achieve an approximation of $r \leq 2$. The most relevant applications, such as circuit testing, ask for solving the FASP on large sparse graphs, which can be done efficiently within tight error bounds due to our approach. | computer science |
We present work on creating a synthetic population from census data for Australia, applied to the greater Melbourne region. We use a sample-free approach to population synthesis that does not rely on a disaggregate sample from the original population. The inputs for our algorithm are joint marginal distributions from census of desired person-level and household-level attributes, and outputs are a set of comma-separated-value (.csv) files containing the full synthetic population of unique individuals in households; with age, gender, relationship status, household type, and size, matched to census data. Our algorithm is efficient in that it can create the synthetic population for Melbourne comprising 4.5 million persons in 1.8 million households within three minutes on a modern computer. Code for the algorithm is hosted on GitHub. | statistics |
As experimental null results increase the pressure on heavy weakly interacting massive particles (WIMPs) as an explanation of thermal dark matter (DM), it seems timely to explore previously overlooked regions of the WIMP parameter space. In this work we extend the minimal gauged $U(1)_{L_\mu-L_\tau}$ model studied in \cite{Bauer:2018onh} by a light (MeV-scale) vector-like fermion $\chi$. Taking into account constraints from cosmology, direct and indirect detection we find that the standard benchmark of $M_V=3 m_\chi$ for DM coupled to a vector mediator is firmly ruled out for unit DM charges. However, exploring the near-resonance region $M_V\gtrsim 2 m_\chi$ we find that this model can simultaneously explain the DM relic abundance $\Omega h^2 =0.12$ and the $(g-2)_\mu$ anomaly. Allowing for small charge hierarchies of $\lesssim\mathcal{O}(10)$, we identify a second window of parameter space in the few-GeV region, where $\chi$ can account for the full DM relic density. | high energy physics phenomenology |
We discuss the implementation of the LHC experimental data sets in the new CT18 global analysis of quantum chromodynamics (QCD) at the next-to-next-to-leading order of the QCD coupling strength. New developments in the fitting methodology are discussed. The behavior of the CT18 NNLO PDFs for the conventional and "saturation-inspired" factorization scales in deep-inelastic scattering is reviewed. Four new families of (N)NLO CTEQ-TEA PDFs are presented: CT18, A, X, and Z. | high energy physics phenomenology |
We compute the Galois group of the splitting field $F$ of any irreducible and separable polynomial $f(x)=x^6+ax^3+b$ with $a,b\in K$, a field with characteristic different from two. The proofs require distinguishing two cases: whether or not the cubic roots of unity belong to $K$. We also give a criterion to determine whether a polynomial of the form $f(x)$ is irreducible when $F$ is a finite field. Moreover, at the end of the paper we give a complete list of all the possible subfields of $F$. | mathematics |
The aim of decentralized gradient descent (DGD) is to minimize a sum of $n$ functions held by interconnected agents. We study the stability of DGD in open contexts where agents can join or leave the system, resulting each time in the addition or the removal of their function from the global objective. Assuming all functions are smooth, strongly convex, and their minimizers all lie in a given ball, we characterize the sensitivity of the global minimizer of the sum of these functions to the removal or addition of a new function and provide bounds in $ O\left(\min \left(\kappa^{0.5}, \kappa/n^{0.5},\kappa^{1.5}/n\right)\right)$ where $\kappa$ is the condition number. We also show that the states of all agents can be eventually bounded independently of the sequence of arrivals and departures. The magnitude of the bound scales with the importance of the interconnection, which also determines the accuracy of the final solution in the absence of arrival and departure, exposing thus a potential trade-off between accuracy and sensitivity. Our analysis relies on the formulation of DGD as gradient descent on an auxiliary function. The tightness of our results is analyzed using the PESTO Toolbox. | mathematics |
The impasse surface is an important concept in the differential-algebraic equation (DAE) model of power systems, which is associated with short-term voltage collapse. This paper establishes a necessary condition for a system trajectory hitting the impasse surface. The condition is in terms of admittance matrices regarding the power network, generators and loads, which specifies the pattern of interaction between those system components that can induce voltage collapse. It applies to generic DAE models featuring high-order synchronous generators, static load components, induction motors and a lossy power network. We also identify a class of static load parameters that prevents power systems from hitting the impasse surface; this proves a conjecture made by Hiskens that has been unsolved for decades. Moreover, the obtained results lead to an early indicator of voltage collapse and a novel viewpoint that inductive compensation has a positive effect on preventing short-term voltage collapse, which are verified via numerical simulations. | mathematics |
We put forward the idea that classical blockchains and smart contracts are potentially useful primitives not only for classical cryptography, but for quantum cryptography as well. Abstractly, a smart contract is a functionality that allows parties to deposit funds, and release them upon fulfillment of algorithmically checkable conditions, and can thus be employed as a formal tool to enforce monetary incentives. In this work, we give the first example of the use of smart contracts in a quantum setting. We describe a simple hybrid classical-quantum payment system whose main ingredients are a classical blockchain capable of handling stateful smart contracts, and quantum lightning, a strengthening of public-key quantum money introduced by Zhandry [Eurocrypt 2019]. Our hybrid payment system uses quantum states as banknotes and a classical blockchain to settle disputes and to keep track of the valid serial numbers. It has several desirable properties: it is decentralized, requiring no trust in any single entity; payments are as quick as quantum communication, regardless of the total number of users; when a quantum banknote is damaged or lost, the rightful owner can recover the lost value. | quantum physics |
LUX-ZEPLIN (LZ) is a second-generation direct dark matter experiment with spin-independent WIMP-nucleon scattering sensitivity above $1.4 \times 10^{-48}$ cm$^{2}$ for a WIMP mass of 40 GeV/c$^{2}$ and a 1000 d exposure. LZ achieves this sensitivity through a combination of a large 5.6 t fiducial volume, active inner and outer veto systems, and radio-pure construction using materials with inherently low radioactivity content. The LZ collaboration performed an extensive radioassay campaign over a period of six years to inform material selection for construction and provide an input to the experimental background model against which any possible signal excess may be evaluated. The campaign and its results are described in this paper. We present assays of dust and radon daughters depositing on the surface of components as well as cleanliness controls necessary to maintain background expectations through detector construction and assembly. Finally, examples from the campaign to highlight fixed contaminant radioassays for the LZ photomultiplier tubes, quality control and quality assurance procedures through fabrication, radon emanation measurements of major sub-systems, and bespoke detector systems to assay scintillator are presented. | physics |
The next-to-minimal supersymmetric standard model (NMSSM) with non-universal Higgs masses, or the semi-constrained NMSSM (scNMSSM), extends the minimal supersymmetric standard model (MSSM) by a singlet superfield and assumes universal conditions except for the Higgs sector. It not only keeps the simplicity and elegance of the fully constrained MSSM and NMSSM and relaxes the tension they face after the discovery of the 125-GeV Higgs boson, but also predicts an exotic phenomenon: Higgs decay to a pair of light singlet-dominated scalars ($10\!\sim\! 60\;{\rm GeV}$). This situation can be classified into three scenarios according to the identities of the SM-like Higgs and the light scalar: (i) the light scalar is CP-odd, and the SM-like Higgs is $h_2$; (ii) the light scalar is CP-odd, and the SM-like Higgs is $h_1$; (iii) the light scalar is CP-even, and the SM-like Higgs is $h_2$. In this work, we compare the three scenarios, checking the interesting parameter schemes that lead to them, the mixing levels of the doublets and singlets, the tri-scalar coupling between the SM-like Higgs and a pair of light scalars, the branching ratio of Higgs decay to the light scalars, and the sensitivities in hunting for the exotic decay at the HL-LHC and at future lepton colliders such as CEPC, FCC-ee, and ILC. | high energy physics phenomenology |
Motivated by the collective behaviour of biological swarms, we study the critical dynamics of field theories with coupling between order parameter and conjugate momentum in the presence of dissipation. By performing a dynamical renormalization group calculation at one loop, we show that the violation of momentum conservation generates a crossover between a conservative yet IR-unstable fixed point, characterized by a dynamic critical exponent $z=d/2$, and a dissipative IR-stable fixed point with $z=2$. Interestingly, the two fixed points have different upper critical dimensions. The interplay between these two fixed points gives rise to a crossover in the critical dynamics of the system, characterized by a crossover exponent $\kappa=4/d$. Such crossover is regulated by a conservation length scale, $\mathcal R_0$, which is larger the smaller the dissipation: beyond $\mathcal R_0$ the dissipative fixed point dominates, while at shorter distances dynamics is ruled by the conservative fixed point and critical exponent, a behaviour which is all the more relevant in finite-size systems with weak dissipation. We run numerical simulations in three dimensions and find a crossover between the exponents $z=3/2$ and $z=2$ in the critical slowing down of the system, confirming the renormalization group results. From the biophysical point of view, our calculation indicates that in finite-size biological groups mode-coupling terms in the equation of motion can significantly change the dynamical critical exponents even in the presence of dissipation, a step towards reconciling theory with experiments in natural swarms. Moreover, our result provides the scale within which fully conservative Bose-Einstein condensation is a good approximation in systems with weak symmetry-breaking terms violating number conservation, as quantum magnets or photon gases. | condensed matter |
We report the results of searching for pulsar-like candidates among the unidentified objects in the $3^{\rm rd}$ Catalog of Hard Fermi-LAT sources (3FHL). Using a machine-learning based classification scheme with a nominal accuracy of $\sim98\%$, we have selected 27 pulsar-like objects from 200 unidentified 3FHL sources for an identification campaign. Using archival data, X-ray sources are found within the $\gamma$-ray error ellipses of 10 3FHL pulsar-like candidates. Within the error circles of the much better constrained X-ray positions, we have also searched for the optical/infrared counterparts and examined their spectral energy distributions. Among our short-listed candidates, the most secure identification is the association of 3FHL J1823.3-1339 and its X-ray counterpart with the globular cluster Mercer 5. The $\gamma$-rays from the source can be contributed by a population of millisecond pulsars residing in the cluster. This makes Mercer 5 one of the slowly growing population of globular clusters with hard $\gamma$-ray emission $>10$ GeV. Very recently, another candidate picked by our classification scheme, 3FHL J1405.1-6118, has been identified as a new $\gamma$-ray binary with an orbital period of $13.7$ days. Our X-ray analysis with a short Chandra observation has found a possible periodic signal candidate of $\sim1.4$ hrs and a putative extended X-ray tail of $\sim20$ arcsec in length. The spectral energy distribution of its optical/infrared counterpart conforms with a blackbody of $T_{\rm bb}\sim40000$ K and $R_{\rm bb}\sim12R_{\odot}$ at a distance of 7.7 kpc. This is consistent with its identification as an early O star, as found by infrared spectroscopy. | astrophysics |
Superconducting gmon qubits allow for highly tuneable quantum computing devices. Optimally controlled evolution of these systems is of considerable interest. We determine the optimal dynamical protocols for the generation of the maximally entangled W state of three qubits from an easily prepared initial product state. These solutions are found by simulated annealing. Using the connection to the Pontryagin's minimum principle, we fully characterize the patterns of these ``bang-bang'' protocols, which shortcut the adiabatic evolution. The protocols are remarkably robust, facilitating the development of high-performance three-qubit quantum gates. | quantum physics |
The dynamics of the next quantum jump for a qubit [two-level system] coupled to a readout resonator [damped driven harmonic oscillator] is calculated. A quantum mechanical treatment of the readout resonator reveals non-exponential short-time behavior, which could facilitate detection of the state of the qubit faster than the resonator lifetime. | quantum physics |
Objective: State of the art navigation systems for pelvic osteotomies use optical systems with external fiducials. We propose the use of X-Ray navigation for pose estimation of periacetabular fragments without fiducials. Methods: A 2D/3D registration pipeline was developed to recover fragment pose. This pipeline was tested through an extensive simulation study and 6 cadaveric surgeries. Using osteotomy boundaries in the fluoroscopic images, the preoperative plan is refined to more accurately match the intraoperative shape. Results: In simulation, average fragment pose errors were 1.3{\deg}/1.7 mm when the planned fragment matched the intraoperative fragment, 2.2{\deg}/2.1 mm when the plan was not updated to match the true shape, and 1.9{\deg}/2.0 mm when the fragment shape was intraoperatively estimated. In cadaver experiments, the average pose errors were 2.2{\deg}/2.2 mm, 3.8{\deg}/2.5 mm, and 3.5{\deg}/2.2 mm when registering with the actual fragment shape, a preoperative plan, and an intraoperatively refined plan, respectively. Average errors of the lateral center edge angle were less than 2{\deg} for all fragment shapes in simulation and cadaver experiments. Conclusion: The proposed pipeline is capable of accurately reporting femoral head coverage within a range clinically identified for long-term joint survivability. Significance: Human interpretation of fragment pose is challenging and usually restricted to rotation about a single anatomical axis. The proposed pipeline provides an intraoperative estimate of rigid pose with respect to all anatomical axes, is compatible with minimally invasive incisions, and has no dependence on external fiducials. | computer science |
This paper concerns a convex, stochastic zeroth-order optimization (S-ZOO) problem, where the objective is to minimize the expectation of a cost function and its gradient is not accessible directly. To solve this problem, traditional optimization techniques mostly yield query complexities that grow polynomially with dimensionality, i.e., the number of function evaluations is a polynomial function of the number of decision variables. Consequently, these methods may not perform well in solving massive-dimensional problems arising in many modern applications. Although more recent methods can be provably dimension-insensitive, almost all of them work with arguably more stringent conditions such as everywhere sparse or compressible gradient. Thus, prior to this research, it was unknown whether dimension-insensitive S-ZOO is possible without such conditions. In this paper, we give an affirmative answer to this question by proposing a sparsity-inducing stochastic gradient-free (SI-SGF) algorithm. It is proved to achieve dimension-insensitive query complexity in both convex and strongly convex cases when neither gradient sparsity nor gradient compressibility is satisfied. Our numerical results demonstrate the strong potential of the proposed SI-SGF compared with existing alternatives. | mathematics |
We present a ground-state cooling scheme for the mechanical degrees of freedom of mesoscopic magnetic particles levitated in low-frequency traps. Our method makes use of a binary sensor and suitably shaped pulses to perform weak, adaptive measurements on the position of the magnet. This allows us to precisely determine the position and momentum of the particle, transforming the initial high-entropy thermal state into a pure coherent state. The energy is then extracted by shifting the trap center. By delegating the task of energy extraction to a coherent displacement operation we overcome the limitations associated with cooling schemes that rely on the dissipation of a two-level system coupled to the oscillator. We numerically benchmark our protocol in realistic experimental conditions, including heating rates and imperfect readout fidelities, showing that it is well suited for magneto-gravitational traps operating at cryogenic temperatures. Our results pave the way for ground-state cooling of micron-scale particles. | quantum physics |
Low power wide area network technologies (LPWANs) are attracting attention because they fulfill the need for long range low power communication for the Internet of Things. LoRa is one of the proprietary LPWAN physical layer (PHY) technologies, which provides variable data-rate and long range by using chirp spread spectrum modulation. This paper describes the basic LoRa PHY receiver algorithms and studies their performance. The LoRa PHY is first introduced and different demodulation schemes are proposed. The effect of carrier frequency offset and sampling frequency offset are then modeled and corresponding compensation methods are proposed. Finally, a software-defined radio implementation for the LoRa transceiver is briefly presented. | computer science |
We introduce MTT, a dependent type theory which supports multiple modalities. MTT is parametrized by a mode theory which specifies a collection of modes, modalities, and transformations between them. We show that different choices of mode theory allow us to use the same type theory to compute and reason in many modal situations, including guarded recursion, axiomatic cohesion, and parametric quantification. We reproduce examples from prior work in guarded recursion and axiomatic cohesion -- demonstrating that MTT constitutes a simple and usable syntax whose instantiations intuitively correspond to previous handcrafted modal type theories. In some cases, instantiating MTT to a particular situation unearths a previously unknown type theory that improves upon prior systems. Finally, we investigate the metatheory of MTT. We prove the consistency of MTT and establish canonicity through an extension of recent type-theoretic gluing techniques. These results hold irrespective of the choice of mode theory, and thus apply to a wide variety of modal situations. | computer science |
Scale variance among different sizes of body parts and objects is a challenging problem for visual recognition tasks. Existing works usually design a dedicated backbone or apply Neural Architecture Search (NAS) for each task to tackle this challenge. However, existing works impose significant limitations on the design or search space. To solve these problems, we present ScaleNAS, a one-shot learning method for exploring scale-aware representations. ScaleNAS solves multiple tasks at a time by searching multi-scale feature aggregation. ScaleNAS adopts a flexible search space that allows an arbitrary number of blocks and cross-scale feature fusions. To cope with the high search cost incurred by the flexible space, ScaleNAS employs one-shot learning for a multi-scale supernet driven by grouped sampling and evolutionary search. Without further retraining, ScaleNet can be directly deployed for different visual recognition tasks with superior performance. We use ScaleNAS to create high-resolution models for two different tasks, ScaleNet-P for human pose estimation and ScaleNet-S for semantic segmentation. ScaleNet-P and ScaleNet-S outperform existing manually crafted and NAS-based methods in both tasks. When applying ScaleNet-P to bottom-up human pose estimation, it surpasses the state-of-the-art HigherHRNet. In particular, ScaleNet-P4 achieves 71.6% AP on COCO test-dev, achieving a new state-of-the-art result. | computer science |
We conjecture a formula for the refined $\mathrm{SU}(3)$ Vafa-Witten invariants of any smooth surface $S$ satisfying $H_1(S,\mathbb{Z}) = 0$ and $p_g(S)>0$. The unrefined formula corrects a proposal by Labastida-Lozano and involves unexpected algebraic expressions in modular functions. We prove that our formula satisfies a refined $S$-duality modularity transformation. We provide evidence for our formula by calculating virtual $\chi_y$-genera of moduli spaces of rank 3 stable sheaves on $S$ in examples using Mochizuki's formula. Further evidence is based on the recent definition of refined $\mathrm{SU}(r)$ Vafa-Witten invariants by Maulik-Thomas and subsequent calculations on nested Hilbert schemes by Thomas (rank 2) and Laarakker (rank 3). | mathematics |
We consider the effects of a non-vanishing strange-quark mass in the determination of the full basis of dimension six matrix elements for $B_{s}$ mixing, in particular we get for the ratio of the $V-A$ Bag parameter in the $B_s$ and $B_d$ system: $\overline{B}^s_{Q_1} / \overline{B}^d_{Q_1} = 0.987^{+0.007}_{-0.009}$. Combining these results with the most recent lattice values for the ratio of decay constants $f_{B_s} / f_{B_d}$ we obtain the most precise determination of the ratio $\xi = f_{B_s} \sqrt{\overline{B}^s_{Q_1}}/ f_{B_d} \sqrt{\overline{B}^d_{Q_1}} = 1.2014^{+0.0065}_{-0.0072}$ in agreement with recent lattice determinations. We find $\Delta M_s=(18.5_{-1.5}^{+1.2})\text{ps}^{-1}$ and $\Delta M_d=(0.547_{-0.046}^{+0.035})\text{ps}^{-1}$ to be consistent with experiments to within one sigma. Assuming the validity of the SM, our calculation can be used to directly determine the ratio of CKM elements $|V_{td} / V_{ts} | = 0.2045^{+0.0012}_{-0.0013}$, which is compatible with the results from the CKM fitting groups, but again more precise. | high energy physics phenomenology |
In this paper, new digital predistortion (DPD) solutions for power amplifier (PA) linearization are proposed, with particular emphasis on reduced processing complexity in future 5G and beyond wideband radio systems. The first proposed method, referred to as the spline-based Hammerstein (SPH) approach, builds on a complex spline-interpolated lookup table (LUT) followed by a linear finite impulse response (FIR) filter. The second proposed method, the spline-based memory polynomial (SMP) approach, contains multiple parallel complex spline-interpolated LUTs together with an input delay line such that more versatile memory modeling can be achieved. For both structures, gradient-based learning algorithms are derived to efficiently estimate the LUT control points and other related DPD parameters. A large set of experimental results is provided, with specific focus on 5G New Radio (NR) systems, showing successful linearization of multiple sub-6 GHz PA samples as well as a 28 GHz active antenna array, incorporating channel bandwidths up to 200 MHz. Explicit performance-complexity comparisons are also reported between the SPH and SMP DPD systems and the widely-applied ordinary memory-polynomial (MP) DPD solution. The results show that the linearization capabilities of the proposed methods are very close to that of the ordinary MP DPD, particularly with the proposed SMP approach, while having substantially lower processing complexity. | electrical engineering and systems science |
The acceleration of a light buoyant object in a fluid is analyzed. Misconceptions about the magnitude of that acceleration are briefly described and refuted. The notion of the added mass is explained and the added mass is computed for an ellipsoid of revolution. A simple approximation scheme is employed to derive the added mass of a slender body. The slender-body limit is non-analytic, indicating a singular character of the perturbation due to the thickness of the body. An experimental determination of the acceleration is presented and found to agree well with the theoretical prediction. The added mass illustrates the concept of mass renormalization in an accessible manner. | physics |
We study the dynamics of bosonic and fermionic anyons defined on a one-dimensional lattice, under the effect of Hamiltonians quadratic in creation and annihilation operators, commonly referred to as linear optics. These anyonic models are obtained from deformations of the standard bosonic or fermionic commutation relations via the introduction of a non-trivial exchange phase between different lattice sites. We study the effects of the anyonic exchange phase on the usual bosonic and fermionic bunching behaviors. We show how to exploit the inherent Aharonov-Bohm effect exhibited by these particles to build a deterministic, entangling two-qubit gate and prove quantum computational universality in these systems. We define coherent states for bosonic anyons and study their behavior under two-mode linear-optical devices. In particular we prove that, for a specific value of the exchange factor, an anyonic mirror can generate cat states, an important resource in quantum information processing with continuous variables. | quantum physics |
We demonstrate that certain vortices in spinor Bose-Einstein condensates are non-Abelian anyons and may be useful for topological quantum computation. We perform numerical experiments of controllable braiding and fusion of such vortices, implementing the actions required for manipulating topological qubits. Our results suggest that a new platform for topological quantum information processing could potentially be developed by harnessing non-Abelian vortex anyons in spinor Bose-Einstein condensates. | condensed matter |
We study the bimodal Edwards-Anderson spin glass comparing established methods, namely the multicanonical method, the $1/k$-ensemble and parallel tempering, to an approach where the ensemble is modified by simulating power-law-shaped histograms in energy instead of flat histograms as in the standard multicanonical case. We show that by this modification a significant speed-up in terms of mean round-trip times can be achieved for all lattice sizes taken into consideration. | condensed matter |
In many mechatronic applications, controller input costs are negligible and time optimality is of great importance to maximize productivity by executing fast positioning maneuvers. As a result, the obtained control input has mostly a bang-bang nature, which excites undesired mechanical vibrations, especially in systems with flexible structures. This paper tackles the time-optimal control problem and proposes a novel approach, which explicitly addresses the vibrational behavior in the context of the receding horizon technique. Such a technique is a key feature, especially for systems with a time-varying vibrational behavior. In the context of model predictive control (MPC), vibrational behavior is predicted and coped with in a soft-constrained formulation, which penalizes any violation of undesired vibrations. This formulation enlarges the feasibility on a wide operating range in comparison with a hard-constrained formulation. The closed-loop performance of this approach is demonstrated on a numerical example of a stacker crane with a high degree of flexibility. | electrical engineering and systems science |
Variational autoencoders (VAEs) have shown promise in data-driven conversation modeling. However, most VAE conversation models match the approximate posterior distribution over the latent variables to a simple prior such as the standard normal distribution, thereby restricting the generated responses to a relatively simple (e.g., unimodal) scope. In this paper, we propose DialogWAE, a conditional Wasserstein autoencoder (WAE) specially designed for dialogue modeling. Unlike VAEs that impose a simple distribution over the latent variables, DialogWAE models the distribution of data by training a GAN within the latent variable space. Specifically, our model samples from the prior and posterior distributions over the latent variables by transforming context-dependent random noise using neural networks and minimizes the Wasserstein distance between the two distributions. We further develop a Gaussian mixture prior network to enrich the latent space. Experiments on two popular datasets show that DialogWAE outperforms the state-of-the-art approaches in generating more coherent, informative and diverse responses. | computer science |
Techniques to manipulate the individual constituents of an ultracold mixture are key to investigating impurity physics. In this work, we confine a mixture of the hyperfine ground states of Rb-87 in a double-well potential. The potential is produced by dressing the atoms with multiple radiofrequencies. The amplitude and phase of each frequency component of the dressing field are individually controlled to independently manipulate each species. Furthermore, we verify that our mixture of hyperfine states is collisionally stable, with no observable inelastic loss. | condensed matter |
We report the anisotropic magnetic properties of the ternary compound ErAl$_2$Ge$_2$. Single crystals of this compound were grown by the high temperature solution growth technique, using an Al:Ge eutectic composition as flux. From the powder x-ray diffraction we confirmed that ErAl$_2$Ge$_2$ crystallizes in the trigonal CaAl$_2$Si$_2$-type crystal structure. The anisotropic magnetic properties of a single crystal were investigated by measuring the magnetic susceptibility, magnetization, heat capacity and electrical resistivity. A bulk magnetic ordering occurs around 4 K, inferred from the magnetic susceptibility and the heat capacity. The magnetization measured along the $ab$-plane increases more rapidly than along the $c$-axis, suggesting the basal $ab$-plane as the easy plane of magnetization. The magnetic susceptibility, magnetization and the $4f$-derived part of the heat capacity in the paramagnetic regime, analysed based on the point charge model of the crystalline electric field (CEF), indicate a relatively low CEF energy level splitting. | condensed matter |
Silicon Photo-Multipliers (SiPMs) are detectors sensitive to single photons that are used to detect scintillation and Cherenkov light in a variety of physics and medical-imaging applications. SiPMs measure single photons by amplifying the photo-generated carriers (electrons or holes) via a Geiger-mode avalanche. The Photon Detection Efficiency (PDE) is the combined probability that a photon is absorbed in the active volume of the device with a subsequently triggered avalanche. Absorption and avalanche triggering probabilities are correlated since the latter probability depends on where the photon is absorbed. In this paper, we introduce a physics motivated parameterization of the avalanche triggering probability that describes the PDE of a SiPM as a function of its reverse bias voltage, at different wavelengths. This parameterization is based on the fact that in p-on-n SiPMs the induced avalanches are electron-driven in the ultra-violet and near-ultra-violet ranges, while they become increasingly hole-driven towards the near-infra-red range. The model has been successfully applied to characterize two Hamamatsu MPPCs and one FBK SiPM, and it can be extended to other SiPMs. Furthermore, this model provides key insight on the electric field structure within SiPMs, which can explain the limitation of existing devices and be used to optimize the performance of future SiPMs. | physics |
We present Frequency Marching, FM, an algorithm that refines three-dimensional electron density distributions from solution X-ray scattering data in both the small- and wide-angle regimes. This algorithm is based on a series of optimization steps, marching along the frequency (reciprocal) space and refining detailed periodic structures with the corresponding real-space resolution. Buffer subtraction and excluded volumes, key factors in extracting the signatures of the biomolecule of interest from the sample, are accounted for using implicit density models. We provide the numerical and analytical basis of the FM algorithm. We demonstrate this technique by application to structured and unstructured nucleic acid systems, where higher resolution features are carved out of low resolution reconstructions as the algorithm marches into wider angles. | physics |
Graphs can be associated with a matrix according to some rule and we can find the spectrum of a graph with respect to that matrix. Two graphs are cospectral if they have the same spectrum. Constructions of cospectral graphs help us establish patterns about structural information not preserved by the spectrum. We generalize a construction for cospectral graphs previously given for the distance Laplacian matrix to a larger family of graphs. In addition, we show that with appropriate assumptions this generalized construction extends to the adjacency matrix, combinatorial Laplacian matrix, signless Laplacian matrix, normalized Laplacian matrix, and distance matrix. | mathematics |
Searchable Symmetric Encryption (SSE) allows a data owner to securely outsource its encrypted data to a cloud server while maintaining the ability to search over it and retrieve matched documents. Most existing SSE schemes leak which documents are accessed per query, i.e., the so-called access pattern, and thus are vulnerable to attacks that can recover the database or the queried keywords. Current techniques that fully hide access patterns, such as ORAM or PIR, suffer from heavy communication or computational costs, and are not designed with search capabilities in mind. Recently, Chen et al. (INFOCOM'18) proposed an obfuscation framework for SSE that protects the access pattern in a differentially private way with a reasonable utility cost. However, this scheme leaks the so-called search pattern, i.e., how many times a certain query is performed. This leakage makes the proposal vulnerable to certain database and query recovery attacks. In this paper, we propose OSSE (Obfuscated SSE), an SSE scheme that obfuscates the access pattern independently for each query performed. This in turn hides the search pattern and makes our scheme resistant against attacks that rely on this leakage. Under certain reasonable assumptions, our scheme has smaller communication overhead than ORAM-based SSE. Furthermore, our scheme works in a single communication round and requires very small constant client-side storage. Our empirical evaluation shows that OSSE is highly effective at protecting against different query recovery attacks while keeping a reasonable utility level. Our protocol provides significantly more protection than the proposal by Chen et al. against some state-of-the-art attacks, which demonstrates the importance of hiding search patterns in designing effective privacy-preserving SSE schemes. | computer science |
CP violation in the lepton sector, and other aspects of neutrino physics, are studied within a high scale supersymmetry model. In addition to the sneutrino vacuum expectation values (VEVs), the heavy vector-like triplet also contributes to neutrino masses. Phases of the VEVs of relevant fields, complex couplings and Zino mass are considered. The approximate degeneracy of neutrino masses $m_{\nu_1}$ and $m_{\nu_2}$ can be naturally understood. The neutrino masses are then normal ordered, $\sim$ 0.020 eV, 0.022 eV, and 0.054 eV. Large CP violation in neutrino oscillations is favored. The effective Majorana mass of the electron neutrino is about 0.02 eV. | high energy physics phenomenology |
Communication complexity and privacy are the two key challenges in Federated Learning, where the goal is to perform distributed learning across a large number of devices. In this work, we introduce the FedSKETCH and FedSKETCHGATE algorithms to address both challenges in Federated Learning jointly, where these algorithms are intended to be used for homogeneous and heterogeneous data distribution settings, respectively. The key idea is to compress the accumulation of local gradients using a count sketch; therefore, the server does not have access to the gradients themselves, which provides privacy. Furthermore, due to the lower dimension of the sketching used, our method exhibits the communication-efficiency property as well. We provide, for the aforementioned schemes, sharp convergence guarantees. Finally, we back up our theory with a diverse set of experiments. | statistics |
The spatial non-uniformity of the electric field in air discharges, such as streamers, can influence the accuracy of spectroscopic diagnostic methods and hence the estimation of the peak electric field. In this work, we use a self-consistent streamer discharge model to investigate the spatial non-uniformity in streamer heads and streamer glows. We focus our analysis on air discharges at atmospheric pressure and at the low pressure of the mesosphere. This approach is useful to investigate the spatial non-uniformity of laboratory discharges as well as sprite streamers and blue jet streamers, two types of Transient Luminous Event (TLE) taking place above thunderclouds. This characterization of the spatial non-uniformity of the electric field in air discharges allows us to develop two different spectroscopic diagnostic methods to estimate the peak electric field in cold plasmas. The commonly employed method to derive the peak electric field in streamer heads underestimates the electric field by about 40-50% as a consequence of the high spatial non-uniformity of the electric field. Our diagnostic methods reduce this underestimation to about 10-20%. However, our methods are less accurate than previous methods for streamer glows, where the electric field is uniformly distributed in space. Finally, we apply our diagnostic methods to the measured optical signals in the Second Positive System of $N_2$ and the First Negative System of $N_2^+$ of sprites recorded by Armstrong et al. (1998) during the SPRITE's 95 and 96 campaigns. | physics |
In this paper we list all possible degrees of a faithful transitive permutation representation of the group of symmetries of a regular map of types $\{4,4\}$ and $\{3,6\}$ and we give examples of graphs, called CPR-graphs, representing some of these permutation representations. | mathematics |
Most neutron stars are expected to be born in supernovae, but only about half of supernova remnants (SNRs) are associated with a compact object. In many cases, a supernova progenitor may have resulted in a black hole. However, there are several possible reasons why true pulsar-SNR associations may have been missed in previous surveys: The pulsar's radio beam may not be oriented towards us; the pulsar may be too faint to be detectable; or there may be an offset in the pulsar position caused by a kick. Our goal is to find new pulsars in SNRs and explore their possible association with the remnant. The search and selection of the remnants presented in this paper was inspired by the non-detection of any X-ray bright compact objects in these remnants when previously studied. Five SNRs were searched for radio pulsars with the Green Bank Telescope at 820 MHz with multiple pointings to cover the full spatial extent of the remnants. A periodicity search, an acceleration search up to 500 m/s^2, and a single pulse search were performed for each pointing in order to detect potential isolated pulsars, binary pulsars, and single pulses, respectively. No new pulsars were detected in the survey. However, we were able to re-detect a known pulsar, PSR J2047+5029, near SNR G89.0+4.7. We were unable to detect the radio-quiet gamma-ray pulsar PSR J2021+4026, but we do find a flux density limit of 0.08 mJy. Our flux density limits make our survey two to 16 times more sensitive than previous surveys, while also covering the whole spatial extent of the same remnants. We discuss potential explanations for the non-detection of a pulsar in the studied SNRs and conclude that sensitivity is still the most likely factor responsible for the lack of pulsars in some remnants. | astrophysics |
Consider a population evolving from year to year through three seasons: spring, summer and winter. Every spring starts with $N$ dormant individuals waking up independently of each other according to a given distribution. Once an individual is awake, it starts reproducing at a constant rate. By the end of spring, all individuals are awake and continue reproducing independently as Yule processes during the whole summer. In the winter, $N$ individuals chosen uniformly at random go to sleep until the next spring, and the other individuals die. We show that because an individual that wakes up unusually early can have a large number of surviving descendants, for some choices of model parameters the genealogy of the population will be described by a $\Lambda$-coalescent. In particular, the beta coalescent can describe the genealogy when the rate at which individuals wake up increases exponentially over time. We also characterize the set of all $\Lambda$-coalescents that can arise in this framework. | mathematics |
Precision measurements of quantum systems often seek to probe or must account for the interaction with blackbody radiation. Over the past several decades, much attention has been given to AC Stark shifts and stimulated state transfer. For a blackbody in thermodynamic equilibrium, these two effects are determined by the expectation value of photon number in each mode of the Planck spectrum. Here, we explore how the photon number variance of an equilibrium blackbody generally leads to a parametric broadening of the energy levels of quantum systems that is inversely proportional to the square root of the blackbody volume. We consider the effect in two cases which are potentially highly sensitive to this broadening: Rydberg atoms and atomic clocks. We find that even in blackbody volumes as small as 1\,cm$^3$, this effect is unlikely to contribute meaningfully to transition linewidths. | physics |
Energy beamforming has emerged as a promising technique for enhancing the energy transfer efficiency of wireless power transfer (WPT). However, the performance of conventional energy beamforming may seriously degrade due to the non-linear radio frequency (RF) to direct current (DC) conversion at energy receivers (ERs). To tackle this issue, this letter proposes a new time-division energy beamforming, in which different energy beamforming matrices (of high ranks in general) are time shared to exploit the "convex-concave" shape of the RF-DC power relation at ERs. By considering a particular time duration for WPT, we maximize the minimum harvested DC energy among all ERs, by jointly optimizing the energy beamforming matrices and the corresponding time allocation. In order to solve the non-convex min-DC-energy maximization problem, we propose an efficient solution by using the techniques of alternating optimization and successive convex approximation (SCA). Numerical results show that the proposed time-division energy beamforming design indeed outperforms the conventional multi-beam and time-division-multiple-access (TDMA)-based energy transmissions. | electrical engineering and systems science |
Spin to pseudo-spin conversion, by which spin population imbalance converts to non-equilibrium pseudo-spin density in Dirac systems, has been investigated, particularly for graphene and the insulator phase of silicene. Calculations have been performed within the Kubo approach and by taking into account the vertex correction. Results indicate that spin converts to pseudo-spin in either graphene or silicene, which is identified to originate from the spin-orbit interactions. The response function of spin to pseudo-spin conversion is weakened by several orders of magnitude by the vertex correction of impurities in graphene; however, this conversion is strengthened in insulating silicene. In addition, in the case of silicene, results are indicative of an obvious change in the mentioned response function as a result of the change in band topology, which can be observed by manipulation of an external electric field, vertically applied to the system surface. At the critical electric field at which the topological phase transition for silicene nano-ribbon has been observed, the response function changes abruptly. Interconversion between the quantum numbers could provide a field for information and data processing technologies. | condensed matter |
The Enskog--Vlasov (EV) equation is a semi-empiric kinetic model describing gas-liquid phase transitions. In the framework of the EV equation, these correspond to an instability with respect to infinitely long perturbations, developing in a gas state when the temperature drops below (or density rises above) a certain threshold. In this paper, we show that the EV equation describes one more instability, with respect to perturbations with a finite wavelength and occurring at a higher density. This instability corresponds to fluid-solid phase transition and the perturbations' wavelength is essentially the characteristic scale of the emerging crystal structure. Thus, even though the EV model does not describe the fundamental physics of the solid state, it can `mimic' it -- and, thus, be used in applications involving both evaporation and solidification of liquids. Our results also predict to which extent a pure fluid can be overcooled before it definitely turns into a solid. | condensed matter |
The anti-kaon nucleon scattering lengths resulting from a Hamiltonian effective field theory analysis of experimental data and lattice QCD studies are presented. The same Hamiltonian is then used to compute the scattering length for the $K^- d$ system, taking careful account of the effects of recoil on the energy at which the $\bar{K}N$ T-matrices are evaluated. These results are then used to estimate the shift and width of the $1S$ levels of anti-kaonic hydrogen and deuterium. The $K^- p$ result is in excellent agreement with the SIDDHARTA measurement. In the $K^- d$ case the imaginary part of the scattering length and consequently the width of the $1S$ state are considerably larger than found in earlier work. This is a consequence of the effect of recoil on the energy of the $\bar{K}N$ energy, which enhances the role of the $\Lambda(1405)$ resonance. | high energy physics phenomenology |
Automatically finding good and general remote sensing representations allows transfer learning to be performed on a wide range of applications - improving the accuracy and reducing the required number of training samples. This paper investigates the development of generic remote sensing representations, and explores which characteristics are important for a dataset to be a good source for representation learning. For this analysis, five diverse remote sensing datasets are selected and used for both disjoint upstream representation learning and downstream model training and evaluation. A common evaluation protocol is used to establish baselines for these datasets that achieve state-of-the-art performance. As the results indicate, especially with a low number of available training samples, a significant performance enhancement can be observed when additionally including in-domain data in comparison to training models from scratch or fine-tuning only on ImageNet (up to 11% and 40%, respectively, at 100 training samples). All datasets and pretrained representation models are published online. | computer science |
In this paper, we construct a framework for investigating magnetohydrodynamical jet structure of spinning black holes (BHs), where electromagnetic fields and fluid motion are governed by the Grad-Shafranov equation and the Bernoulli equation, respectively. Assuming steady and axisymmetric jet structure, we can self-consistently obtain electromagnetic fields, fluid energy density and velocity within the jet, given proper plasma loading and boundary conditions. Specifically, we structure the two coupled governing equations as two eigenvalue problems, and develop full numerical techniques for solving them. As an example, we explicitly solve the governing equations for the split monopole magnetic field configuration and simplified plasma loading on the stagnation surface where the poloidal fluid velocity vanishes. As expected, we find the rotation of magnetic field lines is dragged down by fluid inertia, and the fluid as a whole does not contribute to energy extraction from the central BH, i.e., the magnetic Penrose process is not working. However, if we decompose the charged fluid as two oppositely charged components, we find the magnetic Penrose process does work for one of the two components when the plasma loading is low enough. | astrophysics |
Is the evaporation of a black hole described by a unitary theory? In order to shed light on this question, especially aspects such as a black hole's negative specific heat, we consider the real-time dynamics of a solitonic object in matrix quantum mechanics, which can be interpreted as a black hole (black zero-brane) via holography. We point out that the chaotic nature of the system combined with the flat directions of its potential naturally leads to the emission of D0-branes from the black brane, which is suppressed in the large $N$ limit. Simple arguments show that the black zero-brane, like the Schwarzschild black hole, has negative specific heat, in the sense that the temperature goes up when it evaporates by emitting D0-branes. While the largest Lyapunov exponent grows during the evaporation, the Kolmogorov-Sinai entropy decreases. These are consequences of the generic properties of matrix models and gauge theory. Based on these results, we give a possible geometric interpretation of the eigenvalue distribution of matrices in terms of gravity. Applying the same argument in the M-theory parameter region, we provide a scenario to derive the Hawking radiation of massless particles from the Schwarzschild black hole. Finally, we suggest that by adding a fraction of the quantum effects to the classical theory, we can obtain a matrix model whose classical time evolution mimics the entire life of the black brane, from its formation to the evaporation. | high energy physics theory
We propose a novel representation of differential scattering cross-sections that locally realises the direct cancellation of infrared singularities exhibited by its so-called real-emission and virtual degrees of freedom. We take advantage of the Loop-Tree Duality representation of each individual forward-scattering diagram and we prove that the ensuing expression is locally free of infrared divergences, applies at any perturbative order and for any process without initial-state collinear singularities. Divergences for loop momenta with large magnitudes are regulated using local ultraviolet counterterms that reproduce the usual Lagrangian renormalisation procedure of quantum field theories. Our representation is especially suited for a numerical implementation and we demonstrate its practical potential by computing fully numerically and without any IR counterterm the next-to-leading order accurate differential cross-section for the process $e^+ e^- \rightarrow d \bar{d}$. We also show first results beyond next-to-leading order by computing interference terms that are part of the N4LO-accurate inclusive cross-section of a $1\rightarrow 2+X$ scalar scattering process. | high energy physics phenomenology
Superconductors that possess both broken spatial inversion symmetry and spin-orbit interactions exhibit a mix of spin singlet and triplet pairing. Here, we report on measurements of the superconducting properties of electron-doped, strained SrTiO3 films. These films have an enhanced superconducting transition temperature and were previously shown to undergo a transition to a polar phase prior to becoming superconducting. We show that some films show signatures of an unusual superconducting state, such as an in-plane critical field that is higher than both the paramagnetic and orbital pair breaking limits. Moreover, nonreciprocal transport, which reflects the ratio of odd versus even pairing interactions, is observed. Together, these characteristics indicate that these films provide a tunable platform for investigations of unconventional superconductivity. | condensed matter |
We investigate domain formation and local morphology of thin films of $\alpha$-sexithiophene ($\alpha$-6T) on Au(100) beyond monolayer coverage by combining high resolution scanning tunneling microscopy (STM) experiments with electronic structure theory calculations and computational structure search. We report a layerwise growth of highly-ordered enantiopure domains. For the second and third layer, we show that the molecular orbitals of individual $\alpha$-6T molecules can be well resolved by STM, providing access to detailed information on the molecular orientation. We find that already in the second layer the molecules abandon the flat adsorption structure of the monolayer and adopt a tilted conformation. Although the observed tilted arrangement resembles the orientation of $\alpha$-6T in the bulk, the observed morphology does not yet correspond to a well-defined surface of the $\alpha$-6T bulk structure. A similar behavior is found for the third layer indicating a growth mechanism where the bulk structure is gradually adopted over several layers. | condensed matter |
We consider $W^+W^-$ production in hadronic collisions and present the computation of next-to-next-to-leading order accurate predictions consistently matched to parton showers (NNLO+PS) using the MiNNLO$_{\rm PS}$ method. Spin correlations, interferences and off-shell effects are included by calculating the full process $pp \to e^+\nu_e \mu^-\bar{\nu}_\mu$. This is the first NNLO+PS calculation for $W^+W^-$ production that does not require an a-posteriori multi-differential reweighting. The evaluation time of the two-loop contribution has been reduced by more than one order of magnitude through a four-dimensional cubic spline interpolation. We find good agreement with the inclusive and fiducial cross sections measured by ATLAS and CMS. Both NNLO corrections and matching to parton showers are important for an accurate simulation of the $W^+W^-$ signal, and their matching provides the best description of fully exclusive $W^+W^-$ events to date. | high energy physics phenomenology |
Homotopy algebra and its involutive generalisation play an important role in the construction of string field theory. I will review recent progress in these applications of homotopy algebra and its relation to moduli spaces. | high energy physics theory
We consider the correlator of two concentric circular Wilson loops with equal radii for arbitrary spatial and internal separation at strong coupling within a defect version of $\mathcal{N}=4$ SYM. Compared to the standard Gross-Ooguri phase transition between connected and disconnected minimal surfaces, a more complicated pattern of saddle-points contributes to the two-circles correlator due to the defect's presence. We analyze the transitions between different kinds of minimal surfaces and their dependence on the setting's numerous parameters. | high energy physics theory |
In this work we consider the inverse problem of reconstructing the optical properties of a layered medium from an elastography measurement where optical coherence tomography is used as the imaging method. We hereby model the sample as a linear dielectric medium so that the imaging parameter is given by its electric susceptibility, which is a frequency- and depth-dependent parameter. In addition to the layered structure (assumed to be valid at least in the small illuminated region), we allow for small scatterers which we consider to be randomly distributed, a situation which seems more realistic than purely homogeneous layers. We then show that a unique reconstruction of the susceptibility of the medium (after averaging over the small scatterers) can be achieved from optical coherence tomography measurements for different compression states of the medium. | mathematics
We investigate the emission of single photons from CdSe/CdS dot-in-rods which are optically trapped in the focus of a deep parabolic mirror. Thanks to this mirror, we are able to image almost the full 4$\pi$ emission pattern of nanometer-sized elementary dipoles and verify the alignment of the rods within the optical trap. From the motional dynamics of the emitters in the trap we infer that the single-photon emission occurs from clusters comprising several emitters. We demonstrate the optical trapping of rod-shaped quantum emitters in a configuration suitable for efficiently coupling an ensemble of linear dipoles with the electromagnetic field in free space. | quantum physics |
In this paper, we study the asymptotic behavior as $\varepsilon\to0^+$ of solutions $u_\varepsilon$ to the nonlocal stationary Fisher-KPP type equation $$ \frac{1}{\varepsilon^m}\int_{\mathbb{R}^N}J_\varepsilon(x-y)(u_\varepsilon(y)-u_\varepsilon(x))\,\mathrm{d}y+u_\varepsilon(x)(a(x)-u_\varepsilon(x))=0\quad\text{in }\mathbb{R}^N, $$ where $\varepsilon>0$ and $0\leq m<2$. Under rather mild assumptions and using very little technology, we prove that there exists one and only one positive solution $u_\varepsilon$ and that $u_\varepsilon\to a^+$ as $\varepsilon\to0^+$, where $a^+=\max\{0,a\}$. This generalizes the previously known results and answers an open question raised by Berestycki, Coville and Vo. Our method of proof is also of independent interest as it shows how to reduce this nonlocal problem to a local one. The sharpness of our assumptions is also briefly discussed. | mathematics
A prerequisite for many quantum information processing tasks to truly surpass classical approaches is an efficient procedure to encode classical data in quantum superposition states. In this work, we present a circuit-based flip-flop quantum random access memory to construct a quantum database of classical information in a systematic and flexible way. For registering or updating classical data consisting of $M$ entries, each represented by $n$ bits, the method requires $O(n)$ qubits and $O(Mn)$ steps. With post-selection at an additional cost, our method can also store continuous data as probability amplitudes. As an example, we present a procedure to convert classical training data for a quantum supervised learning algorithm to a quantum state. Further improvements can be achieved by reducing the number of state preparation queries with the introduction of quantum forking. | quantum physics |
We examine current collider constraints on some simple $Z^\prime$ models that fit neutral current $B-$anomalies, including constraints coming from measurements of Standard Model (SM) signatures at the LHC. The `MDM' simplified model is not constrained by the SM measurements but {\em is} strongly constrained by a 139 fb$^{-1}$ 13 TeV ATLAS di-muon search. Constraints upon the `MUM' simplified model are much weaker. A combination of the current $B_s$ mixing constraint and ATLAS' $Z^\prime$ search implies $M_{Z^\prime}>1.2$ TeV in the Third Family Hypercharge Model example case. LHC SM measurements rule out a portion of the parameter space of the model for $M_{Z^\prime}<1.5$ TeV. | high energy physics phenomenology |
One of the main challenges in current systems neuroscience is the analysis of high-dimensional neuronal and behavioral data that are characterized by different statistics and timescales of the recorded variables. We propose a parametric copula model which separates the statistics of the individual variables from their dependence structure, and escapes the curse of dimensionality by using vine copula constructions. We use a Bayesian framework with Gaussian Process (GP) priors over copula parameters, conditioned on a continuous task-related variable. We validate the model on synthetic data and compare its performance in estimating mutual information against the commonly used non-parametric algorithms. Our model provides accurate information estimates when the dependencies in the data match the parametric copulas used in our framework. When the exact density estimation with a parametric model is not possible, our Copula-GP model is still able to provide reasonable information estimates, close to the ground truth and comparable to those obtained with a neural network estimator. Finally, we apply our framework to real neuronal and behavioral recordings obtained in awake mice. We demonstrate the ability of our framework to 1) produce accurate and interpretable bivariate models for the analysis of inter-neuronal noise correlations or behavioral modulations; 2) expand to more than 100 dimensions and measure information content in the whole-population statistics. These results demonstrate that the Copula-GP framework is particularly useful for the analysis of complex multidimensional relationships between neuronal, sensory and behavioral data. | statistics |
We systematically perform hydrodynamics simulations of 20 km s^-1 converging flows of the warm neutral medium (WNM) to calculate the formation of the cold neutral medium (CNM), especially focusing on the mean properties of the multiphase interstellar medium (ISM), such as the average shock front position and the mean density on a 10 pc scale. Our results show that the convergence in those mean properties requires 0.02 pc spatial resolution that resolves the cooling length of the thermally unstable neutral medium (UNM) to follow the dynamical condensation from the WNM to CNM. We also find that two distinct post-shock states appear in the mean properties depending on the amplitude of the upstream WNM density fluctuation (= sqrt(<drho^2>)/rho_0). When the amplitude > 10 %, the interaction between shocks and density inhomogeneity leads to a strong driving of the post-shock turbulence of > 3 km s^-1, which dominates the energy budget in the shock-compressed layer. The turbulence prevents the dynamical condensation by cooling and the following CNM formation, and the CNM mass fraction remains as ~ 45 %. In contrast, when the amplitude <= 10 %, the shock fronts maintain an almost straight geometry and CNM formation efficiently proceeds, resulting in the CNM mass fraction of ~ 70 %. The velocity dispersion is limited to the thermal-instability mediated level of 2 - 3 km s^-1 and the layer is supported by both turbulent and thermal energy equally. We also propose an effective equation of state that models the multiphase ISM formed by the WNM converging flow as a one-phase ISM. | astrophysics |
This paper proposes a learning method to construct an efficient sensing (measurement) matrix, having orthogonal rows, for compressed sensing of a class of signals. The learning scheme identifies the sensing matrix by maximizing the entropy of measurement vectors. The bounds on the entropy of the measurement vector necessary for the unique recovery of a signal are also proposed. A comparison of the performance of the designed sensing matrix and the sensing matrices constructed using other existing methods is also presented. The simulation results on the recovery of synthetic, speech, and image signals, compressively sensed using the sensing matrix identified, show an improvement in the accuracy of recovery. The reconstruction quality is better, using fewer measurements, than that obtained using sensing matrices identified by other methods. | electrical engineering and systems science
The purpose of this paper is twofold. First, we use a classical method to establish Gaussian bounds of the fundamental matrix of a generalized parabolic Lam\'{e} system with only bounded and measurable coefficients. Second, we derive a maximal $L^1$ regularity result for the abstract Cauchy problem associated with a composite operator. In a concrete example, we also obtain maximal $L^1$ regularity for the Lam\'{e} system, from which it follows that the Lipschitz seminorm of the solutions to the Lam\'{e} system is globally $L^1$-in-time integrable. As an application, we use a Lagrangian approach to prove a global-in-time well-posedness result for a viscous pressureless flow provided that the initial velocity satisfies a scaling-invariant smallness condition. The method established in this paper might be a powerful tool for studying many issues arising from viscous fluids with truly variable densities. | mathematics |
The Mahalanobis distance-based confidence score, a recently proposed anomaly detection method for pre-trained neural classifiers, achieves state-of-the-art performance on both out-of-distribution (OoD) and adversarial examples detection. This work analyzes why this method exhibits such strong performance in practical settings while imposing an implausible assumption; namely, that class conditional distributions of pre-trained features have tied covariance. Although the Mahalanobis distance-based method is claimed to be motivated by classification prediction confidence, we find that its superior performance stems from information not useful for classification. This suggests that the claimed motivation for the Mahalanobis confidence score is mistaken, and that the score makes use of different information from ODIN, another popular OoD detection method based on prediction confidence. This perspective motivates us to combine these two methods, and the combined detector exhibits improved performance and robustness. These findings provide insight into the behavior of neural classifiers in response to anomalous inputs. | statistics
Ly$\alpha$-emitting galaxies (LAEs) are easily detectable in the high-redshift Universe and are potentially efficient tracers of large scale structure at early epochs, as long as their observed properties do not strongly depend on environment. We investigate the luminosity and equivalent width functions of LAEs in the overdense field of a protocluster at redshift $z \simeq 3.78$. Using a large sample of LAEs (many spectroscopically confirmed), we find that the Ly$\alpha$ luminosity distribution is well-represented by a Schechter (1976) function with $\log(L^{\ast}/{\rm erg s^{-1}}) = 43.26^{+0.20}_{-0.22}$ and $\log(\phi^{\ast}/{\rm Mpc^{-3}})=-3.40^{+0.03}_{-0.04}$ with $\alpha=-1.5$. Fitting the equivalent width distribution as an exponential, we find a scale factor of $\omega=79^{+15}_{-15}\: \mathring{A}$. We also measured the Ly$\alpha$ luminosity and equivalent width functions using the subset of LAEs lying within the densest cores of the protocluster, finding similar values for $L^*$ and $\omega$. Hence, despite having a mean overdensity more than 2$\times$ that of the general field, the shape of the Ly$\alpha$ luminosity function and equivalent width distributions in the protocluster region are comparable to those measured in the field LAE population by other studies at similar redshift. While the observed Ly$\alpha$ luminosities and equivalent widths show correlations with the UV continuum luminosity in this LAE sample, we find that these are likely due to selection biases and are consistent with no intrinsic correlations within the sample. This protocluster sample supports the strong evolutionary trend observed in the Ly$\alpha$ escape fraction and suggests that lower redshift LAEs are on average significantly more dusty than their counterparts at higher redshift. | astrophysics
We investigate the localization of two incoherent point sources with arbitrary angular and axial separations in the paraxial approximation. By using quantum metrology techniques, we show that a simultaneous estimation of the two separations is achievable by a single quantum measurement, with a precision saturating the ultimate limit stemming from the quantum Cram\'er-Rao bound. Such a precision is not degraded in the sub-wavelength regime, thus overcoming the traditional limitations of classical direct imaging derived from Rayleigh's criterion. Our results are qualitatively independent of the point spread function of the imaging system, and quantitatively illustrated in detail for the Gaussian instance. This analysis may have relevant applications in three-dimensional surface measurements. | quantum physics |
This paper is devoted to an inverse Steklov problem for a particular class of n-dimensional manifolds having the topology of a hollow sphere and equipped with a warped product metric. We prove that the knowledge of the Steklov spectrum determines uniquely the associated warping function up to a natural invariance. | mathematics |
Friedel's law guarantees an inversion-symmetric diffraction pattern for thin, light materials where a kinematic approximation or a single-scattering model holds. Typically, breaking Friedel symmetry is ascribed to multiple scattering events within thick, non-centrosymmetric crystals. However, two-dimensional (2D) materials such as a single monolayer of MoS$_2$ can also violate Friedel's law, with unexpected contrast between conjugate Bragg peaks. We show analytically that retaining higher order terms in the power series expansion of the scattered wavefunction can describe the anomalous contrast between $hkl$ and $\overline{hkl}$ peaks that occurs in 2D crystals with broken in-plane inversion symmetry. These higher-order terms describe multiple scattering paths starting from the same atom in an atomically thin material. Furthermore, 2D materials containing heavy elements, such as WS$_2$, always act as strong phase objects, violating Friedel's law no matter how high the energy of the incident electron beam. Experimentally, this understanding can enhance diffraction-based techniques to provide rapid imaging of polarity, twin domains, in-plane rotations, or other polar textures in 2D materials. | condensed matter |
Image-to-image translation, which translates input images to a different domain with a learned one-to-one mapping, has achieved impressive success in recent years. The success of translation mainly relies on the network architecture to preserve the structural information while modifying the appearance slightly at the pixel level through adversarial training. Although these networks are able to learn the mapping, the translated images are predictable without exception. It is more desirable to diversify them using image-to-image translation by introducing uncertainties, i.e., the generated images hold potential for variations in colors and textures in addition to the general similarity to the input images, and this happens in both the target and source domains. To this end, we propose a novel generative adversarial network (GAN) based model, InjectionGAN, to learn a many-to-many mapping. In this model, the input image is combined with latent variables, which comprise of domain-specific attribute and unspecific random variations. The domain-specific attribute indicates the target domain of the translation, while the unspecific random variations introduce uncertainty into the model. A unified framework is proposed to regroup these two parts and obtain diverse generations in each domain. Extensive experiments demonstrate that the diverse generations have high quality for the challenging image-to-image translation tasks where no pairing information of the training dataset exists. Both quantitative and qualitative results prove the superior performance of InjectionGAN over the state-of-the-art approaches. | computer science
Under special conditions, we prove that the set of preperiodic points for semigroups of self-morphisms of affine spaces falling on cyclotomic closures is not dense, generalising results of Ostafe and Young (2020). We also extend previous results about the boundedness of house and height on certain preperiodicity sets of higher dimension in semigroup dynamics. | mathematics
In present-day quantum communications, one of the main problems is the lack of a quantum repeater design that can simultaneously secure high rates and long distances. Recent literature has established the end-to-end capacities that are achievable by the most general protocols for quantum and private communication within a quantum network, encompassing the case of a quantum repeater chain. However, whether or not a physical design exists to approach such capacities remains a challenging objective. Driven by this motivation, in this work, we put forward a design for continuous-variable quantum repeaters and show that it can actually achieve the feat. We also show that even in a noisy regime our rates surpass the Pirandola-Laurenza-Ottaviani-Banchi (PLOB) bound. Our repeater setup is developed upon using noiseless linear amplifiers, quantum memories, and continuous-variable Bell measurements. We, furthermore, propose a non-ideal model for continuous-variable quantum memories that we make use of in our design. We then show that potential quantum communications rates would deviate from the theoretical capacities, as one would expect, if the quantum link is too noisy and/or low-quality quantum memories and amplifiers are employed. | quantum physics |
This paper is concerned with the recovery of (approximate) solutions to parabolic problems from incomplete and possibly inconsistent observational data, given on a time-space cylinder that is a strict subset of the computational domain under consideration. Unlike previous approaches to this and related problems our starting point is a regularized least squares formulation in a continuous infinite-dimensional setting that is based on stable variational time-space formulations of the parabolic PDE. This allows us to derive a priori as well as a posteriori error bounds for the recovered states with respect to a certain reference solution. In these bounds the regularization parameter is disentangled from the underlying discretization. An important ingredient for the derivation of a posteriori bounds is the construction of suitable Fortin operators which allow us to control oscillation errors stemming from the discretization of dual norms. Moreover, the variational framework allows us to contrive preconditioners for the discrete problems whose application can be performed in linear time, and for which the condition numbers of the preconditioned systems are uniformly proportional to that of the regularized continuous problem. In particular, we provide suitable stopping criteria for the iterative solvers based on the a posteriori error bounds. The presented numerical experiments quantify the theoretical findings and demonstrate the performance of the numerical scheme in relation with the underlying discretization and regularization. | mathematics |
Risk-limiting audits (RLAs) for many social choice functions can be reduced to testing sets of null hypotheses of the form "the average of this list is not greater than 1/2" for a collection of finite lists of nonnegative numbers. Such social choice functions include majority, super-majority, plurality, multi-winner plurality, Instant Runoff Voting (IRV), Borda count, approval voting, and STAR-Voting, among others. The audit stops without a full hand count iff all the null hypotheses are rejected. The nulls can be tested in many ways. Ballot-polling is particularly simple; two new ballot-polling risk-measuring functions for sampling without replacement are given. Ballot-level comparison audits transform each null into an equivalent assertion that the mean of re-scaled tabulation errors is not greater than 1/2. In turn, that null can then be tested using the same statistical methods used for ballot polling---but applied to different finite lists of nonnegative numbers. SHANGRLA comparison audits are more efficient than previous comparison audits for two reasons: (i) for most social choice functions, the conditions tested are both necessary and sufficient for the reported outcome to be correct, while previous methods tested conditions that were sufficient but not necessary, and (ii) the tests avoid a conservative approximation. The SHANGRLA abstraction simplifies stratified audits, including audits that combine ballot polling with ballot-level comparisons, producing sharper audits than the "SUITE" approach. SHANGRLA works with the "phantoms to evil zombies" strategy to treat missing ballot cards and missing or redacted cast vote records. That also facilitates sampling from "ballot-style manifests," which can dramatically improve efficiency when the audited contests do not appear on every ballot card. Open-source software implementing SHANGRLA ballot-level comparison audits is available. | statistics |
We present and develop a general dispersive framework allowing us to construct representations of the amplitudes for the processes $P\pi\to\pi\pi$, $P=K,\eta$, valid at the two-loop level in the low-energy expansion. The construction proceeds through a two-step iteration, starting from the tree-level amplitudes and their S and P partial-wave projections. The one-loop amplitudes are obtained for all possible configurations of pion masses. The second iteration is presented in detail in the cases where either all masses of charged and neutral pions are equal, or for the decay into three neutral pions. Issues related to analyticity properties of the amplitudes and of their lowest partial-wave projections are given particular attention. This study is introduced by a brief survey of the situation, for both experimental and theoretical aspects, of the decay modes into three pions of charged and neutral kaons and of the eta meson. | high energy physics phenomenology |
Security is one of the biggest concerns in power system operation. Recently, the emerging cyber security threats to operational functions of power systems have aroused high public attention, and cybersecurity vulnerability has thus become an emerging topic for evaluating compromised operational performance under cyber attack. In this paper, the vulnerability of the cyber security of the load frequency control (LFC) system, which is a key component in the energy management system (EMS), is assessed by exploiting the system response to attacks on LFC variables/parameters. Two types of attacks are considered for evaluation: 1) injection attacks and 2) scale attacks. Two evaluation criteria reflecting the damage on system stability and power generation are used to quantify system loss under cyber attacks. Through a sensitivity-based method and attack tree models, the vulnerability of different LFC components is ranked. In addition, a post-intrusion cyber attack detection scheme is proposed. Classification-based schemes using typical classification algorithms are studied and compared to identify different attack scenarios. | electrical engineering and systems science
We investigate the non-linear response and energy absorption in bulk silicon irradiated by intense 12-fs near-infrared laser pulses. Depending on the laser intensity, we distinguish two regimes of non-linear absorption of the laser energy: for low intensities, energy deposition and photoionization involve perturbative three-photon transition through the direct bandgap of silicon. For laser intensities near and above 10$^{14}$ W/cm$^2$, corresponding to photocarrier density of order 10$^{22}$ cm$^{-3}$, we find that absorption at near-infrared wavelengths is greatly enhanced due to excitation of bulk plasmon resonance. In this regime, the energy transfer to electrons exceeds a few times the thermal melting threshold of Si. The optical reflectivity of the photoexcited solid is found in good qualitative agreement with existing experimental data. In particular, the model predicts that the main features of the reflectivity curve of photoexcited Si as a function of the laser fluence are determined by the competition between state and band filling associated with Pauli exclusion principle and Drude free-carrier response. The non-linear response of the photoexcited solid is also investigated for irradiation of silicon with a sequence of two strong and temporary non-overlapping pulses. The cumulative effect of the two pulses is non-additive in terms of deposited energy. Photoionization and energy absorption on the leading edge of the second pulse is greatly enhanced due to free carrier absorption. | physics |
We investigate numerically the effect of Kerr nonlinearity on the transmission spectrum of a one-dimensional $\delta$-function photonic crystal. It is found that the photonic band gap (PBG) width either increases or decreases depending on both the sign and the strength of the Kerr nonlinearity. We find that any amount of self-focusing nonlinearity $(\alpha >0)$ leads to an increase of the PBG width, resulting in light localization. For defocusing nonlinearity, however, we find a range of nonlinearity strengths for which the PBG width decreases as the nonlinearity strength increases, and a critical nonlinearity strength $|\alpha_{c}|$ above which this behaviour is reversed. At this critical value the photonic crystal becomes transparent and the photonic band gap is suppressed. We have also studied the dependence of the transmission spectrum of our one-dimensional photonic crystal on the angle of incidence and polarization. We find that the minimum of the transmission increases with the incident angle but appears to be polarization-independent. We also find that the position of the PBG shifts to shorter wavelengths with increasing angle of incidence for the TE mode, while it shifts to longer wavelengths for the TM mode. | physics
We present the design of a novel instrument tuned to detect transiting exoplanet atmospheres. The instrument, which we call the exoplanet transmission spectroscopy imager (ETSI), makes use of a new technique called common-path multi-band imaging (CMI). ETSI uses a prism and a multi-band filter to simultaneously image 15 spectral bandpasses on two detectors from $430$-$975\,$nm (with an average spectral resolution of $R = \lambda/\Delta\lambda = 23$) during exoplanet transits of a bright star. A prototype of the instrument achieved photon-noise-limited results below the atmospheric amplitude scintillation noise limit. ETSI can detect the presence and composition of an exoplanet atmosphere in a relatively short time on a modest-size telescope. We show the optical design of the instrument. Further, we discuss design trade-offs of the prism and multi-band filter, which are driven by the science goals of the ETSI instrument. We describe the upcoming survey with ETSI that will measure dozens of exoplanet atmosphere spectra in $\sim2$ years on a two-meter telescope. Finally, we discuss how ETSI will be a powerful means of follow-up on all gas giant exoplanets that transit bright stars, including a multitude of recently identified TESS (NASA's Transiting Exoplanet Survey Satellite) exoplanets. | astrophysics
In this article, we study the extremal processes of branching Brownian motions conditioned on having an unusually large maximum. The limiting point measures form a one-parameter family and are the decoration point measures in the extremal processes of several branching processes, including branching Brownian motions with variable speed and multitype branching Brownian motions. We give a new, alternative representation of these point measures and show that they form a continuous family. This also yields a simple probabilistic expression for the constant that appears in the large-deviation probability of having a large displacement. As an application, we show that the results of Bovier and Hartung (2015) on variable-speed branching Brownian motion also describe the extremal point process of branching Ornstein-Uhlenbeck processes. | mathematics
In this work, learning schemes for measure-valued data are proposed, i.e. data whose structure is more efficiently represented as probability measures than as points in $\mathbb{R}^d$, employing the concept of probability barycenters defined with respect to the Wasserstein metric. Such learning approaches are highly valuable in many fields where the observational/experimental error is significant (e.g. astronomy, biology, remote sensing, etc.) or where the nature of the data is more complex, so that traditional learning algorithms are not applicable or effective (e.g. network data, interval data, high-frequency records, matrix data, etc.). Under this perspective, each observation is identified with an appropriate probability measure, and the proposed statistical learning schemes rely on discrimination criteria that exploit the geometric structure of the space of probability measures through core techniques from optimal transport theory. The discussed approaches are implemented in two real-world applications: (a) clustering eurozone countries according to their observed government bond yield curves and (b) classifying the areas of a satellite image into certain land-use categories, a standard task in remote sensing. In both case studies the results are particularly interesting and meaningful, and the accuracy obtained is high. | statistics
We report a detailed investigation of the existence and stability of mixed and demixed modes in binary atomic Bose-Einstein condensates with repulsive interactions in a ring-trap geometry. The stability of such states is examined through eigenvalue spectra for small perturbations, produced by the Bogoliubov-de Gennes equations, and directly verified by simulations based on the coupled Gross-Pitaevskii equations, varying inter- and intra-species scattering lengths so as to probe the entire range of miscibility-immiscibility transitions. In the limit of the one-dimensional (1D) ring, i.e., a very narrow one, the stability of mixed states is studied analytically, including hidden-vorticity (HV) modes, i.e., those with opposite vorticities of the two components and zero total angular momentum. The consideration of demixed 1D states reveals, in addition to stable composite single-peak structures, double- and triple-peak ones above a certain particle-number threshold. In the 2D annular geometry, stable demixed states exist in both radial and azimuthal configurations. We find that stable radially-demixed states can carry arbitrary vorticity and, counter-intuitively, that increasing the vorticity enhances the stability of such states, while unstable ones evolve into randomly oscillating angular demixed modes. The consideration of HV states in the 2D geometry expands the stability range of radially-demixed states. | condensed matter
An $SO(5) \times U(1) \times SU(3)$ gauge-Higgs unification model inspired by $SO(11)$ gauge-Higgs grand unification is constructed in the Randall-Sundrum warped space. The 4D Higgs boson is identified with the Aharonov-Bohm phase in the fifth dimension. Fermion multiplets are introduced in the bulk in the spinor, vector and singlet representations of $SO(5)$ such that they are implemented in the spinor and vector representations of $SO(11)$. The mass spectrum of quarks and leptons in three generations is reproduced except for the down quark mass. The small neutrino masses are explained by the gauge-Higgs seesaw mechanism, which takes the same form as the inverse seesaw mechanism in grand unified theories in four dimensions. | high energy physics phenomenology
Knot theory provides a powerful tool for understanding topological matter in biology, chemistry, and physics. Here knot theory is introduced to describe topological phases in a quantum spin system. Exactly solvable models with long-range interactions are investigated, and the Majorana modes of the quantum spin system are mapped into different knots and links. The topological properties of the ground states of the spin system are visualized and characterized using crossing and linking numbers, which capture the geometric topologies of knots and links. The intertwining of the energy bands is highlighted. In gapped phases, eigenstate curves are tangled and braided around each other, forming links. In gapless phases, the tangled eigenstate curves may form knots. Our findings provide an alternative understanding of the phases in the quantum spin system and offer insights into one-dimensional topological phases of matter. | condensed matter
Literary theme identification and interpretation is a focal point of literary studies scholarship. Classical forms of literary scholarship, such as close reading, have flourished with scarcely any need for commonly defined literary themes. However, the rise in popularity of collaborative and algorithmic analyses of literary themes in works of fiction, together with the requirement for computational searching and indexing facilities for large corpora, creates the need for a collection of shared literary themes to ensure common terminology and definitions. To address this need, we here introduce a first draft of the Literary Theme Ontology. Inspired by a traditional framing from literary theory, the ontology comprises literary themes drawn from the authors' own analyses, reference books, and online sources. The ontology is available at https://github.com/theme-ontology/lto under a Creative Commons Attribution 4.0 International license (CC BY 4.0). | computer science
This is the documentation of the 3D dynamic tomographic X-ray (CT) data of a ladybug. The open data set is available at https://zenodo.org/record/3375488#.XV_T3vxS9oA and can be freely used for scientific purposes with appropriate references to the data and to this document at http://arxiv.org/ | physics