text: string, lengths 11 to 9.77k
label: string, lengths 2 to 104
More computational resources (i.e., more physical qubits and qubit connections) on a superconducting quantum processor not only improve performance but also result in a more complex chip architecture with a lower yield rate. Optimizing both simultaneously is difficult due to their intrinsic trade-off. Inspired by the application-specific design principle, this paper proposes an automatic design flow that generates simplified superconducting quantum processor architectures with negligible performance loss for different quantum programs. Our architecture-design-oriented profiling method identifies program components and patterns critical to both performance and yield rate. A follow-up hardware design flow decomposes the complicated design procedure into three subroutines, each of which focuses on different hardware components and incorporates the corresponding profiling results and physical constraints. Experimental results show that our design methodology outperforms IBM's general-purpose design schemes, achieving better Pareto-optimal results.
quantum physics
The Standard Model may allow an extended gauge sector with anomaly-free flavored gauge symmetries, such as $L_{i} - L_{j}$, $B_{i} - L_{j}$, and $B - 3L_{i}$, where $i,j=1,2,3$ are flavor indices. We investigate the phenomenological implications of the new flavored gauge boson $Z^{\prime}$ in these three classes of gauge symmetries. Focusing on gauge boson masses above 5 GeV, we use the lepton universality tests in $Z$ and $\tau/\mu$ decays, LEP searches, LHC searches, the neutrino trident production bound, and LHC $Z\rightarrow 4\mu$ searches to put constraints on the $g^{\prime}-M_{Z^{\prime}}$ plane. When $L_1$ is involved, the LEP bounds on the $e^{-}e^{+} \rightarrow \ell^{-}\ell^{+}$ processes are the most stringent, while the LHC bound becomes the strongest constraint in the large-$M_{Z^{\prime}}$ region when $B_{i}$ is involved. The bound from $Z\rightarrow 4\mu$ production, which applies to $L_2$-involved scenarios, is stringent in the small-$M_{Z^{\prime}}$ region. One exception is the $B-3L_2$ scenario, in which only a small region is favored by the lepton universality tests.
high energy physics phenomenology
We consider $Z \gamma$ production in hadronic collisions and present the first computation of next-to-next-to-leading order accurate predictions consistently matched to parton showers (NNLO+PS). Spin correlations, interferences and off-shell effects are included by calculating the full process $pp\to\ell^+\ell^-\gamma$. We extend the recently developed MiNNLO$_{\rm PS}$ method to genuine $2\to 2$ hard scattering processes at the LHC, which paves the way for NNLO+PS simulations of all diboson processes. This is the first $2\to2$ NNLO+PS calculation that does not require an a-posteriori multi-differential reweighting. We find that both NNLO corrections and matching to parton showers are crucial for an accurate simulation of the $Z \gamma$ process. Our predictions are in very good agreement with recent ATLAS data.
high energy physics phenomenology
Within the framework of the covariant confined quark model, we compute the transition form factors of $D$ and $D_s$ mesons decaying to the light scalar mesons $f_0(980)$ and $a_0(980)$. The transition form factors are then utilized to compute the semileptonic branching fractions. We study the channels $D_{(s)}^+ \to f_0(980) \ell^+ \nu_\ell$ and $D \to a_0(980) \ell^+ \nu_\ell$ for $\ell = e$ and $\mu$. For the computation of the semileptonic branching fractions, we take the $a_0(980)$ meson to have the conventional quark-antiquark structure and the $f_0(980)$ meson to be an admixture of $s\bar{s}$ and light quark-antiquark pairs. Our results support the recent BESIII data.
high energy physics phenomenology
In this numerical study, an original approach to simulate non-isothermal viscoelastic fluid flows at high Weissenberg numbers is presented. Stable computations over a wide range of Weissenberg numbers are assured by using the root conformation approach in a finite volume framework on general unstructured meshes. The numerical stabilization framework is extended to consider thermo-rheological properties in Oldroyd-B type viscoelastic fluids. The temperature dependence of the viscoelastic fluid is modeled with the time-temperature superposition principle. Both Arrhenius and WLF shift factors can be chosen, depending on the flow characteristics. The internal energy balance takes into account both energy and entropy elasticity. Partitioning is achieved by a constant split factor. An analytical solution of the balance equations in planar channel flow is derived to verify the results of the main field variables and to estimate the numerical error. The more complex entry flow of a polyisobutylene-based polymer solution in an axisymmetric 4:1 contraction is studied and compared to experimental data from the literature. We demonstrate the stability of the method in the experimentally relevant range of high Weissenberg numbers. The results at different imposed wall temperatures, as well as Weissenberg numbers, are found to be in good agreement with experimental data. Furthermore, the division between energy and entropy elasticity is investigated in detail with regard to the experimental setup.
physics
Skew-elliptical distributions constitute a large class of multivariate distributions that account for both skewness and a variety of tail properties. This class has simpler representations in terms of densities rather than cumulative distribution functions, and the tail density approach has previously been developed to study tail properties when multivariate densities have more tractable forms. The special skew-elliptical structure allows for derivations of specific forms of the tail densities for those skew-elliptical copulas that admit probability density functions, under heavy and light tail conditions on the density generators. The tail densities of skew-elliptical copulas are explicit and depend only on the tail properties of the underlying density generator and conditions on the skewness parameters. In the heavy tail case, the skewness parameters affect the tail densities of skew-elliptical copulas more profoundly than in the light tail case, where the tail densities of skew-elliptical copulas are only proportional to the tail densities of symmetric elliptical copulas. Various examples, including tail densities of skew-normal and skew-t distributions, are given.
mathematics
An important methodological problem of theoretical mechanics related to inertia is discussed. The analysis of inertia is performed in four-dimensional Minkowski space-time based on the law of conservation of energy-momentum. This approach allows us to combine the laws of conservation of momentum and angular momentum into a single law and to separate the forces of inertia that actually exist in nature from the imaginary forces introduced to simplify calculations or arising from the transition from one frame of reference to another. From the energy-momentum tensor, in a non-relativistic approximation, the equation of balance of the inertia forces existing in nature is obtained for a moving continuous medium and a material point in an inertial frame of reference. It follows from this equation that the pseudo-Euclidean geometry of our world plays an important role in the manifestation of inertia forces. The tensor and the balance equations of all inertia forces in a continuous medium moving with angular acceleration are obtained. This allows a uniform formulation and presentation of the original categories and concepts of classical and relativistic mechanics in educational and scientific literature within the framework of the current paradigm.
physics
$U(1)_X$SSM is an extension of the minimal supersymmetric standard model (MSSM), with the local gauge group $SU(3)_C\times SU(2)_L \times U(1)_Y \times U(1)_X$. To obtain this model, three singlet new Higgs superfields and right-handed neutrinos are added to the MSSM. In the framework of $U(1)_X$SSM, we study the Higgs mass and take the lightest CP-even sneutrino as a cold dark matter candidate. For the lightest CP-even sneutrino, both the relic density and the cross section for dark matter scattering off nucleons are studied. In a suitable parameter space of the model, the numerical results satisfy the constraints on the relic density and on the dark matter-nucleon scattering cross section.
high energy physics phenomenology
The performance of risk prediction models is often characterized in terms of discrimination and calibration. The Receiver Operating Characteristic (ROC) curve is widely used for evaluating model discrimination. When evaluating the performance of a risk prediction model in a new sample, the shape of the ROC curve is affected by both case-mix and the postulated model. Further, compared to discrimination, evaluating calibration has not received the same level of attention. Commonly used methods for model calibration involve subjective specification of smoothing or grouping. Leveraging the familiar ROC framework, we introduce the model-based ROC (mROC) curve to assess the calibration of a pre-specified model in a new sample. The mROC curve is the ROC curve that should be observed if the pre-specified model is calibrated in the sample. We show that the empirical ROC and mROC curves for a sample converge asymptotically if the model is calibrated in that sample. As a consequence, the mROC curve can be used to visually assess the effects of case-mix and model mis-calibration. Further, we propose a novel statistical test for calibration that does not require any smoothing or grouping. Simulations support the adequacy of the test. A case study puts these developments in a practical context. We conclude that the mROC curve can easily be constructed and used to evaluate the effects of case-mix and model calibration on the ROC plot, thus adding to the utility of ROC curve analysis in the evaluation of risk prediction models. R code for the proposed methodology is provided (https://github.com/msadatsafavi/mROC/).
statistics
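As an illustration of the mROC idea from the abstract above (this is not the authors' R implementation; the logistic data-generating model, the mis-calibration scheme, and all names below are hypothetical), here is a minimal Python sketch. The mROC treats each subject as a fractional case with weight $p_i$ and a fractional control with weight $1-p_i$, so under calibration it should coincide asymptotically with the empirical ROC:

```python
import numpy as np

def empirical_roc(y, p):
    """Empirical ROC from observed outcomes y and predicted risks p."""
    order = np.argsort(-p)
    y = y[order]
    tpr = np.cumsum(y) / y.sum()
    fpr = np.cumsum(1 - y) / (1 - y).sum()
    return fpr, tpr

def model_based_roc(p):
    """mROC: the ROC expected if outcomes were drawn from the risks p,
    i.e. each unit contributes p_i as a 'case' and 1 - p_i as a 'control'."""
    p = np.sort(p)[::-1]
    tpr = np.cumsum(p) / p.sum()
    fpr = np.cumsum(1 - p) / (1 - p).sum()
    return fpr, tpr

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
p_true = 1 / (1 + np.exp(-(x - 1)))           # data-generating risks
y = rng.binomial(1, p_true)
p_miscal = 1 / (1 + np.exp(-(1.8 * x - 1)))   # a mis-calibrated model

fpr_e, tpr_e = empirical_roc(y, p_miscal)     # what we observe
fpr_m, tpr_m = model_based_roc(p_miscal)      # what calibration would imply
# Under calibration the two curves converge; a visible gap between them is
# the visual mis-calibration check described in the abstract.
```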
Insight into the structure of nanoparticle assemblies down to the single-particle level is key to understanding the collective properties of these assemblies, which depend critically on the individual particle positions and orientations. However, the characterization of large, micron-sized assemblies containing small, 10-500 nm colloids is highly challenging and cannot easily be done with conventional light, electron, or X-ray microscopy techniques. Here, we demonstrate that focused ion beam-scanning electron microscopy (FIB-SEM) tomography in combination with image processing enables quantitative real-space studies of ordered and disordered particle assemblies too large for conventional transmission electron tomography and containing particles too small for confocal microscopy. First, we demonstrate the high-resolution structural analysis of spherical nanoparticle assemblies containing small anisotropic gold nanoparticles. Here, FIB-SEM tomography allows the characterization of assembly dimensions that are inaccessible to conventional transmission electron microscopy. Next, we show that FIB-SEM tomography is capable of characterizing much larger ordered and disordered assemblies containing silica colloids with a diameter close to the resolution limit of confocal microscopes. We determined both the position and the orientation of each individual (nano)particle in the assemblies by using recently developed particle tracking routines. Such high-precision structural information is essential for understanding and designing the collective properties of new nanoparticle-based materials and processes.
condensed matter
Many algorithms for maximizing a monotone submodular function subject to a knapsack constraint rely on the natural greedy heuristic. We present a novel refined analysis of this greedy heuristic which enables us to: $(1)$ reduce the enumeration in the tight $(1-e^{-1})$-approximation of [Sviridenko 04] from subsets of size three to two; $(2)$ present an improved upper bound of $0.42945$ for the classic algorithm which returns the better between a single element and the output of the greedy heuristic.
computer science
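The "natural greedy heuristic" analyzed in the abstract above is standard; here is a minimal Python sketch under the usual conventions (a value oracle f over sets and additive costs; the refined analysis itself is not reproduced), together with the classic "greedy or best singleton" variant referenced in point (2):

```python
def greedy_knapsack(elements, cost, budget, f):
    """Natural greedy for a monotone submodular f under a knapsack constraint:
    repeatedly add the feasible element maximizing marginal gain per unit cost."""
    S, spent = set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for e in elements - S:
            if spent + cost[e] <= budget:
                gain = f(S | {e}) - f(S)
                if gain / cost[e] > best_ratio:
                    best, best_ratio = e, gain / cost[e]
        if best is None:
            return S
        S.add(best)
        spent += cost[best]

def greedy_or_singleton(elements, cost, budget, f):
    """The classic variant the abstract's bound (2) refers to: return the
    better of the greedy solution and the best feasible single element."""
    S = greedy_knapsack(elements, cost, budget, f)
    singles = [{e} for e in elements if cost[e] <= budget]
    best_single = max(singles, key=f, default=set())
    return S if f(S) >= f(best_single) else best_single
```

Sviridenko's tight $(1-e^{-1})$-approximation additionally enumerates over small starting subsets before running the greedy step; the abstract's contribution is to shrink that enumeration from triples to pairs.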
We present a scattering theory of transport through noncollinear disordered magnetic insulators. For concreteness, we study and compare the random field model (RFM) and the random anisotropy model (RAM). The RFM and RAM are used to model random spin disorder systems and amorphous materials, respectively. We utilize the Landauer-Büttiker formalism to compute the transmission probability and spin conductance of one-dimensional disordered spin chains. The RFM and the RAM both exhibit Anderson localization, which means that the transmission probability and spin conductance decay exponentially with the system length. We define two localization lengths based on the transmission probability and the spin conductance, respectively. Next, we numerically determine the relationship between the localization lengths and the strength of the disorder. In the limit of weak disorder, we find that the localization lengths obey power laws and determine the critical exponents. Our results are expressed via the universal exchange length and are therefore expected to be general.
condensed matter
We employ Dirac's bra-ket notation to define the inertia tensor operator in a way that is independent of the choice of basis or coordinate system. The principal axes and the corresponding principal values for an elliptic plate are determined based only on the geometry. By making use of a general symmetric tensor operator, we develop a method of diagonalization that is convenient and intuitive in determining the eigenvectors. We demonstrate that the bra-ket approach greatly simplifies the computation of the inertia tensor with the example of an $N$-dimensional ellipsoid. The exploitation of bra-ket notation to compute the inertia tensor in classical mechanics should provide undergraduate students with a strong background for dealing with abstract quantum mechanical problems.
physics
With the enhancements in the field of software-defined networking and virtualization technologies, novel networking paradigms such as network function virtualization (NFV) and the Internet of Things (IoT) are rapidly gaining ground. The development of IoT and 5G networks, together with the explosion of online services, has resulted in an exponential growth of devices connected to the network. As a result, application service providers (ASPs) and Internet service providers (ISPs) are confronted with the unprecedented challenge of accommodating increasing service and traffic demands from geographically distributed users. To tackle this problem, many ASPs and ISPs, such as Netflix, Facebook, AT&T, and others, are increasingly adopting the micro-services (MS) application architecture. Despite the success of MS in industry, there is no specific standard or research work to guide service providers, especially from the perspective of basic micro-service operations. In this work, we aim to bridge this gap between industry and academia and discuss different micro-service deployment, discovery, and communication options available to service providers for forming complete service chains. In addition, we address the problem of scheduling micro-services across multiple clouds, including micro-clouds. We consider different user-level SLAs, such as latency and cost, while scheduling such services, and aim to reduce the overall turnaround time as well as the cost of deploying the complete end-to-end service. We present a novel affinity-based fair weighted scheduling heuristic to solve this problem, compare the results of the proposed solution with standard greedy scheduling algorithms from the literature, and observe significant improvements.
computer science
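To make the scheduling problem concrete, here is a heavily hedged, generic Python sketch of SLA-aware placement. It is NOT the paper's affinity-based fair weighted heuristic (whose details are not in the abstract); the weights, the affinity bonus, and the dictionary layout are all illustrative assumptions:

```python
def schedule(services, clouds, affinity, w_lat=0.5, w_cost=0.5, bonus=0.2):
    """Greedy placement sketch: each micro-service goes to the cloud that
    minimizes a weighted latency/cost score, discounted when services with
    communication affinity are already co-located there."""
    placement = {}
    for s in services:
        def score(c):
            base = w_lat * clouds[c]["latency"] + w_cost * clouds[c]["cost"]
            colocated = sum(1 for peer in affinity.get(s, ())
                            if placement.get(peer) == c)
            return base - bonus * colocated   # reward co-locating chatty peers
        placement[s] = min(clouds, key=score)
    return placement

# Hypothetical two-cloud example: a nearby micro-cloud vs. a cheap core cloud.
clouds = {"edge": {"latency": 5, "cost": 3}, "core": {"latency": 20, "cost": 1}}
affinity = {"auth": ["api"], "api": ["auth"]}
print(schedule(["api", "auth", "batch"], clouds, affinity))
```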
Quantum sensors based on nitrogen-vacancy centers in diamond have emerged as a promising detection modality for nuclear magnetic resonance (NMR) spectroscopy owing to their micron-scale detection volume and non-inductive detection. A remaining challenge is to realize sufficiently high spectral resolution and concentration sensitivity for multidimensional NMR analysis of picoliter sample volumes. Here, we address this challenge by spatially separating the polarization and detection phases of the experiment in a microfluidic platform. We realize a spectral resolution of 0.65 +/- 0.05 Hz, an order-of-magnitude improvement over previous diamond NMR studies. We use the platform to perform two-dimensional correlation spectroscopy of liquid analytes within an effective ~20 picoliter detection volume. The use of diamond quantum sensors as in-line microfluidic NMR detectors is a significant step towards applications in mass-limited chemical analysis and single-cell biology.
physics
We construct the zero-temperature (no compact dimensions) effective action for an SU(2) Yang-Mills theory in five dimensions, with boundary conditions that reduce the symmetry on the four-dimensional boundary located at the origin to a U(1)-complex scalar system. In order to be sensitive to the Higgs phase, we need to include higher-dimensional operators in the effective action, which can be achieved naturally by generating it from the corresponding lattice construction: expanding in small lattice spacing, taking the naive continuum limit, and then renormalizing. In addition, we build into the effective action non-perturbative information related to a first-order quantum phase transition known to exist. As a result, the effective action acquires a finite cut-off that is low, and the fine-tuning of the scalar mass is rather mild.
high energy physics theory
We present a novel approach for training deep neural networks in a Bayesian way. Classical, i.e. non-Bayesian, deep learning has two major drawbacks, both originating from the fact that network parameters are considered to be deterministic. First, model uncertainty cannot be measured, thus limiting the use of deep learning in many fields of application; second, training of deep neural networks is often hampered by overfitting. The proposed approach uses variational inference to approximate the intractable a posteriori distribution on the basis of a normal prior. The variational density is designed in such a way that the a posteriori uncertainty of the network parameters is represented per network layer and depends on the estimated parameter expectation values. This way, only a few additional parameters need to be optimized compared to a non-Bayesian network. We apply this Bayesian approach to train and test the LeNet architecture on the MNIST dataset. Compared to classical deep learning, the test error is reduced by 15%. In addition, the trained model contains information about the parameter uncertainty in each layer. We show that this information can be used to calculate credible intervals for the prediction and to optimize the network architecture for a given training data set.
statistics
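The abstract's specific layer-wise, expectation-dependent variational family is not spelled out there; as generic background, here is a minimal PyTorch sketch of a mean-field Gaussian ("Bayes by backprop"-style) linear layer with the reparameterization trick, which is the standard starting point for such designs:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalLinear(nn.Module):
    """Mean-field Gaussian linear layer: each weight has a learned mean and
    (softplus-parameterized) standard deviation; weights are sampled per
    forward pass via the reparameterization trick."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(d_out, d_in) * 0.05)
        self.rho = nn.Parameter(torch.full((d_out, d_in), -4.0))

    def forward(self, x):
        sigma = F.softplus(self.rho)                    # ensures sigma > 0
        w = self.mu + sigma * torch.randn_like(sigma)   # sampled weights
        # KL(q || N(0, 1)) for a diagonal Gaussian posterior, summed over
        # all weights; added to the data loss to form the variational objective.
        self.kl = (torch.log(1.0 / sigma)
                   + (sigma**2 + self.mu**2 - 1.0) / 2.0).sum()
        return x @ w.t()
```

The paper's design restricts this family so that per-layer uncertainty is tied to the estimated means, which is how it keeps the number of extra parameters small.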
In a covariant quark-diquark model, we investigate the rare decays $\Lambda_b \rightarrow n l^+ l^-$ and $\Lambda_b \rightarrow n \gamma$ in the Bethe-Salpeter equation approach. In this model, the baryons are treated as bound states of a constituent quark and a diquark interacting via gluon exchange and linear confinement. We find that the ratio of form factors $R$ varies from $-0.90$ to $-0.25$, that the branching ratios $Br(\Lambda_b \rightarrow n l^+ l^-)\times 10^{8}$ are $6.79~(l=e)$, $4.08~(l=\mu)$, and $2.9~(l=\tau)$, and that the branching ratio $Br(\Lambda_b \rightarrow n \gamma)\times 10^{7} = 3.69$, for central values of the parameters.
high energy physics phenomenology
We theoretically investigate thermalization and spin diffusion driven by a quantum spin bath for a realistic solid-state NMR experiment. We consider polycrystalline L-alanine, and investigate how the spin polarization spreads among several $^{13}$C nuclear spins, which interact via dipole-dipole coupling with the bath of strongly dipolar-coupled $^1$H nuclear (proton) spins. We do this by using direct numerical simulation of the many-spin time-dependent Schr\"odinger equation. We find that, although the proton spins located near the carbon sites interact most strongly with the $^{13}$C spins, this interaction alone is not enough to drive spin diffusion and thermalize the $^{13}$C nuclear spins. We demonstrate that the thermalization within the $^{13}$C subsystem is driven by the collective many-body dynamics of the proton spin bath, and specifically, that the onset of thermalization among the $^{13}$C spins is directly related to the onset of chaotic behavior in the proton spin bath. Therefore, thermalization and spin diffusion within the $^{13}$C subsystem is controlled by the proton spins located far from the C sites. In spite of their weak coupling to the $^{13}$C spins, these far-away protons help produce a network of strongly coupled proton spins with collective dynamics, that drives thermalization.
quantum physics
Motivated by recent progress on the motive power of information in quantum thermodynamics, we put forth an operational resource theory of Maxwell's demons. We show that the resourceful ({\em daemonic}) states can be partitioned into at most nine irreducible subsets. The sets can be classified by a rank akin to the Schmidt rank for entanglement theory. Moreover, we show that there exists a natural monotone, called the wickedness, which quantifies the multilevel resource content of the states. The present resource theory is shown to share deep connections with the resource theory of thermodynamics. In particular, the nine irreducible sets are found to be characterized by well defined temperatures which, however, are not monotonic in the wickedness. This result, as we demonstrate, is found to have dramatic consequences for Landauer's erasure principle. Our analysis therefore settles a longstanding debate surrounding the identity of Maxwell's demons and the operational significance of other related fundamental thermodynamic entities.
quantum physics
In this paper, we consider the diagonal reflection symmetries and a three-zero texture in the SM. The three-zero texture makes two fewer assumptions ($(M_{u})_{11}, (M_{\nu})_{11} \neq 0$) than the universal four-zero texture for mass matrices, $(M_{f})_{11} = (M_{f})_{13,31} = 0$ for $f = u,d,\nu,e$. The texture and symmetries reproduce the CKM and MNS matrices with accuracies of $O(10^{-4})$ and $O(10^{-3})$, respectively. By assuming a $d$-$e$ unified relation ($M_{d} \sim M_{e}$), this system predicts the normal hierarchy, the Dirac phase $\delta_{CP} \simeq 202^{\circ}$, the Majorana phases $\alpha_{12} = 11.3^{\circ}$ and $\alpha_{13} = 6.90^{\circ}$ up to $\pi$, and the lightest neutrino mass $m_{1} \simeq 2.97 - 4.72$ meV. The effective mass of neutrinoless double beta decay is found to be $|m_{ee}| \simeq 1.24 - 1.77$ meV.
high energy physics phenomenology
We present a framework for learning an efficient holistic representation for handwritten word images. The proposed method uses a deep convolutional neural network with a traditional classification loss. The major strengths of our work lie in: (i) the efficient usage of synthetic data to pre-train a deep network, (ii) an adapted version of the ResNet-34 architecture with region-of-interest pooling (referred to as HWNet v2), which learns discriminative features for variable-sized word images, and (iii) a realistic augmentation of training data with multiple scales and distortions which mimics the natural process of handwriting. We further investigate the process of transfer learning to reduce the domain gap between the synthetic and real domains, and also analyze the invariances learned at different layers of the network using visualization techniques proposed in the literature. Our representation leads to state-of-the-art word spotting performance on standard handwritten datasets and historical manuscripts in different languages with minimal representation size. On the challenging IAM dataset, our method is the first to report an mAP of around 0.90 for word spotting with a representation size of just 32 dimensions. Furthermore, we also present results on printed document datasets in English and Indic scripts, which validates the generic nature of the proposed framework for learning word image representations.
computer science
Complex numbers are widely used in both classical and quantum physics, and are indispensable components for describing quantum systems and their dynamical behavior. Recently, the resource theory of imaginarity has been introduced, allowing for a systematic study of complex numbers in quantum mechanics and quantum information theory. In this work we develop theoretical methods for the resource theory of imaginarity, motivated by recent progress within theories of entanglement and coherence. We investigate imaginarity quantification, focusing on the geometric imaginarity and the robustness of imaginarity, and apply these tools to the state conversion problem in imaginarity theory. Moreover, we analyze the complexity of real and general operations in optical experiments, focusing on the number of unfixed wave plates for their implementation. We also discuss the role of imaginarity for local state discrimination, proving that any pair of real orthogonal pure states can be discriminated via local real operations and classical communication. Our study reveals the significance of complex numbers in quantum physics, and proves that imaginarity is a resource in optical experiments.
quantum physics
Given the widespread use of density functional theory (DFT), there is an increasing need for the ability to model large systems (beyond 1,000 atoms). We present a brief overview of the large-scale DFT code Conquest, which is capable of modelling such large systems, and discuss approaches to the generation of consistent, well-converged pseudo-atomic basis sets which will allow such large scale calculations. We present tests of these basis sets for a variety of materials, comparing to fully converged plane wave results using the same pseudopotentials and grids.
condensed matter
Dust temperature is an important property of the interstellar medium (ISM) of galaxies. It is required when converting (sub)millimeter broadband flux to total infrared luminosity (L_IR), and hence star formation rate, in high-z galaxies. However, different definitions of dust temperatures have been used in the literature, leading to different physical interpretations of how ISM conditions change with, e.g., redshift and star formation rate. In this paper, we analyse the dust temperatures of massive (M* > 10^10 Msun) z=2-6 galaxies with the help of high-resolution cosmological simulations from the Feedback in Realistic Environments (FIRE) project. At z~2, our simulations successfully predict dust temperatures in good agreement with observations. We find that dust temperatures based on the peak emission wavelength increase with redshift, in line with the higher star formation activity at higher redshift, and are strongly correlated with the specific star formation rate. In contrast, the mass-weighted dust temperature does not strongly evolve with redshift over z=2-6 at fixed IR luminosity but is tightly correlated with L_IR at fixed z. The mass-weighted temperature is important for accurately estimating the total dust mass. We also analyse an 'equivalent' dust temperature for converting (sub)millimeter flux density to total IR luminosity, and provide a fitting formula as a function of redshift and dust-to-metal ratio. We find that galaxies of higher equivalent (or higher peak) dust temperature ('warmer dust') do not necessarily have higher mass-weighted temperatures. A 'two-phase' picture for interstellar dust can explain the different scaling relations of the various dust temperatures.
astrophysics
We present a minimally-invasive endoscope based on a multimode fiber that combines photoacoustic and fluorescence sensing. From the measurement of a transmission matrix during a prior calibration step, a focused spot is produced and raster-scanned over a sample at the distal tip of the fiber by use of a fast spatial light modulator. An ultra-sensitive fiber-optic ultrasound sensor for photoacoustic detection placed next to the fiber is combined with a photodetector to obtain both fluorescence and photoacoustic images with a distal imaging tip no larger than 250 µm. The high signal-to-noise ratio provided by wavefront-shaping-based focusing and the ultra-sensitive ultrasound sensor enables imaging with a single laser shot per pixel, demonstrating fast two-dimensional hybrid imaging of red blood cells and fluorescent beads.
physics
Optimal Control Theory is a powerful mathematical tool, which has known a rapid development since the 1950s, mainly for engineering applications. More recently, it has become a widely used method to improve process performance in quantum technologies by means of highly efficient control of quantum dynamics. This review aims at providing an introduction to key concepts of optimal control theory which is accessible to physicists and engineers working in quantum control or in related fields. The different mathematical results are introduced intuitively, before being rigorously stated. This review describes modern aspects of optimal control theory, with a particular focus on the Pontryagin Maximum Principle, which is the main tool for determining open-loop control laws without experimental feedback. The different steps to solve an optimal control problem are discussed, before moving on to more advanced topics such as the existence of optimal solutions or the definition of the different types of extremals, namely normal, abnormal, and singular. The review covers various quantum control issues and describes their mathematical formulation suitable for optimal control. The optimal solution of different low-dimensional quantum systems is presented in detail, illustrating how the mathematical tools are applied in a practical way.
quantum physics
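Since the review above centers on the Pontryagin Maximum Principle, it may help to state its standard form in symbols. This is one common sign convention for a fixed-horizon Lagrange problem; conventions vary across texts:

```latex
% Problem: minimize  J(u) = \int_0^{t_f} L(x,u)\,dt  subject to  \dot{x} = f(x,u).
% Pseudo-Hamiltonian with abnormal multiplier p^0 \le 0:
H(x, p, p^0, u) = p \cdot f(x, u) + p^0\, L(x, u).
% Necessary conditions: along an optimal pair (x^*, u^*) there exist
% (p(\cdot), p^0) \neq 0 such that
\dot{x}^* = \frac{\partial H}{\partial p}, \qquad
\dot{p} = -\frac{\partial H}{\partial x}, \qquad
H\bigl(x^*(t), p(t), p^0, u^*(t)\bigr) = \max_{v} H\bigl(x^*(t), p(t), p^0, v\bigr).
% Extremals with p^0 \neq 0 are called normal, those with p^0 = 0 abnormal;
% singular extremals arise when the maximization condition alone does not
% determine u^*.
```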
In hadron interactions at LHC energies, the reflective scattering mode starts to play a role and is expected to become even more significant beyond the energies of the LHC. This new but still debated phenomenon implies a peripheral dependence of the inelastic probability distribution in impact parameter space, asymptotically evolving into a black ring. As a consequence, the straightforward extension to hadrons of the centrality definition adopted for nuclei needs to be modified.
high energy physics phenomenology
We quantitatively study the out-of-equilibrium edge-Majorana correlation in a linearly ramped one-dimensional Kitaev chain of finite length in a dissipative environment. The chemical potential is dynamically ramped to drive the chain from its topologically trivial to its nontrivial phase in the presence of couplings to nonthermal Markovian baths. We consider two distinct situations: in the first, the bath is quasilocal in the site basis (local in the quasiparticle basis), while in the other it is local. Following a Lindbladian approach, we compute the early-time dynamics as well as the asymptotic behavior of the edge-Majorana correlation to probe the interplay between two competing timescales, one set by the coherent ramping and the other by the dissipative coupling. For the quasilocal bath, we establish that there is a steady generation of Majorana correlations at asymptotic times and an optimal ramping time which facilitates a quicker approach to the topological steady state. In the second scenario, we analyze the action of a local particle-loss type of bath, for which we also establish the existence of an optimal ramping time resulting from the competition between the unitary ramp and the dissipative coupling: while the defect generated by the former decays exponentially with increasing ramp duration, the latter scales linearly with it. This linear scaling is further established through a perturbation theory formulated using the nondimensionalized coupling to the bath as a small parameter.
condensed matter
We show how the renormalons emerge from the renormalization group equation with no a priori reference to any Feynman diagrams. The proof is given by recasting the renormalization group equation as a resurgent equation studied in the mathematical literature, which describes a function with an infinite number of singularities on the positive axis of the Borel plane. Consistency requires a one-to-one correspondence between the existence of such an equation and the actual (generalized) Borel resummation of the renormalons through a one-parameter transseries. Our finding suggests how non-perturbative contributions can affect the running couplings. We also discuss these concepts within the context of gauge theories, making use of the large flavor number expansion.
high energy physics theory
In this paper, the control of the edge-mode spectrum of integer-Hall-effect 2D waveguides by an electric field is proposed and modeled with the effective mass approach. Under the conditions found here, the applied transverse electric field allows purifying the modal spectrum of non-localized waves and, additionally, can switch an edge mode from the propagating to the evanescent state, which is of interest for the design of edge-mode on/off logic components. These waveguides, arbitrarily biased by potentials, are described by the spinless Pauli charge equation, and they are simulated using the order-reduction method for partial differential equations in its Kron quantum-circuit representation. In addition to the spectrum control mechanism, the influence of large-scale disorder of the confinement potential and magnetic field on the edge localization and modal switching is studied.
condensed matter
In this paper, we derive a general formula for the quantized fractional corner charge in two-dimensional C_n-symmetric higher-order topological insulators. We assume that the electronic states can be described by the Wannier functions and that the edges are charge neutral, but we do not assume vanishing bulk electric polarization. We expand the scope of the corner charge formula obtained in previous works by considering more general surface conditions, such as surfaces with higher Miller index and surfaces with surface reconstruction. Our theory is applicable even when the electronic states are largely modulated near system boundaries. It also applies to insulators with non-vanishing bulk polarization, and we find that in such cases the value of the corner charge depends on the surface termination even for the same bulk crystal with C_3 or C_4 symmetry, via a difference in the Wyckoff position of the center of the C_n-symmetric crystal.
condensed matter
This letter is dedicated to providing proofs of two statements concerning the gradient expansion of relativistic hydrodynamics. The first statement is that \textit{the ordering of transverse derivatives is irrelevant in the gradient expansion of a non-conformal fluid}. The second statement is that \textit{the longitudinal projection of the Weyl covariant derivative can be eliminated in the gradient expansion of a conformal fluid}. This second statement does not apply to curvature tensors. In the conformal case, we know that the ordering of Weyl covariant derivatives is irrelevant in the gradient expansion. Here we discuss the equivalences in the gradient expansion revealed by the theorems and their consequences for the transport coefficients.
high energy physics theory
We derive the optimal analytical quantum-state-transfer control solutions for two disparate quantum memory blocks. Employing the SLH formalism description of quantum network theory, we calculate the full quantum dynamics of system populations, which lead to the optimal solution for the highest quantum fidelity attainable. We show that, for the example where the mechanical modes of two optomechanical oscillators act as the quantum memory blocks, their optical modes and a waveguide channel connecting them can be used to achieve a quantum state transfer fidelity of 96% with realistic parameters using our derived optimal control solution. The effects of the intrinsic losses and the asymmetries in the physical memory parameters are discussed quantitatively.
quantum physics
The most accurate method for modelling planetary migration, and hence the formation of resonant systems, is hydrodynamical simulation. Usually, the force (torque) acting on a planet is calculated using the forces from the gas disc and the star, while the gas accelerations are computed using the pressure gradient and the gravity of the star and the planet, ignoring the disc's own gravity. For a non-migrating planet, neglecting the disc gravity results in a consistent torque calculation, while for a migrating planet it is inconsistent. We aim to study how much this inconsistent torque calculation can affect the final configuration of a two-planet system. Our focus is on low-mass planets because most of the multi-planetary systems discovered by the Kepler survey have masses around 10 Earth masses. Performing hydrodynamical simulations of planet-disc interaction, we measure the torques on non-migrating and migrating planets for various disc masses as well as density and temperature slopes, with and without considering the disc self-gravity. Using these data, we find a relation that quantifies the inconsistency, use it in an N-body code, and perform an extended parameter study modelling the migration of a planetary system with different planet mass ratios and disc surface densities, in order to investigate the impact of the torque inconsistency on the architecture of the planetary system. Not considering disc self-gravity produces an artificially larger torque on the migrating planet, which can result in tighter planetary systems. The deviation of this torque from the correct value is larger in discs with steeper surface density profiles. In hydrodynamical modelling of multi-planetary systems, it is therefore crucial to account for the torque correction; otherwise the results favour more packed systems.
astrophysics
Within the framework of the $\mu\nu$SSM, a displaced dilepton signal is expected at the LHC from the decay of a tau left sneutrino as the lightest supersymmetric particle (LSP) with a mass in the range $45 - 100$ GeV. We compare the predictions of this scenario with the ATLAS search for long-lived particles using displaced lepton pairs in $pp$ collisions, considering an optimization of the trigger requirements by means of a high level trigger that exploits tracker information. The analysis is carried out in the general case of three families of right-handed neutrino superfields, where all the neutrinos get contributions to their masses at tree level. To analyze the parameter space, we sample the $\mu\nu$SSM for a tau left sneutrino LSP with proper decay length $c\tau > 0.1$ mm using a likelihood data-driven method, and paying special attention to reproduce the current experimental data on neutrino and Higgs physics, as well as flavor observables. The sneutrino is special in the $\mu\nu$SSM since its couplings have to be chosen so that the neutrino oscillation data are reproduced. We find that important regions of the parameter space can be probed at the LHC run 3.
high energy physics phenomenology
The semiclassical general formula for the probability of radiation of twisted photons by ultrarelativistic scalar and Dirac particles moving in the electromagnetic field of a general form is derived. This formula is the analog of the Baier-Katkov formula for the probability of radiation of one plane wave photon with the quantum recoil taken into account. The derived formula is used to describe the radiation of twisted photons by charged particles in undulators and laser waves. Thus, the general theory of undulator radiation of twisted photons and radiation of twisted photons in the nonlinear Compton process is developed with account for the quantum recoil. The explicit formulas for the probability to record a twisted photon are obtained in these cases. In particular, we found that the quantum recoil and spin degrees of freedom increase the radiation probability of twisted photons in comparison with the formula for scalar particles without recoil. In the range of applicability of the semiclassical formula, the selection rules for undulator radiation established in the purely classical framework are not violated. The manifestation of the blossoming out rose effect in the nonlinear Compton process in a strong laser wave with circular polarization and in the wiggler radiation is revealed. Several examples are studied: the radiation of MeV twisted photons by $180$ GeV electrons in the wiggler; the radiation of twisted photons by $256$ MeV electrons in strong electromagnetic waves produced by the CO$_2$ and Ti:Sa lasers; and the radiation of MeV twisted photons by $51.1$ MeV electrons in the electromagnetic wave generated by the FEL with photon energy $1$ keV.
physics
In this study, the effects of the generalized uncertainty principle on the theory of gravity are analyzed. Inspired by Verlinde's entropic gravity approach and using the modified Unruh temperature, the generalized Einstein field equations with a cosmological constant are obtained and the corresponding conservation law is investigated. The resulting conservation law of the energy-momentum tensor dictates that the generalized Einstein field equations are valid only in a very limited range of accelerations. Moreover, the modified Newton's law of gravity and the modified Poisson equation are derived. In a certain limit, these modified equations reduce to their standard forms.
high energy physics theory
The aim of this paper is to present a mixture composite regression model for claim severity modelling. Claim severity modelling poses several challenges, such as multimodality, heavy-tailedness, and systematic effects in the data. We tackle this modelling problem by studying a mixture composite regression model for the simultaneous modelling of attritional and large claims, which accounts for systematic effects in both the mixture components and the mixing probabilities. For model fitting, we present a group-fused regularization approach that allows us to select the explanatory variables which significantly impact the mixing probabilities and the different mixture components, respectively. We develop an asymptotic theory for this regularized estimation approach, and fitting is performed using a novel Generalized Expectation-Maximization algorithm. We exemplify our approach on a real motor insurance data set.
statistics
We show the existence of self-dual solitons in a type of generalized Chern-Simons baby Skyrme model in which a generalizing function (depending only on the Skyrme field) is coupled to the sigma-model term. The consistent implementation of the Bogomol'nyi-Prasad-Sommerfield (BPS) formalism requires that the generalizing function become the superpotential that properly defines the self-dual potential. We thus obtain a topological energy lower bound (Bogomol'nyi bound) and the self-dual equations satisfied by the fields saturating it. The Bogomol'nyi bound, being proportional to the topological charge of the Skyrme field, is quantized, whereas the total magnetic flux is not. As expected in a Chern-Simons model, the total magnetic flux and the total electric charge are proportional to each other. By considering superpotentials that are well-behaved functions on the whole target space, we show the existence of three types of self-dual solutions: compacton solitons, solitons whose tails decay following an exponential law $e^{-\alpha r^{2}}$ ($\alpha>0$), and solitons with a power-law decay $r^{-\beta}$ ($\beta>0$). The profiles of the last two can exhibit compacton-like behavior. The self-dual equations are solved numerically, and we depict the soliton profiles, commenting on their main characteristics.
high energy physics theory
We analyze measurements of $C_{\rm mag}/T$, the magnetic specific heat $C_{\rm mag}$ divided by temperature $T$, of the recently observed ultra spin liquid. The measurements are carried out in magnetic fields on the triangular lattice compound $\rm Lu_3Cu_2Sb_3O_{14}$. We show that the heat capacity $C_{\rm mag}/T$ of the ultra spin liquid, as a function of temperature $T$ and magnetic field $B$, behaves very similarly to the electronic specific heat $C_{el}/T$ of the heavy-fermion (HF) metal $\rm YbRh_2Si_2$ and to that of the quantum magnet $\rm ZnCu_3(OH)_6Cl_2$. We further demonstrate that the spinon effective mass $M^*\propto C_{\rm mag}/T$ exhibits the universal scaling observed in HF metals and in $\rm ZnCu_3(OH)_6Cl_2$. Based on these observations, we conclude that a strongly correlated spin liquid determines the thermodynamic properties of the ultra spin liquid of $\rm Lu_3Cu_2Sb_3O_{14}$.
condensed matter
The evolution of collapsing clouds embedded in different star-forming environments is investigated using three-dimensional non-ideal magnetohydrodynamics simulations considering different cloud metallicities ($Z/\thinspace Z_\odot$ = 0, 10$^{-5}$, 10$^{-4}$, 10$^{-3}$, 10$^{-2}$, 10$^{-1}$ and 1) and ionisation strengths ($C_\zeta$=0, 0.01, 1 and 10, where $C_\zeta$ is a coefficient controlling the ionisation intensity and $C_\zeta=1$ corresponds to the ionisation strength of nearby star-forming regions). With all combinations of these considered values of $Z/\thinspace Z_\odot$ and $C_\zeta$, 28 different star-forming environments are prepared and simulated. The cloud evolution in each environment is calculated until the central density reaches $n\approx10^{16}\,{\rm cm}^{-3}$ just before protostar formation, and the outflow driving conditions are derived. An outflow appears when the (first) adiabatic core forms in a magnetically active region where the magnetic field is well coupled with the neutral gas. In cases where outflows are driven, their momentum fluxes are always comparable to the observations of nearby star-forming regions. Thus, these outflows should control the mass growth of the protostars as in the local universe. Roughly, an outflow appears when $Z/\thinspace Z_\odot>10^{-4}$ and $C_\zeta \ge 0.01$. It is expected that the transition of the star formation mode from massive stars to normal solar-type stars occurs when the cloud metallicity is enhanced to the range of $Z/\thinspace Z_\odot\approx 10^{-4}$--$10^{-3}$, above which relatively low-mass stars would preferentially appear as a result of strong mass ejection.
astrophysics
We show that a polyregular word-to-word function is regular if and only if its output size is at most linear in its input size. Moreover, a polyregular function can be realized by: a transducer with two pebbles if and only if its output has quadratic size in its input, a transducer with three pebbles if and only if its output has cubic size in its input, etc. Moreover, the characterization is decidable and, given a polyregular function, one can compute a transducer realizing it with the minimal number of pebbles. We apply the result to MSO interpretations from words to words. We show that MSO interpretations of dimension k exactly coincide with k-pebble transductions.
computer science
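A canonical example behind the pebble/output-size correspondence stated above is the squaring function $w \mapsto w^{|w|}$, which is realizable with two pebbles and has exactly quadratic output size. A toy Python simulation of the two nested passes (purely illustrative; a real pebble transducer is a two-way automaton with marked positions):

```python
def square(word: str) -> str:
    """w -> w^{|w|}: the outer pebble scans w once; for each of its
    positions, the inner head re-reads all of w. Output size is exactly
    |w|^2, matching the 'two pebbles iff quadratic output' statement."""
    out = []
    for _outer in word:        # outer pebble: one pass over the input
        for c in word:         # inner head: full re-scan per outer position
            out.append(c)
    return "".join(out)

assert square("ab") == "abab"
assert len(square("abc")) == len("abc") ** 2
```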
In this paper, a model inspired by Grand Unification principles featuring three generations of vector-like fermions, new Higgs doublets, and a rich neutrino sector at the low scale is presented. Using state-of-the-art deep learning techniques, we perform the first phenomenological analysis of this model, focusing on the study of new charged vector-like leptons (VLLs) and their possible signatures at CERN's Large Hadron Collider (LHC). In our numerical analysis we consider signal events for vector-boson fusion and VLL pair production topologies, both involving a final state containing a pair of charged leptons of different flavor and two sterile neutrinos that provide missing energy. We also consider the case of VLL single production where, in addition to a pair of sterile neutrinos, the final state contains only one charged lepton. All calculated observables are provided as data sets for deep learning analysis, where a neural network is constructed, based on results obtained via an evolutionary algorithm, whose objective is to maximize either the accuracy metric or the Asimov significance for different masses of the VLL. Taking into account the effect of the three analysed topologies, we find that the combined significance for the observation of new VLLs at the high-luminosity LHC can range from $5.7\sigma$, for a mass of $1.25~\mathrm{TeV}$, all the way up to $28\sigma$ if the VLL mass is $200~\mathrm{GeV}$. We also show that by the end of the LHC Run-III a $200~\mathrm{GeV}$ VLL can be excluded with a confidence of $8.8$ standard deviations. These results show that our model can be probed well before the end of LHC operations and, in particular, provide important phenomenological information to constrain the energy scale at which new gauge symmetries emergent from the considered Grand Unification picture can become manifest.
high energy physics phenomenology
The supersymmetric Lee-Yang model is arguably the simplest interacting supersymmetric field theory in two dimensions, albeit non-unitary. A natural question is if there is an analogue of supersymmetric Lee-Yang fixed point in higher dimensions. The absence of any $\mathbb{Z}_2$ symmetry (except for fermion numbers) makes it impossible to approach it by using perturbative $\epsilon$ expansions. We find that the truncated conformal bootstrap suggests that candidate fixed points obtained by the dimensional continuation from two dimensions annihilate below three dimensions, implying that there is no supersymmetric Lee-Yang fixed point in three dimensions. We conjecture that the corresponding phase transition, if any, will be the first order transition.
high energy physics theory
A phenomenological model with active and sterile neutrinos is used to calculate neutrino oscillation characteristics for the normal mass hierarchy of active neutrinos. Taking into account the contributions of sterile neutrinos, appearance and survival probabilities for active neutrinos are calculated. Modified graphical dependencies for the probability of appearance of electron neutrinos/antineutrinos in muon neutrino/antineutrino beams are obtained as a function of the ratio of the distance to the neutrino energy and of other model parameters. It is shown that, for a certain type of mixing between active and sterile neutrinos, it is possible to clarify some features of the anomalies in neutrino data at short distances. A new parametrization is used for this type of mixing matrix of active and sterile neutrinos that takes into account additional sources of CP violation. A comparison with the existing experimental data is performed and used to estimate some model parameters. The theoretical results obtained for the mixing of active and sterile neutrinos can be applied to the interpretation and prediction of results of ground-based experiments searching for sterile neutrinos, as well as to analyses of some astrophysical data.
high energy physics phenomenology
The overwhelming foreground contamination is one of the primary impediments to probing the Epoch of Reionization (EoR) through measuring the redshifted 21 cm signal. Among various foreground components, radio halos are less studied and their impacts on the EoR observations are still poorly understood. In this work, we employ the Press-Schechter formalism, merger-induced turbulent re-acceleration model, and the latest SKA1-Low layout configuration to simulate the SKA "observed" images of radio halos. We calculate the one-dimensional power spectra from simulated images and find that radio halos can be about $10^4$, $10^3$ and $10^{2.5}$ times more luminous than the EoR signal on scales of $0.1\,\text{Mpc}^{-1} < k < 2\,\text{Mpc}^{-1}$ in the 120-128, 154-162, and 192-200 MHz bands, respectively. By examining the two-dimensional power spectra inside properly defined EoR windows, we find that the power leaked by radio halos can still be significant, as the power ratios of radio halos to the EoR signal on scales of $0.5\,\text{Mpc}^{-1} \lesssim k \lesssim 1\,\text{Mpc}^{-1}$ can be up to about 230-800%, 18-95%, and 7-40% in the three bands, when the 68% uncertainties caused by the variation of the number density of bright radio halos are considered. Furthermore, we find that radio halos located inside the far side-lobes of the station beam can also impose strong contamination within the EoR window. In conclusion, we argue that radio halos are severe foreground sources and need serious treatments in future EoR experiments.
astrophysics
Foot-mounted inertial sensors have become popular in many indoor or GPS-denied applications, including but not limited to medical monitoring, gait analysis, and soldier and first responder positioning. However, foot-mounted inertial navigation relies largely on the aid of the Zero Velocity Update (ZUPT) and suffers from inherent problems such as heading drift. This paper implements a pedestrian navigation system based on dual foot-mounted low-cost inertial measurement units (IMUs) and inter-foot ultrasonic ranging. An observability analysis of the system is performed to investigate the roles of the ZUPT measurement and the foot-to-foot ranging measurement in improving state estimability. A Kalman-based estimation algorithm is mechanized in the Earth frame, rather than in the common local-level frame, which is found to be effective in suppressing the linearization error in Kalman filtering. An ellipsoid constraint in the Earth frame is also proposed to further restrict the height drift. Simulations and real field experiments show that the proposed method has better robustness and positioning accuracy (about 0.1-0.2% of travelled distance) than traditional pedestrian navigation schemes.
computer science
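As background for the ZUPT aid discussed above, here is a minimal Python sketch of the mechanism. The thresholds, noise levels, and reduced velocity-only state are illustrative assumptions, not the paper's tuned values or its full Earth-frame error-state filter:

```python
import numpy as np

GRAVITY = 9.81

def detect_stance(acc, gyro, acc_tol=0.4, gyro_tol=0.5):
    """Toy stance detector: the foot is assumed stationary when the specific
    force is close to gravity and the angular rate is small."""
    return (abs(np.linalg.norm(acc) - GRAVITY) < acc_tol
            and np.linalg.norm(gyro) < gyro_tol)

def zupt_update(v, P, R=np.eye(3) * 1e-4):
    """Kalman measurement update with the pseudo-measurement v = 0 during
    stance. The state here is just the velocity estimate with covariance P
    (3x3); a full 15-state INS error filter follows the same algebra."""
    H = np.eye(3)                               # velocity observed directly
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    v_new = v + K @ (np.zeros(3) - H @ v)       # innovation: 0 - v
    P_new = (np.eye(3) - K @ H) @ P
    return v_new, P_new
```

In a navigation loop, `detect_stance` gates `zupt_update`: during each detected stance phase the velocity error is driven toward zero, which is what keeps foot-mounted dead reckoning from diverging (heading drift remains unobservable, hence the paper's foot-to-foot ranging aid).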
In recent years, convolutional neural networks have gained popularity in many engineering applications, especially in computer vision. In order to achieve better performance, more complex structures and advanced operations are often incorporated into the neural networks, which results in very long inference times. For time-critical tasks such as autonomous driving and virtual reality, real-time processing is fundamental. In order to reach real-time processing speed, a light-weight, high-throughput CNN architecture, namely RoadNet-RT, is proposed for road segmentation in this paper. It achieves a 90.33% MaxF score on the test set of the KITTI road segmentation task and 8 ms per frame when running on a GTX 1080 GPU. Compared to the state-of-the-art network, RoadNet-RT speeds up the inference time by a factor of 20 at the cost of only a 6.2% accuracy loss. For hardware design optimization, several techniques, such as depthwise separable convolution and non-uniform kernel size convolution, are custom-designed to further reduce the processing time. The proposed CNN architecture has been successfully implemented on an FPGA ZCU102 MPSoC platform, achieving a computation capability of 83.05 GOPS. The system throughput reaches 327.9 frames per second with an image size of 1216x176.
electrical engineering and systems science
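Of the hardware-oriented techniques named above, depthwise separable convolution is a standard building block; a minimal PyTorch sketch of the generic block follows (RoadNet-RT's exact channel counts and kernel sizes are not given in the abstract):

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel KxK spatial conv
    followed by a 1x1 pointwise conv. This factorization cuts the
    multiply-accumulate count versus a dense KxK convolution, which is
    why it suits real-time and FPGA deployments."""
    def __init__(self, c_in, c_out, k=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, stride=stride,
                                   padding=k // 2, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```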
This technical note studies Lyapunov-like conditions ensuring that a class of dynamical systems exhibits predefined-time stability. The origin of a dynamical system is predefined-time stable if it is fixed-time stable and an upper bound of the settling-time function can be arbitrarily chosen a priori through a suitable selection of the system parameters. We show that the studied Lyapunov-like conditions allow us to demonstrate the equivalence between previous Lyapunov theorems for predefined-time stability for autonomous systems. Moreover, the obtained Lyapunov-like theorem is extended to analyze the property of predefined-time ultimate boundedness with a predefined bound, which is useful when analyzing uncertain dynamical systems. Therefore, the proposed results constitute a general framework for analyzing predefined-time stability, and they also unify a broad class of systems which exhibit the predefined-time stability property. On the other hand, the proposed framework is used to design robust controllers for affine control systems which induce predefined-time stability (predefined-time ultimate boundedness of the solutions) with respect to some desired manifold. A simulation example is presented to show the behavior of a developed controller, especially regarding the settling-time estimation.
electrical engineering and systems science
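For orientation, Lyapunov-like conditions of the kind studied here take roughly the following form (our paraphrase in generic notation; the note's exact conditions may differ): if a positive definite, radially unbounded $V$ satisfies, along trajectories,

$$\dot{V}(x)\;\le\;-\frac{1}{p\,T_c}\,\exp\!\big(V(x)^{p}\big)\,V(x)^{1-p},\qquad 0<p\le 1,$$

then the origin is predefined-time stable with the settling time bounded by the prescribed constant $T_c$.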
We study the electron-phonon interaction and related transport properties of the nodal-line semimetal ZrSiS using first-principles calculations. We find that ZrSiS is characterized by a weak electron-phonon coupling on the order of 0.1, which is almost energy independent. The main contribution to the electron-phonon coupling originates from long-wavelength optical phonons, causing no significant renormalization of the electron spectral function. At the charge neutrality point, we find that electrons and holes provide comparable contributions to the scattering rate. The phonon-limited resistivity calculated within the Boltzmann transport theory is found to be strongly direction-dependent, with the ratio between the out-of-plane and in-plane directions being $\rho_{zz}/\rho_{xx}\sim 7.5$, mainly determined by the anisotropy of the carrier velocities. We estimate the zero-field resistivity to be $\rho_{xx}\approx12$ $\mu\Omega$ cm at 300 K, which is in good agreement with experimental data. The relatively small resistivity of ZrSiS can be attributed to a combination of weak electron-phonon coupling and high carrier velocities.
condensed matter
Recently, low-rank matrix recovery theory has emerged as a significant advance for various image processing problems. Meanwhile, group sparse coding (GSC) theory has led to great success in image restoration (IR), with each group possessing a low-rank property. In this paper, we propose a novel low-rank minimization based denoising model for IR tasks from the perspective of GSC, and we establish an important connection between our denoising model and the rank minimization problem. To overcome the bias problem caused by convex nuclear norm minimization (NNM) for rank approximation, a more generalized and flexible rank relaxation function is employed, namely a weighted nonconvex relaxation. Accordingly, an efficient iteratively-reweighted algorithm is proposed to handle the resulting minimization problem, combined with the popular $L_{1/2}$ and $L_{2/3}$ thresholding operators. Finally, our proposed denoising model is applied to IR problems via an alternating direction method of multipliers (ADMM) strategy. Typical IR experiments on image compressive sensing (CS), inpainting, deblurring and impulsive noise removal demonstrate that the proposed method achieves significantly higher PSNR/FSIM values than many relevant state-of-the-art methods.
electrical engineering and systems science
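As a rough illustration of the iteratively-reweighted idea, the sketch below shrinks the singular values of a group matrix with weights that decrease for large singular values; using weighted soft-thresholding as the inner step is our simplification (the paper employs $L_{1/2}$/$L_{2/3}$ thresholding operators within ADMM).

```python
import numpy as np

def weighted_svt(Y, lam, eps=1e-6, n_iter=5):
    """Iteratively-reweighted shrinkage of the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_hat = s.copy()
    for _ in range(n_iter):
        w = 1.0 / (s_hat + eps)                # small singular values get large weights
        s_hat = np.maximum(s - lam * w, 0.0)   # weighted soft-thresholding
    return (U * s_hat) @ Vt                    # low-rank estimate of the group matrix
```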
In this paper, the robustness and accuracy of deep neural networks (DNNs) are enhanced by introducing the $L_{2,\infty}$ normalization of the weight matrices of DNNs with ReLU as the activation function. It is proved that the $L_{2,\infty}$ normalization leads to large dihedral angles between two adjacent faces of the polyhedron graph of the DNN function and hence smoother DNN functions, which reduces over-fitting. A measure is proposed for the robustness of a classification DNN, namely the average radius of the maximal robust spheres centered at the sample data. A lower bound for the robustness measure is given in terms of the $L_{2,\infty}$ norm. Finally, an upper bound for the Rademacher complexity of DNNs with $L_{2,\infty}$ normalization is given. An algorithm is given to train a DNN with $L_{2,\infty}$ normalization, and experimental results show that the $L_{2,\infty}$ normalization is effective in improving robustness and accuracy.
statistics
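A minimal sketch of the normalization itself: the $L_{2,\infty}$ norm of a weight matrix is the maximum $\ell_2$ norm over its rows, so one simple way to enforce it is to cap every row after each optimizer step. The radius and scheduling below are assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np

def l2_inf_normalize(W, c=1.0):
    """Project W onto the L_{2,infty} ball of radius c (cap each row's L2 norm)."""
    row_norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.minimum(1.0, c / np.maximum(row_norms, 1e-12))
    return W * scale
```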
The celebrated hook-length formula gives a product formula for the number of standard Young tableaux of a straight shape. In 2014, Naruse announced a more general formula for the number of standard Young tableaux of skew shapes as a positive sum over excited diagrams of products of hook-lengths. We give an algebraic and a combinatorial proof of Naruse's formula, by using factorial Schur functions and a generalization of the Hillman--Grassl correspondence, respectively. The main new results are two different $q$-analogues of Naruse's formula: for the skew Schur functions, and for counting reverse plane partitions of skew shapes. We establish explicit bijections between these objects and families of integer arrays with certain nonzero entries, which also proves the second formula.
mathematics
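For reference, Naruse's formula expresses the number of standard Young tableaux of skew shape $\lambda/\mu$ as

$$ f^{\lambda/\mu} \;=\; |\lambda/\mu|!\;\sum_{D\,\in\,\mathcal{E}(\lambda/\mu)}\;\prod_{u\,\in\,[\lambda]\setminus D}\frac{1}{h(u)}, $$

where $\mathcal{E}(\lambda/\mu)$ is the set of excited diagrams of $\mu$ inside $\lambda$ and $h(u)$ denotes the hook length of the cell $u$ in $\lambda$.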
Materials under complex loading develop large strains and often transition via an elastic instability, as observed in both simple and complex systems. Here, we describe Si I under large Lagrangian strain using a $5^{th}$-order elastic potential found by minimizing the error relative to density functional theory (DFT) results. The Cauchy stress-Lagrangian strain curves for arbitrary complex loadings are in excellent correspondence with DFT results, including the elastic instability driving the Si I$\rightarrow$II phase transformation (PT) and the shear instabilities. The PT conditions for Si I$\rightarrow$II under the action of cubic axial stresses are linear in the Cauchy stresses, in agreement with DFT predictions. Such an elastic potential permits the study of elastic instabilities and their orientational dependence leading to different PTs, slip, twinning, or fracture, providing a fundamental basis for continuum simulations of crystal behavior under extreme loading.
condensed matter
The Hofstadter model is a classic model known for its fractal spectrum and the associated integer quantum Hall effect. Building on the recent synthesis of non-Abelian gauge fields in real space, we introduce and theoretically study a three-dimensional non-Abelian generalization of the Hofstadter model with SU(2) gauge potentials. Each Cartesian surface ($xy$, $yz$, or $zx$) of our model reduces to a two-dimensional non-Abelian Hofstadter problem. We derive its genuine (necessary and sufficient) non-Abelian condition and discuss its internal symmetries and gapped phases, finding that our model can realize both strong and weak 3D topological insulators under different choices of the gauge potentials. Furthermore, we find that certain configurations of hopping phases in our model can lead to multiple chiral and particle-hole symmetry operators, leading to phenomena such as fourfold degeneracy.
condensed matter
Multi-collinear factorization limits provide a window to study how locality and unitarity of scattering amplitudes can emerge dynamically from celestial CFT, the conjectured holographic dual to gauge and gravitational theories in flat space. To this end, we first use asymptotic symmetries to commence a systematic study of conformal and Kac-Moody descendants in the OPE of celestial gluons. Recursive application of these OPEs then equips us with a novel holographic method of computing the multi-collinear limits of gluon amplitudes. We perform this computation for some of the simplest helicity assignments of the collinear particles. The prediction from the OPE matches with Mellin transforms of the expressions in the literature to all orders in conformal descendants. In a similar vein, we conclude by studying multi-collinear limits of graviton amplitudes in the leading approximation of sequential double-collinear limits, again finding a consistency check against the leading order OPE of celestial gravitons.
high energy physics theory
We studied the transmission of photons in the mouse brain, both with and without an intact skull, using the Monte Carlo method. Photon tracks, optical absorption density and fluence rate were used as indicators for the analysis. We found that the photon distribution without an intact skull extends farther in both the longitudinal and transverse directions compared with that with an intact skull. With an intact skull, the distributions of optical absorption density and fluence rate were fusiform and rounder on the whole. This study provides reference and theoretical guidance for optical imaging of the mouse brain and for studies of the mouse and human brain.
physics
We propose a method to prepare the spin singlet in an ensemble of integer-spin atoms confined within a high-finesse optical cavity. Using a cavity-assisted Raman transition to produce an effective Tavis-Cummings model, we show that a high fidelity spin singlet can be produced probabilistically, although with low efficiency, heralded by the \emph{absence} of photons escaping the cavity. In a different limit, a similar configuration of laser and cavity fields can be used to engineer a model that emulates spinor collisional dynamics. Borrowing from techniques used in spinor Bose-Einstein condensates, we show that adiabatic transformation of the system Hamiltonian (via a time-dependent, effective quadratic Zeeman shift) can be used to produce a low fidelity spin singlet. Then, by following this method with the aforementioned heralding technique, we show that it is possible to prepare the singlet state with both high fidelity and good efficiency for a large ensemble.
quantum physics
We prove the existence and uniqueness of solution of a nonlocal cross-diffusion competitive population model for two species. The model may be considered as a version, or even an approximation, of the paradigmatic Shigesada-Kawasaki-Teramoto cross-diffusion model, in which the usual diffusion differential operator is replaced by an integral diffusion operator. The proof of existence of solutions is based on a compactness argument, while the uniqueness of solution is achieved through a duality technique.
mathematics
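For context, the local Shigesada-Kawasaki-Teramoto system that the nonlocal model approximates is commonly written, with generic coefficients, as

$$\partial_t u \;=\; \Delta\big[(d_1 + a_{11}u + a_{12}v)\,u\big] + u\,(b_1 - c_{11}u - c_{12}v),$$
$$\partial_t v \;=\; \Delta\big[(d_2 + a_{21}u + a_{22}v)\,v\big] + v\,(b_2 - c_{21}u - c_{22}v),$$

and the nonlocal version replaces the differential diffusion operator acting as above by an integral one.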
The availability of satellite optical information is often hampered by the natural presence of clouds, which can be problematic for many applications. Persistent clouds over agricultural fields can mask key stages of crop growth, leading to unreliable yield predictions. Synthetic Aperture Radar (SAR) provides all-weather imagery which can potentially overcome this limitation, but given its high and distinct sensitivity to different surface properties, the fusion of SAR and optical data still remains an open challenge. In this work, we propose the use of Multi-Output Gaussian Process (MOGP) regression, a machine learning technique that automatically learns the statistical relationships among multisensor time series, to detect vegetated areas over which the synergy between SAR and optical imagery is profitable. For this purpose, we use the Sentinel-1 Radar Vegetation Index (RVI) and Sentinel-2 Leaf Area Index (LAI) time series over a study area in the northwest of the Iberian Peninsula. Through a physical interpretation of MOGP trained models, we show their ability to provide estimations of LAI even over cloudy periods using the information shared with RVI, which guarantees the solution always stays tied to real measurements. Results demonstrate the advantage of MOGP especially for long data gaps, where optical-based methods notoriously fail. The leave-one-image-out assessment technique applied to the whole vegetation cover shows MOGP predictions improve standard GP estimations over short-time gaps (R$^2$ of 74\% vs 68\%, RMSE of 0.4 vs 0.44 $[m^2m^{-2}]$) and especially over long-time gaps (R$^2$ of 33\% vs 12\%, RMSE of 0.5 vs 1.09 $[m^2m^{-2}]$).
electrical engineering and systems science
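To illustrate how a MOGP couples the two sensors, here is a minimal NumPy sketch of the posterior mean of an intrinsic coregionalization model, in which LAI at cloudy times is predicted through the learned cross-covariance with RVI; the kernel, hyperparameters and two-output setup are illustrative, and the paper's MOGP is more general.

```python
import numpy as np

def rbf(a, b, ell=30.0):
    """Squared-exponential kernel on (day-of-year) time stamps."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def mogp_mean(t_tr, y_tr, out_tr, t_te, out_te, B, noise=1e-2):
    """Posterior mean of an intrinsic coregionalization model.
    out_* are integer output indices (e.g. 0 = RVI, 1 = LAI);
    B is the 2x2 coregionalization matrix coupling the outputs."""
    K = B[np.ix_(out_tr, out_tr)] * rbf(t_tr, t_tr) + noise * np.eye(len(t_tr))
    Ks = B[np.ix_(out_te, out_tr)] * rbf(t_te, t_tr)
    return Ks @ np.linalg.solve(K, y_tr)
```

Training on RVI everywhere plus LAI on clear days, and querying output 1 on cloudy days, gives the gap-filled LAI series.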
Predictive modelling and supervised learning are central to modern data science. With predictions from an ever-expanding number of supervised black-box strategies - e.g., kernel methods, random forests, deep learning aka neural networks - being employed as a basis for decision making processes, it is crucial to understand the statistical uncertainty associated with these predictions. As a general means to approach the issue, we present an overarching framework for black-box prediction strategies that not only predict the target but also their own predictions' uncertainty. Moreover, the framework allows for fair assessment and comparison of disparate prediction strategies. For this, we formally consider strategies capable of predicting full distributions from feature variables, so-called probabilistic supervised learning strategies. Our work draws from prior work including Bayesian statistics, information theory, and modern supervised machine learning, and in a novel synthesis leads to (a) new theoretical insights such as a probabilistic bias-variance decomposition and an entropic formulation of prediction, as well as to (b) new algorithms and meta-algorithms, such as composite prediction strategies, probabilistic boosting and bagging, and a probabilistic predictive independence test. Our black-box formulation also leads (c) to a new modular interface view on probabilistic supervised learning and a modelling workflow API design, which we have implemented in the newly released skpro machine learning toolbox, extending the familiar modelling interface and meta-modelling functionality of sklearn. The skpro package provides interfaces for construction, composition, and tuning of probabilistic supervised learning strategies, together with orchestration features for validation and comparison of any such strategy - be it frequentist, Bayesian, or other.
statistics
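To make the evaluation side concrete: a probabilistic supervised learning strategy returns a full predictive distribution per test point and is scored out-of-sample, for instance by log-loss. The Gaussian predictive form below is an illustrative assumption; skpro's actual interface differs in detail.

```python
import numpy as np
from scipy.stats import norm

def log_loss(y_true, pred_mean, pred_std):
    """Mean negative log-likelihood of the test targets under the predictions."""
    return -np.mean(norm.logpdf(y_true, loc=pred_mean, scale=pred_std))

y = np.array([1.2, 0.7, 2.3])
print(log_loss(y, pred_mean=np.array([1.0, 0.8, 2.0]),
               pred_std=np.array([0.5, 0.4, 0.6])))
```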
Recent studies of holographic properties of massless higher-order gravities, whose linear spectrum contains only the (massless) graviton, yielded some universal relations in $d=4$ dimensions between the holographic $a$, $c$ charges and the overall coefficient factor ${\cal C}_T$ of the energy-momentum tensor two-point function, namely $c=\frac{1}{d-1} \ell\frac{\partial a}{\partial \ell}={\cal C}_T$, where $\ell$ is the AdS radius. The second equality was shown to be valid in all dimensions. In this paper, we establish the first equality in $d=6$ by examining these quantities from $D=7$ higher-order gravities. Consequently the overall coefficient of the two-point function of the energy-momentum tensor is proportional to this $c$ charge, generalizing the well-known $d=4$ result. We identify the relevant $c$ charge as the coefficient of a specific combination of the three type-B anomalies. Modulo total derivatives, this combination involves Riemann tensor at most linearly.
high energy physics theory
In the forthcoming 6G era, Future Public Transport Vehicles (F-PTVs), such as buses and trains, will be designed to cater to the communication needs of commuters carrying numerous smart devices (smartphones, e-bands, etc.). These battery-powered devices need frequent recharging. Since recharging facilities are not readily available while commuting, we envision F-PTVs that will provide in-vehicle recharging via Wireless Energy Transfer (WET) through in-vehicle Access Points. F-PTVs will also be internally coated with Intelligent Reflecting Surfaces (IRS) that reflect incident radio waves towards the intended device to improve the signal strength at the receiver, for both information and energy transmissions. F-PTVs will further be equipped with Mobile Edge Computing (MEC) servers serving multiple purposes, including a reduction in devices' task-completion latency and the provision of in-vehicle Cloud services. MEC-server offloading will also relieve smart devices' processors of intensive tasks to preserve battery power. The challenges associated with the IRS-aided integrated MEC-WET model within F-PTVs are also discussed. We present simulations to show the effectiveness of such a model for F-PTVs.
computer science
State-of-the-art automated segmentation algorithms are not 100\% accurate especially when segmenting difficult to interpret datasets like those with severe osteoarthritis (OA). We present a novel interactive method called just-enough interaction (JEI), which adds a fast correction step to the automated layered optimal graph segmentation of multiple objects and surfaces (LOGISMOS). After LOGISMOS segmentation in knee MRI, the JEI user interaction does not modify boundary surfaces of the bones and cartilages directly. Local costs of underlying graph nodes are modified instead and the graph is re-optimized, providing globally optimal corrected results. Significant performance improvement ($p \ll 0.001$) was observed when comparing JEI-corrected results to the automated. The algorithm was extended from 3D JEI to longitudinal multi-3D (4D) JEI allowing simultaneous visualization and interaction of multiple-time points of the same patient.
computer science
In this correspondence, we propose a space-domain index modulation (IM) scheme for the downlink of multiuser multiple-input multiple-output (MU-MIMO) systems. Instead of the most common approach, where spatial bits select active receiver antennas, in the presented scheme the spatial information is mapped onto the transmitter side. This allows IM to better exploit large-dimensional antenna settings, which are typically easier to deploy at the base station. In order to mitigate inter-user interference and allow single-user detection, a precoder is adopted at the BS. Furthermore, two alternative enhanced signal construction methods are proposed for minimizing the transmitted power or enabling an implementation with a reduced number of RF chains. Simulation results for different scenarios show that the proposed approach can be an attractive alternative to conventional precoded MU-MIMO.
electrical engineering and systems science
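A minimal sketch of the two ingredients, without reproducing the exact way the paper combines them: spatial bits select a transmit-side index, and a zero-forcing precoder (one illustrative choice) removes inter-user interference so that single-user detection suffices. Dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, K = 8, 4                                   # transmit antennas, users
H = (rng.normal(size=(K, Nt)) + 1j * rng.normal(size=(K, Nt))) / np.sqrt(2)

# (i) Spatial bits select a transmit-side index (illustrative mapping).
spatial_bits = "10"
tx_index = int(spatial_bits, 2)

# (ii) Zero-forcing precoder: H @ P = I, so each user detects alone.
P = H.conj().T @ np.linalg.inv(H @ H.conj().T)
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=K) / np.sqrt(2)  # QPSK
y = H @ (P @ s)                                # received symbols (noise omitted)
print(np.allclose(y, s))                       # True: no inter-user interference
```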
Collisions between galaxy clusters dissipate enormous amounts of energy in the intra-cluster medium (ICM) through turbulence and shocks. In the process, Mpc-scale diffuse synchrotron emission in form of radio halos and relics can form. However, little is known about the very early phase of the collision. We used deep radio observations from 53 MHz to 1.5 GHz to study the pre-merging galaxy clusters A1758N and A1758S that are $\sim2$ Mpc apart. We confirm the presence of a giant bridge of radio emission connecting the two systems that was reported only tentatively in our earlier work. This is the second large-scale radio bridge observed to date in a cluster pair. The bridge is clearly visible in the LOFAR image at 144 MHz and tentatively detected at 53 MHz. Its mean radio emissivity is more than one order of magnitude lower than that of the radio halos in A1758N and A1758S. Interestingly, the radio and X-ray emissions of the bridge are correlated. Our results indicate that non-thermal phenomena in the ICM can be generated also in the region of compressed gas in-between infalling systems.
astrophysics
This paper presents our recent effort on end-to-end speaker-attributed automatic speech recognition, which jointly performs speaker counting, speech recognition and speaker identification for monaural multi-talker audio. Firstly, we thoroughly update the model architecture, previously designed around a long short-term memory (LSTM)-based attention encoder-decoder, by applying transformer architectures. Secondly, we propose a speaker deduplication mechanism to reduce speaker identification errors in highly overlapped regions. Experimental results on the LibriSpeechMix dataset show that the transformer-based architecture is especially good at counting the speakers and that the proposed model reduces the speaker-attributed word error rate by 47% over the LSTM-based baseline. Furthermore, for the LibriCSS dataset, which consists of real recordings of overlapped speech, the proposed model achieves concatenated minimum-permutation word error rates of 11.9% and 16.3% with and without target speaker profiles, respectively, both of which are state-of-the-art results for LibriCSS in the monaural setting.
electrical engineering and systems science
Molecular phenotyping by gene expression profiling is common in contemporary cancer research and in molecular diagnostics. However, molecular profiling remains costly and resource intense to implement, and is just starting to be introduced into clinical diagnostics. Molecular changes, including genetic alterations and gene expression changes, occurring in tumors cause morphological changes in tissue, which can be observed on the microscopic level. The relationship between morphological patterns and some of the molecular phenotypes can be exploited to predict molecular phenotypes directly from routine haematoxylin and eosin (H&E) stained whole slide images (WSIs) using deep convolutional neural networks (CNNs). In this study, we propose a new, computationally efficient approach for disease specific modelling of relationships between morphology and gene expression, and we conducted the first transcriptome-wide analysis in prostate cancer, using CNNs to predict bulk RNA-sequencing estimates from WSIs of H&E stained tissue. The work is based on the TCGA PRAD study and includes both WSIs and RNA-seq data for 370 patients. Out of 15586 protein coding and sufficiently frequently expressed transcripts, 6618 had predicted expression significantly associated with RNA-seq estimates (FDR-adjusted p-value $< 1\times10^{-4}$) in a cross-validation. 5419 (81.9%) of these were subsequently validated in a held-out test set. We also demonstrate the ability to predict a prostate cancer specific cell cycle progression score directly from WSIs. These findings suggest that contemporary computer vision models offer an inexpensive and scalable solution for prediction of gene expression phenotypes directly from WSIs, providing opportunity for cost-effective large-scale research studies and molecular diagnostics.
computer science
In this work, we estimate extreme sea surface temperature (SST) hotspots, i.e., high threshold exceedance regions, for the Red Sea, a vital region of high biodiversity. We analyze high-resolution satellite-derived SST data comprising daily measurements at 16703 grid cells across the Red Sea over the period 1985-2015. We propose a semiparametric Bayesian spatial mixed-effects linear model with a flexible mean structure to capture spatially-varying trend and seasonality, while the residual spatial variability is modeled through a Dirichlet process mixture (DPM) of low-rank spatial Student-$t$ processes (LTPs). By specifying cluster-specific parameters for each LTP mixture component, the bulk of the SST residuals influence tail inference and hotspot estimation only moderately. Our proposed model has a nonstationary mean, covariance and tail dependence, and posterior inference can be drawn efficiently through Gibbs sampling. In our application, we show that the proposed method outperforms some natural parametric and semiparametric alternatives. Moreover, we show how hotspots can be identified, and we estimate extreme SST hotspots for the whole Red Sea, projected until the year 2100, based on the Representative Concentration Pathways 4.5 and 8.5. The estimated 95\% credible region for joint high threshold exceedances includes large areas covering major endangered coral reefs in the southern Red Sea.
statistics
Laser induced electronic excitations that spontaneously emit photons and decay directly to the initial ground state ("optical cycling transitions") are used in quantum information and precision measurement for state initialization and readout. To extend this primarily atomic technique to organic compounds, we theoretically investigate optical cycling of alkaline earth phenoxides and their functionalized derivatives. We find that optical cycle leakage due to wavefunction mismatch is low in these species, and can be further suppressed by using chemical substitution to boost the electron withdrawing strength of the aromatic molecular ligand through resonance and induction effects. This provides a straightforward way to use chemical functional groups to construct optical cycling moieties for laser cooling, state preparation, and quantum measurement.
physics
We investigate the existence and properties of a double asymptotic expansion in $1/N^{2}$ and $1/\sqrt{D}$ in $\mathrm{U}(N)\times\mathrm{O}(D)$ invariant Hermitian multi-matrix models, where the $N\times N$ matrices transform in the vector representation of $\mathrm{O}(D)$. The crucial point is to prove the existence of an upper bound $\eta(h)$ on the maximum power $D^{1+\eta(h)}$ of $D$ that can appear for the contribution at a given order $N^{2-2h}$ in the large $N$ expansion. We conjecture that $\eta(h)=h$ in a large class of models. In the case of traceless Hermitian matrices with the quartic tetrahedral interaction, we are able to prove that $\eta(h)\leq 2h$; the sharper bound $\eta(h)=h$ is proven for a complex bipartite version of the model, with no need to impose a tracelessness condition. We also prove that $\eta(h)=h$ for the Hermitian model with the sextic wheel interaction, again with no need to impose a tracelessness condition.
high energy physics theory
Higher genus modular graph tensors map Feynman graphs to functions on the Torelli space of genus-$h$ compact Riemann surfaces which transform as tensors under the modular group $Sp(2h , \mathbb Z)$, thereby generalizing a construction of Kawazumi. An infinite family of algebraic identities between one-loop and tree-level modular graph tensors are proven for arbitrary genus and arbitrary tensorial rank. We also derive a family of identities that apply to modular graph tensors of higher loop order.
high energy physics theory
The purpose of this study is to detect mismatches between a text script and its voice-over. To this end, we present a novel utterance verification (UV) method that calculates the degree of correspondence between a voice-over and the phoneme sequence of a script. We found that the phoneme recognition probabilities of exaggerated voice-overs decrease compared with ordinary utterances, but their rankings do not demonstrate any significant change. The proposed method therefore uses the recognition ranking of each phoneme segment corresponding to a phoneme sequence to measure the confidence of a voice-over utterance for its corresponding script. The experimental results show that the proposed UV method outperforms a state-of-the-art approach that uses cross-modal attention for detecting mismatches between speech and transcription.
electrical engineering and systems science
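To make the ranking idea concrete, here is a minimal sketch that scores an utterance by how highly the scripted phonemes rank in the recognizer's posteriors; the aggregation rule is an illustrative assumption, not the paper's exact confidence measure.

```python
import numpy as np

def rank_confidence(posteriors, script):
    """posteriors: (T, n_phones) segment-level recognition posteriors;
       script: length-T array of scripted phoneme ids."""
    # rank 0 means the scripted phoneme was the top recognition hypothesis
    order = np.argsort(-posteriors, axis=1)
    ranks = np.array([np.where(order[t] == script[t])[0][0]
                      for t in range(len(script))])
    return -ranks.mean()   # higher = better match between voice-over and script
```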
We extend on-shell bootstrap methods from spacetime amplitudes to the worldsheet objects of the CHY formalism. We find that the integrands corresponding to tree-level non-linear sigma model, Yang-Mills and $(DF)^2$ theory are determined by demanding enhanced UV scaling under BCFW shifts. Back in spacetime, we also find that $(DF)^2$ theory is fixed by gauge invariance/UV scaling and simple locality assumptions.
high energy physics theory
Electron backscatter diffraction is a widely used technique for nano- to micro-scale analysis of crystal structure and orientation. Backscatter patterns produced by an alloy solid solution matrix and its ordered superlattice exhibit only extremely subtle differences, due to the inelastic scattering that precedes coherent diffraction. We show that unsupervised machine learning (with PCA, NMF, and an autoencoder neural network) is well suited to fine feature extraction and superlattice/matrix classification. Remapping cluster average patterns onto the diffraction sphere lets us compare Kikuchi band profiles to dynamical simulations, confirm the superlattice stoichiometry, and facilitate virtual imaging with a spherical solid angle aperture. This pipeline now enables unparalleled mapping of exquisite crystallographic detail from a wide range of materials within the scanning electron microscope.
condensed matter
The rheology of pressure-driven flows of two-dimensional dense monodisperse emulsions in neutral wetting microchannels is investigated by means of mesoscopic lattice simulations, capable of handling large collections of droplets, in the order of several hundreds. The simulations reveal that the fluidization of the emulsion proceeds through a sequence of discrete steps, characterized by yielding events whereby layers of droplets start rolling over each other, thus leading to sudden drops of the relative effective viscosity. It is shown that such discrete fluidization is robust against loss of confinement, namely it persists also in the regime of small ratios of the droplet diameter over the microchannel width. We also develop a simple phenomenological model which predicts a linear relation between the relative effective viscosity of the emulsion and the product of the confinement parameter (global size of the device over droplet radius) and the viscosity ratio between the disperse and continuous phases. The model shows excellent agreement with the numerical simulations. The present work offers new insights to enable the design of microfluidic scaffolds for tissue engineering applications and paves the way to detailed rheological studies of soft-glassy materials in complex geometries.
condensed matter
The yielding of dry foams is enabled by small elementary yield events on the bubble scale, "T1"s. We study the large-scale detection of these in an expanding 2D flow geometry using artificial intelligence (AI) and nearest-neighbour analysis. A good level of accuracy is reached by the AI approach using only a single frame, with the maximum score for vertex-centered images highlighting the important role the vertices play in the local yielding of foams. We study the predictability of T1s ahead of time and show that this is possible on a timescale related to the waiting-time statistics of T1s in local neighborhoods. The development of local T1-event predictability is asymmetric in time: it measures the variation of the local properties leading up to yielding and, similarly, the existence of a relaxation timescale after local yielding.
condensed matter
In the Abelian Higgs model electric (and magnetic) fields of external charges (and currents) are screened by the scalar field. In this contribution, complementing recent investigations of Ishihara and Ogawa, we present a detailed investigation of charge screening using a perturbative approach with the charge strength as an expansion parameter. It is shown how perfect global and remarkably good local screening can be derived from Gauss' theorem, and the asymptotic form of the fields far from the sources. The perturbative results are shown to compare favourably to the numerical ones.
high energy physics theory
We provide upper and lower bounds for the length of controlled bad sequences over the majoring and the minoring orderings of finite sets of $\mathbb{N}^d$. The results are obtained by bounding the length of such sequences by functions from the Cichon hierarchy. This allows us to translate these results to bounds over the fast-growing complexity classes. The obtained bounds are proven to be tight for the majoring ordering, which solves a problem left open by Abriola, Figueira and Senno (Theor. Comp. Sci, Vol. 603). Finally, we use the results on controlled bad sequences to prove upper bounds for the emptiness problem of some classes of automata.
computer science
We present an abstract model of quantum computation, the "Pauli Fusion" model, whose primitive operations correspond closely to generators of the ZX calculus (a formal graphical language for quantum computing). The fundamental operations of Pauli Fusion are also straightforward abstractions of basic processes in some leading proposed quantum technologies. These operations have non-deterministic heralded effects, similarly to measurement-based quantum computation. We describe sufficient conditions for Pauli Fusion procedures to be deterministically realisable, so that it performs a given transformation independently of its non-deterministic outcomes. This provides an operational model to realise ZX terms beyond the circuit model.
quantum physics
In this paper, a multi-frequency multi-link three-dimensional (3D) non-stationary wideband multiple-input multiple-output (MIMO) channel model is proposed. Spatial consistency and multi-frequency correlation are considered in the parameter initialization of every single link and of different frequencies, covering both large-scale parameters (LSPs) and small-scale parameters (SSPs). Moreover, the SSPs are time-variant and are updated as the scatterers and the receiver (Rx) move. The temporal evolution of clusters is modeled by birth and death processes. The single-link channel model, which accounts for inter-link correlation, can easily be extended to a multi-link channel model. Statistical properties, including the spatial cross-correlation function (CCF), power delay profile (PDP), and correlation matrix collinearity (CMC), are investigated and compared with the 3rd Generation Partnership Project (3GPP) TR 38.901 and quasi-deterministic radio channel generator (QuaDRiGa) channel models. In addition, the CCF is validated against measurement data.
computer science
A concrete, stylized example illustrates that inferences may be degraded, rather than improved, by incorporating supplementary data via a joint likelihood. In the example, the likelihood is assumed to be correctly specified, as is the prior over the parameter of interest; all that is necessary for the joint modeling approach to suffer is misspecification of the prior over a nuisance parameter.
statistics
The determination of the neutrino mass is one of the major challenges in astroparticle physics today. Direct neutrino mass experiments, based solely on the kinematics of beta-decay, provide a largely model-independent probe of the neutrino mass scale. The Karlsruhe Tritium Neutrino (KATRIN) experiment is designed to directly measure the effective electron antineutrino mass with a sensitivity of 0.2 eV (90% CL). In this work we report on the first operation of KATRIN with tritium, which took place in 2018. During this commissioning phase of the tritium circulation system, excellent agreement of the theoretical prediction with the recorded spectra was found, and stable conditions over a time period of 13 days could be established. These results are an essential prerequisite for the subsequent neutrino mass measurements with KATRIN in 2019.
physics
We investigate the optimality of perturbation based algorithms in the stochastic and adversarial multi-armed bandit problems. For the stochastic case, we provide a unified regret analysis for both sub-Weibull and bounded perturbations when rewards are sub-Gaussian. Our bounds are instance optimal for sub-Weibull perturbations with parameter 2 that also have a matching lower tail bound, and all bounded support perturbations where there is sufficient probability mass at the extremes of the support. For the adversarial setting, we prove rigorous barriers against two natural solution approaches using tools from discrete choice theory and extreme value theory. Our results suggest that the optimal perturbation, if it exists, will be of Frechet-type.
statistics
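As a concrete instance of the perturbation-based algorithms analyzed, here is a minimal stochastic-bandit sketch that plays the arm maximizing the empirical mean plus a scaled random perturbation; the Gaussian choice (sub-Weibull with parameter 2) and the scaling are illustrative.

```python
import numpy as np

def perturbed_bandit(means, T=10000, sigma=1.0, seed=0):
    """Play argmax(empirical mean + sigma * Z / sqrt(pulls)); return total reward."""
    rng = np.random.default_rng(seed)
    K = len(means)
    n = np.zeros(K)         # pull counts
    s = np.zeros(K)         # reward sums
    for t in range(T):
        if t < K:
            a = t                                       # play each arm once
        else:
            z = rng.normal(size=K)                      # fresh perturbations
            a = np.argmax(s / n + sigma * z / np.sqrt(n))
        r = means[a] + rng.normal()                     # sub-Gaussian reward
        n[a] += 1
        s[a] += r
    return s.sum()
```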
The circular Wilson loop in the two-node quiver CFT is computed at large-N and strong 't Hooft coupling by solving the localization matrix model.
high energy physics theory
In this paper, we explore the wall crossing phenomenon for K-stability, and apply it to explain the wall crossing for K-moduli stacks and K-moduli spaces.
mathematics
Magnetohydrodynamics is a theory of long-lived, gapless excitations in plasmas. It was argued from the point of view of fluid with higher-form symmetry that magnetohydrodynamics remains a consistent, non-dissipative theory even in the limit where temperature is negligible compared to the magnetic field. In this limit, leading-order corrections to the ideal magnetohydrodynamics arise at the second order in the gradient expansion of relevant fields, not at the first order as in the standard hydrodynamic theory of dissipative fluids and plasmas. In this paper, we classify the non-dissipative second-order transport by constructing the appropriate non-linear effective action. We find that the theory has eleven independent charge and parity invariant transport coefficients for which we derive a set of Kubo formulae. The relation between hydrodynamics with higher-form symmetry and the theory of force-free electrodynamics, which has recently been shown to correspond to the zero-temperature limit of the ideal magnetohydrodynamics, as well as simple astrophysical applications are also discussed.
high energy physics theory
I will discuss spontaneous conformal symmetry breaking in the strongly $\gamma$-deformed limit of the $\mathcal N=4$ supersymmetric Yang-Mills theory known as Fishnet Conformal Field Theory.
high energy physics theory
Acoustic scene classification systems using deep neural networks classify given recordings into pre-defined classes. In this study, we propose a novel scheme for acoustic scene classification which adopts an audio tagging system inspired by the human perception mechanism. When humans identify an acoustic scene, the existence of different sound events provides discriminative information which affects the judgement. The proposed framework mimics this mechanism using various approaches. Firstly, we employ three methods to concatenate tag vectors extracted using an audio tagging system with an intermediate hidden layer of an acoustic scene classification system. We also explore multi-head attention on the feature map of an acoustic scene classification system using tag vectors. Experiments conducted on the Detection and Classification of Acoustic Scenes and Events (DCASE) 2019 task 1-a dataset demonstrate the effectiveness of the proposed scheme. Concatenation and multi-head attention show classification accuracies of 75.66% and 75.58%, respectively, compared to the 73.63% accuracy of the baseline. The system with the two proposed approaches combined demonstrates an accuracy of 76.75%.
electrical engineering and systems science
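A minimal sketch of the first fusion approach (tag-vector concatenation with an intermediate hidden layer), assuming a PyTorch setting; the layer sizes, the 527-dimensional tag vector, and the fusion point are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SceneClassifierWithTags(nn.Module):
    def __init__(self, feat_dim=128, tag_dim=527, n_scenes=10):
        super().__init__()
        self.backbone = nn.Linear(feat_dim, 256)   # stand-in for the CNN trunk
        self.head = nn.Sequential(
            nn.Linear(256 + tag_dim, 128), nn.ReLU(), nn.Linear(128, n_scenes))

    def forward(self, feats, tags):
        h = torch.relu(self.backbone(feats))
        # Concatenate the audio-tagging output with the hidden representation.
        return self.head(torch.cat([h, tags], dim=-1))
```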
We investigate the vulnerability of a power transmission grid to load oscillation attacks. We demonstrate that an adversary with a relatively small amount of resources can launch a successful load oscillation attack to destabilize the grid. The adversary is assumed to be able to compromise smart meters at a subset of load buses and control their switches. In the studied attack scenarios the adversary estimates the line flow sensitivity factors (LFSFs) associated with the monitored tie lines by perturbing a small amount of load at compromised buses and observing the monitored lines flow changes. The learned LFSF values are used for selecting a target line and optimizing the oscillation attack to cause the target line to trip while minimizing the magnitude of load oscillation. We evaluated the attack impact using the COSMIC time-domain simulator with two test cases, the IEEE RTS 96 and Polish 2383-Bus Systems. The proposed attack strategy succeeded in causing 33% of load to be shed while oscillating only 7% of load in the IEEE RTS 96 test system, and full blackout after oscillating only 3% of the load in the Polish test system, which is much smaller than oscillation magnitudes used by other benchmarks.
electrical engineering and systems science
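Schematically, the LFSF estimation step can be sketched as a finite-difference probe; `read_line_flows` and `set_load` are hypothetical stand-ins for the attacker's observation and actuation capabilities, not an API from the paper.

```python
import numpy as np

def estimate_lfsf(read_line_flows, set_load, bus, delta=0.01):
    """LFSF ~ (change in monitored tie-line flows) / (change in bus load)."""
    f0 = read_line_flows()        # baseline flows on the monitored lines
    set_load(bus, delta)          # small load perturbation at a compromised bus
    f1 = read_line_flows()
    set_load(bus, -delta)         # restore the original load
    return (f1 - f0) / delta      # one column of the LFSF matrix
```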
The concept of causal nonseparability has been recently introduced, in opposition to that of causal separability, to qualify physical processes that locally abide by the laws of quantum theory but cannot be embedded in a well-defined global causal structure. While the definition is unambiguous in the bipartite case, its generalisation to the multipartite case is not so straightforward. Two seemingly different generalisations have been proposed, one for a restricted tripartite scenario and one for the general multipartite case. Here we compare the two, showing that they are in fact inequivalent. We propose our own definition of causal (non)separability for the general case, which, although a priori subtly different, turns out to be equivalent to the concept of "extensible causal (non)separability" introduced before, and which we argue is a more natural definition for general multipartite scenarios. We then derive necessary as well as sufficient conditions to characterise causally (non)separable processes in practice. These allow one to devise practical tests, by generalising the tool of witnesses of causal nonseparability.
quantum physics
The Higgs potential appears to be fine-tuned, hence very sensitive to values of other scalar fields that couple to the Higgs. We show that this feature can lead to a new epoch in the early universe featuring violent dynamics coupling the Higgs to a scalar modulus. The oscillating modulus drives tachyonic Higgs particle production. We find a simple parametric understanding of when this process can lead to rapid modulus fragmentation, resulting in gravitational wave production. A nontrivial equation-of-state arising from the nonlinear dynamics also affects the time elapsed from inflation to the CMB, influencing fits of inflationary models. Supersymmetric theories automatically contain useful ingredients for this picture.
high energy physics phenomenology
This work focuses on the estimation of multiple change-points in a time-varying Ising model that evolves piecewise constantly. The aim is to identify both the moments at which significant changes occur in the Ising model and the underlying graph structures. For this purpose, we propose to estimate the neighborhood of each node by maximizing a penalized version of its conditional log-likelihood. The objective of the penalization is twofold: it imposes sparsity in the learned graphs and, thanks to a fused-type penalty, it also enforces them to evolve piecewise constantly. Under mild assumptions, we provide two change-point consistency theorems. These are the first such results for the detection of an unknown number of change-points in a time-varying Ising model. Finally, experimental results on several synthetic datasets and a real-world dataset demonstrate the performance of our method.
statistics
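Schematically, the estimator described above can be rendered as the following penalized node-wise objective; this is our illustrative sketch (the function names, scaling and exact penalty form are assumptions, not the paper's code).

```python
import numpy as np

def penalized_objective(theta, X_blocks, u, lam1, lam2):
    """theta: (n_blocks, p) neighborhood weights of node u (entry u unused);
       X_blocks: list of (n_t, p) arrays of +/-1 samples, one per time block."""
    ll = 0.0
    for b, X in enumerate(X_blocks):
        m = np.delete(X, u, axis=1) @ np.delete(theta[b], u)
        z = X[:, u] * m
        ll += np.mean(-np.log1p(np.exp(-2.0 * z)))       # Ising conditional log-lik.
    sparsity = lam1 * np.abs(theta).sum()                # sparse graphs
    fused = lam2 * np.abs(np.diff(theta, axis=0)).sum()  # piecewise-constant evolution
    return -ll + sparsity + fused
```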
Suppose a Boolean function $f$ is symmetric under a group action $G$ acting on the $n$ bits of the input. For which $G$ does this mean $f$ does not have an exponential quantum speedup? Is there a characterization of how rich $G$ must be before the function $f$ cannot have enough structure for quantum algorithms to exploit? In this work, we make several steps towards understanding the group actions $G$ which are "quantum intolerant" in this way. We show that sufficiently transitive group actions do not allow a quantum speedup, and that a "well-shuffling" property of group actions -- which happens to be preserved by several natural transformations -- implies a lack of super-polynomial speedups for functions symmetric under the group action. Our techniques are motivated by a recent paper by Chailloux (2018), which deals with the case where $G=S_n$. Our main application is for graph symmetries: we show that any Boolean function $f$ defined on the adjacency matrix of a graph (and symmetric under relabeling the vertices of the graph) has a power $6$ relationship between its randomized and quantum query complexities, even if $f$ is a partial function. In particular, this means no graph property testing problems can have super-polynomial quantum speedups, settling an open problem of Ambainis, Childs, and Liu (2011).
quantum physics
Entanglement entropy plays a variety of roles in quantum field theory, including the connections between quantum states and gravitation through the holographic principle. This article provides a review of entanglement entropy from a mixed viewpoint of field theory and holography. A set of basic methods for the computation is developed and illustrated with simple examples such as free theories and conformal field theories. The structures of the ultraviolet divergences and the universal parts are determined and compared with the holographic descriptions of entanglement entropy. The utility of quantum inequalities of entanglement are discussed and shown to derive the C-theorem that constrains renormalization group flows of quantum field theories in diverse dimensions.
high energy physics theory
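For reference, the central quantity of the review is the von Neumann entropy of the reduced density matrix $\rho_A = \mathrm{Tr}_{\bar A}\,\rho$ of a subregion $A$:

$$ S_A \;=\; -\,\mathrm{Tr}\,\big(\rho_A \log \rho_A\big). $$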
As large amounts of data are circulated both from users to a cloud server and between users, there is a critical need for privately aggregating the shared data. This paper considers the problem of private weighted sum aggregation with secret weights, where an aggregator wants to compute the weighted sum of the local data of some agents. Depending on the privacy requirements posed on the weights, there are different secure multi-party computation schemes exploiting the information structure. First, when each agent has a local private value and a local private weight, we review private sum aggregation schemes. Second, we discuss how to extend the previous schemes for when the agents have a local private value, but the aggregator holds the corresponding weights. Third, we treat a more general case where the agents have their local private values, but the weights are known neither by the agents nor by the aggregator; they are generated by a system operator, who wants to keep them private. We give a solution where aggregator obliviousness is achieved, even under collusion between the participants, and we show how to obtain a more efficient communication and computation strategy for multi-dimensional data, by batching the data into fewer ciphertexts. Finally, we implement our schemes and discuss the numerical results and efficiency improvements.
computer science
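As a toy illustration of the sum-aggregation building block reviewed first, the sketch below uses additive masks that cancel in the aggregate, so the aggregator learns only the sum; the actual schemes rely on proper cryptographic primitives rather than a trusted mask setup, and the weighted case adds further machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 2**31 - 1                              # work modulo a public prime
values = np.array([12, 7, 30])             # agents' private local data

masks = rng.integers(0, q, size=len(values))
masks[-1] = (-masks[:-1].sum()) % q        # masks sum to 0 mod q
reports = (values + masks) % q             # each agent reports a masked value

# The aggregator recovers the sum, but no individual value.
print(reports.sum() % q == values.sum())   # True
```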
Heterogeneous quantum networks consisting of mixed technologies - Continuous Variable (CV) and Discrete Variable (DV) - will become ubiquitous as global quantum communication matures. Hybrid quantum entanglement between CV and DV modes will be a critical resource in such networks. A leading candidate for such hybrid quantum entanglement is that between Schrödinger-cat states and photon-number states. In this work, we explore the use of Two-Mode Squeezed Vacuum (TMSV) states, distributed from satellites, as a teleportation resource for the re-distribution of our candidate hybrid entanglement pre-stored within terrestrial quantum networks. We determine the loss conditions under which teleportation via the TMSV resource outperforms direct satellite distribution of the hybrid entanglement, in addition to quantifying the advantage of teleporting the DV mode relative to the CV mode. Our detailed calculations show that under the loss conditions anticipated from Low-Earth-Orbit, DV teleportation via the TMSV resource will always provide significantly improved outcomes relative to other means of distributing hybrid entanglement within heterogeneous quantum networks.
quantum physics
Light detection and ranging (lidar) has long been used in various applications. Solid-state beam steering mechanisms are needed for robust lidar systems. Here we propose and demonstrate a lidar scheme called "Swept Source Lidar" that allows us to perform frequency-modulated continuous-wave (FMCW) ranging and nonmechanical beam steering simultaneously. Wavelength dispersive elements provide angular beam steering, while a laser frequency is continuously swept by a wideband swept source over its whole tuning bandwidth. Employing a tunable vertical-cavity surface-emitting laser and a 1-axis mechanical beam scanner, three-dimensional point cloud data has been obtained. Swept Source Lidar systems can be flexibly combined with various beam steering elements to realize full solid-state FMCW lidar systems.
physics
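For reference, in FMCW ranging the range is read off from the beat frequency between the transmitted and received chirps; with sweep bandwidth $B$ and sweep time $T$ (generic notation, not specific to this paper),

$$ f_b \;=\; \frac{2BR}{c\,T} \qquad\Longrightarrow\qquad R \;=\; \frac{c\,f_b\,T}{2B}. $$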
Optomechanical systems are promising platforms for controlled light-matter interactions. They are capable of providing several fundamental and practical novel features when the mechanical oscillator is cooled down to nearly reach its ground state. In this framework, measuring the effective temperature of the oscillator is perhaps the most relevant step in the characterization of those systems. In conventional schemes, the cavity is driven strongly, and the overall system is well-described by a linear (Gaussian preserving) Hamiltonian. Here, we depart from this regime by considering an undriven optomechanical system via non-Gaussian radiation-pressure interaction. To measure the temperature of the mechanical oscillator, initially in a thermal state, we use light as a probe to coherently interact with it and create an entangled state. We show that the optical probe gets a nonlinear phase, resulting from the non-Gaussian interaction, and undergoes an incoherent phase diffusion process. To efficiently infer the temperature from the entangled light-matter state, we propose using a nonlinear Kerr medium before a homodyne detector. Remarkably, placing the Kerr medium enhances the precision to nearly saturate the ultimate quantum bound given by the quantum Fisher information. Furthermore, it also simplifies the thermometry procedure as it makes the choice of the homodyne local phase independent of the temperature, which avoids the need for adaptive sensing protocols.
quantum physics