text | label |
---|---|
Resonance scattering (RS) is an important process in astronomical objects, because it affects measurements of elemental abundances and distorts the surface brightness of the object. It is predicted that RS can occur in plasmas of supernova remnants (SNRs). Although several authors have reported hints of RS in SNRs, no strong observational evidence has been established so far. We perform high-resolution X-ray spectroscopy of the SNR N49 with the Reflection Grating Spectrometer aboard XMM-Newton. The RGS spectrum of N49 shows a high G-ratio of O VII He$\alpha$ lines as well as high O VIII Ly$\beta$/$\alpha$ and Fe XVII (3s-2p)/(3d-2p) ratios, which cannot be explained by the emission from a thin thermal plasma. These line ratios can be well explained by the effect of RS. Our result implies that RS has a large impact particularly on measurements of the oxygen abundance. | astrophysics |
In this paper, we investigate covert communication over millimeter-wave (mmWave) frequencies. In particular, a mmWave transmitter Alice attempts to reliably communicate to a receiver Bob while hiding the existence of communication from a warden Willie. In this regard, operating over mmWave bands not only increases the covertness thanks to directional beams, but also increases the transmission data rates given the much larger available bandwidth, and enables ultra-low form factor transceivers due to the shorter wavelengths used compared to the conventional radio frequency (RF) counterpart. We first assume that the transmitter Alice employs two independent antenna arrays, one of which forms a directive beam for data transmission to Bob. The other antenna array is used by Alice to generate another beam toward Willie as a jamming signal while changing the transmit power independently across the transmission blocks. For this dual-beam setup, we characterize Willie's detection error rate with the optimal detector and derive a closed-form expression for its expected value from Alice's perspective. We then derive the closed-form expression for the outage probability of the Alice-Bob link, which enables characterizing the optimal covert rate that can be achieved using the proposed setup. We further obtain tractable forms for the ergodic capacity of the Alice-Bob link involving only one-dimensional integrals that can be computed in closed form for most ranges of the channel parameters. Finally, we highlight how the results can be extended to more practical scenarios, particularly to cases where perfect information about the location of the passive warden is not available. Our results demonstrate the advantages of covert mmWave communication compared to the RF counterpart. The research in this paper is the first analytical attempt at exploring covert communication using mmWave systems. | computer science |
This paper details the algorithms and task planner involved in vehicle collaboration for building a structure, the problem defined in Challenge 2 of the Mohammed Bin Zayed International Robotic Challenge 2020 (MBZIRC). The work addresses various aspects of the challenge for Unmanned Aerial Vehicles (UAVs) and an Unmanned Ground Vehicle (UGV). The challenge involves repeated pick-and-place operations using the UAVs and the UGV to build two structures of different shapes and sizes. The algorithms are implemented using the Robot Operating System (ROS) framework and visualised in Gazebo. The whole developed architecture can readily be implemented on suitable hardware. | computer science |
The paper addresses a structural controllability problem for continuum ensembles of linear time-invariant systems. All the individual linear systems of an ensemble are sparse and share the same sparsity pattern. Controllability of an ensemble system is, by convention, the capability of using a common control input to simultaneously steer every individual system in it. A sparsity pattern is structurally controllable if it admits a controllable linear ensemble system. A main contribution of the paper is to provide a graphical condition that is necessary and sufficient for a sparsity pattern to be structurally controllable. Like other structural problems, the property of being structurally controllable is monotone. We also provide a complete characterization of minimal sparsity patterns. | electrical engineering and systems science |
We consider Bayesian optimization of objective functions of the form $\rho[ F(x, W) ]$, where $F$ is a black-box expensive-to-evaluate function and $\rho$ denotes either the VaR or CVaR risk measure, computed with respect to the randomness induced by the environmental random variable $W$. Such problems arise in decision making under uncertainty, such as in portfolio optimization and robust systems design. We propose a family of novel Bayesian optimization algorithms that exploit the structure of the objective function to substantially improve sampling efficiency. Instead of modeling the objective function directly as is typical in Bayesian optimization, these algorithms model $F$ as a Gaussian process, and use the implied posterior on the objective function to decide which points to evaluate. We demonstrate the effectiveness of our approach in a variety of numerical experiments. | statistics |
Small changes in the masses of massive external scattering states should correspond to small changes in the non-perturbative parameterization of form factors in quantum field theory, as long as the relevant energy range is not near strong deformations. Here, the definition of ``small'' is investigated and applied to $SU(3)$ breaking in semileptonic $B_{(s)}\rightarrow D_{(s)}$ transitions. When unitarity and analyticity are imposed, the differences in the form factors for semileptonic $B\rightarrow D$ versus $B_s\rightarrow D_s$ decays are found to be within $\mathcal{O}(1\%)$ over the entire kinematic range, not just at zero recoil, which is consistent with results from lattice calculations and differs from the expectation using HQET alone. | high energy physics phenomenology |
Following recent work on heavy-light correlators in higher-dimensional conformal field theories (CFTs) with a large central charge $C_T$, we clarify the properties of stress tensor composite primary operators of minimal twist, $[T^m]$, using arguments in both CFT and gravity. We provide an efficient proof that the three-point coupling $\langle \mathcal{O}_L\mathcal{O}_L [T^m]\rangle$, where $\mathcal{O}_L$ is any light primary operator, is independent of the purely gravitational action. Next, we consider corrections to this coupling due to additional interactions in AdS effective field theory and the corresponding dual CFT. When the CFT contains a non-zero three-point coupling $\langle TT \mathcal{O}_L\rangle$, the three-point coupling $\langle \mathcal{O}_L\mathcal{O}_L [T^2]\rangle$ is modified at large $C_T$ if $\langle TT\mathcal{O}_L \rangle \sim \sqrt{C_T}$. This scaling is obeyed by the dilaton, by Kaluza-Klein modes of prototypical supergravity compactifications, and by scalars in stress tensor multiplets of supersymmetric CFTs. Quartic derivative interactions involving the graviton and the light probe field dual to $\mathcal{O}_L$ can also modify the minimal-twist couplings; these local interactions may be generated by integrating out a spin-$\ell \geq 2$ bulk field at tree level, or any spin $\ell$ at loop level. These results show how the minimal-twist OPE coefficients can depend on the higher-spin gap scale, even perturbatively. | high energy physics theory |
Upon starting a collective endeavour, it is important to understand your partners' preferences and how strongly they commit to a common goal. Establishing a prior commitment or agreement in terms of posterior benefits and consequences for those engaging in it provides an important mechanism for securing cooperation. Resorting to methods from Evolutionary Game Theory (EGT), here we analyse how prior commitments can also be adopted as a tool for enhancing coordination when its outcomes exhibit an asymmetric payoff structure, in both pairwise and multiparty interactions. Arguably, coordination is more complex to achieve than cooperation, since there might be several desirable collective outcomes in a coordination problem (compared to mutual cooperation, the only desirable collective outcome in cooperation dilemmas). Our analysis, carried out both analytically and via numerical simulations, shows that whether prior commitment is a viable evolutionary mechanism for enhancing coordination and the overall population social welfare strongly depends on the collective benefit and the severity of competition, and, more importantly, on how asymmetric benefits are resolved in a commitment deal. Moreover, in multiparty interactions, prior commitments prove to be crucial when a high level of group diversity is required for optimal coordination. The results are robust for different selection intensities. Overall, our analysis provides new insights into the complexity and beauty of behavioral evolution driven by humans' capacity for commitment, as well as for the design of self-organised and distributed multi-agent systems for ensuring coordination among autonomous agents. | computer science |
Live time-lapse microscopy is essential for a wide range of biological applications. Software-based automation is the gold standard for the operation of hardware accessories necessary for image acquisition. Given that current software packages are neither affordable nor open to complex structured programming, we have developed a Matlab-based graphical user interface (GUI) that is fundamentally accessible while providing limitless avenues for further customization. The GUI simultaneously communicates with the open-source, cross-functional platform micro-Manager for controlling hardware for time-lapse image acquisition and with other software for image processing. The use of the GUI is demonstrated through an 18-hour cell migration experiment. The results suggest that the GUI can generate high-quality, high-throughput time-lapse images. The core code behind the GUI is open-source so that it can be modified and upgraded. Therefore, it benefits researchers worldwide by providing them with a fundamental yet functional template for adding advanced features specific to their needs, such as additional non-microscopy hardware parts and software packages supported by Matlab. | electrical engineering and systems science |
In this paper, we study the fragmentation of a heavy quark into a jet near threshold, meaning that the final-state jet carries most of the energy of the fragmenting heavy quark. Using the heavy quark fragmentation function, we simultaneously resum large logarithms of the jet radius $R$ and $1-z$, where $z$ is the ratio of the jet energy to the initiating heavy quark energy. There are numerically significant corrections to the leading order rate due to this resummation. We also investigate heavy quark fragmentation to a groomed jet, using the soft drop grooming algorithm as an example. In order to do so, we introduce a collinear-ultrasoft mode sensitive to the grooming region determined by the algorithm's $z_{\mathrm{cut}}$ parameter. This allows us to resum large logarithms of $z_{\mathrm{cut}}/(1-z)$, again leading to large numerical corrections near the endpoint. A nice feature of the analysis of the heavy quark fragmenting to a groomed jet is that the heavy quark mass $m$ renders the algorithm infrared finite, allowing a perturbative calculation. We analyze this for $E_JR \sim m$ and $E_JR\gg m$, where $E_J$ is the jet energy. To treat the latter case, we introduce an ultracollinear-soft mode, allowing us to resum large logarithms of $E_JR/m$. Finally, as an application we calculate the rate for $e^+e^-$ collisions to produce a heavy quark jet in the endpoint region, where we show that grooming effects have a sizable contribution near the endpoint. | high energy physics phenomenology |
In contrast with traditional video, omnidirectional video enables spherical viewing direction with support for head-mounted displays, providing an interactive and immersive experience. Unfortunately, to the best of our knowledge, there are few visual quality assessment (VQA) methods, either subjective or objective, for omnidirectional video coding. This paper proposes both subjective and objective methods for assessing quality loss in encoding omnidirectional video. Specifically, we first present a new database, which includes the viewing direction data from several subjects watching omnidirectional video sequences. Then, from our database, we find a high consistency in viewing directions across different subjects. The viewing directions are normally distributed in the center of the front regions, but they sometimes fall into other regions, related to video content. Given this finding, we present a subjective VQA method for measuring difference mean opinion score (DMOS) of the whole and regional omnidirectional video, in terms of overall DMOS (O-DMOS) and vectorized DMOS (V-DMOS), respectively. Moreover, we propose two objective VQA methods for encoded omnidirectional video, in light of human perception characteristics of omnidirectional video. One method weighs the distortion of pixels with regard to their distances to the center of front regions, which considers human preference in a panorama. The other method predicts viewing directions according to video content, and then the predicted viewing directions are leveraged to allocate weights to the distortion of each pixel in our objective VQA method. Finally, our experimental results verify that both the subjective and objective methods proposed in this paper advance state-of-the-art VQA for omnidirectional video. | electrical engineering and systems science |
This paper introduces a modular processing chain to derive global high-resolution maps of leaf traits. In particular, we present global maps at 500 m resolution of specific leaf area, leaf dry matter content, leaf nitrogen and phosphorus content per dry mass, and leaf nitrogen/phosphorus ratio. The processing chain exploits machine learning techniques along with optical remote sensing data (MODIS/Landsat) and climate data for gap filling and up-scaling of in-situ measured leaf traits. The chain first uses random forest regression with surrogates to fill gaps in the database ($>45\%$ of missing entries) and maximize the global representativeness of the trait dataset. Along with the estimated global maps of leaf traits, we provide associated uncertainty estimates derived from the regression models. The processing chain is modular, and can easily accommodate new traits, data streams (trait databases and remote sensing data), and methods. The machine learning techniques applied allow attribution of information gain to data input and thus provide the opportunity to understand trait-environment relationships at the plant and ecosystem scales. | statistics |
Collective behaviour in suspensions of microswimmers is often dominated by the impact of long-ranged hydrodynamic interactions. These phenomena include active turbulence, where suspensions of pusher bacteria at sufficient densities exhibit large-scale, chaotic flows. To study this collective phenomenon, we use large-scale (up to $N=3\times 10^6$) particle-resolved lattice Boltzmann simulations of model microswimmers described by extended stresslets. Such system sizes enable us to obtain quantitative information about both the transition to active turbulence and characteristic features of the turbulent state itself. In the dilute limit, we test analytical predictions for a number of static and dynamic properties against our simulation results. For higher swimmer densities, where swimmer-swimmer interactions become significant, we numerically show that the length- and timescales of the turbulent flows increase steeply near the predicted finite-system transition density. | condensed matter |
We perform the first three-dimensional radiation hydrodynamical simulations that investigate the growth of intermediate-mass BHs (IMBHs) embedded in massive self-gravitating, dusty nuclear accretion disks. We explore the dependence of the mass accretion efficiency on the gas metallicity $Z$ and the mass injection rate from the outer galactic disk $\dot{M}_{\rm in}$ at super-Eddington accretion rates, and find that the central BH can be fed at rates exceeding the Eddington rate only when the dusty disk becomes sufficiently optically thick to ionizing radiation. In this case, mass outflows from the disk owing to photoevaporation are suppressed and thus a large fraction ($\gtrsim 40\%$) of the mass injection rate can feed the central BH. The conditions are expressed as $\dot{M}_{\rm in} > 2.2\times 10^{-1}~M_\odot ~{\rm yr}^{-1} (1+Z/10^{-2}~Z_\odot)^{-1}(c_{\rm s}/10~{\rm km~s}^{-1})$, where $c_{\rm s}$ is the sound speed in the gaseous disk. With increasing numerical resolution, vigorous disk fragmentation reduces the disk surface density, and dynamical heating by the formed clumps increases the disk thickness. As a result, the photoevaporative mass-loss rate rises and thus the critical injection rate increases for fixed metallicity. This process enables super-Eddington growth of BHs until the BH mass reaches $M_{\rm BH} \sim 10^{7-8}~M_\odot$, depending on the properties of the host dark-matter halo and the metal-enrichment history. In the assembly of protogalaxies, seed BHs that form in overdense regions with a mass variance of 3-4$\sigma$ at $z\sim 15-20$ are able to undergo short periods of rapid growth, transitioning afterwards into the Eddington-limited growth phase to become the supermassive BHs observed at $z>6-7$. | astrophysics |
The COVID-19 pandemic has brought into sharp focus the need to understand respiratory virus transmission mechanisms. In preparation for an anticipated influenza pandemic, a substantial body of literature has developed over the last few decades showing that the short-range aerosol route is an important, though often neglected transmission path. We develop a simple mathematical model for COVID-19 transmission via aerosols, apply it to known outbreaks, and present quantitative guidelines for ventilation and occupancy in the workplace. | physics |
An observational tension between estimates of the Hubble parameter, $H_0$, using early and late Universe information is under intense discussion in the literature. Additionally, it is of great importance to measure $H_0$ independently of CMB data and the local distance ladder method. In this sense, we analyze 15 measurements of the transversal BAO scale, $\theta_{\rm BAO}$, obtained in a weakly model-dependent approach, in combination with other data sets obtained in a model-independent way, namely, Big Bang Nucleosynthesis (BBN) information, 6 gravitationally lensed quasars with measured time delays by the H0LiCOW team, and measures of cosmic chronometers (CC). We find $H_0 = 74.88_{-2.1}^{+1.9}$ km s${}^{-1}$ Mpc${}^{-1}$ and $H_0 = 72.06_{-1.3}^{+1.2}$ km s${}^{-1}$ Mpc${}^{-1}$ from $\theta_{\rm BAO}$+BBN+H0LiCOW and $\theta_{\rm BAO}$+BBN+CC, respectively, in full accordance with local measurements. Moreover, we estimate the sound horizon at the drag epoch, $r_{\rm d}$, independently of CMB data, and find $r_{\rm d}=144.1_{-5.5}^{+5.3}$ Mpc (from $\theta_{\rm BAO}$+BBN+H0LiCOW) and $r_{\rm d} =150.4_{-3.3}^{+2.7}$ Mpc (from $\theta_{\rm BAO}$+BBN+CC). In a second round of analysis, we test how the presence of a possible spatial curvature, $\Omega_k$, can influence the main results. We compare our constraints on $H_0$ and $r_{\rm d}$ with other reported values. Our results show that it is possible to use a robust compilation of transversal BAO data, $\theta_{\rm BAO}$, jointly with other model-independent measurements, in such a way that the tension in the Hubble parameter can be alleviated. | astrophysics |
The asymptotic behavior of the scattering amplitude for two scalar particles at high energies with fixed momentum transfers is studied. The study is done within the effective theory of quantum gravity based on the quasi-potential equation. By using the modified perturbation theory, a systematic method is developed to find the leading eikonal scattering amplitudes together with corrections to them in the one-loop gravitational approximation. The relation is established and discussed between the solutions obtained by means of the operator and functional approaches applied to the quasi-potential equation. The first non-leading corrections to the leading eikonal amplitude are found. | high energy physics theory |
It has been recently demonstrated that the 125 GeV Higgs boson can mediate a long-range force between TeV-scale particles, which can considerably impact their annihilation due to the Sommerfeld effect, and hence the density of thermal relic dark matter. In the presence of long-range interactions, the formation and decay of particle-antiparticle bound states can also deplete dark matter significantly. We consider the Higgs boson as mediator in the formation of bound states, and compute the effect on the dark matter abundance. To this end, we consider a simplified model in which dark matter co-annihilates with coloured particles that have a sizeable coupling to the Higgs. The Higgs-mediated force affects the dark matter depletion via bound state formation in several ways. It enhances the capture cross-sections due to the attraction it mediates between the incoming particles, it increases the binding energy of the bound states, hence rendering their ionisation inefficient sooner in the early universe, and, for large enough couplings, it can overcome the gluon repulsion of certain colour representations and give rise to additional bound states. Because it alters the momentum exchange in the bound states, the Higgs-mediated force also affects the gluon-mediated potential via the running of the strong coupling. We comment on the experimental implications and conclude that the Higgs-mediated potential must be taken into account when circumscribing the viable parameter space of related models. | high energy physics phenomenology |
Line-Intensity Mapping is an emerging technique which promises new insights into the evolution of the Universe, from star formation at low redshifts to the epoch of reionization and cosmic dawn. It measures the integrated emission of atomic and molecular spectral lines from galaxies and the intergalactic medium over a broad range of frequencies, using instruments with aperture requirements that are greatly relaxed relative to surveys for single objects. A coordinated, comprehensive, multi-line intensity-mapping experimental effort can efficiently probe over 80% of the volume of the observable Universe - a feat beyond the reach of other methods. Line-intensity mapping will uniquely address a wide array of pressing mysteries in galaxy evolution, cosmology, and fundamental physics. Among them are the cosmic history of star formation and galaxy evolution, the compositions of the interstellar and intergalactic media, the physical processes that take place during the epoch of reionization, cosmological inflation, the validity of Einstein's gravity theory on the largest scales, the nature of dark energy and the origin of dark matter. | astrophysics |
We derive the consistent interactions in the generalized electrodynamics gauge theory with higher-derivative matter fields by means of the order reduction method. We deduce the BRST deformations of the reduced Lagrangian and, using the equations of motion of the auxiliary fields in the antighost-number-zero part of the resulting deformed action, we obtain the consistent coupling terms added to the original Lagrangian density, which are compatible with the deformation master equations. We emphasize that the deformation series truncates at fourth order, with all higher-order deformations vanishing identically. Moreover, the local Abelian gauge symmetry turns out to be non-Abelian after the deformation procedure. | high energy physics theory |
Information theory provides a mathematical foundation to measure uncertainty in belief. Belief is represented by a probability distribution that captures our understanding of an outcome's plausibility. Information measures based on Shannon's concept of entropy include realization information, Kullback-Leibler divergence, Lindley's information in experiment, cross entropy, and mutual information. We derive a general theory of information from first principles that accounts for evolving belief and recovers all of these measures. Rather than simply gauging uncertainty, information is understood in this theory to measure change in belief. We may then regard entropy as the information we expect to gain upon realization of a discrete latent random variable. This theory of information is compatible with the Bayesian paradigm in which rational belief is updated as evidence becomes available. Furthermore, this theory admits novel measures of information with well-defined properties, which we explore in both analysis and experiment. This view of information illuminates the study of machine learning by allowing us to quantify information captured by a predictive model and distinguish it from residual information contained in training data. We gain related insights regarding feature selection, anomaly detection, and novel Bayesian approaches. | computer science |
The existence of vector-like quarks (VLQs) is predicted in many new physics scenarios beyond the Standard Model (SM). We study the possibility of detecting the vector-like bottom quark (VLQ-$B$), an $SU(2)$ singlet with electric charge $-1/3$, at the Large Hadron Electron Collider (LHeC) in a model-independent framework. The decay properties and single production of VLQ-$B$ at the LHeC are explored. Three types of signatures are investigated. By carrying out a fast simulation of the signals and the corresponding backgrounds, the signal significances are obtained. Our numerical results show that detecting VLQ-$B$ via the semileptonic channel is better than via the fully hadronic or leptonic channel. | high energy physics phenomenology |
The motion of a dynamical system on an $n$-dimensional configuration space may be regarded as the lightlike shadow of null geodesics moving in an $(n+2)$-dimensional spacetime known as its Eisenhart-Duval lift. In this paper it is shown that if the configuration space is $n$-dimensional Euclidean space, and in the absence of magnetic-type forces, the Eisenhart-Duval lift may be regarded as an $(n+1)$-brane moving in a flat $(n+4)$-dimensional space with two times. If the Eisenhart-Duval lift is Ricci flat, then the $(n+1)$-brane moves in such a way as to extremise its spacetime volume. A striking example is provided by the motion of $N$ point particles moving in three-dimensional Euclidean space under the influence of their mutual gravitational attraction. Embeddings with curved configuration space metrics and velocity-dependent forces are also constructed. Some of the issues arising from the two times are addressed. | high energy physics theory |
The type IIB matrix model is a promising candidate for a nonperturbative formulation of superstring theory. As such, it is expected to explain the origin of space--time and matter at the same time. This has been partially demonstrated by the previous Monte Carlo studies on the Lorentzian version of the model, which suggested the emergence of (3+1)-dimensional expanding space--time. Here we investigate the same model by solving numerically the classical equation of motion, which is expected to be valid at late times since the action becomes large due to the expansion of space. Many solutions are obtained by the gradient descent method starting from random matrix configurations, assuming a quasi-direct-product structure for the (3+1)-dimensions and the extra 6 dimensions. We find that these solutions generally admit the emergence of expanding space--time and a block-diagonal structure in the extra dimensions, the latter being important for the emergence of intersecting D-branes. For solutions corresponding to D-branes with appropriate dimensionality, the Dirac operator is shown to acquire a zero mode in the limit of infinite matrix size. | high energy physics theory |
The Salerno model constitutes an intriguing interpolation between the integrable Ablowitz-Ladik (AL) model and the more standard (non-integrable) discrete nonlinear Schr{\"o}dinger (DNLS) one. The competition of local on-site nonlinearity and nonlinear dispersion governs the thermalization of this model. Here, we investigate the statistical mechanics of the one-dimensional Salerno lattice model in the non-integrable case and illustrate the thermalization in the Gibbs regime. As the parameter interpolating between the two limits (from DNLS towards AL) is varied, the region in the space of initial energy and norm densities leading to thermalization expands. The thermalization in the non-Gibbs regime heavily depends on the finite system size; we explore this feature via direct numerical computations for different parametric regimes. | physics |
With the rapid development of mobile-internet technologies, on-demand ride-sourcing services have become increasingly popular and have largely reshaped the way people travel. Demand prediction is one of the most fundamental components in supply-demand management systems of ride-sourcing platforms. With accurate short-term prediction for origin-destination (OD) demand, the platforms can make precise and timely decisions on real-time matching, idle vehicle reallocation and ride-sharing vehicle routing, etc. Compared to zone-based demand prediction, which has been examined by many previous studies, OD-based demand prediction is more challenging. This is mainly due to the complicated spatial and temporal dependencies among the demand of different OD pairs. To overcome this challenge, we propose the Spatio-Temporal Encoder-Decoder Residual Multi-Graph Convolutional network (ST-ED-RMGC), a novel deep learning model for predicting the ride-sourcing demand of various OD pairs. Firstly, the model constructs OD graphs, which utilize adjacency matrices to characterize the non-Euclidean pair-wise geographical and semantic correlations among different OD pairs. Secondly, based on the constructed graphs, a residual multi-graph convolutional (RMGC) network is designed to encode the contextual-aware spatial dependencies, and a long-short term memory (LSTM) network is used to encode the temporal dependencies, into a dense vector space. Finally, we reuse the RMGC networks to decode the compressed vector back to OD graphs and predict the future OD demand. Through extensive experiments on the for-hire-vehicle datasets in Manhattan, New York City, we show that our proposed deep learning framework outperforms the state of the art by a significant margin. | electrical engineering and systems science |
We investigate the geometry of a quantum universe with the topology of the four-torus. The study of non-contractible geodesic loops reveals that a typical quantum geometry consists of a small semi-classical toroidal bulk part, dressed with many outgrowths, which contain most of the four-volume and which have almost spherical topologies, but nevertheless are quite fractal. | high energy physics theory |
We study three-dimensional $\mathcal{N}=2$ supersymmetric Chern-Simons-matter theories on the direct product of a circle and a two-dimensional hemisphere ($S^1 \times D^2$) with specified boundary conditions by the method of localization. We construct boundary interactions to cancel the supersymmetric variation of the three-dimensional superpotential term and the Chern-Simons term, and show inflows of the bulk-boundary anomalies. We find that the boundary conditions induce two-dimensional $\mathcal{N}=(0,2)$ type supersymmetry on the boundary torus. We also study the relation between the 3d-2d coupled partition function of our model and three-dimensional holomorphic blocks. | high energy physics theory |
We calculate the mean-over-variance ratio of the net-kaon fluctuations in the Hadron Resonance Gas (HRG) Model for the five highest energies of the RHIC Beam Energy Scan (BES) for different particle data lists. We compare these results with the latest experimental data from the STAR collaboration in order to extract sets of chemical freeze-out parameters for each list. We focus on the PDG2012 and PDG2016+ particle lists, which differ largely in the number of resonant states. Our analysis determines the effect of the number of resonances included in the HRG on the freeze-out conditions. | high energy physics phenomenology |
In the framework of the scotogenic model, which features radiative generation of neutrino masses, we explore a light dark matter scenario. Throughout the paper we chiefly focus on keV-scale dark matter, which can be produced either via freeze-in through the decays of the new scalars, or from the decays of the next-to-lightest fermionic particle in the spectrum, which is produced through freeze-out. The latter mechanism is required to be suppressed, as it typically produces a hot dark matter component. Constraints from BBN are also considered, and in combination with the former production mechanism they require the dark matter to be light. For this scenario we consider signatures at the High Luminosity LHC and proposed future hadron and lepton colliders, namely FCC-hh and CLIC, focusing on searches with two leptons and missing energy as a final state. While a potential discovery at the High Luminosity LHC is in tension with limits from cosmology, the situation greatly improves for future colliders. | high energy physics phenomenology |
Grain boundaries are thermodynamically unstable. Hence, their properties should be path-dependent: grain boundaries in nanocrystals prepared using different methods might exhibit different properties. Using molecular dynamics simulations, we investigated grain boundaries of nanocrystals formed by quenching, solidification with pre-induced nucleation sites, and Voronoi tessellation. Some properties were found to be path-dependent: the quenched model has lower boundary energy per atom, smaller boundary excess free volume per atom, and slower grain growth than the Voronoi model. We surmise that these differences are attributed to the abundant annealing twins in the quenched model. On the other hand, other properties are path-independent, such as Young's modulus, Poisson's ratio, and the ratio between grain boundary energy and boundary excess free volume. The results of this study further the understanding of the structure-property relationship of nanocrystals and provide guidance to future simulation-based studies of nanocrystalline materials. | condensed matter |
We use geometric concepts originally proposed by Anandan and Aharonov to show that the Farhi-Gutmann time optimal analog quantum search evolution between two orthogonal quantum states is characterized by unit efficiency dynamical trajectories traced on a projective Hilbert space. In particular, we prove that these optimal dynamical trajectories are the shortest geodesic paths joining the initial and the final states of the quantum evolution. In addition, we verify they describe minimum uncertainty evolutions specified by an uncertainty inequality that is tighter than the ordinary time-energy uncertainty relation. We also study the effects of deviations from the time optimality condition from our proposed Riemannian geometric perspective. Furthermore, after pointing out some physically intuitive aspects offered by our geometric approach to quantum searching, we mention some practically relevant physical insights that could emerge from the application of our geometric analysis to more realistic time-dependent quantum search evolutions. Finally, we briefly discuss possible extensions of our work to the geometric analysis of the efficiency of thermal trajectories of relevance in quantum computing tasks. | quantum physics |
We use an open, hourly-resolved, networked model of the European energy system to investigate the storage requirements under decreasing CO$_2$ emissions targets and several sector-coupling scenarios. For the power system, significant storage capacities only emerge for CO$_2$ reductions higher than 80% of the 1990 level in that sector. For 95% CO$_2$ reductions, the optimal system includes electric battery and hydrogen storage energy capacities equivalent to 1.4 and 19.4 times the average hourly electricity demand, respectively. Coupling the heating and transport sectors enables deeper global CO$_2$ reductions before the required storage capacities become significant, which highlights the importance of sector-coupling strategies in the transition to low-carbon energy systems. A binary selection of storage technologies is consistently found, i.e., electric batteries act as short-term storage to counterbalance solar photovoltaic generation, while hydrogen storage smooths wind fluctuations. Flexibility from the electric vehicle batteries provided by coupling the transport sector avoids the need for additional stationary batteries and reduces the usage of pumped hydro storage. Coupling the heating sector brings large capacities of thermal energy storage into the system to compensate for the significant seasonal variation in heating demand. | physics |
Lead-free double perovskite halides are emerging optoelectronic materials that are alternatives to lead-based perovskite halides. Recently, single-crystalline double perovskite halides were synthesized, and their intriguing functional properties were demonstrated. Despite such pioneering works, lead-free double perovskite halides with better crystallinity are still in demand for applications to novel optoelectronic devices. Here, we realized highly crystalline Cs2AgBiBr6 single crystals with a well-defined atomic ordering on the microscopic scale. We avoided the formation of Ag vacancies and the subsequent secondary Cs3Bi2Br9 by manipulating the initial chemical environments in hydrothermal synthesis. The suppression of Ag vacancies allows us to reduce the trap density in the as-grown crystals and to enhance the carrier mobility further. Our design strategy is applicable for fabricating other lead-free halide materials with high crystallinity. | condensed matter |
As Deep Learning (DL) models have been increasingly used in latency-sensitive applications, there has been a growing interest in improving their response time. An important avenue for such improvement is to profile the execution of these models and characterize their performance to identify possible optimization opportunities. However, current profiling tools lack the highly desired abilities to characterize ideal performance, identify sources of inefficiency, and quantify the benefits of potential optimizations. Such deficiencies have led to slow characterization/optimization cycles that cannot keep up with the fast pace at which new DL models are introduced. We propose Benanza, a sustainable and extensible benchmarking and analysis design that speeds up the characterization/optimization cycle of DL models on GPUs. Benanza consists of four major components: a model processor that parses models into an internal representation, a configurable benchmark generator that automatically generates micro-benchmarks given a set of models, a database of benchmark results, and an analyzer that computes the "lower-bound" latency of DL models using the benchmark data and informs optimizations of model execution. The "lower-bound" latency metric estimates the ideal model execution on a GPU system and serves as the basis for identifying optimization opportunities in frameworks or system libraries. We used Benanza to evaluate 30 ONNX models in MXNet, ONNX Runtime, and PyTorch on 7 GPUs ranging from Kepler to the latest Turing, and identified optimizations in parallel layer execution, cuDNN convolution algorithm selection, framework inefficiency, layer fusion, and using Tensor Cores. | computer science |
This paper deals with stability of discrete-time switched linear systems all of whose subsystems are unstable. We present sufficient conditions on the subsystem matrices such that a switched system is globally exponentially stable under a set of purely time-dependent switching signals that are allowed to activate all subsystems. The main apparatuses for our analysis are (matrix) commutation relations between certain products of the subsystem matrices and graph-theoretic arguments. We present a numerical experiment to demonstrate our results. | electrical engineering and systems science |
Identification of the optimal quantum metrological protocols in realistic many-particle quantum models is in general a challenge that cannot be efficiently addressed by the state-of-the-art numerical and analytical methods. Here we provide a comprehensive framework exploiting matrix product operator (MPO) type tensor networks for quantum metrological problems. Thanks to the fact that the MPO formalism allows for an efficient description of short-range spatial and temporal noise correlations, the maximal achievable estimation precision in such models, as well as the optimal probe states in previously inaccessible regimes, can be identified. Moreover, the application of infinite MPO (iMPO) techniques allows for a direct and efficient determination of the asymptotic precision of optimal protocols in the limit of infinite particle numbers. We illustrate the potential of our framework in terms of an atomic clock stabilization (temporal noise correlation) example as well as for magnetic field sensing in the presence of locally correlated magnetic field fluctuations (spatial noise correlations). As a byproduct, the developed methods for calculating the quantum Fisher information via MPOs may be used to calculate the fidelity susceptibility - a parameter widely used in many-body physics to study phase transitions. | quantum physics |
We propose and study a general framework for regularized Markov decision processes (MDPs) where the goal is to find an optimal policy that maximizes the expected discounted total reward plus a policy regularization term. The extant entropy-regularized MDPs can be cast into our framework. Moreover, under our framework, many regularization terms can bring multi-modality and sparsity, which are potentially useful in reinforcement learning. In particular, we present sufficient and necessary conditions that induce a sparse optimal policy. We also conduct a full mathematical analysis of the proposed regularized MDPs, including the optimality condition, performance error, and sparseness control. We provide a generic method to devise regularization forms and propose off-policy actor critic algorithms in complex environment settings. We empirically analyze the numerical properties of optimal policies and compare the performance of different sparse regularization forms in discrete and continuous environments. | statistics |
Lumen formation plays an essential role in the morphogenesis of tissues during development. Here we review the physical principles that play a role in the growth and coarsening of lumens. Solute pumping by the cell, hydraulic flows driven by differences of osmotic and hydrostatic pressures, balance of forces between extracellular fluids and cell-generated cytoskeletal forces, and electro-osmotic effects have been implicated in determining the dynamics and steady-state of lumens. We use the framework of linear irreversible thermodynamics to discuss the relevant force, time and length scales involved in these processes. We focus on order of magnitude estimates of physical parameters controlling lumen formation and coarsening. | condensed matter |
The Navier-Stokes equations generate an infinite set of generalized Lyapunov exponents defined by different ways of measuring the distance between exponentially diverging perturbed and unperturbed solutions. This set is demonstrated to be similar to, yet different from, the generalized Lyapunov exponent that provides moments of the distance between two fluid particles below the Kolmogorov scale. We derive rigorous upper bounds on the dimensionless Lyapunov exponent of the fluid particles that demonstrate the exponent's decay with Reynolds number $Re$, in accord with previous studies. In contrast, the terms of the cumulant series for the exponents of the moments grow as a power law with $Re$. As an application, we demonstrate that the growth of small fluctuations of the magnetic field in ideally conducting turbulence is hyper-intermittent, being exponential in both time and Reynolds number. We resolve the existing contradiction between the theory, which predicts a slow decrease of the dimensionless Lyapunov exponent of turbulence with $Re$, and observations exhibiting quite fast growth. We demonstrate that it is highly plausible that a pointwise limit for the growth of small perturbations of the Navier-Stokes equations exists. | physics |
An extremely fast exponential expansion of the Universe is typical for the stable version of the inflationary model based on the anomaly-induced action of gravity. The total number of exponential $e$-folds can be very large before the transition to the unstable version and the beginning of the Starobinsky inflation. Thus, the stable exponential expansion can be seen as a pre-inflationary semiclassical cosmological solution. We explore whether this stable phase could follow after the bounce, subsequent to the contraction of the Universe. Extending the previous consideration of the bounce, we explore both stable expansion and bounce solutions in models with a non-zero cosmological constant and the presence of background radiation. The critical part of the analysis concerns stability with respect to small perturbations of the Hubble parameter. It is shown that stability is possible for variations in the bounce region, but not in the sufficiently distant past in the contraction phase. | high energy physics theory |
Although academic research on the 'hot hand' effect (in particular, in sports, especially in basketball) has been going on for more than 30 years, it still remains a central question in different areas of research whether such an effect exists. In this contribution, we investigate the potential occurrence of a 'hot shoe' effect for the performance of penalty takers in football, based on data from the German Bundesliga. For this purpose, we consider hidden Markov models (HMMs) to model the (latent) forms of players. To further account for the individual heterogeneity of the penalty taker as well as the opponent's goalkeeper, player-specific abilities are incorporated in the model formulation together with a LASSO penalty. Our results suggest states which can be tied to different forms of players, thus providing evidence for the hot shoe effect, and shed some light on exceptionally well-performing goalkeepers, who are of potential interest to managers and sports fans. | statistics |
Let $R$ be a right noetherian ring. We introduce the concept of the relative singularity category $\Delta_{\mathcal{X}}(R)$ of $R$ with respect to a contravariantly finite subcategory $\mathcal{X}$ of $\rm{mod}\mbox{-}R$. Along with some finiteness conditions on $\mathcal{X}$, we prove that $\Delta_{\mathcal{X}}(R)$ is triangle equivalent to a subcategory of the homotopy category $\mathbb{K}_{\rm{ac}}(\mathcal{X})$ of exact complexes over $\mathcal{X}$. As an application, a new description of the classical singularity category $\mathbb{D}_{\rm{sg}}(R)$ is given. The relative singularity categories are applied to lift a stable equivalence between two suitable subcategories of the module categories of two given right noetherian rings to a singular equivalence between the rings. For different types of rings, including path rings, triangular matrix rings, trivial extension rings and tensor rings, we provide some consequences for their singularity categories. | mathematics |
We describe a new library named picasso, which implements a unified framework of pathwise coordinate optimization for a variety of sparse learning problems (e.g., sparse linear regression, sparse logistic regression, sparse Poisson regression and scaled sparse linear regression) combined with efficient active set selection strategies. Besides, the library allows users to choose different sparsity-inducing regularizers, including the convex $\ell_1$, nonconvex MCP and SCAD regularizers. The library is coded in C++ and has user-friendly R and Python wrappers. Numerical experiments demonstrate that picasso can scale up to large problems efficiently. | statistics |
The increasing importance of well-controlled ordered nanostructures on surfaces represents a challenge for existing metrology techniques. To develop such nanostructures and to monitor the complex processes involved in their fabrication, both a dimensional reconstruction of the nanostructures and a characterization (ideally a quantitative characterization) of their composition are required. In this work, we present a soft X-ray fluorescence-based methodology that allows both of these requirements to be addressed at the same time. By applying the grazing-incidence X-ray fluorescence technique and thus utilizing the X-ray standing wave field effect, nanostructures can be investigated with a high sensitivity with respect to their dimensional and compositional characteristics. By varying the incident angles of the exciting radiation, element-sensitive fluorescence radiation is emitted from different regions inside the nanoobjects. By applying an adequate modeling scheme, these datasets can be used to determine the nanostructure characteristics. We demonstrate these capabilities by performing an element-sensitive reconstruction of a lamellar grating made of Si$_3$N$_4$, where GIXRF data for the O-K$\alpha$ and N-K$\alpha$ fluorescence emission allows a thin oxide layer on the surface of the grating structure to be reconstructed. In addition, we apply the technique to three-dimensional nanostructures and derive both dimensional and compositional parameters in a quantitative manner. | physics |
In this paper we discuss an infinite class of AdS$_6$ backgrounds in Type IIB supergravity dual to five dimensional SCFTs whose low energy description is in terms of linear quiver theories. The quantisation of the Page charges imposes that each solution is determined once a convex, piece-wise linear function is specified. In the dual field theory, we interpret this function as encoding the ranks of colour and flavour groups in the associated quiver. We check our proposal with several examples and provide general expressions for the holographic central charge and the Wilson loop VEV. Some solutions outside this general class, with less clear quiver interpretation, are also discussed. | high energy physics theory |
Many complex systems can spontaneously oscillate under non-periodic forcing. Such self-oscillators are commonplace in biological and technological assemblies where temporal periodicity is needed, such as the beating of a human heart or the vibration of a cello string. While self-oscillation is well understood in classical non-linear systems and their quantized counterparts, the spontaneous emergence of periodicity in quantum systems without a semi-classical limit is more elusive. Here, we show that this behavior can emerge within the repeated-interaction description of open quantum systems. Specifically, we consider a many-body quantum system that undergoes dissipation due to sequential coupling with auxiliary systems at random times. We develop dynamical symmetry conditions that guarantee an oscillatory long-time state in this setting. Our rigorous results are illustrated with specific spin models, which could be implemented in trapped-ion quantum simulators. | quantum physics |
The structure of 1,3,5-triphenylbenzene C6H3(C6H5)3 at 473 K was investigated using the X-ray diffraction method. The measurements of scattered radiation intensity were performed in a wide range of wave vectors. For the first time, the theoretically predicted model of the structure of 1,3,5-triphenylbenzene was experimentally confirmed. A model of the short-range arrangement of the molecules was proposed. The determined mean smallest mutual distances between molecules of the liquid studied are: r1 = 4.30 Å, r2 = 5.30 Å, r3 = 5.40 Å. The most probable value of the packing coefficient of the molecules was found to be k = 0.53. This value falls in the range of k values permissible for the liquid phase. The liquid studied is a major aromatic compound. The study of the structure of 1,3,5-triphenylbenzene may be helpful in explaining the mechanism of intermolecular interactions in synthetic polymers. | physics |
We propose a method for nonstationary covariance function modeling, based on the spatial deformation method of Sampson and Guttorp [1992], but using a low-rank, scalable deformation function written as a linear combination of tensor products of B-spline basis functions. This approach addresses two important computational weaknesses of current methods. First, it allows one to constrain estimated 2D deformations to be non-folding (bijective) in 2D; this requirement of the model has, up to now, been addressed only by arbitrary levels of spatial smoothing. Second, basis functions with compact support enable the application to large datasets of spatial monitoring sites of environmental data. An application to rainfall data in southeastern Brazil illustrates the method. | statistics |
High-performance but unverified controllers, e.g., artificial intelligence-based (a.k.a. AI-based) controllers, are widely employed in cyber-physical systems (CPSs) to accomplish complex control missions. However, guaranteeing the safety and reliability of CPSs with such controllers is currently very challenging, which is of vital importance in many real-life safety-critical applications. To cope with this difficulty, we propose in this work a Safe-visor architecture for sandboxing unverified controllers in CPSs operating in noisy environments (a.k.a. stochastic CPSs). The proposed architecture contains a history-based supervisor, which checks inputs from the unverified controller and makes a compromise between the functionality and safety of the system, and a safety advisor that provides a fallback when the unverified controller endangers the safety of the system. By employing this architecture, we provide formal probabilistic guarantees on preserving the safety specifications expressed by accepting languages of deterministic finite automata (DFA). Meanwhile, the unverified controllers can still be employed in the control loop even though they are not reliable. We demonstrate the effectiveness of our proposed results by applying them to two (physical) case studies. | electrical engineering and systems science |
We use an efficient method that eases the daunting task of simulating dynamics in spin systems with long-range interactions. Our Monte Carlo simulations of the long-range Ising model for the nonequilibrium phase ordering dynamics in two spatial dimensions perform significantly faster than the standard Metropolis approach and considerably more efficiently than the kinetic Monte Carlo method. Importantly, this enables us to establish agreement with the theoretical prediction for the time dependence of domain growth, in contrast to previous numerical studies. This method can easily be generalized to applications in other systems. | condensed matter |
The radiative decays of $b$-baryons facilitate the direct measurement of photon helicity in $b\to s\gamma$ transitions thus serving as an important test of physics beyond the Standard Model. In this paper we analyze the complete angular distribution of ground state $b$-baryon ($\Lambda_{b}^{0}$ and $\Xi_{b}^{-}$) radiative decays to multibody final states assuming an initially polarized $b$-baryon sample. Our sensitivity study suggests that the photon polarization asymmetry can be extracted to a good accuracy along with a simultaneous measurement of the initial $b$-baryon polarization. With higher yields of $b$-baryons, achievable in subsequent runs of the Large Hadron Collider (LHC), we find that the photon polarization measurement can play a pivotal role in constraining different new physics scenarios. | high energy physics phenomenology |
We demonstrate cost-effective QKD integration for GPON and NG-PON2. Operation at 5.1e-7 secure bits/pulse and a QBER of 3.28% is accomplished for a 13.5-km reach, 2:16-split PON, with 0.52% co-existence penalty for 19 classical channels. | quantum physics |
Model-based reinforcement learning (MBRL) has been proposed as a promising alternative to tackle the high sampling cost challenge in canonical reinforcement learning (RL), by leveraging a learned model to generate synthesized data for policy training purposes. The MBRL framework, nevertheless, is inherently limited by the convoluted process of jointly learning the control policy and configuring hyper-parameters (e.g., global/local models, real and synthesized data, etc.). The training process could be tedious and prohibitively costly. In this research, we propose a "reinforcement on reinforcement" (RoR) architecture to decompose the convoluted tasks into two layers of reinforcement learning. The inner layer is the canonical model-based RL training process environment (TPE), which learns the control policy for the underlying system and exposes interfaces to access states, actions and rewards. The outer layer presents an RL agent, called the AI trainer, which learns an optimal hyper-parameter configuration for the inner TPE. This decomposition approach provides a desirable flexibility to implement different trainer designs, referred to as "train the trainer". In our research, we propose and optimize two alternative trainer designs: 1) a uni-head trainer and 2) a multi-head trainer. Our proposed RoR framework is evaluated on five tasks in the OpenAI gym (i.e., Pendulum, Mountain Car, Reacher, Half Cheetah and Swimmer). Compared to three other baseline algorithms, our proposed Train-the-Trainer algorithm has a competitive performance in auto-tuning capability, with up to 56% expected sampling cost saving without knowing the best parameter setting in advance. The proposed trainer framework can be easily extended to other cases in which hyper-parameter tuning is costly. | computer science |
Spin-orbit torque (SOT) driven deterministic control of the magnetization state of a magnet with perpendicular magnetic anisotropy (PMA) is key to next-generation spintronic applications, including non-volatile, ultrafast, and energy-efficient data storage devices. However, field-free deterministic switching of perpendicular magnetization remains a challenge because it requires an out-of-plane anti-damping torque, which is not allowed in conventional spin source materials such as heavy metals (HM) and topological insulators due to the system's symmetry. The exploitation of low crystal symmetries in emergent quantum materials offers a unique approach to achieve SOTs with unconventional forms. Here, we report the first experimental realization of field-free deterministic magnetic switching of a perpendicularly polarized van der Waals (vdW) magnet, employing an out-of-plane anti-damping SOT generated in layered WTe2, a quantum material with low crystal symmetry. Numerical simulations confirm that the out-of-plane anti-damping torque in WTe2 is responsible for the observed magnetization switching in the perpendicular direction. | condensed matter
We present small angle neutron scattering (SANS) data collected on polycrystalline Ni$_{1-x}$V$_x$ samples with $x\geq0.10$ with confirmed random atomic distribution. We aim to determine the relevant length scales of magnetic correlations in ferromagnetic samples with low critical temperatures $T_c$ that show signs of magnetic inhomogeneities in magnetization and $\mu$SR data. The SANS study reveals signatures of long-range order coexisting with short-range magnetic correlations in this randomly disordered ferromagnetic alloy. We show the advantages of a polarization analysis in separating the main magnetic contributions from the dominant nuclear scattering. | condensed matter
We review the problem of BPS state counting described by the generalized quiver matrix model of ADHM type. In four dimensions the generating function of this counting gives the Nekrasov partition function, and we obtain its generalization in higher dimensions. By the localization theorem, the partition function is given by the sum of contributions from the fixed points of the torus action, which are labeled by partitions, plane partitions, and solid partitions. The measure, or the Boltzmann weight, of the path integral can take the form of the plethystic exponential. Remarkably, after integration the partition function or the vacuum expectation value is again expressed in plethystic form. We regard this as a characteristic property of the BPS state counting problem, which is closely related to integrability. | high energy physics theory
Computing the Sparse Fast Fourier Transform (sFFT) of a K-sparse signal of size N has long been a critical topic. sFFT algorithms decrease the runtime and sampling complexity by taking advantage of the inherent characteristic that a large number of signals are sparse in the frequency domain. More than ten sFFT algorithms have been proposed, which can be classified into many types according to the filter, framework, method of location, and method of estimation. In this paper, the technology behind these algorithms is completely analyzed in theory, and their performance is thoroughly tested and verified in practice. The theoretical analysis covers the following: five operations on signals, three methods of frequency bucketization, five methods of location, four methods of estimation, two problems caused by bucketization, three methods to solve these two problems, and four algorithmic frameworks. All of these technologies and methods are introduced in detail, with examples illustrating the research. Following the theoretical study, we run experiments computing signals of different SNR, N, and K on a standard testing platform and record the runtime, the percentage of the signal sampled, and the L0, L1, and L2 errors for eight different sFFT algorithms. The experimental results match the inferences obtained in theory. | electrical engineering and systems science
Mars' polar layered deposits (PLD) are composed of layers of varying dust-to-water ice volume mixing ratios (VMR) that may record astronomically-forced climatic variation over Mars' recent orbital history. Retracing the formation of these layers by quantifying the sensitivity of deposition rates of polar material to astronomical forcing is critical for the interpretation of this record. Using a Mars global climate model (GCM), we investigate the sensitivity of annual polar water ice and dust surface deposition to various obliquities and surface water ice distributions at zero eccentricity, providing a reasonable characterization of the evolution of the PLD during recent low-eccentricity epochs. For obliquities between 15{\deg} - 35{\deg}, predicted net annual accumulation rates range from -1 to +14 mm/yr for water ice and from +0.003 to +0.3 mm/yr for dust. GCM-derived rates are ingested into an integration model that simulates polar accumulation of water ice and dust over 5 consecutive obliquity cycles (~700 kyrs) during a low-eccentricity epoch. A subset of integration simulations predicts combined accumulation of water ice and dust in the north at time-averaged rates that are near the observationally-inferred value of 0.5 mm/yr. Three types of layers are produced per obliquity cycle: a ~30 m-thick dust-rich (~25% dust VMR) layer forms at high obliquity, a ~0.5 m-thick dust lag forms at low obliquity, and two ~10 m-thick dust-poor (~3%) layers form when obliquity is increasing/decreasing. The ~30 m-thick dust-rich layer is reminiscent of a ~30 m feature derived from visible imagery analysis of the north PLD, while the ~0.5 m-thick dust lag is a factor of ~2 smaller than observed "thin layers". Overall, this investigation provides further evidence for obliquity forcing in the PLD climate record, and demonstrates the importance of ice-on-dust nucleation in polar depositional processes. | astrophysics
One of the most important tasks in modern quantum science is to coherently control and entangle many-body systems, and to subsequently use these systems to realize powerful quantum technologies such as quantum-enhanced sensors. However, many-body entangled states are difficult to prepare and preserve since internal dynamics and external noise rapidly degrade any useful entanglement. Here, we introduce a protocol that counterintuitively exploits inhomogeneities, a typical source of dephasing in a many-body system, in combination with interactions to generate metrologically useful and robust many-body entangled states. Motivated by current limitations in state-of-the-art three-dimensional (3D) optical lattice clocks (OLCs) operating at quantum degeneracy, we use local interactions in a Hubbard model with spin-orbit coupling to achieve a spin-locking effect. In addition to prolonging inter-particle spin coherence, spin-locking transforms the dephasing effect of spin-orbit coupling into a collective spin-squeezing process that can be further enhanced by applying a modulated drive. Our protocol is fully compatible with state-of-the-art 3D OLC interrogation schemes and may be used to improve their sensitivity, which is currently limited by the intrinsic quantum noise of independent atoms. We demonstrate that even with realistic experimental imperfections, our protocol may generate $\sim10$--$14$ dB of spin squeezing in $\sim1$ second with $\sim10^2$--$10^4$ atoms. This capability allows OLCs to enter a new era of quantum enhanced sensing using correlated quantum states of driven non-equilibrium systems. | quantum physics |
Depression detection using vocal biomarkers is a highly researched area. Articulatory coordination features (ACFs) are developed based on the changes in neuromotor coordination due to psychomotor slowing, a key feature of Major Depressive Disorder. However, the findings of existing studies are mostly validated on a single database, which limits the generalizability of results. Variability across different depression databases adversely affects the results in cross-corpus evaluations (CCEs). We propose to develop a generalized classifier for depression detection using a dilated Convolutional Neural Network trained on ACFs extracted from two depression databases. We show that ACFs derived from Vocal Tract Variables (TVs) show promise as a robust set of features for depression detection. Our model achieves relative accuracy improvements of ~10% compared to CCEs performed on models trained on a single database. We extend the study to show that fusing TVs and Mel-Frequency Cepstral Coefficients can further improve the performance of this classifier. | electrical engineering and systems science
The Euclidean scattering transform was introduced nearly a decade ago to improve the mathematical understanding of convolutional neural networks. Inspired by recent interest in geometric deep learning, which aims to generalize convolutional neural networks to manifold and graph-structured domains, we define a geometric scattering transform on manifolds. Similar to the Euclidean scattering transform, the geometric scattering transform is based on a cascade of wavelet filters and pointwise nonlinearities. It is invariant to local isometries and stable to certain types of diffeomorphisms. Empirical results demonstrate its utility on several geometric learning tasks. Our results generalize the deformation stability and local translation invariance of Euclidean scattering, and demonstrate the importance of linking the filter structures used to the underlying geometry of the data. | statistics
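As a concrete illustration of the cascade structure that the geometric transform generalizes, the toy sketch below computes first- and second-order scattering coefficients of a 1D Euclidean signal, using crude dyadic band-pass masks in the Fourier domain in place of proper wavelets; the filter construction and averaging are illustrative assumptions, not the paper's manifold construction.

```python
import numpy as np

def dyadic_filters(n, num_scales):
    """Toy dyadic band-pass masks in the Fourier domain (not true wavelets)."""
    freqs = np.abs(np.fft.fftfreq(n))
    masks = []
    for j in range(num_scales):
        lo, hi = 2.0 ** (-(j + 1)) * 0.5, 2.0 ** (-j) * 0.5
        masks.append(((freqs >= lo) & (freqs < hi)).astype(float))
    return masks

def scattering(x, num_scales=4):
    """First- and second-order scattering coefficients of a 1D signal."""
    masks = dyadic_filters(len(x), num_scales)
    band = lambda s, m: np.abs(np.fft.ifft(np.fft.fft(s) * m))
    S1, S2 = [], []
    for j, mj in enumerate(masks):
        u = band(x, mj)                    # |x * psi_j|
        S1.append(u.mean())                # averaging -> local invariance
        for mk in masks[j + 1:]:           # coarser second filter
            S2.append(band(u, mk).mean())  # ||x * psi_j| * psi_k|
    return np.array(S1), np.array(S2)

# Example: scattering of a chirp-like signal (illustrative only).
x = np.sin(2 * np.pi * 0.2 * np.arange(256) ** 1.1)
S1, S2 = scattering(x)
print(S1, S2)
```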
Understanding particle drifts in a non-symmetric magnetic field is of primary interest in designing optimized stellarators to minimize the neoclassical radial loss of particles. Quasisymmetry and omnigeneity, two distinct properties proposed to ensure radial localization of collisionless trapped particles in stellarators, have been explored almost exclusively for magnetic fields that generate nested flux surfaces. In this work, we extend these concepts to the case where all the field lines are closed. We then study charged particle dynamics in the exact non-symmetric vacuum magnetic field with closed field lines, obtained recently by Weitzner and Sengupta (arXiv:1909.01890), which possesses X-points. The magnetic field can be used to construct magnetohydrodynamic equilibrium in the limit of vanishing plasma pressure. Expanding in the amplitude of the non-symmetric fields, we explicitly evaluate the omnigeneity and quasisymmetry constraints. We show that the magnetic field is omnigeneous in the sense that the drift surfaces coincide with the pressure surfaces. However, it is not quasisymmetric according to the standard definitions. | physics |
A fractional matching of a graph $G$ is a function $f$ giving each edge a number in $[0,1]$ such that $\sum_{e\in\Gamma(v)}f(e)\leq1$ for each vertex $v\in V(G)$, where $\Gamma(v)$ is the set of edges incident to $v$. The fractional matching number of $G$, written $\alpha^{\prime}_*(G)$, is the maximum value of $\sum_{e\in E(G)}f(e)$ over all fractional matchings. In this paper, we investigate the relations between the fractional matching number and the signless Laplacian spectral radius of a graph. Moreover, we give some sufficient spectral conditions for the existence of a fractional perfect matching. | mathematics |
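Since the fractional matching number defined above is the optimum of a small linear program, it can be computed directly. A minimal sketch (the graph encoding and function name are ours, not the paper's):

```python
import numpy as np
from scipy.optimize import linprog

def fractional_matching_number(n, edges):
    """Fractional matching number: maximize sum_e f(e) subject to
    sum over edges incident to v of f(e) <= 1, and 0 <= f(e) <= 1."""
    m = len(edges)
    A = np.zeros((n, m))  # vertex-edge incidence matrix
    for j, (u, v) in enumerate(edges):
        A[u, j] = 1.0
        A[v, j] = 1.0
    # linprog minimizes, so negate the objective to maximize sum f(e).
    res = linprog(c=-np.ones(m), A_ub=A, b_ub=np.ones(n), bounds=(0, 1))
    return -res.fun

# Example: the triangle has fractional matching number 3/2 (f = 1/2 on each edge).
print(fractional_matching_number(3, [(0, 1), (1, 2), (0, 2)]))  # 1.5
```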
Rare diseases, which affect 350 million individuals, are commonly associated with delayed diagnosis or misdiagnosis. To improve outcomes for these patients, rare disease detection is an important task: identifying patients with rare conditions from longitudinal medical claims. In this paper, we present a deep learning method for detecting patients with exocrine pancreatic insufficiency (EPI), a rare disease. The contributions include 1) a large longitudinal study using 7 years of medical claims from 1.8 million patients, including 29,149 EPI patients; 2) a new deep learning model that uses generative adversarial networks (GANs) to boost the rare-disease class and leverages recurrent neural networks to model patient sequence data; and 3) accurate prediction with 0.56 PR-AUC, which outperforms benchmark models in terms of precision and recall. | computer science
The interplay between strong light-matter interactions and charge doping represents an important frontier in the pursuit of exotic many-body physics and optoelectronics. Here, we consider a simplified model of a two-dimensional semiconductor embedded in a microcavity, where the interactions between electrons and holes are strongly screened, allowing us to develop a diagrammatic formalism for this system with an analytic expression for the exciton-polariton propagator. We apply this to the scattering of spin-polarized polaritons and electrons, and show that this is strongly enhanced compared with exciton-electron interactions. As we argue, this counter-intuitive result is a consequence of the shift of the collision energy due to the strong light-matter coupling, and hence this is a generic feature that applies also for more realistic electron-hole and electron-electron interactions. We furthermore demonstrate that the lack of Galilean invariance inherent in the light-matter coupled system can lead to a narrow resonance-like feature for polariton-electron interactions close to the polariton inflection point. Our results are potentially important for realizing tunable light-mediated interactions between charged particles. | condensed matter |
Radio Resource Management (RRM) in 5G mobile communication is a challenging problem for which Recurrent Neural Networks (RNN) have shown promising results. Accelerating the compute-intensive RNN inference is therefore of utmost importance. Programmable solutions are desirable for effective 5G-RRM to cope with the rapidly evolving landscape of RNN variations. In this paper, we investigate RNN inference acceleration by tuning both the instruction set and micro-architecture of a micro-controller-class open-source RISC-V core. We couple HW extensions with software optimizations to achieve an overall improvement in throughput and energy efficiency of 15$\times$ and 10$\times$ w.r.t. the baseline core on a wide range of RNNs used in various RRM tasks. | electrical engineering and systems science
We propose the multi-layered cepstrum (MLC) method to estimate multiple fundamental frequencies (MF0) of a signal under challenging contamination such as high-pass filter noise. By applying the cepstrum operation (i.e., Fourier transform, filtering, and nonlinear activation) recursively, the MLC is shown to be an efficient method for enhancing MF0 saliency in a step-by-step manner. Evaluation on a real-world polyphonic music dataset under both normal and low-fidelity conditions demonstrates the potential of the MLC. | electrical engineering and systems science
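A rough reading of the recursion described above, with the filtering and activation steps replaced by simple stand-ins (a high-pass lifter and a mean-rectified ReLU), might look like the sketch below; the actual MLC design choices are not specified in this summary.

```python
import numpy as np

def mlc_layer(x, cutoff=20):
    """One hypothetical MLC layer: Fourier transform, high-pass liftering
    to suppress the slowly varying envelope, and a rectifying activation."""
    spec = np.abs(np.fft.rfft(x))
    spec[:cutoff] = 0.0                        # crude high-pass "filtering"
    return np.maximum(spec - spec.mean(), 0.0)  # rectify around the mean

def multi_layered_cepstrum(x, layers=3, cutoff=20):
    """Apply the cepstrum-like operation recursively; peaks in the final
    layer serve as candidate fundamental-frequency saliences."""
    for _ in range(layers):
        x = mlc_layer(x, cutoff)
    return x

# Example on a synthetic two-pitch mixture (illustrative only).
t = np.arange(8192) / 8000.0
mix = np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 330 * t)
print(np.argmax(multi_layered_cepstrum(mix)))
```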
This paper introduces equivariant Hamiltonian flows, a method for learning expressive densities that are invariant with respect to a known Lie algebra of local symmetry transformations while providing an equivariant representation of the data. We provide proof-of-principle demonstrations of how such flows can be learnt, as well as how the addition of symmetry invariance constraints can improve data efficiency and generalisation. Finally, we make connections to disentangled representation learning and show how this work relates to a recently proposed definition. | statistics
We study black-box adversarial attacks for image classifiers in a constrained threat model, where adversaries can only modify a small fraction of pixels in the form of scratches on an image. We show that it is possible for adversaries to generate localized \textit{adversarial scratches} that cover less than $5\%$ of the pixels in an image and achieve targeted success rates of $98.77\%$ and $97.20\%$ on ImageNet and CIFAR-10 trained ResNet-50 models, respectively. We demonstrate that our scratches are effective under diverse shapes, such as straight lines or parabolic B\'ezier curves, with single or multiple colors. In an extreme case, in which our scratches are a single color, we obtain a targeted attack success rate of $66\%$ on CIFAR-10 with an order of magnitude fewer queries than comparable attacks. We successfully launch our attack against Microsoft's Cognitive Services Image Captioning API and propose various mitigation strategies. | computer science
The problem of private information retrieval with graph-based replicated storage was recently introduced by Raviv, Tamo and Yaakobi. Its capacity remains open in almost all cases. In this work the asymptotic (large number of messages) capacity of this problem is studied along with its generalizations to include arbitrary $T$-privacy and $X$-security constraints, where the privacy of the user must be protected against any set of up to $T$ colluding servers and the security of the stored data must be protected against any set of up to $X$ colluding servers. A general achievable scheme for arbitrary storage patterns is presented that achieves the rate $(\rho_{\min}-X-T)/N$, where $N$ is the total number of servers, and each message is replicated at least $\rho_{\min}$ times. Notably, the scheme makes use of a special structure inspired by dual Generalized Reed Solomon (GRS) codes. A general converse is also presented. The two bounds are shown to match for many settings, including symmetric storage patterns. Finally, the asymptotic capacity is fully characterized for the case without security constraints $(X=0)$ for arbitrary storage patterns provided that each message is replicated no more than $T+2$ times. As an example of this result, consider PIR with arbitrary graph based storage ($T=1, X=0$) where every message is replicated at exactly $3$ servers. For this $3$-replicated storage setting, the asymptotic capacity is equal to $2/\nu_2(G)$ where $\nu_2(G)$ is the maximum size of a $2$-matching in a storage graph $G[V,E]$. In this undirected graph, the vertices $V$ correspond to the set of servers, and there is an edge $uv\in E$ between vertices $u,v$ only if a subset of messages is replicated at both servers $u$ and $v$. | computer science |
We present stellar evolution calculations from the Asymptotic Giant Branch (AGB) to the Planetary Nebula (PN) phase for models of initial mass $1.2\,M_\odot$ and $2.0\,M_\odot$ that experience a Late Thermal Pulse (LTP), a helium shell flash that occurs following the AGB and causes a rapid looping evolution between the AGB and PN phase. We use these models to make comparisons to the central star of the Stingray Nebula, V839 Ara (SAO 244567). The central star has been observed to be rapidly evolving (heating) over the last 50 to 60 years and rapidly dimming over the past 20 - 30 years. It has been reported to belong to the youngest known planetary nebula, now rapidly fading in brightness. In this paper we show that the observed timescales, sudden dimming, and increasing Log(g) can all be explained by LTP models of a specific variety. We provide a possible explanation for the nebular ionization, the 1980s sudden mass loss episode, the sudden decline in mass loss, and the nebular recombination and fading. | astrophysics
New sets of young M dwarfs with complex, sharp-peaked, and strictly periodic photometric modulations have recently been discovered with Kepler/K2 and TESS data. All of these targets are part of young star-forming associations. Suggested explanations range from accretion of dust disks to co-rotating clouds of material to stellar spots getting periodically occulted by spin-orbit-misaligned dust disks. Here we provide a comprehensive overview of all aspects of these hypotheses, and add more observational constraints in an effort to understand these objects with photometry from TESS and the SPECULOOS Southern Observatory (SSO). We scrutinize the hypotheses from three different angles: (1) we investigate the occurrence rates of these scenarios through existing young star catalogs; (2) we study the longevity of these features using over one year of combined photometry from TESS and SSO; and (3) we probe the expected color dependency with multi-color photometry from SSO. In this process, we also revisit the stellar parameters accounting for activity effects, study stellar flares as activity indicators over year-long time scales, and develop toy models to imitate typical morphologies. We identify which parts of the hypotheses hold true or are challenged by these new observations. So far, none of the hypotheses stands out as a definite answer, and each comes with limitations. While the mystery of these complex rotators remains, we here add valuable observational pieces to the puzzle for all studies going forward. | astrophysics
We consider a homogeneous heteronuclear Bose mixture with contact interactions at the mean-field collapse, i.e. with the interspecies attraction equal to the geometric mean of the intraspecies repulsions. We show that the Lee-Huang-Yang (LHY) energy functional is accurately approximated by an expression that has the same functional form as in the homonuclear case. The approximated energy functional is characterized by two exponents, which can be treated as fitting parameters. We demonstrate that the values of these parameters which preserve the invariance under permutation of the two atomic species are exactly those of the homonuclear case. Deviations from the exact expression of the LHY energy functional are discussed quantitatively and a specific application is described. | condensed matter
In this note we present a review, some considerations, and new results about maps defined on a $\sigma$-finite measure space $X$ with values in a distribution space. In particular, we survey Bessel maps, frames, and bases (in particular Riesz and Gel'fand bases) in a distribution space. In this setting, Riesz-Fischer maps and semi-frames are defined, and new results about them are obtained. Some examples in the space of tempered distributions are examined. | mathematics
Working within the Stochastic Series Expansion (SSE) framework, we construct efficient quantum cluster algorithms for transverse field Ising antiferromagnets on the pyrochlore lattice and the planar pyrochlore lattice, for the fully frustrated square lattice Ising model in a transverse field (dual to the 2+1 dimensional odd Ising gauge theory), and for a transverse field Ising model with multi-spin interactions on the square lattice, which is dual to a 2+1 dimensional even Ising gauge theory (and reduces to the two dimensional quantum loop model in a certain limit). Our cluster algorithms use a microcanonical update procedure that generalizes and exploits the notion of "pre-marked motifs" introduced earlier in the context of a quantum cluster algorithm for triangular lattice transverse field Ising antiferromagnets. We demonstrate that the resulting algorithms are significantly more efficient than the standard link percolation based quantum cluster approach. We also introduce a new canonical update scheme that leads to a further improvement in measurement of some observables arising from its ability to make one-dimensional clusters in the "imaginary time" direction. Finally, we demonstrate that refinements in the choice of premarking strategies can lead to additional improvements in the efficiency of the microcanonical updates. As a first example of the physics that can be studied using these algorithmic developments, we obtain evidence for a power-law ordered intermediate-temperature phase associated with the two-step melting of long-range order in the fully frustrated square lattice transverse field Ising model. | condensed matter |
We propose a computationally light method for estimating similarities between text documents, which we call the density similarity (DS) method. The method is based on a word embedding in a high-dimensional Euclidean space and on kernel regression, and takes into account semantic relations among words. We find that the accuracy of this method is virtually the same as that of a state-of-the-art method, while the gain in speed is very substantial. Additionally, we introduce generalized versions of the top-k accuracy metric and of the Jaccard metric of agreement between similarity models. | computer science |
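The abstract does not spell out the DS construction, but one plausible reading is to represent each document as a kernel-smoothed density of its word embeddings evaluated at a shared set of anchor points and to compare the resulting vectors. In the sketch below, the anchor grid, Gaussian kernel, bandwidth, and cosine comparison are all our assumptions:

```python
import numpy as np

def density_vector(doc_vecs, anchors, bandwidth=0.5):
    """Represent a document as a kernel-smoothed density of its word
    embeddings evaluated at a fixed set of anchor points."""
    d2 = ((anchors[:, None, :] - doc_vecs[None, :, :]) ** 2).sum(-1)
    dens = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)
    return dens / (np.linalg.norm(dens) + 1e-12)

def density_similarity(doc_a, doc_b, anchors):
    """Cosine similarity between the two density vectors."""
    return float(density_vector(doc_a, anchors) @ density_vector(doc_b, anchors))

# Usage with random stand-ins for word embeddings (dimension 50).
rng = np.random.default_rng(0)
anchors = rng.normal(size=(100, 50))   # shared evaluation grid
doc_a = rng.normal(size=(40, 50))      # embeddings of words in document A
doc_b = rng.normal(size=(60, 50))      # embeddings of words in document B
print(density_similarity(doc_a, doc_b, anchors))
```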
In this paper, we mainly focus on penalized maximum likelihood estimation (MLE) of the high-dimensional approximate factor model. Since the current estimation procedure cannot guarantee the positive definiteness of the error covariance matrix, we reformulate the estimation of the error covariance matrix and, based on Lagrangian duality, propose an accelerated proximal gradient (APG) algorithm that gives a positive definite estimate of the error covariance matrix. Combining the APG algorithm with the EM method, a new estimation procedure is proposed to estimate the high-dimensional approximate factor model. The new method not only gives a positive definite estimate of the error covariance matrix but also improves the efficiency of estimation for the high-dimensional approximate factor model. Although the proposed algorithm cannot guarantee a globally unique solution, it enjoys a desirable non-increasing property. The efficiency of the new algorithm on estimation and forecasting is also investigated via simulation and real data analysis. | statistics
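The key ingredient, keeping the error covariance estimate positive definite inside an accelerated proximal gradient loop, can be illustrated with a generic FISTA-style sketch whose proximal step projects onto the positive definite cone by eigenvalue clipping. This is a schematic stand-in under our own assumptions, not the authors' exact APG/EM procedure:

```python
import numpy as np

def project_to_pd(S, eps=1e-6):
    """Project a symmetric matrix onto the cone of positive definite
    matrices by clipping its eigenvalues from below at eps."""
    w, V = np.linalg.eigh((S + S.T) / 2)
    return (V * np.clip(w, eps, None)) @ V.T

def apg_pd_estimate(grad, S0, steps=200, lr=0.1):
    """Accelerated proximal gradient sketch: gradient step on a smooth
    loss (via `grad`), then PD-cone projection as the proximal step."""
    S, S_prev, t = S0.copy(), S0.copy(), 1.0
    for _ in range(steps):
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        Y = S + ((t - 1) / t_next) * (S - S_prev)   # Nesterov momentum
        S_prev, t = S, t_next
        S = project_to_pd(Y - lr * grad(Y))
    return S

# Toy usage: Frobenius fit to an indefinite target C (hypothetical).
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
C = A @ A.T - 2 * np.eye(5)                         # indefinite target
S_hat = apg_pd_estimate(lambda S: S - C, np.eye(5)) # grad of 0.5||S - C||^2
print(np.linalg.eigvalsh(S_hat).min() > 0)          # True: estimate is PD
```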
Yarel is a core reversible programming language that implements a class of permutations, defined recursively, which are primitive recursive complete. The current release of Yarel syntax and operational semantics, implemented by compiling Yarel to Java, is 0.1.0, according to Semantic Versioning 2.0.0. Yarel comes with Yarel-IDE, developed as an Eclipse plug-in by means of XText. | computer science |
Observations indicate that nearly all galaxies contain supermassive black holes (SMBHs) at their centers. When galaxies merge, their component black holes form SMBH binaries (SMBHBs), which emit low-frequency gravitational waves (GWs) that can be detected by pulsar timing arrays (PTAs). We have searched the recently-released North American Nanohertz Observatory for Gravitational Waves (NANOGrav) 11-year data set for GWs from individual SMBHBs in circular orbits. As we did not find strong evidence for GWs in our data, we placed 95\% upper limits on the strength of GWs from such sources as a function of GW frequency and sky location. We placed a sky-averaged upper limit on the GW strain of $h_0 < 7.3(3) \times 10^{-15}$ at $f_\mathrm{gw}= 8$ nHz. We also developed a technique to determine the significance of a particular signal in each pulsar using ``dropout'' parameters as a way of identifying spurious signals in measurements from individual pulsars. We used our upper limits on the GW strain to place lower limits on the distances to individual SMBHBs. At the most-sensitive sky location, we ruled out SMBHBs emitting GWs with $f_\mathrm{gw}= 8$ nHz within 120 Mpc for $\mathcal{M} = 10^9 \, M_\odot$, and within 5.5 Gpc for $\mathcal{M} = 10^{10} \, M_\odot$. We also determined that there are no SMBHBs with $\mathcal{M} > 1.6 \times 10^9 \, M_\odot$ emitting GWs in the Virgo Cluster. Finally, we estimated the number of potentially detectable sources given our current strain upper limits, based on galaxies in the Two Micron All-Sky Survey (2MASS) and merger rates from the Illustris cosmological simulation project. Only 34 out of 75,000 realizations of the local Universe contained a detectable source, from which we concluded it was unsurprising that we did not detect any individual sources given our current sensitivity to GWs. | astrophysics
Administrative data allow us to count the number of residents. The geo-localization of people by mobile phone, by quantifying the number of people present at a given moment in time, enriches the information useful for "smart city" evaluations. However, using Telecom Italia Mobile (TIM) data, we can characterize the spatio-temporal dynamics of presences in the city of TIM users only. A strategy to estimate total presences is therefore needed. In this paper we propose a strategy to extrapolate the total number of people present using TIM data only. To do so, we apply a spatial record linkage of mobile phone data with administrative archives, using the number of residents at the level of the sezione di censimento. | statistics
The paper aims at developing low-storage implicit Runge-Kutta methods that are easy to implement and achieve higher-order convergence for both the velocity and pressure in the finite volume formulation of the incompressible Navier-Stokes equations on a static collocated grid. To this end, the effect of the momentum interpolation, a procedure required by the finite volume method for collocated grids, on the differential-algebraic nature of the spatially-discretized Navier-Stokes equations should be examined first. A new framework for the momentum interpolation is established, based on which the semi-discrete Navier-Stokes equations can be strictly viewed as a system of differential-algebraic equations of index 2. The accuracy and convergence of the proposed momentum interpolation framework are examined. We then propose a new method of applying implicit Runge-Kutta schemes to the time-marching of the index-2 system of the incompressible Navier-Stokes equations. Compared to the standard method, the proposed one significantly reduces the numerical difficulties in momentum interpolations and delivers higher-order pressures without requiring additional computational effort. Applying stiffly-accurate diagonal implicit Runge-Kutta (DIRK) schemes with the proposed method allows the schemes to attain the classical order of convergence for both the velocity and pressure. We also develop two families of low-storage stiffly-accurate DIRK schemes to reduce the storage required by their implementations. Examining the two-dimensional Taylor-Green vortex as an example, the spatial and temporal accuracy of the proposed methods in simulating incompressible flow is demonstrated. | mathematics
We have proposed in several recent papers a critical view of some parts of quantum mechanics (QM) that is methodologically unusual, because it rests on analysing the language of QM using some elementary but fundamental tools of mathematical logic. Our approach proves that some widespread beliefs about QM can be questioned and establishes new links with a classical view, which is significant in the debate on the interpretations of QM. We propose here a brief survey of our results, highlighting their common background. We firstly show how quantum logic (QL) can be embedded into classical logic (CL) if the embedding is required to preserve the logical order and not the algebraic structure, and also how QL can be interpreted as a pragmatic sublanguage within a pragmatic extension of CL. Both these results challenge the thesis that CL and QL formalize the properties of different and incompatible notions of truth. We then show that quantum probability admits an epistemic interpretation if contextuality is taken into account as a basic constituent of the language of QM, which overcomes the interpretation of quantum probability as ontic. Finally, we show that the proofs that QM is a contextual theory rest on a supplementary epistemological assumption that usually goes unnoticed and is left implicit. Dropping this assumption opens the way, at least in principle, to non-contextual interpretations of QM. | quantum physics
Previously, we developed a minimal model based on random cooperative strings for the relaxation of supercooled liquids in the bulk and near free interfaces, and we recovered some key experimental observations. In this article, after recalling the main ingredients of the cooperative string model, we study the effective glass transition and surface mobility of various experimentally-relevant confined geometries: freestanding films, supported films, spherical particles, and cylindrical particles, with free interfaces and/or passive substrates. Finally, we introduce a novel way to account for a purely attractive substrate, and explore the impact of the latter in the previous geometries. | condensed matter |
In this work we study the behavior of massless fermions in a graphene wormhole and in the presence of an external magnetic field. The graphene wormhole is made from two sheets of graphene that play the roles of asymptotically flat spaces connected through a carbon nanotube with a zig-zag boundary. We solve the massless Dirac equation in this geometry and analyze its wave function. We show that the energy spectra of these solutions exhibit behavior similar to that of Landau levels. | high energy physics theory
For further improving the capacity and reliability of optical networks, a closed-loop autonomous architecture is preferred. Given the large number of optical components in an optical network and the many digital signal processing modules in each optical transceiver, massive amounts of real-time data can be collected. However, for a traditional monitoring structure, collecting, storing, and processing such large volumes of data are challenging tasks. Moreover, strong correlations and similarities between data from different sources and regions are not properly considered, which may limit function extension and accuracy improvement. To address the abovementioned issues, a data-fusion-assisted telemetry layer between the physical layer and control layer is proposed in this paper. The data fusion methodologies are elaborated at three different levels: the Source Level, Space Level, and Model Level. For each level, various data fusion algorithms are introduced and relevant works are reviewed. In addition, proof-of-concept use cases for each level are provided through simulations, where the benefits of the data-fusion-assisted telemetry layer are shown. | electrical engineering and systems science
User studies have shown that reducing the latency of our simultaneous lecture translation system should be the most important goal. We therefore have worked on several techniques for reducing the latency for both components, the automatic speech recognition and the speech translation module. Since the commonly used commitment latency is not appropriate in our case of continuous stream decoding, we focused on word latency. We used it to analyze the performance of our current system and to identify opportunities for improvements. In order to minimize the latency we combined run-on decoding with a technique for identifying stable partial hypotheses when stream decoding and a protocol for dynamic output update that allows to revise the most recent parts of the transcription. This combination reduces the latency at word level, where the words are final and will never be updated again in the future, from 18.1s to 1.1s without sacrificing performance in terms of word error rate. | electrical engineering and systems science |
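One common heuristic for identifying stable partial hypotheses in run-on decoding, not necessarily the exact criterion used here, is to emit only the longest common prefix of the last few partial hypotheses, since words in that prefix are unlikely to be revised later:

```python
def stable_prefix(hyps):
    """Longest common prefix (in words) of recent partial hypotheses;
    words in this prefix can be emitted as final output."""
    if not hyps:
        return []
    split = [h.split() for h in hyps]
    prefix = []
    for words in zip(*split):
        if all(w == words[0] for w in words):
            prefix.append(words[0])
        else:
            break
    return prefix

# Successive partial hypotheses from a run-on decoder (hypothetical).
history = ["the quick brown", "the quick brown fox", "the quick brown fox jumps"]
print(stable_prefix(history[-3:]))  # ['the', 'quick', 'brown']
```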
We perform a study of $W$-boson production in polarized proton-proton collisions through next-to-next-to-leading order (NNLO) in perturbative QCD. This calculation is required to extend the extraction of polarized parton distribution functions to NNLO accuracy. We present differential distributions at $\sqrt{s}=510$ GeV, relevant for comparison to measurements from the Relativistic Heavy Ion Collider (RHIC). The NNLO QCD corrections significantly reduce the scale dependence of the cross section. We compare the longitudinal single-spin asymmetries as a function of lepton pseudorapidity to RHIC data. The asymmetries exhibit excellent stability under perturbative QCD corrections. | high energy physics phenomenology |
Using the results of \cite{P1}, we obtain estimates of warping functions for isometric immersions when the target manifolds are certain types of Riemannian manifolds: space forms and Hermitian symmetric spaces. We also treat the equality cases and obtain applications of these estimates. Finally, we pose some open problems. | mathematics
The LHCb Collaboration recently reported the observation of a new excited bottom baryon, $\Xi_b(6227)^0$, and announced an improvement in the measurements related to the previously observed $\Xi_b(6227)^-$ state. We analyze the $\Xi_b(6227)^0$ state, considering it as the isospin partner of the $\Xi_b(6227)^-$ resonance and as a possible $1P$ or $2S$ excited state with spin $J=\frac{3}{2}$. The corresponding masses for both possibilities are consistent with the experimental data, indicating that from the mass sum rules alone one cannot reach a definite conclusion on the nature and quantum numbers of this state. To go further, the decays of these possible excited states to the $\Xi_b^- \pi^+$ final state are also considered, and the relevant strong coupling constants are extracted from light-cone sum rules. The obtained decay widths support the possibility that $\Xi_b(6227)^0$ is the $1P$ excited state of the $\Xi_b(5945)^0$ baryon. | high energy physics phenomenology
A major issue in harmonic analysis is to capture the phase dependence of frequency representations, which carries important signal properties. It seems that convolutional neural networks have found a way. Over time-series and images, convolutional networks often learn a first layer of filters that are well localized in the frequency domain, with different phases. We show that a rectifier then acts as a filter on the phase of the resulting coefficients. It computes signal descriptors that are local in space, frequency, and phase. The non-linear phase filter becomes a multiplicative operator over phase harmonics computed with a Fourier transform along the phase. We prove that it defines a bi-Lipschitz and invertible representation. The correlations of phase harmonic coefficients characterise coherent structures through their phase dependence across frequencies. For wavelet filters, we show numerically that signals having sparse wavelet coefficients can be recovered from few phase harmonic correlations, which provide a compressive representation. | electrical engineering and systems science
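The phase harmonic operator itself is simple to state: it keeps the modulus of a complex coefficient and multiplies its phase by an integer order, $[z]^k = |z|e^{ik\arg z}$. A small sketch, with a crude two-band Fourier split standing in for wavelet filtering:

```python
import numpy as np

def phase_harmonic(z, k):
    """Phase harmonic of order k: keep the modulus, multiply the phase."""
    return np.abs(z) * np.exp(1j * k * np.angle(z))

def phase_harmonic_correlation(x, k1, k2):
    """Correlate phase harmonics of two frequency bands of a signal; the
    band split here is a hypothetical stand-in for wavelet filtering."""
    X = np.fft.fft(x)
    n = len(x)
    lo, hi = X.copy(), X.copy()
    lo[n // 4:] = 0.0           # crude low-frequency band
    hi[:n // 4] = 0.0           # crude high-frequency band
    a = phase_harmonic(np.fft.ifft(lo), k1)
    b = phase_harmonic(np.fft.ifft(hi), k2)
    return (a * np.conj(b)).mean()

# Example on a two-tone signal (illustrative only).
t = np.arange(1024)
x = np.cos(2 * np.pi * 0.01 * t) + np.cos(2 * np.pi * 0.3 * t)
print(phase_harmonic_correlation(x, k1=2, k2=1))
```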
Emergent quantum phases driven by electronic interactions can manifest in materials with narrowly dispersing, i.e. "flat", energy bands. Recently, flat bands have been realized in a variety of graphene-based heterostructures using the tuning parameters of twist angle, layer stacking and pressure, resulting in correlated insulator and superconducting states. Here we report the experimental observation of similar correlated phenomena in twisted bilayer tungsten diselenide (tWSe2), a semiconducting transition metal dichalcogenide (TMD). Unlike twisted bilayer graphene, where the flat band appears only within a narrow range around a "magic angle", we observe correlated states over a continuum of angles, spanning 4 degrees to 5.1 degrees. A Mott-like insulator appears at half band filling that can be sensitively tuned with displacement field. Hall measurements supported by ab initio calculations suggest that the strength of the insulator is driven by the density of states at half filling, consistent with a 2D Hubbard model in a regime of moderate interactions. At a twist of 5.1 degrees, we observe evidence of superconductivity upon doping away from half filling, reaching zero resistivity around 3 K. Our results establish twisted bilayer TMDs as a model system to study interaction-driven phenomena in flat bands with dynamically tunable interactions. | condensed matter
Speech encodes a wealth of information related to human behavior and has been used in a variety of automated behavior recognition tasks. However, extracting behavioral information from speech remains challenging, in part due to inadequate training data resources stemming from the often low occurrence frequencies of specific behavioral patterns. Moreover, supervised behavioral modeling typically relies on domain-specific construct definitions and corresponding manually-annotated data, making generalization across domains challenging. In this paper, we exploit the stationary properties of human behavior within an interaction and present a representation learning method to capture behavioral information from speech in an unsupervised way. We hypothesize that nearby segments of speech share the same behavioral context and hence map onto similar underlying behavioral representations. We present an encoder-decoder based Deep Contextualized Network (DCN) as well as a Triplet-Enhanced DCN (TE-DCN) framework to capture the behavioral context and derive a manifold representation, where speech frames with similar behaviors are closer while frames of different behaviors maintain larger distances. The models are trained on movie audio data and validated on diverse domains including a couples therapy corpus and other publicly collected data (e.g., stand-up comedy). With encouraging results, our proposed framework shows the feasibility of unsupervised learning within cross-domain behavioral modeling. | electrical engineering and systems science
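The stationarity hypothesis, that nearby speech segments share behavioral context, translates naturally into triplet sampling. The sketch below pairs each anchor frame with a temporally close positive and a distant negative and scores them with a hinge triplet loss; the window sizes and the plain Euclidean distance are illustrative assumptions, not the TE-DCN specifics:

```python
import numpy as np

def sample_triplet(frames, near=5, far=50):
    """Sample (anchor, positive, negative) frame indices: temporally close
    frames are assumed behaviorally similar, distant ones dissimilar."""
    n = len(frames)
    a = np.random.randint(near, n - far)
    p = a + np.random.randint(1, near + 1)   # close in time -> positive
    q = np.random.randint(0, n)
    while abs(q - a) < far:                  # resample until far enough
        q = np.random.randint(0, n)
    return frames[a], frames[p], frames[q]

def triplet_loss(a, p, q, margin=1.0):
    """Hinge triplet loss pulling nearby segments together on the manifold."""
    d_ap = np.sum((a - p) ** 2)
    d_aq = np.sum((a - q) ** 2)
    return max(0.0, d_ap - d_aq + margin)

# Usage with 200 hypothetical 64-dimensional frame embeddings.
frames = np.random.randn(200, 64)
a, p, q = sample_triplet(frames)
print(triplet_loss(a, p, q))
```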
We investigate the production of exotic tetraquarks, $QQ\bar{q}\bar{q} \equiv T_{QQ}$ ($Q=c$ or $b$ and $q=u$ or $d$), in relativistic heavy-ion collisions using the quark coalescence model. The $T_{QQ}$ yield is given by the overlap of the density matrix of the constituents in the emission source with the Wigner function of the produced tetraquark. The tetraquark wave function is obtained from exact solutions of the four-body problem using realistic constituent models. The production yields are typically one order of magnitude smaller than previous estimations based on simplified wave functions for the tetraquarks. We also evaluate the consequences of the partial restoration of chiral symmetry at the hadronization temperature on the coalescence probability. Such effects, in addition to increasing the stability of the tetraquarks, lead to an enhancement of the production yields, pointing towards an excellent discovery potential in forthcoming experiments. We discuss further consequences of our findings for the search of exotic tetraquarks in central Pb+Pb collisions at the LHC. | high energy physics phenomenology |
In this paper, the bit error rate (BER) performance of spatial modulation (SM) systems is investigated both theoretically and by simulation in a non-stationary Kronecker-based massive multiple-input-multiple-output (MIMO) channel model in multi-user (MU) scenarios. Massive MIMO SM systems are considered in this paper using both a time-division multiple access (TDMA) scheme and a block diagonalization (BD) based precoding scheme, for different system settings. Their performance is compared with a vertical Bell labs layered space-time (V-BLAST) architecture based system and a conventional channel inversion system. It is observed that a higher cluster evolution factor can result in better BER performance of SM systems due to the low correlation among sub-channels. Compared with the BD-SM system, the SM system using the TDMA scheme obtains a better BER performance but with a much lower total system data rate. The BD-MU-SM system achieves the best trade-off between the data rate and the BER performance among all of the systems considered. When compared with the V-BLAST system and the channel inversion system, SM approaches offer advantages in performance for MU massive MIMO systems. | electrical engineering and systems science |
Finding physical principles lying behind quantum mechanics is essential to understanding various quantum features, e.g., quantum correlations, in a theory-independent manner. Here we propose such a principle, namely, no disturbance without uncertainty: the disturbance caused by a measurement to a subsequent incompatible measurement is no larger than the uncertainty of the first measurement, equipped with suitable theory-independent measures for disturbance and uncertainty. When applied to local systems in a multipartite scenario, our principle imposes such a strong constraint on non-signaling correlations that quantum correlations can be recovered in many cases: i. it accounts for Tsirelson's bound; ii. it provides the tightest boundary so far for a family of noisy super-nonlocal boxes with 3 parameters; and iii. it rules out an almost quantum correlation from the set of quantum correlations, a task at which all previous principles, as well as the celebrated quantum criterion due to Navascues, Pironio, and Acin, fail. Our results pave the way to understanding the nonlocality exhibited in quantum correlations from local principles. | quantum physics
Exotic spinor fields arise from inequivalent spin structures on non-trivial topological manifolds, $M$. This induces an additional term in the Dirac operator, defined by an element of the cohomology group $H^1(M,\mathbb{Z}_2)$, i.e., a Čech cohomology class. This formalism is extended to manifolds of any finite dimension, endowed with a metric of arbitrary signature. The exotic corrections to the heat kernel coefficients, relating spectral properties of exotic Dirac operators to geometric invariants of $M$, are derived and scrutinized. | high energy physics theory
We perform projector quantum Monte Carlo simulations of the half-filled attractive SU(3) Hubbard model on a honeycomb lattice, exploring the effects of SU(3) symmetry on correlated attractive Dirac fermions. Our simulations indicate the absence of pairing order in the system and show a quantum phase transition from the semimetal to a charge density wave (CDW) at the critical point $U_c=-1.52(2)$. We demonstrate that this quantum phase transition belongs to the chiral Ising universality class, according to the numerically determined critical exponents $\nu=0.82(3)$ and $\eta=0.58(4)$. With increasing coupling strength, trion formation is investigated, and the change in the probability of on-site trion occupancy suggests the coexistence of on-site trionic and off-site trionic CDW states at half-filling. | condensed matter
Noise sources are ubiquitous in Nature and give rise to a description of quantum systems in terms of stochastic Hamiltonians. Decoherence dominates the noise-averaged dynamics and leads to dephasing and the decay of coherences in the eigenbasis of the fluctuating operator. For energy-diffusion processes stemming from fluctuations of the system Hamiltonian, the characteristic decoherence time is shown to be proportional to the heat capacity. We analyze the decoherence dynamics of entangled CFTs and characterize the dynamics of the purity and logarithmic negativity, which are shown to decay monotonically as a function of time. The converse is true for the quantum Renyi entropies. From the short-time asymptotics of the purity, the decoherence rate is identified and shown to be proportional to the central charge. The fixed point characterizing long times of evolution depends on the presence of degeneracies in the energy spectrum. We show how the information loss associated with decoherence can be attributed to its leakage to an auxiliary environment, and discuss what gravity duals of decoherence dynamics in holographic CFTs look like in AdS/CFT. We find that the inner horizon region of the eternal AdS black hole is highly squeezed due to decoherence. | high energy physics theory
Techniques known as Nonlinear Set Membership prediction, Lipschitz Interpolation, or Kinky Inference are approaches to machine learning that utilise presupposed Lipschitz properties to compute inferences over unobserved function values. Provided a bound on the true best Lipschitz constant of the target function is known a priori, they offer convergence guarantees as well as bounds around the predictions. Considering a more general setting that builds on Hoelder continuity relative to pseudo-metrics, we propose an online method for estimating the Hoelder constant from function value observations that are possibly corrupted by bounded observational errors. Utilising this to compute adaptive parameters within a kinky inference rule gives rise to a nonparametric machine learning method for which we establish strong universal approximation guarantees. That is, we show that our prediction rule can learn any continuous function, in the limit of increasingly dense data, to within a worst-case error bound that depends on the level of observational uncertainty. We apply our method in the context of nonparametric model-reference adaptive control (MRAC). Across a range of simulated aircraft roll-dynamics and performance metrics, our approach outperforms recently proposed alternatives that were based on Gaussian processes and RBF-neural networks. For discrete-time systems, we provide guarantees on the tracking success of our learning-based controllers both for the batch and the online learning setting. | mathematics
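The kinky inference rule and the online constant estimate described above admit a short self-contained sketch: the prediction is the midpoint of the tightest Hoelder-consistent ceiling and floor envelopes, and the constant is re-estimated from observed pairs after discounting the noise bound. Parameter names and the update schedule below are our own choices:

```python
import numpy as np

class KinkyInference:
    """Lipschitz/Hoelder interpolation with an online constant estimate.
    A sketch: p is the Hoelder exponent and eps a bound on the
    observational noise (both assumed known here)."""

    def __init__(self, p=1.0, eps=0.0):
        self.p, self.eps, self.L = p, eps, 0.0
        self.X, self.Y = [], []

    def observe(self, x, y):
        # Raise the constant estimate to stay consistent with all pairs,
        # discounting up to 2*eps of the difference as observation noise.
        for xi, yi in zip(self.X, self.Y):
            d = np.linalg.norm(x - xi) ** self.p
            if d > 0:
                self.L = max(self.L, (abs(y - yi) - 2 * self.eps) / d)
        self.X.append(np.asarray(x, float))
        self.Y.append(float(y))

    def predict(self, x):
        d = np.array([np.linalg.norm(x - xi) ** self.p for xi in self.X])
        y = np.array(self.Y)
        ceiling = np.min(y + self.L * d)   # tightest upper envelope
        floor = np.max(y - self.L * d)     # tightest lower envelope
        return 0.5 * (ceiling + floor)

# Usage on a noisy 2D toy function (illustrative only).
ki = KinkyInference(p=1.0, eps=0.01)
for _ in range(200):
    x = np.random.uniform(-1, 1, size=2)
    ki.observe(x, np.sin(3 * x[0]) * x[1] + 0.01 * np.random.uniform(-1, 1))
print(ki.predict(np.zeros(2)))
```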