text | label |
---|---|
Inspired by the experiment from Moresco \& Alboussi\`ere (2004, J. Fluid Mech.), we study the stability of a liquid metal flow in a rectangular, electrically insulating duct with a steady homogeneous transverse magnetic field. The Lorentz force tends to eliminate velocity variations along the magnetic field, leading to a quasi-two-dimensional base flow with Hartmann boundary layers near the walls perpendicular to the magnetic field, and Shercliff layers near the walls parallel to the field. Since the Lorentz force also strongly opposes the growth of perturbations that vary along the magnetic field direction, we represent the flow with Sommeria \& Moreau's (1982, J. Fluid Mech.) model, a two-dimensional shallow water model with linear friction accounting for the effect of the Hartmann layer. The simplicity of this model makes it possible to study the stability and transient growth of quasi-two-dimensional perturbations over an extensive range of parameters up to the limit of high magnetic fields, where the Reynolds number based on the Shercliff layer thickness $Re/H^{1/2}$ becomes the only relevant parameter. Tollmien-Schlichting waves are the most linearly unstable mode, with a further unstable mode appearing for $H \gtrsim 42$. The flow is linearly unstable for $Re/H^{1/2}\gtrsim 48350$ and energetically stable for $Re/H^{1/2}\lesssim 65.32$. Between these two bounds, non-modal quasi-two-dimensional perturbations undergo significant transient growth (between 2 and 7 times more than in the case of a purely 2D Poiseuille flow, and at much more strongly subcritical values of $Re$). In the limit of a high magnetic field, the maximum gain $G_{max}$ associated with this transient growth varies as $G_{max} \sim (Re/Re_c)^{2/3}$ and occurs at time $t_{Gmax}\sim(Re/Re_c)^{1/3}$ for streamwise wavenumbers of the same order of magnitude as the critical wavenumber for the linear stability. | physics |
Hadronization of heavy quarks reveals various unusual features. Gluon radiation by a heavy quark originating from a hard process ceases quickly, within a distance of the order of a few fm. Due to the dead-cone effect a heavy quark radiates only a small fraction of its energy, which is why the measured fragmentation function D(z) peaks at large z. Hadronization finishes at very short distances, well below 1 fm, with the production of a colorless small-size $Q\bar q$ dipole. This ensures the dominance of a perturbative mechanism and makes possible the factorization of short and long distances. The latter correspond to final state interactions of the produced dipole propagating through a dense medium. The results provide a good description of data on beauty and charm suppression in heavy ion collisions, fixing the transport coefficient for b-quarks at about half the value for charm, with both significantly lower than the values determined from data on suppression of high-$p_T$ light hadrons. We relate this to the reduction of the QCD coupling at higher scales, and to the suppression of radiation by the dead-cone effect. | high energy physics phenomenology |
The aim of this article is to study a Cahn-Hilliard model for a multicomponent mixture with cross-diffusion effects and degenerate mobility, in which only one of the species separates from the others. We define a notion of weak solution adapted to possible degeneracies, and our main result is global-in-time existence. In order to overcome the lack of a-priori estimates, our proof uses the formal gradient flow structure of the system and an extension of the boundedness-by-entropy method which involves a careful analysis of an auxiliary variational problem. This allows us to obtain solutions to an approximate, time-discrete system. Letting the time step size go to zero, we recover the desired weak solution where, due to their low regularity, the Cahn-Hilliard terms require a special treatment. | mathematics |
The Poisson equation has wide applications in many areas of science and engineering. Although there are some quantum algorithms that can efficiently solve the Poisson equation, they generally require a fault-tolerant quantum computer, which is beyond current technology. In this paper, we propose a Variational Quantum Algorithm (VQA) to solve the Poisson equation, which can be executed on Noisy Intermediate-Scale Quantum (NISQ) devices. In detail, we first adopt the finite difference method to transform the Poisson equation into a linear system. Then, according to the special structure of the linear system, we find an explicit tensor product decomposition, with only $2\log n+1$ terms, of its coefficient matrix under a specific set of simple operators, where $n$ is the dimension of the coefficient matrix. This implies that the proposed VQA only needs $O(\log n)$ measurements, which dramatically reduces the required quantum resources. Additionally, we perform quantum Bell measurements to efficiently evaluate the expectation values of the simple operators. Numerical experiments demonstrate that our algorithm can effectively solve the Poisson equation. | quantum physics |
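A minimal sketch of the classical first step this abstract describes, namely discretizing the one-dimensional Poisson equation with finite differences into a tridiagonal linear system and solving it for reference; the paper-specific tensor-product decomposition and the variational circuit are not reproduced, and the grid size, source term and boundary conditions below are illustrative assumptions.

```python
import numpy as np

# Finite-difference discretization of -u''(x) = f(x) on (0, 1)
# with u(0) = u(1) = 0 (illustrative choices; the VQA acts on the
# same tridiagonal structure before decomposing it into operators).
n = 8                                   # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Tridiagonal coefficient matrix: 2/h^2 on the diagonal, -1/h^2 off it.
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

f = np.sin(np.pi * x)                   # example source term
u = np.linalg.solve(A, f)               # classical reference solve

# Exact solution of -u'' = sin(pi x) is sin(pi x) / pi^2.
print(np.max(np.abs(u - f / np.pi**2)))  # small O(h^2) discretization error
```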
Efficient operation of a submillimeter interferometer requires remote (preferably automated) control of mechanically tuned local oscillators, phase-lock loops, mixers, optics, calibration vanes and cryostats. The present control system for these aspects of the Submillimeter Array (SMA) will be described. Distributed processing forms the underlying architecture and the software is split between hardware platforms in a leader/follower arrangement. In each antenna cabin, a serial network of up to ten independent 80C196 microcontroller boards attaches to the real-time PowerPC computer (running LynxOS). A multi-threaded, gcc-compiled leader program on the PowerPC accepts top-level requests via remote procedure calls (RPC), subsequently dispatches tuning commands to the relevant follower microcontrollers, and regularly reports the system status to optical-fiber-based reflective memory for common access by the telescope monitor and error reporting system. All serial communication occurs asynchronously via encoded, variable-length packets. The microcontrollers respond to the requested commands and queries by accessing non-volatile, rewritable lookup tables (when appropriate) and executing embedded software that operates additional electronic devices (DACs, ADCs, etc.). Since various receiver hardware components require linear or rotary motion, each microcontroller also implements a position servo via a one-millisecond interrupt service routine which drives a DC-motor/encoder combination that remains standard across each subsystem. | astrophysics |
In this paper, we study the change point localization problem in a sequence of dependent nonparametric random dot product graphs. To be specific, assume that at every time point, a network is generated from a nonparametric random dot product graph model (see e.g. Athreya et al., 2017), where the latent positions are generated from unknown underlying distributions. The underlying distributions are piecewise constant in time and change at unknown locations, called change points. Most importantly, we allow for dependence among the networks generated between two consecutive change points. This setting incorporates edge-dependence within networks and temporal dependence between networks, which is the most flexible setting in the published literature. To accomplish the task of consistently localizing change points, we propose a novel change point detection algorithm, consisting of two steps. First, we estimate the latent positions of the random dot product model, our theoretical result being a refined version of the state-of-the-art results, allowing the dimension of the latent positions to grow unboundedly. Subsequently, we construct a nonparametric version of the CUSUM statistic (Page, 1954; Padilla et al., 2019) that allows for temporal dependence. Consistent localization is proved theoretically and supported by extensive numerical experiments, which illustrate state-of-the-art performance. We also provide an in-depth discussion of possible extensions to give more understanding and insights. | statistics |
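For intuition, here is a toy sketch of the classical univariate CUSUM statistic underlying the approach cited above (Page, 1954): at each candidate split point it contrasts the means of the two segments, and the split maximizing the statistic estimates the change point. The paper's nonparametric, network-valued, dependence-robust version is substantially more involved; the data below are synthetic.

```python
import numpy as np

def cusum(y):
    """CUSUM statistic at every interior split point of the sequence y."""
    T = len(y)
    stats = np.zeros(T)
    for t in range(1, T - 1):
        left, right = y[:t].mean(), y[t:].mean()
        stats[t] = np.sqrt(t * (T - t) / T) * abs(left - right)
    return stats

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1, 100), rng.normal(1.5, 1, 100)])
print(np.argmax(cusum(y)))   # estimated change point, close to 100
```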
We derive a thermodynamic first law for the electrically charged C-metric with vanishing cosmological constant. This spacetime describes a pair of identical accelerating black holes each pulled by a cosmic string. Treating the "boost time" of this spacetime as the canonical time, we find a thermodynamic first law in which every term has an unambiguous physical meaning. We then show how this first law may be derived using Noetherian methods in the covariant phase space formalism. We argue that the area of the acceleration horizon contributes to the entropy and that the appropriate notion of energy of this spacetime is a "boost mass" which vanishes identically. The recovery of the Reissner-Nordstrom first law in the limit of small string tension is also demonstrated. Finally, we compute the action of the Euclidean section of the C-metric and show it agrees with the thermodynamic grand potential, providing an independent confirmation of the validity of our first law. We also briefly speculate on the significance of firewalls in this spacetime. | high energy physics theory |
We study the phase space structure of exact quantum Wightman functions in spatially homogeneous, temporally varying systems. In addition to the usual mass shells, the Wightman functions display additional coherence shells around zero frequency $k_0=0$, which carry the information of the local quantum coherence of particle-antiparticle pairs. We also find other structures, which encode non-local correlations in time, and discuss their role and decoherence. We give a simple derivation of the cQPA formalism, a set of quantum transport equations, that can be used to study interacting systems including the local quantum coherence. We compute quantum currents created by a temporal change in a particle's mass, comparing the exact Wightman function approach, the cQPA and the semiclassical methods. We find that the semiclassical approximation, which is fully encompassed by the cQPA, works surprisingly well even for very sharp temporal features. This is encouraging for the application of semiclassical methods in electroweak baryogenesis with strong phase transitions. | high energy physics theory |
We study the eigenstate properties of a nonintegrable spin chain that was recently realized experimentally in a Rydberg-atom quantum simulator. In the experiment, long-lived coherent many-body oscillations were observed only when the system was initialized in a particular product state. This pronounced coherence has been attributed to the presence of special "scarred" eigenstates with nearly equally-spaced energies and putative nonergodic properties despite their finite energy density. In this paper we uncover a surprising connection between these scarred eigenstates and low-lying quasiparticle excitations of the spin chain. In particular, we show that these eigenstates can be accurately captured by a set of variational states containing a macroscopic number of magnons with momentum $\pi$. This leads to an interpretation of the scarred eigenstates as finite-energy-density condensates of weakly interacting $\pi$-magnons. One natural consequence of this interpretation is that the scarred eigenstates possess long-range order in both space and time, providing a rare example of the spontaneous breaking of continuous time-translation symmetry. We verify numerically the presence of this space-time crystalline order and explain how it is consistent with established no-go theorems precluding its existence in ground states and at thermal equilibrium. | condensed matter |
We present a formal verification of the functional correctness of the Muen Separation Kernel. Muen is representative of the class of modern separation kernels that leverage hardware virtualization support, and are generative in nature in that they generate a specialized kernel for each system configuration. These features pose substantial challenges to existing verification techniques. We propose a verification framework called conditional parametric refinement which allows us to formally reason about generative systems. We use this framework to carry out a conditional refinement-based proof of correctness of the Muen kernel generator. Our analysis of several system configurations shows that our technique is effective in producing mechanized proofs of correctness, and also in identifying issues that may compromise the separation property. | computer science |
The purpose of this work is to compare the kinematics of small-scale current vortices located near the core-mantle boundary with high-speed anomalies of seismic wave velocity in the lowest mantle associated with the subduction zones. The small-scale vortex paths were obtained earlier by the authors in the framework of the macro model of the main geomagnetic field sources. Two sources were chosen whose kinematics are characterized by the complete absence of the western drift and whose paths have a very complex shape. Both sources are located in the vicinity of subduction zones characterized by extensive coherent regions with increased speed of seismic waves in the lowest mantle. One of them is geographically located near the western coast of Canada and the second one is located in the vicinity of Sumatra. For this study we used global models of the heterogeneities of seismic wave velocity. We find that the complex trajectories of the vortices are fully consistent with the high-speed anomalies of seismic wave velocity in the lowest mantle. It can be assumed that, mixing with the matter of the lowest mantle, the substance of the liquid core rises along the lowest-mantle channel and promotes its further growth. In addition, the volume of oceanic crust subducted millions of years ago turned out to be sufficient to penetrate into the liquid core, forming complex-shaped restrictions on the free circulation of the core liquid. | physics |
Neural Networks (NNs) can provide major empirical performance improvements for robotic systems, but they also introduce challenges in formally analyzing those systems' safety properties. In particular, this work focuses on estimating the forward reachable set of closed-loop systems with NN controllers. Recent work provides bounds on these reachable sets, yet the computationally efficient approaches provide overly conservative bounds (thus cannot be used to verify useful properties), whereas tighter methods are too intensive for online computation. This work bridges the gap by formulating a convex optimization problem for reachability analysis for closed-loop systems with NN controllers. While the solutions are less tight than prior semidefinite program-based methods, they are substantially faster to compute, and some of the available computation time can be used to refine the bounds through input set partitioning, which more than overcomes the tightness gap. The proposed framework further considers systems with measurement and process noise, thus being applicable to realistic systems with uncertainty. Finally, numerical comparisons show $10\times$ reduction in conservatism in $\frac{1}{2}$ of the computation time compared to the state-of-the-art, and the ability to handle various sources of uncertainty is highlighted on a quadrotor model. | electrical engineering and systems science |
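A much cruder ingredient than the paper's convex program, but in the same spirit of trading tightness for speed: interval bound propagation pushes an axis-aligned input set through a small feed-forward ReLU network layer by layer, yielding a fast, conservative over-approximation of the reachable output set. The weights and input set below are random placeholders, not the paper's method.

```python
import numpy as np

def interval_forward(layers, lo, up):
    """Propagate elementwise bounds [lo, up] through a ReLU network."""
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        # Worst-case bounds of an affine map over a box.
        lo, up = Wp @ lo + Wn @ up + b, Wp @ up + Wn @ lo + b
        if i < len(layers) - 1:          # ReLU on hidden layers only
            lo, up = np.maximum(lo, 0), np.maximum(up, 0)
    return lo, up

rng = np.random.default_rng(1)
layers = [(rng.normal(size=(8, 2)), np.zeros(8)),   # toy "NN controller"
          (rng.normal(size=(1, 8)), np.zeros(1))]
lo, up = interval_forward(layers,
                          np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
print(lo, up)   # guaranteed (if loose) bounds on the controller output
```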
In the Color Glass Condensate, the inclusive spectrum of produced quarks in a heavy ion collision is obtained as the Fourier transform of a $2$-fermion correlation function. Due to its non-locality, the two points of this function must be linked by a Wilson line in order to have a gauge invariant result, but when the quark spectrum is evaluated in a background that has a non-zero chromo-magnetic field, this procedure suffers from an ambiguity related to the choice of the contour defining the Wilson line. In this paper, we use an analytically tractable toy model of the background field in order to study this contour dependence. We show that for a straight contour, unphysical contributions to the spectrum in $p_\perp^{-2}$ and $p_\perp^{-3}$ cancel, leading to a spectrum with a tail in $p_\perp^{-4}$. If the contour defining the Wilson line deviates from a straight line, the path dependence is at most of order $p_\perp^{-5}$ if its curvature is bounded, and of order $p_\perp^{-4}$ otherwise. When the contour is forced to go through a fixed point, the path dependence is even larger, of order $p_\perp^{-2}$. | high energy physics phenomenology |
The International Virtual Observatory Alliance (IVOA) held its bi-annual Interoperability Meetings in May 2019, and in October 2019 following the ADASS XXIX conference. We provide a brief report on the status of the IVOA and the activities of the Interoperability Meetings. | astrophysics |
Travel decisions are fundamental to understanding human mobility, urban economy, and sustainability, but measuring them is challenging and controversial. Previous studies of taxis are limited to taxi stands or hail markets at aggregate spatial units. Here we estimate the dynamic demand and supply of taxis in New York City (NYC) at the street segment level, using in-vehicle Global Positioning System (GPS) data which preserve individual privacy. To this end, we model taxi demand and supply as non-stationary Poisson random fields on the road network, with pickups resulting from income-maximizing drivers searching for impatient passengers. With 868 million trip records of all 13,237 licensed taxis in NYC in 2009 - 2013, we show that while taxi demand was almost the same in 2011 and 2012, it declined by about 2% in spring 2013, possibly caused by transportation network companies (TNCs) and a fare raise. Contrary to common impression, street-hail taxis outperform TNCs such as Uber in high-demand locations, suggesting a taxi/TNC regulation change to reduce congestion and pollution. We show that our demand estimates are stable at different supply levels and across years, a property not observed in existing matching functions. We also validate that taxi pickups can be modeled as Poisson processes. Our method is thus simple, feasible, and reliable in estimating street-hail taxi activities at a high spatial resolution; it helps quantify the ongoing discussion on congestion charges for taxis and TNCs. | physics |
We use a representability theorem of G. L. Watson to examine sums of squares in quaternion rings with integer coefficients. This allows us to determine a large family of such rings in which every element expressible as a sum of squares can be written as the sum of three squares. | mathematics |
We put forward novel extensions of Starobinsky inflation, involving a class of 'geometric' higher-curvature corrections that yield second-order Friedmann-Lema\^itre equations and second-order-in-time linearized equations around cosmological backgrounds. We determine the range of models within this class that admit an extended phase of slow roll inflation as an attractor. By embedding these theories in anti-de Sitter space, we derive holographic 'unitarity' bounds on the two dominant higher-order curvature corrections. Finally we compute the leading corrections to the spectral properties of scalar and tensor primordial perturbations, including the modified consistency relation $r=-8n_{T}$. Remarkably, the range of models singled out by holography nearly coincides with the current observational bounds on the scalar spectral tilt. Our results indicate that future observations have the potential to discriminate between different higher-curvature corrections considered here. | high energy physics theory |
By utilizing the ultraspinning limit we generate a new class of extremal vanishing horizon (EVH) black holes in odd dimensions ($d\geq5$). Starting from the general multi-spinning Kerr-AdS metrics, we show that the EVH limit commutes with the ultraspinning limit, and the resulting solutions possess a non-compact but finite-area manifold for all $(t,r\neq r_+)=const.$ slices. We also demonstrate that the near horizon geometries of the obtained ultraspinning EVH solutions contain an AdS$_3$ throat, which becomes a BTZ black hole in the near-EVH case. The commutativity of the ultraspinning and near horizon limits for EVH solutions is confirmed as well. Furthermore, we discuss that only the five-dimensional case near the EVH point can be viewed as a super-entropic black hole. We also show that the thermodynamics of the obtained solutions agree with the BTZ black hole. Moreover, we investigate the EVH/CFT proposal, demonstrating that the entropy of the $2$d dual CFT and the Bekenstein-Hawking entropy are equivalent. | high energy physics theory |
In this paper we obtain a new lower bound on the Erd\H{o}s distinct distances problem in the plane over prime fields. More precisely, we show that for any set $A\subset \mathbb{F}_p^2$ with $|A|\le p^{7/6}$, the number of distinct distances determined by pairs of points in $A$ satisfies $$ |\Delta(A)| \gg |A|^{\frac{1}{2}+\frac{149}{4214}}.$$ Our result gives a new lower bound on $|\Delta{(A)}|$ in the range $|A|\le p^{1+\frac{149}{4065}}$. The main tools we employ are the energy of a set on a paraboloid due to Rudnev and Shkredov, a point-line incidence bound given by Stevens and de Zeeuw, and a lower bound on the number of distinct distances between a line and a set in $\mathbb{F}_p^2$. The latter is the new feature that allows us to improve the previous bound due to Stevens and de Zeeuw. | mathematics |
Although common in nature, the self-assembly of small molecules at solid-liquid interfaces is difficult to control in artificial systems. The high mobility of dissolved small molecules limits their residence at the interface, typically restricting self-assembly to systems under confinement or with mobile tethers between the molecules and the surface. Small hydrogen-bonding molecules can overcome these issues by exploiting group-effect stabilization to achieve non-tethered self-assembly at hydrophobic interfaces. Significantly, the weak molecular interactions with the solid make it possible to influence the interfacial hydrogen bond network, potentially creating a wide variety of supramolecular structures. Here we investigate the nanoscale details of water and alcohol mixtures self-assembling at the interface with graphite through the group effect. We explore the interplay between inter-molecular and surface interactions by adding small amounts of foreign molecules able to interfere with the hydrogen bond network and by systematically varying the length of the alcohol hydrocarbon chain. The resulting supramolecular structures forming at room temperature are then examined using atomic force microscopy, with insights from computer simulations. We show that the group-based self-assembly approach investigated here is general and can be reproduced on other substrates such as molybdenum disulphide and graphene oxide, potentially making it relevant for a wide variety of systems. | physics |
Structured electron beams carrying orbital angular momentum are currently of considerable interest, both from a fundamental point of view and for application in electron microscopy and spectroscopy. Until recently, most studies have focused on the azimuthal structure of electron vortex beams with well-defined orbital angular momentum. To unambiguously define real electron-beam states and realise them in the laboratory, the radial structure must also be specified. Here we use a specific set of orthonormal modes of electron (vortex) beams to describe both the radial and azimuthal structures of arbitrary electron wavefronts. The specific beam states are based on truncated Bessel beams localised within the lens aperture plane of an electron microscope. We show that their Fourier transform set of beams can be realised at the focal planes of the probe-forming lens using a binary computer generated electron hologram. Using astigmatic transformation optics, we demonstrate that the azimuthal indices of the diffracted beams scale with the order of the diffraction through phase amplification. However, their radial indices remain the same as those of the encoding beams for all the odd diffraction orders or are reduced to the zeroth order for the even-order diffracted beams. This simple even-odd rule can also be explained in terms of the phase amplification of the radial profiles. We envisage that the orthonormal cylindrical basis set of states could lead to new possibilities in phase contrast electron microscopy and spectroscopy using structured electron beams. | quantum physics |
Methods for the generation of synthetic populations create the entities required for micro models or multi-agent models, such that they match field observations or hypotheses about the population under study. We tackle here the specific question of creating synthetic populations made of two types of entities linked together by 0, 1 or more links. Potential applications include the creation of dwellings inhabited by households, households owning cars, dwellings equipped with appliances, workers employed by firms, etc. We propose a theoretical framework to tackle this problem. We then highlight how this problem is over-constrained and requires the relaxation of some constraints to be solved. We propose a method to solve the problem analytically which lets the user select which input data should be preserved and adapts the others in order to make the data consistent. We illustrate this method by synthesizing a population made of dwellings containing 0, 1 or 2 households in the city of Lille (France). In this population, the distributions of the dwellings' and households' characteristics are preserved, and both are linked according to the observed pairing statistics. | physics |
With the uptake of algorithmic personalization in the news domain, news organizations increasingly trust automated systems with responsibilities previously considered editorial, e.g., prioritizing news for readers. In this paper we study an automated news recommender system in the context of a news organization's editorial values. We conduct and present two online studies with a news recommender system, which span one and a half months and involve over 1,200 users. In our first study we explore how our news recommender steers reading behavior in the context of editorial values such as serendipity, dynamism, diversity, and coverage. Next, we present an intervention study where we extend our news recommender to steer our readers to more dynamic reading behavior. We find that (i) our recommender system yields more diverse reading behavior and a higher coverage of articles compared to non-personalized editorial rankings, and (ii) we can successfully incorporate dynamism in our recommender system as a re-ranking method, effectively steering our readers to more dynamic articles without hurting our recommender system's accuracy. | computer science |
We consider the problem of estimating a ranking on a set of items from noisy pairwise comparisons given item features. We address the fact that pairwise comparison data often reflects irrational choice, e.g. intransitivity. Our key observation is that two items compared in isolation from other items may be compared based on only a salient subset of features. Formalizing this framework, we propose the salient feature preference model and prove a finite sample complexity result for learning the parameters of our model and the underlying ranking with maximum likelihood estimation. We also provide empirical results that support our theoretical bounds and illustrate how our model explains systematic intransitivity. Finally we demonstrate strong performance of maximum likelihood estimation of our model on both synthetic data and two real data sets: the UT Zappos50K data set and comparison data about the compactness of legislative districts in the US. | statistics |
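The estimation backbone here is a feature-based pairwise-comparison (Bradley-Terry-style) likelihood fitted by maximum likelihood; below is a minimal sketch on synthetic data. The salient-feature masking that generates systematic intransitivity is paper-specific and not reproduced; all sizes and the data generator are assumptions.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
d, n_items, n_pairs = 5, 20, 2000
X = rng.normal(size=(n_items, d))        # item features
w_true = rng.normal(size=d)

i, j = rng.integers(0, n_items, size=(2, n_pairs))
diff = X[i] - X[j]                       # feature differences per pair
y = rng.random(n_pairs) < sigmoid(diff @ w_true)   # does i beat j?

w = np.zeros(d)                          # maximum likelihood by gradient ascent
for _ in range(2000):
    w += 0.5 * diff.T @ (y - sigmoid(diff @ w)) / n_pairs
print(np.corrcoef(w, w_true)[0, 1])      # close to 1
```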
A megascopic revalidation is offered, providing responses to and resolutions of current inconsistencies and existing contradictions in present-day quantum theory. As the core of this study we present an independent proof of the Goldstone theorem for a quantum field formulation of molecules and solids. Along with phonons, two types of new quasiparticles appear: rotons and translons. In full analogy with Lorentz covariance, which combines space and time coordinates, a new covariance is necessary, binding together the internal and external degrees of freedom, without explicitly separating the centre of mass, which normally applies in both classical and quantum formulations. The generally accepted view regarding the lack of a simple correspondence between the Goldstone modes and broken symmetries has significant consequences: an ambiguous BCS theory as well as a subsequent Higgs mechanism. The application of the archetype of classical spontaneous symmetry breaking, i.e. the Mexican hat, as compared to standard quantum relations, i.e. the Jahn-Teller effect, superconductivity or the Higgs mechanism, becomes a disparity. In short, symmetry-broken states have a microscopic causal origin, but transitions between them have a teleological component. The different treatments of the problem of the centre of gravity in quantum mechanics and in field theories imply a second type of Bohr complementarity on the many-body level, opening the door for megascopic representations of all basic microscopic quantum axioms, with further readings for teleonomic megascopic quantum phenomena which have no microscopic rationale: isomeric transitions, the Jahn-Teller effect, chemical reactions, the Einstein-de Haas effect, superconductivity-superfluidity, and brittle fracture. | physics |
We present a novel approach to modelling and learning vector fields from physical systems using neural networks that explicitly satisfy known linear operator constraints. To achieve this, the target function is modelled as a linear transformation of an underlying potential field, which is in turn modelled by a neural network. This transformation is chosen such that any prediction of the target function is guaranteed to satisfy the constraints. The approach is demonstrated on both simulated and real data examples. | statistics |
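The central construction can be sketched for one assumed constraint: in two dimensions, any field of the form $F = (\partial\psi/\partial y, -\partial\psi/\partial x)$ is divergence-free by construction, so modelling the scalar potential $\psi$ with a network guarantees that every prediction satisfies the constraint. Here $\psi$ is a fixed random MLP (no training) and the derivatives are taken numerically; the paper treats general linear operator constraints.

```python
import numpy as np

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
w2 = rng.normal(size=16)

def psi(p):                     # scalar potential: a tiny random MLP
    return w2 @ np.tanh(W1 @ p + b1)

def field(p, h=1e-5):           # F = (dpsi/dy, -dpsi/dx) => div F = 0
    dx = (psi(p + [h, 0]) - psi(p - [h, 0])) / (2 * h)
    dy = (psi(p + [0, h]) - psi(p - [0, h])) / (2 * h)
    return np.array([dy, -dx])

def divergence(p, h=1e-4):      # numerical check of the constraint
    dFx = (field(p + [h, 0])[0] - field(p - [h, 0])[0]) / (2 * h)
    dFy = (field(p + [0, h])[1] - field(p - [0, h])[1]) / (2 * h)
    return dFx + dFy

print(divergence(np.array([0.3, -0.7])))   # ~0, up to finite-difference error
```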
We develop general tools to characterise and efficiently compute relevant observables of multimode $N$-photon states generated in non-linear decays in one-dimensional waveguides. We then consider optical interferometry in a Mach-Zehnder interferometer where a $d$-mode photonic state enters in each arm of the interferometer. We derive a simple expression for the Quantum Fisher Information in terms of the average photon number in each mode, and show that it can be saturated by number-resolved photon measurements that do not distinguish between the different $d$ modes. | quantum physics |
We consider the competition between decoherence processes and an iterated quantum purification protocol. We show that this competition can be modeled by a nonlinear map on the quaternion space. This nonlinear map has complicated behaviours, inducing a fractal border between the area of quantum states dominated by the effects of the purification and the area of quantum states dominated by the effects of the decoherence. The states on the border are unstable. The embedding of this border in a 3D space is like a quaternionic Julia set or a Mandelbulb with a fractal inner structure. | quantum physics |
Gaussian process classification (GPC) provides a flexible and powerful statistical framework describing joint distributions over function space. Conventional GPCs however suffer from (i) poor scalability for big data due to the full kernel matrix, and (ii) intractable inference due to the non-Gaussian likelihoods. Hence, various scalable GPCs have been proposed through (i) the sparse approximation built upon a small inducing set to reduce the time complexity; and (ii) the approximate inference to derive analytical evidence lower bound (ELBO). However, these scalable GPCs equipped with analytical ELBO are limited to specific likelihoods or additional assumptions. In this work, we present a unifying framework which accommodates scalable GPCs using various likelihoods. Analogous to GP regression (GPR), we introduce additive noises to augment the probability space for (i) the GPCs with step, (multinomial) probit and logit likelihoods via the internal variables; and particularly, (ii) the GPC using softmax likelihood via the noise variables themselves. This leads to unified scalable GPCs with analytical ELBO by using variational inference. Empirically, our GPCs showcase better results than state-of-the-art scalable GPCs for extensive binary/multi-class classification tasks with up to two million data points. | statistics |
Suppose X and Y are binary exposure and outcome variables, and we have full knowledge of the distribution of Y, given application of X. From this we know the average causal effect of X on Y. We are now interested in assessing, for a case that was exposed and exhibited a positive outcome, whether it was the exposure that caused the outcome. The relevant "probability of causation", PC, typically is not identified by the distribution of Y given X, but bounds can be placed on it, and these bounds can be improved if we have further information about the causal process. Here we consider cases where we know the probabilistic structure for a sequence of complete mediators between X and Y. We derive a general formula for calculating bounds on PC for any pattern of data on the mediators (including the case with no data). We show that the largest and smallest upper and lower bounds that can result from any complete mediation process can be obtained in processes with at most two steps. We also consider homogeneous processes with many mediators. PC can sometimes be identified as 0 with negative data, but it cannot be identified at 1 even with positive data on an infinite set of mediators. The results have implications for learning about causation from knowledge of general processes and of data on cases. | mathematics |
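For reference, a sketch of the classical bounds on the probability of causation obtainable from interventional data alone (the Tian-Pearl bounds, to our reading the "no further information" baseline that the mediator data then tightens); the input probabilities below are made up for illustration.

```python
def pc_bounds(p_y_do_x, p_y_do_not_x):
    """Bounds on the probability of causation for an exposed case with a
    positive outcome, given only the interventional distribution of Y."""
    lower = max(0.0, (p_y_do_x - p_y_do_not_x) / p_y_do_x)
    upper = min(1.0, (1.0 - p_y_do_not_x) / p_y_do_x)
    return lower, upper

# Illustrative numbers: exposure doubles the outcome probability.
print(pc_bounds(p_y_do_x=0.6, p_y_do_not_x=0.3))   # (0.5, 1.0)
```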
Unsupervised lesion detection is a challenging problem that requires accurately estimating normative distributions of healthy anatomy and detecting lesions as outliers without training examples. Recently, this problem has received increased attention from the research community following the advances in unsupervised learning with deep learning. Such advances allow the estimation of high-dimensional distributions, such as normative distributions, with higher accuracy than previous methods. The main approach of the recently proposed methods is to learn a latent-variable model parameterized with networks to approximate the normative distribution using example images showing healthy anatomy, perform prior-projection, i.e. reconstruct the image with lesions using the latent-variable model, and determine lesions based on the differences between the reconstructed and original images. While promising, the prior-projection step often leads to a large number of false positives. In this work, we approach unsupervised lesion detection as an image restoration problem and propose a probabilistic model that uses a network-based prior as the normative distribution and detects lesions pixel-wise using MAP estimation. The probabilistic model penalizes large deviations between restored and original images, reducing false positives in pixel-wise detections. Experiments with gliomas and stroke lesions in brain MRI using publicly available datasets show that the proposed approach outperforms the state-of-the-art unsupervised methods by a substantial margin, +0.13 (AUC), for both glioma and stroke detection. Extensive model analysis confirms the effectiveness of MAP-based image restoration. | electrical engineering and systems science |
Cybercriminals exploit cryptocurrencies to carry out illicit activities. In this paper, we focus on Ponzi schemes that operate on Bitcoin and perform an in-depth analysis of MMM, one of the oldest and most popular Ponzi schemes. Based on 423K transactions involving 16K addresses, we show that: (1) Starting Sep 2014, the scheme goes through three phases over three years. At its peak, MMM circulated more than 150M dollars a day, after which it collapsed by the end of Jun 2016. (2) There is a high income inequality between MMM members, with the daily Gini index reaching more than 0.9. The scheme also exhibits a zero-sum investment model, in which one member's loss is another member's gain. The percentage of victims who never made any profit has grown from 0% to 41% in five months, during which the top-earning scammer has made 765K dollars in profit. (3) The scheme has a global reach with 80 different member countries but a highly-asymmetrical flow of money between them. While India and Indonesia have the largest pairwise flow in MMM, members in Indonesia have received 12x more money than they have sent to their counterparts in India. | computer science |
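A quick sketch of the daily Gini computation referenced above, using the standard mean-absolute-difference formula; the member payouts below are made up for illustration.

```python
import numpy as np

def gini(x):
    """Gini index via the mean absolute difference formula."""
    x = np.asarray(x, dtype=float)
    mad = np.abs(x[:, None] - x[None, :]).mean()
    return mad / (2.0 * x.mean())

# Illustrative daily payouts: a few large earners, many small ones.
payouts = np.array([1, 1, 2, 2, 3, 5, 8, 900, 1500, 5000])
print(round(gini(payouts), 2))   # ~0.81: strong inequality
```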
Recently mean field theory has been successfully used to analyze properties of wide, random neural networks. It gave rise to a prescriptive theory for initializing feed-forward neural networks with orthogonal weights, which ensures that both the forward propagated activations and the backpropagated gradients are near $\ell_2$ isometries and as a consequence training is orders of magnitude faster. Despite strong empirical performance, the mechanisms by which critical initializations confer an advantage in the optimization of deep neural networks are poorly understood. Here we show a novel connection between the maximum curvature of the optimization landscape (gradient smoothness) as measured by the Fisher information matrix (FIM) and the spectral radius of the input-output Jacobian, which partially explains why more isometric networks can train much faster. Furthermore, given that orthogonal weights are necessary to ensure that gradient norms are approximately preserved at initialization, we experimentally investigate the benefits of maintaining orthogonality throughout training, from which we conclude that manifold optimization of weights performs well regardless of the smoothness of the gradients. Moreover, motivated by experimental results we show that a low condition number of the FIM is not predictive of faster learning. | statistics |
A third body in an eclipsing binary system causes regular periodic changes in the observed (O) minus the computed (C) eclipse epochs. Fourth bodies are rarely detected from the O-C data. We apply the new Discrete Chi-square method (DCM) to the O-C data of the eclipsing binary XZ Andromedae. These data contain the periodic signatures of at least ten wide orbit stars (WOSs). Their orbital periods are between 1.6 and 91.7 years. Since no changes have been observed in the eclipses of XZ And during the past 127 years, the orbits of all these WOSs are most probably co-planar. We give detailed instructions on how professional and amateur astronomers can easily repeat all stages of our DCM analysis with an ordinary PC, as well as apply this method to the O-C data of other eclipsing binaries. | astrophysics |
We investigate the spin-orbit torque exerted on the magnetic moments of the transition-metal impurities Cr, Mn, Fe and Co, embedded in the surface of the topological insulator Bi$_{2}$Te$ _{3} $, in response to an electric field and a consequent electrical current flow in the surface. The multiple scattering problem of electrons off impurity atoms is solved by first-principles calculations within the full-potential relativistic Korringa-Kohn-Rostoker (KKR) Green function method, while the spin-orbit torque calculations are carried out by combining the KKR method with the semiclassical Boltzmann transport equation. We analyze the correlation of the spin-orbit torque to the spin accumulation and spin flux in the defects. We compare the torque on different magnetic impurities and unveil the effect of resonant scattering. In addition, we calculate the resistivity and the Joule heat as a function of the torque in these systems. We predict that the Mn/Bi$_{2}$Te$_{3}$ is optimal among the studied systems. | condensed matter |
A brain-controlled unmanned aerial vehicle (UAV) is a UAV that can analyze human brain electrical signals through a BCI to obtain flight commands. Research on brain-controlled UAVs can promote the integration of brain and computer and has broad application prospects. At present, BCIs still have some problems, such as limited recognition accuracy, limited recognition time and a small number of recognizable commands when control commands are acquired by analyzing EEG signals. Therefore, the control performance of a quadrotor controlled only by the brain is not ideal. Based on the concept of shared control, this paper designs an assistant controller using fuzzy PID control, and realizes cooperative control between automatic control and brain control. By evaluating the current flight status and setting the switching rate, the switching between automatic control and brain control can be decided so as to improve the system control performance. Finally, a constant-altitude rectangular trajectory tracking experiment is designed for a small quadrotor to verify the algorithm. | computer science |
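An illustrative sketch of the shared-control idea on a toy double-integrator plant: a PID autopilot tracks the reference while a simple error-based rule blends it with a coarse, noisy "brain" command. The fuzzy gain scheduling and the actual switching-rate design of the paper are not reproduced; all gains, thresholds and the plant are assumptions.

```python
import numpy as np

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0
    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

dt = 0.02
pid = PID(kp=1.2, ki=0.1, kd=0.8, dt=dt)
x, v, ref = 0.0, 0.0, 1.0
rng = np.random.default_rng(4)
for _ in range(500):                    # 10 s of a double-integrator plant
    err = ref - x
    u_auto = pid.step(err)
    u_brain = np.sign(err) + rng.normal(0, 0.5)   # coarse BCI-like command
    alpha = min(1.0, abs(err) / 0.5)    # larger error -> trust the autopilot
    u = alpha * u_auto + (1 - alpha) * u_brain
    v += u * dt
    x += v * dt
print(round(x, 2))                      # hovers near the reference (noisy)
```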
Previous works on deformed graphene predict the existence of valley-polarized states; however, optimal conditions for their detection remain challenging to establish. We show that in the quantum Hall regime, edge-like states in strained regions can be isolated in energy within Landau gaps. We identify precise conditions for new conducting edge-like states to be valley polarized, with the flexibility of positioning them at chosen locations in the system. A map of the local density of states as a function of energy and position reveals a unique braid pattern that serves as a fingerprint to identify valley polarization. | condensed matter |
We propose and investigate a class of new algorithms for sequential decision making that interact with \textit{a batch of users} simultaneously instead of \textit{a user} at each decision epoch. This type of batch model is motivated by interactive marketing and clinical trials, where a group of people are treated simultaneously and the outcomes of the whole group are collected before the next stage of decision. In such a scenario, our goal is to allocate a batch of treatments to maximize treatment efficacy based on observed high-dimensional user covariates. We deliver a solution, named the \textit{Teamwork LASSO Bandit algorithm}, that resolves a batch version of the explore-exploit dilemma via switching between a teamwork stage and a selfish stage during the whole decision process. This is made possible based on statistical properties of the LASSO estimate of treatment efficacy that adapts to a sequence of batch observations. In general, a rate of optimal allocation condition is proposed to delineate the exploration-exploitation trade-off in the data collection scheme, which is sufficient for LASSO to identify the optimal treatment for observed user covariates. An upper bound on the expected cumulative regret of the proposed algorithm is provided. | statistics |
This paper frames causal structure estimation as a machine learning task. The idea is to treat indicators of causal relationships between variables as `labels' and to exploit available data on the variables of interest to provide features for the labelling task. Background scientific knowledge or any available interventional data provide labels on some causal relationships and the remainder are treated as unlabelled. To illustrate the key ideas, we develop a distance-based approach (based on bivariate histograms) within a manifold regularization framework. We present empirical results on three different biological data sets (including examples where causal effects can be verified by experimental intervention), that together demonstrate the efficacy and general nature of the approach as well as its simplicity from a user's point of view. | statistics |
Federated learning enables training collaborative machine learning models at scale with many participants whilst preserving the privacy of their datasets. Standard federated learning techniques are vulnerable to Byzantine failures, biased local datasets, and poisoning attacks. In this paper we introduce Adaptive Federated Averaging, a novel algorithm for robust federated learning that is designed to detect failures, attacks, and bad updates provided by participants in a collaborative model. We propose a Hidden Markov Model to model and learn the quality of model updates provided by each participant during training. In contrast to existing robust federated learning schemes, we propose a robust aggregation rule that detects and discards bad or malicious local model updates at each training iteration. This includes a mechanism that blocks unwanted participants, which also increases the computational and communication efficiency. Our experimental evaluation on 4 real datasets shows that our algorithm is significantly more robust to faulty, noisy and malicious participants, whilst being computationally more efficient than other state-of-the-art robust federated learning methods such as Multi-KRUM and coordinate-wise median. | statistics |
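A sketch of the coordinate-wise median baseline named above, contrasted with plain federated averaging when one participant sends a poisoned update; the paper's HMM-based quality model goes further by learning per-participant reliability and blocking bad actors over time. The update vectors are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
honest = rng.normal(loc=1.0, scale=0.1, size=(9, 4))   # 9 honest updates
poisoned = np.full((1, 4), -50.0)                      # 1 malicious update
updates = np.vstack([honest, poisoned])

fedavg = updates.mean(axis=0)            # dragged far from the honest mean
cw_median = np.median(updates, axis=0)   # robust to a minority of outliers
print(fedavg.round(2))                   # ~ -4.1 in every coordinate
print(cw_median.round(2))                # ~  1.0 in every coordinate
```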
We predict that Bessel-like beams of arbitrary integer order can exhibit a tunable self-similar behavior, taking an invariant form under suitable stretching transformations. Specifically, by engineering the amplitude and the phase on the input plane in real space, we show that it is possible to generate higher-order vortex Bessel-like beams with fully controllable radius of the hollow core and maximum intensity during propagation. In addition, using a similar approach, we show that it is also possible to generate zeroth-order Bessel-like beams with controllable beam width and maximum intensity. Our numerical results are in excellent agreement with our theoretical predictions. | physics |
Actor-critic methods, a type of model-free reinforcement learning, have been successfully applied to challenging tasks in continuous control, often achieving state-of-the-art performance. However, wide-scale adoption of these methods in real-world domains is made difficult by their poor sample efficiency. We address this problem both theoretically and empirically. On the theoretical side, we identify two phenomena preventing efficient exploration in existing state-of-the-art algorithms such as Soft Actor Critic. First, combining a greedy actor update with a pessimistic estimate of the critic leads to the avoidance of actions that the agent does not know about, a phenomenon we call pessimistic underexploration. Second, current algorithms are directionally uninformed, sampling actions with equal probability in opposite directions from the current mean. This is wasteful, since we typically need actions taken along certain directions much more than others. To address both of these phenomena, we introduce a new algorithm, Optimistic Actor Critic (OAC), which approximates a lower and upper confidence bound on the state-action value function. This allows us to apply the principle of optimism in the face of uncertainty to perform directed exploration using the upper bound while still using the lower bound to avoid overestimation. We evaluate OAC in several challenging continuous control tasks, achieving state-of-the-art sample efficiency. | statistics |
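The core construction is easy to state: from a two-critic ensemble, form approximate lower and upper confidence bounds on the state-action value, explore with the upper bound and compute learning targets with the lower one. A sketch of the bound construction follows; the $\beta$ values are assumptions, and OAC's full directed-exploration policy is more involved than this argmax.

```python
import numpy as np

def q_bounds(q1, q2, beta_lb=1.0, beta_ub=4.0):
    """Lower/upper confidence bounds from two critic estimates."""
    mu = (q1 + q2) / 2.0
    sigma = np.abs(q1 - q2) / 2.0    # crude epistemic-spread proxy
    return mu - beta_lb * sigma, mu + beta_ub * sigma

q1 = np.array([1.0, 2.0, 0.5])       # critic 1, three candidate actions
q2 = np.array([1.4, 1.0, 0.6])       # critic 2
lb, ub = q_bounds(q1, q2)
print(ub.argmax())   # optimistic action choice for exploration
print(lb)            # pessimistic values for the learning target
```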
Given a small abelian category $\mathcal{A}$, the Freyd-Mitchell embedding theorem states the existence of a ring $R$ and an exact full embedding $\mathcal{A} \rightarrow R$-Mod. This theorem is useful as it allows one to prove general results about abelian categories within the context of $R$-modules. The goal of this report is to flesh out the proof of the embedding theorem. We shall follow closely the material and approach presented in Freyd (1964). This means we will encounter such concepts as projective generators, injective cogenerators, the Yoneda embedding, injective envelopes, Grothendieck categories, subcategories of mono objects and subcategories of absolutely pure objects. | mathematics |
Motivated by recent studies of superconformal mechanics extended by spin degrees of freedom, we construct minimally superintegrable models of spinning particles on the 2-sphere, whose spin degrees of freedom are represented by a 3-vector obeying the structure relations of a 3d real Lie algebra. Generalisations involving an external field of the Dirac monopole, or the motion on the group manifold of SU(2), or a scalar potential giving rise to two quadratic constants of the motion are discussed. A procedure for building similar extensions, which rely upon d=4,5,6 real Lie algebras, is elucidated. | high energy physics theory |
Histogram-based template fits are the main technique used for estimating parameters of high energy physics Monte Carlo generators. Parametrized neural network reweighting can be used to extend this fitting procedure to many dimensions and does not require binning. If the fit is to be performed using reconstructed data, then expensive detector simulations must be used for training the neural networks. We introduce a new two-level fitting approach that only requires one dataset with detector simulation and then a set of additional generation-level datasets without detector effects included. This Simulation-level fit based on Reweighting Generator-level events with Neural networks (SRGN) is demonstrated using simulated datasets for a variety of examples including a simple Gaussian random variable, parton shower tuning, and the top quark mass extraction. | high energy physics phenomenology |
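The reweighting ingredient can be sketched with the standard classifier trick: a classifier trained to distinguish samples generated at two parameter values yields per-event weights $p/(1-p)$ that morph one sample into the other. This is a generic one-dimensional illustration, not the paper's parametrized network or fitting procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
x0 = rng.normal(0.0, 1.0, 20000)[:, None]   # "generator at parameter 0"
x1 = rng.normal(0.5, 1.0, 20000)[:, None]   # "generator at parameter 1"

X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])
clf = LogisticRegression().fit(X, y)

p = clf.predict_proba(x0)[:, 1]
w = p / (1 - p)                              # likelihood-ratio weights
print(x0.mean(), np.average(x0[:, 0], weights=w))   # ~0.0 -> ~0.5
```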
We quantify the impact of unpolarized lepton-proton and lepton-nucleus inclusive deep-inelastic scattering (DIS) cross section measurements from the future Electron-Ion Collider (EIC) on the proton and nuclear parton distribution functions (PDFs). To this purpose we include neutral- and charged-current DIS pseudodata in a self-consistent set of proton and nuclear global PDF determinations based on the NNPDF methodology. We demonstrate that the EIC measurements will reduce the uncertainty of the light quark PDFs of the proton at large values of the momentum fraction $x$, and, more significantly, of the quark and gluon PDFs of heavy nuclei, especially at small and large $x$. We illustrate the implications of the improved precision of nuclear PDFs for the interaction of ultra-high energy cosmic neutrinos with matter. | high energy physics phenomenology |
We investigate beam training and allocation for multiuser millimeter wave massive MIMO systems. An orthogonal-pilot-based beam training scheme is first developed to reduce the number of training rounds, where all users can simultaneously perform beam training with the base station (BS). As the number of users increases, the same beam from the BS may point to different users, leading to beam conflict and multiuser interference. Therefore, a quality-of-service (QoS) constrained (QC) beam allocation scheme is proposed to maximize the equivalent channel gain of the QoS-satisfied users, under the premise that the number of QoS-satisfied users without beam conflict is maximized. To reduce the overhead of beam training, two partial beam training schemes, an interlaced scanning (IS) based scheme and a selection probability (SP) based scheme, are proposed. The overhead of beam training for the IS-based scheme can be reduced by nearly half, while the overhead for the SP-based scheme is flexible. Simulation results show that the QC-based beam allocation scheme can effectively mitigate the interference caused by beam conflict and significantly improve the spectral efficiency, while the IS-based and SP-based schemes significantly reduce the overhead of beam training at the cost of a small loss in spectral efficiency. | electrical engineering and systems science |
This chapter provides a comprehensive overview of the Bohmian formulation of quantum mechanics. It starts with a historical review of the difficulties faced by Louis de Broglie, David Bohm, and John S. Bell in convincing the scientific community of the validity and utility of Bohmian mechanics. Then, a formal explanation of Bohmian mechanics for nonrelativistic, single-particle quantum systems is presented. The generalization to many-particle systems, where the exchange interaction and the spin play an important role, is also presented. After that, the measurement process in Bohmian mechanics is discussed. It is emphasized that Bohmian mechanics exactly reproduces the mean values and the temporal and spatial correlations obtained from the standard, that is, the Copenhagen or orthodox, formulation. The ontological characteristics of Bohmian mechanics provide a description of measurements as just another type of interaction, without the need for introducing the wave function collapse. Several solved problems are presented at the end of the chapter, giving additional mathematical support to some particular issues. A detailed description of computational algorithms to obtain Bohmian trajectories from the numerical solution of the Schr\"odinger or the Hamilton-Jacobi equations is presented in an appendix. The motivation of this chapter is twofold: first, to serve as a didactic introduction to the Bohmian formalism, which is used in the subsequent chapters, and second, as a self-contained summary for any newcomer interested in using Bohmian mechanics in his or her daily research activity. | quantum physics |
Very sensitive responses to external forces are found near phase transitions. However, phase transition dynamics and pre-equilibrium phenomena are difficult to detect and control. We have directly observed that the equilibrium domain structure following a phase transition in BaTiO3, a ferroelectric and ferroelastic material, is attained by halving of the domain periodicity, sequentially and multiple times. The process is reversible, displaying periodicity doubling as temperature is increased. This observation is backed theoretically and can explain the fingerprints of domain period multiplicity observed in other systems, strongly suggesting this as a general model for pattern formation during phase transitions in ferroelastic materials. | condensed matter |
This work aims at giving Trotter errors in digital quantum simulation (DQS) of collective spin systems an interpretation in terms of quantum chaos of the kicked top. In particular, for DQS of such systems, regular dynamics of the kicked top ensures convergence of the Trotterized time evolution, while chaos in the top, which sets in above a sharp threshold value of the Trotter step size, corresponds to the proliferation of Trotter errors. We show the possibility to analyze this phenomenology in a wide variety of experimental realizations of the kicked top, ranging from single atomic spins to trapped-ion quantum simulators which implement DQS of all-to-all interacting spin-1/2 systems. These platforms thus enable in-depth studies of Trotter errors and their relation to signatures of quantum chaos, including the growth of out-of-time-ordered correlators. | quantum physics |
Shell-shaped hollow Bose-Einstein condensates (BECs) exhibit behavior distinct from their filled counterparts and have recently attracted attention due to their potential realization in microgravity settings. Here we study distinct features of these hollow structures stemming from vortex physics and the presence of rotation. We focus on a vortex-antivortex pair as the simplest configuration allowed by the constraints on superfluid flow imposed by the closed-surface topology. In the two-dimensional limit of an infinitesimally thin shell BEC, we characterize the long-range attraction between the vortex-antivortex pair and find the critical rotation speed that stabilizes the pair against energetically relaxing towards self-annihilation. In the three-dimensional case, we contrast the bounds on vortex stability with those in the two-dimensional limit and the filled sphere BEC, and evaluate the critical rotation speed as a function of shell thickness. We thus demonstrate that analyzing vortex stabilization provides a nondestructive means of characterizing a hollow sphere BEC and distinguishing it from its filled counterpart. | condensed matter |
We describe a fast, optimal estimator for measuring angular bispectra between two correlated weakly non-Gaussian fields ($Y$ and $Z$) from observational datasets, based on a separable modal bispectrum expansion. Our methodology is applicable to (1) any shape of the input theoretical bispectrum templates (factorizable or not), (2) both even and odd $\ell_1 + \ell_2 + \ell_3$ multipole domains and (3) both amplitude ($f_{\rm NL}$) bispectrum estimation and full bispectrum reconstruction, considering either joint estimation of ($YYY$, $ZZZ$, $YYZ$ and $ZZY$) shapes, or independent estimation of auto-bispectra ($YYY$ or $ZZZ$) and cross-bispectra ($YYZ$ or $ZZY$); hence, it has quite high versatility. The methodology described here was implemented and used for the official analysis of temperature and polarization cosmic microwave background maps from the $Planck$ satellite. | astrophysics |
The Einstein AdS black brane with a cloud of strings background in the context of massive gravity is introduced. Momentum dissipation arises on the boundary because of the graviton mass in the bulk. The ratio of shear viscosity to entropy density is calculated for this solution. This value violates the KSS bound if we impose Dirichlet boundary conditions and regularity on the horizon. Our result shows that this value is independent of the cloud of strings. | high energy physics theory |
We explore mass estimation of the Local Group via the use of the simple, dynamical `timing argument' in the context of a variety of theories of dark energy and modified gravity: a cosmological constant, a perfect fluid with constant equation of state $w$, quintessence (minimally coupled scalar field), MOND, and symmetrons (coupled scalar field). We explore generic coupled scalar field theories, with the symmetron model as an explicit example. We find that theories which attempt to eliminate dark matter by fitting rotation curves produce mass estimates in the timing argument which are not compatible with the luminous mass of the galaxies alone. Assuming that the galaxies are approaching their first encounter, MOND gives a mass of around $2.7\times 10^{10} M_\odot$, roughly 10\% of the luminous mass of the LG, although a higher mass can be obtained in the case of a previous fly-by event between the MW and M31. The symmetron model suggests a mass too high to be explained without additional dark matter ($\mathcal{O}(10^{12}) M_\odot$), suggesting that there is a missing mass problem in this model. We also demonstrate that tensions in measurements of $H_0$ can produce an uncertainty in the Local Group mass estimate comparable to the observational uncertainties on the separation and relative velocity of the galaxies, with values for the mass ranging from $4.5$ to $5.4 \times 10^{12} M_{\odot}$ as $h$ varies between 0.67 and 0.76. | astrophysics |
We perform the general relativistic stability analysis against radial oscillations of unpaired quark stars obtained using the equation of state for cold quark matter from perturbative QCD, the only free parameter being the renormalization scale. This approach consistently incorporates the effects of interactions and includes a built-in estimate of the inherent systematic uncertainties in the evaluation of the equation of state. We also take into account the constraints imposed by the recent gravitational wave event GW 170817 on the compact star masses and radii, and restrict their vibrational spectrum. | high energy physics phenomenology |
Pion photoproduction off the nucleon close to threshold is studied in covariant baryon chiral perturbation theory at O($p^3$) in the extended-on-mass-shell scheme, with the explicit inclusion of the $\Delta(1232)$ resonance using the $\delta$ counting. The theory is compared to the available data on cross sections and polarization observables for all the charge channels. Most of the necessary low energy constants are well known from the analysis of other processes, and the comparison with data strongly constrains some of the still unknown ones. The $\Delta(1232)$ contribution is significant in improving the agreement with data, even at the low energies considered. | high energy physics phenomenology |
We analyze particle acceleration in explosive reconnection events in magnetically dominated proton-electron plasmas. Reconnection is driven by large-scale magnetic stresses in interacting current-carrying flux tubes. Our model relies on the development of current-driven instabilities on macroscopic scales. These tilt-kink instabilities develop in an initially force-free equilibrium of repelling current channels. Using MHD methods we study a 3D model of repelling and interacting flux tubes in which we simultaneously evolve test particles, guided by electromagnetic fields obtained from MHD. We identify two stages of particle acceleration: initially, particles accelerate in the current channels; afterwards, the flux ropes start tilting and kinking and particles accelerate due to reconnection processes in the plasma. The explosive stage of reconnection produces non-thermal energy distributions with slopes that depend on the plasma resistivity and the initial particle velocity. We also discuss the influence of the length of the flux ropes on particle acceleration and energy distributions. This study extends previous 2.5D results to 3D setups, providing all the ingredients needed to model realistic scenarios like solar flares, black hole flares and particle acceleration in pulsar wind nebulae: formation of strong resistive electric fields, explosive reconnection, and non-thermal particle distributions. By assuming initial energy equipartition between electrons and protons, applying low resistivity in accordance with solar corona conditions and limiting the flux rope length to a fraction of a solar radius, we obtain realistic energy distributions for solar flares with non-thermal power law tails and maximum electron energies up to 11 MeV and maximum proton energies up to 1 GeV. | astrophysics |
We provide an axiomatic approach for studying support varieties of objects in a triangulated category via the action of a tensor triangulated category, where the tensor product is not necessarily symmetric. This is illustrated by examples, taken in particular from representation theory of finite dimensional algebras. | mathematics |
We study the effect of early kinetic decoupling in a model of fermionic dark matter (DM) that interacts with the standard model particles only by exchanging the Higgs boson. There are two DM-Higgs couplings, namely a CP-conserving and a CP-violating coupling. If the mass of the DM is slightly below half of the Higgs boson mass, then the couplings are suppressed to obtain the measured value of the DM energy density by the freeze-out mechanism. In addition, the scattering processes of DM off particles in the thermal bath are suppressed by the small momentum transfer if the CP-violating DM-Higgs coupling is larger than the CP-conserving one. Due to this suppression, the temperature of the DM can differ from the temperature of the thermal bath. By solving coupled equations for the number density and temperature of the DM, we calculate the DM-Higgs couplings that reproduce the right amount of the DM relic abundance. We find that the couplings have to be larger than those obtained without taking into account the difference in the temperatures. A consequence of the enhancement of the DM-Higgs couplings is the enhancement of the Higgs invisible decay branching ratio. The enhancement is testable at current and future collider experiments. | high energy physics phenomenology |
We show that a Frobenius structure is equivalent to a dually flat structure in information geometry. We define a multiplication structure on the tangent spaces of statistical manifolds, which we call the statistical product. We also define a scalar quantity, which we call the Yukawa term. By examining two examples from statistical mechanics, first the classical ideal gas and second the quantum bosonic ideal gas, we argue that the Yukawa term quantifies information generation, which resembles how mass is generated via the three-point interaction of two fermions and a Higgs boson (the Higgs mechanism). In the classical case, the Yukawa term is identically zero, whereas in the quantum case, the Yukawa term diverges as the fugacity goes to zero, which indicates Bose-Einstein condensation. | mathematics |
The hypothesis that sub-network initializations (lottery tickets) exist within the initializations of over-parameterized networks, which when trained in isolation produce highly generalizable models, has led to crucial insights into network initialization and has enabled efficient inferencing. Supervised models with uncalibrated confidences tend to be overconfident even when making wrong predictions. In this paper, for the first time, we study how explicit confidence calibration in the over-parameterized network impacts the quality of the resulting lottery tickets. More specifically, we incorporate a suite of calibration strategies, ranging from mixup regularization and variance-weighted confidence calibration to the newly proposed likelihood-based calibration and normalized bin assignment strategies. Furthermore, we explore different combinations of architectures and datasets, and make a number of key findings about the role of confidence calibration. Our empirical studies reveal that including calibration mechanisms consistently leads to more effective lottery tickets, in terms of accuracy as well as empirical calibration metrics, even when retrained using data with challenging distribution shifts with respect to the source dataset. | statistics |
To fully exploit the millimeter-wave bands for fifth generation cellular systems, an accurate understanding of the channel propagation characteristics is required, and hence extensive measurement campaigns in different environments are needed. In this paper, we use a rotated directional antenna-based channel sounder for measurements at 28 GHz in large indoor environments in a library setting. We present models for the power angular-delay profile and large-scale path loss based on measurements over distances ranging from 10 m to 50 m. In total, nineteen different line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios are considered, including cases where the transmitter and the receiver are placed on different floors. Results show that the close-in free space reference distance and the floating intercept path loss models both perform well in fitting the empirical data. The path loss exponent obtained for the LOS scenarios is found to be very close to that of the free space path loss model. | electrical engineering and systems science |
This paper introduces techniques to integrate WordNet into a Fuzzy Logic Programming system. Since WordNet relates words but does not give graded information on the relation between them, we have implemented standard similarity measures and new directives allowing the proximity equations linking two words to be generated with an approximation degree. Proximity equations are the key syntactic structures which, in addition to a weak unification algorithm, make a flexible query-answering process possible in this kind of programming language. This addition widens the scope of Fuzzy Logic Programming, allowing certain forms of lexical reasoning, and reinforcing Natural Language Processing applications. [Under consideration in Theory and Practice of Logic Programming (TPLP)] | computer science |
In this paper, we demonstrate the generation of high-performance entangled photon-pairs in different degrees of freedom from a single piece of fiber-pigtailed periodically poled LiNbO$_3$ (PPLN) waveguide. We utilize cascaded second-order nonlinear optical processes, i.e. second-harmonic generation (SHG) and spontaneous parametric down conversion (SPDC), to generate photon-pairs. Previously, the performance of the photon pairs was contaminated by Raman noise photons from the fiber pigtails. Here, by integrating the PPLN waveguide with noise-rejecting filters, we obtain a coincidence-to-accidental ratio (CAR) higher than 52,600 with photon-pair generation and detection rates of 52.3 kHz and 3.5 kHz, respectively. Energy-time, frequency-bin and time-bin entanglement is prepared by coherently superposing correlated two-photon states in these degrees of freedom, respectively. The energy-time entangled two-photon states achieve a maximum value of the CHSH-Bell inequality of S=2.708$\pm$0.024 with a two-photon interference visibility of 95.74$\pm$0.86%. The frequency-bin entangled two-photon states achieve a fidelity of 97.56$\pm$1.79% with a spatial quantum beating visibility of 96.85$\pm$2.46%. The time-bin entangled two-photon states achieve a maximum value of the CHSH-Bell inequality of S=2.595$\pm$0.037 and a quantum tomographic fidelity of 89.07$\pm$4.35%. Our results provide a potential candidate for a quantum light source in quantum photonics. | quantum physics |
Fourier transform (FT) spectroscopy is a versatile technique for studying the infrared (IR) optical response of solid-, liquid-, and gas-phase samples. In standard FT-IR spectrometers, a light beam passing through a Michelson interferometer is focused onto a sample with condenser optics. This design enables us to examine relatively small samples, but the large solid angle of the focused infrared beam makes it difficult to analyze angle-dependent characteristics. Here we design and construct a high-precision angle-resolved reflection setup compatible with a commercial FT-IR spectrometer. Our setup converts the focused beam into an achromatically collimated beam with an angle dispersion as high as 0.25$^\circ$. The setup also permits us to scan the incident angle over ~8$^\circ$ across zero (normal incidence). The beam diameter can be reduced to ~1 mm, which is limited by the sensitivity of an HgCdTe detector. The small-footprint apparatus is easily installed in an FT-IR sample chamber. As a demonstration of the capability of our reflection setup, we measure the angle-dependent mid-infrared reflectance of two-dimensional photonic crystal slabs and determine the in-plane dispersion relation in the vicinity of the $\Gamma$ point in momentum space. We observe the formation of photonic Dirac cones, i.e., linear dispersions with an accidental degeneracy at $\Gamma$, in an ideally designed sample. Our apparatus is useful for characterizing various systems that have a strong in-plane anisotropy, including photonic crystal waveguides, plasmonic metasurfaces, and molecular crystalline films. | physics |
The Functional Failure Rate analysis of today's complex circuits is a difficult task and requires a significant investment in terms of human effort, processing resources and tool licenses. Consequently, de-rating or vulnerability factors are a major instrument of failure analysis efforts. Usually, computationally intensive fault-injection simulation campaigns are required to obtain fine-grained reliability metrics for the functional level. Therefore, this paper investigates the use of machine learning algorithms to assist this procedure and thus optimise and enhance fault injection efforts. Specifically, machine learning models are used to predict accurate per-instance Functional De-Rating data for the full list of circuit instances, an objective that is difficult to reach using classical methods. The described methodology uses a set of per-instance features, extracted through an analysis approach combining static elements (cell properties, circuit structure, synthesis attributes) and dynamic elements (signal activity). Reference data is obtained through first-principles fault simulation approaches. One part of this reference dataset is used to train the machine learning model and the remainder is used to validate and benchmark the accuracy of the trained tool. The presented methodology is applied to a practical example and various machine learning models are evaluated and compared. | electrical engineering and systems science |
Previous work on speaker adaptation for end-to-end speech synthesis still falls short in speaker similarity. We investigate an orthogonal approach to the current speaker adaptation paradigms, speaker augmentation, by creating artificial speakers and by taking advantage of low-quality data. The base Tacotron2 model is modified to account for the channel and dialect factors inherent in these corpora. In addition, we describe a warm-start training strategy that we adopted for Tacotron2 training. A large-scale listening test is conducted, and a distance metric is adopted to evaluate the synthesis of dialects. This is followed by an analysis of synthesis quality, speaker and dialect similarity, and a remark on the effectiveness of our speaker augmentation approach. Audio samples are available online. | electrical engineering and systems science |
We consider the problem of selecting an optimal set of sensor precisions to estimate the states of a non-linear dynamical system using an Ensemble Kalman filter and an Unscented Kalman filter, which use random and deterministic ensembles, respectively. Specifically, the goal is to choose at run-time a sparse set of sensor precisions for active sensing that satisfies certain constraints on the estimated state covariance. In this paper, we show that this sensor precision selection problem is a semidefinite programming problem when we use the l1 norm of the precision vector as the surrogate measure to induce sparsity. We formulate a sensor selection scheme over multiple time steps, for certain constraints on the terminal estimated state covariance. | electrical engineering and systems science |
We propose a greedy variational method for decomposing a non-negative multivariate signal as a weighted sum of Gaussians, which, borrowing the terminology from statistics, we refer to as a Gaussian mixture model. Notably, our method has the following features: (1) It accepts multivariate signals, i.e. sampled multivariate functions, histograms, time series, images, etc. as input. (2) The method can handle general (i.e. ellipsoidal) Gaussians. (3) No prior assumption on the number of mixture components is needed. To the best of our knowledge, no previous method for Gaussian mixture model decomposition simultaneously enjoys all these features. We also prove an upper bound, which cannot be improved by a global constant, for the distance from any mode of a Gaussian mixture model to the set of corresponding means. For mixtures of spherical Gaussians with common variance $\sigma^2$, the bound takes the simple form $\sqrt{n}\sigma$. We evaluate our method on one- and two-dimensional signals. Finally, we discuss the relation between clustering and signal decomposition, and compare our method to the baseline expectation maximization algorithm. | statistics |
Breiman challenged statisticians to think more broadly, to step into the unknown, model-free learning world, with him paving the way forward. The statistics community responded with slight optimism, some skepticism, and plenty of disbelief. Today, we are at the same crossroads anew. Faced with the enormous practical success of model-free, deep, and machine learning, we are naturally inclined to think that everything is resolved. A new frontier has emerged: one where the role, impact, or stability of the {\it learning} algorithms is no longer measured by prediction quality, but by an inferential one -- asking the questions of {\it why} and {\it if} can no longer be safely ignored. | statistics |
We analyze a new robust method for the reconstruction of probability distributions of observed data in the presence of output outliers. It is based on a so-called gradient conjugate prior (GCP) network which outputs the parameters of a prior. By rigorously studying the dynamics of the GCP learning process, we derive an explicit formula for correcting the obtained variance of the marginal distribution and removing the bias caused by outliers in the training set. Assuming a Gaussian (input-dependent) ground truth distribution contaminated with a proportion $\varepsilon$ of outliers, we show that the fitted mean is in a $c e^{-1/\varepsilon}$-neighborhood of the ground truth mean and the corrected variance is in a $b\varepsilon$-neighborhood of the ground truth variance, whereas the uncorrected variance of the marginal distribution can even be infinite. We explicitly find $b$ as a function of the output of the GCP network, without a priori knowledge of the outliers (possibly input-dependent) distribution. Experiments with synthetic and real-world data sets indicate that the GCP network fitted with a standard optimizer outperforms other robust methods for regression. | statistics |
In the recent paper [1], the classification of non-unitary representations of the three-dimensional superconformal group was constructed. From AdS/CFT they must correspond to N=1 supermultiplets containing partially massless fields in $AdS_4$. Moreover, the simplest example of such supermultiplets, which contains a partially massless spin-2 field, was explicitly constructed. In this paper we extend this result and develop an explicit Lagrangian construction of general N=1 supermultiplets containing partially massless fields with arbitrary superspin. We use the frame-like gauge invariant description of partially massless higher spin bosonic and fermionic fields. For the two types of supermultiplets (with integer and half-integer superspins), each containing two partially massless bosonic and two partially massless fermionic fields, we derive the supertransformations leaving the sum of their four free Lagrangians invariant, such that the $AdS_4$ superalgebra is closed on-shell. | high energy physics theory |
Percolation phenomena are pervasive in nature, ranging from capillary flow, crack propagation and ionic transport to fluid permeation. Modeling percolation in highly-branched media requires the use of numerical solutions, as problems can quickly become intractable due to the number of pathways available. This becomes even more challenging in dynamic scenarios, where the generation of pathways can quickly become a combinatorial problem. In this work, we develop a new constriction percolation paradigm, using cellular automata to predict the transport of oxygen through a stochastically cracked Zr oxide layer within a coupled diffusion-reaction framework. We simulate such branching trees by generating a series of porosity-controlled media. Additionally, we develop an analytical criterion based on compressive yielding for bridging the transition state in the corrosion regime, where the percolation threshold has been achieved. Our model extends Dijkstra's shortest path method to constriction pathways and predicts the arrival rate of oxygen ions at the oxide interface. This is a critical parameter for predicting oxide growth in the so-called post-transition regime, when bulk diffusion is no longer the rate-limiting phenomenon. | physics |
A galaxy's stellar mass is one of its most fundamental properties, but it remains challenging to measure reliably. With the advent of very large optical spectroscopic surveys, efficient methods that can make use of low signal-to-noise spectra are needed. With this in mind, we created a new software package for estimating effective stellar mass-to-light ratios $\log \Upsilon^*$ that uses a principal component analysis (PCA) basis set to optimize the comparison between observed spectra and a large library of stellar population synthesis models. In Paper I, we showed that with a set of six PCA basis vectors we could faithfully represent most optical spectra from the Mapping Nearby Galaxies at APO (MaNGA) survey, and we tested the accuracy of our M/L estimates using synthetic spectra. Here, we explore sources of systematic error in our mass measurements by comparing our new measurements to data from the literature. We compare our stellar mass surface density estimates to kinematics-derived dynamical mass surface density measurements from the DiskMass Survey and find some tension between the two, which could be resolved if the disk scale-heights used in the kinematic analysis were overestimated by a factor of $\sim 1.5$. We formulate an aperture-corrected stellar mass catalog for the MaNGA survey and compare to previous stellar mass estimates based on multi-band optical photometry, finding typical discrepancies of 0.1 dex. Using the spatially resolved MaNGA data, we evaluate the impact of estimating total stellar masses from spatially unresolved spectra, and we explore how the biases that result from unresolved spectra depend upon the galaxy's dust extinction and star formation rate. Finally, we describe an SDSS Value-Added Catalog which will include both spatially resolved and total (aperture-corrected) stellar masses for MaNGA galaxies. | astrophysics |
The primary objective of this paper is to build classification models and strategies to identify breathing sound anomalies (wheeze, crackle) for automated diagnosis of respiratory and pulmonary diseases. In this work we propose a deep CNN-RNN model that classifies respiratory sounds based on Mel-spectrograms. We also implement a patient-specific model tuning strategy that first screens respiratory patients and then builds patient-specific classification models using limited patient data for reliable anomaly detection. Moreover, we devise a local log quantization strategy for model weights to reduce the memory footprint for deployment in memory-constrained systems such as wearable devices. The proposed hybrid CNN-RNN model achieves a score of 66.31% on four-class classification of breathing cycles for the ICBHI'17 scientific challenge respiratory sound database. When the model is re-trained with patient-specific data, it produces a score of 71.81% for leave-one-out validation. The proposed weight quantization technique achieves ~4X reduction in total memory cost without loss of performance. The main contributions of the paper are as follows: Firstly, the proposed model is able to achieve a state-of-the-art score on the ICBHI'17 dataset. Secondly, deep learning models are shown to successfully learn domain-specific knowledge when pre-trained with breathing data, and produce significantly superior performance compared to generalized models. Finally, local log quantization of trained weights is shown to be able to reduce the memory requirement significantly. This type of patient-specific re-training strategy can be very useful in developing reliable long-term automated patient monitoring systems, particularly in wearable healthcare solutions. | electrical engineering and systems science |
Bound states in the continuum (BICs), circularly polarized states (C points) and degenerate states are three types of singular points of polarization in momentum space. For photonic crystal slabs (PhCSs) with linearly polarized far fields, BICs were found to be the centers of polarization vortices and attracted more attention in previous studies. Here, we theoretically demonstrate that the far fields of PhCSs can exhibit remarkably diverse polarizations due to the robust existence of C points in the continuum. Only a pair of C points with identical handedness and opposite topological charge can be annihilated together. Under continuous fine tuning of the structure parameters of PhCSs without breaking their symmetry, a pair of C points with identical topological charge and opposite handedness are able to merge into a BIC; the BIC then splits into C points again. Interestingly, a Dirac-degenerate BIC with a topological charge of one half is observed when two pairs of C points with identical topological charge, from the upper and lower band respectively, simultaneously merge at the Dirac-degenerate point. The law of topological charge conservation is verified to play an important role in the evolutions and interconversions between different types of polarization singularities. Our findings might shed light on the origin of singular points of polarization and could open a gateway towards their application in the generation and manipulation of vector beams. | physics |
We study supergravity BPS equations which correspond to mass deformations of some representative AdS/CFT examples. The field theories of interest are N=4, D=4 super Yang-Mills, the ABJM model in D=3, and the Brandhuber-Oz fixed point in D=5. For these gauge theories the free energy with mass terms for matter multiplets is calculable in the large-N limit using the supersymmetric localization technique. We suggest a perturbative method to solve the supergravity equations. For the dual of the mass-deformed ABJM model we reproduce the known exact solutions. For the mass-deformed Brandhuber-Oz theory our method gives the holographic free energy in analytic form. For the N=2* theory our result is in good agreement with the localization result. | high energy physics theory |
Type IIA string theory on a non-compact Calabi-Yau geometry known as the local $\mathbb{P}^{1} \times \mathbb{P}^{1}$ gives rise to five-dimensional N=1 supersymmetric SU(2) gauge theory compactified on a circle, a construction known as geometric engineering. It is therefore necessary to study the local $\mathbb{P}^{1} \times \mathbb{P}^{1}$ in detail. Since the spectrum of the local $\mathbb{P}^{1} \times \mathbb{P}^{1}$ can be written as $E=R^{2}\left(\mathrm{e}^{p}+\mathrm{e}^{-p}\right)+\mathrm{e}^{x}+\mathrm{e}^{-x}$, results on the almost Mathieu operator allow us to show that: (1) when $R^{2}<1$, the spectrum is absolutely continuous, which means the medium is a conductor; (2) when $1\le R^{2}<e^{\beta}$, the spectrum is singular continuous, as known from the quantum Hall effect; (3) when $R^{2}>e^{\beta}$, the spectrum is almost surely pure point and exhibits Anderson localization. In other words, there are two phase transition points: one at $R^{2}=1$ and the other at $R^{2}=e^{\beta}$. | high energy physics theory |
Calculation of hadronization, decay or scattering processes at non-zero temperatures and densities within Nambu-Jona-Lasinio-like models requires techniques for the computation of Feynman diagrams. Decomposition of Feynman diagrams at the one-loop level leads to the appearance of elementary integrals with one, two, three, and four fermion lines. For example, evaluation of the $\pi\pi$ scattering amplitude requires calculating a box diagram with four fermion lines. In this work, the real and imaginary parts of the box integral at the one-loop level are provided in a form suitable for numerical evaluation. The obtained expressions are applicable to any value of temperature, particle mass, particle momentum, and chemical potential. In addition to the expressions for the box integral, general formulas for the integral with an arbitrary number of lines are derived for the case of zero or collinear fermion momenta. | high energy physics phenomenology |
The discovery of high-temperature conventional superconductivity in H3S with a critical temperature of Tc=203 K was followed by the recent record of Tc ~250 K in the face-centered cubic (fcc) lanthanum hydride LaH10 compound. It was realized in a new class of hydrogen-dominated compounds having a clathrate-like crystal structure, in which hydrogen atoms form a 3D framework and surround a host atom of a rare earth element. Yttrium hydrides are predicted to have even higher Tc, exceeding room temperature. In this paper, we synthesized and refined the crystal structures of new hydrides: YH4, YH6, and YH9 at pressures up to 237 GPa, finding that YH4 crystallizes in the I4/mmm lattice, YH6 in the Im-3m lattice and YH9 in the P63/mmc lattice, in excellent agreement with the calculations. The observed very high-temperature superconductivity is comparable to that found in fcc-LaH10: the pressure dependence of Tc for YH9 also displays a "dome-like shape" with the highest Tc of 243 K at 201 GPa. We also observed a Tc of 227 K at 237 GPa for the YH6 phase. However, the measured Tcs are notably lower, by ~30 K, than predicted. Evidence for superconductivity includes the observation of zero electrical resistance, a decrease of Tc under an external magnetic field, and an isotope effect. The theoretically predicted fcc YH10 with the promising highest Tc>300 K was not stabilized in our experiments under pressures up to 237 GPa. | condensed matter |
We report photometry and spectroscopy of the outburst of the young stellar object ESO-Halpha 99. The outburst was first noticed in Gaia alert Gaia18dvc and later by ATLAS. We have established the outburst light curve with archival ATLAS ``Orange'' filter photometry, Gaia data, new V-band photometry, and J, H, and K_s photometry from IRIS and UKIRT. The brightness has fluctuated several times near the light curve maximum. The TESS satellite observed ESO-Halpha 99 with high cadence during one of these minor minima and found brightness fluctuations on timescales of days and hours. Imaging with UKIRT shows the outline of an outflow cavity, and we find one knot of H_2 1-0 S(1) emission, now named MHO 1520, on the symmetry axis of this nebula, indicating recent collimated outflow activity from ESO-Halpha 99. Its pre-outburst SED shows a flat FIR spectrum, confirming its early evolutionary state and its similarity to other deeply embedded objects in the broader EXor class. The pre-outburst luminosity is 34 L_sun, a much higher luminosity than that of typical EXors, indicating that ESO-Halpha 99 may be a star of intermediate mass. Infrared and optical spectroscopy show a rich emission line spectrum, including HI lines, strong red CaII emission, as well as infrared CO bandhead emission, all characteristic of EXors in the broadest sense. Comparison of the present spectra with an optical spectrum obtained in 1993, presumably in the quiescent state of the object, shows that during the present outburst the continuum component of the spectrum has increased notably more than the emission lines. The Halpha equivalent width during the outburst is down to one half of its 1993 level, and shock-excited emission lines are much less prominent. | astrophysics |
The paper is an introduction to intuitionistic mathematics. | mathematics |
Analyses of urban scaling laws assume that observations in different cities are independent of the existence of nearby cities. Here we introduce generative models and data-analysis methods that overcome this limitation by modelling explicitly the effect of interactions between individuals at different locations. Parameters that describe the scaling law and the spatial interactions are inferred from data simultaneously, allowing for rigorous (Bayesian) model comparison and overcoming the problem of defining the boundaries of urban regions. Results in five different datasets show that including spatial interactions typically leads to better models and a change in the exponent of the scaling law. Data and codes are provided in Ref. [1]. | physics |
A generic half-BPS surface defect of ${\mathcal N}=4$ supersymmetric U$(N)$ Yang-Mills theory is described by a partition of $N = n_1 + \ldots + n_M$ and a set of $4M$ continuous parameters. We show that such a defect can be realized by $n_I$ stacks of fractional D3-branes in Type IIB string theory on a $\mathbb{Z}_M$ orbifold background in which the brane world-volume is partially extended along the orbifold directions. In this set-up we show that the $4M$ continuous parameters correspond to constant background values of certain twisted closed string scalars of the orbifold. These results extend and generalize what we presented for simple defects in a previous paper. | high energy physics theory |
The visual STELLA echelle spectrograph (SES-VIS) is a new instrument for the STELLA-II telescope at the Iza\~na observatory on Tenerife. Together with the original SES spectrograph - which will still be used in the near IR - and a new H&K-optimized spectrograph, which is currently in the design phase, it will extend the capabilities of STELLA to the follow-up of planetary candidates from space missions (TESS, PLATO2). SES-VIS is optimized for precise radial velocity determinations and long-term stability. We have developed a ZEMAX-based software package to create simulated spectra, which are then extracted using our new data reduction package developed for the PEPSI spectrograph. The focus of this paper is on calibration spectra and the full range of available calibration sources (flat field, Th-Ar, and Fabry-Perot etalon), which can be compared to actual commissioning data once they are available. Furthermore, we tested the effect of changes in the environmental parameters on the wavelength calibration precision. | astrophysics |
We introduce a new online learning framework where, at each trial, the learner is required to select a subset of actions from a given known action set. Each action is associated with an energy value, a reward and a cost. The sum of the energies of the selected actions cannot exceed a given energy budget. The goal is to maximise the cumulative profit, where the profit obtained on a single trial is defined as the difference between the maximum reward among the selected actions and the sum of their costs. Action energy values and the budget are known and fixed. All rewards and costs associated with each action change over time and are revealed at each trial only after the learner's selection of actions. Our framework encompasses several online learning problems where the environment changes over time, and where the solution trades off between minimising the costs and maximising the maximum reward of the selected subset of actions, while being constrained by an action energy budget. The algorithm that we propose is efficient and general, in that it may be specialised to multiple natural online combinatorial problems. | computer science |
We present a real-time ab initio description of optical orientation in bulk GaAs due to the coupling with an ultrashort circularly polarized laser source. The injection of spin-polarized electrons into the conduction band is correctly reproduced, and a non-vanishing spin polarization ($\mathbf{P}$) parallel to the direction of propagation of the laser ($z$) emerges. A detailed analysis of the generation and the evolution of $\mathbf{P}(t)$ is discussed. The single $\mathbf{k}$-point dynamics is a motion of precession around a fixed axis with constant $|\mathbf{P}|$ and fixed frequency. Instead, the $\mathbf{k}$-integrated signal shows only a time-dependent $P_z(t)$ and decays a few picoseconds after the end of the laser pump due to decoherence. Decoherence emerges since the individual contributions activated by the pump give rise to destructive interference. We interpret the results in terms of the \emph{free induction decay} mechanism proposed some years ago. For the first time we are able to reproduce this effect in a fully ab initio fashion, giving a quantitative estimate of the associated decay time. Our result also suggests a possible explanation for the time decay of spin magnetization observed in many real-time ab initio simulations. | condensed matter |
Data-driven modeling of nonlinear dynamical systems often requires an expert user to make critical decisions prior to the identification procedure. Recently, an automated strategy for data-driven modeling of \textit{single-input single-output} (SISO) nonlinear dynamical systems based on \textit{Genetic Programming} (GP) and \textit{Tree Adjoining Grammars} (TAG) has been introduced. The current paper extends these latest findings by proposing a \textit{multi-input multi-output} (MIMO) TAG modeling framework for polynomial NARMAX models. Moreover, we introduce a TAG identification toolbox in Matlab that provides an implementation of the proposed methodology to solve multi-input multi-output identification problems under the NARMAX noise assumption. The capabilities of the toolbox and the modelling methodology are demonstrated in the identification of two SISO and one MIMO nonlinear dynamical benchmark models. | electrical engineering and systems science |
We review the recent progress in formulating a quantum kinetic theory for the polarization of spin-$\frac{1}{2}$ massive quarks from the leading-log order of perturbative QCD. | high energy physics phenomenology |
In the present work, we investigate the hidden-strangeness production process in the $S=+1$ channel via $K^+p\to K^+\phi\,p$, focusing on the exotic \textit{pentaquark} molecular $K^*\Sigma$ bound state, denoted by $P^+_s(2071,3/2^-)$. For this purpose, we employ the effective Lagrangian approach in the tree-level Born approximation. Using the experimental and theoretical inputs for the exotic state and for the ground-state hadron interactions, the numerical results show a small but obvious peak structure from $P^+_s$ with a signal-to-background ratio of $\approx1.7\,\%$, which is enhanced in the backward-scattering region of the outgoing $K^+$ in the center-of-mass frame. We also find that the contribution from the $K^*(1680,1^-)$ meson plays an important role in reproducing the data. The proton-spin polarizations are taken into account to find a way to reduce the background. The effects of the possible $27$-plet pentaquark $\Theta^{++}_{27}$ are discussed as well. | high energy physics phenomenology |
High-resolution inelastic x-ray scattering measurements were carried out on molten NaI near the melting point at 680$^\circ$C at SPring-8. Small and damped indications of longitudinal optic excitation modes were observed on the tails of the longitudinal acoustic modes at small momentum transfers, $Q\sim5$ nm$^{-1}$. The measured spectra are in good agreement, in both frequency and linewidth, with {\it ab initio} molecular dynamics (MD) simulations but not with classical MD simulations. The observation of these modes at small $Q$ and the good agreement with the simulations permit clear identification of these as collective optic modes with well-defined phasing between the different ionic motions. | condensed matter |
We study supersymmetric Yang-Mills theories on the three-sphere, with massive matter and a Fayet-Iliopoulos parameter, showing second-order phase transitions for the non-Abelian theory, extending a previous result for the Abelian theory. We study both partition functions and Wilson loops and also discuss the case of different $R$-charges. Two interpretations of the partition function as eigenfunctions of the $A_{1}$ and free $A_{N-1}$ hyperbolic Calogero-Moser integrable models are given as well. | high energy physics theory |
In this work, we present the first complete calculation of the one-loop longitudinal photon-to-quark-antiquark light cone wave function with massive quarks. The quark masses are renormalized in the pole mass scheme. The result is used to calculate the next-to-leading order correction to the high energy Deep Inelastic Scattering longitudinal structure function on a dense target in the dipole factorization framework. For massless quarks the next-to-leading order correction was already known to be sizeable, and our result makes it possible to evaluate it also for massive quarks. | high energy physics phenomenology |
Question answering and conversational systems are often baffled by ambiguity and need help in clarifying it. However, limitations of existing datasets hinder the development of large-scale models capable of generating and utilising clarification questions. In order to overcome these limitations, we devise a novel bootstrapping framework (based on self-supervision) that assists in the creation of a diverse, large-scale dataset of clarification questions based on post-comment tuples extracted from stackexchange. The framework utilises a neural network based architecture for classifying clarification questions. It is a two-step method, where the first step aims to increase the precision of the classifier and the second aims to increase its recall. We quantitatively demonstrate the utility of the newly created dataset by applying it to the downstream task of question answering. The final dataset, ClarQ, consists of ~2M examples distributed across 173 domains of stackexchange. We release this dataset in order to foster research into the field of clarification question generation, with the larger goal of enhancing dialog and question answering systems. | computer science |
We consider $3d$ $\mathcal{N}\!=\!2$ gauge theories with fundamental matter plus a single field in a rank-$2$ representation. By iteratively applying a process of "deconfinement" of the rank-$2$ field, we produce a sequence of Seiberg-dual quiver theories. We detail this process in two examples with zero superpotential: $Usp(2N)$ gauge theory with an antisymmetric field and $U(N)$ gauge theory with an adjoint field. The fully deconfined dual quiver has $N$ nodes and can be interpreted as an Aharony dual of theories with rank-$2$ matter. All chiral ring generators of the original theory are mapped into gauge-singlet fields of the fully deconfined quiver dual. | high energy physics theory |
A simple, dual-site model of bolaamphiphiles (bolaforms or bipolar amphiphiles) is developed based on an earlier single-site model of (monopolar) amphiphiles [S. Dey, J. Saha, Phys. Rev. E 95, 023315 (2017)]. The model incorporates the aqueous environment (both the hydrophobic effect and the hydration force) in its anisotropic site-site interactions, thus obviating the need to simulate solvent particles explicitly. This economy of sites and the absence of explicit solvent particles enable molecular dynamics simulations of bolaamphiphiles to achieve mesoscopic length and time-scales unattainable by any bead-spring model or explicit-solvent computations. The model applies to generic bolas only, since the gain in scale can only be obtained by sacrificing the resolution of detailed molecular structure. Thanks to its dual sites, however (as opposed to a single-site model), our model can incorporate the essential flexibility of bolas that leads to their U-conformers. The model bolas show successful self-assembly into experimentally observed nano-structures like micelles, rods, lamellae, etc., and retain fluidity in very stable monolayers. The presence of membrane-spanning model bolas in bilayers of model monopolar amphiphiles increases the stability and impermeability of the lamellar phase. Model bolas are also seen to be less diffusive and to produce thicker layers compared to their monopolar counterparts. Rigid model bolas, though achiral themselves, show self-assembly into helical rods. As all these observations agree with the well-known key characteristics of archaeal lipids and synthetic bolaamphiphiles, our model promises to be effective for studies of bolas in the context of biomimetics, drug delivery and low-molecular-weight hydrogelators. To the best of our knowledge, no other single- or dual-site, solvent-free model for bolas has been reported thus far. | condensed matter |
In this paper, we study robust tensor completion using the transformed tensor singular value decomposition (SVD), which employs unitary transform matrices instead of the discrete Fourier transform matrix used in the traditional tensor SVD. The main motivation is that a tensor of lower tubal rank can be obtained by using unitary transform matrices other than the discrete Fourier transform matrix, which can be more effective for robust tensor completion. Experimental results for hyperspectral, video and face datasets show that the recovery performance for the robust tensor completion problem using the transformed tensor SVD is better, in terms of PSNR, than that obtained using the Fourier transform and other robust tensor completion methods. | computer science |
We calculate the proton lifetime and discuss topological defects in a wide class of non-supersymmetric (non-SUSY) $SO(10)$ and $E(6)$ Grand Unified Theories (GUTs), broken via left-right subgroups with one or two intermediate scales (a total of 9 different scenarios with and without D-parity), including the important effect of threshold corrections. By performing a goodness-of-fit test for unification using the two-loop renormalisation group evolution equations (RGEs), we find that the inclusion of threshold corrections significantly affects the proton lifetime, allowing several scenarios, which would otherwise be excluded, to survive. Indeed, we find that the threshold corrections are a saviour for many non-SUSY GUTs. For each scenario we analyse the homotopy of the vacuum manifold to estimate the possible emergence of topological defects. | high energy physics phenomenology |
We prove calibration guarantees for the popular histogram binning (also called uniform-mass binning) method of Zadrozny and Elkan [2001]. Histogram binning has displayed strong practical performance, but theoretical guarantees have only been shown for sample-split versions that avoid 'double dipping' the data. We demonstrate that the statistical cost of sample splitting is practically significant on a credit default dataset. We then prove calibration guarantees for the original method that double-dips the data, using a certain Markov property of order statistics. Based on our results, we make practical recommendations for choosing the number of bins in histogram binning. In our illustrative simulations, we propose a new tool for assessing calibration -- validity plots -- which provide more information than an ECE estimate. | statistics |
New relations involving the Riemann, Ricci and Einstein tensors that have to hold for a given geometry to admit Killing-Yano tensors are described. These relations are then used to introduce novel conserved "currents" involving such Killing-Yano tensors. For a particular current based on the Einstein tensor, we discuss the issue of conserved charges and consider implications for matter coupled to gravity. The condition on the background geometry that allows asymptotic conserved charges for a current introduced by Kastor and Traschen is found, and a number of other new aspects of this current are commented on. In particular, we show that it vanishes for rank $(D-1)$ Killing-Yano tensors in $D$ dimensions. | high energy physics theory |