text: string (lengths 11-9.77k)
label: string (lengths 2-104)
We document the activities performed during the second MadAnalysis 5 workshop on LHC recasting, which was organised at KIAS (Seoul, Korea) on February 12-20, 2020. We detail the implementation of 12 new ATLAS and CMS searches in the MadAnalysis 5 Public Analysis Database, and the associated validation procedures. These searches probe the production of extra gauge and scalar/pseudoscalar bosons, supersymmetry, seesaw models and deviations from the Standard Model in four-top production.
high energy physics phenomenology
Despeckling is a key and indispensable step in SAR image preprocessing. Existing deep learning-based methods achieve SAR despeckling by learning mappings between speckled images (of different looks) and clean images; however, no clean SAR images exist in the real world. To this end, in this paper, we propose a self-supervised dense dilated convolutional neural network (BDSS) for blind SAR image despeckling. The proposed BDSS can learn to suppress speckle noise without clean ground truth by optimizing an L2 loss. Besides, three enhanced dense blocks with dilated convolutions are employed to improve network performance. Experiments on synthetic and real data demonstrate that the proposed BDSS achieves effective despeckling while preserving features such as edges, point targets, and radiometric characteristics. Finally, we demonstrate that the proposed BDSS performs blind despeckling well, i.e., without requiring knowledge of the number of looks.
electrical engineering and systems science
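The despeckling abstract above mentions enhanced dense blocks with dilated convolutions. As an illustration only (not the authors' code), a minimal PyTorch sketch of such a block is given below; the channel counts, growth rate, and dilation schedule are hypothetical choices.

    import torch
    import torch.nn as nn

    class DilatedDenseBlock(nn.Module):
        """A dense block whose layers use dilated convolutions: each layer sees
        the concatenation of all previous feature maps (sketch only)."""
        def __init__(self, in_channels, growth=16, n_layers=3):
            super().__init__()
            self.layers = nn.ModuleList()
            ch = in_channels
            for i in range(n_layers):
                d = 2 ** i                                  # dilation 1, 2, 4, ...
                self.layers.append(nn.Sequential(
                    nn.Conv2d(ch, growth, kernel_size=3, padding=d, dilation=d),
                    nn.ReLU(inplace=True)))
                ch += growth                                # dense connectivity grows the channel count

        def forward(self, x):
            feats = [x]
            for layer in self.layers:
                feats.append(layer(torch.cat(feats, dim=1)))
            return torch.cat(feats, dim=1)

    # toy usage on a single-channel amplitude image
    out = DilatedDenseBlock(in_channels=1)(torch.randn(1, 1, 64, 64))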
During the era of primordial nucleosynthesis, a background of non-equilibrium antineutrinos is formed by the decays of neutrons and tritium nuclei. The spectra of the antineutrinos in this background were calculated taking into account the Coulomb interaction between the electron and the daughter nucleus in $\beta^-$-decay. The dependence of these spectra on the value of the baryon-to-photon ratio $\eta$ during primordial nucleosynthesis is investigated. Observations of these antineutrinos would allow us to look directly at the very early Universe and at the non-equilibrium processes that took place before, during, and some time after primordial nucleosynthesis. In any case, this phenomenon is one more aspect of the picture of the standard cosmological model.
high energy physics phenomenology
Although regular expressions do not correspond univocally to regular languages, it is still worthwhile to study their properties and algorithms. For average-case analysis, one often relies on uniform random generation using a specific grammar for regular expressions, which can represent regular languages with more or less redundancy. Generators that are uniform on the set of expressions are not necessarily uniform on the set of regular languages. Nevertheless, it is not straightforward that asymptotic estimates obtained by considering the whole set of regular expressions differ from those obtained using a more refined set that avoids some large class of equivalent expressions. In this paper we study a set of expressions that avoid a given absorbing pattern. It is shown that, although this set is significantly smaller than the standard one, the asymptotic average estimates for the size of the Glushkov automaton for these expressions do not differ from the standard case.
computer science
We propose a natural realization of a linear seesaw model with a hidden gauge symmetry, in which $SU(2)_L$ triplet fermions and one extra Higgs singlet, doublet, and quartet scalar are introduced. A small neutrino mass can be realized through two suppression factors: the small vacuum expectation value of the quartet scalar and the inverse of the Dirac mass for the triplet. After formulating the neutrino mass matrix, we discuss the collider phenomenology of the model, focusing on signals from the production of exotic charged particles at the LHC.
high energy physics phenomenology
Complex network reconstruction is a hot topic in many fields. Currently, the most popular data-driven reconstruction framework is based on lasso. However, it is found that, in the presence of noise, lasso loses efficiency for weighted networks. This paper builds a new framework to cope with this problem. The key idea is to employ a series of linear regression problems to model the relationship between network nodes, and then to use an efficient variational Bayesian algorithm to infer the unknown coefficients. The numerical experiments conducted on both synthetic and real data demonstrate that the new method outperforms lasso with regard to both reconstruction accuracy and running speed.
statistics
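The network-reconstruction abstract above describes regressing each node on the remaining nodes and inferring the unknown coefficients with a variational Bayesian algorithm. The sketch below is a simplified stand-in, not the paper's method: it uses scikit-learn's BayesianRidge (evidence-maximization Bayesian linear regression) for each node and prunes small coefficients; the threshold and data shapes are hypothetical.

    import numpy as np
    from sklearn.linear_model import BayesianRidge

    def reconstruct_network(X, threshold=1e-2):
        """Infer a weighted adjacency matrix from node time series.

        X : array of shape (T, n) -- T observations of n node states (assumed data).
        Each node i is regressed on all other nodes; small coefficients are pruned.
        """
        T, n = X.shape
        W = np.zeros((n, n))
        for i in range(n):
            others = [j for j in range(n) if j != i]
            model = BayesianRidge()        # Bayesian linear regression, stand-in for the paper's VB step
            model.fit(X[:, others], X[:, i])
            W[i, others] = model.coef_
        W[np.abs(W) < threshold] = 0.0     # prune negligible couplings
        return W

    # toy usage with synthetic data
    rng = np.random.default_rng(0)
    W_hat = reconstruct_network(rng.normal(size=(200, 10)))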
In this paper, we investigate various properties of strong and weak twisted Bruhat orders on a Coxeter group. In particular, we prove that any twisted strong Bruhat order on an affine Weyl group is locally finite, strengthening a result of Dyer in J. Algebra, 163, 861--879 (1994). We also show that for a non-finite and non-cofinite biclosed set $B$ in the positive system of an affine root system with rank greater than 2, the set of elements having a fixed $B$-twisted length is infinite. This implies that the twisted strong and weak Bruhat orders have an infinite antichain in those cases. Finally, we show that twisted weak Bruhat order can be applied to the study of the tope poset of an infinite oriented matroid arising from an affine root system.
mathematics
To better reconstruct historical sea surface temperatures (SSTs), it is important to construct a good calibration model for the associated proxies. In this paper, we introduce a new model for alkenone (${\rm{U}}_{37}^{\rm{K}'}$) based on the heteroscedastic Gaussian process (GP) regression method. Our nonparametric approach not only accommodates the varying noise pattern across SSTs but also includes a Bayesian method for classifying potential outliers.
statistics
The comprehensive depiction of the many-body effects governing nanoconfined electrolytes is an essential step for the conception of nanofluidic devices with optimized performance. By incorporating self-consistently multivalent charges into the Poisson-Boltzmann equation dressed by a background monovalent salt, we investigate the impact of strong-coupling electrostatics on the nanofluidic transport of electrolyte mixtures. We find that the experimentally observed negative streaming currents in anionic nanochannels originate from the collective effect of the Cl attraction by the interfacially adsorbed multivalent cations, and the no-slip layer reducing the hydrodynamic contribution of these cations to the net current. The like-charge current condition emerging from this collective mechanism is shown to be the reversal of the average potential within the no-slip zone. Applying the formalism to surface-coated membrane nanoslits located in the giant dielectric permittivity regime, we reveal a new type of streaming current activated by attractive polarization forces. Under the effect of these forces, the addition of multivalent ions into the KCl solution sets a charge separation and generates a counterion current between the neutral slit walls. The adjustability of the current characteristics solely via the valency and amount of the added multivalent ions identifies the underlying process as a promising mechanism for nanofluidic ion separation purposes.
condensed matter
We define a class of Separation Logic formulae whose entailment problem (given formulae $\phi, \psi_1, \ldots, \psi_n$, is every model of $\phi$ a model of some $\psi_i$?) is 2EXPTIME-complete. The formulae in this class are existentially quantified separating conjunctions involving predicate atoms, interpreted by the least sets of store-heap structures that satisfy a set of inductive rules, which is also part of the input to the entailment problem. Previous work considers established sets of rules, meaning that every existentially quantified variable in a rule must eventually be bound to an allocated location, i.e. one from the domain of the heap. In particular, this guarantees that each structure has treewidth bounded by the size of the largest rule in the set. In contrast, here we show that establishment, although sufficient for decidability (alongside two other natural conditions), is not necessary, by providing a condition, called equational restrictedness, which applies syntactically to (dis-)equalities. The entailment problem is more general in this case, because equationally restricted rules define richer classes of structures, of unbounded treewidth. In this paper we show that (1) every established set of rules can be converted into an equationally restricted one and (2) the entailment problem is 2EXPTIME-complete in the latter case, thus matching the complexity of entailments for established sets of rules.
computer science
We show that time crystal phases, which are known to exist for disorder-based many-body localized systems, also appear in systems where localization is due to strong magnetic field gradients. Specifically, we study a finite Heisenberg spin chain in the presence of a gradient field, which can be realized experimentally in quantum dot systems using micromagnets or nuclear spin polarization. Our numerical simulations reveal time crystalline order over a broad range of realistic quantum dot parameters, as evidenced by the long-time preservation of spin expectation values and the asymptotic form of the mutual information. We also consider the undriven system and present several diagnostics for many-body localization that are complementary to those recently studied. Our results show that these non-ergodic phases should be realizable in modest-sized quantum dot spin arrays using only demonstrated experimental capabilities.
quantum physics
The prerequisite of the chiral magnetic effect (CME) is the existence of a net chiral charge in the quark-gluon plasma (QGP). If we assume that the number of surplus quarks (or surplus anti-quarks) that contribute to the net chiral charge is proportional to the number of quarks (or anti-quarks), the CME will induce a flow of the quark chemical potential and will cause the QGP to form three distinct layers along the strong magnetic field, characterised by distinctive compositions of quark chemical potentials. This phenomenon may bring new observable outcomes, including the presence of different CEPs for the u and d quarks, and may help us test the existence of the CME.
high energy physics phenomenology
We demonstrate experimentally the feasibility of applying reinforcement learning (RL) in flow control problems by automatically discovering active control strategies without any prior knowledge of the flow physics. We consider the turbulent flow past a circular cylinder with the aim of reducing the cylinder drag force or maximizing the power gain efficiency by properly selecting the rotational speed of two small diameter cylinders, parallel to and located downstream of the larger cylinder. Given properly designed rewards and noise reduction techniques, after tens of towing experiments, the RL agent could discover the optimal control strategy, comparable to the optimal static control. While RL has been found to be effective in recent computer flow simulation studies, this is the first time that its effectiveness is demonstrated experimentally, paving the way for exploring new optimal active flow control strategies in complex fluid mechanics applications.
physics
Capturing visual images with a hyperspectral camera has been successfully applied to many areas due to its narrow-band imaging technology. Hyperspectral reconstruction from RGB images denotes the reverse process of hyperspectral imaging, achieved by discovering an inverse response function. Current works mainly map RGB images directly to the corresponding spectra but do not consider context information explicitly. Moreover, the use of an encoder-decoder pair in current algorithms leads to loss of information. To address these problems, we propose a 4-level Hierarchical Regression Network (HRNet) with a PixelShuffle layer as inter-level interaction. Furthermore, we adopt a residual dense block to remove artifacts of real-world RGB images and a residual global block to build an attention mechanism for enlarging the receptive field. We evaluate the proposed HRNet against other architectures and techniques by participating in the NTIRE 2020 Challenge on Spectral Reconstruction from RGB Images. The HRNet is the winning method of track 2 - real world images and ranks 3rd on track 1 - clean images. Please visit the project web page https://github.com/zhaoyuzhi/Hierarchical-Regression-Network-for-Spectral-Reconstruction-from-RGB-Images to try our codes and pre-trained models.
electrical engineering and systems science
Heavy-ion collisions at BNL's Relativistic Heavy-Ion Collider (RHIC) and CERN's Large Hadron Collider (LHC) provide strong evidence for the formation of a quark-gluon plasma, with temperatures extracted from relativistic viscous hydrodynamic simulations shown to be well above the transition temperature from hadron matter. How the strongly correlated quark-gluon matter forms in a heavy-ion collision, its properties off-equilibrium, and the thermalization process in the plasma, are outstanding problems in QCD. We review here the theoretical progress in this field in weak coupling QCD effective field theories and in strong coupling holographic approaches based on gauge-gravity duality. We outline the interdisciplinary connections of different stages of the thermalization process to non-equilibrium dynamics in other systems across energy scales ranging from inflationary cosmology, to strong field QED, to ultracold atomic gases, with emphasis on the universal dynamics of non-thermal and of hydrodynamic attractors. We survey measurements in heavy-ion collisions that are sensitive to the early non-equilibrium stages of the collision and discuss the potential for future measurements. We summarize the current state of the art in thermalization studies and identify promising avenues for further progress.
high energy physics theory
We measure the stress state in and around a deformation nanotwin in a twinning-induced plasticity (TWIP) steel. Using four-dimensional scanning transmission electron microscopy (4D-STEM), we measure the elastic strain field in a 68.2-by-83.1 nm area of interest with a scan step of 0.36 nm and a diffraction limit resolution of 0.73 nm. The stress field in and surrounding the twin matches the form expected from analytical theory and is on the order of 15 GPa, close to the theoretical strength of the material. We infer that the measured back-stress limits twin thickening, providing a rationale for why TWIP steel twins remain thin during deformation, continuously dividing grains to give substantial work hardening. Our results support modern mechanistic understanding of the influence of twinning on crack propagation and embrittlement in TWIP steels.
condensed matter
Quantum Key Distribution (QKD) allows unconditionally secure communication based on the laws of quantum mechanics rather than assumptions about computational hardness. Optimizing the operation parameters of a given QKD implementation is indispensable in order to achieve high secure key rates. So far, there exists no model that accurately describes entanglement-based QKD with continuous-wave pump lasers. For the first time, we analyze the underlying mechanisms for QKD with temporally uniform pair-creation probabilities and develop a simple but accurate model to calculate optimal trade-offs for maximal secure key rates. In particular, we find an optimization strategy of the source brightness for given losses and detection-time resolution. All experimental parameters utilized by the model can be inferred directly in standard QKD implementations, and no additional assessment of device performance is required. Comparison with experimental data shows the validity of our model. Our results yield a tool to determine optimal operation parameters for already existing QKD systems, to plan a full QKD implementation from scratch, and to determine fundamental key rate and distance limits of given connections.
quantum physics
We demonstrate a smart laser-diffraction analysis technique for particle mixture identification. We retrieve information about the size, geometry, and ratio concentration of two-component heterogeneous particle mixtures with an efficiency above 92%. In contrast to commonly-used laser diffraction schemes -- in which a large number of detectors is needed -- our machine-learning-assisted protocol makes use of a single far-field diffraction pattern, contained within a small angle ($\sim 0.26^{\circ}$) around the light propagation axis. Because of its reliability and ease of implementation, our work may pave the way towards the development of novel smart identification technologies for sample classification and particle contamination monitoring in industrial manufacturing processes.
electrical engineering and systems science
Deep Convolutional Neural Networks (CNNs) such as Residual Networks (ResNets) have been used successfully for many computer vision tasks, but are difficult to scale to 3D volumetric medical data. Memory is increasingly often the bottleneck when training 3D Convolutional Neural Networks (CNNs). Recently, invertible neural networks have been applied to significantly reduce the activation memory footprint when training neural networks with backpropagation, thanks to invertible functions that allow retrieving their input from their output without storing intermediate activations in memory. Among many successful network architectures, the 3D Unet has been established as a standard architecture for volumetric medical segmentation. Thus, we choose the 3D Unet as a non-invertible baseline and extend it with invertible residual layers. In this paper, we propose two versions of the invertible Residual Network, namely the Partially Invertible Residual Network (Partially-InvRes) and the Fully Invertible Residual Network (Fully-InvRes). In Partially-InvRes, the invertible residual layer is defined by a technique called additive coupling, whereas in Fully-InvRes, both invertible upsampling and downsampling operations are learned based on squeezing (known as pixel shuffle). Furthermore, to avoid overfitting due to limited training data, a variational auto-encoder (VAE) branch is added to reconstruct the input volumetric data itself. Our results indicate that by using partially/fully invertible networks as the central workhorse in volumetric segmentation, we not only reduce the memory overhead but also achieve segmentation performance comparable to the non-invertible 3D Unet. We have demonstrated the proposed networks on various volumetric datasets such as iSeg 2019 and BraTS 2020.
electrical engineering and systems science
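Additive coupling, mentioned in the abstract above as the construction behind Partially-InvRes, makes a layer exactly invertible: one half of the channels passes through unchanged while the other half receives an additive update computed from the first half, so intermediate activations need not be stored for backpropagation. A minimal NumPy sketch with an arbitrary toy residual function F (not the paper's network) follows.

    import numpy as np

    def coupling_forward(x1, x2, F):
        """Additive coupling: y1 = x1, y2 = x2 + F(x1). Exactly invertible."""
        return x1, x2 + F(x1)

    def coupling_inverse(y1, y2, F):
        """Recover the input from the output without stored activations."""
        return y1, y2 - F(y1)

    # toy residual function standing in for a small convolutional block
    F = lambda x: 0.5 * np.tanh(x)

    x1, x2 = np.random.randn(4, 8), np.random.randn(4, 8)
    y1, y2 = coupling_forward(x1, x2, F)
    x1_rec, x2_rec = coupling_inverse(y1, y2, F)
    assert np.allclose(x1, x1_rec) and np.allclose(x2, x2_rec)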
First-principles-based modeling of phonon dynamics and transport using density functional theory and the Boltzmann transport equation has proven powerful in predicting the thermal conductivity of crystalline materials, but it remains infeasible for modeling complex crystals and disordered solids due to the prohibitive computational cost of capturing the disordered structure, especially when the quasiparticle "phonon" model breaks down. Recently, machine-learning regression algorithms have shown great promise for building high-accuracy potential fields for atomistic modeling with length and time scales far beyond those achievable by first-principles calculations. In this work, using both crystalline and amorphous silicon as examples, we develop machine-learning-based potential fields for predicting thermal conductivity. The machine-learning-based interatomic potential is derived from density functional theory calculations by stochastically sampling the potential energy surface in the configurational space. The thermal conductivities of both amorphous and crystalline silicon are then calculated using equilibrium molecular dynamics, and they agree well with experimental measurements. This work documents the procedure for training machine-learning-based potentials for modeling thermal conductivity, and demonstrates that such potentials can be a promising tool for modeling the thermal conductivity of both crystalline and amorphous materials with strong disorder.
condensed matter
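The abstract above computes thermal conductivity from equilibrium molecular dynamics. A standard route for that (assumed here; the abstract does not spell out the estimator) is the Green-Kubo relation, which integrates the heat-flux autocorrelation function. A minimal NumPy sketch in SI units, with a hypothetical heat-flux time series as input:

    import numpy as np

    def green_kubo_kappa(J, dt, volume, temperature, kB=1.380649e-23):
        """Thermal conductivity from a heat-flux time series via the Green-Kubo relation.

        J : array (n_steps, 3) -- instantaneous heat flux vector in W/m^2 (assumed input).
        Returns the running integral kappa(t); the plateau value is the estimate.
        """
        n = J.shape[0]
        # heat-flux autocorrelation function <J(0).J(t)>, averaged over time origins
        acf = np.zeros(n // 2)
        for lag in range(n // 2):
            acf[lag] = np.mean(np.sum(J[: n - lag] * J[lag:], axis=1))
        prefactor = volume / (3.0 * kB * temperature**2)
        return prefactor * np.cumsum(acf) * dt   # running Green-Kubo integral, W/(m K)

In practice the running integral is averaged over several independent trajectories and a plateau region is identified before quoting a single value.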
Many-body states described by a Schr\"{o}dinger equation include states of overlapping waves with non-vanishing interaction energies. These peculiar states, formed in many-body transitions, remain in asymptotic regions and lead to a new component of the transition probability. The probability is computed rigorously following von Neumann's fundamental principle of quantum mechanics with an S-matrix that is defined with normalized functions instead of plane waves. This includes an intriguing correction term to Fermi's golden rule, in which a visible energy is smaller than the initial energy, and reveals macroscopic quantum phenomena for light particles. Processes in Quantum Electrodynamics are analyzed and sizable corrections are found in dilute systems. The results suggest that these states play important roles in natural phenomena, and that verification in the laboratory would be possible with recent advanced technology.
high energy physics phenomenology
Hybrid AC/DC distribution systems are becoming a popular means to accommodate the increasing penetration of distributed energy resources and flexible loads. This paper proposes a distributed and robust state estimation (DRSE) method for hybrid AC/DC distribution systems using multiple sources of data. In the proposed distributed implementation framework, a unified robust linear state estimation model is derived for each AC and DC region, where the regions are connected via AC/DC converters and only limited information exchange is needed. To enhance the estimation accuracy of areas with low measurement coverage, a deep neural network (DNN) is used to extract hidden statistical information about the system and to derive nodal power injections that keep up with the real-time measurement update rate. This provides a way to integrate smart meter data, SCADA measurements, and zero injections for state estimation. Simulations on two hybrid AC/DC distribution systems show that the proposed DRSE incurs only a slight accuracy loss from the linearized formulation but automatically suppresses bad data, while also improving computational efficiency.
electrical engineering and systems science
It has been suggested in recent work that the Page curve of Hawking radiation can be recovered using computations in semi-classical gravity provided one allows for "islands" in the gravity region of quantum systems coupled to gravity. The explicit computations so far have been restricted to black holes in two-dimensional Jackiw-Teitelboim gravity. In this note, we numerically construct a five-dimensional asymptotically AdS geometry whose boundary realizes a four-dimensional Hartle-Hawking state on an eternal AdS black hole in equilibrium with a bath. We also numerically find two types of extremal surfaces: ones that correspond to having or not having an island. The version of the information paradox involving the eternal black hole exists in this setup, and it is avoided by the presence of islands. Thus, recent computations exhibiting islands in two-dimensional gravity generalize to higher dimensions as well.
high energy physics theory
We present a topological approach to the input-output relations of photonic driven-dissipative lattices acting as directional amplifiers. Our theory relies on a mapping from the optical non-Hermitian coupling matrix to an effective topological insulator Hamiltonian. This mapping is based on the singular value decomposition of non-Hermitian coupling matrices, whose inverse matrix determines the linear input-output response of the system. In topologically non-trivial regimes, the input-output response of the lattice is dominated by singular vectors with zero singular values that are the equivalent of zero-energy states in topological insulators, leading to directional amplification of a coherent input signal. In such a topological amplification regime, our theoretical framework allows us to fully characterize the amplification properties of the quantum device such as gain, bandwidth, added noise, and noise-to-signal ratio. We exemplify our ideas in a one-dimensional non-reciprocal photonic lattice, for which we derive fully analytical predictions. We show that the directional amplification is near quantum-limited with a gain growing exponentially with system size, $N$, while the noise-to-signal ratio is suppressed as $1/\sqrt{N}$. This points to interesting applications of our theory for quantum signal amplification and single-photon detection.
quantum physics
We have designed honeycomb lattices for microwave photons with a frequency imbalance between the two sites in the unit cell. This imbalance is the equivalent of a mass term that breaks the lattice inversion symmetry. At the interface between two lattices with opposite imbalance, we observe topological valley edge states. By imaging the spatial dependence of the modes along the interface, we obtain their dispersion relation that we compare to the predictions of an ab initio tight-binding model describing our microwave photonic lattices.
physics
A high-statistics determination of the differential cross section of elastic muon-electron scattering as a function of the transferred four-momentum squared, $d \sigma_{el}(\mu e \to \mu e)/dq^2$, has been argued to provide an effective constraint to the hadronic contribution to the running of the fine-structure constant, $\Delta \alpha_{had}$, a crucial input for precise theoretical predictions of the anomalous magnetic moment of the muon. An experiment called ``MUonE'' is being planned at the north area of CERN for that purpose. We consider the geometry of the detector proposed by the MUonE collaboration and offer a few suggestions on the layout of the passive target material and on the placement of silicon strip sensors, based on a fast simulation of elastic muon-electron scattering events and the investigation of a number of possible solutions for the detector geometry.
physics
In this work, we explore the possibility of decoding Imagined Speech brain waves using machine learning techniques. We propose covariance matrices of Electroencephalogram channels as input features, projection of the covariance matrices onto their tangent space to obtain feature vectors, principal component analysis for dimension reduction of these vectors, an artificial feed-forward neural network as the classification model, and bootstrap aggregation for creating an ensemble of neural network models. After the classification, two different Finite State Machines are designed that create an interface for controlling a computer system using an Imagined Speech-based BCI system. The proposed approach is able to decode the Imagined Speech signal with a maximum mean classification accuracy of 85% on the binary classification task of one long word versus one short word. We also show that our proposed approach is able to differentiate between imagined speech brain signals and rest-state brain signals with a maximum mean classification accuracy of 94%. We compared our proposed method with other approaches for decoding imagined speech and show that our approach performs equivalently to the state-of-the-art approach on decoding long vs. short words and outperforms it significantly on the other two tasks of decoding three short words and three vowels, with average margins of 11% and 9%, respectively. We also obtain an information transfer rate of 21 bits per minute when using an Imagined Speech-based system to operate a computer. These results show that the proposed approach is able to decode a wide variety of imagined speech signals without any human-designed features.
electrical engineering and systems science
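The decoding pipeline in the abstract above (channel covariances, tangent-space projection, PCA, a feed-forward network, and bagging) can be sketched with standard scientific-Python tools. The snippet below is an illustrative simplification, not the authors' implementation: it uses the arithmetic mean of the covariances as the tangent-space reference point (Riemannian pipelines typically use the geometric mean), and the layer sizes and component counts are hypothetical.

    import numpy as np
    from scipy.linalg import logm, fractional_matrix_power
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.ensemble import BaggingClassifier
    from sklearn.pipeline import make_pipeline

    def tangent_space_features(covs):
        """Project SPD covariance matrices to the tangent space at a reference point.

        covs : array (n_trials, c, c) of EEG channel covariance matrices (assumed input).
        """
        c_ref = covs.mean(axis=0)                        # arithmetic mean as reference (simplification)
        w = fractional_matrix_power(c_ref, -0.5)         # whitening by the reference point
        iu = np.triu_indices(covs.shape[1])
        feats = [np.real(logm(w @ c @ w)[iu]) for c in covs]   # vectorized matrix-log features
        return np.array(feats)

    def build_classifier():
        mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
        return make_pipeline(PCA(n_components=20),
                             BaggingClassifier(mlp, n_estimators=10))   # bootstrap-aggregated MLPs

    # usage idea: covs = np.array([np.cov(trial) for trial in X])  # X: (n_trials, channels, samples)
    #             build_classifier().fit(tangent_space_features(covs), y)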
The Kahane--Salem--Zygmund inequality for multilinear forms in $\ell_{\infty}$ spaces is a probabilistic result which claims that, for all positive integers $m,n_{1},...,n_{m}$, there exists an $m$-linear form $A\colon\ell_{\infty }^{n_{1}}\times\cdots\times\ell_{\infty}^{n_{m}}\longrightarrow\mathbb{K}$ \ ($\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$) of the type \[ A(z^{(1)},...,z^{(m)})=\sum_{j_{1}=1}^{n_{1}}\cdots\sum_{j_{m}=1}^{n_{m}}\pm z_{j_{1}}^{\left( 1\right) }\cdots z_{j_{m}}^{\left( m\right) }\text{,} \] satisfying \[ \Vert A\Vert\leq C_{m}\max\left\{ n_{1}^{1/2},\ldots,n_{m}^{1/2}\right\} {\textstyle\prod\limits_{j=1}^{m}}n_{j}^{1/2}\text{,} \] for \[ C_{m}\leq\kappa\sqrt{m\log m}\sqrt{m!} \] and a certain $\kappa>0.$ Our main result shows that given any $\epsilon>0$ and any positive integer $m,$ there exists a positive integer $N$ such that \[ C_{m}<1+\epsilon\text{,} \] when we consider $n_{1},...,n_{m}>N$. We also provide the same asymptotic bound for the constant of a related inequality proved by G. Bennett in 1977. Applications to Berlekamp's switching game are given.
mathematics
Using supersymmetric localization we compute the free energy and BPS Wilson loop vacuum expectation values for planar maximally supersymmetric Yang-Mills theory on $S^d$ in the strong coupling limit for $2\leq d<6$. The same calculation can also be performed in supergravity using the recently found spherical brane solutions. We find excellent agreement between the two sets of results. This constitutes a non-trivial precision test of holography in a non-conformal setting. The free energy of maximal SYM on $S^6$ diverges in the strong coupling limit which might signify the onset of little string theory. We show how this divergence can be regularized both in QFT and in supergravity. We also consider $d=7$ with a small negative 't Hooft coupling and show that the free energy and Wilson loop vacuum expectation value agree with the results from supergravity after addressing some subtleties.
high energy physics theory
Measurements are presented of the reduction of signal output due to radiation damage for plastic scintillator tiles used in the hadron endcap (HE) calorimeter of the CMS detector. The tiles were exposed to particles produced in proton-proton (pp) collisions at the CERN LHC with a center-of-mass energy of 13 TeV, corresponding to a delivered luminosity of 50 fb$^{-1}$. The measurements are based on readout channels of the HE that were instrumented with silicon photomultipliers, and are derived using data from several sources: a laser calibration system, a movable radioactive source, as well as hadrons and muons produced in pp collisions. Results from several irradiation campaigns using $^{60}$Co sources are also discussed. The damage is presented as a function of dose rate. Within the range of these measurements, for a fixed dose the damage increases with decreasing dose rate.
physics
In scenarios where multiple speakers talk at the same time, it is important to be able to identify the talkers accurately. This paper presents an end-to-end system that integrates speech source extraction and speaker identification, and proposes a new way to jointly optimize these two parts by max-pooling the speaker predictions along the channel dimension. Residual attention permits us to learn spectrogram masks that are optimized for the purpose of speaker identification, while residual forward connections permit dilated convolution with a sufficiently large context window to guarantee correct streaming across syllable boundaries. End-to-end training results in a system that recognizes one speaker in a two-speaker broadcast speech mixture with 99.9% accuracy and both speakers with 93.9% accuracy, and that recognizes all speakers in three-speaker scenarios with 81.2% accuracy.
electrical engineering and systems science
We develop a theory of edge states based on the Hermiticity of Hamiltonian operators for tight-binding models defined on lattices with boundaries. We describe Hamiltonians using shift operators which serve as differential operators in continuum theories. It turns out that such Hamiltonian operators are not necessarily Hermitian on lattices with boundaries, which is due to the boundary terms associated with the summation by parts. The Hermiticity of Hamiltonian operators leads to natural boundary conditions, and for models with nearest-neighbor (NN) hoppings only, there are reference states that satisfy the Hermiticity and boundary conditions simultaneously. Based on such reference states, we develop a Bloch-type theory for edge states of NN models on a half-plane. This enables us to extract Hamiltonians describing edge-states at one end, which are separated from the bulk contributions. It follows that we can describe edge states at the left and right ends separately by distinct Hamiltonians for systems of cylindrical geometry. We show various examples of such edge state Hamiltonians (ESHs), including Hofstadter model, graphene model, and higher-order topological insulators (HOTIs).
condensed matter
Generative adversarial networks (GANs) are among the most powerful generative models, but they always require a large and balanced dataset for training. Traditional GANs are not applicable to generating minority-class images in a highly imbalanced dataset. Balancing GAN (BAGAN) was proposed to mitigate this problem, but it is unstable when images in different classes look similar, e.g. flowers and cells. In this work, we propose a supervised autoencoder with an intermediate embedding model to disperse the labeled latent vectors. With the improved autoencoder initialization, we also build an architecture of BAGAN with gradient penalty (BAGAN-GP). Our proposed model overcomes the instability of the original BAGAN and converges faster to high-quality generations. Our model achieves high performance on an imbalanced scaled-down version of MNIST Fashion, CIFAR-10, and one small-scale medical image dataset.
computer science
The measurement of the electric dipole moment in storage rings can potentially exceed the sensitivity of tests with neutral systems. The spin dynamics under such conditions is described by the Bargmann-Michel-Telegdi equation. It can be derived in the semiclassical approximation under several assumptions, one of which is a vanishing pseudoscalar bilinear. However, many promising extensions of the standard model consider scalar-pseudoscalar couplings which assume a nonzero electron pseudoscalar. We re-derive the spin precession equation under conditions that do not assume a vanishing pseudoscalar. This leads to a correction term that might be required for matching storage ring measurements with QFT evaluations.
high energy physics phenomenology
The application of standard sufficient dimension reduction methods for reducing the dimension of the predictor space without losing regression information requires inverting the covariance matrix of the predictors. This poses a number of challenges, especially when analyzing high-dimensional data sets in which the number of predictors $\mathit{p}$ is much larger than the number of samples $n,~(n\ll p)$. A new covariance estimator, called the \textit{Maximum Entropy Covariance} (MEC), that addresses the loss of covariance information when similar covariance matrices are linearly combined using the \textit{Maximum Entropy} (ME) principle is proposed in this work. Benefitting naturally from slicing or discretizing the range of the response variable $y$ into \textit{H} non-overlapping categories, $\mathit{h_{1},\ldots ,h_{H}}$, MEC first combines covariance matrices arising from samples in each $y$ slice $\mathit{h\in H}$ and then selects the one that maximizes entropy under the principle of maximum uncertainty. The MEC estimator is then formed from a convex mixture of such an entropy-maximizing sample covariance estimate $S_{\mbox{mec}}$ and the pooled sample covariance estimate $\mathbf{S}_{\mathit{p}}$ across the $\mathit{H}$ slices, without requiring time-consuming covariance optimization procedures. MEC deals directly with the singularity and instability of sample group covariance estimates in both regression and classification problems. The efficiency of the MEC estimator is studied with existing sufficient dimension reduction methods such as \textit{Sliced Inverse Regression} (SIR) and \textit{Sliced Average Variance Estimator} (SAVE), as demonstrated on both classification and regression problems using real-life leukemia cancer data and customers' electricity load profiles from smart-meter data sets, respectively.
statistics
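A compact sketch of the Maximum Entropy Covariance idea described above: slice the response, compute a sample covariance per slice, keep the slice covariance with the largest Gaussian entropy (i.e. largest log-determinant), and mix it with the pooled covariance. Slicing by quantiles and using a fixed mixing weight alpha are simplifying assumptions, not details taken from the paper.

    import numpy as np

    def mec_covariance(X, y, n_slices=5, alpha=0.5):
        """Maximum Entropy Covariance sketch.

        X : (n, p) predictors, y : (n,) response; alpha is the (assumed) mixing weight.
        """
        edges = np.quantile(y, np.linspace(0, 1, n_slices + 1))
        slice_id = np.clip(np.searchsorted(edges, y, side="right") - 1, 0, n_slices - 1)
        covs, sizes = [], []
        for h in range(n_slices):
            Xh = X[slice_id == h]
            covs.append(np.cov(Xh, rowvar=False))
            sizes.append(len(Xh))
        # the entropy of a Gaussian grows with the log-determinant of its covariance
        entropies = [np.linalg.slogdet(C)[1] for C in covs]
        S_mec = covs[int(np.argmax(entropies))]
        S_pooled = sum(w * C for w, C in zip(sizes, covs)) / sum(sizes)
        return alpha * S_mec + (1.0 - alpha) * S_pooled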
A new reference design is introduced for holographic coherent diffraction imaging. It consists of two references - "block" and "pinhole" shaped regions - placed adjacent to the imaging specimen. An efficient recovery algorithm is provided for the resulting holographic phase retrieval problem, which is based on solving a structured, overdetermined linear system. Analysis of the expected recovery error on noisy data, which is contaminated by Poisson shot noise, shows that this simple modification synergizes the individual references and hence leads to uniformly superior performance over single-reference schemes. Numerical experiments on simulated data confirm the theoretical prediction, and the proposed dual-reference scheme achieves a smaller recovery error than leading single-reference schemes.
electrical engineering and systems science
In Nature Human Behaviour 3/2019, an article was published entitled "Large-scale quantitative profiling of the Old English verse tradition" dealing with (besides other things) the question of the authorship of the Old English poem Beowulf. The authors provide various textual measurements that they claim present "serious obstacles to those who would advocate for composite authorship or scribal recomposition" (p. 565). In what follows we raise doubts about their methods and address serious errors in both their data and their code. We show that reliable stylometric methods actually identify significant stylistic heterogeneity in Beowulf. We discuss each method separately, following the order of the original article.
statistics
The pulsating hydrogen-atmosphere white dwarf star G 117-B15A has been observed since 1974. Its main pulsation period at 215.19738823(63) s, observed in optical light curves, varies by only (5.12+/-0.82)x10^{-15} s/s and shows no glitches of the kind seen in pulsars. The observed rate of period change corresponds to a change of the pulsation period by 1 s in 6.2 million years. We demonstrate that this exceptional optical clock can continue to put stringent limits on fundamental physics, such as constraints on interactions from hypothetical dark matter particles, as well as to search for the presence of external substellar companions.
astrophysics
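The quoted timescale in the abstract above follows directly from the measured rate of period change: accumulating a full second of period change at $\dot{P} = 5.12\times10^{-15}$ s/s takes
\[ t = \frac{1\ \mathrm{s}}{5.12\times 10^{-15}\ \mathrm{s/s}} \approx 1.95\times 10^{14}\ \mathrm{s} \approx 6.2\ \mathrm{Myr}, \]
consistent with the stated figure of 1 s per 6.2 million years.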
Hybrid X-ray and magnetic resonance (MR) imaging holds large potential for interventional medical imaging applications due to the broad variety of contrasts in MRI combined with the fast imaging of X-ray-based modalities. To fully utilize the potential of the vast amount of existing image enhancement techniques, the corresponding information from both modalities must be present in the same domain. For image-guided interventional procedures, X-ray fluoroscopy has proven to be the modality of choice. Synthesizing one modality from another in this case is an ill-posed problem due to ambiguous signal and overlapping structures in projective geometry. To take on these challenges, we present a learning-based solution for MR to X-ray projection-to-projection translation. We propose an image generator network that focuses on high representation capacity in higher-resolution layers to allow for accurate synthesis of fine details in the projection images. Additionally, a weighting scheme in the loss computation that favors high-frequency structures is proposed to focus on the important details and contours in projection imaging. The proposed extensions prove valuable in generating X-ray projection images with natural appearance. Our approach achieves a deviation from the ground truth of only $6$% and a structural similarity measure of $0.913\,\pm\,0.005$. In particular, the high-frequency weighting assists in generating projection images with sharp appearance and reduces erroneously synthesized fine details.
electrical engineering and systems science
The objective of this study is to understand the dynamics of freely evolving particle suspensions over a wide range of particle-to-fluid density ratios. The dynamics of particle suspensions are characterized by the average momentum equation, where the dominant contribution to the average momentum transfer between particles and fluid is the average drag force. In this study, the average drag force is quantified using particle-resolved direct numerical simulation in a canonical problem: a statistically homogeneous suspension where an imposed mean pressure gradient establishes a steady mean slip velocity between the phases. The effects of particle velocity fluctuations, particle clustering, and mobility of particles are studied separately. It is shown that the competing effects of these factors could decrease, increase, or keep constant the drag of freely evolving suspensions in comparison to fixed beds at different flow conditions. It is also shown that the effects of particle clustering and particle velocity fluctuations are not independent. Finally, a correlation for interphase drag force in terms of volume fraction, Reynolds number, and density ratio is proposed. Two different approaches (symbolic regression and predefined functional forms) are used to develop the drag correlation. Since this drag correlation has been inferred from simulations of particle suspensions, it includes the effect of the motion of the particles. This drag correlation can be used in computational fluid dynamics simulations of particle-laden flows that solve the average two-fluid equations where the accuracy of the drag law affects the prediction of overall flow behavior.
physics
A rapid growth in spatial open datasets has led to a huge demand for regression approaches accommodating spatial and non-spatial effects in big data. Regression model selection is particularly important for stably estimating flexible regression models. However, conventional methods can be slow for large samples. Hence, we develop a fast and practical model-selection approach for spatial regression models, focusing on the selection of coefficient types that include constant, spatially varying, and non-spatially varying coefficients. A pre-processing approach, which replaces data matrices with small inner products through dimension reduction, dramatically accelerates the computation of model selection. Numerical experiments show that our approach selects the model accurately and computationally efficiently, highlighting the importance of model selection in the spatial regression context. The present approach is then applied to open data to investigate local factors affecting crime in Japan. The results suggest that our approach is useful not only for selecting factors influencing crime risk but also for predicting crime events. This scalable model selection will be key to appropriately specifying flexible and large-scale spatial regression models in the era of big data. The developed model-selection approach is implemented in the R package spmoran.
statistics
Sources of non-classical light are of paramount importance for future applications in quantum science and technology such as quantum communication, quantum computation and simulation, quantum sensing and quantum metrology. In this review we discuss the fundamentals and recent progress in the generation of single photons, entangled photon pairs and photonic cluster states using semiconductor quantum dots. Specific fundamentals which are discussed are a detailed quantum description of light, properties of semiconductor quantum dots and light-matter interactions. This includes a framework for the dynamic modelling of non-classical light generation and two-photon interference. Recent progress will be discussed in the generation of non-classical light for off-chip applications as well as implementations for scalable on-chip integration.
quantum physics
A Brownian dynamics algorithm is used to describe the static behaviour of associative polymer solutions. Predictions for the fractions of stickers bound by intra-chain and inter-chain association, as a function of system parameters, such as the number of stickers, the number of monomers between stickers, the solvent quality, and concentration are obtained. A systematic comparison with the scaling relations predicted by the mean-field theory of Dobrynin (Macromolecules, 37, 3881, 2004) is carried out. Different regimes of scaling behaviour are identified depending on the monomer concentration, the density of stickers on a chain, and the solvent quality for backbone monomers. Simulation results validate the predictions of the mean-field theory across a wide range of parameter values in all the scaling regimes. The value of the des Cloizeaux exponent proposed by Dobrynin for sticky polymer solutions, is shown to lead to a collapse of simulation data for all the scaling relations considered here. Three different signatures for the characterisation of gelation are identified, with each leading to a different value of the concentration at the sol-gel transition. The modified Flory-Stockmayer expression is found to be validated by simulations for all three gelation signatures. Simulation results confirm the prediction of scaling theory for the gelation line that separates sol and gel phases, when the modified Flory-Stockmayer expression is used. Phase separation is found to occur with increasing concentration for systems in which the backbone monomers are under theta-solvent conditions, and is shown to coincide with a breakdown in the predictions of scaling theory.
condensed matter
From systematic analysis of the high pulsed magnetic field resistance data of La$_{2-x}$Sr$_x$CuO$_{4}$ thin films, we extract an experimental phase diagram for several doping values ranging from the very underdoped to the very overdoped regimes. Our analysis highlights a competition between charge density waves and superconductivity which is ubiquitous between $x=0.08$ and $x=0.19$ and produces the previously observed double step transition. When suppressed by a strong magnetic field, superconductivity is resilient for two specific doping ranges centered around respectively $x\approx 0.09$ and $x\approx 0.19$ and the characteristic temperature for the onset of the competing charge density wave phase is found to vanish above $x = 0.19$. At $x=1/8$ the two phases are found to coexist exactly at zero magnetic field.
condensed matter
Having a large number of covariates can have a negative impact on the quality of causal effect estimation, since confounding adjustment becomes unreliable when the number of covariates is large relative to the number of samples available. The propensity score is a common way to deal with a large covariate set, but the accuracy of propensity score estimation (normally done by logistic regression) is also challenged by a large number of covariates. In this paper, we prove that a large covariate set can be reduced to a lower-dimensional representation which captures the complete information for adjustment in causal effect estimation. This theoretical result enables effective data-driven algorithms for causal effect estimation. We develop an algorithm which employs a supervised kernel dimension reduction method to search for a lower-dimensional representation of the original covariates, and then utilizes nearest neighbor matching in the reduced covariate space to impute the counterfactual outcomes, thereby avoiding the large covariate set problem. The proposed algorithm is evaluated on two semi-synthetic and three real-world datasets, and the results demonstrate the effectiveness of the algorithm.
statistics
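The matching step described in the abstract above (nearest-neighbour imputation of counterfactual outcomes in the reduced covariate space) can be sketched as follows. The dimension-reduction step is assumed to have been done already, and the estimator shown is a plain one-to-one matching ATE, which may differ in detail from the authors' algorithm; variable names are hypothetical.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def matching_ate(Z, t, y):
        """Nearest-neighbour matching on reduced covariates Z to impute counterfactuals.

        Z : (n, d) low-dimensional representation (e.g. from kernel dimension reduction),
        t : (n,) binary treatment indicator, y : (n,) observed outcome.
        """
        treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
        nn_c = NearestNeighbors(n_neighbors=1).fit(Z[control])
        nn_t = NearestNeighbors(n_neighbors=1).fit(Z[treated])
        # impute the unobserved outcome of each unit from its closest match in the other group
        y0_hat = y[control[nn_c.kneighbors(Z[treated], return_distance=False)[:, 0]]]
        y1_hat = y[treated[nn_t.kneighbors(Z[control], return_distance=False)[:, 0]]]
        effects = np.concatenate([y[treated] - y0_hat, y1_hat - y[control]])
        return effects.mean()   # average treatment effect estimate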
A line of work initiated by Fortnow in 1997 has proven model-independent time-space lower bounds for the $\mathsf{SAT}$ problem and related problems within the polynomial-time hierarchy. For example, for the $\mathsf{SAT}$ problem, the state-of-the-art is that the problem cannot be solved by random-access machines in $n^c$ time and $n^{o(1)}$ space simultaneously for $c < 2\cos(\frac{\pi}{7}) \approx 1.801$. We extend this lower bound approach to the quantum and randomized domains. Combining Grover's algorithm with components from $\mathsf{SAT}$ time-space lower bounds, we show that there are problems verifiable in $O(n)$ time with quantum Merlin-Arthur protocols that cannot be solved in $n^c$ time and $n^{o(1)}$ space simultaneously for $c < \frac{3+\sqrt{3}}{2} \approx 2.366$, a super-quadratic time lower bound. This result and the prior work on $\mathsf{SAT}$ can both be viewed as consequences of a more general formula for time lower bounds against small-space algorithms, whose asymptotics we study in full. We also show lower bounds against randomized algorithms: there are problems verifiable in $O(n)$ time with (classical) Merlin-Arthur protocols that cannot be solved in $n^c$ randomized time and $n^{o(1)}$ space simultaneously for $c < 1.465$, improving a result of Diehl. For quantum Merlin-Arthur protocols, the lower bound in this setting can be improved to $c < 1.5$.
computer science
State-of-the-art multilingual machine translation relies on a universal encoder-decoder, which requires retraining the entire system to add new languages. In this paper, we propose an alternative approach that is based on language-specific encoder-decoders, and can thus be more easily extended to new languages by learning their corresponding modules. So as to encourage a common interlingua representation, we simultaneously train the N initial languages. Our experiments show that the proposed approach outperforms the universal encoder-decoder by 3.28 BLEU points on average, also when adding new languages, without the need to retrain the remaining modules. All in all, our work closes the gap between shared and language-specific encoder-decoders, advancing toward modular multilingual machine translation systems that can be flexibly extended in lifelong learning settings.
computer science
Any loop QCD amplitude at full colour is constructed from kinematic and gauge-group building blocks. In a unitarity-based on-shell framework, both objects can be reconstructed from their respective counterparts in tree-level amplitudes. This procedure is at its most powerful when aligned with flexible colour decompositions of tree-level QCD amplitudes. In this note we derive such decompositions for amplitudes with an arbitrary number of quarks and gluons from the same principle that is used to bootstrap kinematics - unitarity factorisation. In the process we formulate new multi-quark bases and provide closed-form expressions for the new decompositions. We then elaborate upon their application in colour decompositions of loop multi-quark amplitudes.
high energy physics phenomenology
We use Gaia DR2 to hunt for runaway stars from the Orion Nebula Cluster (ONC). We search a region extending 45$^{\circ}$ around the ONC and out to 1 kpc to find sources that overlapped in angular position with the cluster in the last ~10 Myr. We find that ~17,000 runaway/walkaway candidates satisfy this 2D traceback condition. Most of these are expected to be contaminants, e.g., caused by Galactic streaming motions of stars at different distances. We thus examine six further tests to help identify real runaways, namely: (1) possessing young stellar object (YSO) colors and magnitudes based on Gaia optical photometry; (2) having IR excess consistent with YSOs based on 2MASS and WISE photometry; (3) having a high degree of optical variability; (4) having closest approach distances well constrained to within the cluster half-mass radius; (5) having ejection directions that avoid the main Galactic streaming contamination zone; and (6) having a required radial velocity (RV) for 3D overlap of reasonable magnitude (or, for the 7% of candidates with measured RVs, satisfying 3D traceback). Thirteen sources, not previously noted as Orion members, pass all these tests, while another twelve are similarly promising, except that they are in the main Galactic streaming contamination zone. Among these 25 ejection candidates, ten with measured RVs pass the most restrictive 3D traceback condition. We present full lists of runaway/walkaway candidates, estimate the high-velocity population ejected from the ONC, and discuss its implications for cluster formation theories via comparison with numerical simulations.
astrophysics
The Bagger-Witten line bundle is a line bundle over moduli spaces of two-dimensional SCFTs, related to the Hodge line bundle of holomorphic top-forms on Calabi-Yau manifolds. It has recently been a subject of a number of conjectures, but concrete examples have proven elusive. In this paper we collect several results on this structure, including a proposal for an intrinsic geometric definition over moduli spaces of Calabi-Yau manifolds and some additional concrete examples. We also conjecture a new criterion for UV completion of four-dimensional supergravity theories in terms of properties of the Bagger-Witten line bundle.
high energy physics theory
The underlying event is an important part of high-energy collision events. In event generators, the underlying event is tuned by fits to collision data. Usually, the underlying-event observables are affected by the existence of extra jets, and it is difficult to find a part of the phase space which is dominated by the underlying event. In this paper, we suggest vetoing jets in the considered region to disentangle these effects. The idea is verified to work on CMS Open Data. To our knowledge, this is the first time that such ideas are tested on real collision data.
high energy physics phenomenology
We prove that algebraic G-theory is representable in the unstable and stable motivic homotopy categories; in the stable category we identify it with the Borel-Moore theory associated to algebraic K-theory, and show that this identification is compatible with the functorialities defined by Quillen and Thomason.
mathematics
Quantum error correction protects fragile quantum information by encoding it into a larger quantum system. These extra degrees of freedom enable the detection and correction of errors, but also increase the operational complexity of the encoded logical qubit. Fault-tolerant circuits contain the spread of errors while operating the logical qubit, and are essential for realizing error suppression in practice. While fault-tolerant design works in principle, it has not previously been demonstrated in an error-corrected physical system with native noise characteristics. In this work, we experimentally demonstrate fault-tolerant preparation, measurement, rotation, and stabilizer measurement of a Bacon-Shor logical qubit using 13 trapped ion qubits. When we compare these fault-tolerant protocols to non-fault tolerant protocols, we see significant reductions in the error rates of the logical primitives in the presence of noise. The result of fault-tolerant design is an average state preparation and measurement error of 0.6% and a Clifford gate error of 0.3% after error correction. Additionally, we prepare magic states with fidelities exceeding the distillation threshold, demonstrating all of the key single-qubit ingredients required for universal fault-tolerant operation. These results demonstrate that fault-tolerant circuits enable highly accurate logical primitives in current quantum systems. With improved two-qubit gates and the use of intermediate measurements, a stabilized logical qubit can be achieved.
quantum physics
The term `resilience' is increasingly being used in the domain of social-technical-environmental systems science and related fields. However, the diversity of resilience concepts and a certain (sometimes intended) openness of proposed definitions can lead to misunderstandings and impede their application to systems modelling. We propose an approach that aims to ease communication as well as to support systematic development of research questions and models in the context of resilience. It can be applied independently of the modelling framework or underlying theory of choice. At the heart of this guideline is a checklist consisting of four questions to be answered: (i) Resilience of what? (ii) Resilience regarding what? (iii) Resilience against what? (iv) Resilience how? We refer to the answers to these resilience questions as the "system", the "sustainant", the "adverse influence", and the "response options". The term `sustainant' is a neologism describing the feature of the system (state, structure, function, pathway etc.) that should be maintained (or restored quickly enough) in order to call the system resilient. The use of this proposed guideline is demonstrated for two application examples: fisheries, and the Amazon rainforest. The examples illustrate the diversity of possible answers to the checklist's questions as well as their benefits in structuring the modelling process. The guideline supports the modeller in communicating precisely what is actually meant by `resilience' in a specific context. This combination of freedom and precision could help to advance the resilience discourse by building a bridge between those demanding unambiguous definitions and those stressing the benefits of generality and flexibility of the resilience concept.
physics
In regression models, predictor variables with inherent ordering, such as tumor staging and ECOG performance status, are commonly seen in medical settings. Statistically, it may be difficult to determine the functional form of an ordinal predictor variable. Often, such a variable is dichotomized based on whether it is above or below a certain cutoff. Other methods conveniently treat the ordinal predictor as a continuous variable and assume a linear relationship with the outcome. However, arbitrarily choosing a method may lead to inaccurate inference and treatment. In this paper, we propose a Bayesian mixture model to simultaneously assess the appropriate form of the predictor in regression models by considering the presence of a changepoint through the lens of a threshold detection problem. By using a mixture model framework to consider both dichotomous and linear forms for the variable, the estimate is a weighted average of linear and binary parameterizations. This method is applicable to continuous, binary, and survival outcomes, and is easily amenable to penalized regression. We evaluate the proposed method using simulation studies and apply it to two real datasets. We provide JAGS code for easy implementation.
statistics
We show that a neural network whose output is obtained as the difference of the outputs of two feedforward networks with exponential activation function in the hidden layer and logarithmic activation function in the output node (LSE networks) is a smooth universal approximator of continuous functions over convex, compact sets. By using a logarithmic transform, this class of networks maps to a family of subtraction-free ratios of generalized posynomials, which we also show to be universal approximators of positive functions over log-convex, compact subsets of the positive orthant. The main advantage of Difference-LSE networks with respect to classical feedforward neural networks is that, after a standard training phase, they provide surrogate models for design that possess a specific difference-of-convex-functions form, which makes them optimizable via relatively efficient numerical methods. In particular, by adapting an existing difference-of-convex algorithm to these models, we obtain an algorithm for performing effective optimization-based design. We illustrate the proposed approach by applying it to data-driven design of a diet for a patient with type-2 diabetes.
computer science
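To make the Difference-LSE idea in the abstract above concrete, here is a minimal numerical sketch of the architecture: each block computes a (numerically stabilized) log-sum-exp of affine functions of the input, and the model output is the difference of two such blocks. The layer sizes, initialization, and class names are illustrative assumptions, not the authors' implementation, and no training loop is shown.

```python
import numpy as np

class LSENetwork:
    """One feedforward block: exponential hidden activations, logarithmic output node.
    Computes log(sum_j exp(w_j . x + b_j)), i.e. a smooth max of affine functions."""
    def __init__(self, n_in, n_hidden, rng):
        self.W = rng.normal(size=(n_hidden, n_in))
        self.b = rng.normal(size=n_hidden)

    def __call__(self, x):
        z = self.W @ x + self.b
        zmax = z.max()                      # stabilized log-sum-exp
        return zmax + np.log(np.exp(z - zmax).sum())

class DifferenceLSE:
    """Difference of two LSE blocks: a smooth difference-of-convex surrogate model."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.f = LSENetwork(n_in, n_hidden, rng)
        self.g = LSENetwork(n_in, n_hidden, rng)

    def __call__(self, x):
        return self.f(x) - self.g(x)

model = DifferenceLSE(n_in=3, n_hidden=8)
print(model(np.array([0.5, -1.0, 2.0])))    # untrained surrogate, architecture demo only
```

After a standard training phase, a model of this form is a difference of two convex functions of the input, which is what makes it amenable to difference-of-convex optimization algorithms.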
We calculated reaction rate constants including atom tunneling for the hydrogen abstraction reaction CH3OH+H -> CH2OH+H2 with the instanton method. The potential energy was fitted by a neural network trained on UCCSD(T)-F12/VTZ-F12 data. Bimolecular gas-phase rate constants were calculated using microcanonical instanton theory. All H/D isotope patterns on the CH3 group and the incoming H atom are studied. Unimolecular reaction rate constants, representing the reaction on a surface, are presented for all isotope patterns down to 30 K. At 30 K the kinetic isotope effects range from 4100 for the replacement of the abstracted H by D, through ~8 for the replacement of the abstracting H, to about 2--6 for secondary KIEs. The $^\text{12}$C/$^\text{13}$C kinetic isotope effect is 1.08 at 30 K, while the $^\text{16}$O/$^\text{18}$O kinetic isotope effect is vanishingly small. A simple kinetic surface model using these data predicts high abundances of the deuterated forms of methanol.
physics
The influence of implantation-induced point defects (PDs) on SiC oxidation is investigated via molecular dynamics simulations. PDs generally increase the oxidation rate of crystalline grains. In particular, the accelerations caused by Si antisites and vacancies are comparable, followed by that of Si interstitials; all of these are higher than the accelerations caused by C antisites and C interstitials. However, in the grain boundary (GB) region, the defect contribution to oxidation is more complex, with C antisites decelerating oxidation. The underlying reason is the formation of a C-rich region along the oxygen diffusion pathway that blocks the access of O to Si and thus reduces the oxidation rate, as compared to the oxidation along a GB without defects.
condensed matter
We provide an analytical argument for understanding the likely nature of shifts in parameters between an analysis of a dataset and an analysis of a subset of that dataset, assuming the differences are due to noise and intrinsic variance alone. This gives us a measure against which we can interpret changes seen in parameters and make judgements about the coherency of the data and the suitability of a model in describing those data.
astrophysics
This note contains two new theorems about bounded holomorphic functions on the symmetrized bidisk -- a characterization of interpolating sequences and a Toeplitz corona theorem.
mathematics
In this work thin films of the La1-xSrxCoO3 (0.05 < x < 0.26) compound were grown, employing the so-called spray pyrolysis process. The as-grown thin films exhibit a polycrystalline microstructure, with uniform grain size distribution and observable porosity. Regarding their electrical transport properties, the produced thin films show semiconducting-like behavior regardless of the Sr doping level, which is most likely due to both the oxygen deficiencies and the grainy nature of the films. Furthermore, room temperature current-voltage (I-V) measurements reveal stable resistance switching behavior, which is well explained in terms of a space-charge limited conduction mechanism. The presented experimental results provide essential evidence regarding the suitability of low-cost, industrial-scale methods for growing perovskite transition metal oxide thin films, for potential applications in random access memory devices.
physics
As part of a generalized "prisoners' dilemma", we consider the evolution of a population with a full set of behavioral strategies limited only by the depth of memory. Each subsequent generation of the population successively loses the most disadvantageous behavioral strategies of the previous generation. It is shown that an increase in memory in a population is evolutionarily beneficial: the winners of evolutionary selection are invariably agents with maximum memory. The concept of strategy complexity is introduced, and it is shown that strategies that win in natural selection have maximum or near-maximum complexity. This holds even though, at any single stage of evolution, the payoff matrix rewards refusing to cooperate more than cooperating. The winning strategies always belong to the so-called respectable strategies, which are clearly prone to cooperation.
physics
By focusing on a typical emitting wavelength of 1120 nm as an example, we present the first demonstration of a high-efficiency, narrow-linewidth kilowatt-level all-fiber amplifier based on hybrid ytterbium-Raman (Yb-Raman) gains. Notably, two temporally stable, phase-modulated single-frequency lasers operating at 1064 nm and 1120 nm, respectively, were applied in the fiber amplifier to alleviate the spectral broadening of the 1120 nm signal laser and suppress the stimulated Brillouin scattering (SBS) effect simultaneously. Over 1 kW of narrow-linewidth 1120 nm signal laser was obtained with a slope efficiency of ~77% and a beam quality of M2~1.21. The amplified spontaneous emission (ASE) noise in the fiber amplifier was effectively suppressed by incorporating an ASE-filtering system between the seed laser and the main amplifier. Further examination of the influence of the power ratio between the two seed lasers on the conversion efficiency proved that the presented amplifier works efficiently when the power ratio of the 1120 nm seed laser ranges from 31% to 61%. Overall, this setup provides a useful reference for obtaining high-power narrow-linewidth fiber lasers operating within 1100-1200 nm.
physics
Deep learning natural language processing models often use vector word embeddings, such as word2vec or GloVe, to represent words. A discrete sequence of words can be much more easily integrated with downstream neural layers if it is represented as a sequence of continuous vectors. Also, semantic relationships between words, learned from a text corpus, can be encoded in the relative configurations of the embedding vectors. However, storing and accessing embedding vectors for all words in a dictionary requires a large amount of space, and may strain systems with limited GPU memory. Here, we used approaches inspired by quantum computing to propose two related methods, {\em word2ket} and {\em word2ketXS}, for storing the word embedding matrix during training and inference in a highly efficient way. Our approach achieves a hundred-fold or more reduction in the space required to store the embeddings with almost no relative drop in accuracy in practical natural language processing tasks.
computer science
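The storage idea described in the abstract above can be illustrated with a toy rank-1 tensor-product embedding: each word vector is stored as two small factors and materialized as their Kronecker product on demand. This is a simplified sketch (the actual word2ket/word2ketXS methods use sums of tensor products and more factors); all sizes and names here are illustrative assumptions.

```python
import numpy as np

class KroneckerEmbedding:
    """Toy sketch: each word vector of dimension d1*d2 is stored as two small
    factors (d1 + d2 numbers) and materialized as their Kronecker product.
    The real word2ket uses sums of such tensor products (higher 'rank'),
    but the storage saving comes from the same idea."""
    def __init__(self, vocab_size, d1=16, d2=16, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(scale=0.1, size=(vocab_size, d1))
        self.b = rng.normal(scale=0.1, size=(vocab_size, d2))

    def __call__(self, word_id):
        # length d1*d2 embedding, built on the fly
        return np.kron(self.a[word_id], self.b[word_id])

emb = KroneckerEmbedding(vocab_size=50_000, d1=16, d2=16)
v = emb(123)                                  # a 256-dimensional embedding
stored = 50_000 * (16 + 16)                   # floats actually kept in memory
dense = 50_000 * 256                          # floats in a dense embedding table
print(v.shape, f"compression ~{dense / stored:.0f}x")
```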
The well-known modular property of the torus characters and torus partition functions of (rational) vertex operator algebras (VOAs) and 2d conformal field theories (CFTs) has been an invaluable tool for studying this class of theories. In this work we prove that sphere four-point chiral blocks of rational VOAs are vector-valued modular forms for the groups $\Gamma(2)$, $\Gamma_0(2)$, or $\text{SL}_2(\mathbb{Z})$. Moreover, we prove that the four-point correlators, combining the holomorphic and anti-holomorphic chiral blocks, are modular invariant. In particular, in this language the crossing symmetries are simply modular symmetries. This gives the possibility of exploiting the available techniques and knowledge about modular forms to determine or constrain the physically interesting quantities such as chiral blocks and fusion coefficients, which we illustrate with a few examples. We also highlight the existence of a sphere-torus correspondence equating the sphere quantities of certain theories $\mathcal{T}_s$ with the torus quantities of another family of theories $\mathcal{T}_t$. A companion paper will delve into more examples and explore more systematically this sphere-torus duality.
high energy physics theory
As more researchers have become aware of and passionate about algorithmic fairness, there has been an explosion in papers laying out new metrics, suggesting algorithms to address issues, and calling attention to issues in existing applications of machine learning. This research has greatly expanded our understanding of the concerns and challenges in deploying machine learning, but there has been much less work on seeing how the rubber meets the road. In this paper we provide a case study on the application of fairness in machine learning research to a production classification system, and offer new insights into how to measure and address algorithmic fairness issues. We discuss open questions in implementing equality of opportunity and describe our fairness metric, conditional equality, which takes into account distributional differences. Further, we provide a new approach to improve on the fairness metric during model training and demonstrate its efficacy in improving performance for a real-world product.
computer science
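As a rough illustration of the metrics discussed above, the following sketch computes an equality-of-opportunity gap (true-positive-rate difference between groups) and a stratified variant that conditions on a covariate before averaging the per-stratum gaps. This is only a plausible stand-in for the paper's conditional equality metric, not its exact definition; the synthetic data and function names are assumptions.

```python
import numpy as np

def tpr(y_true, y_pred, mask):
    """True-positive rate restricted to the rows selected by mask."""
    pos = mask & (y_true == 1)
    return y_pred[pos].mean() if pos.any() else np.nan

def eo_gap(y_true, y_pred, group):
    """Equality-of-opportunity gap: absolute TPR difference between two groups."""
    return abs(tpr(y_true, y_pred, group == 0) - tpr(y_true, y_pred, group == 1))

def conditional_eo_gap(y_true, y_pred, group, condition):
    """Average the per-stratum TPR gaps, weighting each stratum by its size:
    a rough stand-in for a 'conditional equality' style metric."""
    gaps, weights = [], []
    for c in np.unique(condition):
        m = condition == c
        gaps.append(eo_gap(y_true[m], y_pred[m], group[m]))
        weights.append(m.sum())
    gaps, weights = np.array(gaps), np.array(weights)
    ok = ~np.isnan(gaps)
    return float(np.average(gaps[ok], weights=weights[ok]))

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)                               # true labels
yhat = (rng.random(1000) < 0.7 * y + 0.1).astype(int)      # noisy predictions
g = rng.integers(0, 2, 1000)                               # protected group
cond = rng.integers(0, 3, 1000)                            # conditioning covariate
print(eo_gap(y, yhat, g), conditional_eo_gap(y, yhat, g, cond))
```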
Sources of quantum light, in particular correlated photon pairs that are indistinguishable in all degrees of freedom, are the fundamental resource that enables continuous-variable quantum computation and paradigms such as Gaussian boson sampling. Nanophotonic systems offer a scalable platform for implementing sources of indistinguishable correlated photon pairs. However, such sources have so far relied on the use of a single component, such as a single waveguide or a ring resonator, which offers limited ability to tune the spectral and temporal correlations between photons. Here, we demonstrate the use of a topological photonic system comprising a two-dimensional array of ring resonators to generate indistinguishable photon pairs with dynamically tunable spectral and temporal correlations. Specifically, we realize dual-pump spontaneous four-wave mixing in this array of silicon ring resonators that exhibits topological edge states. We show that the linear dispersion of the edge states over a broad bandwidth allows us to tune the correlations, and therefore, quantum interference between photons by simply tuning the two pump frequencies in the edge band. Furthermore, we demonstrate energy-time entanglement between generated photons. We also show that our topological source is inherently protected against fabrication disorders. Our results pave the way for scalable and tunable sources of squeezed light that are indispensable for quantum information processing using continuous variables.
physics
The scaling up of quantum hardware is the fundamental challenge ahead in order to realize the disruptive potential of quantum technology in information science. Among the plethora of hardware platforms, photonics stands out by offering a modular approach, where the main challenge is to construct sufficiently high-quality building blocks and develop methods to efficiently interface them. Importantly, the subsequent scaling-up will make full use of the mature integrated photonic technology provided by photonic foundry infrastructure to produce small foot-print quantum processors of immense complexity. A fully coherent and deterministic photon-emitter interface is a key enabler of quantum photonics, and can today be realized with solid-state quantum emitters with specifications reaching the quantitative benchmark referred to as Quantum Advantage. This light-matter interaction primer realizes a range of quantum photonic resources and functionalities, including on-demand single-photon and multi-photon entanglement sources, and photon-photon nonlinear quantum gates. We will present the current state-of-the-art in single-photon quantum hardware and the main photonic building blocks required in order to scale up. Furthermore, we will point out specific promising applications of the hardware building blocks within quantum communication and photonic quantum computing, laying out the road ahead for quantum photonics applications that could offer a genuine quantum advantage.
quantum physics
Mixture of linear regressions is a popular learning theoretic model that is used widely to represent heterogeneous data. In the simplest form, this model assumes that the labels are generated from either of two different linear models and mixed together. Recent works of Yin et al. and Krishnamurthy et al., 2019, focus on an experimental design setting of model recovery for this problem. It is assumed that the features can be designed and queried to obtain their labels. When queried, an oracle randomly selects one of the two different sparse linear models and generates a label accordingly. How many such oracle queries are needed to recover both of the models simultaneously? This question can also be thought of as a generalization of the well-known compressed sensing problem (Cand\`es and Tao, 2005, Donoho, 2006). In this work, we address this query complexity problem and provide efficient algorithms that improve on the previously best known results.
statistics
We consider the problem of an aggregator attempting to learn customers' load flexibility models while implementing a load shaping program by means of broadcasting daily dispatch signals. We adopt a multi-armed bandit formulation to account for the stochastic and unknown nature of customers' responses to dispatch signals. We propose a constrained Thompson sampling heuristic, Con-TS-RTP, that accounts for various possible aggregator objectives (e.g., to reduce demand at peak hours, integrate more intermittent renewable generation, track a desired daily load profile, etc.) and takes into account the operational constraints of a distribution system to avoid potential grid failures as a result of uncertainty in the customers' response. We provide a discussion on the regret bounds for our algorithm as well as a discussion on the operational reliability of the distribution system's constraints being upheld throughout the learning process.
electrical engineering and systems science
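A stripped-down illustration of constrained Thompson sampling in the spirit of the abstract above: Bernoulli arms stand in for candidate dispatch signals, and a `feasible` callback stands in for the distribution-system constraint check. This is not Con-TS-RTP itself; the Beta-Bernoulli model, constraint form, and parameters are illustrative assumptions.

```python
import numpy as np

def constrained_thompson(n_arms, horizon, true_p, feasible, seed=0):
    """Toy Bernoulli Thompson sampling that only plays arms currently judged
    feasible; feasible(arm, sample) stands in for an operational-constraint check."""
    rng = np.random.default_rng(seed)
    alpha = np.ones(n_arms)   # Beta posterior successes
    beta = np.ones(n_arms)    # Beta posterior failures
    total_reward = 0.0
    for _ in range(horizon):
        theta = rng.beta(alpha, beta)                  # one posterior sample per arm
        allowed = [a for a in range(n_arms) if feasible(a, theta[a])]
        arm = max(allowed, key=lambda a: theta[a])     # best feasible sampled arm
        r = rng.random() < true_p[arm]                 # observe stochastic response
        alpha[arm] += r
        beta[arm] += 1 - r
        total_reward += r
    return total_reward

p = np.array([0.2, 0.5, 0.8, 0.9])
# pretend the last (most attractive) arm violates an operational constraint
print(constrained_thompson(4, 2000, p, feasible=lambda a, s: a != 3))
```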
Etiologies of tear breakup include evaporation-driven and divergent flow-driven mechanisms, as well as a combination of the two. A mathematical model incorporating evaporation and lipid-driven tangential flow is fit to fluorescence imaging data. The lipid-driven motion is hypothesized to be caused by localized excess lipid, or "globs." Tear breakup quantities such as evaporation rates and tangential flow rates cannot currently be directly measured during breakup. We determine such variables by fitting mathematical models for tear breakup and the computed fluorescent intensity to experimental intensity data gathered in vivo. Parameter estimation is conducted via least squares minimization of the difference between experimental data and computed results, using either the trust-region-reflective or Levenberg-Marquardt algorithm. Best-fit determination of tear breakup parameters supports the notion that evaporation and divergent tangential flow can cooperate to drive breakup. The resulting tear breakup is typically faster than purely evaporative cases. Many instances of tear breakup may have similar causes, which suggests that interpretation of experimental results may benefit from considering multiple mechanisms.
physics
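The fitting machinery mentioned above (least-squares minimization with the trust-region-reflective or Levenberg-Marquardt algorithm) is available in `scipy.optimize.least_squares`. The sketch below fits a deliberately simplified exponential-decay stand-in for the fluorescent intensity; the model form, parameter names, and synthetic data are assumptions, not the paper's actual tear-film model.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative stand-in model: intensity decaying with an evaporation-like rate v
# and a tangential-thinning-like rate b (not the paper's PDE model, just the
# fitting machinery it describes).
def model(params, t):
    v, b, i0 = params
    return i0 * np.exp(-(v + b) * t)

def residuals(params, t, data):
    return model(params, t) - data

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 80)
truth = (0.12, 0.05, 1.0)
data = model(truth, t) + 0.02 * rng.normal(size=t.size)

fit_trf = least_squares(residuals, x0=[0.05, 0.01, 0.8], args=(t, data),
                        bounds=([0, 0, 0], [1, 1, 2]), method="trf")
fit_lm = least_squares(residuals, x0=[0.05, 0.01, 0.8], args=(t, data),
                       method="lm")   # Levenberg-Marquardt (no bounds supported)
print(fit_trf.x, fit_lm.x)            # both close to the generating parameters
```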
We updated the agent based Monte Carlo code HERITAGE that simulates human evolution within restrictive environments such as interstellar, sub-light speed spacecraft in order to include the effects of population genetics. We incorporated a simplified -- yet representative -- model of the whole human genome with 46 chromosomes (23 pairs), containing 2110 building blocks that simulate genetic elements (loci). Each individual is endowed with his/her own diploid genome. Each locus can take 10 different allelic (mutated) forms that can be investigated. To mimic gamete production (sperm and eggs) in human individuals, we simulate the meiosis process including crossing-over and unilateral conversions of chromosomal sequences. Mutation of the genetic information from cosmic ray bombardments is also included. In this first paper of a series of two, we use the neutral hypothesis: mutations (genetic changes) have only neutral phenotypic effects (physical manifestations), implying no natural selection on variations. We will relax this assumption in the second paper. Under such hypothesis, we demonstrate how the genetic patrimony of multi-generational crews can be affected by genetic drift and mutations. It appears that centuries-long deep space travels have small but unavoidable effects on the genetic composition/diversity of the traveling populations that herald substantial genetic differentiation on longer time-scales if the annual equivalent dose of cosmic ray radiation is similar to the Earth radioactivity background at sea level. For larger doses, genomes in the final populations can deviate more strongly with significant genetic differentiation that arises within centuries.
physics
Autonomous materials discovery with desired properties is one of the ultimate goals for modern materials science. Applying deep learning techniques, we have developed a generative model which can predict distinct stable crystal structures by optimizing the formation energy in the latent space. It is demonstrated that the optimization of physical properties can be integrated into the generative model as on-top screening or a backwards propagator, both with their own advantages. Applying the generative model to the binary Bi-Se system reveals that distinct crystal structures can be obtained covering the whole composition range, and the phases on the convex hull can be reproduced after the generated structures are fully relaxed to equilibrium. The method can be extended to multicomponent systems for multi-objective optimization, which paves the way to achieving the inverse design of materials with optimal properties.
physics
Biased sampling designs can be highly efficient when studying rare (binary) or low variability (continuous) endpoints. We consider longitudinal data settings in which the probability of being sampled depends on a repeatedly measured response through an outcome-related, auxiliary variable. Such auxiliary variable- or outcome-dependent sampling improves observed response and possibly exposure variability over random sampling, even though the auxiliary variable is not of scientific interest. For analysis, we propose a generalized linear model based approach using a sequence of two offsetted regressions. The first estimates the relationship of the auxiliary variable to response and covariate data using an offsetted logistic regression model. The offset hinges on the (assumed) known ratio of sampling probabilities for different values of the auxiliary variable. Results from the auxiliary model are used to estimate observation-specific probabilities of being sampled conditional on the response and covariates, and these probabilities are then used to account for bias in the second, target population model. We provide asymptotic standard errors accounting for uncertainty in the estimation of the auxiliary model, and perform simulation studies demonstrating substantial bias reduction, correct coverage probability, and improved design efficiency over simple random sampling designs. We illustrate the approaches with two examples.
statistics
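The first-stage offsetted logistic regression described above can be sketched as follows: on a sample drawn with known, auxiliary-variable-dependent probabilities, fitting the auxiliary-variable model with a fixed offset of log(pi1/pi0) approximately recovers the population-level coefficients. The data-generating process and sampling probabilities below are illustrative assumptions; only the offset mechanism reflects the abstract, and the second-stage target model is not shown.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)                                   # response
aux = (rng.random(n) < 1 / (1 + np.exp(-(y - 0.5)))).astype(int)   # auxiliary variable

# Outcome-dependent design: aux == 1 units are sampled far more often.
pi1, pi0 = 0.9, 0.15
sampled = rng.random(n) < np.where(aux == 1, pi1, pi0)

# Fitting logit(aux) ~ y + x on the biased sample with offset log(pi1/pi0)
# shifts the intercept back toward the population model.
X = sm.add_constant(np.column_stack([y[sampled], x[sampled]]))
offset = np.full(X.shape[0], np.log(pi1 / pi0))
fit = sm.GLM(aux[sampled], X, family=sm.families.Binomial(), offset=offset).fit()
print(fit.params)   # roughly (-0.5, 1.0, 0.0), the population-level coefficients
```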
A disaster may not necessarily demolish the telecommunications infrastructure, but it might instead affect the national grid and cause blackouts, consequently disrupting the network operation unless there is an alternative power source(s). In this paper, power outages are considered, and the telecommunication network performance is evaluated during a blackout. Two approaches are presented to minimize the impact of a power outage and maximize the survival time of the blackout node. A mixed integer linear programming (MILP) model is developed to evaluate the network performance under a single node blackout scenario. The model is used to evaluate the network under the two proposed scenarios. The results show that the proposed approach succeeds in extending the network lifetime while minimizing the required amount of backup energy.
computer science
Predictions have been compiled for the $p+$Pb LHC runs, focusing on the production of hard probes in cold nuclear matter. These predictions were first made for the $\sqrt{s_{_{NN}}} = 5.02$ TeV $p+$Pb run and were later compared to the available data. A similar set of predictions was published for the 8.16~TeV $p+$Pb run. A selection of the predictions is reviewed here.
high energy physics phenomenology
A number of open problems hinder our present ability to extract scientific information from data that will be gathered by the near-future gravitational-wave mission LISA. Many of these relate to the modeling, detection and characterization of signals from binary inspirals with an extreme component-mass ratio of $\lesssim10^{-4}$. In this paper, we draw attention to the issue of systematic error in parameter estimation due to the use of fast but approximate waveform models; this is found to be relevant for extreme-mass-ratio inspirals even in the case of waveforms with $\gtrsim90\%$ overlap accuracy and moderate ($\gtrsim30$) signal-to-noise ratios. A scheme that uses Gaussian processes to interpolate and marginalize over waveform error is adapted and investigated as a possible precursor solution to this problem. Several new methodological results are obtained, and the viability of the technique is successfully demonstrated on a three-parameter example in the setting of the LISA Data Challenge.
astrophysics
This work presents a new strategy for multi-class classification that requires no class-specific labels, but instead leverages pairwise similarity between examples, which is a weaker form of annotation. The proposed method, meta classification learning, optimizes a binary classifier for pairwise similarity prediction and through this process learns a multi-class classifier as a submodule. We formulate this approach, present a probabilistic graphical model for it, and derive a surprisingly simple loss function that can be used to learn neural network-based models. We then demonstrate that this same framework generalizes to the supervised, unsupervised cross-task, and semi-supervised settings. Our method is evaluated against state of the art in all three learning paradigms and shows a superior or comparable accuracy, providing evidence that learning multi-class classification without multi-class labels is a viable learning option.
computer science
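One way to realize the "surprisingly simple loss" sketched above is to define the predicted similarity of a pair as the inner product of the two class-probability vectors and apply binary cross-entropy to it; minimizing this trains the multi-class head from pairwise labels alone. The following PyTorch sketch follows that reading of the abstract, with an illustrative toy dataset; it is not the authors' released code.

```python
import torch
import torch.nn.functional as F

def pairwise_similarity_loss(logits_i, logits_j, similar):
    """Binary cross-entropy on pairwise similarity, where the predicted
    similarity of a pair is the inner product of its two class-probability
    vectors; only pairwise labels are needed, yet the softmax head becomes
    a multi-class classifier as a by-product."""
    p_i = F.softmax(logits_i, dim=1)
    p_j = F.softmax(logits_j, dim=1)
    s_hat = (p_i * p_j).sum(dim=1).clamp(1e-7, 1 - 1e-7)
    return F.binary_cross_entropy(s_hat, similar.float())

# Toy usage: a linear classifier trained from pairwise labels only.
torch.manual_seed(0)
net = torch.nn.Linear(2, 3)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
x = torch.randn(256, 2)
y = (x[:, 0] > 0).long() + (x[:, 1] > 0).long()      # hidden 3-class structure
i, j = torch.randint(0, 256, (512,)), torch.randint(0, 256, (512,))
similar = (y[i] == y[j])                             # weak pairwise annotation
for _ in range(200):
    loss = pairwise_similarity_loss(net(x[i]), net(x[j]), similar)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```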
Multi-instance learning is common for computer vision tasks, especially in biomedical image processing. Traditional methods for multi-instance learning focus on designing feature aggregation methods and multi-instance classifiers, where the aggregation operation is performed either in the feature extraction or the learning phase. As deep neural networks (DNNs) achieve great success in image processing via automatic feature learning, certain feature aggregation mechanisms need to be incorporated into common DNN architectures for multi-instance learning. Moreover, flexibility and reliability are crucial considerations for dealing with varying quality and numbers of instances. In this study, we propose a hierarchical aggregation network for multi-instance learning, called HAMIL. The hierarchical aggregation protocol enables feature fusion in a defined order, and the simple convolutional aggregation units lead to an efficient and flexible architecture. We assess the model performance on two microscopy image classification tasks, namely protein subcellular localization using immunofluorescence images and gene annotation using spatial gene expression images. The experimental results show that HAMIL outperforms the state-of-the-art feature aggregation methods and the existing models for addressing these two tasks. The visualization analyses also demonstrate the ability of HAMIL to focus on high-quality instances.
computer science
We study an $SO(1,3)$ pure connection formulation in four dimensions for real-valued fields, inspired by the Capovilla, Dell and Jacobson complex self-dual approach. By considering the CMPR BF action, taking into account a more general class of the Cartan-Killing form for the Lie algebra $\mathfrak{so(1,3)}$, and refining the structure of the Lagrange multipliers, we integrate out the metric variables in order to obtain the pure connection action. Once we have obtained this action, we impose certain restrictions on the Lagrange multipliers, in such a way that the equations of motion lead us to a family of torsionless conformally flat Einstein manifolds, parametrized by two numbers. Finally, we show that, by a suitable choice of parameters, self-dual (anti-)de Sitter spaces can be obtained.
high energy physics theory
We present a novel method for automatic and robust detection of the dominant frequency (DF) in the electrogastrogram (EGG). Our new approach combines the Fast Fourier Transform (FFT), Welch's method for spectral density estimation, and autocorrelation. The proposed combined method, as well as the separate procedures, was tested on a freely available dataset consisting of EGG recordings in 20 healthy individuals. DF was calculated in relation (1) to the fasting and postprandial states, (2) to the three recording locations, and (3) to the subjects' body mass index. To estimate algorithm performance in the presence of noise, we created a synthetic dataset by adding white Gaussian noise to the artifact-free EGG waveform of one subject. The individual algorithms and the novel combined approach were evaluated in relation to the signal-to-noise ratio (SNR) in the range from -40 dB to 20 dB. Our results showed that the novel combined method significantly outperformed the commonly used approach for DF calculation, the FFT, in the presence of noise when compared to benchmark data that were manually corrected by an expert. The novel method also outperformed autocorrelation and Welch's method in accuracy. Additionally, we present a method for optimal window width selection when using Welch's spectrogram; for DF detection, a window length of N/4 (300 s), where N is the length of the EGG waveform in samples, performed best when compared to the benchmark data. The combined approach proved efficient for automatic and robust calculation of the dominant frequency on an openly available EGG dataset recorded in healthy individuals and is a promising approach for DF detection.
electrical engineering and systems science
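A rough sketch of the combined estimator described above: Welch's spectral estimate of the raw EGG is multiplied with a Welch estimate of its autocorrelation, and the dominant frequency is taken as the peak of the product within the physiological band. The weighting scheme, band limits, and the 300 s (N/4) window are taken loosely from the abstract; the exact combination rule used by the authors may differ.

```python
import numpy as np
from scipy.signal import welch

def dominant_frequency(egg, fs, fmin=0.03, fmax=0.15, window_s=300):
    """Estimate the dominant frequency of an EGG trace by combining Welch's
    spectral estimate with the spectrum of the autocorrelation (a sketch of
    the combined approach; normal gastric activity is ~3 cpm, i.e. ~0.05 Hz)."""
    nperseg = int(window_s * fs)                       # ~N/4 window, as suggested above
    f_w, p_w = welch(egg, fs=fs, nperseg=min(nperseg, len(egg)))
    ac = np.correlate(egg - egg.mean(), egg - egg.mean(), mode="full")
    ac = ac[ac.size // 2:]
    f_a, p_a = welch(ac, fs=fs, nperseg=min(nperseg, len(ac)))
    p_a_on_w = np.interp(f_w, f_a, p_a)
    score = (p_w / p_w.max()) * (p_a_on_w / p_a_on_w.max())   # combine the two estimates
    band = (f_w >= fmin) & (f_w <= fmax)
    return f_w[band][np.argmax(score[band])]

fs = 2.0                                               # 2 Hz sampling, 20 min recording
t = np.arange(0, 1200, 1 / fs)
egg = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
print(dominant_frequency(egg, fs))                     # ~0.05 Hz (3 cycles per minute)
```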
Let $\Gamma$ be the Fuchsian group of the first kind. For an even integer $m\ge 4$, we study $m/2$-holomorphic differentials in terms of space of (holomorphic) cuspidal modular forms $S_m(\Gamma)$. We also give in depth study of Wronskians of cuspidal modular forms and their divisors.
mathematics
Bismuth based ternary oxide photocatalysts are of considerable interest in photoelectrochemical water splitting. Although these oxides are highly stable in different environments, their relative inertness limits the available synthesis routes for obtaining the desired stoichiometry in the final product. This report describes a method to prepare barium bismuth niobate (specifically, Ba2(BiNb)O6 and Ba2Bi1.4Nb0.6O6) targets by sintering a mixture of individual oxides and to successfully fabricate the desired barium bismuth niobate thin film on a TCO substrate. The surface morphology and the physical and chemical properties of the thin films were systematically investigated. Compositional uniformity was obtained by sputtering in an oxygen-argon plasma environment, and higher photocatalytic activities were observed after subsequent surface treatments. Sodium fluoride surface treatment further enhanced the photocurrent density and improved electrode stability against corrosion. This work further suggests a viable approach to improve the photoelectrochemical (PEC) performance of sputtered barium bismuth niobate by modulating their fundamental energy states.
physics
A physical magnetic field has a divergence of zero. Numerical error in constructing a model field and computing the divergence, however, introduces a finite divergence into these calculations. A popular metric for measuring divergence is the average fractional flux $\langle |f_{i}| \rangle$. We show that $\langle |f_{i}| \rangle$ scales with the size of the computational mesh, and may be a poor measure of divergence because it becomes arbitrarily small for increasing mesh resolution, without the divergence actually decreasing. We define a modified version of this metric that does not scale with mesh size. We apply the new metric to the results of DeRosa et al. (2015), who measured $\langle |f_{i}| \rangle$ for a series of Nonlinear Force-Free Field (NLFFF) models of the coronal magnetic field based on solar boundary data binned at different spatial resolutions. We compute a number of divergence metrics for the DeRosa et al. (2015) data and analyze the effect of spatial resolution on these metrics using a non-parametric method. We find that some of the trends reported by DeRosa et al. (2015) are due to the intrinsic scaling of $\langle |f_{i}| \rangle$. We also find that different metrics give different results for the same data set and therefore there is value in measuring divergence via several metrics.
astrophysics
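The scaling issue described above can be reproduced in a few lines: with a common per-cell definition of the fractional flux, f_i proportional to (div B)_i dx / |B|_i, sampling the same continuous field (with the same non-zero divergence) on finer meshes makes the average |f_i| shrink linearly with the cell size. The normalization below is one standard choice and may differ from the papers' exact definition.

```python
import numpy as np

def fractional_flux(Bx, By, Bz, dx):
    """One common definition of the per-cell fractional flux,
    f_i ~ (div B)_i * dx / (6 |B|_i); the exact normalization used in the
    literature varies, but the dx-scaling criticized above is the same."""
    div = (np.gradient(Bx, dx, axis=0)
           + np.gradient(By, dx, axis=1)
           + np.gradient(Bz, dx, axis=2))
    Bmag = np.sqrt(Bx**2 + By**2 + Bz**2) + 1e-30
    return np.abs(div) * dx / (6.0 * Bmag)

# The same continuous field (div B = 1 everywhere) sampled on finer and finer
# meshes: <|f_i|> shrinks roughly linearly with dx although div B is unchanged.
for n in (24, 48, 96):
    dx = 1.0 / (n - 1)
    x, y, z = np.meshgrid(*[np.linspace(0, 1, n)] * 3, indexing="ij")
    Bx, By, Bz = x, np.cos(2 * np.pi * z), np.cos(2 * np.pi * x)
    print(n, float(np.mean(fractional_flux(Bx, By, Bz, dx))))
```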
This is a contribution for the discussion on "Unbiased Markov chain Monte Carlo with couplings" by Pierre E. Jacob, John O'Leary and Yves F. Atchad\'e to appear in the Journal of the Royal Statistical Society Series B.
statistics
We study the two-dimensional multiphase Muskat problem describing the motion of three immiscible fluids with equal viscosities in a vertical homogeneous porous medium identified with $\mathbb{R}^2$ under the effect of gravity. We first formulate the governing equations as a strongly coupled evolution problem for the functions that parameterize the sharp interfaces between the fluids. Afterwards we prove that the problem is of parabolic type and establish its well-posedness together with two parabolic smoothing properties. For solutions that are not global we exclude, in a certain regime, that the interfaces come into contact along a curve segment.
mathematics
We identify structures of the young star cluster NGC 2232 in the solar neighborhood (323.0 pc), and a newly discovered star cluster LP 2439 (289.1 pc). Member candidates are identified using the Gaia DR2 sky position, parallax and proper motion data, by an unsupervised machine learning method, \textsc{StarGO}. Member contamination from the Galactic disk is further removed using the color-magnitude diagram. The four identified groups (NGC 2232, LP 2439 and two filamentary structures) of stars are coeval with an age of 25 Myr and were likely formed in the same giant molecular cloud. We correct the distance asymmetry from the parallax error with a Bayesian method. The 3D morphology shows the two spherical distributions of the clusters NGC 2232 and LP 2439. Two filamentary structures are spatially and kinematically connected to NGC 2232. Both NGC 2232 and LP 2439 are expanding. The expansion is more significant in LP 2439, generating a loose spatial distribution with shallow volume number and mass density profiles. The expansion is suggested to be mainly driven by gas expulsion. NGC 2232, with 73 percent of the cluster mass bound, is currently experiencing a process of re-virialization. However, LP 2439, with 52 percent of the cluster mass being unbound, may fully dissolve in the near future. The different survivability traces the different dynamical states of NGC 2232 and LP 2439 prior to the onset of gas expulsion. NGC 2232 may have been substructured and subvirial, while LP 2439 may either have been virial/supervirial or have experienced a much faster rate of gas removal.
astrophysics
The $B-L$ MSSM is the MSSM with three right-handed neutrino chiral multiplets and gauged $B-L$ symmetry. The $B-L$ symmetry is broken by the third family right-handed sneutrino acquiring a VEV, thus spontaneously breaking $R$-parity. Within a natural range of soft supersymmetry breaking parameters, it is shown that a large and uncorrelated number of initial values satisfy all present phenomenological constraints, including the correct masses for the $W^{\pm}$, $Z^0$ bosons, having all sparticles exceed their present lower bounds, and giving the experimentally measured value for the Higgs boson. For this "valid" set of initial values, there are a number of different LSPs, each occurring a calculable number of times. We plot this statistically and determine that among the most prevalent LSPs are chargino and neutralino mass eigenstates. In this paper, the $R$-parity violating decay channels of charginos and neutralinos to standard model particles are determined, and the interaction vertices and decay rates computed analytically. These results are valid for any chargino and neutralino, regardless of whether or not they are the LSP. For chargino and neutralino LSPs, we will present, in a subsequent series of papers, a numerical study of their RPV decays evaluated statistically over the range of associated valid initial points.
high energy physics phenomenology
Skilful prediction of the seasonal Indian summer monsoon (ISM) rainfall (ISMR) at least one season in advance has great socio-economic value. It represents a lifeline for about a sixth of the world's population. ISMR prediction has remained a challenging problem, with the sub-critical skill of dynamical models attributable to a limited understanding of the interaction among clouds, convection, and circulation. The variability of cloud hydrometeors (cloud ice and cloud water) on different time scales (3-7 day, 10-20 day and 30-60 day bands) is examined from re-analysis data during the Indian summer monsoon (ISM). Here, we also show that the 'internal' variability of cloud hydrometeors (particularly cloud ice) associated with the ISM sub-seasonal (synoptic + intra-seasonal) fluctuations is partly predictable, as it is found to be tied to slowly varying forcing (e.g., El Ni\~no and Southern Oscillation). The representation of deep convective clouds, which involve ice phase processes, in a coupled climate model strongly modulates ISMR variability in association with global predictors. Results from two sensitivity simulations using a coupled global climate model (CGCM) are provided to demonstrate the importance of cloud hydrometeors for ISM rainfall predictability. Therefore, this study provides a scientific basis for improving the simulation of the seasonal ISMR by improving the physical representation of clouds on sub-seasonal time scales, and it motivates further research in this direction.
physics
We explore the possibility of detecting gravitational waves generated by first order phase transitions in multiple dark sectors. Nnaturalness is taken as a sample model that features multiple additional sectors, many of which undergo phase transitions that produce gravitational waves. We examine the cosmological history of this framework and determine the gravitational wave profiles generated. These profiles are checked against projections of next-generation gravitational wave experiments, demonstrating that multiple hidden sectors can indeed produce unique gravitational wave signatures that will be probed by these future experiments.
high energy physics phenomenology
In 2015 we started the XMM-Newton monitoring of the young solar-like star Epsilon Eridani (440 Myr), one of the youngest solar-like stars with a known chromospheric CaII cycle. By analyzing the most recent Mount Wilson S-index CaII data of this star, we found that the chromospheric cycle lasts 2.92 +/- 0.02 yr, in agreement with past results. From the long-term X-ray lightcurve, we find clear and systematic X-ray variability of our target, consistent with the chromospheric CaII cycle. The average X-ray luminosity is found to be 2 x 10^28 erg/s, with an amplitude of only a factor of 2 throughout the cycle. We apply a new method to describe the evolution of the coronal emission measure distribution of Epsilon Eridani in terms of solar magnetic structures: active regions, cores of active regions and flares covering the stellar surface at varying filling fractions. Combinations of these magnetic structures can describe the observed X-ray emission measure of Epsilon Eridani only if the solar flare emission measure distribution is restricted to events in the decay phase. The interpretation is that flares in the corona of Epsilon Eridani last longer than their solar counterparts. We ascribe this to the lower metallicity of Epsilon Eridani. Our analysis also revealed that the X-ray cycle of Epsilon Eridani is strongly dominated by cores of active regions. The coverage fraction of cores throughout the cycle changes by the same factor as the X-ray luminosity. The maxima of the cycle are characterized by a high covering fraction of flares, consistent with the fact that flaring events are seen in the corresponding short-term X-ray lightcurves predominantly at the cycle maxima. The high X-ray emission throughout the cycle of Epsilon Eridani is thus explained by the high percentage of magnetic structures on its surface.
astrophysics
If $G$ is a group acting geometrically on a CAT(0) cube complex $X$ and if $g \in G$ is an infinite-order element, we show that exactly one of the following situations occurs: (i) $g$ defines a rank-one isometry of $X$; (ii) the stable centraliser $SC_G(g)= \{ h \in G \mid \exists n \geq 1, [h,g^n]=1 \}$ of $g$ is not virtually cyclic; (iii) $\mathrm{Fix}_Y(g^n)$ is finite for every $n \geq 1$ and the sequence $(\mathrm{Fix}_Y(g^n))$ takes infinitely many values, where $Y$ is a cubical component of the Roller boundary of $X$ which contains an endpoint of an axis of $g$. We also show that (iii) cannot occur in several cases, providing a purely algebraic characterisation of rank-one isometries.
mathematics
We present a theory for the emergence of a supersolid state in a cigar-shaped dipolar quantum Bose gas. Our approach is based on a reduced three-dimensional (3D) theory, where the condensate wavefunction is decomposed into an axial field and a transverse part described variationally. This provides an accurate fully 3D description that is specific to the regime of current experiments and efficient to compute. We apply this theory to understand the phase diagram for a gas in an infinite tube potential. We find that the supersolid transition has continuous and discontinuous regions as the averaged density varies. We develop two simplified analytic models to characterize the phase diagram and elucidate the roles of quantum droplets and of the roton excitation.
condensed matter
We consider a discrete time quantum walker in one dimension, where at each step, the step length $\ell$ is chosen from a distribution $P(\ell) \propto \ell^{-\delta -1}$ with $\ell \leq \ell_{max}$. We evaluate the probability $f(x,t)$ that the walker is at position $x$ at time $t$ and its first two moments. As expected, the disorder effectively localizes the walk even for large values of $\delta$. Asymptotically, $\langle x^2 \rangle \propto t^{3/2}$ and $\langle x \rangle \propto t^{1/2}$ independent of $\delta$ and $\ell$, both finite. The scaled distribution $f(x,t)t^{1/2}$ plotted versus $x/t^{1/2}$ shows a data collapse for $x/t < \alpha(\delta,\ell_{max}) \sim \mathcal O(1) $ indicating the existence of a universal scaling function. The scaling function is shown to have a crossover behaviour at $\delta = \delta^* \approx 4.0$ beyond which the results are independent of $\ell_{max}$. We also calculate the von Neumann entropy of entanglement which gives a larger asymptotic value compared to the quantum walk with unique step length even for large $\delta$, with negligible dependence on the initial condition.
quantum physics
The proliferation of low-cost Internet of Things (IoT) devices has led to a race between wireless security and channel attacks. Traditional cryptography requires high computational power and is not suitable for low-power IoT scenarios. Whilst recently developed physical layer security (PLS) can exploit common wireless channel state information (CSI), its sensitivity to channel estimation makes it vulnerable to attacks. In this work, we exploit an alternative common physics shared between IoT transceivers: the monitored channel-irrelevant physical networked dynamics (e.g., water/oil/gas/electrical signal-flows). Leveraging this, we propose for the first time graph layer security (GLS), which exploits the dependency in physical dynamics among network nodes for information encryption and decryption. A graph Fourier transform (GFT) operator is used to characterize such dependency in a graph-bandlimited subspace, which allows the generation of channel-irrelevant cipher keys by maximizing the secrecy rate. We evaluate our GLS against designed active and passive attackers, using the IEEE 39-Bus system. Results demonstrate that GLS is not reliant on wireless CSI and can combat attackers that have partial knowledge of the networked dynamics (realistic access to the full dynamics and critical nodes remains challenging). We believe this novel GLS has widespread applicability in secure health monitoring and for Digital Twins in adversarial radio environments.
electrical engineering and systems science
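The graph Fourier transform operator mentioned above can be sketched via the eigendecomposition of the combinatorial graph Laplacian: smooth ("graph-bandlimited") networked dynamics live in the low-eigenvalue modes, which is the kind of shared subspace GLS exploits. The toy graph and signal below are illustrative only; the key-generation and secrecy-rate optimization steps of the paper are not shown.

```python
import numpy as np

def graph_fourier_basis(adjacency):
    """Graph Fourier basis from the eigendecomposition of the combinatorial
    Laplacian L = D - A; low eigenvalues correspond to smooth graph signals."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    return eigvals, eigvecs          # columns of eigvecs are the GFT modes

def gft(signal, eigvecs):
    return eigvecs.T @ signal        # forward graph Fourier transform

def igft(coeffs, eigvecs):
    return eigvecs @ coeffs          # inverse transform

# A networked dynamic that is smooth over a small graph lives in the first few
# GFT modes, so nodes monitoring it can agree on those shared coefficients.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
lam, U = graph_fourier_basis(A)
x = U[:, :2] @ np.array([1.0, 0.5])          # a 2-bandlimited graph signal
print(np.round(gft(x, U), 6))                # energy confined to the first 2 modes
```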
We develop the geometric description of submanifolds in Newton--Cartan spacetime. This provides the necessary starting point for a covariant spacetime formulation of Galilean-invariant hydrodynamics on curved surfaces. We argue that this is the natural geometrical framework to study fluid membranes in thermal equilibrium and their dynamics out of equilibrium. A simple model of fluid membranes that only depends on the surface tension is presented and, extracting the resulting stresses, we show that perturbations away from equilibrium yield the standard result for the dispersion of elastic waves. We also find a generalisation of the Canham--Helfrich bending energy for lipid vesicles that takes into account the requirements of thermal equilibrium.
high energy physics theory
The formation of circumstellar disks is investigated using three-dimensional resistive magnetohydrodynamic simulations, in which the initial prestellar cloud has a misaligned rotation axis with respect to the magnetic field. We examine the effects of (i) the initial angle difference between the global magnetic field and the cloud rotation axis ($\theta_0$) and (ii) the ratio of the thermal to gravitational energy ($\alpha_0$). We study $16$ models in total and calculate the cloud evolution until $\sim \! 5000$ yr after protostar formation. Our simulation results indicate that an initial non-zero $\theta_0$ ($> 0$) promotes the disk formation but tends to suppress the outflow driving, for models that are moderately gravitationally unstable, $\alpha_0 \lesssim 1$. In these models, a large-sized rotationally-supported disk forms and a weak outflow appears, in contrast to a smaller disk and strong outflow in the aligned case ($\theta_0 = 0$). Furthermore, we find that when the initial cloud is highly unstable with small $\alpha_0$, the initial angle difference $\theta_0$ does not significantly affect the disk formation and outflow driving.
astrophysics
We present a novel technique for estimating disk parameters (the centre and the radius) from its 2D image. It is based on a maximum likelihood approach utilising both edge-pixel coordinates and the image intensity gradients. We emphasise the following advantages of our likelihood model. It has closed-form formulae for parameter estimation, and therefore requires fewer computational resources than iterative algorithms. The likelihood model naturally distinguishes the outer and inner annulus edges. The proposed technique was evaluated on both synthetic and real data.
electrical engineering and systems science
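For comparison with the closed-form estimator described above, here is a standard non-iterative baseline: the algebraic (Kasa) circle fit, which also yields centre and radius in closed form from edge-pixel coordinates via linear least squares. It ignores the intensity-gradient weighting that the paper's likelihood model adds; the synthetic edge pixels below are assumptions.

```python
import numpy as np

def fit_circle(xs, ys):
    """Closed-form algebraic (Kasa) circle fit from edge-pixel coordinates:
    solve x^2 + y^2 = 2*a*x + 2*b*y + c by linear least squares, then
    centre = (a, b) and radius = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs**2 + ys**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a**2 + b**2)
    return (a, b), radius

# Synthetic noisy edge pixels on a disk boundary.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
cx, cy, r = 40.0, 55.0, 17.5
xs = cx + r * np.cos(theta) + rng.normal(scale=0.3, size=theta.size)
ys = cy + r * np.sin(theta) + rng.normal(scale=0.3, size=theta.size)
print(fit_circle(xs, ys))   # close to ((40, 55), 17.5)
```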
Generative Adversarial Networks (GANs) have been used in several machine learning tasks such as domain transfer, super resolution, and synthetic data generation. State-of-the-art GANs often use tens of millions of parameters, making them expensive to deploy for applications in low SWAP (size, weight, and power) hardware, such as mobile devices, and for applications with real-time requirements. We have found no prior work aimed at reducing the number of parameters used in GANs. Therefore, we propose a method to compress GANs using knowledge distillation techniques, in which a smaller "student" GAN learns to mimic a larger "teacher" GAN. We show that the distillation methods used on MNIST, CIFAR-10, and Celeb-A datasets can compress teacher GANs at ratios of 1669:1, 58:1, and 87:1, respectively, while retaining the quality of the generated images. From our experiments, we observe a qualitative limit for GAN compression. Moreover, we observe that, with a fixed parameter budget, compressed GANs outperform GANs trained using standard training methods. We conjecture that this is partially owing to the optimization landscape of over-parameterized GANs, which allows efficient training using alternating gradient descent. Thus, training an over-parameterized GAN followed by our proposed compression scheme provides a high quality generative model with a small number of parameters.
computer science
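A minimal sketch of the generator-distillation idea above: a small "student" generator is trained to mimic a frozen, larger "teacher" generator on shared latent codes using a pixel-wise loss. Real GAN distillation pipelines (including, presumably, the paper's) combine such a term with adversarial or perceptual objectives; the architectures and sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 784

# Teacher: pretrained and frozen (random weights here, purely for illustration).
teacher = nn.Sequential(nn.Linear(latent_dim, 1024), nn.ReLU(),
                        nn.Linear(1024, img_dim), nn.Tanh())
# Student: far fewer parameters.
student = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                        nn.Linear(128, img_dim), nn.Tanh())

for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=2e-4)
for step in range(100):
    z = torch.randn(64, latent_dim)                       # shared latent codes
    loss = nn.functional.mse_loss(student(z), teacher(z)) # mimic the teacher's output
    opt.zero_grad(); loss.backward(); opt.step()

t_params = sum(p.numel() for p in teacher.parameters())
s_params = sum(p.numel() for p in student.parameters())
print(f"compression ratio ~{t_params / s_params:.0f}:1")
```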
Treatment switching in a randomized controlled trial is said to occur when a patient randomized to one treatment arm switches to another treatment arm during follow-up. This can occur at the point of disease progression, whereby patients in the control arm may be offered the experimental treatment. It is widely known that failure to account for treatment switching can seriously dilute the estimated effect of treatment on overall survival. In this paper, we aim to account for the potential impact of treatment switching in a re-analysis evaluating the treatment effect of Nucleoside Reverse Transcriptase Inhibitors (NRTIs) on a safety outcome (time to first severe or worse sign or symptom) in participants receiving a new antiretroviral regimen that either included or omitted NRTIs in the Optimized Treatment That Includes or Omits NRTIs (OPTIONS) trial. We propose an estimator of a treatment causal effect under a structural cumulative survival model (SCSM) that leverages randomization as an instrumental variable to account for selective treatment switching. Unlike Robins' accelerated failure time model often used to address treatment switching, the proposed approach avoids the need for artificial censoring for estimation. We establish that the proposed estimator is uniformly consistent and asymptotically Gaussian under standard regularity conditions. A consistent variance estimator is also given, and a simple resampling approach provides uniform confidence bands for the causal difference comparing treatment groups over time on the cumulative intensity scale. We develop an R package named "ivsacim" implementing all proposed methods, freely available to download from CRAN. We examine the finite-sample performance of the estimator via extensive simulations.
statistics