The ATLAS and CMS collaborations of the LHC have observed that the Higgs boson decays into the bottom quark-antiquark pair, and have also established that the Higgs coupling with the top quark-antiquark pair is instrumental in one of the modes for Higgs production. This underlines the discovery of the Yukawa force at the LHC. We demonstrate the impact of this discovery on the Higgs properties that are related to the dynamics of electroweak symmetry breaking. We show that these measurements have considerably squeezed the allowed window for new physics contributing to the Higgs couplings with the weak gauge bosons and the third generation quarks. The expected constraints at the HL-LHC and future Higgs factories are also shown. We project these constraints on the parameter space of a few motivated scenarios beyond the Standard Model. We group them into two broad categories: the composite Higgs and its RS dual, and various types of multi-Higgs models. The latter category includes models with singlet scalars, Type I, II and BGL-type two-Higgs doublet models, and models with scalar triplets a la Georgi and Machacek.
high energy physics phenomenology
We present a general solution for correlators of external boundary operators in black hole states of Jackiw-Teitelboim gravity. We use the Hilbert space constructed using the particle-with-spin interpretation of the Jackiw-Teitelboim action, which consists of wavefunctions defined on Lorentzian $AdS_2$. The density of states of the gravitational system appears in the amplitude for a boundary particle to emit and reabsorb matter. Up to self-interactions of matter, a general correlator can be reduced in an energy basis to a product of amplitudes for interactions and Wilson polynomials mapping between boundary and bulk interactions.
high energy physics theory
The Baksan Experiment on Sterile Neutrino (BEST) is presently at the stage of production of the artificial neutrino source Cr-51; the gallium exposure will start in July and proceed for three months. While aiming specifically at investigating the Gallium neutrino anomaly (SAGE and GALLEX experiments), BEST can do more, and it is tempting to estimate its ability to test the sterile-neutrino explanation of the reactor antineutrino anomalies. We observe a moderate sensitivity to the region in model parameter space (sterile neutrino mass and mixing with the active electron neutrino) outlined by the old reactor antineutrino anomaly and the best fit of the DANSS experiment, while the region favored by Neutrino-4 falls right in the BEST ballpark. In particular, by analyzing the SAGE+GALLEX and Neutrino-4 $\chi^2$ distributions we find that the Neutrino-4 results are fully consistent with the Gallium anomaly; the significance of the combined anomaly almost reaches the 4$\sigma$ level. If BEST confirms the Neutrino-4 results, the joint analysis will indicate more than 5$\sigma$ evidence for a sterile neutrino of eV-scale mass.
high energy physics phenomenology
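For reference, short-baseline analyses of this kind are typically fit with the effective two-flavour (3+1) electron-neutrino survival probability; the expression below is the standard textbook form and is not taken from the paper itself:

\[
P_{ee}(L,E) \;\simeq\; 1 - \sin^{2}2\theta_{ee}\,\sin^{2}\!\left(\frac{1.27\,\Delta m^{2}_{41}\,[\mathrm{eV}^{2}]\; L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]}\right),
\]

so that the gallium capture-rate deficit and the reactor/Neutrino-4 spectral ratios constrain the same pair of parameters $(\Delta m^{2}_{41}, \sin^{2}2\theta_{ee})$.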
We consider the off-shell momentum space Green's functions in closed superstring field theory. Recently in arXiv:1810.07197, the off-shell Green's functions -- after explicitly removing contributions of massless states -- have been shown to be analytic on a domain (to be called the LES domain) in complex external momentum variables. Analyticity of off-shell Green's functions in local QFTs without massless states in the primitive domain is a well-known result. Using complex Lorentz transformations and Bochner's theorem allows us to extend the LES domain to a larger subset of the primitive domain. For the 2-, 3- and 4-point functions, the full primitive domain is recovered. For the 5-point function, we are not able to obtain the full primitive domain analytically; only a large part of it is recovered. While this problem arises also for higher-point functions, it is expected to be only a technical issue.
high energy physics theory
The quantum field theoretical description of coherence in the oscillations of particles, especially neutrinos, is a long-standing problem in particle physics. In this talk, several inconsistencies of the standard approach to particle oscillations will be explained, together with how they are resolved in a process-independent manner by a novel approach inspired by the Bardeen--Cooper--Schrieffer theory of superconductivity and the Nambu--Jona-Lasinio model. The formalism leads to corrections to the neutrino oscillation probability originally written by Pontecorvo and Gribov; however, the standard probability is validated in the ultrarelativistic neutrino limit. The massive neutrino states are interpreted as quasiparticles on a vacuum condensate of "Cooper pairs" of massless flavour neutrinos. The newly defined oscillating particle states are for neutrino oscillations what the Klauder--Sudarshan--Glauber coherent states are for quantum optics.
high energy physics phenomenology
We study the optical properties of gold nanoparticles coated with a nematic liquid crystal whose director field is distributed around the nanoparticle according to the anchoring conditions at the surface of the nanoparticle. The distribution of the nematic liquid crystal is obtained by minimization of the corresponding Frank free-energy functional, whilst the optical response is calculated by the discrete-dipole approximation. We find, in particular, that the anisotropy of the nematic liquid-crystal coating has little effect on the (isotropic) optical response of the nanoparticle. However, for strong anchoring of the nematic liquid-crystal molecules on the surface of the nanoparticle, the inhomogeneity of the coating, which is manifested by a ring-type singularity (disclination or Saturn ring), produces an enhancement of the extinction cross section over the entire visible spectrum.
condensed matter
We develop the theoretical foundation of model comparison for ergodic stochastic differential equation (SDE) models and extend the applicable scope of the conventional Bayesian information criterion. In contrast to previous studies, we suppose that the candidate models are possibly misspecified, and we consider SDEs driven by both Wiener and pure-jump L\'{e}vy noise. Based on the asymptotic behavior of the marginal quasi-log likelihood, Schwarz-type statistics and a stepwise model selection procedure are proposed. We also prove the model selection consistency of the proposed statistics with respect to an optimal model. Numerical experiments support our theoretical findings.
mathematics
In this work, we combine two different ranking methods together with several other predictors in a joint random forest approach for the scores of soccer matches. The first ranking method is based on the bookmaker consensus; the second estimates ability parameters that best reflect the current strength of the teams. The proposed combined approach is then applied to the data from the two previous FIFA Women's World Cups in 2011 and 2015. Finally, based on the resulting estimates, the FIFA Women's World Cup 2019 is simulated repeatedly and winning probabilities are obtained for all teams. The model clearly favors the defending champion USA ahead of the host France.
statistics
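A minimal sketch of the kind of combination described above: two ranking-based covariates plus a further predictor feed a random forest for per-team match scores. The feature names and the synthetic data are illustrative only and are not the covariates or data used in the paper.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_matches = 500
X = np.column_stack([
    rng.normal(size=n_matches),  # bookmaker-consensus ranking difference (team A - team B)
    rng.normal(size=n_matches),  # ability-parameter difference from the ranking model
    rng.normal(size=n_matches),  # additional predictor, e.g. FIFA rank difference
])
y = rng.poisson(lam=np.exp(0.3 * X[:, 0] + 0.2 * X[:, 1]))  # goals scored by team A

forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
print(forest.predict(X[:5]))  # expected goals for the first five (synthetic) matches

Repeated tournament simulation then amounts to sampling scores from such predictions for every fixture and propagating the results through the tournament tree.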
In simple card games, cards are dealt one at a time and the player guesses each card sequentially. We study problems where feedback (e.g. correct/incorrect) is given after each guess. For decks with repeated values (as in blackjack, where suits do not matter) the optimal strategy differs from the "greedy strategy" (of guessing a most likely card each round). Further, both optimal and greedy strategies are far too complicated for real-time use by human players. Our main results show that simple heuristics perform close to optimal.
mathematics
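A minimal simulation of the greedy strategy mentioned above, in the simpler complete-feedback variant where the dealt card is revealed after each guess; the correct/incorrect feedback settings analysed in the paper require more careful belief tracking and are not reproduced here.

import random
from collections import Counter

def greedy_complete_feedback(values=13, copies=4, seed=0):
    rng = random.Random(seed)
    deck = [v for v in range(values) for _ in range(copies)]
    rng.shuffle(deck)
    remaining = Counter(deck)          # counts of values not yet dealt
    correct = 0
    for card in deck:
        guess = remaining.most_common(1)[0][0]  # guess a most likely remaining value
        correct += (guess == card)
        remaining[card] -= 1           # complete feedback: the dealt card is revealed
        if remaining[card] == 0:
            del remaining[card]
    return correct

print(greedy_complete_feedback())      # number of correct guesses out of 52 cards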
Galaxies can be classified as passive ellipticals or star-forming discs. Ellipticals dominate at the high end of the mass range, and therefore there must be a mechanism responsible for the quenching of star-forming galaxies. This could be due either to secular processes linked to the mass and star formation of galaxies or to external processes linked to the surrounding environment. In this paper, we analytically model the processes that govern galaxy evolution and quantify their contributions. We have specifically studied the effects of mass quenching, gas stripping, and mergers on galaxy quenching. To achieve this, we first assumed a set of differential equations that describe the processes that shape galaxy evolution. We then modelled the parameters of these equations by maximising the likelihood. These equations describe the evolution of galaxies individually, but the parameters of the equations are constrained by matching the extrapolated intermediate-redshift galaxies with the low-redshift galaxy population. In this study, we modelled the processes that change star formation and stellar mass in massive galaxies from the GAMA survey between z~0.4 and the present. We identified and quantified the contributions from mass quenching, gas stripping, and mergers to galaxy quenching. The quenching timescale is on average 1.2 Gyr, and a closer look reveals support for the slow-then-rapid quenching scenario. The major merger rate of galaxies is about once per 10~Gyr, while the rate of ram-pressure stripping is significantly higher. In galaxies with decreasing star formation, we show that star formation is lost to fast quenching mechanisms such as ram-pressure stripping, countered by mergers, at a rate of about 41% Gyr$^{-1}$, and to mass quenching at 49% Gyr$^{-1}$. (abridged)
astrophysics
Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans. After capturing these features within a theoretical framework, we establish their widespread existence in standard datasets. Finally, we present a simple setting where we can rigorously tie the phenomena we observe in practice to a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data.
statistics
Video indexing approaches such as visual concept classification and person recognition are essential to enable fine-grained semantic search in large-scale video archives such as the historical video collection of the former German Democratic Republic (GDR) maintained by the German Broadcasting Archive (DRA). Typically, a lexicon of visual concepts has to be defined for semantic search. However, the definition of visual concepts can be more or less subjective due to individually differing judgments of annotators, which may have an impact on annotation quality and subsequently on the training of supervised machine learning methods. In this paper, we analyze the inter-coder agreement for historical TV data of the former GDR for visual concept classification and person recognition. The inter-coder agreement is evaluated for a group of expert as well as non-expert annotators in order to determine differences in annotation homogeneity. Furthermore, correlations between visual recognition performance and inter-annotator agreement are measured. In this context, information about image quantity and agreement is used to predict average precision for concept classification. Finally, the influence of expert vs. non-expert annotations acquired in the study is used to evaluate person recognition.
computer science
We examine two aspects of the mathematical basis for two-tier voting systems, such as that of the Council of the European Union. These aspects concern the use of square-root weights and the choice of quota. Square-root weights originate in the Penrose square-root system, which assumes that votes are cast independently and uniformly at random, and is based around the concept of equality of influence of the voters across the Union. There are (at least) two distinct definitions of influence in current use in probability theory, namely, absolute and conditional influence. These are in agreement when the underlying random variables are independent, but not generally. We review their possible implications for two-tier voting systems, especially in the context of the so-called collective bias model. We show that the two square-root laws invoked by Penrose are unified through the use of conditional influence. In an elaboration of the square-root system, Slomczynski and Zyczkowski have proposed an exact value for the quota $q=q^*$ to be achieved in a successful vote of a two-tier system, and they have presented numerical and theoretical evidence in its support. We indicate some numerical and mathematical issues arising in the use of a Gaussian (or normal) approximation in this context, and we propose that other values of $q$ may be as good if not better than $q^*$. We discuss certain aspects of the relationship between theoreticians and politicians in the design of a two-tier voting system, and we reach the conclusion that the choice of quota in the square-root system is an issue for politicians informed by theory.
mathematics
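The first square-root law referred to above rests on a standard calculation: with $n$ voters ($n$ odd) casting independent fair-coin votes, the probability that a given voter is pivotal is the central binomial term,

\[
\Pr(\text{pivotal}) \;=\; \binom{n-1}{(n-1)/2}\,2^{-(n-1)} \;\sim\; \sqrt{\frac{2}{\pi n}} \qquad (n\to\infty),
\]

so equalizing the influence of citizens across member states with populations $n_i$ leads to council weights proportional to $\sqrt{n_i}$.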
We study a $Z_2 \times Z'_2$ symmetric 3-Higgs Doublet Model (3HDM), wherein two of the doublets are inert and one is active (thus denoted in literature as I(2+1)HDM), yielding a two-component Dark Matter (DM) sector. The two DM candidates emerge as the lightest scalar component of a different inert doublet, each with a different odd discrete parity, and cooperate to achieve the correct relic density. When a sufficient mass difference exists between the two DM candidates, it is possible to test the presence of both in present and/or forthcoming facilities, as the corresponding masses are typically at the electroweak scale. Specifically, the light DM component can be probed by the nuclear recoil energy in direct detection experiments while the heavy DM component appears through the photon flux in indirect detection experiments. In fact, the DM mass sensitivity that the two experimental set-ups can achieve should be adequate to establish the presence of two different DM signals. This result has been obtained in the presence of a thorough theoretical analysis of the stability conditions of the vacuum structure emerging from our I(2+1)HDM construct, ensuring that the model configurations adopted are physical, and of up-to-date constraints coming from data collected by both space and ground experiments, ensuring that the coupling and mass spectra investigated are viable phenomenologically.
high energy physics phenomenology
We show that a cyclic unitary process can extract work from the thermodynamic equilibrium state of an engineered quantum dissipative process. Systems in the equilibrium states of these processes serve as batteries, storing energy. The dissipative process that brings the battery to the active equilibrium state is driven by an agent that couples the battery to thermal systems. The second law of thermodynamics imposes a work cost for the process; however, no work is needed to keep the battery in that charged state. We consider simple examples of these batteries and discuss particular cases in which the extracted work or the efficiency of the process is maximal.
quantum physics
The assumption of normality has underlain much of the development of statistics, including spatial statistics, and many tests have been proposed. In this work, we focus on the multivariate setting and first provide a synopsis of the recent advances in multivariate normality tests for i.i.d. data, with emphasis on the skewness and kurtosis approaches. We show through simulation studies that some of these tests cannot be used directly for testing normality of spatial data, since the multivariate sample skewness and kurtosis measures, such as Mardia's measures, deviate from their theoretical values under Gaussianity due to dependence, and some related tests exhibit inflated type I error, especially when the spatial dependence gets stronger. We briefly review the few existing tests under dependence (time or space), and then propose a new multivariate normality test for spatial data by accounting for the spatial dependence of the observations in the test statistic. The new test aggregates univariate Jarque-Bera (JB) statistics, which combine skewness and kurtosis measures, for individual variables. The asymptotic variances of sample skewness and kurtosis for standardized observations are derived given the dependence structure of the spatial data. Consistent estimators of the asymptotic variances are then constructed for finite samples. The test statistic is easy to compute, without any smoothing involved, and it is asymptotically $\chi^2_{2p}$ under normality, where $p$ is the number of variables. The new test provides good control of the type I error and high empirical power, especially for large sample sizes.
statistics
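For orientation, the classical i.i.d. univariate Jarque-Bera statistic that the proposed test aggregates is sketched below; the paper's contribution is to replace the i.i.d. asymptotic variances $6/n$ and $24/n$ with variances derived under the spatial dependence structure, which this sketch does not attempt.

import numpy as np

def jarque_bera(x):
    """Classical i.i.d. JB statistic; asymptotically chi^2 with 2 df under normality."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = (x - x.mean()) / x.std()
    skew = np.mean(z**3)
    excess_kurt = np.mean(z**4) - 3.0
    return n * (skew**2 / 6.0 + excess_kurt**2 / 24.0)

rng = np.random.default_rng(1)
print(jarque_bera(rng.normal(size=1000)))       # small under normality
print(jarque_bera(rng.exponential(size=1000)))  # large under a skewed alternative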
Anomaly detection in visual data refers to the problem of differentiating abnormal appearances from normal cases. Supervised approaches have been successfully applied to different domains, but require an abundance of labeled data. Due to the nature of how anomalies occur and their underlying generating processes, it is hard to characterize and label them. Recent advances in deep generative models have sparked interest in applying such methods for unsupervised anomaly detection, and they have shown promising results in medical and industrial inspection domains. In this work we evaluate a crucial part of the unsupervised visual anomaly detection pipeline, namely normal appearance modeling, as well as the ability to reconstruct the closest-looking normal and tumor samples. We adapt and evaluate different high-resolution state-of-the-art generative models from the face synthesis domain and demonstrate their superiority over currently used approaches on the challenging domain of digital pathology. A multifold improvement in image synthesis is demonstrated in terms of the quality and resolution of the generated images, validated also against the supervised model.
electrical engineering and systems science
Photo-emission spectroscopy directly probes individual electronic states, ranging from single excitations to high-energy satellites, which simultaneously represent multiple quasiparticles (QPs) and encode information about electronic correlation. First-principles description of the spectra requires an efficient and accurate treatment of all many-body effects. This is especially challenging for inner valence excitations where the single QP picture breaks down. Here, we provide the full valence spectra of small closed-shell molecules, exploring the independent and interacting quasiparticle regimes, computed with the fully-correlated adaptive sampling configuration interaction (ASCI) method. We critically compare these results to calculations with the many-body perturbation theory, based on the $GW$ and vertex corrected $GW\Gamma$ approaches. The latter explicitly accounts for two-QP quantum interactions, which have been often neglected. We demonstrate that for molecular systems, the vertex correction universally improves the theoretical spectra, and it is crucial for accurate prediction of QPs as well as capturing the rich satellite structures of high-energy excitations. $GW\Gamma$ offers a unified description across all relevant energy scales. Our results suggest that the multi-QP regime corresponds to dynamical correlations, which can be described via perturbation theory.
physics
One of the dramatic trends at the intersection of computing and healthcare has been patients' increased access to medical information, ranging from self-tracked physiological data to genetic data, tests, and scans. Increasingly however, patients and clinicians have access to advanced machine learning-based tools for diagnosis, prediction, and recommendation based on large amounts of data, some of it patient-generated. Consequently, just as organizations have had to deal with a "Bring Your Own Device" (BYOD) reality in which employees use their personal devices (phones and tablets) for some aspects of their work, a similar reality of "Bring Your Own Algorithm" (BYOA) is emerging in healthcare with its own challenges and support demands. BYOA is changing patient-clinician interactions and the technologies, skills and workflows related to them. In this paper we argue that: (1) BYOA is changing the patient-clinician relationship and the nature of expert work in healthcare, and (2) better patient-clinician-information-interpretation relationships can be facilitated with solutions that integrate technological and organizational perspectives.
computer science
This study proposes a new mediated asymmetric semi-quantum key distribution (MASQKD) protocol. With the help of a dishonest third party, two classical participants, who have only limited asymmetric quantum capabilities, can share a secret key with each other. The proposed protocol is shown to be immune to several well-known attacks. Furthermore, an improved MASQKD protocol is proposed in which the quantum capabilities of one participant can be further reduced.
quantum physics
Categorical data are often observed as counts resulting from a fixed number of trials in which each trial consists of making one selection from a prespecified set of categories. The multinomial distribution serves as a standard model for such clustered data but assumes that trials are independent and identically distributed. Extensions such as Dirichlet-multinomial and random-clumped multinomial can express positive association, where trials are more likely to result in a common category due to membership in a common cluster. This work considers a Conway-Maxwell-multinomial (CMM) distribution for modeling clustered categorical data exhibiting positively or negatively associated trials. The CMM distribution features a dispersion parameter which allows it to adapt to a range of association levels and includes several recognizable distributions as special cases. We explore properties of CMM, illustrate its flexible characteristics, identify a method to efficiently compute maximum likelihood (ML) estimates, present simulations of small sample properties under ML estimation, and demonstrate the model via several data analysis examples.
statistics
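A brute-force sketch of one common parameterization of the CMM probability mass function, in which the multinomial coefficient is raised to a dispersion power $\nu$; the exact parameterization and the efficient ML computation described above are specific to the paper and may differ in detail from this sketch.

import math
from itertools import product

def cmm_pmf(x, p, nu):
    """pmf assumed proportional to (multinomial coefficient)**nu * prod(p_i**x_i);
    normalized here by brute force over all count vectors summing to n."""
    n, k = sum(x), len(p)
    def weight(counts):
        coef = math.factorial(n)
        for c in counts:
            coef //= math.factorial(c)
        w = float(coef) ** nu
        for c, prob in zip(counts, p):
            w *= prob ** c
        return w
    norm = sum(weight(c) for c in product(range(n + 1), repeat=k) if sum(c) == n)
    return weight(x) / norm

# nu = 1 recovers the ordinary multinomial; nu != 1 tilts the trials
# toward positive or negative association
print(cmm_pmf((2, 1, 0), p=(0.5, 0.3, 0.2), nu=1.0))   # 3 * 0.5**2 * 0.3 = 0.225
print(cmm_pmf((2, 1, 0), p=(0.5, 0.3, 0.2), nu=0.5))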
We revisit and generalize the concept of composite likelihood as a method for making probabilistic inference by aggregation of multiple Bayesian agents, thereby defining a class of predictive models which we call composite Bayesian. This perspective gives insight into how to choose the weights associated with the composite likelihood, either a priori or via learning; in the latter case, they may be tuned so as to minimize prediction cross-entropy, yielding an easy-to-solve convex problem. We argue that composite Bayesian inference is a middle way between generative and discriminative models that trades off interpretability against prediction performance, both of which are crucial to many artificial intelligence tasks.
statistics
We consider a single server queueing system with two classes of jobs: eager jobs with small sizes that require service to begin almost immediately upon arrival, and tolerant jobs with larger sizes that can wait for service. While blocking probability is the relevant performance metric for the eager class, the tolerant class seeks to minimize its mean sojourn time. In this paper, we discuss the performance of each class under dynamic scheduling policies, where the scheduling of both classes depends on the instantaneous state of the system. This analysis is carried out under a certain fluid limit, where the arrival rate and service rate of the eager class are scaled to infinity, holding the offered load constant. Our performance characterizations reveal a (dynamic) pseudo-conservation law that ties the performance of both the classes to the standalone blocking probabilities of the eager class. Further, the performance is robust to other specifics of the scheduling policies. We also characterize the Pareto frontier of the achievable region of performance vectors under the same fluid limit, and identify a (two-parameter) class of Pareto-complete scheduling policies.
computer science
Given finite metric spaces $(X, d_X)$ and $(Y, d_Y)$, we investigate the persistent homology $PH_*(X \times Y)$ of the Cartesian product $X \times Y$ equipped with the sum metric $d_X + d_Y$. Interpreting persistent homology as a module over a polynomial ring, one might expect the usual K\"unneth short exact sequence to hold. We prove that it holds for $PH_0$ and $PH_1$, and we illustrate with the Hamming cube $\{0,1\}^k$ that it fails for $PH_n,\,\, n \geq 2$. For $n = 2$, the prediction for $PH_2(X \times Y)$ from the expected K\"unneth short exact sequence has a natural surjection onto $PH_2(X \times Y)$. We compute the nontrivial kernel of this surjection for the splitting of Hamming cubes $\{0,1\}^k = \{0,1\}^{k-1} \times \{0,1\}$. For all $n \geq 0$, the interleaving distance between the prediction for $PH_n(X \times Y)$ and the true persistent homology is bounded above by the minimum of the diameters of $X$ and $Y$. As preliminary results of independent interest, we establish an algebraic K\"unneth formula for simplicial modules over the ring $\kappa[\mathbb{R}_+]$ of polynomials with coefficients in a field $\kappa$ and exponents in $\mathbb{R}_+ = [0,\infty)$, as well as a K\"unneth formula for the persistent homology of $\mathbb{R}_+$-filtered simplicial sets -- both of these K\"unneth formulas hold in all homological dimensions $n \geq 0$.
mathematics
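For reference, the "expected" sequence referred to above is modelled on the classical Künneth short exact sequence for chain complexes of free modules over a PID $R$,

\[
0 \longrightarrow \bigoplus_{i+j=n} H_i(C)\otimes_R H_j(D) \longrightarrow H_n(C\otimes_R D) \longrightarrow \bigoplus_{i+j=n-1}\operatorname{Tor}_1^R\!\bigl(H_i(C),H_j(D)\bigr) \longrightarrow 0,
\]

with $R$ replaced by the ring $\kappa[\mathbb{R}_+]$ and the chain complexes by the persistence modules of the abstract; the results above show in which homological degrees this analogy does and does not survive.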
Given the $\beta$ functions of the closed string sigma model up to one loop in $\alpha'$, the effective action implements the condition $\beta=0$ to preserve conformal symmetry at the quantum level. One of the more powerful and striking results of string theory is that this effective action contains Einstein gravity as an emergent dynamics in space-time. We show, from the $\beta$ functions and their relation to the equations of motion of the effective action, that the differential identities [1] are the Noether identities associated with the effective action and its gauge symmetries. From here, we reconstruct the gauge and space-time symmetries of the effective action. In turn, we can show that the differential identities are the contracted Bianchi identities of the field strength $H$ and the Riemann tensor $R$. Next, we apply the same ideas to DFT. Taking as a starting point that the generalized $\beta$ functions in DFT are proportional to the equations of motion, we construct the generalized differential identities in DFT. Relating the Noether identities with the contracted Bianchi identities of DFT, we are able to reconstruct the generalized gauge and space-time symmetries. Finally, we recover the original $\beta$ functions, effective action, differential identities, and symmetries when we turn off the $\tilde x$ space-time coordinates of DFT.
high energy physics theory
We study M-theory compactified on twisted 7-tori with $G_2$-holonomy. The effective 4d supergravity has 7 chiral multiplets, each with a unit logarithmic K\"ahler potential. We propose superpotentials based on octonions and the Fano plane, codifying the error-correcting Hamming (7,4) code. The corresponding 7-moduli models have Minkowski vacua with one flat direction. We also propose superpotentials based on octonions/error-correcting codes for Minkowski vacua models with two flat directions. We update phenomenological $\alpha$-attractor models of inflation with $3\alpha=7,6,5,4,3,2,1$, based on inflation along these flat directions. These inflationary models reproduce the benchmark targets for detecting B-modes, predicting 7 different values of $r = 12\alpha/N_{e}^{2}$ in the range $10^{-2}\gtrsim r \gtrsim 10^{-3}$, to be explored by future cosmological observations.
high energy physics theory
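A quick tabulation of the quoted prediction $r = 12\alpha/N_e^2$ for the seven values of $3\alpha$; the choice $N_e = 55$ e-folds is illustrative and not fixed by the abstract.

# r = 12*alpha / N_e**2 for 3*alpha = 7, 6, ..., 1, with an assumed N_e = 55 e-folds
N_e = 55
for three_alpha in range(7, 0, -1):
    r = 12 * (three_alpha / 3) / N_e**2
    print(f"3*alpha = {three_alpha}:  r = {r:.2e}")
# the output spans roughly 9e-3 down to 1e-3, matching the quoted range 1e-2 > r > 1e-3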
We consider non-oscillatory explanations of the low-energy excess of events detected by MiniBooNE. We present a systematic search for phenomenological scenarios based on new physics which can produce the excess. We define scenarios as series of transitions and processes which connect interactions of accelerated protons in the target with single-shower events in the MiniBooNE detector. The key elements of the scenarios are the production and decay of new light $\mathcal{O}(\text{keV}-100\,\text{MeV})$ particles (fermions and/or bosons). We find about $20$ scenarios with the minimal possible number of new particles and interaction points. In practice, they all reduce to a few generic scenarios, and in this way we develop the effective theory of the MiniBooNE excess. We consider tests of the scenarios with near detectors in the neutrino experiments T2K ND280, NO$\nu$A and MINER$\nu$A, as well as in NOMAD and PS191. The scenarios immediately connect the MiniBooNE excess and the expected numbers of new-physics events in these detectors. We compute the expected numbers of events as functions of the lifetimes and masses of the new particles and confront them with the corresponding experimental bounds. We indicate scenarios that are excluded or strongly disfavored by one or several experiments. Given our general approach, this work can also be regarded as the effective theory of new physics at accelerator-based neutrino experiments, being relevant for future projects such as DUNE.
high energy physics phenomenology
We benchmark a selection of semiclassical and perturbative dynamics techniques by investigating the correlated evolution of a cavity-bound atomic system to assess their applicability to problems involving strong light-matter interactions in quantum cavities. The model system of interest features spontaneous emission, interference, and strong-coupling behaviour, and necessitates the consideration of vacuum fluctuations and correlated light-matter dynamics. We compare a selection of approximate dynamics approaches including fewest-switches surface hopping, multi-trajectory Ehrenfest dynamics, linearized semiclassical dynamics, and partially linearized semiclassical dynamics. Furthermore, investigating self-consistent perturbative methods, we apply the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy in the second Born approximation. With the exception of fewest-switches surface hopping, all methods provide a reasonable level of accuracy for the correlated light-matter dynamics, with most methods lacking the capacity to fully capture interference effects.
quantum physics
We recognize certain special hypergeometric motives, related to and inspired by the discoveries of Ramanujan more than a century ago, as arising from Asai $L$-functions of Hilbert modular forms.
mathematics
The Internet of Things (IoT) is an Internet-based environment of connected devices and applications. IoT creates an environment where physical devices and sensors are seamlessly combined into information nodes to deliver innovative and smart services that make human life easier and more efficient. The main objective of an IoT device network is to generate data, which are converted into useful information by the data analysis process; it also provides useful resources to end users. IoT resource management is a key challenge to ensure the quality of the end-user experience. Many IoT smart devices and technologies, such as sensors, actuators, RFID, UMTS, 3G, and GSM, are used to develop IoT networks. Cloud computing plays an important role in the deployment of these networks by providing physical resources as virtualized resources, consisting of memory, computation power, network bandwidth, virtualized systems, and device drivers, on a secure, pay-per-use basis. One of the major concerns of the cloud-based IoT environment is resource management, which ensures efficient resource utilization and load balancing, reduces SLA violations, and improves system performance by reducing operational cost and energy consumption. Many researchers have proposed IoT-based resource management techniques. The focus of this paper is to investigate these proposed resource allocation techniques and to find which parameters must be considered for improving resource allocation in IoT networks. Further, this paper also uncovers challenges and issues of cloud-based resource allocation for the IoT environment.
computer science
Efficiently scaling quantum networks to long ranges requires local processing nodes to perform basic computation and communication tasks. Trapped ions have demonstrated all the properties required for the construction of such a node, storing quantum information for up to 12 minutes, implementing deterministic high fidelity logic operations on one and two qubits, and ion-photon coupling. While most ions suitable for quantum computing emit photons in visible to near ultraviolet (UV) frequency ranges poorly suited to long-distance fibre optical based networking, recent experiments in frequency conversion provide a technological solution by shifting the photons to frequencies in the telecom band with lower attenuation for fused silica fibres. Encoding qubits in frequency rather than polarization makes them more robust against decoherence from thermal or mechanical noise due to the conservation of energy. To date, ion-photonic frequency qubit entanglement has not been directly shown. Here we demonstrate a frequency encoding ion-photon entanglement protocol in $^{171}$Yb$^+$ with correlations equivalent to 92.4(8)% fidelity using a purpose-built UV hyperfine spectrometer. The same robustness against decoherence precludes our passive optical setup from rotating photonic qubits to unconditionally demonstrate entanglement, however it is sufficient to allow us to benchmark the quality of ion-UV photon correlations prior to frequency conversion to the telecom band.
quantum physics
We present a study of an exclusion process on a peculiar network topology consisting of two intersecting lanes competing for the particles of a reservoir with finite capacity. To provide a theoretical grounding for our findings, we exploit a mean-field approximation along with domain-wall theory. The stationary properties of the system, including phase transitions, density profiles and the position of the domain wall, are derived analytically. Under similar dynamical rules, the particles of the two lanes interact only at the intersection site. The symmetry of the system is maintained as long as the number of particles does not exceed the total number of sites. Beyond this, however, the symmetry-breaking phenomenon occurs, resulting in the appearance of asymmetric phases that persist even for an infinite number of particles. The complexity of the phase diagram shows a non-monotonic behaviour with increasing number of particles in the system. A bulk-induced shock appears in a symmetric phase, whereas a boundary-induced shock is observed in both symmetric and asymmetric phases. By monitoring the location of the shock with increasing particle entry, we explain the possible phase transitions. The theoretical results are supported by extensive Monte Carlo simulations and explained using simple physical arguments.
condensed matter
Inspired by recent improved measurements of charm semileptonic decays at BESIII, we study a large set of $D(D_s)$-meson semileptonic decays where the hadron in the final state is one of $D^0$, $\rho$, $\omega$, $\eta^{(\prime)}$ in the case of $D^+$ decays, and $D^0$, $\phi$, $K^0$, $K^\ast(892)^0$, $\eta^{(\prime)}$ in the case of $D^+_s$ decays. The required hadronic form factors are computed in the full kinematical range of momentum transfer by employing the covariant confined quark model developed by us. A detailed comparison of the form factors with those from other approaches is provided. We calculate the decay branching fractions and their ratios, which show good agreement with available experimental data. We also give predictions for the forward-backward asymmetry and the longitudinal and transverse polarizations of the charged lepton in the final state.
high energy physics phenomenology
We survey the sensitivity of past and present neutrino experiments to MeV-GeV scale dark matter, and find that these experiments possess novel sensitivity that has not yet been fully explored. NO$\nu$A and BEBC are found to rule out the scalar thermal target for dark matter masses between 10 MeV and 100 MeV with existing data, while CHARM-II and MINER$\nu$A place somewhat weaker limits. These limits can be dramatically improved by off-axis searches using the NuMI beamline and the MicroBooNE, MiniBooNE or ICARUS detectors, and can even begin to probe the Majorana thermal target. We conclude that past and present neutrino facilities can search for light dark matter concurrently with their neutrino program and reach a sensitivity competitive with proposed future experiments.
high energy physics phenomenology
In the past decade, model-free reinforcement learning (RL) has provided solutions to challenging domains such as robotics. Model-based RL shows the prospect of being more sample-efficient than model-free methods in terms of agent-environment interactions, because the model enables extrapolation to unseen situations. In the more recent past, model-based methods have shown superior results compared to model-free methods in some challenging domains with non-linear state transitions. At the same time, it has become apparent that RL is not market-ready yet and that many real-world applications are going to require model-based approaches, because model-free methods are too sample-inefficient and show poor performance in early stages of training. The latter is particularly important in industry, e.g. in production systems that directly impact a company's revenue. This demonstrates the necessity for a toolbox to push the boundaries of model-based RL. While there is a plethora of toolboxes for model-free RL, model-based RL has received little attention in terms of toolbox development. Bellman aims to fill this gap and introduces the first thoroughly designed and tested model-based RL toolbox using state-of-the-art software engineering practices. Our modular approach enables the combination of a wide range of environment models with generic model-based agent classes that recover state-of-the-art algorithms. We also provide an experiment harness to compare both model-free and model-based agents in a systematic fashion w.r.t. user-defined evaluation metrics (e.g. cumulative reward). This paves the way for new research directions, e.g. investigating uncertainty-aware environment models that are not necessarily neural-network-based, or developing algorithms to solve industrially motivated benchmarks that share characteristics with real-world problems.
computer science
Three-dimensional computational techniques are widely used in extensive research for the investigation of human brain pathophysiology. Eddy current analysis can provide an indication of conductivity changes within a biological body. A significant obstacle to current trend analyses is the development of a numerically stable and efficient finite element scheme that performs well at low frequency and does not require a large number of degrees of freedom. Here, a custom finite element method (FEM) solver based on edge elements is proposed using the weakly coupled theory, which separates the solution into two steps. First, the background field (the magnetic vector potential on each edge) is calculated and stored. Then, the electric scalar potential on each node is obtained by FEM based on Galerkin formulations. Consequently, the electric field and eddy current distribution in the object can be obtained. This solver is more efficient than typical commercial solvers since it reduces the vector eddy current equation to a scalar one, and reduces the meshing domain to just the eddy current region. It can therefore tackle complex eddy current calculations for models with much larger numbers of elements, such as those encountered in eddy current computation in biological tissues. An example is presented with a realistic human brain mesh of 2 million elements. In addition, with this solver, the equivalent magnetic field induced by the excitation coil is applied directly, and therefore there is no need to mesh the excitation coil. In combination, these features significantly increase the efficiency of the solver.
physics
We develop a Markovian master equation in the Lindblad form that enables the efficient study of a wide range of open quantum many-body systems that would be inaccessible with existing methods. The validity of the master equation is based entirely on properties of the bath and the system-bath coupling, without any requirements on the level structure within the system itself. The master equation is derived using a Markov approximation that is distinct from that used in earlier approaches. We provide a rigorous bound for the error induced by this Markov approximation; the error is controlled by a dimensionless combination of intrinsic correlation and relaxation timescales of the bath. Our master equation is accurate on the same level of approximation as the Bloch-Redfield equation. In contrast to the Bloch-Redfield approach, our approach ensures preservation of the positivity of the density matrix. As a result, our method is robust, and can be solved efficiently using stochastic evolution of pure states (rather than density matrices). We discuss how our method can be applied to static or driven quantum many-body systems, and illustrate its power through numerical simulation of a spin chain that would be challenging to treat by existing methods.
condensed matter
We analysed an XMM-Newton plus a simultaneous Rossi X-ray Timing Explorer observation and a separate Suzaku observation of the neutron-star low-mass X-ray binary 4U 1728-34. We fitted the X-ray spectra with the self-consistent reflection model relxill. We found that the inclination angle of 4U 1728-34 is 49 degrees, consistent with the upper limit of 60 degrees deduced from the absence of eclipses or dips in this source. The inclination angle in the fit of the XMM-Newton/RXTE observation is larger than 85 degrees, which may be due to the possible calibration issues of the PN instrument in timing mode. We also found that the thermal emission from the accretion disc is not significant. This could be explained either by the relatively high column density of the interstellar medium along the line of sight to the source, which decreases the number of soft disc photons, or if most of the soft thermal photons from the disc are reprocessed in the corona. The ionisation parameter derived from the fits is larger than the value predicted in the framework of the standard reflection model, wherein the disc is irradiated by an X-ray source above the compact object. This inconsistency suggests that irradiation from the neutron star and its boundary layer may play an important role in the ionisation of the accretion disc, and hence in the reflection component in this source.
astrophysics
We introduce a general class of stochastic lattice gas models, and derive their fluctuating hydrodynamics description in the large-size limit under a local equilibrium hypothesis. The model consists of energetic particles on a lattice subject to exclusion interactions, which move and collide stochastically with energy-dependent rates. The resulting fluctuating hydrodynamics equations exhibit nonlinear coupled particle and energy transport, including particle currents due to temperature gradients (Soret effect) and energy flow due to concentration gradients (Dufour effect). The microscopic dynamical complexity is condensed into just two matrices of transport coefficients: the diffusivity matrix (or equivalently the Onsager matrix) generalizing the Fick-Fourier law, and the mobility matrix controlling current fluctuations. Both transport coefficients are coupled via a fluctuation-dissipation theorem, suggesting that the noise terms affecting the local currents have Gaussian properties. We further prove the positivity of entropy production in terms of the microscopic dynamics. The so-called kinetic exclusion process has as limiting cases two of the most paradigmatic models of nonequilibrium physics, namely the symmetric simple exclusion process of particle diffusion and the Kipnis-Marchioro-Presutti model of heat flow, making it the ideal testbed in which to further develop modern theories of nonequilibrium behavior.
condensed matter
The quark contribution to the QCD pressure, $P_q$, is evaluated up to next-to-leading order (NLO) within the renormalization group optimized perturbation theory (RGOPT) resummation approach. To evaluate the complete QCD pressure we simply add the perturbative NLO contribution from massless gluons to the resummed $P_q$. Despite this unsophisticated approximation, our results for $P = P_q +P_g$ at the central scale $M\sim 2\pi T$ show remarkable agreement with ab initio lattice predictions for $0.25 \lesssim T \lesssim 1 \, {\rm GeV}$. We also show that, by being imbued with RG properties, the RGOPT produces a drastic reduction of the embarrassing remnant scale dependence that plagues both standard thermal perturbative QCD and hard thermal loop perturbation theory (HTLpt) applications.
high energy physics phenomenology
We study the stress tensor multiplet four-point function in the 6d maximally supersymmetric $(2,0)$ $A_{N-1}$ and $D_N$ theories, which have no Lagrangian description, but in the large $N$ limit are holographically dual to weakly coupled M-theory on $AdS_7\times S^4$ and $AdS_7\times S^4/\mathbb{Z}_2$, respectively. We use the analytic bootstrap to compute the 1-loop correction to this holographic correlator coming from Witten diagrams with supergravity $R$ and the first higher derivative correction $R^4$ vertices, which is the first 1-loop correction computed for a non-Lagrangian theory. We then take the flat space limit and find precise agreement with the corresponding terms in the 11d M-theory S-matrix, some of which we compute for the first time using two-particle unitarity cuts.
high energy physics theory
We systematically investigate the effect of the anomalous magnetic moment (AMM) of quarks on magnetized QCD matter, including the magnetic susceptibility, the inverse magnetic catalysis around the critical temperature, and the neutral/charged pion and rho meson spectra under magnetic fields. The coupling of the dynamical AMM of quarks with the magnetic field causes a Zeeman splitting of the quark dispersion relation and thus changes the magnetic properties and meson mass spectra under magnetic fields. It is found that including the AMM of quarks cannot fully account for lattice results on magnetized matter: the AMM of quarks reduces the dynamical quark mass and thus causes the inverse magnetic catalysis around $T_c$. The neutral pion mass is very sensitive to the AMM; it decreases quickly with the magnetic field, while the charged pion mass shows a nonlinear behavior, i.e., it first increases linearly with the magnetic field and then saturates at strong magnetic fields. For the rho meson, it is observed that the AMM reduces the neutral rho meson mass for the different $s_z$ components, and reduces the $s_z=+1,0$ components of the charged rho meson mass but enhances the $s_z=-1$ component. The magnetic susceptibility at low temperature can be either positive or negative, depending on the AMM.
high energy physics phenomenology
Using a qubit to probe non-Gaussian noise environments is theoretically studied in the context of classical random telegraph processes. Protocols for control pulses are developed to effectively scan higher noise correlations, offering valuable information on the charge environment of the qubit. Specifically, the noise power spectrum and trispectrum are reconstructed simultaneously for a wide range of qubit-fluctuator coupling strengths, demonstrating the method's robustness. These protocols are readily testable in various qubit systems with well-developed quantum control, including quantum dot spins, superconducting qubits and NV centers in diamond.
condensed matter
The flux of high-energy neutrinos passing through the Earth is attenuated due to their interactions with matter. Their transmission probability is modulated by the neutrino interaction cross section and affects the arrival flux at the IceCube Neutrino Observatory, a cubic-kilometer neutrino detector embedded in the South Pole ice sheet. We present a measurement of the neutrino-nucleon cross section between 60 TeV and 10 PeV using the high-energy starting events (HESE) sample from IceCube with 7.5 years of data.
astrophysics
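The measurement exploits the standard absorption relation for neutrinos crossing the Earth; in the simplified form below, neutral-current and $\tau$ regeneration effects are neglected:

\[
P_{\rm trans}(E_\nu,\theta_z) \;\simeq\; \exp\!\left[-\,N_A\,\sigma_{\nu N}(E_\nu)\,X(\theta_z)\right],
\]

where $X(\theta_z)$ is the column depth of Earth matter in g cm$^{-2}$ along zenith angle $\theta_z$ and $N_A$ is Avogadro's number (nucleons per gram); comparing the zenith-angle distribution of the HESE events with this expectation constrains $\sigma_{\nu N}$.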
Width-based planning methods exploit the use of conjunctive goals for decomposing problems into subproblems of low width. However, algorithms like SIW fail when the goal is not serializable. In this work, we address this limitation of SIW by using a simple but powerful language for expressing problem decompositions introduced recently by Bonet and Geffner, called policy sketches. A policy sketch R consists of a set of Boolean and numerical features and a set of sketch rules that express how the values of these features are supposed to change. Like general policies, policy sketches are domain general, but unlike policies, the changes captured by sketch rules do not need to be achieved in a single step. We show that many planning domains that cannot be solved by SIW are provably solvable in low polynomial time with the SIW_R algorithm, the version of SIW that employs user-provided policy sketches. Policy sketches are thus shown to be a powerful language for expressing domain-specific knowledge in a simple and compact way and a convenient alternative to languages such as HTNs or temporal logics. Furthermore, policy sketches make it easy to express general problem decompositions and prove key properties like their complexity and width.
computer science
Building a machine learning solution in real-life applications often involves the decomposition of the problem into multiple models of various complexity. This has advantages in terms of overall performance, better interpretability of the outcomes, and easier model maintenance. In this work we propose a Bayesian framework to model the interaction amongst models in such a hierarchy. We show that the framework can facilitate stress testing of the overall solution, giving more confidence in its expected performance prior to active deployment. Finally, we test the proposed framework on a toy problem and financial fraud detection dataset to demonstrate how it can be applied for any machine learning based solution, regardless of the underlying modelling required.
computer science
Count data are often subject to underreporting, especially in infectious disease surveillance. We propose an approximate maximum likelihood method to fit count time series models from the endemic-epidemic class to underreported data. The approach is based on marginal moment matching where underreported processes are approximated through completely observed processes from the same class. Moreover, the form of the bias when underreporting is ignored or taken into account via multiplication factors is analysed. Notably, we show that this leads to a downward bias in model-based estimates of the effective reproductive number. A marginal moment matching approach can also be used to account for reporting intervals which are longer than the mean serial interval of a disease. The good performance of the proposed methodology is demonstrated in simulation studies. An extension to time-varying parameters and reporting probabilities is discussed and applied in a case study on weekly rotavirus gastroenteritis counts in Berlin, Germany.
statistics
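A minimal illustration, not the paper's model or estimator: latent counts from a simple autoregressive endemic-epidemic-type recursion are thinned binomially with reporting probability $\pi$. Rescaling the observed series by $1/\pi$ restores the mean but not the autocorrelation structure, which is the kind of bias the abstract refers to.

import numpy as np

rng = np.random.default_rng(42)
T, nu, lam, pi = 300, 2.0, 0.6, 0.3   # length, endemic mean, epidemic coefficient, reporting prob.
latent = np.zeros(T, dtype=np.int64)
for t in range(1, T):
    latent[t] = rng.poisson(nu + lam * latent[t - 1])
observed = rng.binomial(latent, pi)    # independent binomial underreporting

print(latent.mean(), observed.mean() / pi)            # means roughly agree
print(np.corrcoef(latent[:-1], latent[1:])[0, 1],     # lag-1 autocorrelation of latent counts ...
      np.corrcoef(observed[:-1], observed[1:])[0, 1]) # ... exceeds that of the observed counts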
An inflation model based on dimensional (de)construction of a massive gauge theory is proposed. The inflaton in this model is the "zero-mode" of a component of the massive gauge field in the (de)constructed extra dimensions. The inflaton potential originates from the gauge invariant Stueckelberg potential. At low energy, the field range of the inflaton is enhanced by a factor $N^{\frac{d}{2}}$ compared to the field range of the original fields in the model, where $d$ is the number of the (de)constructed extra dimensions and $N$ is the number of the lattice points in each (de)constructed dimension. This enhancement of the field range is used to achieve a trans-Planckian inflaton field excursion. The extension of the mechanism ``excursions through KK modes'' to the case of (de)constructed extra dimensions is also studied. The burst of particle productions by this mechanism may have observable consequences in a region of the model parameter space.
high energy physics theory
We present space-based ultraviolet/optical photometry and spectroscopy with the Swift Ultra-Violet/Optical Telescope and Hubble Space Telescope, respectively, along with ground-based optical photometry and spectroscopy and near-infrared spectroscopy of supernova SN2017erp. The optical light curves and spectra are consistent with a normal Type Ia supernova (SN Ia). Compared to previous photometric samples in the near-ultraviolet (NUV), SN2017erp has colors similar to the NUV-red category after correcting for Milky Way and host dust reddening. We find the difference between SN2017erp and the NUV-blue SN2011fe is not consistent with dust reddening alone but is similar to the SALT color law, derived from rest-frame UV photometry of higher redshift SNe Ia. This chromatic difference is dominated by the intrinsic differences in the UV and only a small contribution from the expected dust reddening. Differentiating the two can have important consequences for determining cosmological distances with rest-frame UV photometry. This spectroscopic series is important for analyzing SNe Ia with intrinsically redder NUV colors. We also show model comparisons suggesting that metallicity could be the physical difference between NUV-blue and NUV-red SNe Ia, with emission peaks from reverse fluorescence near 3000 Angstroms implying a factor of ten higher metallicity in the upper layers of SN2017erp compared to SN~2011fe. Metallicity estimates are very model dependent however, and there are multiple effects in the UV. Further models and UV spectra of SNe Ia are needed to explore the diversity of SNe Ia which show seemingly independent differences in the near-UV peaks and mid-UV flux levels.
astrophysics
We report the serendipitous discovery of HSC J0904$-$0102, a quadruply-lensed Lyman break galaxy (LBG) in the Survey of Gravitationally-lensed Objects in Hyper Suprime-Cam Imaging (SuGOHI). Owing to its point-like appearance, the source was thought to be a lensed active galactic nucleus. We obtained follow-up spectroscopic data with the Gemini Multi-Object Spectrographs on the Gemini South Telescope, which confirmed this to be a lens system. The deflecting foreground galaxy is a typical early-type galaxy at a high redshift of $z_{\ell} = 0.957$ with stellar velocity dispersion $\sigma_v=259\pm56$ km~s$^{-1}$. The lensed source is identified as an LBG at $z_{\rm s} = 3.403$, based on the sharp drop bluewards of Ly$\alpha$ and other absorption features. A simple lens mass model for the system, assuming a singular isothermal ellipsoid, yields an Einstein radius of $\theta_{\rm Ein} = 1.23^{\prime\prime}$ and a total mass within the Einstein radius of $M_{\rm Ein} = (5.55\pm 0.24) \times 10^{11}M_{\odot}$ corresponding to a velocity dispersion of $\sigma_{\rm SIE}= 283\pm 3$ km~s$^{-1}$, which is in good agreement with the value derived spectroscopically. The most isolated lensed LBG image has a magnification of $\sim 6.5$. In comparison with other lensed LBGs and typical $z\sim4$ LBG populations, HSC J0904$-$0102 is unusually compact, an outlier at $>2\sigma$ confidence. Together with a previously discovered SuGOHI lens, HSC J1152$+$0047, that is similarly compact, we believe that the HSC Survey is extending LBG studies down to smaller galaxy sizes.
astrophysics
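A back-of-the-envelope cross-check of the quoted numbers using the singular isothermal sphere relation $\theta_{\rm E} = 4\pi(\sigma/c)^2 D_{\rm ls}/D_{\rm s}$ and an assumed Planck18 cosmology; the paper's adopted cosmology and its ellipsoidal lens model may differ in detail.

import numpy as np
from astropy import units as u
from astropy.constants import c
from astropy.cosmology import Planck18 as cosmo

z_l, z_s = 0.957, 3.403
sigma = 283 * u.km / u.s

D_s = cosmo.angular_diameter_distance(z_s)
D_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s)

theta_E = (4 * np.pi * (sigma / c)**2 * D_ls / D_s).decompose()  # dimensionless, i.e. radians
print((theta_E.value * u.rad).to(u.arcsec))  # ~1.2 arcsec, consistent with the quoted 1.23"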
We report on the detection of the [CII] 157.7 $\mu$m emission from the Lyman break galaxy (LBG) MACS0416_Y1 at z = 8.3113, by using the Atacama Large Millimeter/submillimeter Array (ALMA). The luminosity ratio of [OIII] 88 $\mu$m (from previous campaigns) to [CII] is 9.31 $\pm$ 2.6, indicative of hard interstellar radiation fields and/or a low covering fraction of photo-dissociation regions. The emission of [CII] is cospatial to the 850 $\mu$m dust emission (90 $\mu$m rest-frame, from previous campaigns), however the peak [CII] emission does not agree with the peak [OIII] emission, suggesting that the lines originate from different conditions in the interstellar medium. We fail to detect continuum emission at 1.5 mm (160 $\mu$m rest-frame) down to 18 $\mu$Jy (3$\sigma$). This nondetection places a strong limit on the dust spectrum, considering the 137 $\pm$ 26 $\mu$Jy continuum emission at 850 $\mu$m. This suggests an unusually warm dust component (T $>$ 80 K, 90% confidence limit), and/or a steep dust-emissivity index ($\beta_{\rm dust}$ $>$ 2), compared to galaxy-wide dust emission found at lower redshifts (typically T $\sim$ 30 - 50 K, $\beta_{\rm dust}$ $\sim$ 1 - 2). If such temperatures are common, this would reduce the required dust mass and relax the dust production problem at the highest redshifts. We therefore warn against the use of only single-wavelength information to derive physical properties, recommend a more thorough examination of dust temperatures in the early Universe, and stress the need for instrumentation that probes the peak of warm dust in the Epoch of Reionization.
astrophysics
Colorectal cancer (CRC) grading is typically carried out by assessing the degree of gland formation within histology images. To do this, it is important to consider the overall tissue micro-environment by assessing the cell-level information along with the morphology of the gland. However, current automated methods for CRC grading typically utilise small image patches and therefore fail to incorporate the entire tissue micro-architecture for grading purposes. To overcome the challenges of CRC grading, we present a novel cell-graph convolutional neural network (CGC-Net) that converts each large histology image into a graph, where each node is represented by a nucleus within the original image and cellular interactions are denoted as edges between these nodes according to node similarity. The CGC-Net utilises nuclear appearance features in addition to the spatial location of nodes to further boost the performance of the algorithm. To enable nodes to fuse multi-scale information, we introduce Adaptive GraphSage, which is a graph convolution technique that combines multi-level features in a data-driven way. Furthermore, to deal with redundancy in the graph, we propose a sampling technique that removes nodes in areas of dense nuclear activity. We show that modeling the image as a graph enables us to effectively consider a much larger image (around 16$\times$ larger) than traditional patch-based approaches and model the complex structure of the tissue micro-environment. We construct cell graphs with an average of over 3,000 nodes on a large CRC histology image dataset and report state-of-the-art results as compared to recent patch-based as well as contextual patch-based techniques, demonstrating the effectiveness of our method.
electrical engineering and systems science
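Since the abstract above describes a concrete pipeline (nuclei as nodes, similarity-based edges, GraphSage-style mean aggregation), a minimal sketch may help fix ideas. This is not the authors' CGC-Net: the purely spatial k-nearest-neighbour edge rule, the feature dimensions and the single aggregation layer below are illustrative assumptions only.

import numpy as np

def build_cell_graph(centroids, k=5):
    # Toy edge rule: connect each nucleus to its k nearest spatial neighbours
    # (CGC-Net additionally uses appearance similarity to define edges).
    n = centroids.shape[0]
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        adj[i, np.argsort(d[i])[:k]] = True
    return adj | adj.T                              # undirected graph

def mean_aggregation_layer(h, adj, W_self, W_neigh):
    # One GraphSAGE-style convolution: combine each node's own features with
    # the mean of its neighbours' features, then apply a ReLU nonlinearity.
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    neigh_mean = (adj.astype(float) @ h) / deg
    return np.maximum(h @ W_self + neigh_mean @ W_neigh, 0.0)

rng = np.random.default_rng(0)
centroids = rng.uniform(0, 1000, size=(300, 2))     # nuclear centroids (pixels)
features = rng.normal(size=(300, 16))               # appearance features per nucleus
adj = build_cell_graph(centroids, k=5)
h = mean_aggregation_layer(features, adj,
                           rng.normal(size=(16, 32)), rng.normal(size=(16, 32)))
graph_embedding = h.mean(axis=0)                    # pooled representation for grading

In the full method such layers are stacked and their multi-level outputs combined (the Adaptive GraphSage of the abstract) before the grade is predicted from the pooled graph representation.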
The recent 21st Century Cures Act propagates innovations to accelerate the discovery, development, and delivery of 21st century cures. It includes the broader application of Bayesian statistics and the use of evidence from clinical expertise. An example of the latter is the use of trial-external (or historical) data, which promises more efficient or ethical trial designs. We propose a Bayesian meta-analytic approach to leveraging historical data for time-to-event endpoints, which are common in oncology and cardiovascular diseases. The approach is based on a robust hierarchical model for piecewise exponential data. It allows for various degrees of between-trial heterogeneity and for leveraging individual as well as aggregate data. An ovarian carcinoma trial and a non-small-cell lung cancer trial illustrate methodological and practical aspects of leveraging historical data for the analysis and design of time-to-event trials.
statistics
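As a point of reference for the data model mentioned above, the piecewise exponential likelihood can be written in a few lines. This sketch covers only the likelihood, not the robust hierarchical (meta-analytic) prior of the paper; the interval cut-points, event counts and exposures below are invented.

import numpy as np

def piecewise_exp_loglik(log_hazards, events, exposure):
    # Poisson-form log-likelihood of piecewise exponential survival data.
    # events[j]   : number of events observed in time interval j
    # exposure[j] : total person-time at risk accumulated in interval j
    # log_hazards : log of the constant hazard in each interval
    lam = np.exp(log_hazards)
    return float(np.sum(events * log_hazards - lam * exposure))

# Toy data: three intervals, e.g. 0-6, 6-12 and 12-24 months.
events   = np.array([12, 9, 4])
exposure = np.array([310.0, 240.0, 150.0])     # person-months at risk

# The maximum-likelihood hazards are events/exposure in each interval; a
# Bayesian analysis would instead place (hierarchical) priors on them and
# share information across the historical and current trials.
mle = np.log(events / exposure)
print(piecewise_exp_loglik(mle, events, exposure))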
Learning of the cell-load in radio access networks (RANs) has to be performed within a short time period. Therefore, we propose a learning framework that is robust against uncertainties resulting from the need for learning based on a relatively small training sample set. To this end, we incorporate prior knowledge about the cell-load in the learning framework. For example, an inherent property of the cell-load is that it is monotonic in downlink (data) rates. To obtain additional prior knowledge we first study the feasible rate region, i.e., the set of all vectors of user rates that can be supported by the network. We prove that the feasible rate region is compact. Moreover, we show the existence of a Lipschitz function that maps feasible rate vectors to cell-load vectors. With these results in hand, we present a learning technique that guarantees a minimum approximation error in the worst-case scenario by using prior knowledge and a small training sample set. Simulations in the network simulator NS3 demonstrate that the proposed method exhibits better robustness and accuracy than standard multivariate learning techniques, especially for small training sample sets.
computer science
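A minimal sketch of how a known Lipschitz bound can be turned into a worst-case-optimal estimator from a small training set is given below. The Lipschitz constant, the toy rate-to-load map and the sample sizes are invented for illustration, and the paper's framework additionally exploits monotonicity of the cell-load in the rates, which this sketch ignores.

import numpy as np

def lipschitz_interpolate(x_train, y_train, x_query, L):
    # Worst-case optimal estimate of a function known to be L-Lipschitz,
    # given noiseless samples: the midpoint of the tightest upper and lower
    # envelopes that are compatible with the data.
    d = np.linalg.norm(x_query[:, None, :] - x_train[None, :, :], axis=-1)
    upper = np.min(y_train[None, :] + L * d, axis=1)
    lower = np.max(y_train[None, :] - L * d, axis=1)
    return 0.5 * (upper + lower)

rng = np.random.default_rng(1)
rates = rng.uniform(0.0, 10.0, size=(20, 3))           # user rate vectors (training)
load  = np.minimum(1.0, 0.05 * rates.sum(axis=1))       # toy monotone cell-load
query = rng.uniform(0.0, 10.0, size=(5, 3))
print(lipschitz_interpolate(rates, load, query, L=0.05 * np.sqrt(3)))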
The possibility to exploit quantum coherence to strongly enhance the efficiency of charge transport in solid state devices working at ambient conditions would pave the way to disruptive technological applications. In this work, we tackle the problem of the quantum transport of photogenerated electronic excitations subject to dephasing and on-site Coulomb interactions. We show that the transport to a continuum of states representing metallic collectors can be optimized by exploiting the "superradiance" phenomenon. We demonstrate that this is a coherent effect which is robust against dephasing and electron-electron interactions in a parameter range that is compatible with actual implementation in few-monolayer transition-metal-oxide (TMO) heterostructures.
quantum physics
In this paper we consider the homogenization of the evolution problem associated with a jump process that involves three different smooth kernels that govern the jumps to/from different parts of the domain. We assume that the spatial domain is divided into a sequence of two subdomains $A_n \cup B_n$ and we have three different smooth kernels, one that controls the jumps from $A_n$ to $A_n$, a second one that controls the jumps from $B_n$ to $B_n$ and a third one that governs the interactions between $A_n$ and $B_n$. Assuming that $\chi_{A_n} (x) \to X(x)$ weakly in $L^\infty$ (and then $\chi_{B_n} (x) \to 1-X(x)$ weakly in $L^\infty$) as $n \to \infty$ and that the initial condition is given by a density $u_0$ in $L^2$, we show that there is a homogenized limit system in which the three kernels and the limit function $X$ appear. When the initial condition is a delta at one point, $\delta_{\bar{x}}$ (this corresponds to the process that starts at $\bar{x}$), we show that there is convergence along subsequences such that $\bar{x} \in A_{n_j}$ or $\bar{x} \in B_{n_j}$ for every $n_j$ large enough. We also provide a probabilistic interpretation of this evolution equation in terms of a stochastic process that describes the movement of a particle that jumps in $\Omega$ according to the three different kernels and show that the underlying process converges in distribution to a limit process associated with the limit equation. We focus our analysis on Neumann-type boundary conditions and briefly describe at the end how to deal with Dirichlet boundary conditions.
mathematics
Thermalization of generic closed quantum systems is well described by the Eigenstate Thermalization Hypothesis (ETH). One expects, however, that the presence of conservation laws may somewhat alter the adherence to the ETH. Here we see that in the presence of a single conservation law, for given physical initial states, thermalization occurs without the system fulfilling many predictions of the ETH. We find that certain local physical observables behave non-ergodically, even for non-integrable Hamiltonians, and yet an ETH-like relation, with non-random off-diagonals, is derived for observable matrix elements. This leads to a scaling law for equilibrium fluctuations that differs from that expected by the ETH. Further, we analytically compute the time-dependence of the decay to equilibrium, showing that it is proportional to the survival probability of the initial state. We further discuss (the lack of) scrambling of quantum information in this regime, and calculate the long-time limit of the out-of-time-ordered correlator. Relating our results to previous numerical observations of initial state dependent scrambling, we uncover the mechanism behind this feature.
quantum physics
We consider thermal Wightman correlators in a relativistic quantum field theory in the limit where the spatial momenta of the insertions become large while their frequencies stay fixed. We show that, in this limit, the size of these correlators is bounded by $e^{-\beta R}$, where $R$ is the radius of the smallest sphere that contains the polygon formed by the momenta. We show that perturbative quantum field theories can saturate this bound through suitably high-order loop diagrams. We also consider holographic theories in $d$-spacetime dimensions, where we show that the leading two-point function of generalized free-fields saturates the bound in $d = 2$ and is below the bound for $d > 2$. We briefly discuss interactions in holographic theories and conclude with a discussion of several open problems.
high energy physics theory
We calculate chiral susceptibilities in (2+1)-flavour QCD for different masses of the light quarks using the functional renormalisation group (fRG) approach to first-principles QCD. We follow the evolution of the chiral susceptibilities with decreasing masses as obtained from both the light-quark and the reduced quark condensate. The latter compares very well with recent results from the HotQCD collaboration for pion masses $m_{\pi}\gtrsim 100\,\text{MeV}$. For smaller pion masses, the fRG and lattice results are still consistent. In particular, the estimates for the chiral critical temperature are in very good agreement. We close by discussing different extrapolations to the chiral limit.
high energy physics phenomenology
The recently developed hadron resonance gas model with multicomponent hard-core repulsion is used to address and resolve the long-standing problem of describing the light nuclear cluster multiplicities, including the hyper-triton, measured by the STAR Collaboration, known as the hyper-triton chemical freeze-out puzzle. An unprecedentedly accurate description is obtained for the hadronic and other light nuclear cluster data measured by STAR at the collision energy $\sqrt{s_{NN}} =200$ GeV and by ALICE at $\sqrt{s_{NN}} =2.76$ TeV. This success is achieved by applying the new strategy of analyzing the light nuclear cluster data and by using the value for the hard-core radius of the (anti-)$\Lambda$ hyperons found in earlier work. One of the most striking results of the present work is that, for the most probable scenario of chemical freeze-out at the STAR energy, the obtained parameters allow one to simultaneously reproduce the values of the experimental ratios $S_3$ and $\bar{S}_3$, which were not included in the fit.
high energy physics phenomenology
We discuss a method that employs a multilayer perceptron to detect deviations from a reference model in large multivariate datasets. Our data analysis strategy does not rely on any prior assumption on the nature of the deviation. It is designed to be sensitive to small discrepancies that arise in datasets dominated by the reference model. The main conceptual building blocks were introduced in Ref. [1]. Here we make decisive progress in the algorithm implementation and we demonstrate its applicability to problems in high energy physics. We show that the method is sensitive to putative new physics signals in di-muon final states at the LHC. We also compare our performance on toy problems with that of alternative methods proposed in the literature.
high energy physics phenomenology
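To make the general idea concrete, here is a generic classifier-based two-sample test in the same spirit: train a small network to separate the data sample from a reference sample and turn its output into a likelihood-ratio-style statistic. This is explicitly not the loss function or test statistic of the paper (or of Ref. [1]); the toy distributions, network size and the statistic below are simplified stand-ins.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(20000, 2))                # reference model
data = np.vstack([rng.normal(0.0, 1.0, size=(9800, 2)),
                  rng.normal([2.5, 0.0], 0.3, size=(200, 2))])    # small "signal"

# Train a classifier to distinguish "data" (label 1) from "reference" (label 0).
X = np.vstack([reference, data])
y = np.r_[np.zeros(len(reference)), np.ones(len(data))]
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300).fit(X, y)

# Convert the classifier output into an estimated log density ratio and sum it
# over the data sample; large values flag a deviation from the reference model.
p = np.clip(clf.predict_proba(data)[:, 1], 1e-6, 1 - 1e-6)
prior = len(data) / (len(data) + len(reference))
t = 2.0 * np.sum(np.log(p / (1 - p)) - np.log(prior / (1 - prior)))
print(t)

In practice the distribution of such a statistic under the reference hypothesis would be calibrated with toy datasets before any significance is quoted.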
Dielectric laser acceleration is a versatile scheme to accelerate and control electrons with the help of femtosecond laser pulses in nanophotonic structures. We demonstrate here the generation of a train of electron pulses with individual pulse durations as short as $270\pm80$ attoseconds (FWHM), measured in an indirect fashion, based on two subsequent dielectric laser interaction regions connected by a free-space electron drift section, all on a single photonic chip. In the first interaction region (the modulator), an energy modulation is imprinted on the electron pulse. During free propagation, this energy modulation evolves into a charge density modulation, which we probe in the second interaction region (the analyzer). These results will lead to new ways of probing ultrafast dynamics in matter and are essential for future laser-based particle accelerators on a photonic chip.
physics
We determine the quark mass matrix in terms of a small expansion parameter $\sqrt{\varepsilon}$, which correctly gives all the quark masses and the CKM matrix elements at the electroweak (EW) scale, and obtain a progenitor form at the GUT scale by running the EW-scale mass matrix. Finally, a possible texture form for the progenitor quark mass matrix is suggested.
high energy physics phenomenology
Topological phases of matter are among the most intriguing research directions in Condensed Matter Physics. It is known that superconductivity induced on a topological insulator's surface can lead to exotic Majorana modes, the main ingredient of many proposed quantum computation schemes. In this context, iron-based high critical temperature superconductors are among the main candidates to host such exotic phenomenon. Moreover, it is commonly believed that the Coulomb interaction is vital for the magnetic and superconducting properties of these systems. This work bridges these two perspectives and shows that the Coulomb interaction can also drive a trivial superconductor with orbital degrees of freedom into the topological phase. Namely, we show that above some critical value of the Hubbard interaction, identified by the change in entropy behaviour, the system simultaneously develops spiral spin order, a highly unusual triplet amplitude in superconductivity, and, remarkably, Majorana fermions at the edges of the system.
condensed matter
Electron spin resonance (ESR) spectroscopy has broad applications in physics, chemistry and biology. As a complementary tool, zero-field ESR (ZF-ESR) spectroscopy has been proposed for decades and shown its own benefits for investigating the electron fine and hyperfine interaction. However, the ZF-ESR method has been rarely used due to the low sensitivity and the requirement of much larger samples than conventional ESR. In this work, we present a method for deploying ZF-ESR spectroscopy at the nanoscale by using a highly sensitive quantum sensor, the nitrogen-vacancy center in diamond. We also measure the nanoscale ZF-ESR spectrum of a few P1 centers in diamond, and show that the hyperfine coupling constant can be directly extracted from the spectrum. This method opens the door to practical applications of ZF-ESR spectroscopy, such as investigation of the structure and polarity information in spin-modified organic and biological systems.
quantum physics
In the present paper we construct plans orthogonal through the block factor (POTBs). We describe procedures for adding blocks as well as factors to an initial plan and thus generate a bigger plan. Using these procedures we construct POTBs for symmetrical experiments with factors having three or more levels. We also construct a series of plans inter-class orthogonal through the block factor for two-level factors.
statistics
Balancing numbers $n$ are originally defined as the solutions of the Diophantine equation $1+2+\cdots+(n-1)=(n+1)+\cdots+(n+r)$, where $r$ is called the balancer corresponding to the balancing number $n$. By a slight modification of the definition, $n$ is a cobalancing number with cobalancer $r$ if $1+2+\cdots+n=(n+1)+\cdots+(n+r)$. Let $B_n$ denote the $n^{th}$ balancing number and $b_n$ denote the $n^{th}$ cobalancing number. Then $8B_n^2+1$ and $8b_n^2+8b_n+1$ are perfect squares. The $n^{th}$ Lucas-balancing number $C_n$ and the $n^{th}$ Lucas-cobalancing number $c_n$ are the positive square roots of $8B_n^2+1$ and $8b_n^2+8b_n+1$, respectively. In this paper, we establish some trigonometric-type identities and some arithmetic properties concerning the parity of balancing, cobalancing, Lucas-balancing and Lucas-cobalancing numbers.
mathematics
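The defining equations can be checked directly by brute force. The small script below recovers the first few balancing and cobalancing numbers from the definitions quoted above and verifies that $8B_n^2+1$ and $8b_n^2+8b_n+1$ are perfect squares; the search range is an arbitrary choice.

from math import isqrt

def is_balancing(n):
    # 1 + 2 + ... + (n-1) == (n+1) + ... + (n+r) for some balancer r >= 1
    left = n * (n - 1) // 2
    r, right = 0, 0
    while right < left:
        r += 1
        right += n + r
    return right == left

def is_cobalancing(n):
    # 1 + 2 + ... + n == (n+1) + ... + (n+r) for some cobalancer r >= 1
    left = n * (n + 1) // 2
    r, right = 0, 0
    while right < left:
        r += 1
        right += n + r
    return right == left

balancing   = [n for n in range(2, 50000) if is_balancing(n)]
cobalancing = [n for n in range(1, 50000) if is_cobalancing(n)]
print(balancing)      # [6, 35, 204, 1189, 6930, 40391]
print(cobalancing)    # [2, 14, 84, 492, 2870, 16730]

# Lucas-balancing and Lucas-cobalancing numbers are the integer square roots:
for B in balancing:
    assert isqrt(8 * B * B + 1) ** 2 == 8 * B * B + 1
for b in cobalancing:
    assert isqrt(8 * b * b + 8 * b + 1) ** 2 == 8 * b * b + 8 * b + 1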
Ultrasound Computed Tomography (USCT) has great potential for 3D quantitative imaging of acoustic breast tissue properties. Typical devices include high-frequency transducers, which makes tomography techniques based on numerical wave propagation simulations computationally challenging, especially in 3D. Therefore, despite the finite-frequency nature of ultrasonic waves, ray-theoretical approaches to transmission tomography are still widely used. This work introduces finite-frequency traveltime tomography to medical ultrasound. In addition to being computationally tractable for 3D imaging at high frequencies, the method has two main advantages: (1) It correctly accounts for the frequency dependence and volumetric sensitivity of traveltime measurements, which are related to off-ray-path scattering and diffraction. (2) It naturally enables out-of-plane imaging and the construction of 3D images from 2D slice-by-slice acquisition systems. Our method rests on the availability of calibration data in water, used to linearize the forward problem and to provide analytical expressions of cross-correlation traveltime sensitivity. As a consequence of the finite frequency content, sensitivity is distributed in multiple Fresnel volumes, thereby providing out-of-plane sensitivity. To improve computational efficiency, we develop a memory-efficient implementation by encoding the Jacobian operator with a 1D parameterization, which allows us to extend the method to large-scale domains. We validate our tomographic approach using lab measurements collected with a 2D setup of transducers and using a cylindrically symmetric phantom. We then demonstrate its applicability for 3D reconstructions by simulating a slice-by-slice acquisition system using the same dataset.
physics
In this paper, we study a slant submanifold of a complex space form. We also obtain an integral formula of Simons' type for a Kaehlerian slant submanifold in a complex space form and apply it to prove our main result.
mathematics
In this work, we present a deep reinforcement learning based method to solve the problem of robotic grasping using visuomotor feedback. The use of a deep learning based approach reduces the complexity caused by the use of hand-designed features. Our method uses an off-policy reinforcement learning framework to learn the grasping policy. We use the double deep Q-learning framework along with a novel Grasp-Q-Network to output grasp probabilities used to learn grasps that maximize the pick success. We propose a visual servoing mechanism that uses a multi-view camera setup that observes the scene which contains the objects of interest. We performed experiments using a Baxter Gazebo simulated environment as well as on the actual robot. The results show that our proposed method outperforms the baseline Q-learning framework and increases grasping accuracy by adopting a multi-view model in comparison to a single-view model.
computer science
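The double deep Q-learning update mentioned above can be illustrated independently of the Grasp-Q-Network architecture and the robot setup, both of which are specific to the paper. The batch size, number of discrete grasp actions and discount factor below are arbitrary illustrative choices.

import numpy as np

def double_dqn_targets(rewards, done, q_online_next, q_target_next, gamma=0.99):
    # Double DQN target: the online network selects the best next action and
    # the (slowly updated) target network evaluates it; this decoupling
    # reduces the overestimation bias of vanilla Q-learning.
    best_actions = np.argmax(q_online_next, axis=1)
    q_eval = q_target_next[np.arange(len(rewards)), best_actions]
    return rewards + gamma * (1.0 - done) * q_eval

# Toy batch: 4 transitions, 6 discrete grasp actions.
rng = np.random.default_rng(0)
rewards = np.array([0.0, 1.0, 0.0, 1.0])         # 1 = successful pick
done    = np.array([0.0, 1.0, 0.0, 1.0])         # episode ends after a pick attempt
q_online_next = rng.normal(size=(4, 6))           # Q(s', .) from the online network
q_target_next = rng.normal(size=(4, 6))           # Q(s', .) from the target network
targets = double_dqn_targets(rewards, done, q_online_next, q_target_next)
# The online network is then regressed towards `targets` for the actions taken.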
Let $G=(V,\overrightarrow{E})$ be a graph with some prescribed orientation for the edges and let $\Gamma$ be an arbitrary group. If $f\in \mathrm{Inv}(\Gamma)$ is an anti-involution, then the skew gain graph $\Phi_f=(G,\Gamma,\varphi,f)$ is such that the skew gain function $\varphi:\overrightarrow{E}\rightarrow \Gamma$ satisfies $\varphi(\overrightarrow{vu})=f(\varphi(\overrightarrow{uv}))$. In this paper, we study two different types of matrices, the Laplacian and the $g$-Laplacian, for a skew gain graph where the skew gains are taken from the multiplicative group $F^\times$ of a field $F$ of characteristic zero. Defining an incidence matrix, we also prove the matrix tree theorem for skew gain graphs in the case of the $g$-Laplacian matrix.
mathematics
In this paper, we review our non-Bloch band theory in one-dimensional non-Hermitian tight-binding systems. In our theory, it is shown that in non-Hermitian systems, the Brillouin zone is determined so as to reproduce continuum energy bands in a large open chain. By using simple models, we explain the concept of the non-Bloch band theory and the method to calculate the Brillouin zone. In particular, for the non-Hermitian Su-Schrieffer-Heeger model, the bulk-edge correspondence can be established between the topological invariant defined from our theory and existence of the topological edge states.
condensed matter
An automated treatment of iterated integrals based on letters induced by real-valued quadratic forms and Kummer--Poincar\'e letters is presented. These quantities emerge in analytic single and multi--scale Feynman diagram calculations. To compactify representations, one wishes to apply general properties of these quantities in computer-algebraic implementations. We provide the reduction to basis representations, expansions, analytic continuation and numerical evaluation of these quantities.
high energy physics theory
Generative probability models are widely used for speaker verification (SV). However, generative models lack discriminative feature selection ability. As a hypothesis test, SV can be regarded as a binary classification task, which can be designed as a Siamese neural network (SiamNN) with discriminative training. However, in most discriminative training for SiamNN, only the distribution of pair-wise sample distances is considered, and the additional discriminative information in the joint distribution of samples is ignored. In this paper, we propose a novel SiamNN with consideration of the joint distribution of samples. The joint distribution of samples is first formulated based on a joint Bayesian (JB) based generative model, then a SiamNN is designed with dense layers to approximate the factorized affine transforms used in the JB model. By initializing the SiamNN with the learned model parameters of the JB model, we further train the model parameters with pair-wise samples as a binary discrimination task for SV. We carried out SV experiments on the speakers in the wild (SITW) and VoxCeleb corpora. Experimental results showed that our proposed model improves performance by a large margin compared with state-of-the-art models for SV.
electrical engineering and systems science
We analyze the localization properties of the disordered Hubbard model in the presence of a synthetic magnetic field. An analysis of level spacing ratio shows a clear transition from ergodic to many-body localized phase. The transition shifts to larger disorder strengths with increasing magnetic flux. Study of dynamics of local correlations and entanglement entropy indicates that charge excitations remain localized whereas spin degree of freedom gets delocalized in the presence of the synthetic flux. This residual ergodicity is enhanced by the presence of the magnetic field with dynamical observables suggesting incomplete localization at large disorder strengths. Furthermore, we examine the effect of quantum statistics on the local correlations and show that the long-time spin oscillations of a hard-core boson system are destroyed as opposed to the fermionic case.
condensed matter
We report our findings on the perturbative structure of ${\cal N}=4$ supersymmetric Yang-Mills (SYM) theory in the infrared sector by computing inclusive scattering cross sections of on-shell particles. We use half-BPS, energy-momentum tensor and Konishi operators to produce singlet states in the scattering processes to probe the soft and the collinear properties of the cross sections. By appropriately defining the infrared safe observables, we obtain collinear splitting functions up to second order in the perturbation theory. The splitting functions and the infrared finite cross sections demonstrate several interesting connections with those in the perturbative QCD. We also determine the process independent soft distribution function up to third order in the perturbation theory and show that it is universal {\it i.e.} independent of the operators as well as the external states. Interestingly, the soft distribution function in ${\cal N}=4$ SYM theory matches exactly with the leading transcendental part of the corresponding one in the QCD. This enables us to predict the third order soft plus virtual cross section for the production of the on-shell singlet states.
high energy physics theory
Group-IV color centers in diamond have attracted significant attention as solid-state spin qubits because of their excellent optical and spin properties. Among these color centers, the tin-vacancy (SnV$^{\,\textrm{-}}$) center is of particular interest because its large ground-state splitting enables long spin coherence times at temperatures above 1$\,$K. However, color centers typically suffer from inhomogeneous broadening, which can be exacerbated by nanofabrication-induced strain, hindering the implementation of quantum nodes emitting indistinguishable photons. Although strain and Raman tuning have been investigated as promising techniques to overcome the spectral mismatch between distinct group-IV color centers, other approaches need to be explored to find methods that can offer more localized control without sacrificing emission intensity. Here, we study electrical tuning of SnV$^{\,\textrm{-}}$ centers in diamond via the direct-current Stark effect. We demonstrate a tuning range beyond 1.7$\,$GHz. We observe both quadratic and linear dependence on the applied electric field. We also confirm that the tuning effect we observe is a result of the applied electric field and is distinct from thermal tuning due to Joule heating. Stark tuning is a promising avenue toward overcoming detunings between emitters and enabling the realization of multiple identical quantum nodes.
physics
Magnetic-field-biased indium antimonide (InSb) is one of the most widely-discussed materials for supporting nonreciprocal surface plasmon polaritons (SPPs), which have recently been shown to be topological. In this work, we provide a critical assessment of InSb as a magneto-optical SPP platform, and show that it is only viable under a narrow set of conditions.
physics
We have assessed the accuracy for magnetic properties of a set of 51 density functional approximations, including both recently published as well as already established functionals. The accuracy assessment considers a series of 27 small molecules and is based on comparing the predicted magnetizabilities to literature reference values calculated using coupled cluster theory with full singles and doubles and perturbative triples [CCSD(T)] employing large basis sets. The most accurate magnetizabilities, defined as the smallest mean absolute error, were obtained with the BHandHLYP functional. Three of the six studied Berkeley functionals and the three range-separated Florida functionals also yield accurate magnetizabilities. Also some older functionals like CAM-B3LYP, KT1, BHLYP (BHandH), B3LYP and PBE0 perform rather well. In contrast, unsatisfactory performance was generally obtained with Minnesota functionals, which are therefore not recommended for calculations of magnetically induced current density susceptibilities, and related magnetic properties such as magnetizabilities and nuclear magnetic shieldings. We also demonstrate that magnetizabilities can be calculated by numerical integration of the magnetizability density; we have implemented this approach as a new feature in the gauge-including magnetically induced current method (GIMIC). Magnetizabilities can be calculated from magnetically induced current density susceptibilities within this approach even when analytical approaches for magnetizabilities as the second derivative of the energy have not been implemented. The magnetizability density can also be visualized, providing additional information that is not otherwise easily accessible on the spatial origin of the magnetizabilities.
physics
We propose $\beta$-graph embedding for robustly learning feature vectors from data vectors and noisy link weights. A newly introduced empirical moment $\beta$-score reduces the influence of contamination and robustly measures the difference between the underlying correct expected weights of links and the specified generative model. The proposed method is computationally tractable; we employ a minibatch-based efficient stochastic algorithm and prove that this algorithm locally minimizes the empirical moment $\beta$-score. We conduct numerical experiments on synthetic and real-world datasets.
statistics
With the rise of deep learning, there has been increased interest in using neural networks for histopathology image analysis, a field that investigates the properties of biopsy or resected specimens traditionally manually examined under a microscope by pathologists. However, challenges such as limited data, costly annotation, and processing high-resolution and variable-size images make it difficult to quickly iterate over model designs. Throughout scientific history, many significant research directions have leveraged small-scale experimental setups as petri dishes to efficiently evaluate exploratory ideas. In this paper, we introduce a minimalist histopathology image analysis dataset (MHIST), an analogous petri dish for histopathology image analysis. MHIST is a binary classification dataset of 3,152 fixed-size images of colorectal polyps, each with a gold-standard label determined by the majority vote of seven board-certified gastrointestinal pathologists and annotator agreement level. MHIST occupies less than 400 MB of disk space, and a ResNet-18 baseline can be trained to convergence on MHIST in just 6 minutes using 3.5 GB of memory on an NVIDIA RTX 3090. As example use cases, we use MHIST to study natural questions such as how dataset size, network depth, transfer learning, and high-disagreement examples affect model performance. By introducing MHIST, we hope to not only help facilitate the work of current histopathology imaging researchers, but also make the field more accessible to the general community. Our dataset is available at https://bmirds.github.io/MHIST.
electrical engineering and systems science
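The quoted ResNet-18 baseline can be reproduced with a few lines of standard PyTorch. The directory layout ("mhist/train/<class>/..."), image size, optimizer and other hyperparameters below are placeholder assumptions, not the official MHIST training recipe.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Minimal ResNet-18 baseline for binary classification of fixed-size polyp images.
device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("mhist/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18()                        # randomly initialised backbone
model.fc = nn.Linear(model.fc.in_features, 2)    # two polyp classes
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()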
Named Entity Recognition and Relation Extraction for Chinese literature text is regarded as a highly difficult problem, partially because of the lack of tagging sets. In this paper, we build a discourse-level dataset from hundreds of Chinese literature articles for improving this task. To build a high quality dataset, we propose two tagging methods to solve the problem of data inconsistency, including a heuristic tagging method and a machine auxiliary tagging method. Based on this corpus, we also introduce several widely used models to conduct experiments. Experimental results not only show the usefulness of the proposed dataset, but also provide baselines for further research. The dataset is available at https://github.com/lancopku/Chinese-Literature-NER-RE-Dataset
computer science
Bayesian optimization (BO) is a widely-used method for optimizing expensive (to evaluate) problems. At the core of most BO methods is the modeling of the objective function using a Gaussian Process (GP) whose covariance is selected from a set of standard covariance functions. From a weight-space view, this models the objective as a linear function in a feature space implied by the given covariance K, with an arbitrary Gaussian weight prior ${\bf w} \sim \mathcal{N} ({\bf 0}, {\bf I})$. In many practical applications there is data available that has a similar (covariance) structure to the objective, but which, having different form, cannot be used directly in standard transfer learning. In this paper we show how such auxiliary data may be used to construct a GP covariance corresponding to a more appropriate weight prior for the objective function. Building on this, we show that we may accelerate BO by modeling the objective function using this (learned) weight prior, which we demonstrate on both test functions and a practical application to short-polymer fibre manufacture.
statistics
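The weight-space view described above can be sketched as Bayesian linear regression in a fixed feature map, with the weight prior covariance estimated from auxiliary data instead of being fixed to the identity. The feature map, the stand-in "auxiliary weight fits" and the toy objective below are invented for illustration; the paper constructs the prior from auxiliary data in its own way.

import numpy as np

def features(x, centers, lengthscale=0.3):
    # A fixed RBF feature map phi(x); then K(x, x') = phi(x)^T Sigma phi(x').
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / lengthscale) ** 2)

rng = np.random.default_rng(0)
centers = np.linspace(0.0, 1.0, 20)

# Auxiliary data assumed to share covariance structure with the objective:
# use the empirical covariance of weights fitted to many cheap auxiliary
# functions as the weight prior Sigma (here the fits are just stand-ins).
aux_w = rng.normal(size=(50, 20)) * np.linspace(2.0, 0.1, 20)
Sigma = np.cov(aux_w, rowvar=False) + 1e-6 * np.eye(20)

# Bayesian linear regression for the expensive objective with prior N(0, Sigma).
X = rng.uniform(0.0, 1.0, size=8)                  # points evaluated so far
y = np.sin(6.0 * X) + 0.05 * rng.normal(size=8)    # toy objective values
Phi = features(X, centers)
noise = 0.05 ** 2
A = Phi.T @ Phi / noise + np.linalg.inv(Sigma)
mean_w = np.linalg.solve(A, Phi.T @ y / noise)
cov_w = np.linalg.inv(A)

# Posterior mean and variance on a grid, ready for any acquisition function.
Xs = np.linspace(0.0, 1.0, 200)
Ps = features(Xs, centers)
mu = Ps @ mean_w
var = np.einsum("ij,jk,ik->i", Ps, cov_w, Ps)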
Broadbent and Islam (TCC '20) proposed a quantum cryptographic primitive called quantum encryption with certified deletion. In this primitive, a receiver in possession of a quantum ciphertext can generate a classical certificate that the encrypted message is deleted. Although their construction is information-theoretically secure, it is limited to the setting of one-time symmetric key encryption (SKE), where a sender and receiver have to share a common key in advance and the key can be used only once. Moreover, the sender has to generate a quantum state and send it to the receiver over a quantum channel in their construction. In the definition by Broadbent and Islam, deletion certificates are privately verifiable, which means that a verification key for a certificate has to be kept secret, but we can also consider public verifiability. In this work, we present various constructions of encryption with certified deletion. - Quantum communication case: We achieve (reusable-key) public key encryption (PKE) and attribute-based encryption (ABE) with certified deletion. Our PKE scheme with certified deletion is constructed assuming the existence of IND-CPA secure PKE, and our ABE scheme with certified deletion is constructed assuming the existence of indistinguishability obfuscation and one-way functions. These two schemes are privately verifiable. - Classical communication case: We also achieve PKE with certified deletion that uses only classical communication. We give two schemes, a privately verifiable one and a publicly verifiable one. The former is constructed under the LWE assumption in the quantum random oracle model. The latter is constructed assuming the existence of one-shot signatures and extractable witness encryption.
quantum physics
Wearable technology for the automatic detection of gait events has recently gained growing interest, enabling advanced analyses that were previously limited to specialist centres and equipment (e.g., instrumented walkway). In this study, we present a novel method based on dilated convolutions for an accurate detection of gait events (initial and final foot contacts) from wearable inertial sensors. A rich dataset has been used to validate the method, featuring 71 people with Parkinson's disease (PD) and 67 healthy control subjects. Multiple sensors have been considered, one located on the fifth lumbar vertebra and two on the ankles. The aims of this study were: (i) to apply deep learning (DL) techniques on wearable sensor data for gait segmentation and quantification in older adults and in people with PD; (ii) to validate the proposed technique for measuring gait against traditional gold standard laboratory reference and a widely used algorithm based on wavelet transforms (WT); (iii) to assess the performance of DL methods in assessing high-level gait characteristics, with a focus on stride-, stance- and swing-related features. The results showed a high reliability of the proposed approach, which achieves temporal errors considerably smaller than WT, in particular for the detection of final contacts, with an inter-quartile range below 70 ms in the worst case. This study shows encouraging results, and paves the way for further research, addressing the effectiveness and the generalization of data-driven learning systems for accurate event detection in challenging conditions.
electrical engineering and systems science
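The dilated-convolution detector can be sketched as a small temporal convolutional network that maps a multi-channel inertial signal to per-sample scores for initial and final foot contacts. Channel counts, depth and dilation rates below are illustrative assumptions, not the architecture validated in the paper.

import torch
import torch.nn as nn

class DilatedGaitEventNet(nn.Module):
    # Toy temporal CNN: stacked 1D convolutions with exponentially growing
    # dilation give a large receptive field at full temporal resolution.
    def __init__(self, in_channels=6, hidden=32, n_events=2, levels=5):
        super().__init__()
        layers = []
        ch = in_channels
        for i in range(levels):
            d = 2 ** i                           # dilation: 1, 2, 4, 8, 16
            layers += [nn.Conv1d(ch, hidden, kernel_size=3,
                                 padding=d, dilation=d),
                       nn.ReLU()]
            ch = hidden
        self.backbone = nn.Sequential(*layers)
        self.head = nn.Conv1d(hidden, n_events, kernel_size=1)

    def forward(self, x):                        # x: (batch, channels, time)
        return self.head(self.backbone(x))       # per-sample event logits

# 6-channel input, e.g. 3-axis accelerometer plus 3-axis gyroscope.
model = DilatedGaitEventNet()
signal = torch.randn(4, 6, 1000)                 # 4 windows of 1000 samples
logits = model(signal)                           # (4, 2, 1000): IC and FC scores
probs = torch.sigmoid(logits)                    # peaks mark candidate gait events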
We show that the Masur-Veech volumes and area Siegel-Veech constants can be obtained by intersection numbers on the strata of Abelian differentials with prescribed orders of zeros. As applications, we evaluate their large genus limits and compute the saddle connection Siegel-Veech constants for all strata. We also show that the same results hold for the spin and hyper-elliptic components of the strata.
mathematics
Pyragas control makes it possible to stabilize unstable states in applied nonlinear science. We propose to apply a quantum version of the Pyragas protocol to control individual photon probabilities in an otherwise only globally accessible photon-probability distribution of a quantum light emitter. The versatility of quantum Pyragas control is demonstrated for the case of a two-level emitter in a pulsed laser-driven half cavity. We show that one- and two-photon events respond in a qualitatively different way to the half-cavity induced feedback signal. One-photon events are either enhanced or suppressed, depending on the choice of parameters. In contrast, two-photon events undergo exclusively an enhancement up to $50\%$ for the chosen pulse areas. We hereby propose an implementation of quantum Pyragas control via a time-delayed feedback setup.
quantum physics
The energy landscape of helium-nitrogen mixtures is explored by ab initio evolutionary searches, which predicted several stable helium-nitrogen compounds in the pressure range from 25 to 100 GPa. In particular, the monoclinic structure of HeN$_{22}$ consists of neutral He atoms, partially ionic dimers N$_{2}$$^{\delta-}$, and lantern-like cages N$_{20}$$^{\delta+}$. The presence of helium not only greatly enhances structural diversity of nitrogen solids, but also tremendously lowers the formation pressure of nitrogen salt. The unique nitrogen framework of (HeN$_{20}$)$^{\delta+}$N$_{2}$$^{\delta-}$ may be quenchable to ambient pressure even after removing helium. The estimated energy density of N$_{20}$$^{\delta+}$N$_{2}$$^{\delta-}$ (10.44 kJ/g) is $\sim$2.4 times larger than that of trinitrotoluene (TNT), indicating a very promising high-energy-density material.
condensed matter
Accurate weak lensing mass estimates of clusters are needed in order to calibrate mass proxies for the cosmological exploitation of galaxy cluster surveys. Such measurements require accurate knowledge of the redshift distribution of the weak lensing source galaxies. In this context, we investigate the accuracy of photometric redshifts (photo-$z$s) computed by the 3D-HST team for the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey fields, which provide a relevant photometric reference data set for deep weak lensing studies. Through the comparison to spectroscopic redshifts and photo-$z$s based on very deep data from the Hubble Ultra Deep Field, we identify catastrophic redshift outliers in the 3D-HST/CANDELS catalogue. These would significantly bias weak lensing results if not accounted for. We investigate the cause of these outliers and demonstrate that the interpolation of spectral energy distribution (SED) templates and a well-selected combination of photometric data can reduce the net impact for weak lensing studies.
astrophysics
In this review, we discuss the impact of interfaces and heterojunctions on the electronic and thermoelectric transport properties of materials. We review recent progress in understanding electronic transport in two-dimensional (2D) materials ranging from graphene to transition metal dichalcogenides (TMDs), their homojunctions (grain boundaries), lateral heterojunctions (such as graphene/MoS$_2$ lateral interfaces), and vertical van der Waals (vdW) heterostructures. We also review work in thermoelectric properties of 2D heterojunctions, as well as their applications in creating devices such as resonant tunneling diodes (RTDs). Lastly, we turn our focus to work in three-dimensional (3D) heterostructures. While transport in 3D heterostructures has been researched for several decades, here we review recent progress in theory and simulation of quantum effects on transport via the Wigner and non-equilibrium Green's functions (NEGF) approaches. These simulation techniques have been successfully applied toward understanding the impact of heterojunctions on the thermoelectric properties, with applications in energy harvesting, and electron resonant tunneling, with applications in RTDs. We conclude that tremendous progress has been made in both simulation and experiments toward the goal of understanding transport in heterostructures and this progress will soon be parlayed into improved energy converters and quantum nanoelectronic devices.
condensed matter
An optically generated nonequilibrium phonon distribution is used to explore the origin of a nonlocal adiabatic response in an interacting Anderson insulator. Exposing the system to weak infrared radiation is shown to effectively suppress a long-range effect observed in field-effect experiments while producing little heating and barely changing the system conductance. These effects are shown to be consistent with the quantum nature of the phenomenon and are therefore peculiar to disordered systems that are quantum-coherent.
condensed matter
In this paper, we present a novel decoding algorithm for polar codes, named SC-Fano decoding, which appropriately incorporates Fano sequential decoding into standard successive-cancellation (SC) decoding. The proposed SC-Fano decoding follows the basic procedures of SC decoding with an additional operation to evaluate the reliability (or belief) of the current partial path. Specifically, at every decoding stage, it decides whether to move forward along the current path or move backward to find a more likely path. In this way, SC-Fano decoding can address the inherent drawback of SC decoding that one wrong decision will surely lead to a wrong codeword. Compared with other improvements of SC decoding such as SC-List (SCL) and SC-Stack (SCS) decoding, SC-Fano decoding has a much lower memory requirement and thus is more suitable for hardware implementations. Also, SC-Fano decoding can be viewed as an efficient implementation of SC-Flip (SCF) decoding without the cost of a cyclic redundancy check (CRC). Simulation results show that the proposed SC-Fano decoding significantly enhances the performance of SC decoding with a similar complexity, and achieves the performance of SCL decoding with a lower complexity.
electrical engineering and systems science
We present a planar spectro-polarimeter based on Fabry-P{\'e}rot cavities with embedded polarization-sensitive high-index nanostructures. A $7~\mu$m-thick spectro-polarimetric system for 3 spectral bands and 2 linear polarization states is experimentally demonstrated. Furthermore, an optimal design is theoretically proposed, estimating that a system with a bandwidth of 127~nm and a spectral resolution of 1~nm is able to reconstruct the first three Stokes parameters with a signal-to-noise ratio of -13.14~dB with respect to the shot-noise-limited SNR. The pixelated spectro-polarimetric system can be directly integrated on a sensor, thus enabling applicability in a variety of miniaturized optical devices, including but not limited to satellites for Earth observation.
physics
We give a complete description of the possible Hausdorff dimensions of escaping sets for meromorphic functions with a finite number of singular values. More precisely, for any given $d\in [0,2]$ we show that there exists such a meromorphic function for which the Hausdorff dimension of the escaping set is equal to $d$. The main ingredient is to glue together suitable meromorphic functions by using quasiconformal mappings. Moreover, we show that there are uncountably many quasiconformally equivalent meromorphic functions for which the escaping sets have different Hausdorff dimensions.
mathematics
In the object detection task, CNN (convolutional neural network) models always need a large number of annotated examples in the training process. To reduce the dependency on expensive annotations, few-shot object detection has become an increasing research focus. In this paper, we present an effective object detection framework (MM-FSOD) that integrates metric learning and meta-learning to tackle the few-shot object detection task. Our model is a class-agnostic detection model that can accurately recognize new categories, which do not appear in the training samples. Specifically, to quickly learn the features of new categories without a fine-tuning process, we propose a meta-representation module (MR module) to learn intra-class mean prototypes. The MR module is trained with a meta-learning method to obtain the ability to reconstruct high-level features. To further measure the similarity between support prototypes and query RoI features, we propose a Pearson metric module (PR module), which serves as a classifier. Compared to the commonly used cosine distance metric, the PR module enables the model to align features into a discriminative embedding space. We conduct extensive experiments on the benchmark datasets FSOD, MS COCO, and PASCAL VOC to demonstrate the feasibility and efficiency of our model. Compared with previous methods, MM-FSOD achieves state-of-the-art (SOTA) results.
computer science
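The Pearson-metric idea of matching query RoI features against intra-class mean prototypes can be shown in a few lines. The feature dimension, number of shots and classes below are illustrative; the paper's PR module embeds this comparison inside a full detection pipeline.

import numpy as np

def pearson_similarity(query, prototypes):
    # Pearson correlation between each query feature vector and each class
    # prototype; unlike plain cosine similarity, the features are mean-centred
    # per vector before the normalised dot product.
    q = query - query.mean(axis=1, keepdims=True)
    p = prototypes - prototypes.mean(axis=1, keepdims=True)
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    return q @ p.T                                    # (n_query, n_classes)

rng = np.random.default_rng(0)
# Few-shot support set: 3 novel classes, 5 shots each, 256-d RoI features.
support = rng.normal(size=(3, 5, 256))
prototypes = support.mean(axis=1)                     # intra-class mean prototypes
query = rng.normal(size=(10, 256))                    # RoI features from a query image
scores = pearson_similarity(query, prototypes)
pred = scores.argmax(axis=1)                          # predicted class per RoI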
We describe the boundary of linear subvarieties in the moduli space of multi-scale differentials. Linear subvarieties are algebraic subvarieties of strata of (possibly) meromorphic differentials that in local period coordinates are given by linear equations. The main example of such are affine invariant submanifolds, that is, closures of $\operatorname{SL}(2,\mathbb{R})$-orbits. We prove that the boundary of any linear subvariety is again given by linear equations in generalized period coordinates of the boundary. Our main technical tool is an asymptotic analysis of periods near the boundary of the moduli space of multi-scale differentials which yields further techniques and results of independent interest.
mathematics
Extremal problems concerning the number of complete subgraphs have a long history in extremal graph theory. Let $k_s(G)$ be the number of $s$-cliques in a graph $G$ and write $m={{r_m}\choose 2}+t_m$, where $0\le t_m\leq r_m$. Erd\H{o}s showed that $k_s(G)\le {{r_m}\choose s}+{{t_m}\choose{s-1}}$ over all graphs of size $m$ and order $n\geq r_m+1$. It is natural to consider an improvement in the connected situation: what is the maximum number of $s$-cliques over all connected graphs of size $m$ and order $n$? In this paper, the sharp upper bound of $k_s(G)$ is obtained and the extremal graphs are completely characterized. The technique and the bound are different from those in the general case. As an application, this result can be used to solve a question on spectral moments.
mathematics
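The bound quoted above is easy to evaluate and to check on the natural extremal graph: $K_{r_m}$ together with one extra vertex joined to $t_m$ of its vertices has exactly $m$ edges and attains ${{r_m}\choose s}+{{t_m}\choose{s-1}}$ $s$-cliques. The short script below performs this check for one arbitrary choice of $m$ and $s$.

from itertools import combinations
from math import comb

def decompose(m):
    # Write m = C(r, 2) + t with 0 <= t < r.
    r = 2
    while comb(r + 1, 2) <= m:
        r += 1
    return r, m - comb(r, 2)

def count_cliques(edges, vertices, s):
    es = set(edges)
    return sum(1 for c in combinations(vertices, s)
               if all((u, v) in es or (v, u) in es for u, v in combinations(c, 2)))

m, s = 13, 3
r, t = decompose(m)                           # 13 = C(5,2) + 3
bound = comb(r, s) + comb(t, s - 1)           # Erdos bound on the number of s-cliques

# Extremal graph: K_r on {0,...,r-1} plus a vertex r joined to t of its vertices.
edges = list(combinations(range(r), 2)) + [(i, r) for i in range(t)]
assert len(edges) == m
print(bound, count_cliques(edges, range(r + 1), s))   # both equal 13 here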
This paper introduces the sparse functional boxplot and the intensity sparse functional boxplot as practical exploratory tools that make visualization possible for both complete and sparse functional data. These visualization tools can be used either in the univariate or multivariate functional setting. The sparse functional boxplot, which is based on the functional boxplot, depicts sparseness characteristics in the envelope of the 50\% central region, the median curve, and the outliers. The proportion of missingness at each time index within the central region is colored in gray. The intensity sparse functional boxplot displays the relative intensity of sparse points in the central region, revealing where data are more or less sparse. The two-stage functional boxplot, a derivation from the functional boxplot to better detect outliers, is also extended to its sparse form. Several depth proposals for sparse multivariate functional data are evaluated and outlier detection is tested in simulations under various data settings and sparseness scenarios. The practical applications of the sparse functional boxplot and intensity sparse functional boxplot are illustrated with two public health datasets.
statistics
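The 50% central region that the sparse functional boxplot shades can be sketched for complete curves using the modified band depth; the sparseness-specific features (gray shading of the missing proportion, intensity of sparse points) are the paper's contribution and are not reproduced here. The simulated curves below are invented for illustration.

import numpy as np

def modified_band_depth(curves):
    # Modified band depth (J = 2): for each curve, the average over all pairs
    # of sample curves of the fraction of time points at which the curve lies
    # inside the band spanned by the pair.
    n, T = curves.shape
    depth = np.zeros(n)
    for j in range(n):
        for k in range(j + 1, n):
            lo = np.minimum(curves[j], curves[k])
            hi = np.maximum(curves[j], curves[k])
            depth += np.mean((curves >= lo) & (curves <= hi), axis=1)
    return depth / (n * (n - 1) / 2)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
curves = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, size=(40, 1)) \
         + rng.normal(0, 0.1, size=(40, 100))

depth = modified_band_depth(curves)
order = np.argsort(depth)[::-1]
central = curves[order[:20]]                     # the 50% deepest curves
envelope_lo, envelope_hi = central.min(axis=0), central.max(axis=0)
median_curve = curves[order[0]]                  # deepest curve = functional median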
When a latent shoeprint is discovered at a crime scene, forensic analysts inspect it for distinctive patterns of wear such as scratches and holes (known as accidentals) on the source shoe's sole. If its accidentals correspond to those of a suspect's shoe, the print can be used as forensic evidence to place the suspect at the crime scene. The strength of this evidence depends on the random match probability---the chance that a shoe chosen at random would match the crime scene print's accidentals. Evaluating random match probabilities requires an accurate model for the spatial distribution of accidentals on shoe soles. A recent report by the President's Council of Advisors in Science and Technology criticized existing models in the literature, calling for new empirically validated techniques. We respond to this request with a new spatial point process model for accidental locations, developed within a hierarchical Bayesian framework. We treat the tread pattern of each shoe as a covariate, allowing us to pool information across large heterogeneous databases of shoes. Existing models ignore this information; our results show that including it leads to significantly better model fit. We demonstrate this by fitting our model to one such database.
statistics
Deep learning has gained substantial popularity in recent years. Developers mainly rely on libraries and tools to add deep learning capabilities to their software. What kinds of bugs are frequently found in such software? What are the root causes of such bugs? What impacts do such bugs have? Which stages of the deep learning pipeline are more bug prone? Are there any antipatterns? Understanding such characteristics of bugs in deep learning software has the potential to foster the development of better deep learning platforms, debugging mechanisms, development practices, and encourage the development of analysis and verification frameworks. Therefore, we study 2716 high-quality posts from Stack Overflow and 500 bug fix commits from Github about five popular deep learning libraries, Caffe, Keras, Tensorflow, Theano, and Torch, to understand the types of bugs, root causes of bugs, impacts of bugs, bug-prone stages of the deep learning pipeline, as well as whether there are some common antipatterns found in this buggy software. The key findings of our study include: data bugs and logic bugs are the most severe bug types in deep learning software, appearing more than 48% of the time; major root causes of these bugs are Incorrect Model Parameter (IPS) and Structural Inefficiency (SI), showing up more than 43% of the time. We have also found that the bugs in the usage of deep learning libraries have some common antipatterns that lead to a strong correlation of bug types among the libraries.
computer science