Global buildings consumed 30% of total energy and generated 28% of total carbon emissions in 2018, which raises economic and environmental concerns. Therefore, it is of great significance to reduce the energy consumption, energy cost, and carbon emissions of buildings while maintaining user comfort. To this end, several challenges have to be addressed. Firstly, it is very challenging to develop a building thermal dynamics model that is both accurate and efficient enough for building control. Secondly, there are many kinds of uncertainties. Thirdly, there are many spatially and temporally operational constraints. Fourthly, building energy optimization problems may have extremely large solution spaces, which cannot be solved in real time by traditional methods. Fifthly, traditional building energy management methods have their respective applicable premises, which means that they have low versatility when confronted with varying building environments. As a general artificial intelligence technology, deep reinforcement learning (DRL) has the potential to address the above challenges. Thus, this paper presents a comprehensive literature review on DRL for smart building energy management (SBEM). To be specific, we first introduce the fundamentals of DRL and provide a classification of the DRL methods used in existing works related to SBEM. Then, we review the applications of DRL in a single building energy subsystem, in multiple energy subsystems of buildings, and in building microgrids, respectively. Furthermore, we identify the unsolved issues and point out possible research directions for applying DRL. Finally, we summarize the lessons learned from this survey.
electrical engineering and systems science
Chemical bimodality of the Milky Way (MW) disc stars constitutes one of the most remarkable properties of the MW. The cold accretion theory for cosmological gas accretion provides one viable explanation of this phenomenon. In this scenario, the rapid cold-mode accretion in the early epoch creates first generation stars relatively rich in $\alpha$-elements (O, Mg, Si, S, Ca, etc.), and a later cooling flow produces iron-rich second generation stars, creating the bimodality in the [$\alpha$/Fe] ratio. We employ a cosmologically motivated chemical evolution model for disc galaxies to elucidate the role played by type Ia supernovae (SNIa), which serve as the major source of iron, in the creation of the bimodality. To this end, we divide SNIa into two groups: those formed from the 1st generation stars (the first SNIa) and those formed from the 2nd generation stars (the second SNIa). The model with the first SNIa suppressed during the {\it second} star formation stage produces stars having high [$\alpha$/Fe] in the early phase of this stage, whereas the model which prohibits the second SNIa produces high [$\alpha$/Fe] stars in the late phase. Both models fail to create a well-defined bimodality. We thus conclude that the cooperation of the first and the second SNIa plays a crucial role in creating the bimodality by maintaining a rich iron content in the interstellar gas throughout the second star formation stage.
astrophysics
Active learning is a framework in which the learning machine can select the samples to be used for training. This technique is promising, particularly when the cost of data acquisition and labeling is high. In active learning, determining the timing at which learning should be stopped is a critical issue. In this study, we propose a criterion for automatically stopping active learning. The proposed stopping criterion is based on the difference in the expected generalization errors and hypothesis testing. We derive a novel upper bound for the difference in expected generalization errors before and after obtaining a new training datum based on PAC-Bayesian theory. Unlike ordinary PAC-Bayesian bounds, though, the proposed bound is deterministic; hence, there is no uncontrollable trade-off between the confidence and tightness of the inequality. We combine the upper bound with a statistical test to derive a stopping criterion for active learning. We demonstrate the effectiveness of the proposed method via experiments with both artificial and real datasets.
statistics
The Five-hundred-meter Aperture Spherical radio Telescope (FAST) has passed national acceptance and is taking a pilot cycle of 'Shared-Risk' observations. The 19-beam receiver covering 1.05-1.45 GHz was used for most of these observations. The electronics gain fluctuation of the system is better than 1\% over 3.5 hours, providing enough stability for observations. Pointing accuracy, aperture efficiency, and system temperature are three key parameters of FAST. The measured standard deviation of the pointing accuracy is 7.9$''$, which satisfies the initial design of FAST. When the zenith angle is less than 26.4$^\circ$, the aperture efficiency and system temperature around 1.4 GHz are $\sim$ 0.63 and less than 24 K for the central beam, respectively. The measured values of these two parameters are better than the designed values of 0.6 and 25 K, respectively. The sensitivity and stability of the 19-beam backend are confirmed to meet expectations by spectral HI observations toward N672 and polarization observations toward 3C286. This performance allows FAST to take sensitive observations for various scientific goals, from studies of pulsars to galaxy evolution.
astrophysics
We systematically study mass spectra and decay properties of $P$-wave $\Xi_c^\prime$ baryons of the $SU(3)$ flavor $\mathbf{6}_F$, using the methods of QCD sum rules and light-cone sum rules within the framework of heavy quark effective theory. Our results suggest that the three excited $\Xi_c^0$ baryons recently observed by LHCb can be well explained as $P$-wave $\Xi_c^\prime$ baryons: the $\Xi_c(2923)^0$ and $\Xi_c(2939)^0$ are partner states of $J^P = 1/2^-$ and $3/2^-$ respectively, both of which contain one $\lambda$-mode orbital excitation; the $\Xi_c(2965)^0$ has $J^P = 3/2^-$, and also contains one $\lambda$-mode orbital excitation. We propose to search for another $P$-wave $\Xi_c^\prime$ state of $J^P = 5/2^-$ in the $\Lambda_c K/\Xi_c \pi$ mass spectra in future experiments. Its mass is about $56^{+30}_{-35}$ MeV larger than that of the $\Xi_c(2965)^0$, and its width is about $18.1^{+19.7}_{-~8.3}$ MeV.
high energy physics phenomenology
Spectrum cartography aims at estimating power propagation patterns over a geographical region across multiple frequency bands (i.e., a radio map) from limited samples taken sparsely over the region. Classic cartography methods are mostly concerned with recovering the aggregate radio frequency (RF) information while ignoring the constituents of the radio map, but fine-grained emitter-level RF information is of great interest. In addition, many existing cartography methods explicitly or implicitly assume random spatial sampling schemes that may be difficult to implement, due to legal/privacy/security issues. The theoretical aspects (e.g., identifiability of the radio map) of many existing methods are also unclear. In this work, we propose a joint radio map recovery and disaggregation method that is based on coupled block-term tensor decomposition. Our method guarantees identifiability of the individual radio map of \textit{each emitter} (and thereby that of the aggregate radio map as well), under realistic conditions. The identifiability result holds under a large variety of geographical sampling patterns, including a number of pragmatic systematic sampling strategies. We also propose effective optimization algorithms to carry out the formulated radio map disaggregation problems. Extensive simulations are employed to showcase the effectiveness of the proposed approach.
electrical engineering and systems science
In recent years, the Internet of Things (IoT) has grown to include the tracking of devices through the use of Indoor Positioning Systems (IPS) and Location Based Services (LBS). When designing an IPS, a popular approach involves using wireless networks to calculate the approximate location of the target from devices with predetermined positions. In many smart building applications, LBS are necessary for efficient workspaces to be developed. In this paper, we examine two memoryless positioning techniques, K-Nearest Neighbor (KNN) and Naive Bayes, and compare them with simple trilateration, in terms of accuracy, precision, and complexity. We present a comprehensive analysis of the techniques through the use of three popular IoT wireless technologies: Zigbee, Bluetooth Low Energy (BLE), and WiFi (2.4 GHz band), along with three experimental scenarios to verify results across multiple environments. According to the experimental results, KNN is the most accurate localization technique as well as the most precise. The RSSI dataset of all the experiments is available online.
electrical engineering and systems science
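The KNN fingerprinting technique compared above can be sketched in a few lines: given a database of RSSI vectors measured at known reference positions, the unknown position is estimated as the average position of the k fingerprints closest in signal space. The reference database and query below are made-up illustrative numbers, not values from the paper's dataset.

```python
import numpy as np

def knn_localize(fingerprints, positions, rssi_query, k=3):
    """Memoryless KNN fingerprinting: average the known positions of the k
    reference points whose stored RSSI vectors are closest to the query."""
    d = np.linalg.norm(fingerprints - rssi_query, axis=1)  # signal-space distance
    nearest = np.argsort(d)[:k]
    return positions[nearest].mean(axis=0)

# Hypothetical reference database: RSSI (dBm) from 3 anchors at 4 known positions
fp = np.array([[-40.0, -70.0, -80.0],
               [-70.0, -40.0, -80.0],
               [-80.0, -70.0, -40.0],
               [-55.0, -55.0, -75.0]])
pos = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 10.0], [5.0, 0.0]])

# Query RSSI vector measured at the unknown location
est = knn_localize(fp, pos, np.array([-45.0, -68.0, -78.0]), k=2)
```

With k=2 the query is closest to the first and fourth fingerprints, so the estimate is the midpoint of their positions. Naive Bayes and trilateration would replace the distance-and-average step with a likelihood model or path-loss geometry, respectively.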
Deep neural networks (DNNs) for supervised learning can be viewed as a pipeline of a feature extractor (i.e. last hidden layer) and a linear classifier (i.e. output layer) that is trained jointly with stochastic gradient descent (SGD). In each iteration of SGD, a mini-batch from the training data is sampled and the true gradient of the loss function is estimated as the noisy gradient calculated on this mini-batch. From the feature learning perspective, the feature extractor should be updated to learn meaningful features with respect to the entire data, and reduce the accommodation to noise in the mini-batch. With this motivation, we propose In-Training Distribution Matching (ITDM) to improve DNN training and reduce overfitting. Specifically, along with the loss function, ITDM regularizes the feature extractor by matching the moments of distributions of different mini-batches in each iteration of SGD, which is fulfilled by minimizing the maximum mean discrepancy. As such, ITDM does not assume any explicit parametric form of data distribution in the latent feature space. Extensive experiments are conducted to demonstrate the effectiveness of our proposed strategy.
computer science
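The moment-matching regularizer described above can be illustrated with a minimal maximum mean discrepancy (MMD) computation between two mini-batches of features. The Gaussian kernel and its bandwidth here are assumptions for the sketch, not necessarily the paper's exact choices.

```python
import numpy as np

def gaussian_mmd2(x, y, sigma=1.0):
    """Biased (V-statistic) estimate of the squared MMD between two
    mini-batches of features, using a Gaussian RBF kernel with bandwidth sigma."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
# Two mini-batches of 8-dimensional "features" drawn from the same distribution...
same = gaussian_mmd2(rng.normal(size=(64, 8)), rng.normal(size=(64, 8)))
# ...and two mini-batches whose distributions differ by a mean shift
shifted = gaussian_mmd2(rng.normal(size=(64, 8)),
                        rng.normal(loc=3.0, size=(64, 8)))
```

In training, a term like `gaussian_mmd2` on the feature extractor's outputs for different mini-batches would be added to the loss, penalizing mini-batch-specific feature distributions without assuming any parametric form for them.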
While frame-independent predictions with deep neural networks have become the prominent solutions to many computer vision tasks, the potential benefits of utilizing correlations between frames have received less attention. Even though probabilistic machine learning provides the ability to encode correlation as prior knowledge for inference, there is a tangible gap between the theory and practice of applying probabilistic methods to modern vision problems. To this end, we derive a principled framework to combine information coupling between camera poses (translation and orientation) with deep models. We propose a novel view kernel that generalizes the standard periodic kernel in $\mathrm{SO}(3)$. We show how this soft prior knowledge can aid several pose-related vision tasks, such as novel view synthesis and predicting arbitrary points in the latent space of generative models, pointing towards a range of new applications for inter-frame reasoning.
statistics
We discuss the status of resummation of large logarithmic contributions to groomed event shapes of hadronic final states in electron-positron annihilation. We identify the missing ingredients needed for next-to-next-to-next-to-leading logarithmic (NNNLL) resummation of the mMDT groomed jet mass in $e^+e^-$ collisions: the low-scale collinear-soft constants at two-loop accuracy, $c_{S_c}^{(2)}$, and the three-loop non-cusp anomalous dimension of the global soft function, $\gamma_S^{(2)}$. We present a method for extracting those constants using fixed-order codes: the EVENT2 program to obtain the color coefficients of $c_{S_c}^{(2)}$, and MCCSM for extracting $\gamma_S^{(2)}$. We present all necessary formulae for resummation of the mMDT groomed heavy jet mass distribution at NNNLL accuracy.
high energy physics phenomenology
We explore an asymptotic behavior of entropies for sums of independent random variables that are convolved with a small continuous noise.
mathematics
In this paper, we present an analytical modeling technique for circularly symmetric piezoelectric transducers, also known as Fresnel lenses. We also present the design of a flat/piston transducer that can generate unique acoustic wave patterns having both converging and vortexing effects. The converging effect is generated by designing the transducer electrodes in the shape of circular rings using the Fresnel formula and exciting them with an RF signal at the resonant frequency. The vortexing effect is achieved by cutting the rings to different sector angles: 90, 120, 180, and 270 degrees. We use the analytical model to simulate the performance of these transducers.
physics
We study a two-band dispersive SYK model in $1+1$ dimension. We suggest a model that describes a semimetal with quadratic dispersion at half-filling. We compute the Green's function at the saddle point using a combination of analytical and numerical methods. Employing a scaling symmetry of the Schwinger Dyson equations that becomes transparent in the strongly dispersive limit, we show that the exact solution of the problem yields a distinct type of non-Fermi liquid with sublinear $\rho\propto T^{2/5}$ temperature dependence of the resistivity. A scaling analysis indicates that this state corresponds to the fixed point of the dispersive SYK model for a quadratic band touching semimetal. We examine the formation of indirect exciton condensation in a bilayer system constructed from the above model. We find that the condensation temperature scales as a fast power-law $T_{c}\propto g^{5}$, with $g$ the strength of the repulsive coupling between the layers.
condensed matter
We theoretically investigate pattern formation and nonlinear dynamics in an array of equally coupled, optically driven, Kerr nonlinear microresonators. We show that the nonlinear dynamics of the system can be associated with an effective two-dimensional space due to the multimode structure of each resonator. As a result, two fundamentally different dynamical regimes (elliptic and hyperbolic) arise at different regions of the hybridized dispersion surface. We demonstrate the formation of global nonlinear optical patterns in both regimes, which correspond to coherent optical frequency combs on the individual resonator level. In addition, we show that the presence of an additional dimension leads to the observation of wave collapse.
physics
This is a proof that if the eikonal is, as usually assumed, additive in strong and electromagnetic interactions, then the application of the Bethe Ansatz for the full scattering amplitude leads to a strong interaction scattering amplitude with the (real/imaginary) ratio independent of the transferred momentum $t$. Moreover, the unitarity condition makes the strong interaction amplitude vanish. Thus, the Bethe form for the Coulomb-nuclear scattering amplitude and the same amplitude based on the additive eikonal are incompatible.
high energy physics phenomenology
The Sachdev-Ye-Kitaev (SYK) model has emerged as a new paradigm of non-Fermi-liquid behavior. Here we investigate the possibility of having a superconducting off-diagonal long-range order (ODLRO) and a pseudogap phase within the SYK framework. We find that ODLRO may be established in a spin-1/2 version of the model with time-reversal invariance and an extra attractive interaction. If the latter is taken as the on-site negative-$U$ Hubbard term, it leads to a pseudogap phase at $U<U_c$ dominated by quantum fluctuations of local phases. These fluctuations are described by a quantum version of the Kuramoto model, traditionally employed to illustrate synchronization of classical nonlinear oscillators. In the opposite limit of large $U$, the SYK+Hubbard model approaches a certain generalization of the integrable Richardson model. We present exact diagonalization studies, along with analytic solutions of the aforementioned limiting cases. We also discuss possible holographic interpretations of the model, the ODLRO, and the pseudogap.
condensed matter
Attention is a powerful component of modern neural networks across a wide variety of domains. In this paper, we seek to quantify the regularity (i.e. the amount of smoothness) of the attention operation. To accomplish this goal, we propose a new mathematical framework that uses measure theory and integral operators to model attention. We show that this framework is consistent with the usual definition, and that it captures the essential properties of attention. Then we use this framework to prove that, on compact domains, the attention operation is Lipschitz continuous and provide an estimate of its Lipschitz constant. Additionally, by focusing on a specific type of attention, we extend these Lipschitz continuity results to non-compact domains. We also discuss the effects regularity can have on NLP models, and applications to invertible and infinitely-deep networks.
statistics
This report introduces the Transfer Waveguide (TRANSGUIDE): an ultra-thin flat technology that promises light-emitting applications a practical solution to total internal reflection light trapping and diverging emission. By invoking reciprocity, light can be temporarily stored in the form of a virtual dipole and recovered again.
physics
Evaluation of the marginal likelihood plays an important role in model selection problems. The widely applicable Bayesian information criterion (WBIC) and the singular Bayesian information criterion (sBIC) give approximations to the log marginal likelihood, which can be applied to both regular and singular models. When the real log canonical thresholds are known, the performance of sBIC is considered to be better than that of WBIC, but only a few real log canonical thresholds are known. In this paper, we propose a new estimator of the real log canonical thresholds based on the variance of thermodynamic integration with an inverse temperature. In addition, we propose an application to make sBIC widely applicable. Finally, we investigate the performance of the estimator and of model selection through simulation studies and an application to real data.
statistics
Chance imbalance in baseline characteristics is common in randomized clinical trials. Regression adjustment such as the analysis of covariance (ANCOVA) is often used to account for imbalance and increase the precision of the treatment effect estimate. An objective alternative is through inverse probability weighting (IPW) of the propensity scores. Although IPW and ANCOVA are asymptotically equivalent, the former may demonstrate inferior performance in finite samples. In this article, we point out that IPW is a special case of the general class of balancing weights, and advocate the use of overlap weighting (OW) for covariate adjustment. The OW method has the unique advantage of completely removing chance imbalance when the propensity score is estimated by logistic regression. We show that the OW estimator attains the same semiparametric variance lower bound as the most efficient ANCOVA estimator and the IPW estimator for a continuous outcome, and derive closed-form variance estimators for OW when estimating additive and ratio estimands. Through extensive simulations, we demonstrate that OW consistently outperforms IPW in finite samples and improves the efficiency over ANCOVA and augmented IPW when the degree of treatment effect heterogeneity is moderate or when the outcome model is incorrectly specified. We apply the proposed OW estimator to the Best Apnea Interventions for Research (BestAIR) randomized trial to evaluate the effect of continuous positive airway pressure on patient health outcomes. All the discussed propensity score weighting methods are implemented in the R package PSweight.
statistics
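A minimal Python sketch of the overlap-weighting estimator described above (the paper's own implementation is the R package PSweight): treated units are weighted by 1 - e(x) and controls by e(x). For simplicity this assumes a randomized trial with a known constant propensity score, in which case OW reduces to a difference in means; the simulated data and effect size are hypothetical.

```python
import numpy as np

def ow_estimate(y, z, e):
    """Overlap-weighted treatment effect estimate.

    y: outcomes; z: binary treatment indicator; e: estimated propensity scores.
    Treated units get weight 1 - e(x); control units get weight e(x)."""
    w1 = (1.0 - e) * z
    w0 = e * (1.0 - z)
    return (w1 * y).sum() / w1.sum() - (w0 * y).sum() / w0.sum()

# Hypothetical simulated trial: covariate x, 1:1 randomization, true effect 2
rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
z = rng.integers(0, 2, size=n).astype(float)
y = 2.0 * z + 0.5 * x + rng.normal(scale=0.5, size=n)

e = np.full(n, 0.5)  # known propensity in a 1:1 randomized trial
est = ow_estimate(y, z, e)
```

With propensity scores estimated by logistic regression on the baseline covariates (rather than the constant used here), the weighted covariate means of the two arms are exactly equal, which is the chance-imbalance-removal property highlighted in the abstract.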
Probabilistic graphical models are a central tool in AI; however, they are generally not as expressive as deep neural models, and inference is notoriously hard and slow. In contrast, deep probabilistic models such as sum-product networks (SPNs) capture joint distributions in a tractable fashion, but still lack the expressive power of intractable models based on deep neural networks. Therefore, we introduce conditional SPNs (CSPNs), conditional density estimators for multivariate and potentially hybrid domains which allow harnessing the expressive power of neural networks while still maintaining tractability guarantees. One way to implement CSPNs is to use an existing SPN structure and condition its parameters on the input, e.g., via a deep neural network. This approach, however, might misrepresent the conditional independence structure present in data. Consequently, we also develop a structure-learning approach that derives both the structure and parameters of CSPNs from data. Our experimental evidence demonstrates that CSPNs are competitive with other probabilistic models and yield superior performance on multilabel image classification compared to mean field and mixture density networks. Furthermore, they can successfully be employed as building blocks for structured probabilistic models, such as autoregressive image models.
computer science
In this long overdue second installment, we continue to develop the conformal bootstrap program for ${\mathcal N}=4$ superconformal field theories in four dimensions via an analysis of the correlation function of four stress-tensor supermultiplets. We review analytic results for this correlator and make contact with the SCFT/chiral algebra correspondence of arXiv:1312.5344. We demonstrate that the constraints of unitarity and crossing symmetry require the central charge $c$ to be greater than or equal to $3/4$ in any interacting ${\mathcal N}=4$ SCFT. We apply numerical bootstrap methods to derive upper bounds on scaling dimensions and OPE coefficients for several low-lying, unprotected operators as a function of the central charge. We interpret our bounds in the context of ${\mathcal N}=4$ super Yang-Mills (SYM) theories, formulating a series of conjectures regarding the embedding of the conformal manifold --- parametrized by the complexified gauge coupling --- into the space of scaling dimensions and OPE coefficients. Our conjectures assign a distinguished role to points on the conformal manifold that are self-dual under a subgroup of the S-duality group. This paper contains a more detailed exposition of a number of results previously reported in arXiv:1304.1803 in addition to new results.
high energy physics theory
Previous studies have shown the filamentary structures in the cosmic web influence the alignments of nearby galaxies. We study this effect in the LOWZ sample of the Sloan Digital Sky Survey using the "Cosmic Web Reconstruction" filament catalogue. We find that LOWZ galaxies exhibit a small but statistically significant alignment in the direction parallel to the orientation of nearby filaments. This effect is detectable even in the absence of nearby galaxy clusters, which suggests it is an effect from the matter distribution in the filament. A nonparametric regression model suggests that the alignment effect with filaments extends over separations of 30-40 Mpc. We find that galaxies that are bright and early-forming align more strongly with the directions of nearby filaments than those that are faint and late-forming; however, trends with stellar mass are less statistically significant, within the narrow range of stellar mass of this sample.
astrophysics
Efficient sampling of many-dimensional and multimodal density functions is a task of great interest in many research fields. We describe an algorithm that allows parallelizing the inherently serial Markov chain Monte Carlo (MCMC) sampling by partitioning the space of the function parameters into multiple subspaces and sampling each of them independently. The samples from the different subspaces are then reweighted by their integral values and stitched back together. This approach reduces sampling wall-clock time through parallel operation. It also improves the sampling of multimodal target densities and results in less correlated samples. Finally, the approach yields an estimate of the integral of the target density function.
statistics
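The partition-sample-reweight idea above can be sketched on a one-dimensional bimodal target: sample each half-line with an independent random-walk Metropolis chain, then reweight the subspace samples by the subspace integrals (approximated here by a Riemann sum). All tuning choices below are illustrative assumptions, not the paper's algorithm in full.

```python
import numpy as np

def metropolis(logp, x0, n, step=0.5, seed=0):
    """Plain random-walk Metropolis; logp returns -inf outside the subspace,
    so proposals that leave the subspace are always rejected."""
    rng = np.random.default_rng(seed)
    x, lp, out = x0, logp(x0), []
    for _ in range(n):
        xp = x + step * rng.normal()
        lpp = logp(xp)
        if np.log(rng.uniform()) < lpp - lp:  # accept/reject
            x, lp = xp, lpp
        out.append(x)
    return np.array(out)

# Bimodal target: equal mixture of N(-2, 0.5^2) and N(+2, 0.5^2), unnormalized
def logp(x):
    return np.logaddexp(-(x + 2.0) ** 2 / 0.5, -(x - 2.0) ** 2 / 0.5)

# Partition the parameter space at x = 0; sample each subspace independently
left = metropolis(lambda x: logp(x) if x < 0 else -np.inf, -2.0, 4000, seed=1)
right = metropolis(lambda x: logp(x) if x >= 0 else -np.inf, 2.0, 4000, seed=2)

# Reweight each subspace by its integral value (Riemann sum on a uniform grid)
grid = np.linspace(-8.0, 8.0, 4001)
dens, dx = np.exp(logp(grid)), grid[1] - grid[0]
wl, wr = dens[grid < 0].sum() * dx, dens[grid >= 0].sum() * dx

# Stitch the subspace samples back together: weighted overall posterior mean
mean = (wl * left.mean() + wr * right.mean()) / (wl + wr)
```

Each chain explores only one mode, so neither needs to cross the low-density region between modes, and the two chains can run on separate workers; the integral weights restore the correct relative mass of each mode.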
Computer-based testing is an expanding use of technology offering advantages to teachers and students. We studied Calculus II classes for STEM majors using different testing modes. Three sections with 324 students employed paper-and-pencil testing, computer-based testing, and both. Computer tests gave immediate feedback, allowed multiple submissions, and used pooling. Paper-and-pencil tests required written work and explanation, allowing inspection of high-cognitive-demand tasks. Each test mode used the strengths of its method. Students were given the same lectures by the same instructor on the same days, with the same homework assignments and due dates. The design is quasi-experimental, but students were not aware of the testing mode at registration. Two basic questions were examined: (1) Do paper-and-pencil and computer-based tests measure knowledge and skill in STEM Calculus II in a consistent manner? (2) How does the knowledge and skill gained by students in a fully computer-based Calculus II class compare to that of students in a class requiring paper-and-pencil tests and hence some paper-and-pencil work? The results indicate that computer-based tests are as consistent with paper-and-pencil tests as computer-based tests are with themselves. Results are also consistent with classes using paper-and-pencil tests having slightly better outcomes than fully computer-based classes using only computer assessments.
mathematics
We propose a machine learning approach aimed at reducing Bond Graphs. The output of the machine learning is a hybrid model that contains a reduced Bond Graph coupled to a simple artificial neural network. The proposed coupling enables knowledge continuity in machine learning. In this paper, the neural network is obtained by a linear calibration procedure. We propose a method that contains two training steps. First, the method selects the components of the original Bond Graph that are kept in the reduced Bond Graph. Second, the method builds an artificial neural network that supplements the reduced Bond Graph. Because the output of the machine learning is a hybrid model, not solely data, it becomes difficult to use the usual Backpropagation Through Time to calibrate the weights of the neural network. So, in a first attempt, a very simple neural network is proposed, following a model reduction approach. We consider the modeling of the thermal behavior of automotive cabins. The data used for the training step are obtained via solutions of differential algebraic equations using a design of experiments. Simple cooling simulations are run during the training step. We show a simulation speed-up when the reduced Bond Graph is used to simulate the driving cycle of the WLTP vehicle homologation procedure, while preserving accuracy on the output variables. The variables of the original Bond Graph are split into a set of primary variables, a set of secondary variables, and a set of tertiary variables. The reduced Bond Graph contains all the primary variables but none of the tertiary variables. Secondary variables are coupled to primary ones via an artificial neural network. We discuss the extension of this coupling approach to more complex artificial neural networks.
electrical engineering and systems science
We consider some implications of the much-discussed circumstellar habitable zones around M-dwarf stars for the conventionally understood radio SETI. We argue that the flaring nature of these stars would further adversely impact the local development of radio communication and that, therefore, their circumstellar habitable zones should be preferentially studied by other methods. This is a clear example of how the diversity of astrobiological habitats introduces contingency into cultural evolution, thus undermining the universality of cultural convergence as one of the major premises of traditional SETI. This is yet another example of how specifics of the physical environment strongly shape cultural evolution taken in the broadest, most inclusive sense.
physics
The Backward Angle Neutron Detector (BAND) of CLAS12 detects neutrons emitted at backward angles of $155^\circ$ to $175^\circ$, with momenta between $200$ and $600$ MeV/c. It is positioned 3 meters upstream of the target and consists of $18$ rows and $5$ layers of $7.2$ cm by $7.2$ cm scintillator bars, read out on both ends by PMTs to measure time and energy deposition in the scintillator layers. Between the target and BAND there is a 2 cm thick lead wall followed by a 2 cm veto layer to suppress gammas and reject charged particles. This paper discusses the component-selection tests and the detector assembly. Timing calibrations (including offsets and time-walk) were performed using a novel pulsed-laser calibration system, resulting in time resolutions better than $250$ ps ($150$ ps) for energy depositions above 2 MeVee (5 MeVee). Cosmic rays and a variety of radioactive sources were used to calibrate the energy response of the detector. Scintillator bar attenuation lengths were measured. The time resolution results in a neutron momentum reconstruction resolution $\delta p/p < 1.5$\% for neutron momenta $200\le p\le 600$ MeV/c. The final performance of the BAND with CLAS12 is shown, including electron-neutral particle timing spectra and a discussion of the off-time neutral contamination as a function of energy deposition threshold.
physics
For model-free reinforcement learning, one of the main difficulties of stochastic Bellman residual minimization is the double sampling problem: while only a single sample for the next state is available in the model-free setting, two independent samples for the next state are required in order to perform unbiased stochastic gradient descent. We propose new algorithms for addressing this problem based on the idea of borrowing extra randomness from the future. When the transition kernel varies slowly with respect to the state, it is shown that the training trajectory of the new algorithms is close to that of unbiased stochastic gradient descent. Numerical results for policy evaluation in both tabular and neural network settings are provided to confirm the theoretical findings.
mathematics
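The double sampling problem described above can be illustrated with a toy one-state example: the expectation of a squared single-sample temporal-difference error over-estimates the true squared Bellman residual by gamma squared times the variance of V(s'), whereas the product of TD errors built from two independent next-state samples is unbiased. The numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9

# Toy problem: from state s (with V(s) = 0 and reward 0), the next state is A
# or B with probability 1/2 each, and V(A) = +1, V(B) = -1.  The true Bellman
# residual r + gamma * E[V(s')] - V(s) is therefore exactly 0.
v_next = rng.choice([1.0, -1.0], size=(100_000, 2))  # two independent s' draws

# Squaring a single-sample TD error is biased: its expectation equals the true
# squared residual plus gamma^2 * Var(V(s')) = 0.81 here, not 0.
single = (gamma * v_next[:, 0]) ** 2

# Multiplying TD errors from two independent s' samples is unbiased (mean ~ 0).
double = (gamma * v_next[:, 0]) * (gamma * v_next[:, 1])
```

In a true model-free setting only one draw of s' per transition is observed, so the second column above is unavailable; the abstract's proposal is to substitute extra randomness "borrowed from the future" of the trajectory when the transition kernel varies slowly.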
In our previous work [18] we constructed a model of a noncommutative, charged, and massive scalar field based on the angular twist. We then used this model to analyze the motion of the scalar field in the Reissner-Nordstr\"om black hole background. In particular, we determined the QNM spectrum analytically in the near-extremal limit. To broaden our analysis, in this paper we apply a well-defined numerical method, the continued fraction method, and calculate the QNM spectrum for a non-extremal Reissner-Nordstr\"om black hole. To check the validity of our analytic calculations, we compare the results of the continued fraction method in the near-extremal limit with the analytic results obtained in the previous paper. We find that the results are in good agreement. For completeness, we also study the QNM spectrum in the WKB approximation.
high energy physics theory
Continuous-variable quantum key distribution employs the quadratures of a bosonic mode to establish a secret key between two remote parties. The resulting secret-key rate depends not only on the loss and noise in the communication channel, but also on a series of data-processing steps that are needed for transforming shared correlations into a final string of secret bits. Within the general setting of composable finite-size security, we consider a coherent-state protocol, whose quantum communication is simulated and classical data is post-processed via procedures of parameter estimation, error correction and privacy amplification. Such steps are presented in detail and suitably adapted to follow recently-developed tools for composable security analysis. For this advanced approach on data-processing, we provide a Python-based environment that allows us to investigate and optimize the protocol parameters to be used in practical experimental implementations.
quantum physics
Organic superalkalis are carbon-based species possessing lower ionization energy than an alkali atom. In the quest for new organic superalkalis, we study the MC6Li6 (M = Li, Na, and K) complexes and their cations by decorating hexalithiobenzene with an alkali atom using density functional theory. All MC6Li6 complexes are planar and stable against dissociation into M + C6Li6 fragments, irrespective of their charge. These complexes are stabilized by charge transfer from M to C6Li6, although the back-donation of charges tends to destabilize the neutral species. Furthermore, their degree of aromaticity increases monotonically from M = Li to K, unlike the MC6Li6+ cations, which are not aromatic as suggested by their NICS values. The adiabatic ionization energies of MC6Li6 (3.08-3.22 eV) and vertical electron affinities of MC6Li6+ (3.04-3.15 eV) suggest that MC6Li6 species form a new series of aromatic superalkalis. The variation of the ionization energy of MC6Li6 is found to be in accordance with the NICS values of MC6Li6+. The superalkali nature of MC6Li6 and its relation with the NICS values are explained on the basis of positive charge delocalization. We believe that these species will not only enrich the class of aromatic superalkalis but also open the way to exploring their possible applications.
physics
Bayes' rule tells us how to invert a causal process in order to update our beliefs in light of new evidence. If the process is believed to have a complex compositional structure, we may ask whether composing the inversions of the component processes gives the same belief update as the inversion of the whole. We answer this question affirmatively, showing that the relevant compositional structure is precisely that of the lens pattern, and that we can think of Bayesian inversion as a particular instance of a state-dependent morphism in a corresponding fibred category. We define a general notion of (mixed) Bayesian lens, and discuss the (un)lawfulness of these lenses when their contravariant components are exact Bayesian inversions. We prove our main result both abstractly and concretely, for both discrete and continuous states, taking care to illustrate the common structures.
mathematics
We study the strength of axioms needed to prove various results related to automata on infinite words and B\"uchi's theorem on the decidability of the MSO theory of $(N, {\le})$. We prove that the following are equivalent over the weak second-order arithmetic theory $RCA_0$: (1) the induction scheme for $\Sigma^0_2$ formulae of arithmetic, (2) a variant of Ramsey's Theorem for pairs restricted to so-called additive colourings, (3) B\"uchi's complementation theorem for nondeterministic automata on infinite words, (4) the decidability of the depth-$n$ fragment of the MSO theory of $(N, {\le})$, for each $n \ge 5$. Moreover, each of (1)-(4) implies McNaughton's determinisation theorem for automata on infinite words, as well as the "bounded-width" version of K\"onig's Lemma, often used in proofs of McNaughton's theorem.
computer science
We provide a unifying approach which links results on algebraic actions by Lind and Schmidt, Chung and Li, and a topological result by Meyerovitch that relates entropy to the set of asymptotic pairs. In order to do this we introduce a series of Markovian properties and, under the assumption that they are satisfied, we prove several results that relate topological entropy and asymptotic pairs (the homoclinic group in the algebraic case). As new applications of our method, we give a characterization of the homoclinic group of any finitely presented expansive algebraic action of (1) any elementary amenable group with an upper bound on the orders of finite subgroups or (2) any left orderable amenable group, using the language of independence entropy pairs.
mathematics
In this article we propose a distributed collision avoidance scheme for multi-agent unmanned aerial vehicles (UAVs) based on nonlinear model predictive control (NMPC), where other agents in the system are considered as dynamic obstacles with respect to the ego agent. Our control scheme operates at a low level and commands roll, pitch and thrust signals at a high frequency. Each agent broadcasts its predicted trajectory to the others, and we propose an obstacle prioritization scheme based on the shared trajectories to allow up-scaling of the system. The NMPC problem is solved using an ad hoc solver where PANOC is combined with an augmented Lagrangian method to compute collision-free trajectories. We evaluate the proposed scheme in several challenging laboratory experiments, with up to ten aerial agents in dense aerial swarms.
computer science
We sketch applications of the so-called J-equation to quantum information theory concerning fundamental properties of the von Neumann entropy. The J-equation has recently been proposed as a sort of progenitor of the various versions of the Jarzynski equation. It has been derived within a general framework of sequential measurements that is slightly generalised here.
quantum physics
The spin-$\frac{1}{2}$ kagome antiferromagnet is considered an ideal host for a quantum spin liquid ground state. We find that when the bonds of the kagome lattice are modulated with a periodic pattern, new quantum ground states emerge. Newly synthesized crystalline barlowite (Cu$_4$(OH)$_6$FBr) and Zn-substituted barlowite demonstrate the delicate interplay between singlet states and spin order on the spin-$\frac{1}{2}$ kagome lattice. Comprehensive structural measurements demonstrate that our new variant of barlowite maintains hexagonal symmetry at low temperatures with an arrangement of distorted and undistorted kagome triangles, for which numerical simulations predict a pinwheel valence bond crystal (VBC) state instead of a quantum spin liquid (QSL). The presence of interlayer spins eventually leads to an interesting pinwheel $q=0$ magnetic order. Partially Zn-substituted barlowite (Cu$_{3.44}$Zn$_{0.56}$(OH)$_6$FBr) has an ideal kagome lattice and shows QSL behavior, indicating a surprising robustness of the QSL against interlayer impurities. The magnetic susceptibility is similar to that of herbertsmithite, even though the Cu$^{2+}$ impurities are above the percolation threshold for the interlayer lattice and they couple more strongly to the nearest kagome moment. This system is a unique playground displaying QSL, VBC, and spin order, furthering our understanding of these highly competitive quantum states.
condensed matter
We propose a Multiscale Invertible Generative Network (MsIGN) and associated training algorithm that leverages multiscale structure to solve high-dimensional Bayesian inference. To address the curse of dimensionality, MsIGN exploits the low-dimensional nature of the posterior, and generates samples from coarse to fine scale (low to high dimension) by iteratively upsampling and refining samples. MsIGN is trained in a multi-stage manner to minimize the Jeffreys divergence, which avoids mode dropping in high-dimensional cases. On two high-dimensional Bayesian inverse problems, we show superior performance of MsIGN over previous approaches in posterior approximation and multiple mode capture. On the natural image synthesis task, MsIGN achieves superior performance in bits-per-dimension over baseline models and yields great interpretability of its neurons in intermediate layers.
statistics
In this paper, we propose a constrained linear data-feature mapping model as an interpretable mathematical model for image classification using convolutional neural networks (CNNs) such as ResNet. From this viewpoint, we establish detailed connections, at a technical level, between traditional iterative schemes for constrained linear systems and the architecture of the basic blocks of ResNet. Under these connections, we propose some natural modifications of ResNet-type models which have fewer parameters but still maintain almost the same accuracy as the corresponding original models. Numerical experiments are shown to demonstrate the validity of this constrained learning data-feature mapping assumption.
electrical engineering and systems science
Is it possible to detect a feature in an image without ever looking at it? Images are known to have sparser representation in Wavelets and other similar transforms. Compressed Sensing is a technique which proposes simultaneous acquisition and compression of any signal by taking very few random linear measurements (M). The quality of reconstruction directly relates to M, which should be above a certain threshold for a reliable recovery. Since these measurements can non-adaptively reconstruct the signal to a faithful extent using purely analytical methods like Basis Pursuit, Matching Pursuit, Iterative Thresholding, etc., we can be assured that these compressed samples contain enough information about any relevant macro-level feature contained in the (image) signal. Thus if we choose to deliberately acquire an even lower number of measurements - in order to thwart the possibility of a comprehensible reconstruction, but high enough to infer whether a relevant feature exists in an image - we can achieve accurate image classification while preserving its privacy. Through the print error detection problem, it is demonstrated that such a novel system can be implemented in practice.
electrical engineering and systems science
At present, there are outstanding discrepancies between standard model predictions and measurements of the muon's $g-2$ and several $B$-meson properties. We resolve these anomalies by considering a two-Higgs-doublet model extended to include leptoquarks and a dark Higgs boson $S$. The leptoquarks modify $B$-meson decays and also induce an $S \gamma \gamma$ coupling, which contributes to the muon's $g-2$ through a Barr-Zee diagram. We show that, for TeV-scale leptoquarks and dark Higgs boson masses $m_{S} \sim 10-200~\text{MeV}$, a consistent resolution to all of the anomalies exists. The model predicts interesting new decays, such as $B \to K^{(*)} e^+ e^-$, $B \to K^{(*)} \gamma \gamma$, $K \to \pi \gamma \gamma$, and $h \to \gamma \gamma \gamma \gamma$, with branching fractions not far below current bounds.
high energy physics phenomenology
Shock waves are examples of the far-from-equilibrium behaviour of matter; they are ubiquitous in nature, yet the underlying microscopic mechanisms behind their formation are not well understood. Here, we study the dynamics of dispersive quantum shock waves in a one-dimensional Bose gas, and show that the oscillatory train forming from a local density bump expanding into a uniform background is a result of quantum mechanical self-interference. The amplitude of oscillations, i.e., the interference contrast, decreases with the increase of both the temperature of the gas and the interaction strength due to the reduced phase coherence length. Furthermore, we show that vacuum and thermal fluctuations can significantly wash out the interference contrast, seen in the mean-field approaches, due to shot-to-shot fluctuations in the position of interference fringes around the mean.
condensed matter
Observations have shown that the majority of massive stars, progenitors of black holes (BHs), have on average more than one stellar companion. In triple systems, wide inner binaries can be driven to a merger by the third body due to long-term secular interactions (the eccentric Lidov-Kozai effect). In this Letter, we explore the properties of BH mergers in triple systems and compare their population properties to those of binaries produced in isolation and assembled in dense star clusters. Using the same stellar physics and identical assumptions for the initial populations of binaries and triples, we show that stellar triples yield a significantly flatter mass ratio distribution from $q=1$ down to $q\sim0.3$ than either binary stars or dense stellar clusters, similar to the population properties inferred from the most recent catalog of gravitational-wave events. While hierarchical mergers in clusters can also produce asymmetric mass ratios, the unique spins of such mergers can be used to distinguish them from those produced from stellar triples. All three channels occupy distinct regions in total mass-mass ratio space, which may allow them to be disentangled as more BH mergers are detected by LIGO, Virgo, and KAGRA.
astrophysics
The development and validation of 3D multiphase computational fluid dynamics (M-CFD) models and physics-informed data-driven modeling require high-quality, high-resolution data. Considering the difficulties in acquiring the corresponding experimental data in prototypical conditions, two-phase boiling simulations by Interface Tracking Method (ITM) based models can be used to generate high-resolution numerical data in a consistent and relatively economical manner. A boiling model is developed in one of the ITM-based multiphase-flow solvers, named PHASTA, to investigate the nucleate boiling phenomenon. The interaction between bubbles forming at adjacent nucleation sites is investigated with this ITM boiling model. Nucleate pool boiling simulations with multiple nucleation sites are presented in this paper, and the influences of site distance, neighboring bubble size and contact angle are investigated. The presented boiling model can conduct boiling simulations on 3D unstructured computational meshes. These simulation results improve our understanding of the physical mechanisms of the nucleate boiling phenomenon and provide high-resolution numerical data for M-CFD validation and advanced boiling model development.
physics
Despite significant recent advances in deep neural networks, training them remains a challenge due to the highly non-convex nature of the objective function. State-of-the-art methods rely on error backpropagation, which suffers from several well-known issues, such as vanishing and exploding gradients, inability to handle non-differentiable nonlinearities and to parallelize weight-updates across layers, and biological implausibility. These limitations continue to motivate exploration of alternative training algorithms, including several recently proposed auxiliary-variable methods which break the complex nested objective function into local subproblems. However, those techniques are mainly offline (batch), which limits their applicability to extremely large datasets, as well as to online, continual or reinforcement learning. The main contribution of our work is a novel online (stochastic/mini-batch) alternating minimization (AM) approach for training deep neural networks, together with the first theoretical convergence guarantees for AM in stochastic settings and promising empirical results on a variety of architectures and datasets.
statistics
The 2d principal models without boundaries have $G\times G$ symmetry. The already known integrable boundaries have either $H\times H$ or $G_{D}$ symmetries, where $H$ is such a subgroup of $G$ for which $G/H$ is a symmetric space while $G_{D}$ is the diagonal subgroup of $G\times G$. These boundary conditions have a common feature: they do not contain free parameters. We have found new integrable boundary conditions for which the remaining symmetry groups are either $G\times H$ or $H\times G$ and they contain one free parameter. The related boundary monodromy matrices are also described.
high energy physics theory
During the spin-up phase of a large pulsar glitch - a sudden decrease of the rotational period of a neutron star - the angular velocity of the star may overshoot, namely reach values greater than that observed for the new post-glitch equilibrium. These transient phenomena are expected on the basis of theoretical models for pulsar internal dynamics and their observation has the potential to provide an important diagnostic for glitch modelling. We present a simple criterion to assess the presence of an overshoot, based on the minimal analytical model that is able to reproduce an overshooting spin-up. We employ it to fit the data of the 2016 glitch of the Vela pulsar, obtaining estimates of the fractional moments of inertia of the internal superfluid components involved in the glitch, of the rise and decay timescales of the overshoot, and of the mutual friction parameters between the superfluid components and the normal one. We study the cases with and without strong entrainment in the crust: in the former, we find indication of a large inner core strongly coupled to the observable component and of a reservoir of angular momentum extending into the core to densities below nuclear saturation, while in the latter a large reservoir extending above nuclear saturation and a standard normal component without inner core are suggested.
astrophysics
We show that a system of three parallel waveguides, among which the central one is dissipative, leads to an ultrabroadband power splitting associated with an overall 50% power loss. The present approach is reminiscent of non-Hermitian systems in quantum mechanics and does not require a perfect effective index matching between the external and the central waveguides. The present concept does not need any slow adiabatic evolution of the waveguide parameters and may therefore be realized over very short device lengths, especially in the case where the central waveguide is of the plasmonic type.
physics
The paper demonstrates the geometric-optics properties of the transition radiation (TR) of an ultrarelativistic point-like charged particle crossing the transverse wall of a circular semi-infinite waveguide with ideally conducting walls. It is shown that the rays forming the TR field emanate from the point of entry and obey the laws of geometric optics. The TR field at the observation point is formed from geometric-optical rays repeatedly reflected from the waveguide wall.
physics
Two X-ray sources were recently discovered by Irwin et al. in compact companions to elliptical galaxies to show ultra-luminous flares with fast rise (~ minute) and decay (~ hour), and with a peak luminosity ~10^{40-41} erg/s. Together with two other sources found earlier, they constitute a new type of fast transients which cannot be attributed to neutron stars but might be due to intermediate-mass black holes (IMBHs; 10^{2-4} M_sun). The flaring behavior is recurrent for at least two sources. If the flare represents a short period of accretion onto an IMBH during the periastron passage of a donor star on an eccentric (i.e., repeating) or parabolic (non-repeating) orbit, we argue that the flare's rise time corresponds to the duration during which the donor's tidally stripped mass joins a residual disk at the pericenter. This duration is in turn equal to three other time scales: the duration of stripping, the sound crossing time of the donor, and the circular orbit time at the pericenter radius. Only a white dwarf can have a sound crossing time as short as one minute. Therefore, the donor must be a white dwarf and it was stripped of ~10^{-10} M_sun upon each passage at several to tens of Schwarzschild radii from the IMBH. The flux decay corresponds to the viscous drainage of the supplied mass toward the hole. Aided with long-term X-ray monitoring, this type of fast transients would be an ideal target for next-generation gravitational wave detectors.
astrophysics
Analysis of competing risks data plays an important role in the lifetime data analysis. Recently Feizjavadian and Hashemi (Computational Statistics and Data Analysis, vol. 82, 19-34, 2015) provided a classical inference of a competing risks data set using four-parameter Marshall-Olkin bivariate Weibull distribution when the failure of an unit at a particular time point can happen due to more than one cause. The aim of this paper is to provide the Bayesian analysis of the same model based on a very flexible Gamma-Dirichlet prior on the scale parameters. It is observed that the Bayesian inference has certain advantages over the classical inference in this case. We provide the Bayes estimates of the unknown parameters and the associated highest posterior density credible intervals based on Gibbs sampling technique. We further consider the Bayesian inference of the model parameters assuming partially ordered Gamma-Dirichlet prior on the scale parameters when one cause is more severe than the other cause. We have extended the results for different censoring schemes also.
statistics
We revisit and clarify some aspects of perturbative renormalization in pure Chern-Simons theory by means of a localization principle associated with an underlying supersymmetry. This perspective allows the otherwise perturbative one-loop shifts to be interpreted as nonperturbative consequences of a non-renormalization theorem, while providing a unified understanding of their origin (particularly in the case of Wilson lines). We illustrate this approach explicitly for $SU(2)$ Chern-Simons theory in flat space, on Seifert manifolds, and on a solid torus.
high energy physics theory
The paper investigates the recoverability of discrete-time signals represented by infinite sequences from incomplete observations. It is shown that there exist wide classes of signals that are everywhere dense in the space of square-summable signals and such that signals from these classes feature robust linear recoverability of their finite traces under very mild restrictions on the location of the observed data. In particular, the case of arbitrarily sparse and non-periodic subsequences of observations is not excluded.
computer science
We build on previous studies of the Higgs and Coulomb branches of SUSY quiver theories having 8 supercharges, including $3d~{\cal N}=4$, and Classical gauge groups. The vacuum moduli spaces of many such theories can be parameterised by pairs of nilpotent orbits of Classical Lie algebras; they are transverse to one orbit and intersect the closure of the second. We refer to these transverse spaces as Slodowy intersections. They embrace reduced single instanton moduli spaces, nilpotent orbits, Kraft-Procesi transitions and Slodowy slices, as well as other types. We show how quiver subtractions, between multi-flavoured unitary or ortho-symplectic quivers, can be used to find a complete set of Higgs branch constructions for the Slodowy intersections of any Classical group. We discuss the relationships between the Higgs and Coulomb branches of these quivers and $T_{\sigma}^{\rho}$ theories in the context of $3d$ mirror symmetry, including problematic aspects of Coulomb branch constructions from ortho-symplectic quivers. We review Coulomb and Higgs branch constructions for a subset of Slodowy intersections from multi-flavoured Dynkin diagram quivers. We tabulate Hilbert series and Highest Weight Generating functions for Slodowy intersections of Classical algebras up to rank 4. The results are confirmed by direct calculation of Hilbert series from a localisation formula for normal Slodowy intersections that is related to the Hall Littlewood polynomials.
high energy physics theory
We study minimal area world sheets ending on two concentric circumferences on the boundary of Euclidean $AdS_{3}$ with mixed R-R and NS-NS three-form fluxes. We solve the problem by reducing the system to a one-dimensional integrable model. We find that the NS-NS flux term either brings the surface near to the boundary or separates the circumferences. In the limit of pure NS-NS flux the solution adheres to the boundary in the former case and the outer radius diverges in the latter. We further construct the underlying elliptic spectral curve, which allows us to analyze the deformation of other related minimal surfaces. We show that in the regime of pure NS-NS flux the elliptic curve degenerates.
high energy physics theory
We discuss singularity variables which are properly suited for analyzing the kinematics of events with missing transverse energy at the LHC. We consider six of the simplest event topologies encountered in studies of leptonic W-bosons and top quarks, as well as in SUSY-like searches for new physics with dark matter particles. In each case, we illustrate the general prescription for finding the relevant singularity variable, which in turn helps delineate the visible parameter subspace on which the singularities are located. Our results can be used in two different ways - first, as a guide for targeting the signal-rich regions of parameter space during the stage of discovery, and second, as a sensitive focus point method for measuring the particle mass spectrum after the initial discovery.
high energy physics phenomenology
Quantum phase-space distributions (Wigner functions) for the plane rotator are defined using wave functions expressed in both angle and angular momentum representations, with emphasis on the quantum superposition between the Fourier dual variable and the canonically conjugate coordinate. The standard quantization condition for angular momentum appears as necessary for consistency. It is shown that at finite temperature the time dependence of the quantum wave functions may provide classical sound waves. Non-thermal quantum entropy is associated with localization along the orbit.
quantum physics
The study of sampling signals on graphs, with the goal of building an analog of sampling for standard signals in the time and spatial domains, has attracted considerable attention recently. Beyond adding to the growing theory on graph signal processing (GSP), sampling on graphs has various promising applications. In this article, we review current progress on sampling over graphs focusing on theory and potential applications. Although most methodologies used in graph signal sampling are designed to parallel those used in sampling for standard signals, sampling theory for graph signals significantly differs from the theory of Shannon--Nyquist and shift-invariant sampling. This is due in part to the fact that the definitions of several important properties, such as shift invariance and bandlimitedness, are different in GSP systems. Throughout this review, we discuss similarities and differences between standard and graph signal sampling and highlight open problems and challenges.
electrical engineering and systems science
Statistical models for social networks have enabled researchers to study complex social phenomena that give rise to observed patterns of relationships among social actors and to gain a rich understanding of the interdependent nature of social ties and actors. Much of this research has focused on social networks within medium to large social groups. To date, these advances in statistical models for social networks, and in particular, of Exponential-Family Random Graph Models (ERGMs), have rarely been applied to the study of small networks, despite small network data in teams, families, and personal networks being common in many fields. In this paper, we revisit the estimation of ERGMs for small networks and propose using exhaustive enumeration when possible. We developed an R package that implements the estimation of pooled ERGMs for small networks using Maximum Likelihood Estimation (MLE), called "ergmito". Based on the results of an extensive simulation study to assess the properties of the MLE estimator, we conclude that there are several benefits of direct MLE estimation compared to approximate methods and that this creates opportunities for valuable methodological innovations that can be applied to modeling social networks with ERGMs.
statistics
We introduce the notion of Mixed Symmetry Quantum Phase Transition (MSQPT) as singularities in the transformation of the lowest-energy state properties of a system of identical particles inside each permutation symmetry sector $\mu$, when some Hamiltonian control parameters $\lambda$ are varied. We use a three-level Lipkin-Meshkov-Glick (LMG) model, with $U(3)$ dynamical symmetry, to exemplify our construction. After reviewing the construction of $U(3)$ unirreps using Young tableaux and Gelfand basis, we firstly study the case of a finite number $N$ of three-level atoms, showing that some precursors (fidelity-susceptibility, level population, etc.) of MSQPTs appear in all permutation symmetry sectors. Using coherent (quasi-classical) states of $U(3)$ as variational states, we compute the lowest-energy density for each sector $\mu$ in the thermodynamic $N\to\infty$ limit. Extending the control parameter space by $\mu$, the phase diagram exhibits four distinct quantum phases in the $\lambda$-$\mu$ plane that coexist at a quadruple point. The ground state of the whole system belongs to the fully symmetric sector $\mu=1$ and shows a four-fold degeneracy, due to the spontaneous breakdown of the parity symmetry of the Hamiltonian. The restoration of this discrete symmetry leads to the formation of four-component Schr\"odinger cat states.
quantum physics
The occurrence of system-scale coherent structures, so-called condensates, is a well-known phenomenon in two-dimensional turbulence. Here, the transition to condensate formation is investigated as a function of the magnitude of the force and for different types of forcing. Random forces with constant mean energy input lead to a supercritical transition, while forcing through a small-scale linear instability results in a subcritical transition with bistability and hysteresis. That is, the transition to condensate formation in two-dimensional turbulence is nonuniversal. For the supercritical case we quantify the effect of large-scale friction on the value of the critical exponent and the location of the critical point.
physics
The celebrated Dyson singularity signals the relative delocalization of single-particle wave functions at the zero-energy symmetry point of disordered systems with a chiral symmetry. Here we show that analogous zero modes in interacting quantum systems can fully localize at sufficiently large disorder, but do so less strongly than nonzero modes, as signified by their real-space and Fock-space localization characteristics. We demonstrate this effect in a spin-1 Ising chain, which naturally provides a chiral symmetry in an odd-dimensional Hilbert space, thereby guaranteeing the existence of a many-body zero mode at all disorder strengths. In the localized phase, the bipartite entanglement entropy of the zero mode follows an area law, but is enhanced by a system-size-independent factor of order unity when compared to the nonzero modes. Analytically, this feature can be attributed to a specific zero-mode hybridization pattern on neighboring spins. The zero mode also displays a symmetry-induced even-odd and spin-orientation fragmentation of excitations, characterized by real-space spin correlation functions, which generalizes the sublattice polarization of topological zero modes in noninteracting systems, and holds at any disorder strength.
condensed matter
Many observed hot Jupiters are subject to atmospheric outflows. Numerical simulations have shown that the matter escaping from the atmosphere can accumulate outside the orbit of the planet, forming a torus. In a few 10^8 yr, the mass of the torus can become large enough to exert a significant gravitational effect on the planet. Accumulation of mass is, in turn, hindered by the activity of the star, which leads to the photoevaporation of the torus matter. We explore the role of these and other factors in the planet's migration in the epoch when the protoplanetary disk has already disappeared. Using the HD209458 system as an example, we show that the gravitational interaction with the torus allows the planet to migrate to its observed position, starting from an orbit >= 0.3 AU.
astrophysics
Standard decoding approaches rely on model-based channel estimation methods to compensate for varying channel effects, which degrade in performance whenever there is a model mismatch. Recently proposed deep-learning-based neural decoders address this problem by leveraging a model-free approach via gradient-based training. However, they require large amounts of data to retrain to achieve the desired adaptivity, which becomes intractable in practical systems. In this paper, we propose a new decoder: Model Independent Neural Decoder (MIND), which builds on top of neural decoders and equips them with a fast adaptation capability to varying channels. This feature is achieved via the methodology of Model-Agnostic Meta-Learning (MAML). Here the decoder: (a) learns a "good" parameter initialization in the meta-training stage where the model is exposed to a set of archetypal channels and (b) updates the parameters with respect to the observed channel in the meta-testing phase using minimal adaptation data and pilot bits. Building on top of existing state-of-the-art neural Convolutional and Turbo decoders, MIND outperforms the static benchmarks by a large margin and shows a minimal performance gap when compared to the neural (Convolutional or Turbo) decoders designed for that particular channel. In addition, MIND also shows strong learning capability for channels not exposed during the meta-training phase.
electrical engineering and systems science
We report on the growth of InP-InAs core-shell nanowires and demonstrate the formation of single quantum structures, which show the Coulomb blockade effect, over the entire lengths of the nanowires. The core-shell nanowires are grown by a selective area growth technique via metal-organic vapor phase epitaxy. The as-grown core-shell nanowires are found to be wurtzite crystals. The InP cores have a hexagonal cross section, while the InAs shells are grown preferentially on specific {1$\bar{1}$00} facets, leading to the formation of core-shell nanowires with an overall triangular cross section. The grown core-shell nanowires are transferred onto a Si/SiO$_2$ substrate and then contacted with several narrow metal electrodes. Low-temperature transport measurements show the Coulomb-blockade effect. We analyze the measured gate capacitance and single-electron charging energy of the devices and demonstrate that a quantum structure which shows the Coulomb blockade effect of a many-electron quantum dot is formed over the full length of a single core-shell nanowire and consists of the entire InAs shell in the nanowire.
condensed matter
Non-Gaussian operations have been studied intensively in recent years due to their ability to increase the secret key rate for certain CV-QKD protocols. However, most previous studies on such protocols are carried out in a single-mode setting, even though in reality any quantum state contains multi-mode components in frequency space. In this work we investigate the use of non-Gaussian operations in a multi-mode CV-QKD system. Our main finding is that, contrary to single-mode CV-QKD systems, in generic multi-mode CV-QKD systems it is possible to use non-Gaussian operations to increase the optimized secret key rate. More specifically, we find that at losses of order 30 dB, which represents a distance of order 160 km and the effective maximum distance for CV-QKD, the key rate for multi-mode non-Gaussian operations can be orders of magnitude higher than for single-mode operations. Our results are important for real-world CV-QKD systems, especially those dependent on quantum error correction - a process that requires non-Gaussian effects.
quantum physics
Out-of-training-distribution (OOD) scenarios are a common challenge for learning agents at deployment, typically leading to arbitrary deductions and poorly-informed decisions. In principle, detection of and adaptation to OOD scenes can mitigate their adverse effects. In this paper, we highlight the limitations of current approaches to novel driving scenes and propose an epistemic uncertainty-aware planning method, called \emph{robust imitative planning} (RIP). Our method can detect and recover from some distribution shifts, reducing the overconfident and catastrophic extrapolations in OOD scenes. If the model's uncertainty is too great to suggest a safe course of action, the model can instead query the expert driver for feedback, enabling sample-efficient online adaptation, a variant of our method we term \emph{adaptive robust imitative planning} (AdaRIP). Our methods outperform current state-of-the-art approaches in the nuScenes \emph{prediction} challenge, but since no benchmark evaluating OOD detection and adaptation currently exists to assess \emph{control}, we introduce an autonomous car novel-scene benchmark, \texttt{CARNOVEL}, to evaluate the robustness of driving agents to a suite of tasks with distribution shifts.
computer science
The loss landscapes of deep neural networks are not well understood due to their high nonconvexity. Empirically, the local minima of these loss functions can be connected by a learned curve in model space, along which the loss remains nearly constant; a feature known as mode connectivity. Yet, current curve finding algorithms do not consider the influence of symmetry in the loss surface created by model weight permutations. We propose a more general framework to investigate the effect of symmetry on landscape connectivity by accounting for the weight permutations of the networks being connected. To approximate the optimal permutation, we introduce an inexpensive heuristic referred to as neuron alignment. Neuron alignment promotes similarity between the distribution of intermediate activations of models along the curve. We provide theoretical analysis establishing the benefit of alignment to mode connectivity based on this simple heuristic. We empirically verify that the permutation given by alignment is locally optimal via a proximal alternating minimization scheme. Empirically, optimizing the weight permutation is critical for efficiently learning a simple, planar, low-loss curve between networks that successfully generalizes. Our alignment method can significantly alleviate the recently identified robust loss barrier on the path connecting two adversarial robust models and find more robust and accurate models on the path.
computer science
Helicity is a classically conserved quantity which can be used, in addition to and independently of the (vector) charge and chirality, to characterize thermodynamic ensembles of massless Dirac fermions. We identify a symmetry of the Dirac Lagrangian which is responsible, via the Noether theorem, for the classical conservation of the helicity current. We demonstrate the existence of new nondissipative transport phenomena, helical vortical effects, that emerge in a helically-imbalanced rotating fermionic system. These phenomena lead to the appearance of a new gapless hydrodynamic excitation, the helical vortical wave. Our results also imply that the helical symmetry suffers from a quantum anomaly. We conjecture the existence of a new type of triangle anomalies in QED which involve the helicity currents in addition to the standard vector and axial currents.
high energy physics theory
Moir\'e superlattices of two-dimensional (2D) materials with a small twist angle are thought to be flexoelectric, though unambiguous confirmation of their flexoelectricity is challenging due to artifacts associated with commonly used piezoresponse force microscopy (PFM). For example, unexpectedly small phase contrast, rather than the predicted 180°, between opposite flexoelectric polarizations was reported in twisted bilayer graphene (tBG). Here we developed a methodology to extract intrinsic moir\'e flexoelectricity using twisted double bilayer graphene (tDBG) as a model system, probed by lateral PFM. For small-twist-angle samples, we found that a vectorial decomposition is essential to recover the small intrinsic flexoelectric response at domain walls from a large background signal. The obtained three-fold symmetry of commensurate domains, with both phase and amplitude at domain walls, is fully consistent with our theoretical calculations. Incommensurate domains in tDBG at relatively large twist angles can also be observed by this technique. Our work provides a general methodology for unraveling intrinsic flexoelectricity in van der Waals moir\'e superlattices while providing insights into engineered symmetry breaking in centrosymmetric materials.
condensed matter
We make a rigorous computation of the relative entropy between the vacuum state and a coherent state for a free scalar in the framework of AQFT. We study the case of the Rindler wedge. Previous calculations, including path integral methods and computations from the lattice, give a result for such relative entropy which involves integrals of expectation values of the energy-momentum stress tensor along the considered region. However, the stress tensor is in general non-unique. That means that if we start with some stress tensor, then we can "improve" it by adding a conserved term without modifying the Poincar\'e charges. On the other hand, the presence of such an improving term affects the naive expectation for the relative entropy by a non-vanishing boundary contribution along the entangling surface. In other words, this means that there is an ambiguity in the usual formula for the relative entropy coming from the non-uniqueness of the stress tensor. The main motivation of this work is to solve this puzzle. We first show that all choices of stress tensor except the canonical one are not allowed by positivity and monotonicity of the relative entropy. Then we fully compute the relative entropy between the vacuum and a coherent state in the framework of AQFT using the Araki formula and the techniques of modular theory. In the end, both results coincide and give the usual expression for the relative entropy calculated with the canonical stress tensor.
high energy physics theory
We apply on-shell amplitude techniques to the study of neutrino oscillations in vacuum, focussing on processes involving W bosons. We start by determining the 3-point amplitude involving one neutrino, one charged lepton and one W boson, highlighting all the allowed kinematic structures. The result we obtain contains terms generated at all orders in an expansion in the cutoff scale of the theory, and we explicitly determine the lower dimensional operators behind the generation of the different structures. We then use this amplitude to construct the 4-point amplitude in which neutrinos are exchanged in the s-channel, giving rise to oscillations. We also study in detail how flavor enters in the amplitudes, and how the PMNS matrix emerges from the on-shell perspective.
high energy physics phenomenology
Competition and collaboration are at the heart of multi-agent probabilistic spreading processes. The battle on public opinion and competitive marketing campaigns are typical examples of the former, while the joint spread of multiple diseases such as HIV and tuberculosis demonstrates the latter. These spreads are influenced by the underlying network topology, the infection rates between network constituents, recovery rates and, equally importantly, the interactions between the spreading processes themselves. Here, for the first time we derive dynamic message-passing equations that provide an exact description of the dynamics of two interacting spreading processes on tree graphs, and develop systematic low-complexity models that predict the spread on general graphs. We also develop a theoretical framework for an optimal control of interacting spreading processes through an optimized resource allocation under budget constraints and within a finite time window. Derived algorithms can be used to maximize the desired spread in the presence of a rival competitive process, and to limit the spread through vaccination in the case of coupled infectious diseases. We demonstrate the efficacy of the framework and optimization method on both synthetic and real-world networks.
physics
We study a driven-dissipative model of spin-1/2 particles (qubits) on a lattice with nearest-neighbor interactions. Focusing on the role of spatially extended spin-spin correlations in determining the phases of the system, we characterize the spatial structure of the correlations in the steady state, as well as their temporal dynamics. In dimension one we use essentially exact matrix-product-operator simulations on large systems, and pushing these calculations to dimension two, we obtain accurate results on small cylinders. We also employ an approximation scheme based on solving the dynamics of the mean field dressed by the feedback of quantum fluctuations at leading order. This approach allows us to study the effect of correlations in large lattices with over one hundred thousand spins, as the spatial dimension is increased up to five. In dimension two and higher we find two new states that are stabilized by quantum correlations and do not exist in the mean-field limit of the model. One of these is a steady state with mean magnetization values that lie between the two bistable mean-field values, and whose correlation functions have properties reminiscent of both. The correlation length of the new phase diverges at a critical point, beyond which we find emerging a new limit-cycle state with the magnetization and correlators oscillating periodically in time.
quantum physics
We analyzed the long and short cadence light curves of the Kepler non-Blazhko RRab stars. We prepared the Fourier spectra, the Fourier amplitude and phase variation functions, time-frequency representations, the O-C diagrams and their Fourier contents. Our main findings are: (i) All stars which are brighter than a certain magnitude limit show significant cycle-to-cycle light curve variations. (ii) We found permanently excited additional modes for at least one third of the sample, and some other stars show temporarily excited additional modes. (iii) The presence of the Blazhko effect was carefully checked; we identified one new Blazhko candidate, but for at least 16 stars the effect can be excluded. This fact has important consequences. Either the cycle-to-cycle variation phenomenon is independent of the Blazhko effect, or the Blazhko incidence ratio is still much lower (51%-55%) than the extremely large (>90%) ratio published recently. The connection between the extra modes and the cycle-to-cycle variations is marginal.
astrophysics
We study the covariant derivatives of an eigenfunction for the Laplace-Beltrami operator on a complete, connected Riemannian manifold with nonzero constant sectional curvature. We show that along every parallel tensor, the covariant derivative is a scalar multiple of the eigenfunction. We also show that the scalar is a polynomial depending on the eigenvalue and prove some of its properties. A conjecture motivated by the study of vertex algebraic structure on space forms is also announced, suggesting the existence of interesting structures in these polynomials that await further exploration.
mathematics
We propose definitions and implementations of "S-money" - virtual tokens designed for high value fast transactions on networks with relativistic or other trusted signalling constraints, defined by inputs that in general are made at many network points, some or all of which may be space-like separated. We argue that one significant way of characterising types of money in space-time is via the "summoning" tasks they can solve: that is, how flexibly the money can be propagated to a desired space-time point in response to relevant information received at various space-time points. We show that S-money is more flexible than standard quantum or classical money in the sense that it can solve deterministic summoning tasks that they cannot. It requires the issuer and user to have networks of agents with classical data storage and communication, but no long term quantum state storage, and is feasible with current technology. User privacy can be incorporated by secure bit commitment and zero knowledge proof protocols. The level of privacy feasible in given scenarios depends on efficiency and composable security questions that remain to be systematically addressed.
quantum physics
An entanglement witness is a Hermitian operator that is useful for detecting the genuine multipartite entanglement of mixed states. Nonlinear entanglement witnesses have the advantage of a wider detection range in the entangled region. We construct genuine entanglement witnesses for four-qubit density matrices in the mutually unbiased basis. First, we find the convex feasible region with positive partial transpose states. Then, to reveal the entangled regions, we present some appropriate linear entanglement witnesses, and we find the envelope of this family of linear witnesses as a nonlinear witness. Finally, we study thermal entanglement and show that for some Hamiltonians the witnesses can detect the entanglement at all temperatures.
quantum physics
Fluorescence lifetime imaging microscopy (FLIM) is a powerful technique in biomedical research that uses the fluorophore decay rate to provide additional contrast in fluorescence microscopy. However, at present, the calculation, analysis, and interpretation of FLIM is a complex, slow, and computationally expensive process. Machine learning (ML) techniques are well suited to extract and interpret measurements from multi-dimensional FLIM data sets with substantial improvement in speed over conventional methods. In this topical review, we first discuss the basics of FLIM and ML. Second, we provide a summary of lifetime extraction strategies using ML and its applications in classifying and segmenting FLIM images with higher accuracy compared to conventional methods. Finally, we discuss two potential directions to improve FLIM with ML, with proof-of-concept demonstrations.
electrical engineering and systems science
A repulsive Bose-Einstein condensate, where the short-range interaction is included up to the third order by the interaction radius, demonstrates the existence of a bright soliton in a narrow interval of parameters. This soliton is studied here for the boson-fermion mixture, where spin-1/2 fermions are considered in the regime of full spin polarization. The influence of fermions via the boson-fermion interaction is considered up to the third order by the interaction radius. The fermions themselves are described by a hydrodynamic model including the pressure evolution equation. Interaction between fermions is considered. The first order by the interaction radius gives zero contribution to the Euler equation and the pressure evolution equation, but the third order by the interaction radius provides nonzero contributions to both equations. Repulsive (attractive) boson-fermion interaction leads to a bright (dark) fermionic soliton.
condensed matter
In recent years soft factorization theorems in scattering amplitudes have been reinterpreted as conservation laws of asymptotic charges. In gauge, gravity, and higher spin theories the asymptotic charges can be understood as canonical generators of large gauge symmetries. Such a symmetry interpretation has been so far missing for scalar soft theorems. We remedy this situation by treating the massless scalar field in terms of a dual two-form gauge field. We show that the asymptotic charges associated to the scalar soft theorem can be understood as generators of large gauge transformations of the dual two-form field. The dual picture introduces two new puzzles: the charges have very unexpected Poisson brackets with the fields, and the monopole term does not always have a dual gauge transformation interpretation. We find analogs of these two properties in the Kramers-Wannier duality on a finite lattice, indicating that the free scalar theory has new edge modes at infinity that canonically commute with all the bulk degrees of freedom.
high energy physics theory
Micro-Pattern Gaseous Detectors (MPGD) have been widely adopted in nuclear and particle physics experiments for their fast response and other excellent characteristics. To achieve the required signal strength and detection efficiency, they are sometimes operated in a high-voltage range. This often challenges the limit of the high-voltage stability of the detector. Discharge in gaseous detectors is a complex process and involves several responsible factors. The microscopic geometrical structures of the MPGDs may themselves sometimes induce discharges. In this study, we numerically investigate the discharge phenomena in non-resistive Micromegas. Within the COMSOL framework, a 3-dimensional model is developed to observe the occurrence and the development of discharge in Micromegas. The effect of space charge has been taken into account in the calculation. The model allows one to vary the geometrical parameters of the detector as well as to study the effects of gas impurities and of different numbers of primary charges.
physics
We study the symmetric periodic Anderson model of conduction electrons hybridized with localized correlated electrons on the square lattice. Using the canonical representation of electrons by Kumar, we develop a self-consistent theory of its effective charge and spin dynamics, which produces an insulating ground state that undergoes a continuous transition from the Kondo singlet to the N\'eel antiferromagnetic phase with decreasing hybridization, and uncovers two inversion transitions for the charge quasiparticles. With suitably inverted quasiparticle bands for moderate to weaker effective Kondo couplings, this effective charge dynamics in a magnetic field coupled to the electronic motion produces magnetic quantum oscillations with frequency corresponding to the half Brillouin zone.
condensed matter
We propose an operator product expansion for planar form factors of local operators in $\mathcal{N}=4$ SYM theory. This expansion is based on the dual conformal symmetry of these objects or, equivalently, the conformal symmetry of their dual description in terms of periodic Wilson loops. A form factor is decomposed into a sequence of known pentagon transitions and a new universal object that we call the "form factor transition". This transition is subject to a set of non-trivial bootstrap constraints, which we expect to be sufficient to fully determine it. We evaluate the form factor transition for MHV form factors of the chiral half of the stress tensor supermultiplet at leading order in perturbation theory and use it to produce OPE predictions at any loop order. We match the one-loop and two-loop predictions with data available in the literature.
high energy physics theory
Wireless communication applications have acquired a vastly increasing range over the past decade. This rapidly increasing demand implies limitations on utilizing wireless resources. One of the most important resources in wireless communication is the frequency spectrum. This thesis provides different solutions towards increasing the spectral efficiency. The first solution provided in this thesis is to use a more accurate optimization metric: maximal achievable rate (compared to the traditional metric: ergodic capacity) to optimize training data size in wireless communication. Training data symbols are symbols known in advance to the receiver, inserted in data packets, which the receiver uses to acquire channel state information (CSI). Optimizing training data size with respect to our proposed tight optimization metric, we can achieve higher rates, especially for short-packet and ultra-reliable applications. Our second proposed solution to increase spectral efficiency is to design a multifunction communication and sensing platform utilizing a special training sequence design. We propose a platform where two training sequences are designed, one for the base station and the other for the user. By designing these two training sequences such that they are uncorrelated with each other, the base station will be able to distinguish between the two. Having one of the sequences especially designed for radar purposes (such that it has an impulse-like autocorrelation), the system will be able to sense the environment and transmit and receive the communication data simultaneously.
electrical engineering and systems science
Digital rock imaging is constrained by detector hardware, and a trade-off between the image field of view (FOV) and the image resolution must be made. This can be compensated for with super resolution (SR) techniques that take a wide-FOV, low resolution (LR) image and super resolve a high resolution (HR), high-FOV image. The Enhanced Deep Super Resolution Generative Adversarial Network (EDSRGAN) is trained on the Deep Learning Digital Rock Super Resolution Dataset, a diverse compilation of 12000 raw and processed uCT images. The network achieves a 50% to 70% reduction in relative error over bicubic interpolation. GAN performance in recovering texture shows superior visual similarity compared to SRCNN and other methods. Difference maps indicate that the SRCNN section of the SRGAN network recovers large-scale edge features (grain boundaries) while the GAN network regenerates perceptually indistinguishable high-frequency texture. Network performance is generalised with augmentation, showing high adaptability to noise and blur. HR images are fed into the network, generating HR-SR images to extrapolate network performance to sub-resolution features present in the HR images themselves. Results show that under-resolution features such as dissolved minerals and thin fractures are regenerated despite the network operating outside of its trained specifications. Comparison with Scanning Electron Microscope images shows that details are consistent with the underlying geometry of the sample. Recovery of textures benefits the characterisation of digital rocks with a high proportion of under-resolution micro-porous features, such as carbonate and coal samples. Images that are normally constrained by the mineralogy of the rock (coal), by fast transient imaging (waterflooding), or by the energy of the source (microporosity), can be super resolved accurately for further analysis downstream.
electrical engineering and systems science
In this paper, we introduce a method for segmenting time series data using tools from Bayesian nonparametrics. We consider the task of temporal segmentation of a set of time series data into representative stationary segments. We use Gaussian process (GP) priors to impose our knowledge about the characteristics of the underlying stationary segments, and use a nonparametric distribution to partition the sequences into such segments, formulated in terms of a prior distribution on segment length. Given the segmentation, the model can be viewed as a variant of a Gaussian mixture model where the mixture components are described using the covariance function of a GP. We demonstrate the effectiveness of our model on synthetic data as well as on real time-series data of heartbeats where the task is to segment the indicative types of beats and to classify the heartbeat recordings into classes that correspond to healthy and abnormal heart sounds.
computer science
In this paper, we give the general expressions for a special series of tree amplitudes of the Yang-Mills theory. This series of amplitudes have two adjacent massless spin-1 particles with extra-dimensional momenta and any number of positive helicity gluons. With special helicity choices, we use the spinor helicity formalism to express these n-point amplitudes in compact forms, and find a clever way to use the BCFW recursion relations to prove the results. Then these amplitudes are used to form the complete 1-loop all-plus integrand with any number of gluons, expressed in the Q-cut representation.
high energy physics theory
Interest in substitutional disordered alloys has recently reemerged with focus on symmetry-sensitive properties of the alloy such as topological insulation and the Rashba effect. A substitutional random alloy manifests a distribution of local environments, creating a polymorphous network. While the macroscopic average (monomorphous) structure may have the original high symmetry of the constituent compounds, many observable physical properties are sensitive to local symmetry, and are hence $\langle P(S_i)\rangle$ rather than $P(S_0)=P(\langle S_i\rangle)$. The fundamental difference between the polymorphous $\langle P(S_i)\rangle$ and the monomorphous $P(S_0)$ has led to often-diverging results and to the missing atomic-scale resolution needed to discern symmetry-related physics. A natural approach capturing the polymorphous aspect is the supercell model, which however suffers from the difficulty of band folding ('spaghetti bands'), rendering the E vs k dispersion needed in topology and Rashba physics and seen in experiments practically inaccessible. A solution that retains the polymorphous nature but restores the E vs k relation is to unfold the supercell bands. This yields the alloy Effective Band Structure (EBS), providing a 3D picture of spectral density consisting of E- and k-dependent spectral weight with coherent and incoherent features, all created naturally by the polymorphous distribution of many local environments. We illustrate this EBS approach for CdTe-HgTe, PbSe-SnSe and PbS-PbTe alloys. We find properties that are critical for e.g. topological phase transition and Rashba splitting but totally absent in conventional monomorphous approaches, including (1) co-existing, wavevector- and energy-dependent coherent band splitting and incoherent band broadening, (2) coherent-incoherent transitions along different k-space directions, and (3) Rashba-like band splitting having both coherent and incoherent features.
condensed matter
Sequential Monte Carlo (SMC), also known as particle filters, has been widely accepted as a powerful computational tool for making inference with dynamical systems. A key step in SMC is resampling, which plays the role of steering the algorithm towards the future dynamics. Several strategies have been proposed and used in practice, including multinomial resampling, residual resampling (Liu and Chen 1998), optimal resampling (Fearnhead and Clifford 2003), stratified resampling (Kitagawa 1996), and optimal transport resampling (Reich 2013). We show that, in the one dimensional case, optimal transport resampling is equivalent to stratified resampling on the sorted particles, and they both minimize the resampling variance as well as the expected squared energy distance between the original and resampled empirical distributions; in the multidimensional case, the variance of stratified resampling after sorting particles using Hilbert curve (Gerber et al. 2019) in $\mathbb{R}^d$ is $O(m^{-(1+2/d)})$, an improved rate compared to the original $O(m^{-(1+1/d)})$, where $m$ is the number of resampled particles. This improved rate is the lowest for ordered stratified resampling schemes, as conjectured in Gerber et al. (2019). We also present an almost sure bound on the Wasserstein distance between the original and Hilbert-curve-resampled empirical distributions. In light of these theoretical results, we propose the stratified multiple-descendant growth (SMG) algorithm, which allows us to explore the sample space more efficiently compared to the standard i.i.d. multiple-descendant sampling-resampling approach as measured by the Wasserstein metric. Numerical evidence is provided to demonstrate the effectiveness of our proposed method.
statistics
Optical responses of ferromagnetic materials with a spin gauge field that drives an intrinsic spin current are theoretically studied. The conductivity tensor is calculated based on a linear response theory for the applied electric field, taking account of the non-linear effects of the spin gauge field up to second order. We consider the case where the spin gauge field is uniform, realized for a spiral magnetization structure or a uniform spin-orbit interaction. The spin gauge field, or an intrinsic spin current, turns out to give rise to anisotropic optical responses, which is expected to be useful for the experimental detection of magnetization structures.
condensed matter
We consider a system of reaction-diffusion equations including chemotaxis terms and coming out of the modeling of multiple sclerosis. The global existence of strong solutions to this system in any dimension is proved, and it is also shown that the solution is bounded uniformly in time. Finally, a nonlinear stability result is obtained when the chemotaxis term is not too big. We also perform numerical simulations to show the appearance of Turing patterns when the chemotaxis term is large.
mathematics
In relativistic nuclear collisions the production of hadrons with light (u,d,s) quarks is quantitatively described in the framework of the Statistical Hadronization Model (SHM). Charm quarks are dominantly produced in initial hard collisions but interact strongly in the hot fireball and thermalize. Therefore charmed hadrons can be incorporated into the SHM by treating charm quarks as 'impurities' with thermal distributions, while the total charm content of the fireball is fixed by the measured open charm cross section. We call this model SHMc and demonstrate that with SHMc the measured multiplicities of single charm hadrons in lead-lead collisions at LHC energies can be well described with the same thermal parameters as for (u,d,s) hadrons. Furthermore, transverse momentum distributions are computed in a blast-wave model, which includes the resonance decay kinematics. SHMc is extended to lighter collision systems down to oxygen-oxygen and includes doubly- and triply-charmed hadrons. We show predictions for production probabilities of such states exhibiting a characteristic and quite spectacular enhancement hierarchy.
high energy physics phenomenology
The Caldeira-Leggett master equation, as an example of a Markovian master equation without Lindblad form, is investigated for mathematical consistency. We explore situations, both analytically and numerically, where positivity violations of the density operator occur. We confirm some known facts about this problem but also find new, surprising cases. Our analytical results are based on the full solution of the Caldeira-Leggett master equation obtained via the method of characteristics. The preservation of positivity is mainly investigated with the help of the density operator's purity, and we also give some numerical results on the violation of the Robertson-Schr\"odinger uncertainty relation.
quantum physics
In a classic transactional distributed database management system (DBMS), write transactions invariably synchronize with a coordinator before final commitment. While enforcing serializability, this model has long been criticized for not satisfying the applications' availability requirements. When entering the era of the Internet of Things (IoT), this problem has become more severe, as an increasing number of applications call for the capability of hybrid transactional and analytical processing (HTAP), where aggregation constraints need to be enforced as part of transactions. Current systems work around this by creating escrows, allowing occasional overshoots of constraints, which are handled via compensating application logic. The WiSer DBMS targets consistency with availability by splitting the database commit into two steps. First, a PROMISE step that corresponds to what humans are used to as commitment, and runs without talking to a coordinator. Second, a SERIALIZE step that fixes transactions' positions in the serializable order via a consensus procedure. We achieve this split via a novel data representation that embeds read-sets into transaction deltas, and serialization sequence numbers into table rows. WiSer does no sharding (all nodes can run transactions that modify the entire database), and yet enforces aggregation constraints. Both read-write conflicts and aggregation constraint violations are resolved lazily in the serialized data. WiSer also covers node joins and departures as database tables, thus simplifying correctness and failure handling. We present the design of WiSer as well as experiments suggesting this approach has promise.
computer science
Since the discovery of chemically peculiar stars in globular clusters in the last century, the study of multiple populations has become increasingly important, given that chemical inhomogeneity is found in almost all globular clusters. Despite various proposed theories attempting to explain this phenomenon, fitting all the observational evidence in globular clusters with a single theory remains notoriously difficult and currently unsuccessful. In order to improve existing models and motivate new ones, we are observing globular clusters under critical conditions, e.g., at the metal-rich end, the metal-poor end, and the low-mass end. In this paper, we present our first attempt to investigate multiple populations in low-mass globular clusters. We obtained low-resolution spectra around 4000 Å of 30 members of the globular cluster Palomar 13 using the OSIRIS multi-object spectrograph mounted on the Gran Telescopio Canarias. The membership of red giant branch stars is confirmed by the latest proper motions from Gaia DR2 and literature velocities. After comparing the measured CN and CH spectral indices with those of stellar models, we found a clear sign of nitrogen variation among the red giant branch stars. Palomar 13 may be the lowest-mass globular cluster showing multiple populations.
astrophysics
Aspects of screening and confinement are re-examined for a recently proposed compact Abelian Higgs model with a $\theta$-term. Our discussion is accomplished using the gauge-invariant but path-dependent variables formalism, which is an alternative to the Wilson loop approach. We explicitly show that the static potential profile is the sum of an effective Yukawa potential and a linear potential, leading to the confinement of static external charges. We point out the central role of the parameter measuring the stiffness of the vortex lines present in the model in both the Yukawa-like and the confining sectors of the effective inter-particle potential we have computed.
high energy physics theory
We construct several novel examples of 3d $\mathcal{N}=2$ models whose free energy scales as $N^{3/2}$ at large $N$. This is the first step towards the identification of field theories with an M-theory dual. Furthermore, we match the volumes extracted from the free energy with the ones computed from the Hilbert series. We perform a similar analysis for the 4d parents of the 3d models, matching the volume extracted from the $a$ conformal anomaly to that obtained from the Hilbert series. For some of the 4d models, we show the existence of a Sasaki-Einstein metric on the internal space of the candidate type IIB gravity dual.
high energy physics theory
Considering the potential of thermostatically controlled loads (TCLs) to provide flexibility in demand response or load control, a semi-Markov model (SMM) for the ON/OFF-controlled TCL is developed in this paper. This model makes full use of the adjustment flexibility of TCLs when the control period is long and maintains the diversity of switch states in the cluster. This model can also satisfy user comfort and protect user privacy. Then, this paper adopts the cyber-physical system (CPS) framework to realize the coupling of the discrete control process and the continuous physical process. Finally, the proposed model is applied to the coordination of large-scale heterogeneous air-conditioners (ACs) based on the equivalent thermal parameters (ETP) model. Simulation results verify that under the proposed approach, the power of the TCL cluster can track the control signal accurately, with both user comfort and the diversity of the TCL cluster's operating states guaranteed.
mathematics