Dataset columns: text (string, 138 to 2.38k characters), labels (sequence of length 6), Predictions (sequence of length 1 to 3).
Title: Life and work of Egbert Brieskorn (1936 - 2013), Abstract: Egbert Brieskorn died on July 11, 2013, a few days after his 77th birthday. He was an impressive personality who left a lasting impression on all who knew him, whether inside or outside of mathematics. Brieskorn was a great mathematician, but his interests, knowledge, and activities ranged far beyond mathematics. In this contribution, which is strongly influenced by the authors' many years of personal connection with Brieskorn, we try to give a deeper insight into his life and work. We illuminate both his personal commitment to peace and the environment and his long-term study of the life and work of Felix Hausdorff, including the publication of Hausdorff's collected works. The main focus of the article, however, is on the presentation of his remarkable and influential mathematical work.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Resource Allocation for a Full-Duplex Base Station Aided OFDMA System, Abstract: Exploiting full-duplex (FD) technology on base stations (BSs) is a promising solution for enhancing system performance. Motivated by this, we revisit a full-duplex base station (FD-BS) aided OFDMA system, which consists of one BS, several uplink/downlink users, and multiple subcarriers. We consider a joint 3-dimensional (3D) mapping scheme among subcarriers, downlink users (DUEs), and uplink users (UUEs), together with an associated power allocation optimization. In detail, we first decompose the complex 3D mapping problem into three 2-dimensional sub-problems and solve each using the iterative Hungarian method. Then, based on the Lagrange dual method, we sequentially solve the power allocation and 3-dimensional mapping problems by fixing a dual point. Finally, the optimal solution is obtained by utilizing the sub-gradient method. Unlike existing work that solves either the 3D mapping or the power allocation problem, and only with high computational complexity, we tackle both and reduce the computational complexity from exponential to polynomial order. Numerical simulations are conducted to verify the proposed scheme.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Production of Entanglement Entropy by Decoherence, Abstract: We examine the dynamics of entanglement entropy of all parts in an open system consisting of a two-level dimer interacting with an environment of oscillators. The dimer-environment interaction is almost energy conserving. We find the precise link between decoherence and production of entanglement entropy. We show that not all environment oscillators carry significant entanglement entropy and we identify the oscillator frequency regions which contribute to the production of entanglement entropy. Our results hold for arbitrary strengths of the dimer-environment interaction, and they are mathematically rigorous.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: The Unusual Effectiveness of Averaging in GAN Training, Abstract: We show empirically that the optimal strategy of parameter averaging in a minmax convex-concave game setting is also strikingly effective in the non convex-concave GAN setting, specifically alleviating the convergence issues associated with cycling behavior observed in GANs. We show that averaging over generator parameters outside of the training loop consistently improves inception and FID scores on different architectures and for different GAN objectives. We provide comprehensive experimental results across a range of datasets (bilinear games, mixtures of Gaussians, CIFAR-10, STL-10, CelebA, and ImageNet) to demonstrate its effectiveness. We achieve state-of-the-art results on CIFAR-10 and produce clean CelebA face images, demonstrating that averaging is one of the most effective techniques for training highly performant GANs.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Stellarator bootstrap current and plasma flow velocity at low collisionality, Abstract: The bootstrap current and flow velocity of a low-collisionality stellarator plasma are calculated. As far as possible, the analysis is carried out in a uniform way across all low-collisionality regimes in general stellarator geometry, assuming only that the confinement is good enough that the plasma is approximately in local thermodynamic equilibrium. It is found that conventional expressions for the ion flow speed and bootstrap current in the low-collisionality limit are accurate only in the $1/\nu$-collisionality regime and need to be modified in the $\sqrt{\nu}$-regime. The correction due to finite collisionality is also discussed and is found to scale as $\nu^{2/5}$.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Dynamical patterns in individual trajectories toward extremism, Abstract: Society faces a fundamental global problem of understanding which individuals are currently developing strong support for some extremist entity such as ISIS (Islamic State) -- even if they never end up doing anything in the real world. The importance of online connectivity in developing intent has been confirmed by recent case-studies of already convicted terrorists. Here we identify dynamical patterns in the online trajectories that individuals take toward developing a high level of extremist support -- specifically, for ISIS. Strong memory effects emerge among individuals whose transition is fastest, and hence may become 'out of the blue' threats in the real world. A generalization of diagrammatic expansion theory helps quantify these characteristics, including the impact of changes in geographical location, and can facilitate prediction of future risks. By quantifying the trajectories that individuals follow on their journey toward expressing high levels of pro-ISIS support -- irrespective of whether they then carry out a real-world attack or not -- our findings can help move safety debates beyond reliance on static watch-list identifiers such as ethnic background or immigration status, and/or post-fact interviews with already-convicted individuals. Given the broad commonality of social media platforms, our results likely apply quite generally: for example, even on Telegram where (like Twitter) there is no built-in group feature as in our study, individuals tend to collectively build and pass through so-called super-group accounts.
[ 1, 1, 0, 0, 0, 0 ]
[ "Quantitative Biology", "Statistics" ]
Title: Boundary Hamiltonian theory for gapped topological phases on an open surface, Abstract: In this paper we propose a Hamiltonian approach to gapped topological phases on an open surface with boundary. Our setting is an extension of the Levin-Wen model to a 2d graph on the open surface, whose boundary is part of the graph. We systematically construct a series of boundary Hamiltonians such that each of them, when combined with the usual Levin-Wen bulk Hamiltonian, gives rise to a gapped energy spectrum which is topologically protected; and the corresponding wave functions are robust under changes of the underlying graph that maintain the spatial topology of the system. We derive explicit ground-state wavefunctions of the system and show that the boundary types are classified by Morita-equivalent Frobenius algebras. We also construct boundary quasiparticle creation, measuring and hopping operators. These operators allow us to characterize the boundary quasiparticles by bimodules of Frobenius algebras. Our approach also offers a concrete set of tools for computations. We illustrate our approach by a few examples.
[ 0, 1, 1, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: On the Evaluation of Silicon Photomultipliers for Use as Photosensors in Liquid Xenon Detectors, Abstract: Silicon photomultipliers (SiPMs) are potential solid-state alternatives to traditional photomultiplier tubes (PMTs) for single-photon detection. In this paper, we report on evaluating SensL MicroFC-10035-SMT SiPMs for their suitability as PMT replacements. The devices were successfully operated in a liquid-xenon detector, which demonstrates that SiPMs can be used in noble element time projection chambers as photosensors. The devices were also cooled down to 170 K to observe dark count dependence on temperature. No dependencies on the direction of an applied 3.2 kV/cm electric field were observed with respect to dark-count rate, gain, or photon detection efficiency.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Universal Protocols for Information Dissemination Using Emergent Signals, Abstract: We consider a population of $n$ agents which communicate with each other in a decentralized manner, through random pairwise interactions. One or more agents in the population may act as authoritative sources of information, and the objective of the remaining agents is to obtain information from or about these source agents. We study two basic tasks: broadcasting, in which the agents are to learn the bit-state of an authoritative source which is present in the population, and source detection, in which the agents are required to decide if at least one source agent is present in the population or not. We focus on designing protocols which meet two natural conditions: (1) universality, i.e., independence of population size, and (2) rapid convergence to a correct global state after a reconfiguration, such as a change in the state of a source agent. Our main positive result is to show that both of these constraints can be met. For both the broadcasting problem and the source detection problem, we obtain solutions with a convergence time of $O(\log^2 n)$ rounds, w.h.p., from any starting configuration. The solution to broadcasting is exact, which means that all agents reach the state broadcast by the source, while the solution to source detection admits one-sided error on an $\varepsilon$-fraction of the population (which is unavoidable for this problem). Both protocols are easy to implement in practice and have a compact formulation. Our protocols exploit the properties of self-organizing oscillatory dynamics. On the hardness side, our main structural insight is to prove that any protocol which meets the constraints of universality and of rapid convergence after reconfiguration must display a form of non-stationary behavior (of which oscillatory dynamics are an example). We also observe that the periodicity of the oscillatory behavior of the protocol, when present, must necessarily depend on the number $\#X$ of source agents present in the population. For instance, our protocols inherently rely on the emergence of a signal passing through the population, whose period is $\Theta(\log \frac{n}{\#X})$ rounds for most starting configurations. The design of clocks with tunable frequency may be of independent interest, notably in modeling biological networks.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Universal elliptic Gauß sums for Atkin primes in Schoof's algorithm, Abstract: This work builds on earlier results. We define universal elliptic Gauß sums for Atkin primes in Schoof's algorithm for counting points on elliptic curves. Subsequently, we show these quantities admit an efficiently computable representation in terms of the $j$-invariant and two other modular functions. We analyse the necessary computations in detail and derive an alternative approach for determining the trace of the Frobenius homomorphism for Atkin primes using these pre-computations. A rough run-time analysis shows, however, that this new method is not competitive with existing ones.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Computer Science" ]
Title: Wavelet graphs for the direct detection of gravitational waves, Abstract: A second generation of gravitational wave detectors will soon come online with the objective of measuring for the first time the tiny gravitational signal from the coalescence of black hole and/or neutron star binaries. In this communication, we propose a new time-frequency search method alternative to matched filtering techniques that are usually employed to detect this signal. This method relies on a graph that encodes the time evolution of the signal and its variability by establishing links between coefficients in the multi-scale time-frequency decomposition of the data. We provide a proof of concept for this approach.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Statistics", "Computer Science" ]
Title: A deep Convolutional Neural Network for topology optimization with strong generalization ability, Abstract: This paper proposes a deep Convolutional Neural Network (CNN) with strong generalization ability for structural topology optimization. The architecture of the neural network is made up of encoding and decoding parts, which provide down- and up-sampling operations. In addition, a popular technique, namely U-Net, was adopted to improve the performance of the proposed neural network. The input of the neural network is a well-designed tensor in which each channel encodes different information about the problem, and the output is the layout of the optimal structure. To train the neural network, a large dataset is generated by a conventional topology optimization approach, i.e. SIMP. The performance of the proposed method was evaluated by comparing its efficiency and accuracy with SIMP on a series of typical optimization problems. Results show that a significant reduction in computational cost was achieved with little sacrifice in the optimality of the design solutions. Furthermore, the proposed method can intelligently solve problems under boundary conditions not included in the training dataset.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Estimation of the asymptotic variance of univariate and multivariate random fields and statistical inference, Abstract: Correlated random fields are a common way to model dependence structures in high-dimensional data, especially for data collected in imaging. One important parameter characterizing the degree of dependence is the asymptotic variance, which adds up all autocovariances in the temporal and spatial domain. In particular, it arises in the standardization of test statistics based on partial sums of random fields, and thus the construction of tests requires its estimation. In this paper we propose consistent estimators for this parameter for strictly stationary $\phi$-mixing random fields with arbitrary dimension of the domain and taking values in a Euclidean space of arbitrary dimension, thus allowing for multivariate random fields. We establish consistency, provide central limit theorems, and show that distributional approximations of related test statistics based on sample autocovariances of random fields can be obtained by the subsampling approach. Since in applications the spatial-temporal correlations are often quite local, so that a large number of autocovariances vanish or are negligible, we also investigate a thresholding approach where sample autocovariances of small magnitude are omitted. Extensive simulation studies show that the proposed estimators work well in practice and, when used to standardize image test statistics, can provide highly accurate image testing procedures.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: Efficient Spatial Variation Characterization via Matrix Completion, Abstract: In this paper, we propose a novel method to estimate and characterize spatial variations on dies or wafers. This new technique exploits recent developments in matrix completion, enabling estimation of spatial variation across wafers or dies with a small number of randomly picked sampling points while still achieving fairly high accuracy. This new approach can be easily generalized, including for estimation of mixed spatial and structure or device type information.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Classification of grasping tasks based on EEG-EMG coherence, Abstract: This work presents an innovative application of the well-known concept of cortico-muscular coherence for the classification of various motor tasks, i.e., grasps of different kinds of objects. Our approach can classify objects with different weights (motor-related features) and different surface frictions (haptics-related features) with high accuracy (over 0.8). The outcomes presented here provide information about the synchronization existing between the brain and the muscles during specific activities; thus, this may represent a new effective way to perform activity recognition.
[ 0, 0, 0, 0, 1, 0 ]
[ "Computer Science", "Quantitative Biology" ]
Title: Kepler sheds new and unprecedented light on the variability of a blue supergiant: gravity waves in the O9.5Iab star HD 188209, Abstract: Stellar evolution models are most uncertain for evolved massive stars. Asteroseismology based on high-precision uninterrupted space photometry has become a new way to test the outcome of stellar evolution theory and was recently applied to a multitude of stars, but not yet to massive evolved supergiants. Our aim is to detect, analyse and interpret the photospheric and wind variability of the O9.5Iab star HD 188209 from Kepler space photometry and long-term high-resolution spectroscopy. We used Kepler scattered-light photometry obtained by the nominal mission during 1460 d to deduce the photometric variability of this O-type supergiant. In addition, we assembled and analysed high-resolution, high signal-to-noise spectroscopy taken with four spectrographs during some 1800 d to interpret the temporal spectroscopic variability of the star. The variability of this blue supergiant derived from the scattered-light space photometry is in full agreement with the one found in the ground-based spectroscopy. We find significant low-frequency variability that is consistently detected in all spectral lines of HD 188209. The photospheric variability propagates into the wind, where it has similar frequencies but slightly higher amplitudes. The morphology of the frequency spectra derived from the long-term photometry and spectroscopy points towards a spectrum of travelling waves with frequency values in the range expected for an evolved O-type star. Convectively-driven internal gravity waves excited in the stellar interior offer the most plausible explanation of the detected variability.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Scaling the Scattering Transform: Deep Hybrid Networks, Abstract: We use the scattering network as a generic and fixed initialization of the first layers of a supervised hybrid deep network. We show that early layers do not necessarily need to be learned, providing the best results to date with pre-defined representations while being competitive with Deep CNNs. Using a shallow cascade of 1 x 1 convolutions, which encodes scattering coefficients corresponding to spatial windows of very small size, makes it possible to obtain AlexNet accuracy on ImageNet ILSVRC2012. We demonstrate that this local encoding explicitly learns invariance w.r.t. rotations. Combining scattering networks with a modern ResNet, we achieve a single-crop top-5 error of 11.4% on ImageNet ILSVRC2012, comparable to the ResNet-18 architecture, while utilizing only 10 layers. We also find that hybrid architectures can yield excellent performance in the small sample regime, exceeding their end-to-end counterparts, through their ability to incorporate geometrical priors. We demonstrate this on subsets of the CIFAR-10 dataset and on the STL-10 dataset.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Leontief Meets Shannon - Measuring the Complexity of the Economic System, Abstract: We develop a complexity measure for large-scale economic systems based on Shannon's concept of entropy. By adopting Leontief's perspective of the production process as a circular flow, we formulate the process as a Markov chain. Then we derive a measure of economic complexity as the average number of bits required to encode the flow of goods and services in the production process. We illustrate this measure using data from seven national economies, spanning several decades.
[ 0, 1, 0, 1, 0, 0 ]
[ "Quantitative Finance", "Statistics" ]
Title: RLE Plots: Visualising Unwanted Variation in High Dimensional Data, Abstract: Unwanted variation can be highly problematic and so its detection is often crucial. Relative log expression (RLE) plots are a powerful tool for visualising such variation in high dimensional data. We provide a detailed examination of these plots, with the aid of examples and simulation, explaining what they are and what they can reveal. RLE plots are particularly useful for assessing whether a procedure aimed at removing unwanted variation, i.e. a normalisation procedure, has been successful. These plots, while originally devised for gene expression data from microarrays, can also be used to reveal unwanted variation in many other kinds of high dimensional data, where such variation can be problematic.
[ 0, 0, 0, 1, 0, 0 ]
[ "Statistics", "Quantitative Biology" ]
Title: ALMA Observations of Gas-Rich Galaxies in z~1.6 Galaxy Clusters: Evidence for Higher Gas Fractions in High-Density Environments, Abstract: We present ALMA CO (2-1) detections in 11 gas-rich cluster galaxies at z~1.6, constituting the largest sample of molecular gas measurements in z>1.5 clusters to date. The observations span three galaxy clusters, derived from the Spitzer Adaptation of the Red-sequence Cluster Survey. We augment the >5sigma detections of the CO (2-1) fluxes with multi-band photometry, yielding stellar masses and infrared-derived star formation rates, to place some of the first constraints on molecular gas properties in z~1.6 cluster environments. We measure sizable gas reservoirs of 0.5-2x10^11 solar masses in these objects, with high gas fractions and long depletion timescales, averaging 62% and 1.4 Gyr, respectively. We compare our cluster galaxies to the scaling relations of the coeval field, in the context of how gas fractions and depletion timescales vary with respect to the star-forming main sequence. We find that our cluster galaxies lie systematically off the field scaling relations at z=1.6 toward enhanced gas fractions, at a level of ~4sigma, but have consistent depletion timescales. Exploiting CO detections in lower-redshift clusters from the literature, we investigate the evolution of the gas fraction in cluster galaxies, finding it to mimic the strong rise with redshift in the field. We emphasize the utility of detecting abundant gas-rich galaxies in high-redshift clusters, deeming them as crucial laboratories for future statistical studies.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: CardiacNET: Segmentation of Left Atrium and Proximal Pulmonary Veins from MRI Using Multi-View CNN, Abstract: Anatomical and biophysical modeling of the left atrium (LA) and proximal pulmonary veins (PPVs) is important for the clinical management of several cardiac diseases. Magnetic resonance imaging (MRI) allows qualitative assessment of the LA and PPVs through visualization. However, there is a strong need for an advanced image segmentation method to be applied to cardiac MRI for quantitative analysis of the LA and PPVs. In this study, we address this unmet clinical need by exploring a new deep learning-based segmentation strategy for quantification of the LA and PPVs with high accuracy and heightened efficiency. Our approach is based on a multi-view convolutional neural network (CNN) with an adaptive fusion strategy and a new loss function that allows fast and more accurate convergence of the backpropagation-based optimization. After training our network from scratch using more than 60K 2D MRI images (slices), we evaluated our segmentation strategy on the STACOM 2013 cardiac segmentation challenge benchmark. Qualitative and quantitative evaluations, obtained from the segmentation challenge, indicate that the proposed method achieved state-of-the-art sensitivity (90%), specificity (99%), precision (94%), and efficiency levels (10 seconds on GPU, and 7.5 minutes on CPU).
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Quantitative Biology" ]
Title: Klt varieties with trivial canonical class - Holonomy, differential forms, and fundamental groups, Abstract: We investigate the holonomy group of singular Kähler-Einstein metrics on klt varieties with numerically trivial canonical divisor. Finiteness of the number of connected components, a Bochner principle for holomorphic tensors, and a connection between irreducibility of holonomy representations and stability of the tangent sheaf are established. As a consequence, known decompositions for tangent sheaves of varieties with trivial canonical divisor are refined. In particular, we show that up to finite quasi-étale covers, varieties with strongly stable tangent sheaf are either Calabi-Yau or irreducible holomorphic symplectic. These results form one building block for Höring-Peternell's recent proof of a singular version of the Beauville-Bogomolov Decomposition Theorem.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Development and Characterisation of a Gas System and its Associated Slow-Control System for an ATLAS Small-Strip Thin Gap Chamber Testing Facility, Abstract: A quality assurance and performance qualification laboratory was built at McGill University for the Canadian-made small-strip Thin Gap Chamber (sTGC) muon detectors produced for the 2019-2020 ATLAS experiment muon spectrometer upgrade. The facility uses cosmic rays as a muon source to ionise the quenching gas mixture of pentane and carbon dioxide flowing through the sTGC detector. A gas system was developed and characterised for this purpose, with a simple and efficient gas condenser design utilizing a Peltier thermoelectric cooler (TEC). The gas system was tested to provide the desired 45 vol% pentane concentration. For continuous operations, a state-machine system was implemented with alerting and remote monitoring features to run all cosmic-ray data-acquisition associated slow-control systems, such as high/low voltage, gas system and environmental monitoring, in a safe and continuous mode, even in the absence of an operator.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Convergence of extreme value statistics in a two-layer quasi-geostrophic atmospheric model, Abstract: We search for the signature of universal properties of extreme events, theoretically predicted for Axiom A flows, in a chaotic and high-dimensional dynamical system by studying the convergence of GEV (Generalized Extreme Value) and GP (Generalized Pareto) shape parameter estimates to a theoretical value, expressed in terms of partial dimensions of the attractor, which are global properties. We consider a two-layer quasi-geostrophic (QG) atmospheric model using two forcing levels, and analyse extremes of different types of physical observables (local energy, zonally-averaged energy, and the average value of energy over the mid-latitudes). Regarding the predicted universality, we find closer agreement in the shape parameter estimates only in the case of strong forcing, which produces highly chaotic behaviour, and only for some observables (the local energy at every latitude). Due to the limited (though very large) data size and the presence of serial correlations, it is difficult to obtain robust statistics of extremes for the other observables. In the case of weak forcing, which induces a less pronounced chaotic flow with regime behaviour, we find worse agreement with the theory developed for Axiom A flows, which is unsurprising considering the properties of the system.
[ 0, 1, 0, 1, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: On Blocking Collisions between People, Objects and other Robots, Abstract: Intentional or unintentional contacts are bound to occur increasingly more often due to the deployment of autonomous systems in human environments. In this paper, we devise methods to computationally predict imminent collisions between objects, robots and people, and use an upper-body humanoid robot to block them if they are likely to happen. We employ statistical methods for effective collision prediction followed by sensor-based trajectory generation and real-time control to attempt to stop the likely collisions using the most favorable part of the blocking robot. We thoroughly investigate collisions in various types of experimental setups involving objects, robots, and people. Overall, the main contribution of this paper is to devise sensor-based prediction, trajectory generation and control processes for highly articulated robots to prevent collisions against people, and conduct numerous experiments to validate this approach.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Robotics" ]
Title: Flexible Attributed Network Embedding, Abstract: Network embedding aims to encode a network by learning an embedding vector for each node in the network. The network often carries property information that is highly informative with respect to a node's position and role in the network. Most network embedding methods fail to utilize this information during network representation learning. In this paper, we propose a novel framework, FANE, to integrate structure and property information in the network embedding process. In FANE, we design a network to unify the heterogeneity of the two information sources, and define a new random walking strategy to leverage property information and make the two information sources complement each other. FANE is conceptually simple and empirically powerful. It improves over state-of-the-art methods by over 5% on the Cora classification task and by more than 10% on the WebKB classification task. Experiments also show that the improvement over state-of-the-art methods grows as the training size increases. Moreover, qualitative visualization shows that our framework is helpful for exploring network property information. In all, we present a new way of efficiently learning state-of-the-art task-independent representations in complex attributed networks. The source code and datasets of this paper can be obtained from this https URL.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Intermediate curvatures and highly connected manifolds, Abstract: We show that after forming a connected sum with a homotopy sphere, all (2j-1)-connected 2j-parallelisable manifolds in dimension 4j+1, j > 0, can be equipped with Riemannian metrics of 2-positive Ricci curvature. When j=1 we extend the above to certain classes of simply-connected non-spin 5-manifolds. The condition of 2-positive Ricci curvature is defined to mean that the sum of the two smallest eigenvalues of the Ricci tensor is positive at every point. This result is a counterpart to a previous result of the authors concerning the existence of positive Ricci curvature on highly connected manifolds in dimensions 4j-1 for j > 1, and in dimensions 4j+1 for j > 0 with torsion-free cohomology.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Iteratively reweighted $\ell_1$ algorithms with extrapolation, Abstract: The iteratively reweighted $\ell_1$ algorithm is a popular method for solving a large class of optimization problems whose objective is the sum of a Lipschitz differentiable loss function and a possibly nonconvex sparsity-inducing regularizer. In this paper, motivated by the success of extrapolation techniques in accelerating first-order methods, we study how widely used extrapolation techniques such as those in [4,5,22,28] can be incorporated to possibly accelerate the iteratively reweighted $\ell_1$ algorithm. We consider three versions of such algorithms. For each version, we exhibit an explicitly checkable condition on the extrapolation parameters so that the sequence generated provably clusters at a stationary point of the optimization problem. We also investigate global convergence under additional Kurdyka-Łojasiewicz assumptions on certain potential functions. Our numerical experiments show that our algorithms usually outperform the general iterative shrinkage and thresholding algorithm in [21] and an adaptation of the iteratively reweighted $\ell_1$ algorithm in [23, Algorithm 7] with nonmonotone line-search for solving random instances of log-penalty regularized least squares problems in terms of both CPU time and solution quality.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Automatic generation of analysis class diagrams from use case specifications, Abstract: In object oriented software development, the analysis modeling is concerned with the task of identifying problem level objects along with the relationships between them from software requirements. The software requirements are usually written in some natural language, and the analysis modeling is normally performed by experienced human analysts. The huge gap between the software requirements which are unstructured texts and analysis models which are usually structured UML diagrams, along with human slip-ups inevitably makes the transformation process error prone. The automation of this process can help in reducing the errors in the transformation. In this paper we propose a tool supported approach for automated transformation of use case specifications documented in English language into analysis class diagrams. The approach works in four steps. It first takes the textual specification of a use case as input, and then using a natural language parser generates type dependencies and parts of speech tags for each sentence in the specification. Then, it identifies the sentence structure of each sentence using a set of comprehensive sentence structure rules. Next, it applies a set of transformation rules on the type dependencies and parts of speech tags of the sentences to discover the problem level objects and the relationships between them. Finally, it generates and visualizes the analysis class diagram. We conducted a controlled experiment to compare the correctness, completeness and redundancy of the analysis class diagrams generated by our approach with those generated by the existing automated approaches. The results showed that the analysis class diagrams generated by our approach were more correct, more complete, and less redundant than those generated by the other approaches.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Virtual Crystals and Nakajima Monomials, Abstract: An explicit description of the virtualization map for the (modified) Nakajima monomial model for crystals is given. We give an explicit description of the Lusztig data for modified Nakajima monomials in type $A_n$.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: An Analysis of the Value of Information when Exploring Stochastic, Discrete Multi-Armed Bandits, Abstract: In this paper, we propose an information-theoretic exploration strategy for stochastic, discrete multi-armed bandits that achieves optimal regret. Our strategy is based on the value of information criterion. This criterion measures the trade-off between policy information and obtainable rewards. High amounts of policy information are associated with exploration-dominant searches of the space and yield high rewards. Low amounts of policy information favor the exploitation of existing knowledge. Information, in this criterion, is quantified by a parameter that can be varied during search. We demonstrate that a simulated-annealing-like update of this parameter, with a sufficiently fast cooling schedule, leads to an optimal regret that is logarithmic with respect to the number of episodes.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics", "Mathematics" ]
Title: Raman Scattering by a Two-Dimensional Fermi Liquid with Spin-Orbit Coupling, Abstract: We present a microscopic theory of Raman scattering by a two-dimensional Fermi liquid (FL) with Rashba and Dresselhaus types of spin-orbit coupling, and subject to an in-plane magnetic field (B). In the long-wavelength limit, the Raman spectrum probes the collective modes of such a FL: the chiral spin waves. The characteristic features of these modes are a linear-in-q term in the dispersion and the dependence of the mode frequency on the directions of both q and B. All of these features have been observed in recent Raman experiments on CdTe quantum wells.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data, Abstract: One of the defining properties of deep learning is that models are chosen to have many more parameters than available training data. In light of this capacity for overfitting, it is remarkable that simple algorithms like SGD reliably return solutions with low test error. One roadblock to explaining these phenomena in terms of implicit regularization, structural properties of the solution, and/or easiness of the data is that many learning bounds are quantitatively vacuous when applied to networks learned by SGD in this "deep learning" regime. Logically, in order to explain generalization, we need nonvacuous bounds. We return to an idea by Langford and Caruana (2001), who used PAC-Bayes bounds to compute nonvacuous numerical bounds on generalization error for stochastic two-layer two-hidden-unit neural networks via a sensitivity analysis. By optimizing the PAC-Bayes bound directly, we are able to extend their approach and obtain nonvacuous generalization bounds for deep stochastic neural network classifiers with millions of parameters trained on only tens of thousands of examples. We connect our findings to recent and old work on flat minima and MDL-based explanations of generalization.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics", "Mathematics" ]
Title: Uhlenbeck's decomposition in Sobolev and Morrey-Sobolev spaces, Abstract: We present a self-contained proof of Uhlenbeck's decomposition theorem for $\Omega\in L^p(\mathbb{B}^n,so(m)\otimes\Lambda^1\mathbb{R}^n)$ for $p\in (1,n)$ with Sobolev type estimates in the case $p \in[n/2,n)$ and Morrey-Sobolev type estimates in the case $p\in (1,n/2)$. We also prove an analogous theorem in the case when $\Omega\in L^p( \mathbb{B}^n, TCO_{+}(m) \otimes \Lambda^1\mathbb{R}^n)$, which corresponds to Uhlenbeck's theorem with conformal gauge group.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Bose-Hubbard lattice as a controllable environment for open quantum systems, Abstract: We investigate the open dynamics of an atomic impurity embedded in a one-dimensional Bose-Hubbard lattice. We derive the reduced evolution equation for the impurity and show that the Bose-Hubbard lattice behaves as a tunable engineered environment, making it possible to simulate both Markovian and non-Markovian dynamics in a controlled and experimentally realisable way. We demonstrate that the presence or absence of memory effects is a signature of the nature of the excitations induced by the impurity, which are delocalized or localized in the two limiting cases of the superfluid and the Mott insulator, respectively. Furthermore, our findings show how the excitations supported in the two phases can be characterized as information carriers.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: ClipAudit: A Simple Risk-Limiting Post-Election Audit, Abstract: We propose a simple risk-limiting audit for elections, ClipAudit. To determine whether candidate A (the reported winner) actually beat candidate B in a plurality election, ClipAudit draws ballots at random, without replacement, until either all cast ballots have been drawn, or until \[ a - b \ge \beta \sqrt{a+b} \] where $a$ is the number of ballots in the sample for the reported winner A, and $b$ is the number of ballots in the sample for opponent B, and where $\beta$ is a constant determined a priori as a function of the number $n$ of ballots cast and the risk-limit $\alpha$. ClipAudit doesn't depend on the unofficial margin (as does Bravo). We show how to extend ClipAudit to contests with multiple winners or losers, or to multiple contests.
[ 1, 0, 0, 1, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: A Dichotomy for Sampling Barrier-Crossing Events of Random Walks with Regularly Varying Tails, Abstract: We study how to sample paths of a random walk up to the first time it crosses a fixed barrier, in the setting where the step sizes are iid with negative mean and have a regularly varying right tail. We introduce a desirable property for a change of measure to be suitable for exact simulation. We study whether the change of measure of Blanchet and Glynn (2008) satisfies this property and show that it does so if and only if the tail index $\alpha$ of the right tail lies in the interval $(1, \, 3/2)$.
[ 0, 0, 1, 1, 0, 0 ]
[ "Mathematics", "Statistics" ]
Title: A Hierarchical Bayesian Linear Regression Model with Local Features for Stochastic Dynamics Approximation, Abstract: One of the challenges in model-based control of stochastic dynamical systems is that the state transition dynamics are involved, and it is not easy or efficient to make good-quality predictions of the states. Moreover, there are not many representational models for the majority of autonomous systems, as it is not easy to build a compact model that captures the entire dynamical subtleties and uncertainties. In this work, we present a hierarchical Bayesian linear regression model with local features to learn the dynamics of a micro-robotic system as well as two simpler examples, consisting of a stochastic mass-spring damper and a stochastic double inverted pendulum on a cart. The model is hierarchical since we assume non-stationary priors for the model parameters. These non-stationary priors make the model more flexible by imposing priors on the priors of the model. To solve the maximum likelihood (ML) problem for this hierarchical model, we use the variational expectation maximization (EM) algorithm, and enhance the procedure by introducing hidden target variables. The algorithm yields parsimonious model structures, and consistently provides fast and accurate predictions for all our examples involving large training and test sets. This demonstrates the effectiveness of the method in learning stochastic dynamics, which makes it suitable for future use in a paradigm, such as model-based reinforcement learning, to compute optimal control policies in real time.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: A generalization of the Hasse-Witt matrix of a hypersurface, Abstract: The Hasse-Witt matrix of a hypersurface in ${\mathbb P}^n$ over a finite field of characteristic $p$ gives essentially complete mod $p$ information about the zeta function of the hypersurface. But if the degree $d$ of the hypersurface is $\leq n$, the zeta function is trivial mod $p$ and the Hasse-Witt matrix is zero-by-zero. We generalize a classical formula for the Hasse-Witt matrix to obtain a matrix that gives a nontrivial congruence for the zeta function for all $d$. We also describe the differential equations satisfied by this matrix and prove that it is generically invertible.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Few-Shot Learning with Graph Neural Networks, Abstract: We propose to study the problem of few-shot learning with the prism of inference on a partially observed graphical model, constructed from a collection of input images whose label can be either observed or not. By assimilating generic message-passing inference algorithms with their neural-network counterparts, we define a graph neural network architecture that generalizes several of the recently proposed few-shot learning models. Besides providing improved numerical performance, our framework is easily extended to variants of few-shot learning, such as semi-supervised or active learning, demonstrating the ability of graph-based models to operate well on 'relational' tasks.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Prospects of detecting HI using redshifted 21 cm radiation at z ~ 3, Abstract: Distribution of cold gas in the post-reionization era provides an important link between distribution of galaxies and the process of star formation. Redshifted 21 cm radiation from the Hyperfine transition of neutral Hydrogen allows us to probe the neutral component of cold gas, most of which is to be found in the interstellar medium of galaxies. Existing and upcoming radio telescopes can probe the large scale distribution of neutral Hydrogen via HI intensity mapping. In this paper we use an estimate of the HI power spectrum derived using an ansatz to compute the expected signal from the large scale HI distribution at z ~ 3. We find that the scale dependence of bias at small scales makes a significant difference to the expected signal even at large angular scales. We compare the predicted signal strength with the sensitivity of radio telescopes that can observe such radiation and calculate the observation time required for detecting neutral Hydrogen at these redshifts. We find that OWFA (Ooty Wide Field Array) offers the best possibility to detect neutral Hydrogen at z ~ 3 before the SKA (Square Kilometer Array) becomes operational. We find that the OWFA should be able to make a 3 sigma or a more significant detection in 2000 hours of observations at several angular scales. Calculations done using the Fisher matrix approach indicate that a 5 sigma detection of the binned HI power spectrum via measurement of the amplitude of the HI power spectrum is possible in 1000 hours (Sarkar, Bharadwaj and Ali, 2017).
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Centralities of Nodes and Influences of Layers in Large Multiplex Networks, Abstract: We formulate and propose an algorithm (MultiRank) for the ranking of nodes and layers in large multiplex networks. MultiRank takes into account the full multiplex network structure of the data and exploits the dual nature of the network in terms of nodes and layers. The proposed centrality of the layers (influences) and the centrality of the nodes are determined by a coupled set of equations. The basic idea consists in assigning more centrality to nodes that receive links from highly influential layers and from already central nodes. The layers are more influential if highly central nodes are active in them. The algorithm applies to directed/undirected as well as to weighted/unweighted multiplex networks. We discuss the application of MultiRank to three major examples of multiplex network datasets: the European Air Transportation Multiplex Network, the Pierre Auger Multiplex Collaboration Network and the FAO Multiplex Trade Network.
[ 1, 1, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Twofold triple systems with cyclic 2-intersecting Gray codes, Abstract: Given a combinatorial design $\mathcal{D}$ with block set $\mathcal{B}$, the block-intersection graph (BIG) of $\mathcal{D}$ is the graph that has $\mathcal{B}$ as its vertex set, where two vertices $B_{1} \in \mathcal{B}$ and $B_{2} \in \mathcal{B} $ are adjacent if and only if $|B_{1} \cap B_{2}| > 0$. The $i$-block-intersection graph ($i$-BIG) of $\mathcal{D}$ is the graph that has $\mathcal{B}$ as its vertex set, where two vertices $B_{1} \in \mathcal{B}$ and $B_{2} \in \mathcal{B}$ are adjacent if and only if $|B_{1} \cap B_{2}| = i$. In this paper several constructions are obtained that start with twofold triple systems (TTSs) with Hamiltonian $2$-BIGs and result in larger TTSs that also have Hamiltonian $2$-BIGs. These constructions collectively enable us to determine the complete spectrum of TTSs with Hamiltonian $2$-BIGs (equivalently TTSs with cyclic $2$-intersecting Gray codes) as well as the complete spectrum for TTSs with $2$-BIGs that have Hamilton paths (i.e., for TTSs with $2$-intersecting Gray codes). In order to prove these spectrum results, we sometimes require ingredient TTSs that have large partial parallel classes; we prove lower bounds on the sizes of partial parallel classes in arbitrary TTSs, and then construct larger TTSs with both cyclic $2$-intersecting Gray codes and parallel classes.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Consequences of Unhappiness While Developing Software, Abstract: The growing literature on affect among software developers mostly reports on the linkage between happiness, software quality, and developer productivity. Understanding the positive side of happiness -- positive emotions and moods -- is an attractive and important endeavor. Scholars in industrial and organizational psychology have suggested that also studying the negative side -- unhappiness -- could lead to cost-effective ways of enhancing working conditions, job performance, and to limiting the occurrence of psychological disorders. Our comprehension of the consequences of (un)happiness among developers is still too shallow, and is mainly expressed in terms of development productivity and software quality. In this paper, we attempt to uncover the experienced consequences of unhappiness among software developers. Using qualitative data analysis of the responses given by 181 questionnaire participants, we identified 49 consequences of unhappiness while doing software development. We found detrimental consequences on developers' mental well-being, the software development process, and the produced artifacts. Our classification scheme, available as open data, will spawn new happiness research opportunities of cause-effect type, and it can act as a guideline for practitioners for identifying damaging effects of unhappiness and for fostering happiness on the job.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Accelerated Dual Learning by Homotopic Initialization, Abstract: Gradient descent and coordinate descent are well understood in terms of their asymptotic behavior, but less so in a transient regime often used for approximations in machine learning. We investigate how proper initialization can have a profound effect on finding near-optimal solutions quickly. We show that a certain property of a data set, namely the boundedness of the correlations between eigenfeatures and the response variable, can lead to faster initial progress than expected by commonplace analysis. Convex optimization problems can tacitly benefit from that, but this automatism does not apply to their dual formulation. We analyze this phenomenon and devise provably good initialization strategies for dual optimization as well as heuristics for the non-convex case, relevant for deep learning. We find our predictions and methods to be experimentally well-supported.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Inverse Reinforcement Learning from Summary Data, Abstract: Inverse reinforcement learning (IRL) aims to explain observed strategic behavior by fitting reinforcement learning models to behavioral data. However, traditional IRL methods are only applicable when the observations are in the form of state-action paths. This assumption may not hold in many real-world modeling settings, where only partial or summarized observations are available. In general, we may assume that there is a summarizing function $\sigma$, which acts as a filter between us and the true state-action paths that constitute the demonstration. Some initial approaches to extending IRL to such situations have been presented, but with very specific assumptions about the structure of $\sigma$, such as that only certain state observations are missing. This paper instead focuses on the most general case of the problem, where no assumptions are made about the summarizing function, except that it can be evaluated. We demonstrate that inference is still possible. The paper presents exact and approximate inference algorithms that allow full posterior inference, which is particularly important for assessing parameter uncertainty in this challenging inference situation. Empirical scalability is demonstrated to reasonably sized problems, and practical applicability is demonstrated by estimating the posterior for a cognitive science RL model based on an observed user's task completion time only.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: A micrometer-thick oxide film with high thermoelectric performance at temperatures ranging from 20-400 K, Abstract: Thermoelectric (TE) materials achieve localised conversion between thermal and electric energies, and the conversion efficiency is determined by a figure of merit zT. To date, two-dimensional electron gas (2DEG) related TE materials hold the records for zT near room temperature. A sharp increase in zT up to ~2.0 was observed previously for superlattice materials such as PbSeTe, Bi2Te3/Sb2Te3 and SrNb0.2Ti0.8O3/SrTiO3, when the thicknesses of these TE materials were spatially confined within the sub-nanometre scale. The two-dimensional confinement of carriers enlarges the density of states near the Fermi energy and triggers electron-phonon coupling. This overcomes the conventional $\sigma$-$S$ trade-off, allowing $S$ to be improved more independently and thereby further increasing the thermoelectric power factor (PF = $S^2\sigma$). Nevertheless, practical applications of the present 2DEG materials for high-power energy conversion are impeded by the prerequisite of spatial confinement, as the amount of TE material is insufficient. Here, we report TE properties similar to those of 2DEGs, but achieved in SrNb0.2Ti0.8O3 films with thicknesses on the sub-micrometre scale, by regulating interfacial and lattice polarizations. A high power factor (up to $10^3~\mu$W cm$^{-1}$K$^{-2}$) and zT value (up to 1.6) were observed for the film materials near room temperature and below. Even reckoning in the thickness of the substrate, an integrated power factor of both film and substrate approaching $10^2~\mu$W cm$^{-1}$K$^{-2}$ was achieved in a 2 $\mu$m-thick SrNb0.2Ti0.8O3 film grown on a 100 $\mu$m-thick SrTiO3 substrate. The dependence of high TE performance on size confinement is thus reduced by a factor of ~$10^3$ compared to conventional 2DEG-related TE materials. The as-grown oxide films are less toxic and do not depend on large amounts of heavy elements, potentially paving the way towards applications in localised refrigeration and electric power generation.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Maximizing acquisition functions for Bayesian optimization, Abstract: Bayesian optimization is a sample-efficient approach to global optimization that relies on theoretically motivated value heuristics (acquisition functions) to guide its search process. Fully maximizing acquisition functions produces the Bayes' decision rule, but this ideal is difficult to achieve since these functions are frequently non-trivial to optimize. This statement is especially true when evaluating queries in parallel, where acquisition functions are routinely non-convex, high-dimensional, and intractable. We first show that acquisition functions estimated via Monte Carlo integration are consistently amenable to gradient-based optimization. Subsequently, we identify a common family of acquisition functions, including EI and UCB, whose properties not only facilitate but justify use of greedy approaches for their maximization.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics", "Statistics" ]
Title: A Regularized Framework for Sparse and Structured Neural Attention, Abstract: Modern neural networks are often augmented with an attention mechanism, which tells the network where to focus within the input. We propose in this paper a new framework for sparse and structured attention, building upon a smoothed max operator. We show that the gradient of this operator defines a mapping from real values to probabilities, suitable as an attention mechanism. Our framework includes softmax and a slight generalization of the recently-proposed sparsemax as special cases. However, we also show how our framework can incorporate modern structured penalties, resulting in more interpretable attention mechanisms, that focus on entire segments or groups of an input. We derive efficient algorithms to compute the forward and backward passes of our attention mechanisms, enabling their use in a neural network trained with backpropagation. To showcase their potential as a drop-in replacement for existing ones, we evaluate our attention mechanisms on three large-scale tasks: textual entailment, machine translation, and sentence summarization. Our attention mechanisms improve interpretability without sacrificing performance; notably, on textual entailment and summarization, we outperform the standard attention mechanisms based on softmax and sparsemax.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Intrinsically Motivated Goal Exploration Processes with Automatic Curriculum Learning, Abstract: Intrinsically motivated spontaneous exploration is a key enabler of autonomous lifelong learning in human children. It allows them to discover and acquire large repertoires of skills through self-generation, self-selection, self-ordering and self-experimentation of learning goals. We present the unsupervised multi-goal reinforcement learning formal framework as well as an algorithmic approach called intrinsically motivated goal exploration processes (IMGEP) to enable similar properties of autonomous learning in machines. The IMGEP algorithmic architecture relies on several principles: 1) self-generation of goals as parameterized reinforcement learning problems; 2) selection of goals based on intrinsic rewards; 3) exploration with parameterized time-bounded policies and fast incremental goal-parameterized policy search; 4) systematic reuse of information acquired when targeting a goal for improving other goals. We present a particularly efficient form of IMGEP that uses a modular representation of goal spaces as well as intrinsic rewards based on learning progress. We show how IMGEPs automatically generate a learning curriculum within an experimental setup where a real humanoid robot can explore multiple spaces of goals with several hundred continuous dimensions. While no particular target goal is provided to the system beforehand, this curriculum allows the discovery of skills of increasing complexity, which act as stepping stones for learning more complex skills (like nested tool use). We show that learning several spaces of diverse problems can be more efficient for learning complex skills than only trying to directly learn these complex skills. We illustrate the computational efficiency of IMGEPs as these robotic experiments use simple memory-based low-level policy representations and a simple search algorithm, enabling the whole system to learn online and incrementally on a Raspberry Pi 3.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: On C-class equations, Abstract: The concept of a C-class of differential equations goes back to E. Cartan with the upshot that generic equations in a C-class can be solved without integration. While Cartan's definition was in terms of differential invariants being first integrals, all results exhibiting C-classes that we are aware of are based on the fact that a canonical Cartan geometry associated to the equations in the class descends to the space of solutions. For sufficiently low orders, these geometries belong to the class of parabolic geometries and the results follow from the general characterization of geometries descending to a twistor space. In this article we answer the question of whether a canonical Cartan geometry descends to the space of solutions in the case of scalar ODEs of order at least four and of systems of ODEs of order at least three. As in the lower order cases, this is characterized by the vanishing of the generalized Wilczynski invariants, which are defined via the linearization at a solution. The canonical Cartan geometries (which are not parabolic geometries) are a slight variation of those available in the literature based on a recent general construction. All the verifications needed to apply this construction for the classes of ODEs we study are carried out in the article, which thus also provides a complete alternative proof for the existence of canonical Cartan connections associated to higher order (systems of) ODEs.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: NGC 3105: A Young Cluster in the Outer Galaxy, Abstract: Images and spectra of the open cluster NGC 3105 have been obtained with GMOS on Gemini South. The (i', g'-i') color-magnitude diagram (CMD) constructed from these data extends from the brightest cluster members to g'~23. This is 4 - 5 mag fainter than previous CMDs at visible wavelengths and samples cluster members with sub-solar masses. Assuming a half-solar metallicity, comparisons with isochrones yield a distance of 6.6+/-0.3 kpc. An age of at least 32 Myr is found based on the photometric properties of the brightest stars, coupled with the apparent absence of pre-main sequence stars in the lower regions of the CMD. The luminosity function of stars between 50 and 70 arcsec from the cluster center is consistent with a Chabrier lognormal mass function. However, at radii smaller than 50 arcsec there is a higher specific frequency of the most massive main sequence stars than at larger radii. Photometry obtained from archival SPITZER images reveals that some of the brightest stars near NGC 3105 have excess infrared emission, presumably from warm dust envelopes. Halpha emission is detected in a few early-type stars in and around the cluster, building upon previous spectroscopic observations that found Be stars near NGC 3105. The equivalent width of the NaD lines in the spectra of early type stars is consistent with the reddening found from comparisons with isochrones. Stars with i'~18.5 that fall near the cluster main sequence have spectral type A5V, and a distance modulus that is consistent with that obtained by comparing isochrones with the CMD is found assuming solar neighborhood intrinsic brightnesses for these stars.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Supervised learning with quantum enhanced feature spaces, Abstract: Machine learning and quantum computing are two technologies each with the potential for altering how computation is performed to address previously untenable problems. Kernel methods for machine learning are ubiquitous for pattern recognition, with support vector machines (SVMs) being the most well-known method for classification problems. However, there are limitations to the successful solution of such problems when the feature space becomes large and the kernel functions become computationally expensive to estimate. A core element of the computational speed-ups afforded by quantum algorithms is the exploitation of an exponentially large quantum state space through controllable entanglement and interference. Here, we propose and experimentally implement two novel methods on a superconducting processor. Both methods represent the feature space of a classification problem by a quantum state, taking advantage of the large dimensionality of quantum Hilbert space to obtain an enhanced solution. One method, the quantum variational classifier, builds on [1,2] and operates by using a variational quantum circuit to classify a training set in direct analogy to conventional SVMs. In the second, a quantum kernel estimator, we estimate the kernel function and optimize the classifier directly. The two methods present a new class of tools for exploring the applications of noisy intermediate scale quantum computers [3] to machine learning.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Physics" ]
Title: On The Limitation of Some Fully Observable Multiple Session Resilient Shoulder Surfing Defense Mechanisms, Abstract: Using a password based authentication technique, a system maintains the login credentials (username, password) of its users in a password file. Once the password file is compromised, an adversary obtains both the login credentials. With the advancement of technology, even if a password is maintained in hashed format, the adversary can still invert the hashed password to get the original one. To mitigate this threat, most systems nowadays store some system generated fake passwords (also known as honeywords) along with the original password of a user. This type of setup confuses an adversary while selecting the original password. If the adversary chooses any of these honeywords and submits it as a login credential, then the system detects the attack. A large amount of significant work has been done on designing methodologies (identified as $\text{M}^{\text{DS}}_{\text{OA}}$) that can protect a password against observation or shoulder surfing attack. Under this attack scenario, an adversary observes (or records) the login information entered by a user and later uses those credentials to impersonate the genuine user. In this paper, we show that because of their design principle, a large subset of $\text{M}^{\text{DS}}_{\text{OA}}$ (identified as $\text{M}^{\text{FODS}}_{\text{SOA}}$) cannot afford to store honeywords in the password file. Thus these methods, belonging to $\text{M}^{\text{FODS}}_{\text{SOA}}$, are unable to provide any kind of security once the password file gets compromised. Through our contribution in this paper, by still using the concept of honeywords, we propose a few generic principles to mask the original password of $\text{M}^{\text{FODS}}_{\text{SOA}}$ category methods. We also consider a few well-established methods like S3PAS, CHC, PAS and COP belonging to $\text{M}^{\text{FODS}}_{\text{SOA}}$, to show that the proposed idea is implementable in practice.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Security" ]
Title: Algebraic characterization of regular fractions under level permutations, Abstract: In this paper we study the behavior of the fractions of a factorial design under permutations of the factor levels. We focus on the notion of a regular fraction and we introduce methods to check whether a given symmetric orthogonal array can or cannot be transformed into a regular fraction by means of suitable permutations of the factor levels. The proposed techniques take advantage of the complex coding of the factor levels and of some tools from polynomial algebra. Several examples are described, mainly involving factors with five levels.
[ 0, 0, 1, 1, 0, 0 ]
[ "Mathematics", "Statistics" ]
Title: Towards a More Reliable Privacy-preserving Recommender System, Abstract: This paper proposes a privacy-preserving distributed recommendation framework, Secure Distributed Collaborative Filtering (SDCF), to preserve the privacy of value, model and existence altogether. That is, not only the ratings from the users to the items, but also the existence of the ratings as well as the learned recommendation model are kept private in our framework. Our solution relies on a distributed client-server architecture and a two-stage Randomized Response algorithm, along with an implementation on the popular recommendation model, Matrix Factorization (MF). We further prove that SDCF meets the guarantee of Differential Privacy, so that clients are allowed to specify arbitrary privacy levels. Experiments conducted on numerical rating prediction and one-class rating action prediction show that SDCF does not sacrifice too much accuracy for privacy.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Using of heterogeneous corpora for training of an ASR system, Abstract: The paper summarizes the development of the LVCSR system built as a part of the Pashto speech-translation system at the SCALE (Summer Camp for Applied Language Exploration) 2015 workshop on "Speech-to-text-translation for low-resource languages". The Pashto language was chosen as a good "proxy" low-resource language, exhibiting multiple phenomena which make the development of speech-recognition and speech-to-text-translation systems hard. Even when the amount of data is seemingly sufficient, given the fact that the data originates from multiple sources, the preliminary experiments reveal that there is little to no benefit in merging (concatenating) the corpora and more elaborate ways of making use of all of the data must be worked out. This paper concentrates only on the LVCSR part and presents a range of different techniques that were found to be useful in order to benefit from multiple different corpora.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Global existence in the 1D quasilinear parabolic-elliptic chemotaxis system with critical nonlinearity, Abstract: The paper should be viewed as a complement to an earlier result in [8]. In the paper just mentioned it is shown that the 1d case of a quasilinear parabolic-elliptic Keller-Segel system is very special. Namely, unlike in higher dimensions, there is no critical nonlinearity. Indeed, for the nonlinear diffusion of the form 1/u all the solutions, independently of the magnitude of the initial mass, stay bounded. However, the argument presented in [8] deals with the Jager-Luckhaus type system and is very sensitive to this restriction. Namely, the change of variables introduced in [8], being a main step of the method, works only for the Jager-Luckhaus modification. It does not seem to be applicable in the usual version of the parabolic-elliptic Keller-Segel system. The present paper fills this gap and deals with the case of the usual parabolic-elliptic version. To handle it we establish a new Lyapunov-like functional (related to what was done in [8]), which leads to global existence of the initial-boundary value problem for any initial mass.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Node Centralities and Classification Performance for Characterizing Node Embedding Algorithms, Abstract: Embedding graph nodes into a vector space can allow the use of machine learning to e.g. predict node classes, but the study of node embedding algorithms is immature compared to the natural language processing field because of the diverse nature of graphs. We examine the performance of node embedding algorithms with respect to graph centrality measures that characterize diverse graphs, through systematic experiments with four node embedding algorithms, four or five graph centralities, and six datasets. Experimental results give insights into the properties of node embedding algorithms, which can be a basis for further research on this topic.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Multirole Logic (Extended Abstract), Abstract: We identify multirole logic as a new form of logic in which conjunction/disjunction is interpreted as an ultrafilter on the power set of some underlying set (of roles) and the notion of negation is generalized to endomorphisms on this underlying set. We formalize both multirole logic (MRL) and linear multirole logic (LMRL) as natural generalizations of classical logic (CL) and classical linear logic (CLL), respectively, and also present a filter-based interpretation for intuitionism in multirole logic. Among various meta-properties established for MRL and LMRL, we obtain one named multiparty cut-elimination stating that every cut involving one or more sequents (as a generalization of a (binary) cut involving exactly two sequents) can be eliminated, thus extending the celebrated result of cut-elimination by Gentzen.
[ 1, 0, 1, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: ISM properties of a Massive Dusty Star-Forming Galaxy discovered at z ~ 7, Abstract: We report the discovery and constrain the physical conditions of the interstellar medium of the highest-redshift millimeter-selected dusty star-forming galaxy (DSFG) to date, SPT-S J031132-5823.4 (hereafter SPT0311-58), at $z=6.900 \pm 0.002$. SPT0311-58 was discovered via its 1.4mm thermal dust continuum emission in the South Pole Telescope (SPT)-SZ survey. The spectroscopic redshift was determined through an ALMA 3mm frequency scan that detected CO(6-5), CO(7-6) and [CI](2-1), and subsequently confirmed by detections of CO(3-2) with ATCA and [CII] with APEX. We constrain the properties of the ISM in SPT0311-58 with a radiative transfer analysis of the dust continuum photometry and the CO and [CI] line emission. This allows us to determine the gas content without ad hoc assumptions about gas mass scaling factors. SPT0311-58 is extremely massive, with an intrinsic gas mass of $M_{\rm gas} = 3.3 \pm 1.9 \times10^{11}\,M_{\odot}$. Its large mass and intense star formation are very rare for a source well into the Epoch of Reionization.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Multi-dueling Bandits with Dependent Arms, Abstract: The dueling bandits problem is an online learning framework for learning from pairwise preference feedback, and is particularly well-suited for modeling settings that elicit subjective or implicit human feedback. In this paper, we study the problem of multi-dueling bandits with dependent arms, which extends the original dueling bandits setting by simultaneously dueling multiple arms as well as modeling dependencies between arms. These extensions capture key characteristics found in many real-world applications, and allow for the opportunity to develop significantly more efficient algorithms than were possible in the original setting. We propose the SelfSparring algorithm, which reduces the multi-dueling bandits problem to a conventional bandit setting that can be solved using a stochastic bandit algorithm such as Thompson Sampling, and can naturally model dependencies using a Gaussian process prior. We present a no-regret analysis for the multi-dueling setting, and demonstrate the effectiveness of our algorithm empirically on a wide range of simulation settings.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Observable dictionary learning for high-dimensional statistical inference, Abstract: This paper introduces a method for efficiently inferring a high-dimensional distributed quantity from a few observations. The quantity of interest (QoI) is approximated in a basis (dictionary) learned from a training set. The coefficients associated with the approximation of the QoI in the basis are determined by minimizing the misfit with the observations. To obtain a probabilistic estimate of the quantity of interest, a Bayesian approach is employed. The QoI is treated as a random field endowed with a hierarchical prior distribution so that closed-form expressions can be obtained for the posterior distribution. The main contribution of the present work lies in the derivation of \emph{a representation basis consistent with the observation chain} used to infer the associated coefficients. The resulting dictionary is then tailored to be both observable by the sensors and accurate in approximating the posterior mean. An algorithm for deriving such an observable dictionary is presented. The method is illustrated with the estimation of the velocity field of an open cavity flow from a handful of wall-mounted point sensors. Comparison with standard estimation approaches relying on Principal Component Analysis and K-SVD dictionaries is provided and illustrates the superior performance of the present approach.
[ 0, 0, 0, 1, 0, 0 ]
[ "Statistics", "Mathematics", "Computer Science" ]
Title: A Finite-Tame-Wild Trichotomy Theorem for Tensor Diagrams, Abstract: In this paper, we consider the problem of determining when two tensor networks are equivalent under a heterogeneous change of basis. In particular, to a string diagram in a certain monoidal category (which we call tensor diagrams), we formulate an associated abelian category of representations. Each representation corresponds to a tensor network on that diagram. We then classify which tensor diagrams give rise to categories that are finite, tame, or wild in the traditional sense of representation theory. For those tensor diagrams of finite and tame type, we classify the indecomposable representations. Our main result is that a tensor diagram is wild if and only if it contains a vertex of degree at least three. Otherwise, it is of tame or finite type.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Computer Science" ]
Title: RCD: Rapid Close to Deadline Scheduling for Datacenter Networks, Abstract: Datacenter-based Cloud Computing services provide a flexible, scalable and yet economical infrastructure to host online services such as multimedia streaming, email and bulk storage. Many such services perform geo-replication to provide necessary quality of service and reliability to users, resulting in frequent large inter-datacenter transfers. In order to meet tenant service level agreements (SLAs), these transfers have to be completed prior to a deadline. In addition, WAN resources are quite scarce and costly, meaning they should be fully utilized. Several recently proposed schemes, such as B4, TEMPUS, and SWAN, have focused on improving the utilization of inter-datacenter transfers through centralized scheduling; however, they fail to provide a mechanism to guarantee that admitted requests meet their deadlines. Also, in a recent study, the authors propose Amoeba, a system that allows tenants to define deadlines and guarantees that the specified deadlines are met; however, to admit new traffic, the proposed system has to modify the allocation of already admitted transfers. In this paper, we propose Rapid Close to Deadline Scheduling (RCD), a close to deadline traffic allocation technique that is fast and efficient. Through simulations, we show that RCD is up to 15 times faster than Amoeba, provides high link utilization along with deadline guarantees, and is able to make quick decisions on whether a new request can be fully satisfied before its deadline.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Transfer entropy between communities in complex networks, Abstract: With the help of transfer entropy, we analyze information flows between communities of complex networks. We show that the transfer entropy provides a coherent description of interactions between communities, including non-linear interactions. To put some flesh on the bare bones, we analyze transfer entropies between communities of the five largest financial markets, represented as networks of interacting stocks. Additionally, we discuss information transfer of rare events, which is analyzed by the Rényi transfer entropy.
[ 0, 1, 0, 0, 0, 0 ]
[ "Computer Science", "Quantitative Finance" ]
Title: On the spectral geometry of manifolds with conic singularities, Abstract: In the previous article we derived a detailed asymptotic expansion of the heat trace for the Laplace-Beltrami operator on functions on manifolds with conic singularities. In this article we investigate how the terms in the expansion reflect the geometry of the manifold. Since the general expansion contains a logarithmic term, its vanishing is a necessary condition for smoothness of the manifold. In the two-dimensional case this implies that the constant term of the expansion contains a non-local term that determines the length of the (circular) cross section and vanishes precisely if this length equals $2\pi$, that is, in the smooth case. We proceed to the study of higher dimensions. In the four-dimensional case, the logarithmic term in the expansion vanishes precisely when the cross section is a spherical space form, and we expect that the vanishing of a further singular term will imply again smoothness, but this is not yet clear beyond the case of cyclic space forms. In higher dimensions the situation is naturally more difficult. We illustrate this in the case of cross sections with constant curvature. Then the logarithmic term becomes a polynomial in the curvature with roots that are different from 1, which necessitates more vanishing of other terms, not isolated so far.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Clustering with t-SNE, provably, Abstract: t-distributed Stochastic Neighborhood Embedding (t-SNE), a clustering and visualization method proposed by van der Maaten & Hinton in 2008, has rapidly become a standard tool in a number of natural sciences. Despite its overwhelming success, there is a distinct lack of mathematical foundations and the inner workings of the algorithm are not well understood. The purpose of this paper is to prove that t-SNE is able to recover well-separated clusters; more precisely, we prove that t-SNE in the `early exaggeration' phase, an optimization technique proposed by van der Maaten & Hinton (2008) and van der Maaten (2014), can be rigorously analyzed. As a byproduct, the proof suggests novel ways for setting the exaggeration parameter $\alpha$ and step size $h$. Numerical examples illustrate the effectiveness of these rules: in particular, the quality of embedding of topological structures (e.g. the swiss roll) improves. We also discuss a connection to spectral clustering methods.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics", "Statistics" ]
Title: Noise Flooding for Detecting Audio Adversarial Examples Against Automatic Speech Recognition, Abstract: Neural models enjoy widespread use across a variety of tasks and have grown to become crucial components of many industrial systems. Despite their effectiveness and extensive popularity, they are not without their exploitable flaws. Initially applied to computer vision systems, the generation of adversarial examples is a process in which seemingly imperceptible perturbations are made to an image, with the purpose of inducing a deep learning based classifier to misclassify the image. Due to recent trends in speech processing, this has become a noticeable issue in speech recognition models. In late 2017, an attack was shown to be quite effective against the Speech Commands classification model. Limited-vocabulary speech classifiers, such as the Speech Commands model, are used quite frequently in a variety of applications, particularly in managing automated attendants in telephony contexts. As such, adversarial examples produced by this attack could have real-world consequences. While previous work in defending against these adversarial examples has investigated using audio preprocessing to reduce or distort adversarial noise, this work explores the idea of flooding particular frequency bands of an audio signal with random noise in order to detect adversarial examples. This technique of flooding, which does not require retraining or modifying the model, is inspired by work done in computer vision and builds on the idea that speech classifiers are relatively robust to natural noise. A combined defense incorporating 5 different frequency bands for flooding the signal with noise outperformed other existing defenses in the audio space, detecting adversarial examples with 91.8% precision and 93.5% recall.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Reliability study of proportional odds family of discrete distributions, Abstract: The proportional odds model gives a method of generating a new family of distributions by adding a parameter, called the tilt parameter, to expand an existing family of distributions. The new family of distributions so obtained is known as the Marshall-Olkin family of distributions or Marshall-Olkin extended distributions. In this paper, we consider the Marshall-Olkin family of distributions in the discrete case with a fixed tilt parameter. We study different ageing properties, as well as different stochastic orderings of this family of distributions. All the results of this paper are supported by several examples.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: Feeding vs. Falling: The growth and collapse of molecular clouds in a turbulent interstellar medium, Abstract: In order to understand the origin of observed molecular cloud properties, it is critical to understand how clouds interact with their environments during their formation, growth, and collapse. It has been suggested that accretion-driven turbulence can maintain clouds in a highly turbulent state, preventing runaway collapse, and explaining the observed non-thermal velocity dispersions. We present 3D, AMR, MHD simulations of a kiloparsec-scale, stratified, supernova-driven, self-gravitating, interstellar medium, including diffuse heating and radiative cooling. These simulations model the formation and evolution of a molecular cloud population in the turbulent interstellar medium. We use zoom-in techniques to focus on the dynamics of the mass accretion and its history for individual molecular clouds. We find that mass accretion onto molecular clouds proceeds as a combination of turbulent and near free-fall accretion of a gravitationally bound envelope. Nearby supernova explosions have a dual role, compressing the envelope and boosting the accreted mass, but also disrupting parts of the envelope and eroding mass from the cloud's surface. It appears that the inflow rate of kinetic energy onto clouds from supernova explosions is insufficient to explain the net rate of change of the cloud kinetic energy. In the absence of self-consistent star formation, conversion of gravitational potential into kinetic energy during contraction seems to be the main driver of non-thermal motions within clouds. We conclude that although clouds interact strongly with their environments, bound clouds are always in a state of gravitational contraction, close to runaway, and their properties are a natural result of this collapse.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Multi-Scale Pipeline for the Search of String-Induced CMB Anisotropies, Abstract: We propose a multi-scale edge-detection algorithm to search for the Gott-Kaiser-Stebbins imprints of a cosmic string (CS) network on the Cosmic Microwave Background (CMB) anisotropies. Curvelet decomposition and an extended Canny algorithm are used to enhance the string detectability. Various statistical tools are then applied to quantify the deviation of CMB maps having a cosmic string contribution with respect to pure Gaussian anisotropies of inflationary origin. These statistical measures include the one-point probability density function, the weighted two-point correlation function (TPCF) of the anisotropies, the unweighted TPCF of the peaks and of the up-crossing map, as well as their cross-correlation. We use this algorithm on a hundred simulated Nambu-Goto CMB flat sky maps, covering approximately $10\%$ of the sky, and for different string tensions $G\mu$. On noiseless sky maps with an angular resolution of $0.9'$, we show that our pipeline detects CSs with $G\mu$ as low as $G\mu\gtrsim 4.3\times 10^{-10}$. At the same resolution, but with a noise level typical of a CMB-S4 phase II experiment, the detection threshold would rise to $G\mu\gtrsim 1.2 \times 10^{-7}$.
[ 0, 1, 0, 1, 0, 0 ]
[ "Physics", "Statistics" ]
Title: A simultaneous generalization of the theorems of Chevalley-Warning and Morlaye, Abstract: Inspired by recent work of I. Baoulina, we give a simultaneous generalization of the theorems of Chevalley-Warning and Morlaye.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Turning Internet of Things(IoT) into Internet of Vulnerabilities (IoV) : IoT Botnets, Abstract: Internet of Things (IoT) is the next big evolutionary step in the world of internet. The main intention behind the IoT is to enable safer living and risk mitigation on different levels of life. With the advent of IoT botnets, the view towards IoT devices has changed from an enabler of enhanced living into an Internet of vulnerabilities for cyber criminals. IoT botnets have exposed two glaring issues: 1) A large number of IoT devices are accessible over the public Internet. 2) Security (if considered at all) is often an afterthought in the architecture of many widespread IoT devices. In this article, we briefly outline the anatomy of the IoT botnets and their basic mode of operation. Some of the major DDoS incidents using IoT botnets in recent times along with the corresponding exploited vulnerabilities will be discussed. We also provide remedies and recommendations to mitigate IoT related cyber risks and briefly illustrate the importance of cyber insurance in the modern connected world.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Cwikel estimates revisited, Abstract: In this paper, we propose a new approach to Cwikel estimates both for the Euclidean space and for the noncommutative Euclidean space.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Smooth Pinball Neural Network for Probabilistic Forecasting of Wind Power, Abstract: Uncertainty analysis in the form of probabilistic forecasting can significantly improve decision making processes in the smart power grid for better integrating renewable energy sources such as wind. Whereas point forecasting provides a single expected value, probabilistic forecasts provide more information in the form of quantiles, prediction intervals, or full predictive densities. This paper analyzes the effectiveness of a novel approach for nonparametric probabilistic forecasting of wind power that combines a smooth approximation of the pinball loss function with a neural network architecture and a weighting initialization scheme to prevent the quantile crossover problem. A numerical case study is conducted using publicly available wind data from the Global Energy Forecasting Competition 2014. Multiple quantiles are estimated to form 10% to 90% prediction intervals, which are evaluated using a quantile score and reliability measures. Benchmark models such as the persistence and climatology distributions, multiple quantile regression, and support vector quantile regression are used for comparison, where results demonstrate that the proposed approach leads to improved performance while preventing the problem of overlapping quantile estimates.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Zero-temperature magnetic response of small fullerene molecules at the classical and full quantum limit, Abstract: The ground-state magnetic response of fullerene molecules with up to 36 vertices is calculated, when spins, classical or with magnitude $s=\frac{1}{2}$, are located on their vertices and interact according to the nearest-neighbor antiferromagnetic Heisenberg model. The frustrated topology, which originates in the pentagons of the fullerenes and is enhanced by their close proximity, leads to a significant number of classical magnetization and susceptibility discontinuities, something not expected for a model lacking magnetic anisotropy. This establishes the classical discontinuities as a generic feature of fullerene molecules irrespective of their symmetry. The largest numbers of discontinuities are found for the molecule with 26 sites, which has four of the magnetization and two of the susceptibility, and for an isomer with 34 sites, which has three of each. In addition, for several of the fullerenes the classical zero-field lowest energy configuration has finite magnetization, which is unexpected for antiferromagnetic interactions between an even number of spins and with each spin having the same number of nearest-neighbors. The molecules come in different symmetries and topologies and there are only a few patterns of magnetic behavior that can be detected from such a small sample of relatively small fullerenes. Contrary to the classical case, in the full quantum limit $s=\frac{1}{2}$ there are no discontinuities for a subset of the molecules that was considered. This leaves the icosahedral symmetry fullerenes as the only ones known to support ground-state magnetization discontinuities for $s=\frac{1}{2}$. It is also found that a molecule with 34 sites has a doubly-degenerate ground state when $s=\frac{1}{2}$.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Stochastic Chemical Reaction Networks for Robustly Approximating Arbitrary Probability Distributions, Abstract: We show that discrete distributions on the $d$-dimensional non-negative integer lattice can be approximated arbitrarily well via the marginals of stationary distributions for various classes of stochastic chemical reaction networks. We begin by providing a class of detailed balanced networks and prove that they can approximate any discrete distribution to any desired accuracy. However, these detailed balanced constructions rely on the ability to initialize a system precisely, and are therefore susceptible to perturbations in the initial conditions. We therefore provide another construction based on the ability to approximate point mass distributions and prove that this construction is capable of approximating arbitrary discrete distributions for any choice of initial condition. In particular, the developed models are ergodic, so their limit distributions are robust to a finite number of perturbations over time in the counts of molecules.
[ 0, 0, 0, 0, 1, 0 ]
[ "Mathematics", "Statistics", "Quantitative Biology" ]
Title: A unifying framework for the modelling and analysis of STR DNA samples arising in forensic casework, Abstract: This paper presents a new framework for analysing forensic DNA samples using probabilistic genotyping. Specifically, it presents a mathematical framework for specifying and combining the steps in producing forensic casework electropherograms of short tandem repeat loci from DNA samples. It is applicable to both high and low template DNA samples, that is, samples containing either high or low amounts of DNA. A specific model is developed within the framework, by way of particular modelling assumptions and approximations, and its interpretive power is presented on examples using simulated data and data from a publicly available dataset. The framework relies heavily on the use of univariate and multivariate probability generating functions. It is shown that these provide a succinct and elegant mathematical scaffolding to model the key steps in the process. A significant development in this paper is that of new numerical methods for accurately and efficiently evaluating the probability distribution of amplicons arising from the polymerase chain reaction process, which is modelled as a discrete multi-type branching process. Source code in the scripting languages Python, R and Julia is provided for illustration of these methods. These new developments will be of general interest to persons working outside the province of forensic DNA interpretation on which this paper focuses.
[ 0, 0, 0, 1, 1, 0 ]
[ "Mathematics", "Statistics", "Quantitative Biology" ]
Title: On a Distributed Approach for Density-based Clustering, Abstract: Efficient extraction of useful knowledge from these data is still a challenge, mainly when the data is distributed, heterogeneous and of different quality depending on its corresponding local infrastructure. To reduce the overhead cost, most of the existing distributed clustering approaches generate global models by aggregating local results obtained on each individual node. The complexity and quality of solutions depend highly on the quality of the aggregation. In this respect, we propose an approach for distributed density-based clustering that both reduces the communication overheads due to the data exchange and improves the quality of the global models by considering the shapes of local clusters. From preliminary results we show that this algorithm is very promising.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Accretion of Planetary Material onto Host Stars, Abstract: Accretion of planetary material onto host stars may occur throughout a star's life. Especially prone to accretion, extrasolar planets in short-period orbits, while relatively rare, constitute a significant fraction of the known population, and these planets are subject to dynamical and atmospheric influences that can drive significant mass loss. Theoretical models frame expectations regarding the rates and extent of this planetary accretion. For instance, tidal interactions between planets and stars may drive complete orbital decay during the main sequence. Many planets that survive their stars' main sequence lifetime will still be engulfed when the host stars become red giant stars. There is some observational evidence supporting these predictions, such as a dearth of close-in planets around fast stellar rotators, which is consistent with tidal spin-up and planet accretion. There remains no clear chemical evidence for pollution of the atmospheres of main sequence or red giant stars by planetary materials, but a wealth of evidence points to active accretion by white dwarfs. In this article, we review the current understanding of accretion of planetary material, from the pre- to the post-main sequence and beyond. The review begins with the astrophysical framework for that process and then considers accretion during various phases of a host star's life, during which the details of accretion vary, and the observational evidence for accretion during these phases.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Quantifying Interpretability and Trust in Machine Learning Systems, Abstract: Decisions by Machine Learning (ML) models have become ubiquitous. Trusting these decisions requires understanding how algorithms take them. Hence, interpretability methods for ML are an active focus of research. A central problem in this context is that both the quality of interpretability methods as well as trust in ML predictions are difficult to measure. Yet evaluations, comparisons and improvements of trust and interpretability require quantifiable measures. Here we propose a quantitative measure for the quality of interpretability methods. Based on that, we derive a quantitative measure of trust in ML decisions. Building on previous work, we propose to measure intuitive understanding of algorithmic decisions using the information transfer rate at which humans replicate ML model predictions. We provide empirical evidence from crowdsourcing experiments that the proposed metric robustly differentiates interpretability methods. The proposed metric also demonstrates the value of interpretability for ML assisted human decision making: in our experiments providing explanations more than doubled productivity in annotation tasks. However, unbiased human judgement is critical for doctors, judges, policy makers and others. Here we derive a trust metric that identifies when human decisions are overly biased towards ML predictions. Our results complement existing qualitative work on trust and interpretability by quantifiable measures that can serve as objectives for further improving methods in this field of research.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Constraining accretion signatures of exoplanets in the TW Hya transitional disk, Abstract: We present a near-infrared direct imaging search for accretion signatures of possible protoplanets around the young stellar object (YSO) TW Hya, a multi-ring disk exhibiting evidence of planet formation. The Pa$\beta$ line (1.282 $\mu$m) is an indication of accretion onto a protoplanet, and its intensity is much higher than that of blackbody radiation from the protoplanet. We focused on the Pa$\beta$ line and performed Keck/OSIRIS spectroscopic observations. Although spectral differential imaging (SDI) reduction detected no accretion signatures, the results of the present study allowed us to set 5$\sigma$ detection limits for Pa$\beta$ emission of $5.8\times10^{-18}$ and $1.5\times10^{-18}$ erg/s/cm$^2$ at 0\farcs4 and 1\farcs6, respectively. We considered the mass of potential planets using theoretical simulations of circumplanetary disks and hydrogen emission. The resulting masses were $1.45\pm 0.04$ M$_{\rm J}$ and $2.29 ^{+0.03}_{-0.04}$ M$_{\rm J}$ at 25 and 95 AU, respectively, which agree with the detection limits obtained from previous broadband imaging. The detection limits should allow the identification of protoplanets as small as $\sim$1 M$_{\rm J}$, which may assist in direct imaging searches around faint YSOs for which extreme adaptive optics instruments are unavailable.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Obstructions to planarity of contact 3-manifolds, Abstract: We prove that if a contact 3-manifold admits an open book decomposition of genus 0, a certain intersection pattern cannot appear in the homology of any of its symplectic fillings, and moreover, fillings cannot contain certain symplectic surfaces. Applying these obstructions to canonical contact structures on links of normal surface singularities, we show that links of isolated singularities of surfaces in the complex 3-space are planar only in the case of $A_n$-singularities, and in general we completely characterize planar links of normal surface singularities (in terms of their resolution graphs). We also establish non-planarity of tight contact structures on certain small Seifert fibered L-spaces and of contact structures compatible with open books given by a boundary multi-twist on a page of positive genus. Additionally, we prove that every finitely presented group is the fundamental group of a Lefschetz fibration with planar fibers.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Hyperfine state entanglement of spinor BEC and scattering atom, Abstract: A condensate of spin-1 atoms frozen in a single spatial mode may possess large internal degrees of freedom. The scattering amplitudes of polarized cold atoms scattered by the condensate are obtained with the method of fractional parentage coefficients that treats the spin degrees of freedom rigorously. Channels with scattering cross sections enhanced by the square of the atom number of the condensate are found. Entanglement between the condensate and the propagating atom can be established by the scattering. The entanglement entropy is analytically obtained for arbitrary initial states. Our results also hint at the establishment of quantum thermal ensembles in the hyperfine space.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Quantitative Biology" ]
Title: Making Neural Programming Architectures Generalize via Recursion, Abstract: Empirically, neural networks that attempt to learn programs from data have exhibited poor generalizability. Moreover, it has traditionally been difficult to reason about the behavior of these models beyond a certain level of input complexity. In order to address these issues, we propose augmenting neural architectures with a key abstraction: recursion. As an application, we implement recursion in the Neural Programmer-Interpreter framework on four tasks: grade-school addition, bubble sort, topological sort, and quicksort. We demonstrate superior generalizability and interpretability with small amounts of training data. Recursion divides the problem into smaller pieces and drastically reduces the domain of each neural network component, making it tractable to prove guarantees about the overall system's behavior. Our experience suggests that in order for neural architectures to robustly learn program semantics, it is necessary to incorporate a concept like recursion.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Bagged Empirical Null p-values: A Method to Account for Model Uncertainty in Large Scale Inference, Abstract: When conducting large scale inference, such as genome-wide association studies or image analysis, nominal $p$-values are often adjusted to improve control over the family-wise error rate (FWER). When the majority of tests are null, procedures controlling the False discovery rate (Fdr) can be improved by replacing the theoretical global null with its empirical estimate. However, these other adjustment procedures remain sensitive to the working model assumption. Here we propose two key ideas to improve inference in this space. First, we propose $p$-values that are standardized to the empirical null distribution (instead of the theoretical null). Second, we propose model averaging $p$-values by bootstrap aggregation (Bagging) to account for model uncertainty and selection procedures. The combination of these two key ideas yields bagged empirical null $p$-values (BEN $p$-values) that often dramatically alter the rank ordering of significant findings. Moreover, we find that a multidimensional selection criterion based on BEN $p$-values and bagged model fit statistics is more likely to yield reproducible findings. A re-analysis of the famous Golub Leukemia data is presented to illustrate these ideas. We uncovered new findings in these data, not detected previously, that are backed by published bench work pre-dating the Golub experiment. A pseudo-simulation using the leukemia data is also presented to explore the stability of this approach under broader conditions, and illustrates the superiority of the BEN $p$-values compared to the other approaches.
[ 0, 0, 0, 1, 0, 0 ]
[ "Statistics", "Quantitative Biology" ]
Title: An independent axiomatisation for free short-circuit logic, Abstract: Short-circuit evaluation denotes the semantics of propositional connectives in which the second argument is evaluated only if the first argument does not suffice to determine the value of the expression. Free short-circuit logic is the equational logic in which compound statements are evaluated from left to right, while atomic evaluations are not memorised throughout the evaluation, i.e., evaluations of distinct occurrences of an atom in a compound statement may yield different truth values. We provide a simple semantics for free SCL and an independent axiomatisation. Finally, we discuss evaluation strategies, some other SCLs, and side effects.
[ 1, 0, 1, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Learning Effective Changes for Software Projects, Abstract: The primary motivation of much of software analytics is decision making. How to make these decisions? Should one make decisions based on lessons that arise from within a particular project? Or should one generate these decisions from across multiple projects? This work is an attempt to answer these questions. Our work was motivated by a realization that much of the current generation of software analytics tools focuses primarily on prediction. Indeed, prediction is a useful task, but it is usually followed by "planning" about what actions need to be taken. This research seeks to address the planning task by identifying methods that support actionable analytics that offer clear guidance on what to do. Specifically, we propose XTREE and BELLTREE algorithms for generating a set of actionable plans within and across projects. Each of these plans, if followed, will improve the quality of the software project.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Search for Exoplanets around Northern Circumpolar Stars- II. The Detection of Radial Velocity Variations in M Giant Stars HD 36384, HD 52030, and HD 208742, Abstract: We present the detection of long-period RV variations in HD 36384, HD 52030, and HD 208742 by using the high-resolution, fiber-fed Bohyunsan Observatory Echelle Spectrograph (BOES) for the precise radial velocity (RV) survey of about 200 northern circumpolar stars. Analyses of RV data, chromospheric activity indicators, and bisector variations spanning about five years suggest that the RV variations are compatible with planet or brown dwarf companions in Keplerian motion. However, HD 36384 shows photometric variations with a period very close to that of the RV variations as well as amplitude variations in the weighted wavelet Z-transform (WWZ) analysis, which argues that the RV variations in HD 36384 are from stellar pulsations. Assuming that the companion hypothesis is correct, HD 52030 hosts a companion with a minimum mass of 13.3 M_Jup orbiting in 484 days at a distance of 1.2 AU. HD 208742 hosts a companion of 14.0 M_Jup at 1.5 AU with a period of 602 days. All stars are located at the asymptotic giant branch (AGB) stage on the H-R diagram, after having undergone the helium flash and left the giant clump. With stellar radii of 53.0 R_Sun and 57.2 R_Sun for HD 52030 and HD 208742, respectively, these stars may be the largest yet, in terms of stellar radius, found to host sub-stellar companions. However, given possible RV amplitude variations and the fact that these are highly evolved stars, the planet hypothesis is not yet certain.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Term Models of Horn Clauses over Rational Pavelka Predicate Logic, Abstract: This paper is a contribution to the study of the universal Horn fragment of predicate fuzzy logics, focusing on the proof of the existence of free models of theories of Horn clauses over Rational Pavelka predicate logic. We define the notion of a term structure associated to every consistent theory T over Rational Pavelka predicate logic and we prove that the term models of T are free on the class of all models of T. Finally, it is shown that if T is a set of Horn clauses, the term structure associated to T is a model of T.
[ 0, 0, 1, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Coverage Centrality Maximization in Undirected Networks, Abstract: Centrality metrics are among the main tools in social network analysis. Being central for a user of a network leads to several benefits to the user: central users are highly influential and play key roles within the network. Therefore, the optimization problem of increasing the centrality of a network user recently received considerable attention. Given a network and a target user $v$, the centrality maximization problem consists in creating $k$ new links incident to $v$ in such a way that the centrality of $v$ is maximized, according to some centrality metric. Most of the algorithms proposed in the literature are based on showing that a given centrality metric is monotone and submodular with respect to link addition. However, this property does not hold for several shortest-path based centrality metrics if the links are undirected. In this paper we study the centrality maximization problem in undirected networks for one of the most important shortest-path based centrality measures, the coverage centrality. We provide several hardness and approximation results. We first show that the problem cannot be approximated within a factor greater than $1-1/e$, unless $P=NP$, and, under the stronger gap-ETH hypothesis, the problem cannot be approximated within a factor better than $1/n^{o(1)}$, where $n$ is the number of users. We then propose two greedy approximation algorithms, and show that, by suitably combining them, we can guarantee an approximation factor of $\Omega(1/\sqrt{n})$. We experimentally compare the solutions provided by our approximation algorithm with optimal solutions computed by means of an exact IP formulation. We show that our algorithm produces solutions that are very close to the optimum.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Neural State Classification for Hybrid Systems, Abstract: We introduce the State Classification Problem (SCP) for hybrid systems, and present Neural State Classification (NSC) as an efficient solution technique. SCP generalizes the model checking problem as it entails classifying each state $s$ of a hybrid automaton as either positive or negative, depending on whether or not $s$ satisfies a given time-bounded reachability specification. This is an interesting problem in its own right, which NSC solves using machine-learning techniques, Deep Neural Networks in particular. State classifiers produced by NSC tend to be very efficient (run in constant time and space), but may be subject to classification errors. To quantify and mitigate such errors, our approach comprises: i) techniques for certifying, with statistical guarantees, that an NSC classifier meets given accuracy levels; ii) tuning techniques, including a novel technique based on adversarial sampling, that can virtually eliminate false negatives (positive states classified as negative), thereby making the classifier more conservative. We have applied NSC to six nonlinear hybrid system benchmarks, achieving an accuracy of 99.25% to 99.98%, and a false-negative rate of 0.0033 to 0, which we further reduced to 0.0015 to 0 after tuning the classifier. We believe that this level of accuracy is acceptable in many practical applications, and that these results demonstrate the promise of the NSC approach.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics", "Statistics" ]
Title: Optimization of Tree Ensembles, Abstract: Tree ensemble models such as random forests and boosted trees are among the most widely used and practically successful predictive models in applied machine learning and business analytics. Although such models have been used to make predictions based on exogenous, uncontrollable independent variables, they are increasingly being used to make predictions where the independent variables are controllable and are also decision variables. In this paper, we study the problem of tree ensemble optimization: given a tree ensemble that predicts some dependent variable using controllable independent variables, how should we set these variables so as to maximize the predicted value? We formulate the problem as a mixed-integer optimization problem. We theoretically examine the strength of our formulation, provide a hierarchy of approximate formulations with bounds on approximation quality and exploit the structure of the problem to develop two large-scale solution methods, one based on Benders decomposition and one based on iteratively generating tree split constraints. We test our methodology on real data sets, including two case studies in drug design and customized pricing, and show that our methodology can efficiently solve large-scale instances to near or full optimality, and outperforms solutions obtained by heuristic approaches. In our drug design case, we show how our approach can identify compounds that efficiently trade-off predicted performance and novelty with respect to existing, known compounds. In our customized pricing case, we show how our approach can efficiently determine optimal store-level prices under a random forest model that delivers excellent predictive accuracy.
[ 1, 0, 1, 1, 0, 0 ]
[ "Computer Science", "Mathematics", "Statistics" ]
Title: Equations of state for real gases on the nuclear scale, Abstract: A formalism to augment the classical models of the equation of state for real gases with quantum statistical effects is presented. It allows an arbitrary excluded volume procedure to model repulsive interactions, and an arbitrary density-dependent mean field to model attractive interactions. Variations on the excluded volume mechanism include van der Waals (VDW) and Carnahan-Starling models, while the mean fields are based on VDW, Redlich-Kwong-Soave, Peng-Robinson, and Clausius equations of state. The VDW parameters of the nucleon-nucleon interaction are fitted in each model to the properties of the ground state of nuclear matter, and the following range of values is obtained: $a = 330 - 430$ MeV fm$^3$ and $b = 2.5 - 4.4$ fm$^3$. In the context of the excluded-volume approach, the fits to the nuclear ground state disfavor values of the effective hard-core radius of a nucleon significantly smaller than $0.5$ fm, at least for the nuclear matter region of the phase diagram. Modifications to the standard VDW repulsion and attraction terms allow for a significant improvement in the value of the nuclear incompressibility factor $K_0$, bringing it closer to empirical estimates. The generalization to include the baryon-baryon interactions into the hadron resonance gas model is performed. The behavior of the baryon-related lattice QCD observables at zero chemical potential is shown to be strongly correlated to the nuclear matter properties: an improved description of the nuclear incompressibility also yields an improved description of the lattice data at $\mu = 0$.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: 3D Simulation of Electron and Ion Transmission of GEM-based Detectors, Abstract: The Time Projection Chamber (TPC) has been chosen as the main tracking system in several high-flux and high repetition rate experiments. These include ongoing experiments such as ALICE and future experiments such as PANDA at FAIR and the ILC. Different $\mathrm{R}\&\mathrm{D}$ activities were carried out on the adoption of the Gas Electron Multiplier (GEM) as the gas amplification stage of the ALICE TPC upgrade. The requirement of low ion feedback has been established through these activities. Low ion feedback minimizes distortions due to space charge and maintains the necessary values of detector gain and energy resolution. In the present work, the Garfield simulation framework has been used to study the related physical processes occurring within single, triple and quadruple GEM detectors. Ion backflow and electron transmission of quadruple GEMs made up of foils with different hole pitch, under different electromagnetic field configurations (the projected solutions for the ALICE TPC), have been studied. Finally, a new triple GEM detector configuration with a low ion backflow fraction and good electron transmission properties has been proposed as a simpler GEM-based alternative suitable for TPCs at future collider experiments.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Computer Science" ]
Title: Witt and Cohomological Invariants of Witt Classes, Abstract: We classify all invariants of the functor $I^n$ (powers of the fundamental ideal of the Witt ring) with values in $A$, that is to say functions $I^n(K)\rightarrow A(K)$ compatible with field extensions, in the cases where $A(K)=W(K)$ is the Witt ring and $A(K)=H^*(K,\mu_2)$ is mod 2 Galois cohomology. This is done in terms of certain invariants $f_n^d$ that behave like divided powers with respect to sums of Pfister forms, and we show that any invariant of $I^n$ can be written uniquely as a (possibly infinite) combination of these $f_n^d$. This in particular allows one to lift operations defined on mod 2 Milnor K-theory (or equivalently mod 2 Galois cohomology) to the level of $I^n$. We also study various properties of these invariants, including behaviour under products, similitudes, residues for discrete valuations, and restriction from $I^n$ to $I^{n+1}$. The goal is to use this to study invariants of algebras with involutions in future articles.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Diagonal Rescaling For Neural Networks, Abstract: We define a second-order neural network stochastic gradient training algorithm whose block-diagonal structure effectively amounts to normalizing the unit activations. Investigating why this algorithm lacks robustness then reveals two interesting insights. The first insight suggests a new way to scale the stepsizes, clarifying popular algorithms such as RMSProp as well as old neural network tricks such as fan-in stepsize scaling. The second insight stresses the practical importance of dealing with fast changes of the curvature of the cost.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: W-algebras associated to surfaces, Abstract: We define an integral form of the deformed W-algebra of type $\mathfrak{gl}_r$, and construct its action on the K-theory groups of moduli spaces of rank $r$ stable sheaves on a smooth projective surface $S$, under certain assumptions. Our construction generalizes the action studied by Nakajima, Grojnowski and Baranovsky in cohomology, although the appearance of deformed W-algebras by generators and relations is a new feature. Physically, this action encodes the AGT correspondence for 5d supersymmetric gauge theory on $S \times S^1$.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Klein-Gordonization: mapping superintegrable quantum mechanics to resonant spacetimes, Abstract: We describe a procedure naturally associating relativistic Klein-Gordon equations in static curved spacetimes to non-relativistic quantum motion on curved spaces in the presence of a potential. Our procedure is particularly attractive in application to (typically, superintegrable) problems whose energy spectrum is given by a quadratic function of the energy level number, since for such systems the spacetimes one obtains possess evenly spaced, resonant spectra of frequencies for scalar fields of a certain mass. This construction emerges as a generalization of the previously studied correspondence between the Higgs oscillator and Anti-de Sitter spacetime, which has been useful both for understanding weakly nonlinear dynamics in Anti-de Sitter spacetime and for studying algebras of conserved quantities of the Higgs oscillator. Our conversion procedure ("Klein-Gordonization") reduces to a nonlinear elliptic equation closely reminiscent of the one emerging in relation to the celebrated Yamabe problem of differential geometry. As an illustration, we explicitly demonstrate how to apply this procedure to superintegrable Rosochatius systems, resulting in a large family of spacetimes with resonant spectra for massless wave equations.
[ 0, 1, 1, 0, 0, 0 ]
[ "Physics", "Mathematics" ]