text (string, 138–2.38k chars) | labels (sequence of 6) | Predictions (sequence of 1–3)
---|---|---|
Title: ServeNet: A Deep Neural Network for Web Service Classification,
Abstract: Automated service classification plays a crucial role in service management
such as service discovery, selection, and composition. In recent years, machine
learning techniques have been used for service classification. However, they
can only predict around 10 to 20 service categories due to the quality of
feature engineering and the imbalance problem of the service dataset. In this
paper, we present a deep neural network ServeNet with a novel dataset splitting
algorithm to deal with these issues. ServeNet can automatically abstract
low-level representations into high-level features, and then predict service
classification based on the service datasets produced by the proposed splitting
algorithm. To demonstrate the effectiveness of our approach, we conducted a
comprehensive experimental study on 10,000 real-world services in 50
categories. The result shows that ServeNet can achieve higher accuracy than
other machine learning methods. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science"
] |
Title: Using Synthetic Data to Train Neural Networks is Model-Based Reasoning,
Abstract: We draw a formal connection between using synthetic training data to optimize
neural network parameters and approximate, Bayesian, model-based reasoning. In
particular, training a neural network using synthetic data can be viewed as
learning a proposal distribution generator for approximate inference in the
synthetic-data generative model. We demonstrate this connection in a
recognition task where we develop a novel Captcha-breaking architecture and
train it using synthetic data, demonstrating both state-of-the-art performance
and a way of computing task-specific posterior uncertainty. Using a neural
network trained this way, we also demonstrate successful breaking of real-world
Captchas currently used by Facebook and Wikipedia. Reasoning from these
empirical results and drawing connections with Bayesian modeling, we discuss
the robustness of synthetic data results and suggest important considerations
for ensuring good neural network generalization when training with synthetic
data. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Coherent modulation up to 100 GBd 16QAM using silicon-organic hybrid (SOH) devices,
Abstract: We demonstrate the generation of higher-order modulation formats using
silicon-based inphase/quadrature (IQ) modulators at symbol rates of up to 100
GBd. Our devices exploit the advantages of silicon-organic hybrid (SOH)
integration, which combines silicon-on-insulator waveguides with highly
efficient organic electro-optic (EO) cladding materials to enable small drive
voltages and sub-millimeter device lengths. In our experiments, we use an SOH
IQ modulator with a {\pi}-voltage of 1.6 V to generate 100 GBd 16QAM signals.
This is the first time that the 100 GBd mark is reached with an IQ modulator
realized on a semiconductor substrate, leading to a single-polarization line
rate of 400 Gbit/s. The peak-to-peak drive voltages amount to 1.5 Vpp,
corresponding to an electrical energy dissipation in the modulator of only 25
fJ/bit. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Time-Resolved High Spectral Resolution Observation of 2MASSW J0746425+200032AB,
Abstract: Many brown dwarfs exhibit photometric variability at levels from tenths to
tens of percents. The photometric variability is related to magnetic activity
or patchy cloud coverage, characteristic of brown dwarfs near the L-T
transition. Time-resolved spectral monitoring of brown dwarfs provides
diagnostics of cloud distribution and condensate properties. However, current
time-resolved spectral studies of brown dwarfs are limited to low spectral
resolution (R$\sim$100) with the exception of the study of Luhman 16 AB at
a resolution of 100,000 using the VLT$+$CRIRES. This work yielded the first map
of brown dwarf surface inhomogeneity, highlighting the importance and unique
contribution of high spectral resolution observations. Here, we report on the
time-resolved high spectral resolution observations of a nearby brown dwarf
binary, 2MASSW J0746425+200032AB. We find no coherent spectral variability that
is modulated with rotation. Based on simulations we conclude that the coverage
of a single spot on 2MASSW J0746425+200032AB is smaller than 1\% or 6.25\% if
spot contrast is 50\% or 80\% of its surrounding flux, respectively. Future
high spectral resolution observations aided by adaptive optics systems can put
tighter constraints on the spectral variability of 2MASSW J0746425+200032AB and
other nearby brown dwarfs. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Macquarie University at BioASQ 5b -- Query-based Summarisation Techniques for Selecting the Ideal Answers,
Abstract: Macquarie University's contribution to the BioASQ challenge (Task 5b Phase B)
focused on the use of query-based extractive summarisation techniques for the
generation of the ideal answers. Four runs were submitted, with approaches
ranging from a trivial system that selected the first $n$ snippets, to the use
of deep learning approaches under a regression framework. Our experiments and
the ROUGE results of the five test batches of BioASQ indicate surprisingly good
results for the trivial approach. Overall, most of our runs on the first three
test batches achieved the best ROUGE-SU4 results in the challenge. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: One can hear the Euler characteristic of a simplicial complex,
Abstract: We prove that the number p of positive eigenvalues of the connection
Laplacian L of a finite abstract simplicial complex G matches the number b of
even dimensional simplices in G and that the number n of negative eigenvalues
matches the number f of odd-dimensional simplices in G. The Euler
characteristic X(G) of G therefore can be spectrally described as X(G)=p-n.
This is in contrast to the more classical Hodge Laplacian H which acts on the
same Hilbert space, where X(G) is not yet known to be accessible from the
spectrum of H. Given an ordering of G coming from a build-up as a CW complex,
every simplex x in G is now associated to a unique eigenvector of L and the
correspondence is computable. The Euler characteristic is now not only the
potential energy summing over all g(x,y) with g=L^{-1} but also agrees with a
logarithmic energy tr(log(i L)) 2/(i pi) of the spectrum of L. We also give
here examples of L-isospectral but non-isomorphic abstract finite simplicial
complexes. One example shows that we can not hear the cohomology of the
complex. | [
1,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
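The spectral description above is concrete enough to check numerically. The Python sketch below is purely illustrative, not the paper's code: it builds the connection matrix L(x, y) = 1 when simplices x and y intersect, and verifies that the signature p − n of its spectrum equals X(G). The `closure` helper and the triangle example are our own.

```python
import itertools
import numpy as np

def closure(facets):
    """All nonempty faces of the given facets, as sorted tuples."""
    faces = set()
    for f in facets:
        for k in range(1, len(f) + 1):
            faces.update(itertools.combinations(sorted(f), k))
    return sorted(faces, key=lambda s: (len(s), s))

def connection_matrix(G):
    """L(x, y) = 1 when simplices x and y intersect, else 0."""
    n = len(G)
    L = np.zeros((n, n))
    for i, x in enumerate(G):
        for j, y in enumerate(G):
            if set(x) & set(y):
                L[i, j] = 1.0
    return L

# Example: the full triangle on {1,2,3}; X = 3 - 3 + 1 = 1
G = closure([(1, 2, 3)])
L = connection_matrix(G)
eig = np.linalg.eigvalsh(L)
p = int(np.sum(eig > 0))       # should count even-dimensional simplices
n_neg = int(np.sum(eig < 0))   # should count odd-dimensional simplices
euler = sum((-1) ** (len(x) - 1) for x in G)
```

Since L is unimodular by the theory above, no eigenvalue is zero, so counting signs is numerically safe for small complexes.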
Title: Adaptation and Robust Learning of Probabilistic Movement Primitives,
Abstract: Probabilistic representations of movement primitives open important new
possibilities for machine learning in robotics. These representations are able
to capture the variability of the demonstrations from a teacher as a
probability distribution over trajectories, providing a sensible region of
exploration and the ability to adapt to changes in the robot environment.
However, to be able to capture variability and correlations between different
joints, a probabilistic movement primitive requires the estimation of a larger
number of parameters compared to their deterministic counterparts, which focus
on modeling only the mean behavior. In this paper, we make use of prior
distributions over the parameters of a probabilistic movement primitive to make
robust estimates of the parameters with few training instances. In addition, we
introduce general purpose operators to adapt movement primitives in joint and
task space. The proposed training method and adaptation operators are tested in
a coffee preparation task and in a robot table tennis task. In the coffee preparation
task we evaluate the generalization performance to changes in the location of
the coffee grinder and brewing chamber in a target area, achieving the desired
behavior after only two demonstrations. In the table tennis task we evaluate
the hit and return rates, outperforming previous approaches while using fewer
task specific heuristics. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
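As a rough illustration of the idea (not the authors' implementation), a probabilistic movement primitive can be sketched as a Gaussian distribution over the weights of a basis-function trajectory model. The shrinkage of the weight covariance toward a prior below is a simplified stand-in for the prior distributions described in the abstract; all function names, basis choices, and constants are our own assumptions.

```python
import numpy as np

def rbf_features(T, n_basis=10, width=0.05):
    """Normalized Gaussian basis functions over phase z in [0, 1]."""
    z = np.linspace(0, 1, T)[:, None]
    c = np.linspace(0, 1, n_basis)[None, :]
    phi = np.exp(-0.5 * (z - c) ** 2 / width)
    return phi / phi.sum(axis=1, keepdims=True)   # shape (T, n_basis)

def fit_promp(demos, n_basis=10, lam=1e-6, prior_cov=1.0, prior_strength=5.0):
    """MAP-style estimate of a ProMP weight distribution from few demos.
    The covariance is shrunk toward prior_cov * I, a simplified stand-in
    for the paper's prior distributions over parameters."""
    Phi = rbf_features(len(demos[0]), n_basis)
    # Per-demonstration ridge regression for the basis weights
    W = np.array([np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_basis),
                                  Phi.T @ d) for d in demos])
    mu = W.mean(axis=0)
    S = np.cov(W.T) if len(demos) > 1 else np.zeros((n_basis, n_basis))
    k = len(demos)
    cov = (k * S + prior_strength * prior_cov * np.eye(n_basis)) / (k + prior_strength)
    return mu, cov, Phi

# Two noisy demonstrations of the same reaching motion
rng = np.random.default_rng(1)
T = 100
target = np.sin(np.linspace(0, np.pi, T))
demos = [target + 0.01 * rng.standard_normal(T) for _ in range(2)]
mu, cov, Phi = fit_promp(demos)
mean_traj = Phi @ mu          # mean trajectory of the learned primitive
```

The prior keeps the weight covariance well-conditioned even with only two demonstrations, which is the point the abstract makes about robust estimation from few training instances.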
Title: EPIC 220204960: A Quadruple Star System Containing Two Strongly Interacting Eclipsing Binaries,
Abstract: We present a strongly interacting quadruple system associated with the K2
target EPIC 220204960. The K2 target itself is a Kp = 12.7 magnitude star at
Teff ~ 6100 K which we designate as "B-N" (blue northerly image). The host of
the quadruple system, however, is a Kp = 17 magnitude star with a composite
M-star spectrum, which we designate as "R-S" (red southerly image). With a 3.2"
separation and similar radial velocities and photometric distances, 'B-N' is
likely physically associated with 'R-S', making this a quintuple system, but
that is incidental to our main claim of a strongly interacting quadruple system
in 'R-S'. The two binaries in 'R-S' have orbital periods of 13.27 d and 14.41
d, respectively, and each has an inclination angle of >89 degrees. From our
analysis of radial velocity measurements, and of the photometric lightcurve, we
conclude that all four stars are very similar with masses close to 0.4 Msun.
Both of the binaries exhibit significant ETVs where those of the primary and
secondary eclipses 'diverge' by 0.05 days over the course of the 80-day
observations. Via a systematic set of numerical simulations of quadruple
systems consisting of two interacting binaries, we conclude that the outer
orbital period is very likely to be between 300 and 500 days. If sufficient
time is devoted to RV studies of this faint target, the outer orbit should be
measurable within a year. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Hall-Littlewood-PushTASEP and its KPZ limit,
Abstract: We study a new model of interacting particle systems which we call the
randomly activated cascading exclusion process (RACEP). Particles wake up
according to exponential clocks and then take a geometric number of steps. If
another particle is encountered during these steps, the first particle goes to
sleep at that location and the second is activated and proceeds accordingly. We
consider a totally asymmetric version of this model, which we refer to as
Hall-Littlewood-PushTASEP (HL-PushTASEP), on the $\mathbb{Z}_{\geq 0}$ lattice, where
particles only move right and where initially particles are distributed
according to a Bernoulli product measure on $\mathbb{Z}_{\geq 0}$. We prove
KPZ-class limit theorems for the height function fluctuations. Under a
particular weak scaling, we also prove convergence to the solution of the KPZ
equation. | [
0,
0,
1,
1,
0,
0
] | [
"Mathematics",
"Physics"
] |
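The verbal description of the dynamics can be turned into a short simulation. The sketch below is our reading of the rules, not the authors' construction: with rate-1 exponential clocks the next particle to wake is uniform among sleepers, jump lengths are geometric, and landing on an occupied site triggers the cascade (the walker sleeps there and the resident takes over).

```python
import numpy as np

def simulate_hl_pushtasep(n_sites=200, density=0.3, n_wakes=2000, p=0.5, seed=0):
    """Totally asymmetric RACEP (HL-PushTASEP) on {0, ..., n_sites-1}.
    A woken particle takes a geometric number of right steps; on reaching
    an occupied site it sleeps there and the resident continues with a
    fresh geometric jump (the cascade)."""
    rng = np.random.default_rng(seed)
    occ = rng.random(n_sites) < density          # Bernoulli initial condition
    for _ in range(n_wakes):
        sleepers = np.flatnonzero(occ)
        if sleepers.size == 0:
            break
        pos = int(rng.choice(sleepers))          # rate-1 clocks: uniform waker
        occ[pos] = False
        steps = int(rng.geometric(p))            # geometric jump, >= 1
        while steps > 0:
            pos += 1
            if pos >= n_sites:                   # active particle exits right
                pos = -1
                break
            if occ[pos]:                         # cascade: sleep here, the
                steps = int(rng.geometric(p))    # resident takes a fresh jump
            else:
                steps -= 1
        if pos >= 0:
            occ[pos] = True                      # active particle falls asleep
    return occ

final = simulate_hl_pushtasep()
```

On a finite window particles only leave to the right, so this is a crude stand-in for the half-line dynamics; it is meant only to make the wake/jump/cascade rule concrete.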
Title: Learning best K analogies from data distribution for case-based software effort estimation,
Abstract: Case-Based Reasoning (CBR) has been widely used to generate good software
effort estimates. The predictive performance of CBR is dataset dependent and
subject to extremely large space of configuration possibilities. Regardless of
the type of adaptation technique, deciding on the optimal number of similar
cases to be used before applying CBR is a key challenge. In this paper we
propose a new technique based on the bisecting k-medoids clustering algorithm to
better understand the structure of a dataset and discover the optimal
cases for each individual project by excluding irrelevant cases. The results
obtained show that understanding the data characteristics prior to the prediction
stage can help in automatically finding the best number of cases for each test
project. Performance figures of the proposed estimation method are better than
those of other regular K-based CBR methods. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Optimal boundary gradient estimates for Lamé systems with partially infinite coefficients,
Abstract: In this paper, we derive the pointwise upper bounds and lower bounds on the
gradients of solutions to the Lamé systems with partially infinite
coefficients as the surface of discontinuity of the coefficients of the system
is located very close to the boundary. When the distance tends to zero, the
optimal blow-up rates of the gradients are established for inclusions with
arbitrary shapes and in all dimensions. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Particle-based and Meshless Methods with Aboria,
Abstract: Aboria is a powerful and flexible C++ library for the implementation of
particle-based numerical methods. The particles in such methods can represent
actual particles (e.g. Molecular Dynamics) or abstract particles used to
discretise a continuous function over a domain (e.g. Radial Basis Functions).
Aboria provides a particle container, compatible with the Standard Template
Library, spatial search data structures, and a Domain Specific Language to
specify non-linear operators on the particle set. This paper gives an overview
of Aboria's design, an example of use, and a performance benchmark. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Physics",
"Mathematics"
] |
Title: Stochastic evolution equations for large portfolios of stochastic volatility models,
Abstract: We consider a large market model of defaultable assets in which the asset
price processes are modelled as Heston-type stochastic volatility models with
default upon hitting a lower boundary. We assume that both the asset prices and
their volatilities are correlated through systemic Brownian motions. We are
interested in the loss process that arises in this setting and we prove the
existence of a large portfolio limit for the empirical measure process of this
system. This limit evolves as a measure valued process and we show that it will
have a density given in terms of a solution to a stochastic partial
differential equation of filtering type in the two-dimensional half-space, with
a Dirichlet boundary condition. We employ Malliavin calculus to establish the
existence of a regular density for the volatility component, and an
approximation by models of piecewise constant volatilities combined with a
kernel smoothing technique to obtain existence and regularity for the full
two-dimensional filtering problem. We are able to establish good regularity
properties for solutions; however, uniqueness remains an open problem. | [
0,
0,
1,
0,
0,
0
] | [
"Quantitative Finance",
"Mathematics",
"Statistics"
] |
Title: Randomized Optimal Transport on a Graph: Framework and New Distance Measures,
Abstract: The recently developed bag-of-paths framework consists in setting a
Gibbs-Boltzmann distribution on all feasible paths of a graph. This probability
distribution favors short paths over long ones, with a free parameter (the
temperature $T > 0$) controlling the entropic level of the distribution. This
formalism enables the computation of new distances or dissimilarities,
interpolating between the shortest-path and the resistance distance, which have
been shown to perform well in clustering and classification tasks. In this
work, the bag-of-paths formalism is extended by adding two independent equality
constraints fixing the starting and ending node distributions of paths. When the
temperature is low, this formalism is shown to be equivalent to a relaxation of
the optimal transport problem on a network where paths carry a flow between two
discrete distributions on nodes. The randomization is achieved by considering
free energy minimization instead of traditional cost minimization. Algorithms
computing the optimal free energy solution are developed for two types of
paths: hitting (or absorbing) paths and non-hitting, regular paths, and require
the inversion of an $n \times n$ matrix with $n$ being the number of nodes.
Interestingly, for regular paths, the resulting optimal policy interpolates
between the deterministic optimal transport policy ($T \rightarrow 0^{+}$) and
the solution to the corresponding electrical circuit ($T \rightarrow \infty$).
Two distance measures between nodes and a dissimilarity between groups of
nodes, both integrating weights on nodes, are derived from this framework. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: A Generative Model for Natural Sounds Based on Latent Force Modelling,
Abstract: Recent advances in analysis of subband amplitude envelopes of natural sounds
have resulted in convincing synthesis, showing subband amplitudes to be a
crucial component of perception. Probabilistic latent variable analysis is
particularly revealing, but existing approaches don't incorporate prior
knowledge about the physical behaviour of amplitude envelopes, such as
exponential decay and feedback. We use latent force modelling, a probabilistic
learning paradigm that incorporates physical knowledge into Gaussian process
regression, to model correlation across spectral subband envelopes. We augment
the standard latent force model approach by explicitly modelling correlations
over multiple time steps. Incorporating this prior knowledge strengthens the
interpretation of the latent functions as the source that generated the signal.
We examine this interpretation via an experiment which shows that sounds
generated by sampling from our probabilistic model are perceived to be more
realistic than those generated by similar models based on nonnegative matrix
factorisation, even in cases where our model is outperformed from a
reconstruction error perspective. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: On the relation between representations and computability,
Abstract: One of the fundamental results in computability is the existence of
well-defined functions that cannot be computed. In this paper we study the
effects of data representation on computability; we show that, while for each
possible way of representing data there exist incomputable functions, the
computability of a specific abstract function is never an absolute property,
but depends on the representation used for the function domain. We examine the
scope of this dependency and provide mathematical criteria to favour some
representations over others. As we shall show, there are strong reasons to
suggest that computational enumerability should be an additional axiom for
computation models. We analyze the link between the techniques and effects of
representation changes and those of oracle machines, showing an important
connection between their hierarchies. Finally, these notions enable us to gain
a new insight on the Church-Turing thesis: its interpretation as the underlying
algebraic structure to which computation is invariant. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: No minimal tall Borel ideal in the Katětov order,
Abstract: Answering a question of the second listed author, we show that there is no
tall Borel ideal minimal among all tall Borel ideals in the Katětov order. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: $ΔN_{\text{eff}}$ and entropy production from early-decaying gravitinos,
Abstract: Gravitinos are a fundamental prediction of supergravity, their mass ($m_{G}$)
is informative of the value of the SUSY breaking scale, and, if produced during
reheating, their number density is a function of the reheating temperature
($T_{\text{rh}}$). As a result, constraining their parameter space provides in
turn significant constraints on particle physics and cosmology. We have
previously shown that for gravitinos decaying into photons or charged particles
during the ($\mu$ and $y$) distortion eras, upcoming CMB spectral distortions
bounds are highly effective in constraining the $T_{\text{rh}}-m_{G}$ space.
For heavier gravitinos (with lifetimes shorter than a few $\times10^6$ sec),
distortions are quickly thermalized and energy injections cause a temperature
rise for the CMB bath. If the decay occurs after neutrino decoupling, its
overall effect is a suppression of the effective number of relativistic degrees
of freedom ($N_{\text{eff}}$). In this paper, we utilize the observational
bounds on $N_{\text{eff}}$ to constrain gravitino decays, and hence provide new
constraints on gravitinos and reheating. For gravitino masses less than $\approx
10^5$ GeV, current observations give an upper limit on the reheating scale in
the range of $\approx 5 \times 10^{10}- 5 \times 10^{11}$GeV. For masses
greater than $\approx 4 \times 10^3$ GeV they are more stringent than previous
bounds from BBN constraints, coming from photodissociation of deuterium, by
almost 2 orders of magnitude. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: MobInsight: A Framework Using Semantic Neighborhood Features for Localized Interpretations of Urban Mobility,
Abstract: Collective urban mobility embodies the residents' local insights on the city.
Mobility practices of the residents are produced from their spatial choices,
which involve various considerations such as the atmosphere of destinations,
distance, past experiences, and preferences. The advances in mobile computing
and the rise of geo-social platforms have provided the means for capturing the
mobility practices; however, interpreting the residents' insights is
challenging due to the scale and complexity of an urban environment, and its
unique context. In this paper, we present MobInsight, a framework for making
localized interpretations of urban mobility that reflect various aspects of the
urbanism. MobInsight extracts a rich set of neighborhood features through
holistic semantic aggregation, and models the mobility between all-pairs of
neighborhoods. We evaluate MobInsight with the mobility data of Barcelona and
demonstrate diverse localized and semantically-rich interpretations. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Sitatapatra: Blocking the Transfer of Adversarial Samples,
Abstract: Convolutional Neural Networks (CNNs) are widely used to solve classification
tasks in computer vision. However, they can be tricked into misclassifying
specially crafted `adversarial' samples -- and samples built to trick one model
often work alarmingly well against other models trained on the same task. In
this paper we introduce Sitatapatra, a system designed to block the transfer of
adversarial samples. It diversifies neural networks using a key, as in
cryptography, and provides a mechanism for detecting attacks. What's more, when
adversarial samples are detected they can typically be traced back to the
individual device that was used to develop them. The run-time overheads are
minimal, permitting the use of Sitatapatra on constrained systems. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Weak-strong uniqueness in fluid dynamics,
Abstract: We give a survey of recent results on weak-strong uniqueness for compressible
and incompressible Euler and Navier-Stokes equations, and also make some new
observations. The importance of the weak-strong uniqueness principle stems, on
the one hand, from the instances of non-uniqueness for the Euler equations
exhibited in the past years; and on the other hand from the question of
convergence of singular limits, for which weak-strong uniqueness represents an
elegant tool. | [
0,
1,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: The first result on 76Ge neutrinoless double beta decay from CDEX-1 experiment,
Abstract: We report the first result on Ge-76 neutrinoless double beta decay from
CDEX-1 experiment at China Jinping Underground Laboratory. A mass of 994 g
p-type point-contact high purity germanium detector has been installed to
search for neutrinoless double beta decay events, as well as to directly detect
dark matter particles. An exposure of 304 kg*day has been analyzed. The
wideband spectrum from 500 keV to 3 MeV was obtained and the average event rate
at the 2.039 MeV energy range is about 0.012 count per keV per kg per day. The
half-life of Ge-76 neutrinoless double beta decay has been derived based on
this result as: T 1/2 > 6.4*10^22 yr (90% C.L.). An upper limit on the
effective Majorana-neutrino mass of 5.0 eV has been achieved. The possible
methods to further decrease the background level have been discussed and will
be pursued in the next stage of CDEX experiment. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Fully Optical Spacecraft Communications: Implementing an Omnidirectional PV-Cell Receiver and 8Mb/s LED Visible Light Downlink with Deep Learning Error Correction,
Abstract: Free space optical communication techniques have been the subject of numerous
investigations in recent years, with multiple missions expected to fly in the
near future. Existing methods require high pointing accuracies, drastically
driving up overall system cost. Recent developments in LED-based visible light
communication (VLC) and past in-orbit experiments have convinced us that the
technology has reached a critical level of maturity. On these premises, we
propose a new optical communication system utilizing a VLC downlink and a high
throughput, omnidirectional photovoltaic cell receiver system. By performing
error-correction via deep learning methods and by utilizing phase-delay
interference, the system is able to deliver data rates that match those of
traditional laser-based solutions. A prototype of the proposed system has been
constructed, demonstrating the scheme to be a feasible alternative to
laser-based methods. This creates an opportunity for the full scale development
of optical communication techniques on small spacecraft as a backup telemetry
beacon or as a high throughput link. | [
1,
0,
0,
0,
0,
0
] | [
"Physics",
"Computer Science"
] |
Title: Linear compartmental models: input-output equations and operations that preserve identifiability,
Abstract: This work focuses on the question of how identifiability of a mathematical
model, that is, whether parameters can be recovered from data, is related to
identifiability of its submodels. We look specifically at linear compartmental
models and investigate when identifiability is preserved after adding or
removing model components. In particular, we examine whether identifiability is
preserved when an input, output, edge, or leak is added or deleted. Our
approach, via differential algebra, is to analyze specific input-output
equations of a model and the Jacobian of the associated coefficient map. We
clarify a prior determinantal formula for these equations, and then use it to
prove that, under some hypotheses, a model's input-output equations can be
understood in terms of certain submodels we call "output-reachable". Our proofs
use algebraic and combinatorial techniques. | [
0,
0,
0,
0,
1,
0
] | [
"Mathematics",
"Statistics"
] |
Title: A Novel Approach to Forecasting Financial Volatility with Gaussian Process Envelopes,
Abstract: In this paper we use Gaussian Process (GP) regression to propose a novel
approach for predicting volatility of financial returns by forecasting the
envelopes of the time series. We provide a direct comparison of their
performance to traditional approaches such as GARCH. We compare the forecasting
power of three approaches: GP regression on the absolute and squared returns;
regression on the envelope of the returns and the absolute returns; and
regression on the envelope of the negative and positive returns separately. We
use a maximum a posteriori estimate with a Gaussian prior to determine our
hyperparameters. We also test the effect of hyperparameter updating at each
forecasting step. We use our approaches to forecast out-of-sample volatility of
four currency pairs over a 2 year period, at half-hourly intervals. From three
kernels, we select the kernel giving the best performance for our data. We use
two published accuracy measures and four statistical loss functions to evaluate
the forecasting ability of GARCH vs GPs. In terms of mean squared error, the GPs perform
20% better than a random walk model, and 50% better than GARCH for the same
data. | [
1,
0,
0,
1,
0,
0
] | [
"Quantitative Finance",
"Statistics"
] |
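For a flavour of the approach, here is a minimal sketch, not the paper's setup: we regress on absolute returns only, with a hand-rolled squared-exponential GP and made-up hyperparameters, rather than the envelope extraction, kernel selection, and MAP hyperparameter tuning described above.

```python
import numpy as np

def gp_predict(t_train, y_train, t_test, ls=20.0, sf=1.0, noise=0.1):
    """Plain GP regression with a squared-exponential kernel
    (hyperparameters are illustrative, not tuned)."""
    def k(a, b):
        return sf ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    K = k(t_train, t_train) + noise ** 2 * np.eye(len(t_train))
    Ks = k(t_test, t_train)
    alpha = np.linalg.solve(K, y_train - y_train.mean())
    mean = y_train.mean() + Ks @ alpha
    cov = k(t_test, t_test) - Ks @ np.linalg.solve(K, Ks.T)
    sd = np.sqrt(np.clip(np.diag(cov), 0.0, None) + noise ** 2)
    return mean, sd

# Synthetic returns with a slowly varying volatility path
rng = np.random.default_rng(0)
n = 200
vol = 0.5 + 0.4 * np.sin(np.linspace(0, 4 * np.pi, n))
returns = rng.normal(0.0, vol)
y = np.abs(returns)                 # absolute returns as a volatility proxy
t = np.arange(n, dtype=float)

mean, sd = gp_predict(t[:150], y[:150], t[150:])   # out-of-sample forecast
```

The predictive standard deviation `sd` is what makes the GP attractive against GARCH-type point forecasts: each volatility forecast comes with its own uncertainty band.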
Title: Multipermutation Ulam Sphere Analysis Toward Characterizing Maximal Code Size,
Abstract: Permutation codes, in the form of rank modulation, have shown promise for
applications such as flash memory. One of the metrics recently suggested as
appropriate for rank modulation is the Ulam metric, which measures the minimum
translocation distance between permutations. Multipermutation codes have also
been proposed as a generalization of permutation codes that would improve code
size (and consequently the code rate). In this paper we analyze the Ulam metric
in the context of multipermutations, noting some similarities to and differences
from the Ulam metric in the context of permutations. We also consider sphere
sizes for multipermutations under the Ulam metric and resulting bounds on code
size. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
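For ordinary permutations, the translocation (Ulam) distance mentioned above equals n minus the length of a longest increasing subsequence after relabelling one permutation in the other's coordinates. The sketch below is illustrative, not from the paper, and covers permutations rather than multipermutations; it computes the distance with patience sorting.

```python
from bisect import bisect_left

def ulam_distance(p, q):
    """Minimum number of translocations (delete one symbol and reinsert
    it elsewhere) turning permutation p into q. Equals n - LIS of q
    rewritten in p's coordinates, computed via patience sorting."""
    pos = {v: i for i, v in enumerate(p)}
    seq = [pos[v] for v in q]           # q in p's coordinate system
    piles = []                          # pile tops; length = LIS length
    for x in seq:
        i = bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(p) - len(piles)
```

For example, `[2, 1, 3, 4, 5]` is one translocation away from the identity, while the identity is at distance zero from itself.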
Title: CM3: Cooperative Multi-goal Multi-stage Multi-agent Reinforcement Learning,
Abstract: We propose CM3, a new deep reinforcement learning method for cooperative
multi-agent problems where agents must coordinate for joint success in
achieving different individual goals. We restructure multi-agent learning into
a two-stage curriculum, consisting of a single-agent stage for learning to
accomplish individual tasks, followed by a multi-agent stage for learning to
cooperate in the presence of other agents. These two stages are bridged by
modular augmentation of neural network policy and value functions. We further
adapt the actor-critic framework to this curriculum by formulating local and
global views of the policy gradient and learning via a double critic,
consisting of a decentralized value function and a centralized action-value
function. We evaluated CM3 on a new high-dimensional multi-agent environment
with sparse rewards: negotiating lane changes among multiple autonomous
vehicles in the Simulation of Urban Mobility (SUMO) traffic simulator. Detailed
ablation experiments show the positive contribution of each component in CM3,
and the overall synthesis converges significantly faster to higher performance
policies than existing cooperative multi-agent methods. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science"
] |
Title: Adaptive twisting sliding mode control for quadrotor unmanned aerial vehicles,
Abstract: This work addresses the problem of robust attitude control of quadcopters.
First, the mathematical model of the quadcopter is derived considering factors
such as nonlinearity, external disturbances, uncertain dynamics and strong
coupling. An adaptive twisting sliding mode control algorithm is then developed
with the objective of controlling the quadcopter to track desired attitudes
under various conditions. For this, the twisting sliding mode control law is
modified with a proposed gain adaptation scheme to improve the control
transient and tracking performance. Extensive simulation studies and
comparisons with experimental data have been carried out for a Solo quadcopter.
The results show that the proposed control scheme can achieve strong robustness
against disturbances while being adaptable to parametric variations. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Predicting regional and pan-Arctic sea ice anomalies with kernel analog forecasting,
Abstract: Predicting Arctic sea ice extent is a notoriously difficult forecasting
problem, even for lead times as short as one month. Motivated by Arctic
intraannual variability phenomena such as reemergence of sea surface
temperature and sea ice anomalies, we use a prediction approach for sea ice
anomalies based on analog forecasting. Traditional analog forecasting relies on
identifying a single analog in a historical record, usually by minimizing
Euclidean distance, and forming a forecast from the analog's historical
trajectory. Here, in kernel analog forecasting (KAF), an ensemble of analogs is
used to make forecasts, where the ensemble weights are determined by a
dynamics-adapted similarity kernel, which takes into account the nonlinear
geometry of the underlying data manifold. We
apply this method for forecasting pan-Arctic and regional sea ice area and
volume anomalies from multi-century climate model data, and in many cases find
improvement over the benchmark damped persistence forecast. Examples of success
include the 3--6 month lead time prediction of pan-Arctic area, the winter sea
ice area prediction of some marginal ice zone seas, and the 3--12 month lead
time prediction of sea ice volume anomalies in many central Arctic basins. We
discuss possible connections between the success of kernel analog forecasting (KAF) and sea ice reemergence, and
find KAF to be successful in regions and seasons exhibiting high interannual
variability. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Statistics"
] |
Title: Large-sample approximations for variance-covariance matrices of high-dimensional time series,
Abstract: Distributional approximations of (bi--) linear functions of sample
variance-covariance matrices play a critical role to analyze vector time
series, as they are needed for various purposes, especially to draw inference
on the dependence structure in terms of second moments and to analyze
projections onto lower dimensional spaces as those generated by principal
components. This particularly applies to the high-dimensional case, where the
dimension $d$ is allowed to grow with the sample size $n$ and may even be
larger than $n$. We establish large-sample approximations for such bilinear
forms related to the sample variance-covariance matrix of a high-dimensional
vector time series in terms of strong approximations by Brownian motions. The
results cover weakly dependent as well as many long-range dependent linear
processes and are valid for uniformly $ \ell_1 $-bounded projection vectors,
which arise, either naturally or by construction, in many statistical problems
extensively studied for high-dimensional series. Among those problems are
sparse financial portfolio selection, sparse principal components, the LASSO,
shrinkage estimation and change-point analysis for high-dimensional time
series, which matter for the analysis of big data and are discussed in greater
detail. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics",
"Quantitative Finance"
] |
Title: Spatial risk measures and rate of spatial diversification,
Abstract: An accurate assessment of the risk of extreme environmental events is of
great importance for populations, authorities and the banking/insurance
industry. Koch (2017) introduced a notion of spatial risk measure and a
corresponding set of axioms which are well suited to analyze the risk due to
events having a spatial extent, precisely such as environmental phenomena. The
axiom of asymptotic spatial homogeneity is of particular interest since it
allows one to quantify the rate of spatial diversification when the region
under consideration becomes large. In this paper, we first investigate the
general concepts of spatial risk measures and corresponding axioms further. We
also explain the usefulness of this theory for the actuarial practice. Second,
in the case of a general cost field, we especially give sufficient conditions
such that spatial risk measures associated with expectation, variance,
Value-at-Risk as well as expected shortfall and induced by this cost field
satisfy the axioms of asymptotic spatial homogeneity of order 0, -2, -1 and -1,
respectively. Last but not least, in the case where the cost field is a
function of a max-stable random field, we mainly provide conditions on both the
function and the max-stable field ensuring the latter properties. Max-stable
random fields are relevant when assessing the risk of extreme events since they
appear as a natural extension of multivariate extreme-value theory to the level
of random fields. Overall, this paper improves our understanding of spatial
risk measures as well as of their properties with respect to the space variable
and generalizes many results obtained in Koch (2017). | [
0,
0,
0,
0,
0,
1
] | [
"Statistics",
"Quantitative Finance"
] |
Title: Concentration of $1$-Lipschitz functions on manifolds with boundary with Dirichlet boundary condition,
Abstract: In this paper, we consider a concentration of measure problem on Riemannian
manifolds with boundary. We study concentration phenomena of non-negative
$1$-Lipschitz functions with Dirichlet boundary condition around zero, which is
called boundary concentration phenomena. We first examine relation between
boundary concentration phenomena and large spectral gap phenomena of Dirichlet
eigenvalues of Laplacian. We will obtain analogue of the Gromov-V. D. Milman
theorem and the Funano-Shioya theorem for closed manifolds. Furthermore, to
capture boundary concentration phenomena, we introduce a new invariant called
the observable inscribed radius. We will formulate comparison theorems for such
invariant under a lower Ricci curvature bound, and a lower mean curvature bound
for the boundary. Based on such comparison theorems, we investigate various
boundary concentration phenomena of sequences of manifolds with boundary. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Greedy Strategy Works for Clustering with Outliers and Coresets Construction,
Abstract: We study the problems of clustering with outliers in high dimension. Though a
number of methods have been developed in the past decades, it is still quite
challenging to design quality guaranteed algorithms with low complexities for
the problems. Our idea is inspired by the greedy method, Gonzalez's algorithm,
for solving the problem of ordinary $k$-center clustering. Based on some novel
observations, we show that this greedy strategy actually can handle
$k$-center/median/means clustering with outliers efficiently, in terms of
qualities and complexities. We further show that the greedy approach yields
a small coreset for the problem in doubling metrics, thereby reducing the time
complexity significantly. Moreover, as a by-product, the coreset
construction can be applied to speed up the popular density-based clustering
approach DBSCAN. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: On the Heat Kernel and Weyl Anomaly of Schrödinger invariant theory,
Abstract: We propose a method inspired from discrete light cone quantization (DLCQ) to
determine the heat kernel for a Schrödinger field theory (Galilean boost
invariant with $z=2$ anisotropic scaling symmetry) living in $d+1$ dimensions,
coupled to a curved Newton-Cartan background starting from a heat kernel of a
relativistic conformal field theory ($z=1$) living in $d+2$ dimensions. We use
this method to show the Schrödinger field theory of a complex scalar field
cannot have any Weyl anomalies. To be precise, we show that the Weyl anomaly
$\mathcal{A}^{G}_{d+1}$ for Schrödinger theory is related to the Weyl anomaly
of a free relativistic scalar CFT $\mathcal{A}^{R}_{d+2}$ via
$\mathcal{A}^{G}_{d+1}= 2\pi \delta (m) \mathcal{A}^{R}_{d+2}$ where $m$ is the
charge of the scalar field under particle number symmetry. We provide further
evidence of vanishing anomaly by evaluating Feynman diagrams in all orders of
perturbation theory. We present an explicit calculation of the anomaly using a
regulated Schrödinger operator, without using the null cone reduction
technique. We generalise our method to show that a similar result holds for one
time derivative theories with even $z>2$. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: A computational method for estimating Burr XII parameters with complete and multiple censored data,
Abstract: Flexibility in shape and scale allows the Burr XII distribution to
closely approximate numerous well-known probability density functions. Owing to
this flexibility, the Burr XII distribution is used in risk analysis, lifetime
data analysis, and process capability estimation. In this
paper the Cross-Entropy (CE) method is further developed in terms of Maximum
Likelihood Estimation (MLE) to estimate the parameters of Burr XII distribution
for the complete data or in the presence of multiple censoring. A simulation
study is conducted to evaluate the performance of the MLE by means of CE method
for different parameter settings and sample sizes. The results are compared to
other existing methods in both uncensored and censored situations. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Locally stationary spatio-temporal interpolation of Argo profiling float data,
Abstract: Argo floats measure seawater temperature and salinity in the upper 2,000 m of
the global ocean. Statistical analysis of the resulting spatio-temporal dataset
is challenging due to its nonstationary structure and large size. We propose
mapping these data using locally stationary Gaussian process regression where
covariance parameter estimation and spatio-temporal prediction are carried out
in a moving-window fashion. This yields computationally tractable nonstationary
anomaly fields without the need to explicitly model the nonstationary
covariance structure. We also investigate Student-$t$ distributed fine-scale
variation as a means to account for non-Gaussian heavy tails in ocean
temperature data. Cross-validation studies comparing the proposed approach with
the existing state-of-the-art demonstrate clear improvements in point
predictions and show that accounting for the nonstationarity and
non-Gaussianity is crucial for obtaining well-calibrated uncertainties. This
approach also provides data-driven local estimates of the spatial and temporal
dependence scales for the global ocean which are of scientific interest in
their own right. | [
0,
1,
0,
1,
0,
0
] | [
"Statistics",
"Quantitative Biology"
] |
Title: Knowledge Reuse for Customization: Metamodels in an Open Design Community for 3d Printing,
Abstract: Theories of knowledge reuse posit two distinct processes: reuse for
replication and reuse for innovation. We identify another distinct process,
reuse for customization. Reuse for customization is a process in which
designers manipulate the parameters of metamodels to produce models that
fulfill their personal needs. We test hypotheses about reuse for customization
in Thingiverse, a community of designers that shares files for
three-dimensional printing. 3D metamodels are reused more often than the 3D
models they generate. The reuse of metamodels is amplified when the metamodels
are created by designers with greater community experience. Metamodels make the
community's design knowledge available for reuse for customization-or further
extension of the metamodels, a kind of reuse for innovation. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Central limit theorem for the variable bandwidth kernel density estimators,
Abstract: In this paper we study the ideal variable bandwidth kernel density estimator
introduced by McKay (1993) and Jones, McKay and Hu (1994) and the plug-in
practical version of the variable bandwidth kernel estimator with two sequences
of bandwidths as in Giné and Sang (2013). Based on the bias and variance
analysis of the ideal and true variable bandwidth kernel density estimators, we
study the central limit theorems for each of them. | [
0,
0,
1,
1,
0,
0
] | [
"Mathematics",
"Statistics"
] |
Title: Submap-based Pose-graph Visual SLAM: A Robust Visual Exploration and Localization System,
Abstract: For VSLAM (Visual Simultaneous Localization and Mapping), localization is a
challenging task, especially in difficult situations such as textureless
frames and motion blur. To build a robust exploration and localization
system in a given space or environment, a submap-based VSLAM system is proposed
in this paper. Our system uses a submap back-end and a visual front-end. The
main advantage of our system is its robustness with respect to tracking
failure, a common problem in current VSLAM algorithms. The robustness of our
system is compared with the state-of-the-art in terms of average tracking
percentage. The precision of our system is also evaluated in terms of ATE
(absolute trajectory error) RMSE (root mean square error) compared with the
state-of-the-art. The ability of our system to solve the `kidnapped' problem
is demonstrated. Our system can improve the robustness of visual localization
in challenging situations. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Robotics"
] |
Title: SuperSpike: Supervised learning in multi-layer spiking neural networks,
Abstract: A vast majority of computation in the brain is performed by spiking neural
networks. Despite the ubiquity of such spiking, we currently lack an
understanding of how biological spiking neural circuits learn and compute
in-vivo, as well as how we can instantiate such capabilities in artificial
spiking circuits in-silico. Here we revisit the problem of supervised learning
in temporally coding multi-layer spiking neural networks. First, by using a
surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based
three factor learning rule capable of training multi-layer networks of
deterministic integrate-and-fire neurons to perform nonlinear computations on
spatiotemporal spike patterns. Second, inspired by recent results on feedback
alignment, we compare the performance of our learning rule under different
credit assignment strategies for propagating output errors to hidden units.
Specifically, we test uniform, symmetric and random feedback, finding that
simpler tasks can be solved with any type of feedback, while more complex tasks
require symmetric feedback. In summary, our results open the door to obtaining
a better scientific understanding of learning and computation in spiking neural
networks by advancing our ability to train them to solve nonlinear problems
involving transformations between different spatiotemporal spike-time patterns. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Mean Field Residual Networks: On the Edge of Chaos,
Abstract: We study randomly initialized residual networks using mean field theory and
the theory of difference equations. Classical feedforward neural networks, such
as those with tanh activations, exhibit exponential behavior on the average
when propagating inputs forward or gradients backward. The exponential forward
dynamics causes rapid collapsing of the input space geometry, while the
exponential backward dynamics causes drastic vanishing or exploding gradients.
We show, in contrast, that by adding skip connections, the network will,
depending on the nonlinearity, adopt subexponential forward and backward
dynamics, and in many cases in fact polynomial. The exponents of these
polynomials are obtained through analytic methods and proved and verified
empirically to be correct. In terms of the "edge of chaos" hypothesis, these
subexponential and polynomial laws allow residual networks to "hover over the
boundary between stability and chaos," thus preserving the geometry of the
input space and the gradient information flow. In our experiments, for each
activation function we study here, we initialize residual networks with
different hyperparameters and train them on MNIST. Remarkably, our
initialization time theory can accurately predict test time performance of
these networks, by tracking either the expected amount of gradient explosion or
the expected squared distance between the images of two input vectors.
Importantly, we show, theoretically as well as empirically, that common
initializations such as the Xavier or the He schemes are not optimal for
residual networks, because the optimal initialization variances depend on the
depth. Finally, we have made mathematical contributions by deriving several new
identities for the kernels of powers of ReLU functions by relating them to the
zeroth Bessel function of the second kind. | [
1,
1,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Majorana Spin Liquids, Topology and Superconductivity in Ladders,
Abstract: We theoretically address spin chain analogs of the Kitaev quantum spin model
on the honeycomb lattice. The emergent quantum spin liquid phases or Anderson
resonating valence bond (RVB) states can be understood, as an effective model,
in terms of p-wave superconductivity and Majorana fermions. We derive a
generalized phase diagram for the two-leg ladder system with tunable
interaction strengths between chains allowing us to vary the shape of the
lattice (from square to honeycomb ribbon or brickwall ladder). We evaluate the
winding number associated with possible emergent (topological) gapless modes at
the edges. In the Az phase, as a result of the emergent Z2 gauge fields and
pi-flux ground state, one may build spin-1/2 (loop) qubit operators by analogy
to the toric code. In addition, we show how the intermediate gapless B phase
evolves in the generalized ladder model. For the brickwall ladder, the $B$
phase is reduced to one line, which is analyzed through perturbation theory in
a rung tensor product states representation and bosonization. Finally, we show
that doping with a few holes can result in the formation of hole pairs and
leads to a mapping with the Su-Schrieffer-Heeger model in polyacetylene; a
superconducting-insulating quantum phase transition for these hole pairs is
accessible, as well as related topological properties. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Performance Improvement in Noisy Linear Consensus Networks with Time-Delay,
Abstract: We analyze performance of a class of time-delay first-order consensus
networks from a graph topological perspective and present methods to improve
it. The performance is measured by the square of the network's H-2 norm, and it is shown
that it is a convex function of Laplacian eigenvalues and the coupling weights
of the underlying graph of the network. First, we propose a tight convex, but
simple, approximation of the performance measure in order to achieve lower
complexity in our design problems by eliminating the need for
eigen-decomposition. The effect of time-delay manifests itself in the form
of non-monotonicity, which results in nonintuitive behaviors of the performance
as a function of graph topology. Next, we present three methods to improve the
performance by growing, re-weighting, or sparsifying the underlying graph of
the network. It is shown that our suggested algorithms provide near-optimal
solutions with lower complexity with respect to existing methods in literature. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: On the Liouville heat kernel for k-coarse MBRW and nonuniversality,
Abstract: We study the Liouville heat kernel (in the $L^2$ phase) associated with a
class of logarithmically correlated Gaussian fields on the two dimensional
torus. We show that for each $\varepsilon>0$ there exists such a field, whose
covariance is a bounded perturbation of that of the two dimensional Gaussian
free field, and such that the associated Liouville heat kernel satisfies the
short time estimates, $$ \exp \left( - t^{ - \frac 1 { 1 + \frac 1 2 \gamma^2 }
- \varepsilon } \right) \le p_t^\gamma (x, y) \le \exp \left( - t^{- \frac 1 {
1 + \frac 1 2 \gamma^2 } + \varepsilon } \right) , $$ for $\gamma<1/2$. In
particular, these are different from predictions, due to Watabiki, concerning
the Liouville heat kernel for the two dimensional Gaussian free field. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Proportionally Representative Participatory Budgeting: Axioms and Algorithms,
Abstract: Participatory budgeting is one of the exciting developments in deliberative
grassroots democracy. We concentrate on approval elections and propose
proportional representation axioms in participatory budgeting, by generalizing
relevant axioms for approval-based multi-winner elections. We observe a rich
landscape with respect to the computational complexity of identifying
proportional budgets and computing such, and present budgeting methods that
satisfy these axioms by identifying budgets that are representative of the
demands of vast segments of the voters. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Standard Zero-Free Regions for Rankin--Selberg L-Functions via Sieve Theory,
Abstract: We give a simple proof of a standard zero-free region in the $t$-aspect for
the Rankin--Selberg $L$-function $L(s,\pi \times \widetilde{\pi})$ for any
unitary cuspidal automorphic representation $\pi$ of
$\mathrm{GL}_n(\mathbb{A}_F)$ that is tempered at every nonarchimedean place
outside a set of Dirichlet density zero. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Femtosecond X-ray Fourier holography imaging of free-flying nanoparticles,
Abstract: Ultrafast X-ray imaging provides high resolution information on individual
fragile specimens such as aerosols, metastable particles, superfluid quantum
systems and live biospecimen, which is inaccessible with conventional imaging
techniques. Coherent X-ray diffractive imaging, however, suffers from intrinsic
loss of phase, and therefore structure recovery is often complicated and not
always uniquely-defined. Here, we introduce the method of in-flight holography,
where we use nanoclusters as reference X-ray scatterers in order to encode
relative phase information into diffraction patterns of a virus. The resulting
hologram contains an unambiguous three-dimensional map of a virus and two
nanoclusters with the highest lateral resolution so far achieved via single
shot X-ray holography. Our approach unlocks the benefits of holography for
ultrafast X-ray imaging of nanoscale, non-periodic systems and paves the way to
direct observation of complex electron dynamics down to the attosecond time
scale. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Quantitative Biology"
] |
Title: Finding Archetypal Spaces for Data Using Neural Networks,
Abstract: Archetypal analysis is a type of factor analysis where data is fit by a
convex polytope whose corners are "archetypes" of the data, with the data
represented as a convex combination of these archetypal points. While
archetypal analysis has been used on biological data, it has not achieved
widespread adoption because most data are not well fit by a convex polytope in
either the ambient space or after standard data transformations. We propose a
new approach to archetypal analysis. Instead of fitting a convex polytope
directly on data or after a specific data transformation, we train a neural
network (AAnet) to learn a transformation under which the data can best fit
into a polytope. We validate this approach on synthetic data where we add
nonlinearity. Here, AAnet is the only method that correctly identifies the
archetypes. We also demonstrate AAnet on two biological datasets. In a T cell
dataset measured with single cell RNA-sequencing, AAnet identifies several
archetypal states corresponding to naive, memory, and cytotoxic T cells. In a
dataset of gut microbiome profiles, AAnet recovers both previously described
microbiome states and identifies novel extrema in the data. Finally, we show
that AAnet has generative properties allowing us to uniformly sample from the
data geometry even when the input data is not uniformly distributed. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Quantitative Biology"
] |
Title: DeepArchitect: Automatically Designing and Training Deep Architectures,
Abstract: In deep learning, performance is strongly affected by the choice of
architecture and hyperparameters. While there has been extensive work on
automatic hyperparameter optimization for simple spaces, complex spaces such as
the space of deep architectures remain largely unexplored. As a result, the
choice of architecture is done manually by the human expert through a slow
trial-and-error process guided mainly by intuition. In this paper we describe a
framework for automatically designing and training deep models. We propose an
extensible and modular language that allows the human expert to compactly
represent complex search spaces over architectures and their hyperparameters.
The resulting search spaces are tree-structured and therefore easy to traverse.
Models can be automatically compiled to computational graphs once values for
all hyperparameters have been chosen. We can leverage the structure of the
search space to introduce different model search algorithms, such as random
search, Monte Carlo tree search (MCTS), and sequential model-based optimization
(SMBO). We present experiments comparing the different algorithms on CIFAR-10
and show that MCTS and SMBO outperform random search. In addition, these
experiments show that our framework can be used effectively for model
discovery, as it is possible to describe expressive search spaces and discover
competitive models without much effort from the human expert. Code for our
framework and experiments has been made publicly available. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Cycle packings of the complete multigraph,
Abstract: Bryant, Horsley, Maenhaut and Smith recently gave necessary and sufficient
conditions for when the complete multigraph can be decomposed into cycles of
specified lengths $m_1,m_2,\ldots,m_\tau$. In this paper we characterise
exactly when there exists a packing of the complete multigraph with cycles of
specified lengths $m_1,m_2,\ldots,m_\tau$. While cycle decompositions can give
rise to packings by removing cycles from the decomposition, in general it is
not known when there exists a packing of the complete multigraph with cycles of
various specified lengths. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Integer Factorization with a Neuromorphic Sieve,
Abstract: The bound to factor large integers is dominated by the computational effort
to discover numbers that are smooth, typically performed by sieving a
polynomial sequence. On a von Neumann architecture, sieving has log-log
amortized time complexity to check each value for smoothness. This work
presents a neuromorphic sieve that achieves a constant time check for
smoothness by exploiting two characteristic properties of neuromorphic
architectures: constant time synaptic integration and massively parallel
computation. The approach is validated by modifying msieve, one of the fastest
publicly available integer factorization implementations, to use the IBM
Neurosynaptic System (NS1e) as a coprocessor for the sieving stage. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Wright-Fisher diffusions for evolutionary games with death-birth updating,
Abstract: We investigate spatial evolutionary games with death-birth updating in large
finite populations. Within growing spatial structures subject to appropriate
conditions, the density processes of a fixed type are proven to converge to the
Wright-Fisher diffusions with drift. In addition, convergence in the
Wasserstein distance of the laws of their occupation measures holds. The proofs
of these results develop along an equivalence between the laws of the
evolutionary games and certain voter models and rely on the analogous results
of voter models on large finite sets by convergences of the Radon-Nikodym
derivative processes. As another application of this equivalence of laws, we
show that in a general, large population of size $N$, for which the stationary
probabilities of the corresponding voting kernel are comparable to uniform
probabilities, a first-derivative test among the major methods for these
evolutionary games is applicable at least up to weak selection strengths in the
usual biological sense (that is, selection strengths of the order $\mathcal
O(1/N)$). | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Quantitative Biology"
] |
Title: On the sharpness and the injective property of basic justification models,
Abstract: Justification Awareness Models (JAMs) were proposed by S. Artemov as
a tool for modelling epistemic scenarios like Russell's Prime Minister example.
It was demonstrated that the sharpness and the injective property of a model
play an essential role in the epistemic usage of JAMs. The problem of
axiomatizing these properties in the propositional justification language was
left open. We propose a solution and define a decidable justification logic
Jref that is sound and complete with respect to the class of all sharp
injective justification models. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Smart TWAP trading in continuous-time equilibria,
Abstract: This paper presents a continuous-time equilibrium model of TWAP trading and
liquidity provision in a market with multiple strategic investors with
heterogeneous intraday trading targets. We solve the model in closed-form and
show there are infinitely many equilibria. We compare the competitive
equilibrium with different non-price-taking equilibria. In addition, we show
intraday TWAP benchmarking reduces market liquidity relative to just terminal
trading targets alone. The model is computationally tractable, and we provide a
number of numerical illustrations. An extension to stochastic VWAP targets is
also provided. | [
0,
0,
0,
0,
0,
1
] | [
"Quantitative Finance",
"Mathematics"
] |
Title: Characterization of Thermal Neutron Beam Monitors,
Abstract: Neutron beam monitors with high efficiency, low gamma sensitivity, and
high time and space resolution are required in neutron beam experiments to continuously
diagnose the delivered beam. In this work, commercially available neutron beam
monitors have been characterized using the R2D2 beamline at IFE (Norway) and
using a Be-based neutron source. For the gamma sensitivity measurements
different gamma sources have been used. The evaluation of the monitors
includes the study of their efficiency, attenuation, scattering, and
sensitivity to gamma. In this work we report the results of this
characterization. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Deep Learning on Attributed Graphs: A Journey from Graphs to Their Embeddings and Back,
Abstract: A graph is a powerful concept for representation of relations between pairs
of entities. Data with underlying graph structure can be found across many
disciplines and there is a natural desire for understanding such data better.
Deep learning (DL) has achieved significant breakthroughs in a variety of
machine learning tasks in recent years, especially where data is structured on
a grid, such as in text, speech, or image understanding. However, surprisingly
little has been done to explore the applicability of DL on arbitrary
graph-structured data directly.
The goal of this thesis is to investigate architectures for DL on graphs and
study how to transfer, adapt or generalize concepts that work well on
sequential and image data to this domain. We concentrate on two important
primitives: embedding graphs or their nodes into a continuous vector space
representation (encoding) and, conversely, generating graphs from such vectors
back (decoding). To that end, we make the following contributions.
First, we introduce Edge-Conditioned Convolutions (ECC), a convolution-like
operation on graphs performed in the spatial domain where filters are
dynamically generated based on edge attributes. The method is used to encode
graphs with arbitrary and varying structure.
Second, we propose SuperPoint Graph, an intermediate point cloud
representation with rich edge attributes encoding the contextual relationship
between object parts. Based on this representation, ECC is employed to segment
large-scale point clouds without major sacrifice in fine details.
Third, we present GraphVAE, a graph generator allowing us to decode graphs
with variable but upper-bounded number of nodes making use of approximate graph
matching for aligning the predictions of an autoencoder with its inputs. The
method is applied to the task of molecule generation. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science"
] |
Title: Supermodular Optimization for Redundant Robot Assignment under Travel-Time Uncertainty,
Abstract: This paper considers the assignment of multiple mobile robots to goal
locations under uncertain travel time estimates. Our aim is to produce optimal
assignments, such that the average waiting time at destinations is minimized.
Our premise is that time is the most valuable asset in the system. Hence, we
make use of redundant robots to counter the effect of uncertainty. Since
solving the redundant assignment problem is strongly NP-hard, we exploit
structural properties of our problem to propose a polynomial-time, near-optimal
solution. We demonstrate that our problem can be reduced to minimizing a
supermodular cost function subject to a matroid constraint. This allows us to
develop a greedy algorithm, for which we derive sub-optimality bounds. A
comparison with the baseline non-redundant assignment shows that redundant
assignment reduces the waiting time at goals, and that this performance gap
increases as noise increases. Finally, we evaluate our method on a mobility
data set (specifying vehicle availability and passenger requests), recorded in
the area of Manhattan, New York. Our algorithm performs in real-time, and
reduces passenger waiting times when travel times are uncertain. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
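The greedy step of the assignment scheme described above can be sketched as follows; the travel-time model, sample counts, and the simple partition-matroid constraint (each robot deployed at most once, up to a total budget) are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_robots, n_goals, budget = 6, 2, 4
mean_t = rng.uniform(5, 15, size=(n_robots, n_goals))        # travel-time means
samples = rng.exponential(1.0, size=(200, n_robots, n_goals)) * mean_t  # noisy times

def avg_wait(assign):
    """Average (over samples) of the sum over goals of the first-arrival time."""
    total = 0.0
    for g, robots in assign.items():
        if robots:
            total += samples[:, robots, g].min(axis=1).mean()
        else:
            total += 1e6                                      # goal not served
    return total

# Greedy: repeatedly add the (robot, goal) pair with the best marginal gain,
# subject to the matroid constraint that each robot is deployed at most once.
assign = {g: [] for g in range(n_goals)}
free = set(range(n_robots))
for _ in range(budget):
    best = min((avg_wait({**assign, g: assign[g] + [r]}), r, g)
               for r in free for g in range(n_goals))
    _, r, g = best
    assign[g].append(r)
    free.remove(r)

print(assign, round(avg_wait(assign), 2))
```

Redundancy shows up directly: once every goal is covered, the remaining budget is spent on extra robots whose earliest arrival lowers the expected wait.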
Title: $Σ$-pure-injective modules for string algebras and linear relations,
Abstract: We prove that indecomposable $\Sigma$-pure-injective modules for a string
algebra are string or band modules. The key step in our proof is a splitting
result for infinite-dimensional linear relations. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Risk-sensitive Inverse Reinforcement Learning via Semi- and Non-Parametric Methods,
Abstract: The literature on Inverse Reinforcement Learning (IRL) typically assumes that
humans take actions in order to minimize the expected value of a cost function,
i.e., that humans are risk neutral. Yet, in practice, humans are often far from
being risk neutral. To fill this gap, the objective of this paper is to devise
a framework for risk-sensitive IRL in order to explicitly account for a human's
risk sensitivity. To this end, we propose a flexible class of models based on
coherent risk measures, which allow us to capture an entire spectrum of risk
preferences from risk-neutral to worst-case. We propose efficient
non-parametric algorithms based on linear programming and semi-parametric
algorithms based on maximum likelihood for inferring a human's underlying risk
measure and cost function for a rich class of static and dynamic
decision-making settings. The resulting approach is demonstrated on a simulated
driving game with ten human participants. Our method is able to infer and mimic
a wide range of qualitatively different driving styles from highly risk-averse
to risk-neutral in a data-efficient manner. Moreover, comparisons of the
Risk-Sensitive (RS) IRL approach with a risk-neutral model show that the RS-IRL
framework more accurately captures observed participant behavior both
qualitatively and quantitatively, especially in scenarios where catastrophic
outcomes such as collisions can occur. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
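The spectrum of coherent risk measures mentioned above can be illustrated with Conditional Value-at-Risk (CVaR), which interpolates between the risk-neutral mean and the worst case; this is a generic sketch of the measure itself, not the paper's inference procedure.

```python
import numpy as np

def cvar(costs, alpha):
    """Conditional Value-at-Risk: mean of the worst alpha-fraction of costs.
    alpha=1 recovers the risk-neutral mean; alpha -> 0 approaches worst case."""
    costs = np.sort(costs)[::-1]               # worst (largest) costs first
    k = max(1, int(np.ceil(alpha * len(costs))))
    return costs[:k].mean()

costs = np.array([1.0, 2.0, 3.0, 10.0])
print(cvar(costs, 1.0))    # 4.0  (risk-neutral mean)
print(cvar(costs, 0.25))   # 10.0 (worst case)
```

Fitting the single parameter `alpha` to observed behavior is one simple way such a model can capture a driver's position on the risk-averse/risk-neutral spectrum.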
Title: TRAGALDABAS. First results on cosmic ray studies and their relation with the solar activity, the Earth magnetic field and the atmospheric properties,
Abstract: Cosmic rays originating from extraterrestrial sources are continuously
arriving at the Earth's atmosphere, where they produce up to billions of secondary
particles. The analysis of the secondary particles reaching the surface of
the Earth may provide very valuable information about solar activity,
changes in the geomagnetic field and the atmosphere, among others. In this
article, we present the first preliminary results of the analysis of the cosmic
rays measured with a high resolution tracking detector, TRAGALDABAS, located at
the Univ. of Santiago de Compostela, in Spain. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Smoothed GMM for quantile models,
Abstract: This paper develops theory for feasible estimators of finite-dimensional
parameters identified by general conditional quantile restrictions, under much
weaker assumptions than previously seen in the literature. This includes
instrumental variables nonlinear quantile regression as a special case. More
specifically, we consider a set of unconditional moments implied by the
conditional quantile restrictions, providing conditions for local
identification. Since estimators based on the sample moments are generally
impossible to compute numerically in practice, we study feasible estimators
based on smoothed sample moments. We propose a method of moments estimator for
exactly identified models, as well as a generalized method of moments estimator
for over-identified models. We establish consistency and asymptotic normality
of both estimators under general conditions that allow for weakly dependent
data and nonlinear structural models. Simulations illustrate the finite-sample
properties of the methods. Our in-depth empirical application concerns the
consumption Euler equation derived from quantile utility maximization.
Advantages of the quantile Euler equation include robustness to fat tails,
decoupling of risk attitude from the elasticity of intertemporal substitution,
and log-linearization without any approximation error. For the four countries
we examine, the quantile estimates of discount factor and elasticity of
intertemporal substitution are economically reasonable for a range of quantiles
above the median, even when two-stage least squares estimates are not
reasonable. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Quantitative Finance",
"Mathematics"
] |
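A minimal sketch of the smoothing idea in the exactly identified, one-parameter case (the sample median): the indicator inside the moment is replaced by a Gaussian CDF with bandwidth h, and the smoothed moment condition is solved by bisection. The bandwidth, kernel, and solver are illustrative choices, not the paper's.

```python
import numpy as np
from math import erf

def smoothed_moment(q, y, tau, h):
    """Smoothed quantile moment: tau - mean(Phi((q - y)/h)) replaces the
    non-smooth tau - mean(1{y <= q}), so standard solvers behave well."""
    Phi = 0.5 * (1 + np.array([erf((q - yi) / (h * np.sqrt(2))) for yi in y]))
    return tau - Phi.mean()

rng = np.random.default_rng(0)
y = rng.normal(size=2000)

# Exactly identified case (one parameter, one moment): solve by bisection.
tau, h, lo, hi = 0.5, 0.1, -3.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    if smoothed_moment(mid, y, tau, h) > 0:    # moment is decreasing in q
        lo = mid
    else:
        hi = mid
print(round(mid, 3))  # close to the sample median, here near 0
```

With instruments and several parameters the same smoothed moments feed a GMM objective, but the root-finding intuition is the same.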
Title: Orientability of the moduli space of Spin(7)-instantons,
Abstract: Let $(M,\Omega)$ be a closed $8$-dimensional manifold equipped with a
generically non-integrable $\mathrm{Spin}(7)$-structure $\Omega$. We prove that
if $\mathrm{Hom}(H^{3}(M,\mathbb{Z}), \mathbb{Z}_{2}) = 0$ then the moduli
space of irreducible $\mathrm{Spin}(7)$-instantons on $(M,\Omega)$ with gauge
group $\mathrm{SU}(r)$, $r\geq 2$, is orientable. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Context Prediction for Unsupervised Deep Learning on Point Clouds,
Abstract: Point clouds provide a flexible and natural representation usable in
countless applications such as robotics or self-driving cars. Recently, deep
neural networks operating on raw point cloud data have shown promising results
on supervised learning tasks such as object classification and semantic
segmentation. While massive point cloud datasets can be captured using modern
scanning technology, manually labelling such large 3D point clouds for
supervised learning tasks is a cumbersome process. This necessitates effective
unsupervised learning methods that can produce representations such that
downstream tasks require significantly fewer annotated samples. We propose a
novel method for unsupervised learning on raw point cloud data in which a
neural network is trained to predict the spatial relationship between two point
cloud segments. While solving this task, representations that capture semantic
properties of the point cloud are learned. Our method outperforms previous
unsupervised learning approaches in downstream object classification and
segmentation tasks and performs on par with fully supervised methods. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science"
] |
Title: A core-set approach for distributed quadratic programming in big-data classification,
Abstract: A new challenge for learning algorithms in cyber-physical network systems is
the distributed solution of big-data classification problems, i.e., problems in
which both the number of training samples and their dimension are high.
Motivated by several problem set-ups in Machine Learning, in this paper we
consider a special class of quadratic optimization problems involving a "large"
number of input data, whose dimension is "big". To solve these quadratic
optimization problems over peer-to-peer networks, we propose an asynchronous,
distributed algorithm that scales with both the number and the dimension of the
input data (training samples in the classification problem). The proposed
distributed optimization algorithm relies on the notion of "core-set" which is
used in geometric optimization to approximate the value function associated to
a given set of points with a smaller subset of points. By computing local
core-sets on a smaller version of the global problem and exchanging them with
neighbors, the nodes reach consensus on a set of active constraints
representing an approximate solution for the global quadratic program. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: The Weighted Kendall and High-order Kernels for Permutations,
Abstract: We propose new positive definite kernels for permutations. First, we
introduce a weighted version of the Kendall kernel, which allows the
contributions of different item pairs in the permutations to be weighted
unequally depending on their ranks. Like the Kendall kernel, we show that the weighted version is invariant
to relabeling of items and can be computed efficiently in $O(n \ln(n))$
operations, where $n$ is the number of items in the permutation. Second, we
propose a supervised approach to learn the weights by jointly optimizing them
with the function estimated by a kernel machine. Third, while the Kendall
kernel considers pairwise comparison between items, we extend it by considering
higher-order comparisons among tuples of items and show that the supervised
approach of learning the weights can be systematically generalized to
higher-order permutation kernels. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Mathematics"
] |
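The weighted Kendall kernel admits a direct O(n^2) implementation, sketched below with an illustrative weight that emphasizes pairs involving top ranks; the paper's O(n log n) computation and the supervised learning of the weights are not reproduced here.

```python
import numpy as np
from itertools import combinations

def weighted_kendall(sigma, tau, w):
    """Weighted Kendall kernel (O(n^2) sketch): each concordant pair of items
    contributes +w(rank_i, rank_j), each discordant pair -w(rank_i, rank_j)."""
    n = len(sigma)
    k = 0.0
    for i, j in combinations(range(n), 2):
        s = np.sign((sigma[i] - sigma[j]) * (tau[i] - tau[j]))
        k += s * w(sigma[i], sigma[j])
    return k

# Illustrative weight: emphasize pairs that involve top (small) ranks.
w_top = lambda a, b: 1.0 / min(a, b)

sigma = [1, 2, 3, 4]   # ranks under the first permutation
tau   = [1, 2, 4, 3]   # the last two items are swapped
print(weighted_kendall(sigma, tau, w_top))
```

With the constant weight w = 1 this reduces to the unnormalized Kendall kernel (concordant minus discordant pairs).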
Title: Fair k-Center Clustering for Data Summarization,
Abstract: In data summarization we want to choose k prototypes in order to summarize a
data set. We study a setting where the data set comprises several demographic
groups and we are restricted to choose k_i prototypes belonging to group i. A
common approach to the problem without the fairness constraint is to optimize a
centroid-based clustering objective such as k-center. A natural extension then
is to incorporate the fairness constraint into the clustering objective.
Existing algorithms for doing so run in time super-quadratic in the size of the
data set. This is in contrast to the standard k-center objective that can be
approximately optimized in linear time. In this paper, we resolve this gap by
providing a simple approximation algorithm for the k-center problem under the
fairness constraint with running time linear in the size of the data set and k.
If the number of demographic groups is small, the approximation guarantee of
our algorithm only incurs a constant-factor overhead. We demonstrate the
applicability of our algorithm on both synthetic and real data sets. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
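As context for the running-time claim, the classical Gonzalez greedy algorithm approximates the unconstrained k-center objective within factor 2 in time linear in the data set per center; it is the natural building block, while the paper's contribution is handling the per-group quotas (not sketched here).

```python
import numpy as np

def greedy_k_center(points, k, seed=0):
    """Gonzalez greedy 2-approximation for unconstrained k-center: repeatedly
    open a center at the point farthest from the current centers."""
    centers = [seed]
    d = np.linalg.norm(points - points[seed], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))                  # farthest point so far
        centers.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return centers, d.max()                      # centers and covering radius

rng = np.random.default_rng(0)
pts = rng.uniform(size=(200, 2))
centers, radius = greedy_k_center(pts, 5)
print(centers, round(radius, 3))
```

Each iteration touches every point once, which is where the linear-in-data running time comes from.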
Title: Extended Trust-Region Problems with One or Two Balls: Exact Copositive and Lagrangian Relaxations,
Abstract: We establish a geometric condition guaranteeing exact copositive relaxation
for the nonconvex quadratic optimization problem under two quadratic and
several linear constraints, and present sufficient conditions for global
optimality in terms of generalized Karush-Kuhn-Tucker multipliers. The
copositive relaxation is tighter than the usual Lagrangian relaxation. We
illustrate this by providing a whole class of quadratic optimization problems
that enjoys exactness of copositive relaxation while the usual Lagrangian
duality gap is infinite. Finally, we also provide verifiable conditions under
which both the usual Lagrangian relaxation and the copositive relaxation are
exact for an extended CDT (two-ball trust-region) problem. Importantly, the
sufficient conditions can be verified by solving linear optimization problems. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
Title: Backpropagation through the Void: Optimizing control variates for black-box gradient estimation,
Abstract: Gradient-based optimization is the foundation of deep learning and
reinforcement learning. Even when the mechanism being optimized is unknown or
not differentiable, optimization using high-variance or biased gradient
estimates is still often the best strategy. We introduce a general framework
for learning low-variance, unbiased gradient estimators for black-box functions
of random variables. Our method uses gradients of a neural network trained
jointly with model parameters or policies, and is applicable in both discrete
and continuous settings. We demonstrate this framework for training discrete
latent-variable models. We also give an unbiased, action-conditional extension
of the advantage actor-critic reinforcement learning algorithm. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
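The control-variate idea can be illustrated on the simplest case: a score-function (REINFORCE) estimator for a Bernoulli variable, where subtracting a constant baseline leaves the gradient unbiased but shrinks its variance. The objective and baseline value are illustrative; the paper learns the control variate jointly with a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_estimates(theta, baseline, n=20000):
    """Score-function gradient of E[f(b)], b ~ Bernoulli(sigmoid(theta)), with
    a constant baseline as control variate: g = (f(b) - c) * dlogp/dtheta."""
    p = 1.0 / (1.0 + np.exp(-theta))
    b = (rng.uniform(size=n) < p).astype(float)
    f = (b - 0.45) ** 2                  # an arbitrary black-box objective
    score = b - p                        # d/dtheta log p(b|theta) for Bernoulli
    return (f - baseline) * score

g0 = grad_estimates(0.0, baseline=0.0)
g1 = grad_estimates(0.0, baseline=0.25)  # baseline near E[f] shrinks variance
print(g0.var(), g1.var())
```

Since E[score] = 0, any baseline that does not depend on the sample leaves the estimator unbiased; choosing it well is exactly the variance-reduction problem the paper automates.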
Title: A Probabilistic Disease Progression Model for Predicting Future Clinical Outcome,
Abstract: In this work, we consider the problem of predicting the course of a
progressive disease, such as cancer or Alzheimer's. Progressive diseases often
start with mild symptoms that might precede a diagnosis, and each patient
follows their own trajectory. Patient trajectories exhibit wild variability,
which can be associated with many factors such as genotype, age, or sex. An
additional layer of complexity is that, in real life, the amount and type of
data available for each patient can differ significantly. For example, for one
patient we might have no prior history, whereas for another patient we might
have detailed clinical assessments obtained at multiple prior time-points. This
paper presents a probabilistic model that can handle multiple modalities
(including images and clinical assessments) and variable patient histories with
irregular timings and missing entries, to predict clinical scores at future
time-points. We use a sigmoidal function to model latent disease progression,
which gives rise to clinical observations in our generative model. We
implemented an approximate Bayesian inference strategy on the proposed model to
estimate the parameters on data from a large population of subjects.
Furthermore, the Bayesian framework enables the model to automatically
fine-tune its predictions based on historical observations that might be
available on the test subject. We applied our method to a longitudinal
Alzheimer's disease dataset with more than 3000 subjects [23] and present a
detailed empirical analysis of prediction performance under different
scenarios, with comparisons against several benchmarks. We also demonstrate how
the proposed model can be interrogated to glean insights about temporal
dynamics in Alzheimer's disease. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Quantitative Biology"
] |
Title: Satellite altimetry reveals spatial patterns of variations in the Baltic Sea wave climate,
Abstract: The main properties of the climate of waves in the seasonally ice-covered
Baltic Sea and its decadal changes since 1990 are estimated from satellite
altimetry data. The data set of significant wave heights (SWH) from all
nine existing satellites, cleaned and cross-validated against in situ
measurements, shows overall a very consistent picture. A comparison with visual
observations shows a good correspondence with correlation coefficients of
0.6-0.8. The annual mean SWH reveals a tentative increase of 0.005 m yr-1, but
higher quantiles behave in a cyclic manner with a timescale of 10-15 yr.
Changes in the basin-wide average SWH have a strong meridional pattern: an
increase in the central and western parts of the sea and decrease in the east.
This pattern is likely caused by a rotation of wind directions rather than by
an increase in the wind speed. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Measuring the Declared SDK Versions and Their Consistency with API Calls in Android Apps,
Abstract: Android has been the most popular smartphone system, with multiple platform
versions (e.g., KITKAT and Lollipop) active in the market. To manage the
application's compatibility with one or more platform versions, Android allows
apps to declare the supported platform SDK versions in their manifest files. In
this paper, we make a first effort to study this modern software mechanism. Our
objective is to measure the current practice of the declared SDK versions
(which we term as DSDK versions afterwards) in real apps, and the consistency
between the DSDK versions and their app API calls. To this end, we perform a
three-dimensional analysis. First, we parse Android documents to obtain a
mapping between each API and their corresponding platform versions. We then
analyze the DSDK-API consistency for over 24K apps, among which we pre-exclude
1.3K apps that provide different app binaries for different Android versions
through Google Play analysis. Besides shedding light on the current DSDK
practice, our study quantitatively measures the two side effects of
inappropriate DSDK versions: (i) around 1.8K apps have API calls that do not
exist in some declared SDK versions, which causes runtime crash bugs on those
platform versions; (ii) over 400 apps, due to claiming the outdated targeted
DSDK versions, are potentially exploitable by remote code execution. These
results indicate the importance and difficulty of declaring correct DSDK, and
our work can help developers fulfill this goal. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps,
Abstract: Convolutional neural networks (CNNs) have become the dominant neural network
architecture for solving many state-of-the-art (SOA) visual processing tasks.
Even though Graphical Processing Units (GPUs) are most often used in training
and deploying CNNs, their power efficiency is less than 10 GOp/s/W for
single-frame runtime inference. We propose a flexible and efficient CNN
accelerator architecture called NullHop that implements SOA CNNs useful for
low-power and low-latency application scenarios. NullHop exploits the sparsity
of neuron activations in CNNs to accelerate the computation and reduce memory
requirements. The flexible architecture allows high utilization of available
computing resources across kernel sizes ranging from 1x1 to 7x7. NullHop can
process up to 128 input and 128 output feature maps per layer in a single pass.
We implemented the proposed architecture on a Xilinx Zynq FPGA platform and
present results showing how our implementation reduces external memory
transfers and compute time in five different CNNs ranging from small ones up to
the widely known large VGG16 and VGG19 CNNs. Post-synthesis simulations using
Mentor Modelsim in a 28nm process with a clock frequency of 500 MHz show that
the VGG19 network achieves over 450 GOp/s. By exploiting sparsity, NullHop
achieves an efficiency of 368%, maintains over 98% utilization of the MAC
units, and achieves a power efficiency of over 3TOp/s/W in a core area of
6.3mm$^2$. As further proof of NullHop's usability, we interfaced its FPGA
implementation with a neuromorphic event camera for real time interactive
demonstrations. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
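The activation-sparsity idea that NullHop exploits can be illustrated in software: accumulate only over nonzero activations and skip the zeros a ReLU layer produces. This is a conceptual sketch of the principle, not a model of the hardware datapath.

```python
import numpy as np

def sparse_dot(activations, weights):
    """Multiply-accumulate only over nonzero activations, returning the
    result and the number of MAC operations actually performed."""
    nz = np.flatnonzero(activations)
    macs = len(nz)                        # one MAC per nonzero input
    return float(activations[nz] @ weights[nz]), macs

a = np.array([0.0, 1.5, 0.0, 0.0, 2.0, 0.0])   # post-ReLU feature map slice
w = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
val, macs = sparse_dot(a, w)
print(val, macs)  # ≈ 1.3 using 2 MACs instead of 6
```

The fraction of MACs skipped tracks the activation sparsity, which is why the reported MAC-unit efficiency can exceed 100% relative to a dense baseline.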
Title: A Decentralized Optimization Framework for Energy Harvesting Devices,
Abstract: Designing decentralized policies for wireless communication networks is a
crucial problem, which has only been partially solved in the literature so far.
In this paper, we propose the Decentralized Markov Decision Process (Dec-MDP)
framework to analyze a wireless sensor network with multiple users which access
a common wireless channel. We consider devices with energy harvesting
capabilities, so that they aim at balancing the energy arrivals with the data
departures and with the probability of colliding with other nodes. Randomly
over time, an access point triggers a SYNC slot, wherein it recomputes the
optimal transmission parameters of the whole network, and distributes this
information. Every node receives its own policy, which specifies how it should
access the channel in the future, and, thereafter, proceeds in a fully
decentralized fashion, without interacting with other entities in the network.
We propose a multi-layer Markov model, where an external MDP manages the jumps
between SYNC slots, and an internal Dec-MDP computes the optimal policy in the
near future. We numerically show that, because of the harvesting, a fully
orthogonal scheme (e.g., TDMA-like) is suboptimal in energy harvesting
scenarios, and the optimal trade-off lies between an orthogonal and a random
access system. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Inf-sup stable finite-element methods for the Landau--Lifshitz--Gilbert and harmonic map heat flow equation,
Abstract: In this paper we propose and analyze a finite element method for both the
harmonic map heat and Landau--Lifshitz--Gilbert equation, the time variable
remaining continuous. Our starting point is to set out a unified saddle point
approach for both problems in order to impose the unit sphere constraint at the
nodes, since the only polynomial functions satisfying the unit sphere constraint
everywhere are constants. A proper inf-sup condition is proved for the Lagrange
multiplier leading to the well-posedness of the unified formulation. \emph{A
priori} energy estimates are shown for the proposed method.
When time integrations are combined with the saddle point finite element
approximation some extra elaborations are required in order to ensure both
\emph{a priori} energy estimates for the director or magnetization vector
depending on the model and an inf-sup condition for the Lagrange multiplier.
This is due to the fact that the unit length at the nodes is not satisfied in
general when a time integration is performed. We will carry out a linear Euler
time-stepping method and a non-linear Crank--Nicolson method. The latter is
solved by using the former as a non-linear solver. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics",
"Computer Science"
] |
Title: Deformations of infinite-dimensional Lie algebras, exotic cohomology, and integrable nonlinear partial differential equations,
Abstract: An important unsolved problem in the theory of integrable systems is to find
conditions guaranteeing the existence of a Lax representation for a given PDE. The
use of the exotic cohomology of the symmetry algebras opens a way to formulate
such conditions in internal terms of the PDEs under study. In this paper we
consider certain examples of infinite-dimensional Lie algebras with nontrivial
second exotic cohomology groups and show that the Maurer-Cartan forms of the
associated extensions of these Lie algebras generate Lax representations for
integrable systems, both known and new ones. | [
0,
1,
0,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Investigation of Language Understanding Impact for Reinforcement Learning Based Dialogue Systems,
Abstract: Language understanding is a key component in a spoken dialogue system. In
this paper, we investigate how the language understanding module influences the
dialogue system performance by conducting a series of systematic experiments on
a task-oriented neural dialogue system in a reinforcement learning based
setting. The empirical study shows that among different types of language
understanding errors, slot-level errors can have more impact on the overall
performance of a dialogue system compared to intent-level errors. In addition,
our experiments demonstrate that the reinforcement learning based dialogue
system is able to learn when and what to confirm in order to achieve better
performance and greater robustness. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Reidemeister spectra for solvmanifolds in low dimensions,
Abstract: The Reidemeister number of an endomorphism of a group is the number of
twisted conjugacy classes determined by that endomorphism. The collection of
all Reidemeister numbers of all automorphisms of a group $G$ is called the
Reidemeister spectrum of $G$. In this paper, we determine the Reidemeister
spectra of all fundamental groups of solvmanifolds up to Hirsch length 4. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Behavioral-clinical phenotyping with type 2 diabetes self-monitoring data,
Abstract: Objective: To evaluate unsupervised clustering methods for identifying
individual-level behavioral-clinical phenotypes that relate personal biomarkers
and behavioral traits in type 2 diabetes (T2DM) self-monitoring data. Materials
and Methods: We used hierarchical clustering (HC) to identify groups of meals
with similar nutrition and glycemic impact for 6 individuals with T2DM who
collected self-monitoring data. We evaluated clusters on: 1) correspondence to
gold standards generated by certified diabetes educators (CDEs) for 3
participants; 2) face validity, rated by CDEs, and 3) impact on CDEs' ability
to identify patterns for another 3 participants. Results: Gold standard (GS)
included 9 patterns across 3 participants. Of these, all 9 were re-discovered
using HC: 4 GS patterns were consistent with patterns identified by HC (over
50% of meals in a cluster followed the pattern); another 5 were included as
sub-groups in broader clusters. 50% (9/18) of clusters were rated over 3 on a
5-point Likert scale for validity, significance, and being actionable. After
reviewing clusters, CDEs identified patterns that were more consistent with
data (70% reduction in contradictions between patterns and participants'
records). Discussion: Hierarchical clustering of blood glucose and
macronutrient consumption appears suitable for discovering behavioral-clinical
phenotypes in T2DM. Most clusters corresponded to gold standard and were rated
positively by CDEs for face validity. Cluster visualizations helped CDEs
identify more robust patterns in nutrition and glycemic impact, creating new
possibilities for visual analytic solutions. Conclusion: Machine learning
methods can use diabetes self-monitoring data to create personalized
behavioral-clinical phenotypes, which may prove useful for delivering
personalized medicine. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Quantitative Biology"
] |
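A minimal sketch of clustering meals by nutrition and glycemic impact; the feature set, the synthetic data, and the naive centroid-linkage agglomeration are illustrative stand-ins for the paper's hierarchical clustering of real self-monitoring data.

```python
import numpy as np

def agglomerate(z, k):
    """Naive agglomerative (hierarchical) clustering with centroid linkage:
    repeatedly merge the two closest clusters until k remain."""
    clusters = [[i] for i in range(len(z))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(z[clusters[a]].mean(0) - z[clusters[b]].mean(0))
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)
    return clusters

# Hypothetical per-meal features: [carbs (g), fat (g), glucose rise (mg/dL)].
rng = np.random.default_rng(0)
meals = np.vstack([rng.normal([20, 15, 20], [5, 4, 8], size=(15, 3)),
                   rng.normal([80, 10, 70], [10, 4, 12], size=(15, 3))])
z = (meals - meals.mean(0)) / meals.std(0)   # standardize each feature
groups = agglomerate(z, k=2)
print(sorted(len(g) for g in groups))
```

Cutting the merge hierarchy at different depths yields coarser or finer meal "phenotypes", which is what the clinicians reviewed in the study.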
Title: Feature Learning for Meta-Paths in Knowledge Graphs,
Abstract: In this thesis, we study the problem of feature learning on heterogeneous
knowledge graphs. These features can be used to perform tasks such as link
prediction, classification and clustering on graphs. Knowledge graphs provide
rich semantics encoded in the edge and node types. Meta-paths consist of these
types and abstract paths in the graph. Until now, meta-paths could only be used
as categorical features with high redundancy, making them unsuitable for
machine learning models. We propose meta-path embeddings to solve this problem
by learning semantical and compact vector representations of them. Current
graph embedding methods only embed nodes and edge types and therefore miss
semantics encoded in the combination of them. Our method embeds meta-paths
using the skipgram model with an extension to deal with the redundancy and high
amount of meta-paths in big knowledge graphs. We critically evaluate our
embedding approach by predicting links on Wikidata. The experiments indicate
that we learn a sensible embedding of the meta-paths but can improve it
further. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Closed-Form Exact Inverses of the Weakly Singular and Hypersingular Operators On Disks,
Abstract: We introduce new boundary integral operators which are the exact inverses of
the weakly singular and hypersingular operators for the Laplacian on flat
disks. Moreover, we provide explicit closed forms for them and prove the
continuity and ellipticity of their corresponding bilinear forms in the natural
Sobolev trace spaces. This permits us to derive new Calderón-type identities
that can provide the foundation for optimal operator preconditioning in
Galerkin boundary element methods. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Focus on Imaging Methods in Granular Physics,
Abstract: Granular materials are complex multi-particle ensembles in which macroscopic
properties are largely determined by inter-particle interactions between their
numerous constituents. In order to understand and to predict their macroscopic
physical behavior, it is necessary to analyze the composition and interactions
at the level of individual contacts and grains. To do so requires the ability
to image individual particles and their local configurations to high precision.
A variety of competing and complementary imaging techniques have been developed
for that task. In this introductory paper accompanying the Focus Issue, we
provide an overview of these imaging methods and discuss their advantages and
drawbacks, as well as their limits of application. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: The Stochastic Matching Problem: Beating Half with a Non-Adaptive Algorithm,
Abstract: In the stochastic matching problem, we are given a general (not necessarily
bipartite) graph $G(V,E)$, where each edge in $E$ is realized with some
constant probability $p > 0$ and the goal is to compute a bounded-degree
(bounded by a function depending only on $p$) subgraph $H$ of $G$ such that the
expected maximum matching size in $H$ is close to the expected maximum matching
size in $G$. The algorithms in this setting are considered non-adaptive as they
have to choose the subgraph $H$ without knowing any information about the set
of realized edges in $G$. Originally motivated by an application to kidney
exchange, the stochastic matching problem and its variants have received
significant attention in recent years.
The state-of-the-art non-adaptive algorithms for stochastic matching achieve
an approximation ratio of $\frac{1}{2}-\epsilon$ for any $\epsilon > 0$,
naturally raising the question that if $1/2$ is the limit of what can be
achieved with a non-adaptive algorithm. In this work, we resolve this question
by presenting the first algorithm for stochastic matching with an approximation
guarantee that is strictly better than $1/2$: the algorithm computes a subgraph
$H$ of $G$ with the maximum degree $O(\frac{\log{(1/ p)}}{p})$ such that the
ratio of expected size of a maximum matching in realizations of $H$ and $G$ is
at least $1/2+\delta_0$ for some absolute constant $\delta_0 > 0$. The degree
bound on $H$ achieved by our algorithm is essentially the best possible (up to
an $O(\log{(1/p)})$ factor) for any constant factor approximation algorithm,
since an $\Omega(\frac{1}{p})$ degree in $H$ is necessary for a vertex to
acquire at least one incident edge in a realization. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
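The objects in the problem statement can be made concrete with a small Monte Carlo sketch: estimate the expected matching size of a graph whose edges are realized independently with probability p, using greedy maximal matching as a simple stand-in for maximum matching (greedy is within a factor 2, which suffices for illustration).

```python
import random

def greedy_matching(edges):
    """Greedy maximal matching: take each edge whose endpoints are still free."""
    matched, size = set(), 0
    for u, v in edges:
        if u not in matched and v not in matched:
            matched |= {u, v}
            size += 1
    return size

def expected_matching(edges, p, trials=2000, seed=0):
    """Monte Carlo estimate of the expected matching size when each edge is
    realized independently with probability p."""
    rnd = random.Random(seed)
    total = 0
    for _ in range(trials):
        realized = [e for e in edges if rnd.random() < p]
        total += greedy_matching(realized)
    return total / trials

# A 6-cycle: with p = 0.5 only some edges survive each realization.
cycle = [(i, (i + 1) % 6) for i in range(6)]
print(expected_matching(cycle, 0.5))
```

A non-adaptive algorithm must pick the subgraph H before any realization is seen; the quantity above, computed for H and for G, is exactly the ratio the approximation guarantee bounds.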
Title: A computer simulation of the Volga River hydrological regime: a problem of water-retaining dam optimal location,
Abstract: We investigate the optimal location of a water-retaining dam on the Volga
river in the area where the left Akhtuba sleeve begins ($7\,$km south of the Volga
Hydroelectric Power Station dam). We claim that a new water-retaining dam can
resolve the key problem of the Volga-Akhtuba floodplain, namely the insufficient
water volume during the spring flooding caused by the overregulation of the Lower
Volga. By numerically integrating the Saint-Venant equations, we study the
water dynamics across the northern part of the Volga-Akhtuba floodplain,
taking into account its actual topography. As a result, we find the amount of
water $V_A$ passing to the Akhtuba during the spring period for a given water flow
through the Volga Hydroelectric Power Station (the so-called hydrograph, which
characterises the water flow per unit of time). By varying the location $(x_d,
y_d)$ of the water-retaining dam we obtain various values of $V_A(x_d, y_d)$ as
well as various spatial flow structures on the territory during the flood
period. A gradient descent method provides the dam coordinates with the maximum
value of $V_A$. This approach to choosing the dam location lets us find the
best solution, for which the value $V_A$ increases by a factor of 2. Our analysis
demonstrates the good potential of numerical simulations in the field of
hydraulic engineering.
1,
0,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Structure of a Parabolic Partial Differential Equation on Graphs and Digital spaces. Solution of PDE on Digital Spaces: a Klein Bottle, a Projective Plane, a 4D Sphere and a Moebius Band,
Abstract: This paper studies the structure of a parabolic partial differential equation
on graphs and digital n-dimensional manifolds, which are digital models of
continuous n-manifolds. Conditions for the existence of solutions of equations
are determined and investigated. Numerical solutions of the equation on a Klein
bottle, a projective plane, a 4D sphere and a Moebius strip are presented. | [
1,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
Title: Recognizing Union-Find trees built up using union-by-rank strategy is NP-complete,
Abstract: Disjoint-Set forests, consisting of Union-Find trees, are data structures
with widespread practical applications due to their efficiency. Although they
are well known, no exact structural characterization of these trees is known
for the case of the union-by-rank merging strategy (such a characterization
exists for Union trees, which are constructed without path compression). In
this paper we provide such a characterization by means of a simple
push operation and show that the decision problem whether a given tree (along
with the rank info of its nodes) is a Union-Find tree is NP-complete,
complementing our earlier similar result for the union-by-size strategy. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Admissibility of solution estimators for stochastic optimization,
Abstract: We look at stochastic optimization problems through the lens of statistical
decision theory. In particular, we address admissibility, in the statistical
decision theory sense, of the natural sample average estimator for a stochastic
optimization problem (which is also known as the empirical risk minimization
(ERM) rule in learning literature). It is well known that for general
stochastic optimization problems, the sample average estimator may not be
admissible. This is known as Stein's paradox in the statistics literature. We
show in this paper that for optimizing stochastic linear functions over compact
sets, the sample average estimator is admissible. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Numerical methods to prevent pressure oscillations in transcritical flows,
Abstract: The accurate and robust simulation of transcritical real-fluid effects is
crucial for many engineering applications, such as fuel injection in internal
combustion engines, rocket engines and gas turbines. For example, in diesel
engines, the liquid fuel is injected into the ambient gas at a pressure that
exceeds its critical value, and the fuel jet will be heated to a supercritical
temperature before combustion takes place. This process is often referred to as
transcritical injection. The largest thermodynamic gradient in the
transcritical regime occurs as the fluid undergoes a liquid-like to a gas-like
transition when crossing the pseudo-boiling line (Yang 2000, Oschwald et al.
2006, Banuti 2015). The complex processes during transcritical injection are
still not well understood. Therefore, to provide insights into high-pressure
combustion systems, accurate and robust numerical simulation tools are required
for the characterization of supercritical and transcritical flows. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics",
"Computer Science"
] |
Title: Fractal curves from prime trigonometric series,
Abstract: We study the convergence of the parameter family of series
$$V_{\alpha,\beta}(t)=\sum_{p}p^{-\alpha}\exp(2\pi i p^{\beta}t),\quad
\alpha,\beta \in \mathbb{R}_{>0},\; t \in [0,1)$$ defined over prime numbers
$p$, and subsequently, their differentiability properties. The visible fractal
nature of the graphs as a function of $\alpha,\beta$ is analyzed in terms of
Hölder continuity, self-similarity and fractal dimension, backed by
numerical results. We also discuss the link of this series to random walks and
consequently, explore numerically its random properties. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Facets on the convex hull of $d$-dimensional Brownian and Lévy motion,
Abstract: For stationary, homogeneous Markov processes (viz., Lévy processes,
including Brownian motion) in dimension $d\geq 3$, we establish an exact
formula for the average number of $(d-1)$-dimensional facets that can be
defined by $d$ points on the process's path. This formula defines a
universality class in that it is independent of the increments' distribution,
and it admits a closed form when $d=3$, a case which is of particular interest
for applications in biophysics, chemistry and polymer science.
We also show that the asymptotic average number of facets behaves as
$\langle \mathcal{F}_T^{(d)}\rangle \sim 2\left[\ln \left( T/\Delta
t\right)\right]^{d-1}$, where $T$ is the total duration of the motion and
$\Delta t$ is the minimum time lapse separating points that define a facet. | [
0,
1,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Output-only parameter identification of a colored-noise-driven Van der Pol oscillator -- Thermoacoustic instabilities as an example,
Abstract: The problem of output-only parameter identification for nonlinear oscillators
forced by colored noise is considered. In this context, it is often assumed
that the forcing noise is white, since its actual spectral content is unknown.
The impact of this white noise forcing assumption upon parameter identification
is quantitatively analyzed. First, a Van der Pol oscillator forced by an
Ornstein-Uhlenbeck process is considered. Second, the practical case of
thermoacoustic limit cycles in combustion chambers with turbulence-induced
forcing is investigated. It is shown that in both cases, the system parameters
are accurately identified if time signals are appropriately band-pass filtered
around the oscillator eigenfrequency. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Anomaly Detection in Hierarchical Data Streams under Unknown Models,
Abstract: We consider the problem of detecting a few targets among a large number of
hierarchical data streams. The data streams are modeled as random processes
with unknown and potentially heavy-tailed distributions. The objective is an
active inference strategy that determines, sequentially, which data stream to
collect samples from in order to minimize the sample complexity under a
reliability constraint. We propose an active inference strategy that induces a
biased random walk on the tree-structured hierarchy based on confidence bounds
of sample statistics. We then establish its order optimality in terms of both
the size of the search space (i.e., the number of data streams) and the
reliability requirement. The results find applications in hierarchical heavy
hitter detection, noisy group testing, and adaptive sampling for active
learning, classification, and stochastic root finding. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: GelSlim: A High-Resolution, Compact, Robust, and Calibrated Tactile-sensing Finger,
Abstract: This work describes the development of a high-resolution tactile-sensing
finger for robot grasping. This finger, inspired by previous GelSight sensing
techniques, features an integration that is slimmer and more robust, with more
homogeneous output than previous vision-based tactile sensors. To achieve a
compact integration, we redesign the optical path from illumination source to
camera by combining light guides and an arrangement of mirror reflections. We
parameterize the optical path with geometric design variables and describe the
tradeoffs between the finger thickness, the depth of field of the camera, and
the size of the tactile sensing area. The sensor sustains the wear from
continuous use -- and abuse -- in grasping tasks by combining tougher materials
for the compliant soft gel, a textured fabric skin, a structurally rigid body,
and a calibration process that maintains homogeneous illumination and contrast
of the tactile images during use. Finally, we evaluate the sensor's durability
along four metrics that track the signal quality during more than 3000 grasping
experiments. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Ringel duality as an instance of Koszul duality,
Abstract: In their previous work, S. Koenig, S. Ovsienko and the second author showed
that every quasi-hereditary algebra is Morita equivalent to the right algebra,
i.e. the opposite algebra of the left dual, of a coring. Let $A$ be an
associative algebra and $V$ an $A$-coring whose right algebra $R$ is
quasi-hereditary. In this paper, we give a combinatorial description of an
associative algebra $B$ and a $B$-coring $W$ whose right algebra is the Ringel
dual of $R$. We apply our results in small examples to obtain restrictions on
the $A_\infty$-structure of the $\textrm{Ext}$-algebra of standard modules over
a class of quasi-hereditary algebras related to birational morphisms of smooth
surfaces. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Absence of cyclotron resonance in the anomalous metallic phase in InO$_x$,
Abstract: It is observed that many thin superconducting films with a
not-too-high disorder level (generally R$_N/\Box \leq 2000\,\Omega$) placed in
a magnetic field show an anomalous metallic phase in which the resistance is
low but still finite as the temperature goes to zero. Here we report that, in
weakly disordered amorphous InO$_x$ thin films, this "Bose metal" phase
possesses no cyclotron resonance and hence non-Drude electrodynamics. Its
microwave dynamical
conductivity shows signatures of remaining short-range superconducting
correlations and strong phase fluctuations through the whole anomalous regime.
The absence of a finite-frequency resonant mode can be associated with a
vanishing downstream component of the vortex current parallel to the
supercurrent and an emergent particle-hole symmetry of this anomalous metal,
which establishes its non-Fermi liquid character. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: The Informativeness of $k$-Means and Dimensionality Reduction for Learning Mixture Models,
Abstract: The learning of mixture models can be viewed as a clustering problem. Indeed,
given data samples independently generated from a mixture of distributions, we
often would like to find the correct target clustering of the samples according
to which component distribution they were generated from. For a clustering
problem, practitioners often choose to use the simple k-means algorithm.
k-means attempts to find an optimal clustering which minimizes the
sum-of-squared distance between each point and its cluster center. In this
paper, we provide sufficient conditions for the closeness of any optimal
clustering and the correct target clustering assuming that the data samples are
generated from a mixture of log-concave distributions. Moreover, we show that
under similar or even weaker conditions on the mixture model, any optimal
clustering for the samples with reduced dimensionality is also close to the
correct target clustering. These results provide intuition for the
informativeness of k-means (with and without dimensionality reduction) as an
algorithm for learning mixture models. We verify the correctness of our
theorems using numerical experiments and demonstrate using datasets with
reduced dimensionality significant speed ups for the time required to perform
clustering. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Mathematics"
] |
Title: The Complexity of Counting Surjective Homomorphisms and Compactions,
Abstract: A homomorphism from a graph G to a graph H is a function from the vertices of
G to the vertices of H that preserves edges. A homomorphism is surjective if it
uses all of the vertices of H and it is a compaction if it uses all of the
vertices of H and all of the non-loop edges of H. Hell and Nesetril gave a
complete characterisation of the complexity of deciding whether there is a
homomorphism from an input graph G to a fixed graph H. A complete
characterisation is not known for surjective homomorphisms or for compactions,
though there are many interesting results. Dyer and Greenhill gave a complete
characterisation of the complexity of counting homomorphisms from an input
graph G to a fixed graph H. In this paper, we give a complete characterisation
of the complexity of counting surjective homomorphisms from an input graph G to
a fixed graph H and we also give a complete characterisation of the complexity
of counting compactions from an input graph G to a fixed graph H. In an
addendum we use our characterisations to point out a dichotomy for the
complexity of the respective approximate counting problems (in the connected
case). | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Global stability of a network-based SIRS epidemic model with nonmonotone incidence rate,
Abstract: This paper studies the dynamics of a network-based SIRS epidemic model with
vaccination and a nonmonotone incidence rate. This type of nonlinear incidence
can be used to describe the psychological or inhibitory effect from the
behavioral change of the susceptible individuals when the number of infective
individuals on heterogeneous networks grows larger. Using analytical methods,
the epidemic threshold $R_0$ is obtained. When $R_0$ is less than one, we
prove that the disease-free equilibrium is globally asymptotically stable and
the disease dies out, while when $R_0$ is greater than one, there exists a unique
endemic equilibrium. By constructing a suitable Lyapunov function, we also
prove the endemic equilibrium is globally asymptotically stable if the
inhibitory factor $\alpha$ is sufficiently large. Numerical experiments are
also given to support the theoretical results. It is shown both theoretically
and numerically that a larger $\alpha$ can accelerate the extinction of the
disease and reduce the level of disease. | [
0,
0,
0,
0,
1,
0
] | [
"Mathematics",
"Quantitative Biology"
] |
Title: Composite Adaptive Control for Bilateral Teleoperation Systems without Persistency of Excitation,
Abstract: Composite adaptive control schemes, which use both the system tracking errors
and the prediction error to drive the update laws, have become widespread as a
means of improving system performance. However, a strong
persistent-excitation (PE) condition must be satisfied to guarantee
parameter convergence. This paper proposes a novel composite adaptive control
to guarantee parameter convergence without PE condition for nonlinear
teleoperation systems with dynamic uncertainties and time-varying communication
delays. The stability criteria of the closed-loop teleoperation system are
given in terms of linear matrix inequalities. New tracking performance measures
are proposed to evaluate the position tracking between the master and the
slave. Simulation studies are given to show the effectiveness of the proposed
method. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: The Fan Region at 1.5 GHz. I: Polarized synchrotron emission extending beyond the Perseus Arm,
Abstract: The Fan Region is one of the dominant features in the polarized radio sky,
long thought to be a local (distance < 500 pc) synchrotron feature. We present
1.3-1.8 GHz polarized radio continuum observations of the region from the
Global Magneto-Ionic Medium Survey (GMIMS) and compare them to maps of Halpha
and polarized radio continuum intensity from 0.408-353 GHz. The high-frequency
(> 1 GHz) and low-frequency (< 600 MHz) emission have different morphologies,
suggesting a different physical origin. Portions of the 1.5 GHz Fan Region
emission are depolarized by about 30% by ionized gas structures in the Perseus
Arm, indicating that this fraction of the emission originates >2 kpc away. We
argue for the same conclusion based on the high polarization fraction at 1.5
GHz (about 40%). The Fan Region is offset with respect to the Galactic plane,
covering -5° < b < +10°; we attribute this offset to the warp in the
outer Galaxy. We discuss origins of the polarized emission, including the
spiral Galactic magnetic field. This idea is a plausible contributing factor
although no model to date readily reproduces all of the observations. We
conclude that models of the Galactic magnetic field should account for the > 1
GHz emission from the Fan Region as a Galactic-scale, not purely local,
feature. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: DeepCodec: Adaptive Sensing and Recovery via Deep Convolutional Neural Networks,
Abstract: In this paper we develop a novel computational sensing framework for sensing
and recovering structured signals. When trained on a set of representative
signals, our framework learns to take undersampled measurements and recover
signals from them using a deep convolutional neural network. In other words, it
learns a transformation from the original signals to a near-optimal number of
undersampled measurements and the inverse transformation from measurements to
signals. This is in contrast to traditional compressive sensing (CS) systems
that use random linear measurements and convex optimization or iterative
algorithms for signal recovery. We compare our new framework with
$\ell_1$-minimization from the phase transition point of view and demonstrate
that it outperforms $\ell_1$-minimization in the regions of phase transition
plot where $\ell_1$-minimization cannot recover the exact solution. In
addition, we experimentally demonstrate how learning measurements enhances the
overall recovery performance, speeds up training of recovery framework, and
leads to having fewer parameters to learn. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |