text (string, lengths 57 to 2.88k) | labels (sequence of length 6) |
---|---|
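The schema above pairs each abstract with a 6-element multi-hot label vector. As a minimal sketch, assuming rows render as `<text> | [0, 0, 1, 0, 0, 0] |` like the entries below, one way to split a rendered row back into `(text, labels)`:

```python
import re

def parse_row(row: str):
    """Split a rendered table row into its text and 6-element label list.

    Assumes the row ends with a trailing "| [..] |" label vector, as in the
    viewer output; this is an illustrative helper, not part of the dataset API.
    """
    match = re.search(r"\|\s*\[([\d,\s]+)\]\s*\|\s*$", row)
    if match is None:
        raise ValueError("row does not end with a label vector")
    labels = [int(x) for x in match.group(1).split(",")]
    text = row[: match.start()].strip()
    return text, labels

# Example on a row shaped like the records below:
text, labels = parse_row("Title: Disentangling by Factorising, "
                         "Abstract: ... | [0, 0, 0, 1, 0, 0] |")
assert labels == [0, 0, 0, 1, 0, 0]
```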
Title: Universal Constraints on the Location of Extrema of Eigenfunctions of Non-Local Schrödinger Operators,
Abstract: We derive a lower bound on the location of global extrema of eigenfunctions
for a large class of non-local Schrödinger operators in convex domains under
Dirichlet exterior conditions, featuring the symbol of the kinetic term, the
strength of the potential, and the corresponding eigenvalue, and involving a
new universal constant. We show a number of probabilistic and spectral
geometric implications, and derive a Faber-Krahn type inequality for non-local
operators. Our study also extends to potentials with compact support, and we
establish bounds on the location of extrema relative to the boundary edge of
the support or level sets around minima of the potential. | [0, 0, 1, 0, 0, 0] |
Title: InScript: Narrative texts annotated with script information,
Abstract: This paper presents the InScript corpus (Narrative Texts Instantiating Script
structure). InScript is a corpus of 1,000 stories centered around 10 different
scenarios. Verbs and noun phrases are annotated with event and participant
types, respectively. Additionally, the text is annotated with coreference
information. The corpus shows rich lexical variation and will serve as a unique
resource for the study of the role of script knowledge in natural language
processing. | [1, 0, 0, 0, 0, 0] |
Title: Semimetallic and charge-ordered $\alpha$-(BEDT-TTF)$_2$I$_3$: on the role of disorder in dc transport and dielectric properties,
Abstract: $\alpha$-(BEDT-TTF)$_2$I$_3$ is a prominent example of charge ordering among
organic conductors. In this work we explore the details of transport within the
charge-ordered as well as semimetallic phase at ambient pressure. In the
high-temperature semimetallic phase, the mobilities and concentrations of both
electrons and holes conspire in such a way to create an almost
temperature-independent conductivity as well as a low Hall effect. We explain
these phenomena as a consequence of a predominantly inter-pocket scattering
which equalizes mobilities of the two types of charge carriers. At low
temperatures, within the insulating charge-ordered phase two channels of
conduction can be discerned: a temperature-dependent activation which follows
the mean-field behavior, and a nearest-neighbor hopping contribution. Together
with negative magnetoresistance, the latter relies on the presence of disorder.
The charge-ordered phase also features a prominent dielectric peak which bears
a similarity to relaxor ferroelectrics. Its dispersion is determined by
free-electron screening and pushed by disorder well below the transition
temperature. The source of this disorder can be found in the anion layers which
randomly perturb BEDT-TTF molecules through hydrogen bonds. | [0, 1, 0, 0, 0, 0] |
Title: Robust Photometric Stereo Using Learned Image and Gradient Dictionaries,
Abstract: Photometric stereo is a method for estimating the normal vectors of an object
from images of the object under varying lighting conditions. Motivated by
several recent works that extend photometric stereo to more general objects and
lighting conditions, we study a new robust approach to photometric stereo that
utilizes dictionary learning. Specifically, we propose and analyze two
approaches to adaptive dictionary regularization for the photometric stereo
problem. First, we propose an image preprocessing step that utilizes an
adaptive dictionary learning model to remove noise and other non-idealities
from the image dataset before estimating the normal vectors. We also propose an
alternative model where we directly apply the adaptive dictionary
regularization to the normal vectors themselves during estimation. We study the
practical performance of both methods through extensive simulations, which
demonstrate the state-of-the-art performance of both methods in the presence of
noise. | [0, 0, 0, 1, 0, 0] |
Title: Toward Low-Flying Autonomous MAV Trail Navigation using Deep Neural Networks for Environmental Awareness,
Abstract: We present a micro aerial vehicle (MAV) system, built with inexpensive
off-the-shelf hardware, for autonomously following trails in unstructured,
outdoor environments such as forests. The system introduces a deep neural
network (DNN) called TrailNet for estimating the view orientation and lateral
offset of the MAV with respect to the trail center. The DNN-based controller
achieves stable flight without oscillations by avoiding overconfident behavior
through a loss function that includes both label smoothing and entropy reward.
In addition to the TrailNet DNN, the system also utilizes vision modules for
environmental awareness, including another DNN for object detection and a
visual odometry component for estimating depth for the purpose of low-level
obstacle detection. All vision systems run in real time on board the MAV via a
Jetson TX1. We provide details on the hardware and software used, as well as
implementation details. We present experiments showing the ability of our
system to navigate forest trails more robustly than previous techniques,
including autonomous flights of 1 km. | [1, 0, 0, 0, 0, 0] |
Title: Origin of Charge Separation at Organic Photovoltaic Heterojunctions: A Mesoscale Quantum Mechanical View,
Abstract: The high efficiency of charge generation within organic photovoltaic blends
apparently contrasts with the strong "classical" attraction between newly
formed electron-hole pairs. Several factors have been identified as possible
facilitators of charge dissociation, such as quantum mechanical coherence and
delocalization, structural and energetic disorder, built-in electric fields,
and nanoscale intermixing of the donor and acceptor components of the blends. Our
mesoscale quantum-chemical model allows an unbiased assessment of their
relative importance, through excited-state calculations on systems containing
thousands of donor and acceptor sites. The results on several model
heterojunctions confirm that the classical model severely overestimates the
binding energy of the electron-hole pairs, produced by vertical excitation from
the electronic ground state. Using physically sensible parameters for the
individual materials, we find that the quantum mechanical energy difference
between the lowest interfacial charge transfer states and the fully separated
electron and hole is of the order of the thermal energy. | [0, 1, 0, 0, 0, 0] |
Title: Spectral Decimation for Families of Self-Similar Symmetric Laplacians on the Sierpinski Gasket,
Abstract: We construct a one-parameter family of Laplacians on the Sierpinski Gasket
that are symmetric and self-similar for the 9-map iterated function system
obtained by iterating the standard 3-map iterated function system. Our main
result is the fact that all these Laplacians satisfy a version of spectral
decimation that builds a precise catalog of eigenvalues and eigenfunctions for
any choice of the parameter. We give a number of applications of this spectral
decimation. We also prove analogous results for fractal Laplacians on the unit
Interval, and this yields an analogue of the classical Sturm-Liouville theory
for the eigenfunctions of these one-dimensional Laplacians. | [0, 0, 1, 0, 0, 0] |
Title: Identification of multi-object dynamical systems: consistency and Fisher information,
Abstract: Learning the model parameters of a multi-object dynamical system from partial
and perturbed observations is a challenging task. Despite recent numerical
advancements in learning these parameters, theoretical guarantees are extremely
scarce. In this article, we study the identifiability of these parameters and
the consistency of the corresponding maximum likelihood estimate (MLE) under
assumptions on the different components of the underlying multi-object system.
In order to understand the impact of the various sources of observation noise
on the ability to learn the model parameters, we study the asymptotic variance
of the MLE through the associated Fisher information matrix. For example, we
show that specific aspects of the multi-target tracking (MTT) problem such as
detection failures and unknown data association lead to a loss of information
which is quantified in special cases of interest. | [0, 0, 1, 1, 0, 0] |
Title: Elliptic curves maximal over extensions of finite base fields,
Abstract: Given an elliptic curve $E$ over a finite field $\mathbb{F}_q$ we study the
finite extensions $\mathbb{F}_{q^n}$ of $\mathbb{F}_q$ such that the number of
$\mathbb{F}_{q^n}$-rational points on $E$ attains the Hasse upper bound. We
obtain an upper bound on the degree $n$ for $E$ ordinary using an estimate for
linear forms in logarithms, which allows us to compute the pairs of isogeny
classes of such curves and degree $n$ for small $q$. Using a consequence of
Schmidt's Subspace Theorem, we improve the upper bound to $n\leq 11$ for
sufficiently large $q$. We also show that there are infinitely many isogeny
classes of ordinary elliptic curves with $n=3$. | [0, 0, 1, 0, 0, 0] |
Title: CURE: Curvature Regularization For Missing Data Recovery,
Abstract: Missing data recovery is an important and yet challenging problem in imaging
and data science. Successful models often adopt certain carefully chosen
regularization. Recently, the low dimension manifold model (LDMM) was
introduced by S. Osher et al. and shown to be effective in image inpainting. They
observed that enforcing low dimensionality on the image patch manifold serves as a
good image regularizer. In this paper, we observe that the low dimension manifold
regularization alone is sometimes not enough, and we need
smoothness as well. For that, we introduce a new regularization by combining
the low dimension manifold regularization with a higher order Curvature
Regularization, and we call this new regularization CURE for short. The key
step of solving CURE is to solve a biharmonic equation on a manifold. We
further introduce a weighted version of CURE, called WeCURE, in a similar
manner as the weighted nonlocal Laplacian (WNLL) method. Numerical experiments
for image inpainting and semi-supervised learning show that the proposed CURE
and WeCURE significantly outperform LDMM and WNLL respectively. | [1, 0, 0, 0, 0, 0] |
Title: Nonlinear parametric excitation effect induces stability transitions in swimming direction of flexible superparamagnetic microswimmers,
Abstract: Microscopic artificial swimmers have recently become highly attractive due to
their promising potential for biomedical applications. The pioneering work of
Dreyfus et al. (2005) demonstrated the motion of a microswimmer with an
undulating chain of superparamagnetic beads, which is actuated by an
oscillating external magnetic field. Interestingly, it has also been
theoretically predicted that the swimming direction of this swimmer will
undergo a $90^\circ$-transition when the magnetic field's oscillation
amplitude is increased above a critical value of $\sqrt{2}$. In this work, we
further investigate this transition both theoretically and experimentally by
using numerical simulations and presenting a novel flexible microswimmer with a
superparamagnetic head. We realize the $90^\circ$-transition in swimming
direction, prove that this effect depends on both frequency and amplitude of
the oscillating magnetic field, and demonstrate the existence of an optimal
amplitude under which maximal swimming speed can be achieved. By
asymptotically analyzing the dynamic motion of the microswimmer with a minimal
two-link model, we reveal that the stability transitions representing the
changes in the swimming direction are induced by the effect of nonlinear
parametric excitation. | [0, 1, 0, 0, 0, 0] |
Title: $L^p$ Mapping Properties for the Cauchy-Riemann Equations on Lipschitz Domains Admitting Subelliptic Estimates,
Abstract: We show that on bounded Lipschitz pseudoconvex domains that admit good weight
functions the $\overline{\partial}$-Neumann operators $N_q,
\overline{\partial}^* N_{q}$, and $\overline{\partial} N_{q}$ are bounded on
$L^p$ spaces for some values of $p$ greater than 2. | [0, 0, 1, 0, 0, 0] |
Title: Fast dose optimization for rotating shield brachytherapy,
Abstract: Purpose: To provide a fast computational method, based on the proximal graph
solver (POGS) - a convex optimization solver using the alternating direction
method of multipliers (ADMM) - for calculating an optimal treatment plan in
rotating shield brachytherapy (RSBT). RSBT treatment planning has more degrees
of freedom than conventional high-dose-rate brachytherapy (HDR-BT) due to the
addition of emission direction, and this necessitates a fast optimization
technique to enable clinical usage.
Methods: The multi-helix RSBT (H-RSBT) delivery technique was considered with
five representative cervical cancer patients. Treatment plans were generated
for all patients using the POGS method and the previously considered commercial
solver IBM CPLEX. The rectum, bladder, sigmoid, high-risk clinical target
volume (HR-CTV), and HR-CTV boundary were the structures considered in our
optimization problem, called the asymmetric dose-volume optimization with
smoothness control. Dose calculation resolution was 1x1x3 mm^3 for all cases.
The H-RSBT applicator has 6 helices, with 33.3 mm of translation along the
applicator per helical rotation and 1.7 mm spacing between dwell positions,
yielding 17.5 degree emission angle spacing per 5 mm along the applicator.
Results: For each patient, HR-CTV D90, HR-CTV D100, rectum D2cc, sigmoid D2cc,
and bladder D2cc matched within 1% for CPLEX and POGS. We also obtained similar
EQD2 figures between CPLEX and POGS. POGS was around 18 times faster than
CPLEX. Over all patients, total optimization times were 32.1-65.4 seconds for
CPLEX and 2.1-3.9 seconds for POGS.
Conclusions: POGS reduced treatment plan optimization time by a factor of
around 18 for RSBT, with similar HR-CTV D90, OAR D2cc values, and EQD2 figures
relative to CPLEX, which is significant progress toward clinical translation of
RSBT. POGS
is also applicable to conventional HDR-BT. | [0, 1, 1, 0, 0, 0] |
Title: 3D Modeling of Electric Fields in the LUX Detector,
Abstract: This work details the development of a three-dimensional (3D) electric field
model for the LUX detector. The detector took data during two periods of
weakly interacting massive particle (WIMP) searches. After the
first period completed, a time-varying non-uniform negative charge developed in
the polytetrafluoroethylene (PTFE) panels that define the radial boundary of
the detector's active volume. This caused electric field variations in the
detector in time, depth and azimuth, generating an electrostatic
radially-inward force on electrons on their way upward to the liquid surface.
To map this behavior, 3D electric field maps of the detector's active volume
were built on a monthly basis. This was done by fitting a model built in COMSOL
Multiphysics to the uniformly distributed calibration data that were collected
on a regular basis. The modeled average PTFE charge density increased over the
course of the exposure from -3.6 to $-5.5~\mu$C/m$^2$. From our studies, we
deduce that the electric field magnitude varied while the mean value of the
field of $\sim200$~V/cm remained constant throughout the exposure. As a result
of this work the varying electric fields and their impact on event
reconstruction and discrimination were successfully modeled. | [0, 1, 0, 0, 0, 0] |
Title: Optimal Communication Strategies in Networked Cyber-Physical Systems with Adversarial Elements,
Abstract: This paper studies optimal communication and coordination strategies in
cyber-physical systems for both defender and attacker within a game-theoretic
framework. We model the communication network of a cyber-physical system as a
sensor network which involves one single Gaussian source observed by many
sensors, subject to additive independent Gaussian observation noises. The
sensors communicate with the estimator over a coherent Gaussian multiple access
channel. The aim of the receiver is to reconstruct the underlying source with
minimum mean squared error. The scenario of interest here is one where some of
the sensors are captured by the attacker and they act as the adversary
(jammer): they strive to maximize distortion. The receiver (estimator) knows
the captured sensors but still cannot simply ignore them due to the multiple
access channel, i.e., the outputs of all sensors are summed to generate the
estimator input. We show that the ability of transmitter sensors to secretly
agree on a random event, that is "coordination", plays a key role in the
analysis... | [1, 0, 0, 0, 0, 0] |
Title: First- and Second-Order Models of Recursive Arithmetics,
Abstract: We study a quadruple of interrelated subexponential subsystems of arithmetic
WKL$_0^-$, RCA$^-_0$, I$\Delta_0$, and $\Delta$RA$_1$, which complement the
similarly related quadruple WKL$_0$, RCA$_0$, I$\Sigma_1$, and PRA studied by
Simpson, and the quadruple WKL$_0^\ast$, RCA$_0^\ast$, I$\Delta_0$(exp), and
EFA studied by Simpson and Smith. We then explore the space of subexponential
arithmetic theories between I$\Delta_0$ and I$\Delta_0$(exp). We introduce and
study first- and second-order theories of recursive arithmetic $A$RA$_1$ and
$A$RA$_2$ capable of characterizing various computational complexity classes
and based on function algebras $A$, studied by Clote and others. | [1, 0, 0, 0, 0, 0] |
Title: Land Cover Classification via Multi-temporal Spatial Data by Recurrent Neural Networks,
Abstract: Nowadays, modern earth observation programs produce huge volumes of satellite
image time series (SITS) that can be useful for monitoring geographical areas
through time. How to efficiently analyze this kind of information is still an
open question in the remote sensing field. Recently, deep learning methods have
proved suitable for dealing with remote sensing data, mainly for scene
classification (i.e., Convolutional Neural Networks - CNNs - on single images),
while only very few studies involve temporal deep learning approaches
(i.e., Recurrent Neural Networks - RNNs) for remote sensing time series.
In this letter we evaluate the ability of Recurrent Neural Networks, in
particular the Long Short-Term Memory (LSTM) model, to perform land cover
classification considering multi-temporal spatial data derived from a time
series of satellite images. We carried out experiments on two different
datasets considering both pixel-based and object-based classification. The
obtained results show that Recurrent Neural Networks are competitive compared
to state-of-the-art classifiers, and may outperform classical approaches in the
presence of underrepresented and/or highly mixed classes. We also show that
using the alternative feature representation generated by the LSTM can improve the
performance of standard classifiers. | [1, 0, 0, 0, 0, 0] |
Title: Disentangling by Factorising,
Abstract: We define and address the problem of unsupervised learning of disentangled
representations on data generated from independent factors of variation. We
propose FactorVAE, a method that disentangles by encouraging the distribution
of representations to be factorial and hence independent across the dimensions.
We show that it improves upon $\beta$-VAE by providing a better trade-off
between disentanglement and reconstruction quality. Moreover, we highlight the
problems of a commonly used disentanglement metric and introduce a new metric
that does not suffer from them. | [0, 0, 0, 1, 0, 0] |
Title: Symmetries and synchronization in multilayer random networks,
Abstract: In the light of the recently proposed scenario of asymmetry-induced
synchronization (AISync), in which dynamical uniformity and consensus in a
distributed system would demand certain asymmetries in the underlying network,
we investigate here the influence of some regularities in the interlayer
connection patterns on the synchronization properties of multilayer random
networks. More specifically, by considering a Stuart-Landau model of complex
oscillators with random frequencies, we report for multilayer networks a
dynamical behavior that could be also classified as a manifestation of AISync.
We show, namely, that the presence of certain symmetries in the interlayer
connection pattern tends to diminish the synchronization capability of the
whole network or, in other words, asymmetries in the interlayer connections
would enhance synchronization in such structured networks. Our results might
help the understanding not only of the AISync mechanism itself, but also of its
possible role in the determination of the interlayer connection pattern of
multilayer and other structured networks with optimal synchronization
properties. | [0, 1, 0, 0, 0, 0] |
Title: Application of Decision Rules for Handling Class Imbalance in Semantic Segmentation,
Abstract: As part of autonomous car driving systems, semantic segmentation is an
essential component to obtain a full understanding of the car's environment.
One difficulty that occurs while training neural networks for this purpose is
class imbalance in the training data. Consequently, a neural network trained on
unbalanced data in combination with maximum a-posteriori classification may
easily ignore classes that are rare in terms of their frequency in the dataset.
However, these classes are often of highest interest. We approach such
potential misclassifications by weighting the posterior class probabilities
with the prior class probabilities which in our case are the inverse
frequencies of the corresponding classes in the training dataset. More
precisely, we adopt a localized method by computing the priors pixel-wise such
that the impact can be analyzed at pixel level as well. In our experiments, we
train one network from scratch using a proprietary dataset containing 20,000
annotated frames of video sequences recorded from street scenes. The evaluation
on our test set shows an increase of average recall with regard to instances of
pedestrians and info signs by $25\%$ and $23.4\%$, respectively. In addition,
we significantly reduce the non-detection rate for instances of the same
classes by $61\%$ and $38\%$. | [1, 0, 0, 1, 0, 0] |
Title: Muchnik degrees and cardinal characteristics,
Abstract: We provide a pair of dual results, each stating the coincidence of highness
properties from computability theory. We provide an analogous pair of dual
results on the coincidence of cardinal characteristics within ZFC.
A mass problem is a set of functions on $\omega$. For mass problems $\mathcal
C, \mathcal D$, one says that $\mathcal C$ is Muchnik reducible to $\mathcal D$
if each function in $\mathcal D$ computes a function in $\mathcal C$. In this
paper we view highness properties as mass problems, and compare them with
respect to Muchnik reducibility and its uniform strengthening, Medvedev
reducibility.
Let $\mathcal D(p)$ be the mass problem of infinite bit sequences $y$ (i.e.,
0,1 valued functions) such that for each computable bit sequence $x$, the
asymptotic lower density $\underline \rho$ of the agreement bit sequence $x
\leftrightarrow y$ is at most $p$ (this sequence takes the value 1 at a bit
position iff $x$ and $y$ agree).
We show that all members of this family of mass problems parameterized by a
real $p$ with $0 < p<1/2 $ have the same complexity in the sense of Muchnik
reducibility. This also yields a new version of Monin's affirmative answer to
the "Gamma question", whether $\Gamma(A)< 1/2$ implies $\Gamma(A)=0$ for each
Turing oracle $A$.
We also show, together with Joseph Miller, that for any order function~$g$
there exists a faster growing order function $h $ such that $\mathrm{IOE}(g) $
is strictly Muchnik below $\mathrm{IOE}(h)$.
We study cardinal characteristics analogous to the highness properties above.
For instance, $\mathfrak d (p)$ is the least size of a set $G$ of bit sequences
so that for each bit sequence $x$ there is a bit sequence $y$ in $G$ so that
$\underline \rho (x \leftrightarrow y) >p$. We prove within ZFC all the
coincidences of cardinal characteristics that are the analogs of the results
above. | [0, 0, 1, 0, 0, 0] |
Title: Existence of solutions for a semirelativistic Hartree equation with unbounded potentials,
Abstract: We prove the existence of a solution to the semirelativistic Hartree equation
$$\sqrt{-\Delta+m^2}u+ V(x) u = A(x)\left( W * |u|^p \right) |u|^{p-2}u $$
under suitable growth assumption on the potential functions $V$ and $A$. In
particular, both can be unbounded from above. | [0, 0, 1, 0, 0, 0] |
Title: Lattice Operations on Terms over Similar Signatures,
Abstract: Unification and generalization are operations on two terms computing
respectively their greatest lower bound and least upper bound when the terms
are quasi-ordered by subsumption up to variable renaming (i.e., $t_1\preceq
t_2$ iff $t_1 = t_2\sigma$ for some variable substitution $\sigma$). When term
signatures are such that distinct functor symbols may be related with a fuzzy
equivalence (called a similarity), these operations can be formally extended to
tolerate mismatches on functor names and/or arity or argument order. We
reformulate and extend previous work with a declarative approach defining
unification and generalization as sets of axioms and rules forming a complete
constraint-normalization proof system. These include the Reynolds-Plotkin
term-generalization procedures, Maria Sessa's "weak" unification with partially
fuzzy signatures and its corresponding generalization, as well as novel
extensions of such operations to fully fuzzy signatures (i.e., similar functors
with possibly different arities). One advantage of this approach is that it
requires no modification of the conventional data structures for terms and
substitutions. This and the fact that these declarative specifications are
efficiently executable conditional Horn clauses offer great practical
potential for fuzzy information-handling applications. | [1, 0, 0, 0, 0, 0] |
Title: Towards Physically Safe Reinforcement Learning under Supervision,
Abstract: This paper addresses the question of how a previously available control
policy $\pi_s$ can be used as a supervisor to more quickly and safely train a
new learned control policy $\pi_L$ for a robot. A weighted average of the
supervisor and learned policies is used during trials, with a heavier weight
initially on the supervisor, in order to allow safe and useful physical trials
while the learned policy is still ineffective. During the process, the weight
is adjusted to favor the learned policy. As weights are adjusted, the learned
network must compensate so as to give safe and reasonable outputs under the
different weights. A pioneer network is introduced that pre-learns a policy
that performs similarly to the current learned policy under the planned next
step for new weights; this pioneer network then replaces the currently learned
network in the next set of trials. Experiments in OpenAI Gym demonstrate the
effectiveness of the proposed method. | [1, 0, 0, 1, 0, 0] |
Title: Complex Valued Risk Diversification,
Abstract: Risk diversification is one of the dominant concerns for portfolio managers.
Various portfolio constructions have been proposed to minimize the risk of the
portfolio under some constraints, including expected returns. We propose a
portfolio construction method that incorporates the complex valued principal
component analysis into the risk diversification portfolio construction. The
proposed method is verified to outperform the conventional risk parity and risk
diversification portfolio constructions. | [0, 0, 0, 0, 0, 1] |
Title: An Overview on Application of Machine Learning Techniques in Optical Networks,
Abstract: Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these network-generated data and to make decisions pertaining
to the proper functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. Such complexity increase is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing new possible research directions. | [0, 0, 0, 1, 0, 0] |
Title: Study of Minor Actinides Transmutation in PWR MOX fuel,
Abstract: The management of long-lived radionuclides in spent fuel is a key issue to
achieve the closed nuclear fuel cycle and the sustainable development of
nuclear energy. Partitioning-Transmutation is supposed to be an efficient
method to treat the long-lived radionuclides in spent fuel. Some Minor
Actinides (MAs) have very long half-lives among the radionuclides in the spent
fuel. Accordingly, the study of MA transmutation is significant for the
post-processing of spent fuel.
In the present work, the transmutations in Pressurized Water Reactor (PWR)
mixed oxide (MOX) fuel are investigated through the Monte Carlo based code RMC.
Two kinds of MAs, $^{237}$Np and five MAs ($^{237}$Np, $^{241}$Am, $^{243}$Am,
$^{244}$Cm and $^{245}$Cm) are incorporated homogeneously into the MOX fuel
assembly. The transmutation of MAs is simulated with different initial MOX
concentrations.
The results indicate good overall transmutation efficiency for both
initial MOX concentrations, especially for the two MAs primarily
generated in the UOX fuel, $^{237}$Np and $^{241}$Am. In addition, the
inclusion of $^{237}$Np in MOX has no large influence on other MAs, while the
transmutation efficiency of $^{237}$Np is excellent. The transmutation of MAs
in MOX fuel depletion is expected to be a new, efficient nuclear spent fuel
management method for future nuclear power generation. | [0, 1, 0, 0, 0, 0] |
Title: Undesired parking spaces and contractible pieces of the noncrossing partition link,
Abstract: There are two natural simplicial complexes associated to the noncrossing
partition lattice: the order complex of the full lattice and the order complex
of the lattice with its bounding elements removed. The latter is a complex that
we call the noncrossing partition link because it is the link of an edge in the
former. The first author and his coauthors conjectured that various collections
of simplices of the noncrossing partition link (determined by the undesired
parking spaces in the corresponding parking functions) form contractible
subcomplexes. In this article we prove their conjecture by combining the fact
that the star of a simplex in a flag complex is contractible with the second
author's theory of noncrossing hypertrees. | [
0,
0,
1,
0,
0,
0
] |
Title: Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization,
Abstract: Due to their simplicity and excellent performance, parallel asynchronous
variants of stochastic gradient descent have become popular methods to solve a
wide range of large-scale optimization problems on multi-core architectures.
Yet, despite their practical success, support for nonsmooth objectives is still
lacking, making them unsuitable for many problems of interest in machine
learning, such as the Lasso, group Lasso or empirical risk minimization with
convex constraints.
In this work, we propose and analyze ProxASAGA, a fully asynchronous sparse
method inspired by SAGA, a variance reduced incremental gradient algorithm. The
proposed method is easy to implement and significantly outperforms the state of
the art on several nonsmooth, large-scale problems. We prove that our method
achieves a theoretical linear speedup with respect to the sequential version
under assumptions on the sparsity of gradients and block-separability of the
proximal term. Empirical benchmarks on a multi-core architecture illustrate
practical speedups of up to 12x on a 20-core machine. | [
1,
0,
1,
1,
0,
0
] |
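The nonsmooth objectives named in the abstract (Lasso and friends) are handled through a proximal term; for the L1 penalty this reduces to the soft-thresholding operator. A minimal sketch of that building block — not the paper's ProxASAGA implementation, and the step size and data below are purely illustrative:

```python
def prox_l1(x, step):
    """Proximal operator of step * ||x||_1, applied coordinate-wise
    (soft-thresholding). This is the nonsmooth piece that proximal
    gradient methods apply after each (stochastic) gradient step
    when solving Lasso-type problems."""
    def soft(v):
        mag = max(abs(v) - step, 0.0)
        return mag if v > 0 else -mag if v < 0 else 0.0
    return [soft(v) for v in x]

# One illustrative shrinkage: small coordinates are zeroed out,
# large ones are pulled toward zero by `step`.
print(prox_l1([1.5, -0.2, 0.05], 0.1))  # -> approximately [1.4, -0.1, 0.0]
```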
Title: Sparsity-promoting and edge-preserving maximum a posteriori estimators in non-parametric Bayesian inverse problems,
Abstract: We consider the inverse problem of recovering an unknown functional parameter
$u$ in a separable Banach space, from a noisy observation $y$ of its image
through a known possibly non-linear ill-posed map ${\mathcal G}$. The data $y$
is finite-dimensional and the noise is Gaussian. We adopt a Bayesian approach
to the problem and consider Besov space priors (see Lassas et al. 2009), which
are well-known for their edge-preserving and sparsity-promoting properties and
have recently attracted wide attention especially in the medical imaging
community.
Our key result is to show that in this non-parametric setup the maximum a
posteriori (MAP) estimates are characterized by the minimizers of a generalized
Onsager--Machlup functional of the posterior. This is done independently for
the so-called weak and strong MAP estimates, which as we show coincide in our
context. In addition, we prove a form of weak consistency for the MAP
estimators in the infinitely informative data limit. Our results are remarkable
for two reasons: first, the prior distribution is non-Gaussian and does not
meet the smoothness conditions required in previous research on non-parametric
MAP estimates. Second, the result analytically justifies existing uses of the
MAP estimate in finite but high dimensional discretizations of Bayesian inverse
problems with the considered Besov priors. | [
0,
0,
1,
1,
0,
0
] |
Title: Non-LTE line formation of Fe in late-type stars IV: Modelling of the solar centre-to-limb variation in 3D,
Abstract: Our ability to model the shapes and strengths of iron lines in the solar
spectrum is a critical test of the accuracy of the solar iron abundance, which
sets the absolute zero-point of all stellar metallicities. We use an extensive
463-level Fe atom with new photoionisation cross-sections for FeI as well as
quantum mechanical calculations of collisional excitation and charge transfer
with neutral hydrogen; the latter effectively remove a free parameter that has
hampered all previous line formation studies of Fe in non-local thermodynamic
equilibrium (NLTE). For the first time, we use realistic 3D NLTE calculations
of Fe for a quantitative comparison to solar observations. We confront our
theoretical line profiles with observations taken at different viewing angles
across the solar disk with the Swedish 1-m Solar Telescope. We find that 3D
modelling well reproduces the observed centre-to-limb behaviour of spectral
lines overall, but highlight aspects that may require further work, especially
cross-sections for inelastic collisions with electrons. Our inferred solar iron
abundance is log(eps(Fe))=7.48+-0.04. | [
0,
1,
0,
0,
0,
0
] |
Title: Transmission clusters in the HIV-1 epidemic among men who have sex with men in Montreal, Quebec, Canada,
Abstract: Background. Several studies have used phylogenetics to investigate Human
Immunodeficiency Virus (HIV) transmission among Men who have Sex with Men
(MSMs) in Montreal, Quebec, Canada, revealing many transmission clusters. The
Quebec HIV genotyping program sequence database now includes viral sequences
from close to 4,000 HIV-positive individuals classified as MSMs. In this paper,
we investigate clustering in those data by comparing results from several
methods: the conventional Bayesian and maximum likelihood-bootstrap methods,
and two more recent algorithms, DM-PhyClus, a Bayesian algorithm that produces
a measure of uncertainty for proposed partitions, and the Gap Procedure, a fast
distance-based approach. We estimate cluster growth by focusing on recent cases
in the Primary HIV Infection (PHI) stage. Results. The analyses reveal
considerable overlap between cluster estimates obtained from conventional
methods. The Gap Procedure and DM-PhyClus rely on different cluster definitions
and as a result, suggest moderately different partitions. All estimates lead to
similar conclusions about cluster expansion: several large clusters have
experienced sizeable growth, and a few new transmission clusters are likely
emerging. Conclusions. The lack of a gold standard measure for clustering
quality makes picking a best estimate among those proposed difficult. Work
aiming to refine clustering criteria would be required to improve estimates.
Nevertheless, the results unanimously stress the role that clusters play in
promoting HIV incidence among MSMs. | [
0,
0,
0,
1,
0,
0
] |
Title: Conformal blocks attached to twisted groups,
Abstract: The aim of this paper is to generalize the notion of conformal blocks to the
situation in which the Lie algebra they are attached to is not defined over a
field, but depends on covering data of curves. The result will be a sheaf of
conformal blocks on the Hurwitz stack parametrizing Galois coverings of curves.
Many features of the classical sheaves of conformal blocks are proved to hold
in this more general setting, in particular the fusion rules, the propagation
of vacua and the WZW connection. | [
0,
0,
1,
0,
0,
0
] |
Title: Statistical study on propagation characteristics of Omega signals (VLF) in magnetosphere detected by the Akebono satellite,
Abstract: This paper shows a statistical analysis of 10.2 kHz Omega broadcasts of an
artificial signal broadcast from ground stations, propagated in the
plasmasphere, and detected using an automatic detection method we developed. We
study the propagation patterns of the Omega signals to understand the
propagation characteristics that are strongly affected by plasmaspheric
electron density and the ambient magnetic field. We show the unique propagation
patterns of the Omega 10.2 kHz signal when it was broadcast from two
high-middle-latitude stations. We use about eight years of data captured by the
Poynting flux analyzer subsystem on board the Akebono satellite from October
1989 to September 1997. We demonstrate that the signals broadcast from almost
the same latitude (in geomagnetic coordinates) propagated differently depending
on the geographic latitude. We also study propagation characteristics as a
function of local time, season, and solar activity. The Omega signal tended to
propagate farther on the nightside than on the dayside and was more widely
distributed during winter than during summer. When solar activity was at
maximum, the Omega signal propagated at a lower intensity level. In contrast,
when solar activity was at minimum, the Omega signal propagated at a higher
intensity and farther from the transmitter station. | [
0,
1,
0,
0,
0,
0
] |
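The abstract's final point ties boulder survival to the diurnal skin depth. A back-of-envelope sketch using the standard half-space conduction formula delta = sqrt(kappa * P / pi); the diffusivity value and lunar day length below are generic assumptions for illustration, not numbers taken from the paper:

```python
import math

def thermal_skin_depth(diffusivity_m2_s, period_s):
    """Diurnal thermal skin depth of a conducting half-space,
    delta = sqrt(kappa * P / pi)."""
    return math.sqrt(diffusivity_m2_s * period_s / math.pi)

# Assumed rock thermal diffusivity ~1e-6 m^2/s and a lunar synodic
# day of ~29.5 Earth days (both illustrative values).
lunar_day_s = 29.5 * 86400.0
print(thermal_skin_depth(1e-6, lunar_day_s))  # on the order of 1 m
```

With these assumed inputs the skin depth comes out near 1 m, consistent with the abstract's emphasis on ~1 m boulders as the size where the thermal wave loses contact with the interior.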
Title: Thermally induced stresses in boulders on airless body surfaces, and implications for rock breakdown,
Abstract: This work investigates the macroscopic thermomechanical behavior of lunar
boulders by modeling their response to diurnal thermal forcing. Our results
reveal a bimodal, spatiotemporally-complex stress response. During sunrise,
stresses occur in the boulders' interiors that are associated with large-scale
temperature gradients developed due to overnight cooling. During sunset,
stresses occur at the boulders' exteriors due to the cooling and contraction of
the surface. Both kinds of stresses are on the order of 10 MPa in 1 m boulders
and decrease for smaller diameters, suggesting that larger boulders break down
more quickly. Boulders <30 cm exhibit a weak response to thermal forcing,
suggesting a threshold below which crack propagation may not occur. Boulders of
any size buried by regolith are shielded from thermal breakdown. As boulders
increase in size (>1 m), stresses increase to several 10s of MPa as the
behavior of their surfaces approaches that of an infinite halfspace. As the
thermal wave loses contact with the boulder interior, stresses become limited
to the near-surface. This suggests that the survival time of a boulder is not
only controlled by the amplitude of induced stress, but also by its diameter as
compared to the diurnal skin depth. While stresses on the order of 10 MPa are
enough to drive crack propagation in terrestrial environments, crack
propagation rates in vacuum are not well constrained. We explore the
relationship between boulder size, stress, and the direction of crack
propagation, and discuss the implications for the relative breakdown rates and
estimated lifetimes of boulders on airless body surfaces. | [
0,
1,
0,
0,
0,
0
] |
Title: Some observations about generalized quantifiers in logics of imperfect information,
Abstract: We analyze the definitions of generalized quantifiers of imperfect
information that have been proposed by F.Engström. We argue that these
definitions are just embeddings of the first-order generalized quantifiers into
team semantics, and fail to capture an adequate notion of team-theoretical
generalized quantifier, save for the special cases in which the quantifiers are
applied to flat formulas. We also criticize the meaningfulness of the
monotone/nonmonotone distinction in this context. We make some proposals for a
more adequate definition of generalized quantifiers of imperfect information. | [
0,
0,
1,
0,
0,
0
] |
Title: Predicting Demographics of High-Resolution Geographies with Geotagged Tweets,
Abstract: In this paper, we consider the problem of predicting demographics of
geographic units given geotagged Tweets that are composed within these units.
Traditional survey methods that offer demographics estimates are usually
limited in terms of geographic resolution, geographic boundaries, and time
intervals. Thus, it would be highly useful to develop computational methods
that can complement traditional survey methods by offering demographics
estimates at finer geographic resolutions, with flexible geographic boundaries
(i.e. not confined to administrative boundaries), and at different time
intervals. While prior work has focused on predicting demographics and health
statistics at relatively coarse geographic resolutions such as the county-level
or state-level, we introduce an approach to predict demographics at finer
geographic resolutions such as the blockgroup-level. For the task of predicting
gender and race/ethnicity counts at the blockgroup-level, an approach adapted
from prior work to our problem achieves an average correlation of 0.389
(gender) and 0.569 (race) on a held-out test dataset. Our approach outperforms
this prior approach with an average correlation of 0.671 (gender) and 0.692
(race). | [
1,
0,
0,
1,
0,
0
] |
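The evaluation above compares predicted and observed counts via correlation. Assuming this is the Pearson correlation coefficient (the abstract does not specify), a minimal pure-Python implementation:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy predicted vs. observed blockgroup counts (illustrative numbers only).
print(pearson([10, 20, 30, 40], [12, 18, 33, 41]))  # close to 1
```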
Title: Text Compression for Sentiment Analysis via Evolutionary Algorithms,
Abstract: Can textual data be compressed intelligently without losing accuracy in
evaluating sentiment? In this study, we propose a novel evolutionary
compression algorithm, PARSEC (PARts-of-Speech for sEntiment Compression),
which makes use of Parts-of-Speech tags to compress text in a way that
sacrifices minimal classification accuracy when used in conjunction with
sentiment analysis algorithms. An analysis of PARSEC with eight commercial and
non-commercial sentiment analysis algorithms on twelve English sentiment data
sets reveals that accurate compression is possible with (0%, 1.3%, 3.3%) loss
in sentiment classification accuracy for (20%, 50%, 75%) data compression with
PARSEC using LingPipe, the most accurate of the sentiment algorithms. Other
sentiment analysis algorithms are more severely affected by compression. We
conclude that significant compression of text data is possible for sentiment
analysis depending on the accuracy demands of the specific application and the
specific sentiment analysis algorithm used. | [
1,
0,
0,
1,
0,
0
] |
Title: Training large margin host-pathogen protein-protein interaction predictors,
Abstract: Detection of protein-protein interactions (PPIs) plays a vital role in
molecular biology. Particularly, infections are caused by the interactions of
host and pathogen proteins. It is important to identify host-pathogen
interactions (HPIs) to discover new drugs to counter infectious diseases.
Conventional wet lab PPI prediction techniques have limitations in terms of
large scale application and budget. Hence, computational approaches are
developed to predict PPIs. This study aims to develop large margin machine
learning models to predict interspecies PPIs with a special interest in
host-pathogen protein interactions (HPIs). Especially, we focus on seeking
answers to three queries that arise while developing an HPI predictor. 1) How
should we select negative samples? 2) What should be the size of negative
samples as compared to the positive samples? 3) What type of margin violation
penalty should be used to train the predictor? We compare two available methods
for negative sampling. Moreover, we propose a new method of assigning weights
to each training example in weighted SVM depending on the distance of the
negative examples from the positive examples. We have also developed a web
server for our HPI predictor called HoPItor (Host Pathogen Interaction
predicTOR) that can predict interactions between human and viral proteins. This
webserver can be accessed at the URL:
this http URL. | [
1,
0,
0,
1,
0,
0
] |
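The distance-based weighting proposed above can be sketched as follows. The exact functional form used in the paper is not reproduced here, so the 1/(1+d) weighting and the toy points are purely illustrative:

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def negative_weights(negatives, positives):
    """Assign each negative training example a weight from its distance
    to the nearest positive example; here, closer ('harder') negatives
    get larger weights. The paper's actual weighting may differ."""
    weights = []
    for n in negatives:
        d = min(euclid(n, p) for p in positives)
        weights.append(1.0 / (1.0 + d))
    return weights

pos = [(0.0, 0.0), (1.0, 0.0)]
neg = [(0.5, 0.1), (5.0, 5.0)]
print(negative_weights(neg, pos))  # the nearby negative gets the larger weight
```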
Title: A Useful Solution of the Coupon Collector's Problem,
Abstract: The Coupon Collector's Problem is one of the few mathematical problems that
make news headlines regularly. The reasons for this are, on the one hand, the
immense popularity of soccer sticker albums (called Paninimania) and, on the
other hand, that no known solution is able to take into account all effects
such as replacement (limited purchasing of missing stickers) or swapping. In previous
papers we have proven that the classical assumptions are not fulfilled in
practice. Therefore we define new assumptions that match reality. Based on
these assumptions we are able to derive formulae for the mean number of
stickers needed (and the associated standard deviation) that are able to take
into account all effects that occur in practical collecting. Thus collectors
can estimate the average cost of completion of an album and its standard
deviation just based on elementary calculations. From a practical point of view
we consider the Coupon Collector's problem as solved.
-----
The sticker collecting problem is one of the few mathematical problems that
regularly appear in the news headlines. This is due, on the one hand, to the
great popularity of soccer sticker albums (called Paninimania) and, on the
other hand, to the fact that so far no solution exists that takes into account
all relevant effects such as replacement purchases or swapping. We have
already shown that the classical assumptions do not match reality. We
therefore formulate new assumptions that better reflect practice. Building on
these, we derive formulae for the mean number of stickers needed (and its
standard deviation) that take into account all effects relevant in practical
collecting. Collectors can thus determine the mean cost of completing an album
and its standard deviation using only elementary calculations. For practical
purposes, the sticker collecting problem is thereby solved. | [
0,
0,
1,
0,
0,
0
] |
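Under the classical assumptions that the paper argues against, the mean number of purchases for an n-sticker album is n times the n-th harmonic number. A sketch of that baseline next to a direct simulation (the paper's refined formulae for replacement and swapping are not reproduced here):

```python
import random

def classical_mean(n):
    """Classical coupon-collector expectation: n * (1 + 1/2 + ... + 1/n)."""
    return n * sum(1.0 / k for k in range(1, n + 1))

def simulate(n, trials=2000, seed=0):
    """Average number of uniform random draws until all n stickers are seen."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        seen, draws = set(), 0
        while len(seen) < n:
            seen.add(rng.randrange(n))
            draws += 1
        total += draws
    return total / trials

n = 50
print(classical_mean(n))  # about 224.96 stickers for a 50-sticker album
print(simulate(n))        # close to the closed form
```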
Title: Accretion driven turbulence in filaments I: Non-gravitational accretion,
Abstract: We study accretion driven turbulence for different inflow velocities in star
forming filaments using the code ramses. Filaments are rarely isolated objects
and their gravitational potential will lead to radially dominated accretion. In
the non-gravitational case, accretion by itself can already provoke
non-isotropic, radially dominated turbulent motions responsible for the complex
structure and non-thermal line widths observed in filaments. We find that there
is a direct linear relation between the absolute value of the total density
weighted velocity dispersion and the infall velocity. The turbulent velocity
dispersion in the filaments is independent of sound speed or any net flow along
the filament. We show that the density weighted velocity dispersion acts as an
additional pressure term supporting the filament in hydrostatic equilibrium.
Comparing to observations, we find that the projected non-thermal line width
variation is generally subsonic independent of inflow velocity. | [
0,
1,
0,
0,
0,
0
] |
Title: Contextual Outlier Interpretation,
Abstract: Outlier detection plays an essential role in many data-driven applications to
identify isolated instances that are different from the majority. While many
statistical learning and data mining techniques have been used for developing
more effective outlier detection algorithms, the interpretation of detected
outliers does not receive much attention. Interpretation is becoming
increasingly important to help people trust and evaluate the developed models
through providing intrinsic reasons why certain outliers are chosen. It is
difficult, if not impossible, to simply apply feature selection for explaining
outliers due to the distinct characteristics of various detection models,
complicated structures of data in certain applications, and imbalanced
distribution of outliers and normal instances. In addition, the role of
contrastive contexts where outliers locate, as well as the relation between
outliers and contexts, are usually overlooked in interpretation. To tackle the
issues above, in this paper, we propose a novel Contextual Outlier
INterpretation (COIN) method to explain the abnormality of existing outliers
spotted by detectors. The interpretability for an outlier is achieved from
three aspects: outlierness score, attributes that contribute to the
abnormality, and contextual description of its neighborhoods. Experimental
results on various types of datasets demonstrate the flexibility and
effectiveness of the proposed framework compared with existing interpretation
approaches. | [
1,
0,
0,
1,
0,
0
] |
Title: Game Theory for Secure Critical Interdependent Gas-Power-Water Infrastructure,
Abstract: A city's critical infrastructure such as gas, water, and power systems, are
largely interdependent since they share energy, computing, and communication
resources. This, in turn, makes it challenging to endow them with fool-proof
security solutions. In this paper, a unified model for interdependent
gas-power-water infrastructure is presented and the security of this model is
studied using a novel game-theoretic framework. In particular, a zero-sum
noncooperative game is formulated between a malicious attacker who seeks to
simultaneously alter the states of the gas-power-water critical infrastructure
to increase the power generation cost and a defender who allocates
communication resources over its attack detection filters in local areas to
monitor the infrastructure. At the mixed strategy Nash equilibrium of this
game, numerical results show that the expected power generation cost deviation
is 35\% lower than the one resulting from an equal allocation of resources over
the local filters. The results also show that, at equilibrium, the
interdependence of the power system on the natural gas and water systems can
motivate the attacker to target the states of the water and natural gas systems
to change the operational states of the power grid. Conversely, the defender
allocates a portion of its resources to the water and natural gas states of the
interdependent system to protect the grid from state deviations. | [
1,
0,
0,
0,
0,
0
] |
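The equilibrium concept used above can be made concrete in the smallest case: for a 2x2 zero-sum game with no saddle point, the mixed-strategy Nash equilibrium has a closed form. This is only a toy stand-in for the paper's much larger attacker-defender game:

```python
def solve_2x2_zero_sum(a):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game with row-player
    payoff matrix a = [[a11, a12], [a21, a22]] and no saddle point.
    Returns (row player's mixed strategy, game value)."""
    (a11, a12), (a21, a22) = a
    denom = a11 - a12 - a21 + a22
    p = (a22 - a21) / denom                 # probability of playing row 1
    value = (a11 * a22 - a12 * a21) / denom
    return (p, 1 - p), value

# Matching pennies: each player randomises 50/50 and the value is 0.
strategy, value = solve_2x2_zero_sum([[1, -1], [-1, 1]])
print(strategy, value)  # -> (0.5, 0.5) 0.0
```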
Title: Implicit Entity Linking in Tweets,
Abstract: Over the years, Twitter has become one of the largest communication platforms
providing key data to various applications such as brand monitoring, trend
detection, among others. Entity linking is one of the major tasks in natural
language understanding from tweets and it associates entity mentions in text to
corresponding entries in knowledge bases in order to provide unambiguous
interpretation and additional context. State-of-the-art techniques have
focused on linking explicitly mentioned entities in tweets with reasonable
success. However, we argue that in addition to explicit mentions, e.g. "The
movie Gravity was more expensive than the mars orbiter mission", entities (the
movie Gravity) can also be mentioned implicitly, e.g. "This new space movie is
crazy. You must watch it!". This paper introduces the problem of implicit entity
linking in tweets. We propose an approach that models the entities by
exploiting their factual and contextual knowledge. We demonstrate how to use
these models to perform implicit entity linking on a ground truth dataset with
397 tweets from two domains, namely, Movie and Book. Specifically, we show: 1)
the importance of linking implicit entities and its value addition to the
standard entity linking task, and 2) the importance of exploiting contextual
knowledge associated with an entity for linking their implicit mentions. We
also make the ground truth dataset publicly available to foster the research in
this new research area. | [
1,
0,
0,
0,
0,
0
] |
Title: Non-cocompact Group Actions and $π_1$-Semistability at Infinity,
Abstract: A finitely presented 1-ended group $G$ has {\it semistable fundamental group
at infinity} if $G$ acts geometrically on a simply connected and locally
compact ANR $Y$ having the property that any two proper rays in $Y$ are
properly homotopic. This property of $Y$ captures a notion of connectivity at
infinity stronger than "1-ended", and is in fact a feature of $G$, being
independent of choices. It is a fundamental property in the homotopical study
of finitely presented groups. While many important classes of groups have been
shown to have semistable fundamental group at infinity, the question of whether
every $G$ has this property has been a recognized open question for nearly
forty years. In this paper we attack the problem by considering a proper {\it
but non-cocompact} action of a group $J$ on such an $Y$. This $J$ would
typically be a subgroup of infinite index in the geometrically acting
over-group $G$; for example $J$ might be infinite cyclic or some other subgroup
whose semistability properties are known. We divide the semistability property
of $G$ into a $J$-part and a "perpendicular to $J$" part, and we analyze how
these two parts fit together. Among other things, this analysis leads to a
proof (in a companion paper) that a class of groups previously considered to be
likely counter examples do in fact have the semistability property. | [
0,
0,
1,
0,
0,
0
] |
Title: Robust Distributed Control of DC Microgrids with Time-Varying Power Sharing,
Abstract: This paper addresses the problem of output voltage regulation for multiple
DC/DC converters connected to a microgrid, and prescribes a scheme for sharing
power among different sources. This architecture is structured in such a way
that it admits quantifiable analysis of the closed-loop performance of the
network of converters; the analysis simplifies to studying closed-loop
performance of an equivalent {\em single-converter} system. The proposed
architecture allows for the proportion in which the sources provide power to
vary with time; thus overcoming limitations of our previous designs.
Additionally, the proposed control framework is suitable to both centralized
and decentralized implementations, i.e., the same control architecture can be
employed for voltage regulation irrespective of the availability of common
load-current (or power) measurement, without the need to modify controller
parameters. The performance becomes quantifiably better with better
communication of the demanded load to all the controllers at all the converters
(in the centralized case); however, the scheme remains viable when such
communication is absent. Case studies comprising battery, PV and generic sources are
presented and demonstrate the enhanced performance of prescribed optimal
controllers for voltage regulation and power sharing. | [
1,
0,
1,
0,
0,
0
] |
Title: Quantum Lower Bounds for Tripartite Versions of the Hidden Shift and the Set Equality Problems,
Abstract: In this paper, we study quantum query complexity of the following rather
natural tripartite generalisations (in the spirit of the 3-sum problem) of the
hidden shift and the set equality problems, which we call the 3-shift-sum and
the 3-matching-sum problems.
The 3-shift-sum problem is as follows: given a table of $3\times n$ elements,
is it possible to circularly shift its rows so that the sum of the elements in
each column becomes zero? It is promised that, if this is not the case, then no
3 elements in the table sum up to zero. The 3-matching-sum problem is defined
similarly, but it is allowed to arbitrarily permute elements within each row.
For these problems, we prove lower bounds of $\Omega(n^{1/3})$ and
$\Omega(\sqrt n)$, respectively. The second lower bound is tight.
The lower bounds are proven by a novel application of the dual learning graph
framework and by using representation-theoretic tools. | [
1,
0,
0,
0,
0,
0
] |
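The 3-shift-sum decision problem defined above can be checked by brute force for small n. The paper's bounds concern quantum query complexity; this sketch only makes the problem statement concrete (shifting all three rows by the same amount preserves column sums, so row 0's shift is fixed without loss of generality):

```python
from itertools import product

def shift(row, s):
    """Circular left shift of a row by s positions."""
    return row[s:] + row[:s]

def is_3_shift_sum(table):
    """table: 3 rows of n integers. Return True iff the rows can be
    circularly shifted so that every column sums to zero."""
    r0, r1, r2 = table
    n = len(r0)
    for s1, s2 in product(range(n), repeat=2):
        b, c = shift(r1, s1), shift(r2, s2)
        if all(r0[j] + b[j] + c[j] == 0 for j in range(n)):
            return True
    return False

print(is_3_shift_sum([[1, 2], [-2, -3], [1, 1]]))  # -> True
print(is_3_shift_sum([[1, 2], [-2, -3], [1, 2]]))  # -> False
```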
Title: A Parallel Direct Cut Algorithm for High-Order Overset Methods with Application to a Spinning Golf Ball,
Abstract: Overset methods are commonly employed to enable the effective simulation of
problems involving complex geometries and moving objects such as rotorcraft.
This paper presents a novel overset domain connectivity algorithm based upon
the direct cut approach suitable for use with GPU-accelerated solvers on
high-order curved grids. In contrast to previous methods it is capable of
exploiting the highly data-parallel nature of modern accelerators. Further, the
approach is also substantially more efficient at handling the curved grids
which arise within the context of high-order methods. An implementation of this
new algorithm is presented and combined with a high-order fluid dynamics code.
The algorithm is validated against several benchmark problems, including flow
over a spinning golf ball at a Reynolds number of 150,000. | [
0,
1,
0,
0,
0,
0
] |
Title: A data assimilation algorithm: the paradigm of the 3D Leray-alpha model of turbulence,
Abstract: In this paper we survey the various implementations of a new data
assimilation (downscaling) algorithm based on spatial coarse mesh measurements.
As a paradigm, we demonstrate the application of this algorithm to the 3D
Leray-$\alpha$ subgrid scale turbulence model. Most importantly, we use this
paradigm to show that it is not always necessary to collect coarse mesh
measurements of all the state variables involved in the underlying evolutionary
system in order to recover the corresponding exact
reference solution. Specifically, we show that in the case of the 3D
Leray$-\alpha$ model of turbulence the solutions of the algorithm, constructed
using only coarse mesh observations of any two components of the
three-dimensional velocity field, and without any information of the third
component, converge, at an exponential rate in time, to the corresponding exact
reference solution of the 3D Leray$-\alpha$ model. This study serves as an
addendum to our recent work on abridged continuous data assimilation for the 2D
Navier-Stokes equations. Notably, similar results have also been recently
established for the 3D viscous Planetary Geostrophic circulation model in which
we show that coarse mesh measurements of the temperature alone are sufficient
for recovering, through our data assimilation algorithm, the full solution;
viz. the three components of velocity vector field and the temperature.
Consequently, this proves the Charney conjecture for the 3D Planetary
Geostrophic model; namely, that the history of the large spatial scales of
temperature is sufficient for determining all the other quantities (state
variables) of the model. | [
0,
1,
1,
0,
0,
0
] |
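The downscaling algorithm surveyed above is of nudging type: the observer runs a copy of the model dynamics plus a relaxation term toward the observed data. A scalar toy sketch of that mechanism only; the paper treats the 3D Leray-alpha model with spatial coarse-mesh observations, and the dynamics and parameters below are invented for illustration:

```python
import math

def nudge(mu=10.0, dt=0.01, steps=500):
    """Forward-Euler run of a toy reference ODE x' = -x + sin(t) and a
    nudged observer z' = -z + sin(t) - mu*(z - x), started from
    different initial data. Returns the final observer error |z - x|."""
    x, z, t = 2.0, 0.0, 0.0
    for _ in range(steps):
        f = lambda u: -u + math.sin(t)          # shared (toy) dynamics
        x_new = x + dt * f(x)
        z_new = z + dt * (f(z) - mu * (z - x))  # relaxation toward the data
        x, z, t = x_new, z_new, t + dt
    return abs(z - x)

print(nudge())  # the observer error decays essentially to zero
```

The error obeys e' = -(1 + mu) e in this toy setting, mirroring the exponential-in-time convergence claimed for the full algorithm.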
Title: Playing a true Parrondo's game with a three state coin on a quantum walk,
Abstract: Playing a Parrondo's game with a qutrit is the subject of this paper. We show
that a true quantum Parrondo's game can be played with a 3-state coin (qutrit)
in a 1D quantum walk, in contrast to the fact that playing a true Parrondo's
game with a 2-state coin (qubit) in a 1D quantum walk fails in the asymptotic
limits.
1,
1,
0,
0,
0,
0
] |
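The quantum result above contrasts with the classical Parrondo effect, where two individually losing coin games combined yield a winning one. A simulation sketch of the standard capital-dependent classical construction; the game parameters are the textbook ones, not taken from the paper:

```python
import random

def play(sequence, steps, eps=0.005, seed=0):
    """Classical Parrondo games. Game A: biased coin, win prob 1/2 - eps.
    Game B: if capital % 3 == 0, win prob 1/10 - eps, else 3/4 - eps.
    'sequence' picks the game each step: 'A', 'B', or 'AB' (random mix).
    Returns the final capital (starts at 0, +/-1 per round)."""
    rng = random.Random(seed)
    capital = 0
    for _ in range(steps):
        game = sequence if sequence in ('A', 'B') else rng.choice('AB')
        if game == 'A':
            p = 0.5 - eps
        else:
            p = (0.1 - eps) if capital % 3 == 0 else (0.75 - eps)
        capital += 1 if rng.random() < p else -1
    return capital

steps = 200_000
print(play('A', steps))   # A alone drifts negative
print(play('B', steps))   # B alone drifts negative
print(play('AB', steps))  # the random mixture drifts positive
```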
Title: Cross-modal Recurrent Models for Weight Objective Prediction from Multimodal Time-series Data,
Abstract: We analyse multimodal time-series data corresponding to weight, sleep and
steps measurements. We focus on predicting whether a user will successfully
achieve his/her weight objective. For this, we design several deep long
short-term memory (LSTM) architectures, including a novel cross-modal LSTM
(X-LSTM), and demonstrate their superiority over baseline approaches. The
X-LSTM improves parameter efficiency by processing each modality separately and
allowing for information flow between them by way of recurrent
cross-connections. We present a general hyperparameter optimisation technique
for X-LSTMs, which allows us to significantly improve on the LSTM and a prior
state-of-the-art cross-modal approach, using a comparable number of parameters.
Finally, we visualise the model's predictions, revealing implications about
latent variables in this task. | [
1,
0,
0,
1,
0,
0
] |
Title: Spatial Variational Auto-Encoding via Matrix-Variate Normal Distributions,
Abstract: The key idea of variational auto-encoders (VAEs) resembles that of
traditional auto-encoder models in which spatial information is supposed to be
explicitly encoded in the latent space. However, the latent variables in VAEs
are vectors, which can be interpreted as multiple feature maps of size 1x1.
Such representations can only convey spatial information implicitly when
coupled with powerful decoders. In this work, we propose spatial VAEs that use
feature maps of larger size as latent variables to explicitly capture spatial
information. This is achieved by allowing the latent variables to be sampled
from matrix-variate normal (MVN) distributions whose parameters are computed
from the encoder network. To increase dependencies among locations on latent
feature maps and reduce the number of parameters, we further propose spatial
VAEs via low-rank MVN distributions. Experimental results show that the
proposed spatial VAEs outperform original VAEs in capturing rich structural and
spatial information. | [
1,
0,
0,
1,
0,
0
] |
Title: Optimal Ramp Schemes and Related Combinatorial Objects,
Abstract: In 1996, Jackson and Martin proved that a strong ideal ramp scheme is
equivalent to an orthogonal array. However, there was no good characterization
of ideal ramp schemes that are not strong. Here we show the equivalence of
ideal ramp schemes to a new variant of orthogonal arrays that we term augmented
orthogonal arrays. We give some constructions for these new kinds of arrays,
and, as a consequence, we also provide parameter situations where ideal ramp
schemes exist but strong ideal ramp schemes do not exist. | [
1,
0,
0,
0,
0,
0
] |
Title: Do Neural Nets Learn Statistical Laws behind Natural Language?,
Abstract: The performance of deep learning in natural language processing has been
spectacular, but the reasons for this success remain unclear because of the
inherent complexity of deep learning. This paper provides empirical evidence of
its effectiveness and of a limitation of neural networks for language
engineering. Precisely, we demonstrate that a neural language model based on
long short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law,
two representative statistical properties underlying natural language. We
discuss the quality of reproducibility and the emergence of Zipf's law and
Heaps' law as training progresses. We also point out that the neural language
model has a limitation in reproducing long-range correlation, another
statistical property of natural language. This understanding could provide a
direction for improving the architectures of neural networks. | [
1,
0,
0,
0,
0,
0
] |
Title: Super-speeds with Zero-RAM: Next Generation Large-Scale Optimization in Your Laptop!,
Abstract: This article presents a novel breakthrough general-purpose algorithm for
large-scale optimization problems. The algorithm is capable of achieving
breakthrough speeds for very large-scale optimization on general purpose
laptops and embedded systems. Applying the algorithm to the Griewank function
with up to 1 billion decision variables in double precision took only 64,485
seconds (~18 hours) to solve, while consuming 7,630 MB (7.6 GB) of RAM on a
single-threaded laptop CPU. This shows that the algorithm is
computationally and memory (space) linearly efficient, and can find the optimal
or near-optimal solution in a fraction of the time and memory that many
conventional algorithms require. It is envisaged that this will open up new
possibilities of real-time large-scale problems on personal laptops and
embedded systems. | [
1,
0,
0,
0,
0,
0
] |
Title: Recoverable Energy of Dissipative Electromagnetic Systems,
Abstract: Ambiguities in the definition of stored energy within distributed or
radiating electromagnetic systems motivate the discussion of the well-defined
concept of recoverable energy. This concept is commonly overlooked by the
community and the purpose of this communication is to recall its existence and
to discuss its relationship to fractional bandwidth. Using a rational function
approximation of a system's input impedance, the recoverable energy of lumped
and radiating systems is calculated in closed form and is related to stored
energy and fractional bandwidth. Lumped circuits are also used to demonstrate
the relationship between recoverable energy and the energy stored within
equivalent circuits produced by the minimum phase-shift Darlington's synthesis
procedure. | [
0,
1,
0,
0,
0,
0
] |
Title: Elliptic Hall algebra on $\mathbb{F}_1$,
Abstract: We construct the Hall algebra of an elliptic curve over $\mathbb{F}_1$
using the theory of monoidal schemes due to Deitmar and the theory of Hall
algebras for monoidal representations due to Szczesny. The resulting algebra is
shown to be a specialization of the elliptic Hall algebra studied by Burban and
Schiffmann. Thus our algebra is isomorphic to the skein algebra of the torus by
the recent work of Morton and Samuelson. | [
0,
0,
1,
0,
0,
0
] |
Title: Approximate Bayesian inference with queueing networks and coupled jump processes,
Abstract: Queueing networks are systems of theoretical interest that give rise to
complex families of stochastic processes, and find widespread use in the
performance evaluation of interconnected resources. Yet, despite their
importance within applications, and in comparison to their counterpart
stochastic models in genetics or mathematical biology, there exist few relevant
approaches for transient inference and uncertainty quantification tasks in
these systems. This is a consequence of strong computational impediments and
distinctive properties of the Markov jump processes induced by queueing
networks. In this paper, we offer a comprehensive overview of the inferential
challenge and its comparison to analogue tasks within related mathematical
domains. We then discuss a model augmentation over an approximating network
system, and present a flexible and scalable variational Bayesian framework,
which is targeted at general-form open and closed queueing systems, with varied
service disciplines and priorities. The inferential procedure is finally
validated in a couple of uncertainty quantification tasks for network service
rates. | [
1,
0,
0,
1,
0,
0
] |
Title: Universal features of price formation in financial markets: perspectives from Deep Learning,
Abstract: Using a large-scale Deep Learning approach applied to a high-frequency
database containing billions of electronic market quotes and transactions for
US equities, we uncover nonparametric evidence for the existence of a universal
and stationary price formation mechanism relating the dynamics of supply and
demand for a stock, as revealed through the order book, to subsequent
variations in its market price. We assess the model by testing its
out-of-sample predictions for the direction of price moves given the history of
price and order flow, across a wide range of stocks and time periods. The
universal price formation model is shown to exhibit a remarkably stable
out-of-sample prediction accuracy across time, for a wide range of stocks from
different sectors. Interestingly, these results also hold for stocks which are
not part of the training sample, showing that the relations captured by the
model are universal and not asset-specific.
The universal model --- trained on data from all stocks --- outperforms, in
terms of out-of-sample prediction accuracy, asset-specific linear and nonlinear
models trained on time series of any given stock, showing that the universal
nature of price formation weighs in favour of pooling together financial data
from various stocks, rather than designing asset- or sector-specific models as
commonly done. Standard data normalizations based on volatility, price level or
average spread, or partitioning the training data into sectors or categories
such as large/small tick stocks, do not improve training results. On the other
hand, inclusion of price and order flow history over many past observations is
shown to improve forecasting performance, showing evidence of path-dependence
in price dynamics. | [
0,
0,
0,
1,
0,
1
] |
Title: A New Tracking Algorithm for Multiple Colloidal Particles Close to Contact,
Abstract: In this paper, we propose a new algorithm based on radial symmetry center
method to track colloidal particles close to contact, where the optical images
of the particles start to overlap in digital video microscopy. This overlapping
effect is important when observing the pair interaction potential in colloidal
studies, and it appears as an additional interaction in measurements of the
interaction made with conventional tracking analysis. The proposed algorithm in
this work is simple, fast, and applicable not only to two particles but also to
three or more particles without any modification. The algorithm uses gradient
vectors of the particle intensity distribution, which allows us to use a part
of the symmetric intensity distribution in the calculation of the actual
particle position. In this study, simulations are performed to see the
performance of the proposed algorithm for two and three particles, where the
simulation images are generated using a curve fitted to experimental particle
images for different particle sizes. As a result, the algorithm yields a
maximum error smaller than 2 nm for 5.53 {\mu}m silica particles in the contact
condition. | [
0,
1,
0,
0,
0,
0
] |
Title: Critical Percolation Without Fine Tuning on the Surface of a Topological Superconductor,
Abstract: We present numerical evidence that most two-dimensional surface states of a
bulk topological superconductor (TSC) sit at an integer quantum Hall plateau
transition. We study TSC surface states in class CI with quenched disorder.
Low-energy (finite-energy) surface states were expected to be critically
delocalized (Anderson localized). We confirm the low-energy picture, but find
instead that finite-energy states are also delocalized, with universal
statistics that are independent of the TSC winding number, and consistent with
the spin quantum Hall plateau transition (percolation). | [
0,
1,
0,
0,
0,
0
] |
Title: Low-luminosity stellar wind accretion onto neutron stars in HMXBs,
Abstract: Features and applications of quasi-spherical settling accretion onto rotating
magnetized neutron stars in high-mass X-ray binaries are discussed. The
settling accretion occurs in wind-fed HMXBs when the plasma cooling time is
longer than the free-fall time from the gravitational capture radius, which can
take place in low-luminosity HMXBs with $L_x\lesssim 4\times 10^{36}$ erg/s. We
briefly review the implications of the settling accretion, focusing on the SFXT
phenomenon, which can be related to instability of the quasi-spherical
convective shell above the neutron star magnetosphere due to magnetic
reconnection from fast, temporarily magnetized winds from the OB supergiant. If a
young neutron star in a wind-fed HMXB is rapidly rotating, the propeller regime
in a quasi-spherical hot shell occurs. We show that X-ray spectral and temporal
properties of the enigmatic $\gamma$ Cas Be stars are consistent with a failed
settling accretion regime onto a propelling neutron star. The subsequent
evolutionary stage of $\gamma$ Cas and its analogs should be the X Per-type
binaries comprising low-luminosity slowly rotating X-ray pulsars. | [
0,
1,
0,
0,
0,
0
] |
Title: Faithful Inversion of Generative Models for Effective Amortized Inference,
Abstract: Inference amortization methods share information across multiple
posterior-inference problems, allowing each to be carried out more efficiently.
Generally, they require the inversion of the dependency structure in the
generative model, as the modeller must learn a mapping from observations to
distributions approximating the posterior. Previous approaches have involved
inverting the dependency structure in a heuristic way that fails to capture
these dependencies correctly, thereby limiting the achievable accuracy of the
resulting approximations. We introduce an algorithm for faithfully, and
minimally, inverting the graphical model structure of any generative model.
Such inverses have two crucial properties: (a) they do not encode any
independence assertions that are absent from the model and; (b) they are local
maxima for the number of true independencies encoded. We prove the correctness
of our approach and empirically show that the resulting minimally faithful
inverses lead to better inference amortization than existing heuristic
approaches. | [
1,
0,
0,
1,
0,
0
] |
Title: A Social Network Analysis of the Operations Research/Industrial Engineering Faculty Hiring Network,
Abstract: We study the U.S. Operations Research/Industrial-Systems Engineering (ORIE)
faculty hiring network, consisting of 1,179 faculty origin-destination records
together with attribute data from 83 ORIE departments. A social network
analysis of faculty hires can reveal important patterns in an academic field,
such as the existence of a hierarchy or sociological aspects such as the
presence of communities of departments. We first statistically test for the
existence of a linear hierarchy in the network and for its steepness. We find a
near linear hierarchical order of the departments, proposing a new index for
hiring networks, which we contrast with other indicators of hierarchy,
including published rankings. A single index cannot capture the full
structure of a complex network, however, so we next fit a latent exponential
random graph model (ERGM) to the network, which is able to reproduce its main
observed characteristics: high incidence of self-hiring, skewed out-degree
distribution, low density and clustering. Finally, we use the latent variables
in the ERGM to simplify the network to one where faculty hires take place among
three groups of departments. We contrast our findings with those reported for
other related disciplines, Computer Science and Business. | [
1,
0,
0,
1,
0,
0
] |
Title: One-sample aggregate data meta-analysis of medians,
Abstract: An aggregate data meta-analysis is a statistical method that pools the
summary statistics of several selected studies to estimate the outcome of
interest. When considering a continuous outcome, typically each study must
report the same measure of the outcome variable and its spread (e.g., the
sample mean and its standard error). However, some studies may instead report
the median along with various measures of spread. Recently, the task of
incorporating medians in meta-analysis has been achieved by estimating the
sample mean and its standard error from each study that reports a median in
order to meta-analyze the means. In this paper, we propose two alternative
approaches to meta-analyze data that instead rely on medians. We systematically
compare these approaches via simulation study to each other and to methods that
transform the study-specific medians and spread into sample means and their
standard errors. We demonstrate that the proposed median-based approaches
perform better than the transformation-based approaches, especially when
applied to skewed data and data with high inter-study variance. In addition,
when meta-analyzing data that consists of medians, we show that the
median-based approaches perform considerably better than or comparably to the
best-case scenario for a transformation approach: conducting a meta-analysis
using the actual sample mean and standard error of the mean of each study.
Finally, we illustrate these approaches in a meta-analysis of patient delay in
tuberculosis diagnosis. | [
0,
0,
0,
1,
0,
0
] |
Title: QuickCast: Fast and Efficient Inter-Datacenter Transfers using Forwarding Tree Cohorts,
Abstract: Large inter-datacenter transfers are crucial for cloud service efficiency and
are increasingly used by organizations that have dedicated wide area networks
between datacenters. A recent work uses multicast forwarding trees to reduce
the bandwidth needs and improve completion times of point-to-multipoint
transfers. Using a single forwarding tree per transfer, however, leads to poor
performance because the slowest receiver dictates the completion time for all
receivers. Using multiple forwarding trees per transfer alleviates this
concern--the average receiver could finish early; however, if done naively,
bandwidth usage would also increase and it is a priori unclear how best to
partition receivers, how to construct the multiple trees and how to determine
the rate and schedule of flows on these trees. This paper presents QuickCast, a
first solution to these problems. Using simulations on real-world network
topologies, we see that QuickCast can speed up the average receiver's
completion time by as much as $10\times$ while only using $1.04\times$ more
bandwidth; further, the completion time for all receivers also improves by as
much as $1.6\times$ at high loads. | [
1,
0,
0,
0,
0,
0
] |
Title: Contemporary machine learning: a guide for practitioners in the physical sciences,
Abstract: Machine learning is finding increasingly broad application in the physical
sciences. This most often involves building a model relationship between a
dependent, measurable output and an associated set of controllable, but
complicated, independent inputs. We present a tutorial on current techniques in
machine learning -- a jumping-off point for interested researchers to advance
their work. We focus on deep neural networks with an emphasis on demystifying
deep learning. We begin with background ideas in machine learning and some
example applications from current research in plasma physics. We discuss
supervised learning techniques for modeling complicated functions, beginning
with familiar regression schemes, then advancing to more sophisticated deep
learning methods. We also address unsupervised learning and techniques for
reducing the dimensionality of input spaces. Along the way, we describe methods
for practitioners to help ensure that their models generalize from their
training data to as-yet-unseen test data. We describe classes of tasks --
predicting scalars, handling images, fitting time-series -- and prepare the
reader to choose an appropriate technique. We finally point out some
limitations to modern machine learning and speculate on some ways that
practitioners from the physical sciences may be particularly suited to help. | [
1,
1,
0,
0,
0,
0
] |
Title: Uniform diamond coatings on WC-Co hard alloy cutting inserts deposited by a microwave plasma CVD,
Abstract: Polycrystalline diamond coatings have been grown on cemented carbide
substrates with different aspect ratios by a microwave plasma CVD in
methane-hydrogen gas mixtures. To protect the edges of the substrates from
non-uniform heating due to the plasma edge effect, a special plateholder with
pockets for group growth has been used. The difference in heights of the
substrates and plateholder, and its influence on the diamond film mean grain
size, growth rate, phase composition and stress was investigated. The substrate
temperature range, within which uniform diamond films are produced with good
adhesion, is determined. The diamond-coated cutting inserts produced with the
optimized process exhibited a reduction of cutting force and wear by a factor of
two, and a cutting efficiency increase of 4.3 times upon turning A390 Al-Si
alloy, as compared to the performance of uncoated tools. | [
0,
1,
0,
0,
0,
0
] |
Title: Analysing Soccer Games with Clustering and Conceptors,
Abstract: We present a new approach for identifying situations and behaviours, which we
call "moves", from soccer games in the 2D simulation league. Being able to
identify key situations and behaviours is useful for analysing
soccer matches, anticipating opponent behaviours to aid selection of
appropriate tactics, and also as a prerequisite for automatic learning of
behaviours and policies. To support a wide set of strategies, our goal is to
identify situations from data, in an unsupervised way without making use of
pre-defined soccer specific concepts such as "pass" or "dribble". The recurrent
neural networks we use in our approach act as a high-dimensional projection of
the recent history of a situation on the field. Similar situations, i.e., with
similar histories, are found by clustering of network states. The same networks
are also used to learn so-called conceptors, that are lower-dimensional
manifolds that describe trajectories through a high-dimensional state space
that enable situation-specific predictions from the same neural network. With
the proposed approach, we can segment games into sequences of situations that
are learnt in an unsupervised way, and learn conceptors that are useful for the
prediction of the near future of the respective situation. | [
1,
0,
0,
0,
0,
0
] |
Title: On the Efficient Simulation of the Left-Tail of the Sum of Correlated Log-normal Variates,
Abstract: The sum of Log-normal variates is encountered in many challenging
applications such as in performance analysis of wireless communication systems
and in financial engineering. Several approximation methods have been developed
in the literature, the accuracy of which is not ensured in the tail regions.
These regions are of primordial interest wherein small probability values have
to be evaluated with high precision. Variance reduction techniques are known to
yield accurate, yet efficient, estimates of small probability values. Most of
the existing approaches, however, have considered the problem of estimating the
right-tail of the sum of Log-normal random variables (RVs). In the present
work, we consider instead the estimation of the left-tail of the sum of
correlated Log-normal variates with Gaussian copula under a mild assumption on
the covariance matrix. We propose an estimator combining an existing
mean-shifting importance sampling approach with a control variate technique.
The main result is that the proposed estimator has an asymptotically vanishing
relative error which represents a major finding in the context of the left-tail
simulation of the sum of Log-normal RVs. Finally, we assess by various
simulation results the performances of the proposed estimator compared to
existing estimators. | [
0,
0,
1,
1,
0,
0
] |
Title: Why optional stopping is a problem for Bayesians,
Abstract: Recently, optional stopping has been a subject of debate in the Bayesian
psychology community. Rouder (2014) argues that optional stopping is no problem
for Bayesians, and even recommends the use of optional stopping in practice, as
do Wagenmakers et al. (2012). This article addresses the question whether
optional stopping is problematic for Bayesian methods, and specifies under
which circumstances and in which sense it is and is not. By slightly varying
and extending Rouder's (2014) experiment, we illustrate that, as soon as the
parameters of interest are equipped with default or pragmatic priors - which
means, in most practical applications of Bayes Factor hypothesis testing -
resilience to optional stopping can break down. We distinguish between four
types of default priors, each having their own specific issues with optional
stopping, ranging from no-problem-at-all (Type 0 priors) to quite severe (Type
II and III priors). | [
0,
0,
1,
1,
0,
0
] |