| abstract | id | time |
|---|---|---|
One of the consequences of persistent technological change is that it forces
individuals to make decisions under extreme uncertainty, which means that
traditional decision-making frameworks cannot be applied. To address this
issue, we introduce a variant of Case-Based Decision Theory in which the
solution to a problem is obtained in terms of its distance to previous
problems. We formalize this by defining a space based on an orthogonal basis
of features of problems. We show how this framework evolves upon the
acquisition of new information, namely new features, or new values of existing
features, arising in new problems. We discuss how this can be useful for
evaluating decisions based on data that does not yet exist.
| 2104.14268 | 737,909 |
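The core mechanism described above, valuing an act by distance-weighted utilities from previously encountered problems, can be sketched in a few lines. The similarity kernel, the feature vectors, and the memory layout below are illustrative assumptions, not the paper's formalization:

```python
import math

def similarity(p, q):
    """Inverse-distance similarity between two problems represented as
    feature vectors in an orthogonal feature space (hypothetical choice
    of kernel)."""
    return 1.0 / (1.0 + math.dist(p, q))

def cbdt_choose(problem, memory):
    """Case-Based Decision Theory valuation: each act is scored by the
    similarity-weighted sum of utilities observed in past cases.
    `memory` maps act -> list of (past_problem, utility) pairs."""
    scores = {
        act: sum(similarity(problem, q) * u for q, u in cases)
        for act, cases in memory.items()
    }
    return max(scores, key=scores.get)

# Act "a" worked well on a problem near the current one, so it wins.
memory = {
    "a": [((0.0, 0.0), 1.0)],
    "b": [((5.0, 5.0), 1.0)],
}
best = cbdt_choose((0.1, 0.0), memory)  # → "a"
```

When a new feature appears, the feature vectors simply gain a dimension and the same distance computation applies, which is the sense in which the framework evolves with new information.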
Structural and magnetic transitions in a double perovskite hosting 5d1 Re
ions are discussed on the basis of recently published high-resolution x-ray
diffraction patterns [D. Hirai, et al., Phys. Rev. Res. 2, 022063(R) (2020)]. A
reported structural transition below room temperature, from cubic to tetragonal
symmetry, appears not to be driven by T2g-type quadrupoles, as suggested. A
magnetic motif at lower temperature is shown to be composed of two order
parameters, associated with propagation vectors k = (0, 0, 1) and k = (0, 0,
0). Findings from our studies, for structural and magnetic properties of
Ba2MgReO6, surface in predicted amplitudes for x-ray diffraction at rhenium L2
and L3 absorption edges, and magnetic neutron Bragg diffraction. Specifically,
entanglement of anapole and spatial degrees of freedom creates a quadrupole in
the neutron scattering amplitude. It would be excluded in an unexpected
scenario whereby the rhenium atomic state is a manifold. Also, a chiral
signature visible in resonant x-ray diffraction will be one consequence of
predicted electronic quadrupole and magnetic dipole orders. A model Re wave
function consistent with all current knowledge is a guide to electronic and
magnetic multipoles engaged in x-ray and neutron diffraction investigations.
| 2104.14269 | 737,909 |
Robotic applications nowadays are widely adopted to enhance operational
automation and performance of real-world Cyber-Physical Systems (CPSs)
including Industry 4.0, agriculture, healthcare, and disaster management. These
applications are composed of latency-sensitive, data-heavy, and
compute-intensive tasks. The robots, however, are constrained in
computational power and storage capacity. The concept of multi-agent cloud
robotics enables robot-to-robot cooperation and creates a complementary
environment for the robots in executing large-scale applications with the
capability to utilize the edge and cloud resources. However, in such a
collaborative environment, the optimal resource allocation for robotic tasks is
challenging to achieve. Heterogeneous energy consumption rates and
application execution costs associated with the robots and computing instances make it
even more complex. In addition, the data transmission delay between local
robots, edge nodes, and cloud data centres adversely affects the real-time
interactions and impedes service performance guarantee. Taking all these issues
into account, this paper comprehensively surveys the state-of-the-art on
resource allocation and service provisioning in multi-agent cloud robotics. The
paper presents the application domains of multi-agent cloud robotics through
explicit comparison with the contemporary computing paradigms and identifies
the specific research challenges. A complete taxonomy on resource allocation is
presented for the first time, together with the discussion of resource pooling,
computation offloading, and task scheduling for efficient service provisioning.
Furthermore, we highlight the research gaps from the learned lessons, and
present future directions deemed beneficial to further advance this emerging
field.
| 2104.14270 | 737,909 |
The differential cross-sections in squared momentum transfer of $\rho$,
$\rho^0$, $\omega$, $\phi$, $f_{0}(980)$, $f_{1}(1285)$, $f_{0}(1370)$,
$f_{1}(1420)$, $f_{0}(1500)$, and $J/\psi$ produced in high energy virtual
photon-proton ($\gamma$$^{*} p$), photon-proton ($\gamma p$), and proton-proton
($pp$) collisions, measured by the H1, ZEUS, and WA102 Collaborations, are
analyzed using Monte Carlo calculations. In the calculations, the Erlang
distribution, Tsallis distribution, and Hagedorn function are separately used
to describe the transverse momentum spectra of the emitted particles. Our
results show that the initial- and final-state temperatures increase with the
squared photon virtuality and decrease with increasing center-of-mass energy.
| 2104.14271 | 737,909 |
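Of the three spectral shapes named above, the Tsallis distribution has a widely used closed form for transverse momentum spectra. The sketch below uses one standard parameterization (`m0` rest mass, `T` effective temperature, `n` power index); the paper's exact fit functions and parameter values may differ:

```python
import math

def tsallis_pt(pt, m0, T, n, norm=1.0):
    """One common form of the Tsallis transverse-momentum spectrum,
    dN/dpT ∝ pT (1 + (mT - m0)/(n T))^(-n), with the transverse mass
    mT = sqrt(pT^2 + m0^2). Parameter names and the normalization are
    illustrative, not the paper's fit."""
    mt = math.sqrt(pt * pt + m0 * m0)
    return norm * pt * (1.0 + (mt - m0) / (n * T)) ** (-n)
```

For example, with pion-like parameters (`m0 = 0.14` GeV, `T = 0.1` GeV, `n = 7`) the spectrum vanishes at `pT = 0` and falls steeply at high `pT`, as expected for a power-law tail.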
The first phase of table recognition is to detect the tabular area in a
document. Subsequently, the tabular structures are recognized in the second
phase in order to extract information from the respective cells. Table
detection and structural recognition are pivotal problems in the domain of
table understanding. However, table analysis is a perplexing task due to the
colossal amount of diversity and asymmetry in tables. Therefore, it is an
active area of research in document image analysis. Recent advances in the
computing capabilities of graphical processing units have enabled deep neural
networks to outperform traditional state-of-the-art machine learning methods.
Table understanding has substantially benefited from the recent breakthroughs
in deep neural networks. However, there has not been a consolidated description
of the deep learning methods for table detection and table structure
recognition. This review paper provides a thorough analysis of the modern
methodologies that utilize deep neural networks, along with a comprehensive
account of the current state-of-the-art and related challenges of table
understanding in document images. Furthermore, the leading datasets and their
intricacies have been elaborated along with the quantitative results. Moreover,
a brief overview is given regarding the promising directions that can serve as
a guide to further improve table analysis in document images.
| 2104.14272 | 737,909 |
Since the mapping relationship between the definitized intra-interventional
2D X-ray and the undefined pre-interventional 3D Computed Tomography (CT) is
uncertain, auxiliary positioning devices or body markers, such as medical
implants, are commonly used to determine this relationship. However, such
approaches cannot be widely used in clinical practice due to complex
real-world conditions. To determine the mapping relationship and achieve an
initial pose estimation of the human body without auxiliary equipment or
markers, a cross-modal matching transformer network is proposed to match 2D
X-ray and 3D CT images directly. The proposed approach first extracts
skeletal features from the 2D X-ray and 3D CT images via deep learning. The
features are then converted into 1D X-ray and CT representation
vectors, which are combined using a multi-modal transformer. As a result, the
well-trained network can directly predict the spatial correspondence between
arbitrary 2D X-ray and 3D CT. The experimental results show that when combining
our approach with the conventional approach, the achieved accuracy and speed
can meet the basic clinical intervention needs, and it provides a new direction
for intra-interventional registration.
| 2104.14273 | 737,909 |
A central line of inquiry in condensed matter science has been to understand
how the competition between different states of matter gives rise to emergent
physical properties. Perhaps some of the most studied systems in this respect
are the hole-doped LaMnO$_3$ perovskites, with interest in the past three
decades being stimulated on account of their colossal magnetoresistance (CMR).
However, phase segregation between ferromagnetic (FM) metallic and
antiferromagnetic (AFM) insulating states, which itself is believed to be
responsible for the colossal change in resistance under applied magnetic field,
has until now prevented a full atomistic level understanding of the orbital
ordered (OO) state at the optimally doped level. Here, through the detailed
crystallographic analysis of the hole-doped phase diagram of a prototype
system, we show that the superposition of two distinct lattice modes gives rise
to a striped structure of OO Jahn-Teller active Mn$^{3+}$ and charge disordered
(CD) Mn$^{3.5+}$ layers in a 1:3 ratio. This superposition leads to an exact
cancellation of the Jahn-Teller-like oxygen atom displacements in the CD layers
only at the 3/8th doping level, coincident with the maximum CMR response of the
manganites. Furthermore, the periodic striping of layers containing
Mn$^{3.5+}$, separated by layers of fully ordered Mn$^{3+}$, provides a natural
mechanism through which long-range OO can melt, a prerequisite for the emergence
of the FM conducting state. The competition between insulating and conducting
states is seen to be a key feature in understanding the properties in highly
correlated electron systems, many of which, such as the CMR and high
temperature superconductivity, only emerge at or near specific doping values.
| 2104.14274 | 737,909 |
In recent years, Evolutionary Algorithms (EAs) have frequently been adopted
to evolve instances for optimization problems that pose difficulties for one
algorithm while being rather easy for a competitor and vice versa. Typically,
this is achieved by either minimizing or maximizing the performance difference
or ratio which serves as the fitness function. Repeating this process is useful
to gain insights into strengths/weaknesses of certain algorithms or to build a
set of instances with strong performance differences as a foundation for
automatic per-instance algorithm selection or configuration. We contribute to
this branch of research by proposing fitness functions to evolve instances that
show large performance differences for more than just two algorithms
simultaneously. As a proof-of-principle, we evolve instances of the
multi-component Traveling Thief Problem~(TTP) for three incomplete TTP-solvers.
Our results point out that our strategies are promising, but unsurprisingly
their success strongly relies on the algorithms' performance complementarity.
| 2104.14275 | 737,909 |
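The abstract does not spell out the multi-algorithm fitness functions. One natural formulation rewards a candidate instance by the smallest margin by which a chosen target solver beats every competitor, which generalizes the pairwise performance difference to more than two algorithms. A hypothetical sketch:

```python
def multi_algo_fitness(perfs, target):
    """Fitness of a candidate instance given costs `perfs` (one per
    solver, lower is better) and the index of the solver the instance
    should favor. The fitness is the smallest margin by which `target`
    beats every competitor, so maximizing it evolves instances that are
    easy for one solver and simultaneously hard for all others
    (an illustrative formulation, not the paper's exact definition)."""
    return min(perfs[j] - perfs[target]
               for j in range(len(perfs)) if j != target)

# Target solver 0 beats solvers 1 and 2 by margins 2.0 and 1.0.
f = multi_algo_fitness([1.0, 3.0, 2.0], target=0)  # → 1.0
```

An evolutionary loop would then mutate instances and keep those with higher fitness, exactly as in the two-algorithm case.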
In this letter, we investigate the population dynamics in a May-Leonard
formulation of the rock-paper-scissors game in which one or two species, which
we shall refer to as "weak", have a reduced predation or reproduction
probability. We show that in a nonspatial model the stationary solution where
all three species coexist is always unstable, while in a spatial stochastic
model coexistence is possible for a wide parameter space. We find that a
reduced predation probability results in a significantly higher abundance of
"weak" species, in models with either one or two "weak" species, as long as the
simulation lattices are sufficiently large for coexistence to prevail. On the
other hand, we show that a reduced reproduction probability has a smaller
impact on the abundance of "weak" species, generally leading to a slight
decrease of its population size -- the increase of the population size of one
of the "weak" species being more than compensated by the reduction of the
other, in the two species case. We further show that the species abundances in
models where both predation and reproduction probabilities are simultaneously
reduced may be accurately estimated from the results obtained considering only
a reduction of either the predation or the reproduction probability.
| 2104.14276 | 737,909 |
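A minimal spatial stochastic May-Leonard model of the kind described can be written in a few lines. The lattice size, the update rule details, and the per-species probability tables below are illustrative; "weak" species 3 here simply gets a reduced predation probability:

```python
import random

def step(grid, pred, repr_, rng):
    """One Monte Carlo update of a May-Leonard rock-paper-scissors
    lattice. States: 0 = empty, 1/2/3 = species; species s preys on
    species s % 3 + 1. `pred[s]` and `repr_[s]` are per-species
    predation/reproduction probabilities, so a "weak" species is one
    with a reduced entry (illustrative parameterization)."""
    n = len(grid)
    i, j = rng.randrange(n), rng.randrange(n)
    di, dj = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    k, l = (i + di) % n, (j + dj) % n
    a, b = grid[i][j], grid[k][l]
    if a == 0:
        return
    if b == a % 3 + 1 and rng.random() < pred[a]:
        grid[k][l] = 0          # predation leaves an empty site (May-Leonard)
    elif b == 0 and rng.random() < repr_[a]:
        grid[k][l] = a          # reproduction into the empty neighbor

rng = random.Random(0)
grid = [[rng.randrange(4) for _ in range(20)] for _ in range(20)]
pred = {1: 1.0, 2: 1.0, 3: 0.5}     # species 3 is "weak" in predation
repr_ = {1: 1.0, 2: 1.0, 3: 1.0}
for _ in range(20000):
    step(grid, pred, repr_, rng)
```

Counting the occupancy of each species over many runs and lattice sizes is then how abundance effects like those reported can be measured.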
For a gambler with side information, Kelly betting gives the optimal log
growth rate of the gambler's fortune, which is closely related to the mutual
information between the correct winner and the noisy side information. We show
conditions under which optimal Kelly betting can be implemented using
single-letter codes. We show that single-letter coding is optimal for a wide
variety of systems; for example, all systems with diagonal reward matrices
admit optimal single-letter codes. We also show that important classes of
systems do not admit optimal single-letter codes for Kelly betting, such as
when the side information is passed through a Z channel. Our results are
important to situations where the computational complexity of the gambler is
constrained, and may lead to new insights into the fitness value of information
for biological systems.
| 2104.14277 | 737,909 |
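For the classical horse-race setting, the optimal Kelly log growth rate and the mutual information the abstract refers to are both short computations. This is the textbook (Cover-Thomas) setup, not the paper's single-letter coding construction:

```python
import math

def kelly_growth(p, odds):
    """Optimal log growth rate (bits per race) for a horse race with win
    probabilities `p` and payout odds `odds`: betting fractions equal to
    p gives W = sum_i p_i log2(p_i * o_i)."""
    return sum(pi * math.log2(pi * oi) for pi, oi in zip(p, odds) if pi > 0)

def mutual_information(joint):
    """I(X;Y) in bits from a joint pmf given as a nested list."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    return sum(
        pxy * math.log2(pxy / (px[i] * py[j]))
        for i, row in enumerate(joint)
        for j, pxy in enumerate(row)
        if pxy > 0
    )
```

For fair odds, perfect side information raises the growth rate by exactly I(X;Y), which is the classical connection the abstract builds on.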
Continuous and multimodal stress detection has been performed recently
through wearable devices and machine learning algorithms. However, a well-known
and important challenge of working on physiological signals recorded by
conventional monitoring devices is missing data due to insufficient sensor
contact and interference from other equipment. This challenge becomes more
problematic when the user/patient is mentally or physically active or stressed
because of more frequent conscious or subconscious movements. In this paper, we
propose ReLearn, a robust machine learning framework for stress detection from
biomarkers extracted from multimodal physiological signals. ReLearn effectively
copes with missing data and outliers at both the training and inference phases.
ReLearn, composed of machine learning models for feature selection, outlier
detection, data imputation, and classification, allows us to classify all
samples, including those with missing values at inference. In particular,
according to our experiments and stress database, while by discarding all
missing data, as a simplistic yet common approach, no prediction can be made
for 34% of the data at inference, our approach can achieve accurate
predictions, as high as 78%, for missing samples. Also, our experiments show
that the proposed framework obtains a cross-validation accuracy of 86.8% even
if more than 50% of samples within the features are missing.
| 2104.14278 | 737,909 |
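ReLearn's imputation stage is a learned model; as a minimal stand-in showing how samples with missing values can still be scored at inference, here is column-mean imputation fitted on training data (the function names and the imputation rule are illustrative, not ReLearn's actual components):

```python
def fit_imputer(rows):
    """Column means over observed (non-None) entries, to be reused for
    filling missing values at both training and inference (a simple
    stand-in for ReLearn's learned imputation stage)."""
    cols = list(zip(*rows))
    return [
        sum(v for v in col if v is not None)
        / max(1, sum(v is not None for v in col))
        for col in cols
    ]

def impute(row, means):
    """Replace each missing (None) entry with its column mean."""
    return [m if v is None else v for v, m in zip(row, means)]

train = [[1.0, 2.0], [3.0, None], [None, 4.0]]
means = fit_imputer(train)           # column means over observed values
filled = impute([None, None], means) # a fully missing sample is still usable
```

A downstream classifier then receives `filled` like any complete sample, which is how predictions remain possible for the 34% of samples that naive discarding would drop.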
The mapping of lexical meanings to wordforms is a major feature of natural
languages. While usage pressures might assign short words to frequent meanings
(Zipf's law of abbreviation), the need for a productive and open-ended
vocabulary, local constraints on sequences of symbols, and various other
factors all shape the lexicons of the world's languages. Despite their
importance in shaping lexical structure, the relative contributions of these
factors have not been fully quantified. Taking a coding-theoretic view of the
lexicon and making use of a novel generative statistical model, we define upper
bounds for the compressibility of the lexicon under various constraints.
Examining corpora from 7 typologically diverse languages, we use those upper
bounds to quantify the lexicon's optimality and to explore the relative costs
of major constraints on natural codes. We find that (compositional) morphology
and graphotactics can sufficiently account for most of the complexity of
natural codes -- as measured by code length.
| 2104.14279 | 737,909 |
In cataract surgery, the operation is performed with the help of a
microscope. Since the microscope enables watching real-time surgery by up to
two people only, a major part of surgical training is conducted using the
recorded videos. To optimize the training procedure with the video content, the
surgeons require an automatic relevance detection approach. In addition to
relevance-based retrieval, these results can be further used for skill
assessment and irregularity detection in cataract surgery videos. In this
paper, a three-module framework is proposed to detect and classify the relevant
phase segments in cataract videos. Taking advantage of an idle frame
recognition network, the video is divided into idle and action segments. To
boost the performance in relevance detection, the cornea where the relevant
surgical actions are conducted is detected in all frames using Mask R-CNN. The
spatiotemporally localized segments containing higher-resolution information
about the pupil texture and actions, and complementary temporal information
from the same phase are fed into the relevance detection module. This module
consists of four parallel recurrent CNNs responsible for detecting the four
relevant phases that have been defined with medical experts. The results will
then be integrated to classify the action phases as irrelevant or one of four
relevant phases. Experimental results reveal that the proposed approach
outperforms static CNNs and different configurations of feature-based and
end-to-end recurrent networks.
| 2104.14280 | 737,909 |
Ubiquitous internet access is reshaping the way we live, but it is
accompanied by unprecedented challenges to prevent chronic diseases planted in
long exposure to unhealthy lifestyles. This paper proposes leveraging online
shopping behaviors as a proxy for personal lifestyle choices to freshen chronic
disease prevention literacy targeted for times when e-commerce user experience
has been assimilated into most people's daily life. Here, retrospective
longitudinal query logs and purchase records from millions of online shoppers
were accessed, constructing a broad spectrum of lifestyle features covering
assorted product categories and buyer personas. Using the lifestyle-related
information preceding their first purchases of prescription drugs, we could
determine associations between online shoppers' past lifestyle choices and if
they suffered from a particular chronic disease. Novel lifestyle risk factors
were discovered in two exemplars -- depression and diabetes, most of which
showed cognitive congruence with existing healthcare knowledge. Further, such
empirical findings could be adopted to locate online shoppers at high risk of
chronic diseases with fair accuracy (e.g., [area under the receiver operating
characteristic curve] AUC=0.68 for depression and AUC=0.70 for diabetes),
closely matching the performance of screening surveys benchmarked against
medical diagnosis. Unobtrusive chronic disease surveillance via e-commerce
sites may soon meet consenting individuals in the digital space they already
inhabit.
| 2104.14281 | 737,909 |
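The AUC values quoted (0.68 and 0.70) measure the probability that a randomly chosen case outscores a randomly chosen control. A minimal rank-based (Mann-Whitney) implementation of the metric:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank formulation: the probability
    that a randomly chosen positive outscores a randomly chosen negative,
    with ties counted as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation gives AUC = 1.0; chance-level scoring gives ~0.5.
perfect = auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])  # → 1.0
```

On this scale, 0.68-0.70 sits well above chance but below clinical-grade discrimination, which matches the abstract's framing of the result as comparable to screening surveys.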
Clinicians conduct routine diagnosis by scrutinizing signs and symptoms of
patients in treating epidemics. This skill evolves through trial-and-error and
improves with time. The success of the therapeutic regimen relies largely on
the accuracy of interpretation of such sign-symptoms, based on which the
clinician ranks the potent causes of the epidemic and analyzes their
interdependence to devise sustainable containment strategies. This study
proposed an alternative medical front, a VIRtual DOCtor (VIRDOC), that can
self-consistently rank key contributors of an epidemic and also correctly
identify the infection stage, using the language of statistical modelling and
Machine Learning. VIRDOC analyzes medical data and then translates these into a
vector comprising Multiple Linear Regression (MLR) coefficients to
probabilistically predict scores that compare with clinical experience-based
assessment. The VIRDOC algorithm, risk managed through ANOVA, has been tested
on dengue epidemic data (N=100 with 11 weighted sign-symptoms). The results
are highly encouraging, with ca. 75% accurate fatality prediction, compared
to 71.4% from
traditional diagnosis. The algorithm can be generically extended to analyze
other epidemic forms.
| 2104.14282 | 737,909 |
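VIRDOC's central object is a vector of Multiple Linear Regression coefficients. A tiny ordinary-least-squares fit via the normal equations shows what such a coefficient vector is; the medical data, the weighting of sign-symptoms, and the ANOVA-based risk management of the actual algorithm are not reproduced here:

```python
def mlr_fit(X, y):
    """Ordinary least squares for multiple linear regression via the
    normal equations X'X beta = X'y, solved by Gaussian elimination
    with partial pivoting (tiny pure-Python version)."""
    X = [[1.0] + list(row) for row in X]            # prepend intercept
    k = len(X[0])
    XtX = [[sum(a[i] * a[j] for a in X) for j in range(k)] for i in range(k)]
    Xty = [sum(a[i] * yi for a, yi in zip(X, y)) for i in range(k)]
    M = [row[:] + [b] for row, b in zip(XtX, Xty)]  # augmented system
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(k):
            if r != c and M[c][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * z for x, z in zip(M[r], M[c])]
    return [M[i][k] / M[i][i] for i in range(k)]

beta = mlr_fit([[1.0], [2.0], [3.0]], [3.0, 5.0, 7.0])  # fits y = 1 + 2x
```

With weighted sign-symptoms as the columns of `X` and a severity score as `y`, `beta` plays the role of the vector VIRDOC translates medical data into.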
We present a new uncertainty principle for risk-aware statistical estimation,
effectively quantifying the inherent trade-off between mean squared error
($\mse$) and risk, the latter measured by the associated average predictive
squared error variance ($\sev$), for every admissible estimator of choice. Our
uncertainty principle has a familiar form and resembles fundamental and
classical results arising in several other areas, such as the Heisenberg
principle in statistical and quantum mechanics, and the Gabor limit (time-scale
trade-offs) in harmonic analysis. In particular, we prove that, provided a
joint generative model of states and observables, the product between $\mse$
and $\sev$ is bounded from below by a computable model-dependent constant,
which is explicitly related to the Pareto frontier of a recently studied
$\sev$-constrained minimum $\mse$ (MMSE) estimation problem. Further, we show
that the aforementioned constant is inherently connected to an intuitive new
and rigorously topologically grounded statistical measure of distribution
skewness in multiple dimensions, consistent with Pearson's moment coefficient
of skewness for variables on the line. Our results are also illustrated via
numerical simulations.
| 2104.14283 | 737,909 |
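Schematically, the stated trade-off has the same shape as the classical uncertainty relations it is compared to. In assumed notation (the paper's symbols are $\mse$ and $\sev$; the constant depends on the joint generative model of states and observables):

```latex
\mathrm{MSE}(\hat{x}) \cdot \mathrm{SEV}(\hat{x}) \;\ge\; c_{\mathrm{model}} > 0
\qquad \text{for every admissible estimator } \hat{x},
\qquad \text{cf.} \quad \sigma_x \, \sigma_p \;\ge\; \tfrac{\hbar}{2}.
```

Estimators on the Pareto frontier of the $\sev$-constrained MMSE problem attain the bound, which is why the constant is computable from that frontier.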
We continue studying $6D, {\cal N}=(1,1)$ supersymmetric Yang-Mills (SYM)
theory in the ${\cal N}=(1,0)$ harmonic superspace formulation. Using the
superfield background field method we explore the two-loop divergences of the
effective action in the gauge multiplet sector. It is explicitly demonstrated
that among four two-loop background-field dependent supergraphs contributing to
the effective action, only one diverges off shell. It is also shown that the
divergences are proportional to the superfield classical equations of motion
and hence vanish on shell.
| 2104.14284 | 737,909 |
Path tracking is a key technology in autonomous driving. The system
should drive the vehicle accurately along the lane while taking care not to
cause any discomfort to passengers. To address these tasks, this paper
proposes a hybrid tracker based optimal path tracking system. By applying a
deep learning based lane detection algorithm and a designated fast lane
fitting algorithm, we developed a lane processing algorithm that achieves a
high match rate with actual lanes at minimal computational cost. In addition,
three modified path
tracking algorithms were designed using the GPS based path or the vision based
path. In the driving system, a match rate for the correct ideal path does not
necessarily represent driving stability. The proposed system therefore applies
the concept of an observer that selects the optimal tracker appropriately in
complex road environments. The
driving stability has been studied in complex road environments such as
straight road with multiple 3-way junctions, roundabouts, intersections, and
tunnels. Consequently, the proposed system experimentally showed the high
performance with consistent driving comfort by maintaining the vehicle within
the lanes accurately even under highly complex road
conditions. Code will be available at https://github.com/DGIST-ARTIV.
| 2104.14285 | 737,909 |
Advancing models for accurate estimation of food production is essential for
policymaking and managing national plans of action for food security. This
research proposes two machine learning models for the prediction of food
production. The adaptive network-based fuzzy inference system (ANFIS) and
multilayer perceptron (MLP) methods are used to advance the prediction models.
In the present study, two variables of livestock production and agricultural
production were considered as the source of food production. Three variables
were used to evaluate livestock production, namely livestock yield, live
animals, and animals slaughtered, and two variables were used to assess
agricultural production, namely agricultural production yields and losses. Iran
was selected as the case study of the current study. Therefore, time-series
data related to livestock and agricultural productions in Iran from 1961 to
2017 have been collected from the FAOSTAT database. First, 70% of this data was
used to train ANFIS and MLP, and the remaining 30% of the data was used to test
the models. The results disclosed that the ANFIS model with Generalized
bell-shaped (Gbell) built-in membership functions has the lowest error level in
predicting food production. The findings of this study provide a suitable tool
for policymakers who can use this model and predict the future of food
production to provide a proper plan for the future of food security and food
supply for the next generations.
| 2104.14286 | 737,909 |
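The generalized bell-shaped (Gbell) membership function singled out by the results has a standard closed form (the same one used by MATLAB's `gbellmf`), with center `c`, width `a`, and slope `b`:

```python
def gbellmf(x, a, b, c):
    """Generalized bell-shaped membership function used by ANFIS:
    f(x) = 1 / (1 + |(x - c) / a|^(2b)). The value is 1 at the center
    x = c and exactly 0.5 at x = c ± a."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

mu = gbellmf(7.0, 2.0, 3.0, 5.0)  # → 0.5, one half-width from the center
```

In ANFIS, each input variable gets several such membership functions whose `a`, `b`, `c` parameters are tuned during training.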
A comprehensive analysis of energy requirements and emissions associated with
electric vehicles, ranging from mining and making the rare-earth magnets
required in electric motors to assembling the Li-ion battery, including charging
and regular running of the electric vehicles has been performed. A simple,
analytical procedure is used to determine the embodied energy and emissions.
The objective is to assess the potential of electric cars to reduce greenhouse
gas emissions, limiting global warming to < 1.5 degrees C by the year 2050 as
per IPCC recommendations, and to compare them with conventional fuel-driven
cars. The combined embodied energy for Nd- and Dy-metals production which are
required in electric motors and battery assembly for 150 million cars,
projected to be on the road in the year 2050 is ~ 1500 TWh and the CO2
emissions is found to be > 600 MT. The emissions includes carbon intensity of
electrical energy required to run these electric vehicles. The projected
emissions due to fossil fuels, i.e., gasoline production as well as its burning
in combustion engines, however, are only 412 MT, far less than those due to electric
vehicles. The main contributor to emissions from electric vehicles is the
battery assembling process which releases ~ 379 MT of CO2-e gases. The
emissions from both electric vehicles as well as combustion engine vehicles
scale linearly with the number of vehicles, indicating that a breakeven is not
possible with the currently available manufacturing technologies. These results
clearly show that significant technological developments have to take place in
electric vehicles so that they become environmentally better placed compared to
combustion engine based cars.
| 2104.14287 | 737,909 |
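The linear-scaling argument can be checked against the quoted figures. The per-car number below is back-derived from the stated fleet total (379 MT of CO2-e for battery assembly across 150 million cars) purely for illustration:

```python
def fleet_emissions_mt(per_car_t, n_cars):
    """Total emissions in megatonnes when per-car embodied emissions
    (in tonnes CO2-e) scale linearly with fleet size, as the paper
    argues for both vehicle types."""
    return per_car_t * n_cars / 1e6

# ≈ 2.53 t CO2-e per battery pack, back-derived from the fleet total.
battery_per_car_t = 379e6 / 150e6
total = fleet_emissions_mt(battery_per_car_t, 150e6)   # → 379.0 MT
```

Because both curves are linear in the number of vehicles, a fleet-size breakeven cannot occur unless the per-car coefficients themselves change, which is the paper's point about manufacturing technology.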
In a system with inversion symmetry broken, a second-order nonlinear Hall
effect can survive even in the presence of time-reversal symmetry. In this
work, we show that a giant nonlinear Hall effect can exist in twisted bilayer
WTe2 system. The Berry curvature dipole of twisted bilayer WTe2 ({\theta} =
29.4{\deg}) can reach up to ~1400 {\AA}, which is much larger than that in
previously reported nonlinear Hall systems. In twisted bilayer WTe2 system,
there exists abundant band anticrossings and band inversions around the Fermi
level, which brings a complicated distribution of Berry curvature and leads
the nonlinear Hall signals to exhibit dramatically oscillating behavior in this
system. Its large amplitude and high tunability indicate that the twisted
bilayer WTe2 can be an excellent platform for studying the nonlinear Hall
effect.
| 2104.14288 | 737,909 |
Neural ODE Processes approach the problem of meta-learning for dynamics using
a latent variable model, which permits a flexible aggregation of contextual
information. This flexibility is inherited from the Neural Process framework
and allows the model to aggregate sets of context observations of arbitrary
size into a fixed-length representation. In the physical sciences, we often
have access to structured knowledge in addition to raw observations of a
system, such as the value of a conserved quantity or a description of an
understood component. Taking advantage of the aggregation flexibility, we
extend the Neural ODE Process model to use additional information within the
Learning Using Privileged Information setting, and we validate our extension
with experiments showing improved accuracy and calibration on simulated
dynamics tasks.
| 2104.14290 | 737,909 |
Control of molecular orientation is emerging as crucial for the
characterization of the stereodynamics of kinetics processes beyond structural
stereochemistry. The special role played in chiral discrimination phenomena has
been particularly emphasized by the authors after their extensive probes of
experimental control of molecular alignment and orientation. In this work, the
role of the orientation has been demonstrated for the first time in
first-principles molecular dynamics simulations: stationary points
characterized on potential energy surfaces have been calculated for the study
of chemical reactions occurring between the bisulfide anion HS- and oriented
prototypical chiral molecules CHFXY (where X = CH3 or CN and Y = Cl or I). The
important reaction channels are those corresponding to bimolecular nucleophilic
substitution (SN2) and to bimolecular elimination (E2): their relative role has
been assessed and alternative pathways due to the mirror forms of the oriented
chiral molecule are revealed by the different reactivity of the two enantiomers
of CHFCNI in SN2 reaction.
| 2104.14292 | 737,909 |
An energetic muon beam is an attractive key to unlock new physics beyond the
Standard Model: the lepton flavor violation or the anomalous magnetic moment,
and also is a competitive candidate for the expected neutrino factory. Lots of
the muon scientific applications are limited by low flux cosmic-ray muons, low
energy muon sources or extremely expensive muon accelerators. An prompt
acceleration of the low-energy muon beam is found in the beam-driven plasma
wakefield up to $\mathrm{TV/m}$. The muon beam is accelerated from
$275\mathrm{MeV}$ to more than $10\mathrm{GeV}$ within $22.5\mathrm{ps}$.
Choosing the injection time of the muon beam in a proper range, the
longitudinal spatial distribution and the energy distribution of the
accelerated muon beam are compressed. The efficiency of the energy transfer
from the driven electron beam to the muon beam can reach $20\%$. The prompt
acceleration scheme is a promising avenue to bring the expected neutrino
factory and the muon collider into reality and to catch new physics beyond the
Standard Model.
| 2104.14293 | 737,909 |
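The quoted numbers imply the average accelerating gradient directly: roughly 9.7 GeV gained while traveling at essentially c for 22.5 ps corresponds to about 1.4 TV/m, consistent with the stated wakefield strength. A quick check (assuming v ≈ c throughout the window):

```python
C = 299_792_458.0  # speed of light, m/s

def mean_gradient_tv_per_m(e_in_gev, e_out_gev, dt_ps):
    """Average accelerating gradient implied by an energy gain over a
    time window, assuming the muons travel essentially at c."""
    distance_m = C * dt_ps * 1e-12          # path length in the window
    return (e_out_gev - e_in_gev) / distance_m / 1e3   # GeV/m -> TV/m

g = mean_gradient_tv_per_m(0.275, 10.0, 22.5)   # ≈ 1.4 TV/m
```

The ~6.7 mm acceleration length this implies is why such plasma stages are so much more compact than conventional RF accelerators.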
In this paper, we question if self-supervised learning provides new
properties to Vision Transformer (ViT) that stand out compared to convolutional
networks (convnets). Beyond the fact that adapting self-supervised methods to
this architecture works particularly well, we make the following observations:
first, self-supervised ViT features contain explicit information about the
semantic segmentation of an image, which does not emerge as clearly with
supervised ViTs, nor with convnets. Second, these features are also excellent
k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study
also underlines the importance of the momentum encoder, multi-crop training, and
the use of small patches with ViTs. We implement our findings into a simple
self-supervised method, called DINO, which we interpret as a form of
self-distillation with no labels. We show the synergy between DINO and ViTs by
achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.
| 2104.14294 | 737,909 |
The purpose of this paper is to prove that if on a commutative hypergroup an
exponential monomial has the property that the linear subspace of all sine
functions in its variety is one dimensional, then this exponential monomial is
a linear combination of generalized moment functions.
| 2104.14295 | 737,909 |
BACKGROUND: Software engineering is a human activity. People naturally make
sense of their activities and experience through storytelling. But storytelling
does not appear to have been properly studied by software engineering research.
AIM: We explore the question: what contribution can storytelling make to
human-centric software engineering research? METHOD: We define concepts,
identify types of story and their purposes, outcomes and effects, briefly
review prior literature, identify several contributions and propose next steps.
RESULTS: Storytelling can, amongst other contributions, contribute to data
collection, data analyses, ways of knowing, research outputs, interventions in
practice, and advocacy, and can integrate with evidence and arguments. Like all
methods, storytelling brings risks. These risks can be managed. CONCLUSION:
Storytelling provides a potential counterbalance to abstraction, and an
approach to retain and honour human meaning in software engineering.
| 2104.14296 | 737,909 |
Training Automatic Speech Recognition (ASR) models under federated learning
(FL) settings has recently attracted considerable attention. However, the FL
scenarios often presented in the literature are artificial and fail to capture
the complexity of real FL systems. In this paper, we construct a challenging
and realistic ASR federated experimental setup consisting of clients with
heterogeneous data distributions using the French Common Voice dataset, a large
heterogeneous dataset containing over 10k speakers. We present the first
empirical study of an attention-based sequence-to-sequence E2E ASR model
trained with three aggregation weighting strategies -- standard FedAvg,
loss-based aggregation, and a novel word error rate (WER)-based aggregation --
in two realistic FL scenarios: cross-silo with 10 clients and cross-device with
2k clients. In particular, the WER-based weighting method is proposed to better
adapt FL to the context of ASR by integrating the error rate metric with the
aggregation process. Our analysis on E2E ASR from heterogeneous and realistic
federated acoustic models provides the foundations for future research and
development of realistic FL-based ASR applications.
| 2104.14297 | 737,909 |
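The aggregation weighting strategies compared in the abstract above can be sketched with a minimal server-side update. The exact WER-to-weight mapping is an assumption (here, lower WER simply earns a proportionally larger weight); only the FedAvg case, weighting by local sample counts, is standard.

```python
import numpy as np

def aggregate(client_weights, client_metrics, scheme="fedavg"):
    """Aggregate per-client parameter vectors into a global model.

    `client_metrics` holds local sample counts for FedAvg, or per-client
    WERs for the (assumed) WER-based scheme."""
    metrics = np.asarray(client_metrics, dtype=float)
    if scheme == "fedavg":      # weight by number of local samples
        coef = metrics
    elif scheme == "wer":       # weight inversely to word error rate
        coef = 1.0 / (metrics + 1e-8)
    else:
        raise ValueError(scheme)
    coef = coef / coef.sum()                    # normalise to a convex combination
    stacked = np.stack(client_weights)          # (n_clients, n_params)
    return (coef[:, None] * stacked).sum(axis=0)
```

Loss-based aggregation would follow the same pattern with per-client validation losses in place of WERs.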
In this paper we develop and significantly extend the thermal phase change
model, introduced in [12], describing the process of paraffinic wax layer
formation on the interior wall of a circular pipe transporting heated oil, when
subject to external cooling. In particular we allow for the natural dependence
of the solidifying paraffinic wax conductivity on local temperature. We are
able to develop a complete theory, and provide efficient numerical
computations, for this extended model. Comparison with recent experimental
observations is made, and this, together with recent reviews of the physical
mechanisms associated with wax layer formation, provides significant support for
the thermal model considered here.
| 2104.14298 | 737,909 |
The four-bar linkage is a basic arrangement of mechanical engineering and
represents the simplest movable system formed by a closed sequence of
bar-shaped bodies. Although the mechanism can have in general a spatial
arrangement, we focus here on the prototypical planar case, starting however
from a spatial viewpoint. The classification of the mechanism relies on the
angular range spanned by the rotational motion of the bars allowed by the
ratios among their lengths and is established by conditions for the existence
of either one or more bars allowed to move as cranks, namely to be permitted to
rotate through the full 360-degree range (Grashof cases), or as rockers with limited
angular ranges (non-Grashof cases). In this paper, we provide a view on the
connections between the "classic" four-bar problem and the theory of 6j symbols
of quantum mechanical angular momentum theory, occurring in a variety of
contexts in pure and applied quantum mechanics. The general case and a series
of symmetric configurations are illustrated, by representing the range of
existence of the related quadrilaterals on a square "screen" (namely as a
function of their diagonals) and by discussing their behavior according both to
the Grashof conditions and to the Regge symmetries, concertedly considering the
classification of the two mechanisms and that of the corresponding objects of
the quantum mechanical theory of angular momentum. An interesting topological
difference is demonstrated between mechanisms belonging to the two Regge
symmetric configurations: the movements in the Grashof cases span chirality
preserving configurations with a 2 pi-cycle of a rotating bar, while by
contrast the non-Grashof cases span both enantiomeric configurations with a 4
pi-cycle.
| 2104.14299 | 737,909 |
Path planning is an important topic in robotics. Recently, value iteration
based deep learning models have achieved good performance such as Value
Iteration Network (VIN). However, previous methods suffer from slow convergence
and low accuracy on large maps, hence restricted in path planning for agents
with complex kinematics such as legged robots. Therefore, we propose a new
value iteration based path planning method called the Capability Iteration
Network (CIN). CIN utilizes sparse reward maps and encodes the capability of the
agent with state-action transition probability, rather than a convolution
kernel in previous models. Furthermore, two training methods including
end-to-end training and training capability module alone are proposed, both of
which speed up convergence greatly. Several path planning experiments in
various scenarios, including on 2D, 3D grid world and real robots with
different map sizes are conducted. The results demonstrate that CIN has higher
accuracy, faster convergence, and lower sensitivity to random seed compared to
previous VI-based models, hence more applicable for real robot path planning.
| 2104.14300 | 737,909 |
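The value-iteration recurrence that VIN/CIN-style networks unroll can be illustrated with a plain tabular sketch on a grid world. The grid encoding, deterministic moves, and reward-on-entry convention are illustrative assumptions, not the paper's learned transition model.

```python
import numpy as np

def value_iteration(reward, passable, gamma=0.95, iters=200):
    """Classical tabular value iteration: V <- max_a [r(next) + gamma * V(next)].

    `reward` is a (sparse) reward map, `passable` marks traversable cells."""
    h, w = reward.shape
    V = np.zeros((h, w))
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(iters):
        newV = np.copy(V)
        for y in range(h):
            for x in range(w):
                if not passable[y, x]:
                    continue
                vals = []
                for dy, dx in moves:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and passable[ny, nx]:
                        vals.append(reward[ny, nx] + gamma * V[ny, nx])
                if vals:
                    newV[y, x] = max(vals)
        V = newV
    return V
```

A VIN replaces the hand-coded transition loop with a learned convolution; the CIN of the abstract instead encodes agent capability as a state-action transition probability.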
Analyzing the financial benefit of marketing is still a critical topic for
both practitioners and researchers. Companies consider marketing costs as a
type of investment and expect this investment to be returned to the company in
the form of profit. On the other hand, companies adopt different innovative
strategies to increase their value. Therefore, this study aims to test the
impact of marketing investment on firm value and systematic risk. To do so,
data related to four Arabic emerging markets during the period 2010-2019 are
considered, and firm share price and beta share are considered to measure firm
value and systematic risk, respectively. Since a firm's ownership concentration
is a determinant factor in firm value and systematic risk, this variable is
considered a moderating variable in the relationship between marketing
investment and firm value and systematic risk. The findings of the study, using
panel data regression, indicate that increasing investment in marketing has a
positive effect on firm value. It is also found that the
ownership concentration variable has a reinforcing role in the relationship
between marketing investment and firm value. It is also found that ownership
concentration moderates systematic risk, in line with the monitoring effect of
controlling shareholders. This study provides a logical combination of governance-marketing
dimensions to interpret performance indicators in the capital market.
| 2104.14301 | 737,909 |
Observation of quantum phenomena in cryogenic, optically cooled mechanical
resonators has been recently achieved by a few experiments based on cavity
optomechanics. A well-established experimental platform is based on a thin-film
stoichiometric Si$_3$N$_4$ nanomembrane embedded in a Fabry-Perot cavity,
where the coupling with the light field is provided by the radiation pressure
of the light impinging on the membrane surface. Two crucial parameters have to
be optimized to ensure that these systems work at the quantum level: the
cooperativity $ C$ describing the optomechanical coupling and the product $ Q
\times \nu$ (quality factor - resonance frequency) related to the decoherence
rate. A significant increase of the latter can be obtained with high
aspect-ratio membrane resonators where uniform stress dilutes the mechanical
dissipation. Furthermore, ultra-high $Q \times \nu$ can be reached by
drastically reducing the edge dissipation via clamp-tapering and/or by
soft-clamping, virtually a clamp-free resonator configuration. In this work, we
investigate, theoretically and experimentally, the edge loss mechanisms
comparing two state-of-the-art resonators built by standard
micro/nanofabrication techniques. The corresponding results would provide
meaningful guidelines for designing new ultra-coherent resonating devices.
| 2104.14302 | 737,909 |
For the measurement of the dynamics of fusion-born alpha particles $E_\alpha
\leq 3.5$ MeV in ITER using collective Thomson scattering (CTS), safe
transmission of a gyrotron beam at mm-wavelength (1 MW, 60 GHz) passing the
electron cyclotron resonance (ECR) in the in-vessel tokamak `port plug' vacuum
is a prerequisite. Depending on neutral gas pressure and composition,
ECR-assisted gas breakdown may occur at the location of the resonance, which
must be mitigated for diagnostic performance and safety reasons. The concept of
a split electrically biased waveguide (SBWG) has been previously demonstrated
in [C.P. Moeller, U.S. Patent 4,687,616 (1987)]. The waveguide is
longitudinally split and a kV bias voltage applied between the two halves.
Electrons are rapidly removed from the central region of high radio frequency
electric field strength, mitigating breakdown. As a full scale experimental
investigation of gas and electromagnetic field conditions inside the ITER
equatorial port plugs is currently unattainable, a corresponding Monte Carlo
simulation study is presented. Validity of the Monte Carlo electron model is
demonstrated with a prediction of ECR breakdown and the mitigation pressure
limits for the above quoted reference case with $^1$H$_2$ (and pollutant high
$Z$ elements). For the proposed ITER CTS design with a 88.9 mm inner diameter
SBWG, ECR breakdown is predicted to occur down to a pure $^1$H$_2$ pressure of
0.3 Pa, while mitigation is shown to be effective at least up to 10 Pa using a
bias voltage of 1 kV. The analysis is complemented by results for relevant
electric/magnetic field arrangements and limitations of the SBWG mitigation
concept are addressed.
| 2104.14303 | 737,909 |
Coded modulation with probabilistic amplitude shaping (PAS) is considered for
intensity modulation/direct detection channels with a transmitter peak-power
constraint. PAS is used to map bits to a uniform PAM-6 distribution and
outperforms PAM-8 for rates up to around 2.3 bits per channel use. PAM-6 with
PAS also outperforms a cross-shaped QAM-32 constellation by up to 1 dB and 0.65
dB after bit-metric soft- and hard decoding, respectively. An alternative PAM-6
scheme based on a framed cross-shaped QAM-32 constellation is proposed that
shows similar gains.
| 2104.14304 | 737,909 |
Latent heat thermal energy storage (LHTES) has been recommended as an
effective technology to the thermal management system of space exploration for
its excellent ability of storing thermal energy. However, it is well known that
the low thermal conductivity of phase change material (PCM) seriously weakens
the heat charging and discharging rates of LHTES system. In present study, the
electrohydrodynamic (EHD), which is a popular active heat transfer enhancement
technology, is introduced to enhance the PCM melting in a shell-tube LHTES
system under microgravity. In our numerical simulations, we mainly focus on the
combined effects of the electric Rayleigh number $T$ and the eccentricity
$\Gamma$ on the melting performance under microgravity. Based on the numerical
results, it is found that in contrast to the case without the electric field,
the presence of the electric field causes the heat transfer efficiency and
melting behavior of LHTES system to be enhanced significantly. In addition, our
results indicate that the EHD technique always shows good performance in
accelerating the melting process even under microgravity; in particular, the
maximum time saving in some cases exceeds $90\%$. Furthermore, we note
that although the concentric annulus is always the optimal configuration under
no-gravity condition, the optimal eccentric position of the internal tube
strongly depends on the electric Rayleigh number if the gravity effects are
taken into account.
| 2104.14305 | 737,909 |
We construct recently introduced palatial NC twistors by considering the pair
of conjugated (Born-dual) twist-deformed $D=4$ quantum inhomogeneous conformal
Hopf algebras $\mathcal{U}_{\theta }(su(2,2)\ltimes T^{4}$) and
$\mathcal{U}_{\bar{\theta}}(su(2,2)\ltimes\bar{T}^{4}$), where $T^{4}$ describes
complex twistor coordinates and $\bar{T}^{4}$ the conjugated dual twistor
momenta. The palatial twistors are suitably chosen as the quantum-covariant
modules (NC representations) of the introduced
Born-dual Hopf algebras. Subsequently we introduce the quantum deformations
of $D=4$ Heisenberg-conformal algebra (HCA) $su(2,2)\ltimes H^{4,4}_\hslash$
($H^{4,4}_\hslash=\bar{T}^4 \ltimes_\hslash T_4$ is the Heisenberg algebra of
twistorial oscillators) providing in twistorial framework the basic covariant
quantum elementary system.
The class of algebras describing deformation of HCA with dimensionfull
deformation parameter, linked with Planck length $\lambda_p$ will be called the
twistorial DSR (TDSR) algebra, following the terminology of DSR algebra in
space-time framework.
We shall describe the examples of TDSR algebra linked with Palatial twistors
which are introduced by the Drinfeld twist and by the quantization map in
$H_\hslash^{4,4}$. We introduce as well generalized quantum twistorial phase
space by considering the Heisenberg double of Hopf algebra
$\mathcal{U}_\theta(su(2,2)\ltimes T^4).$
| 2104.14306 | 737,909 |
We report the detection of a short-lived narrow quasi-periodic oscillation
(QPO) at ~88 mHz in an Insight-HXMT observation during the soft state of the
persistent black hole high mass X-ray binary Cygnus X-1. This QPO is
significantly detected in all three instruments of Insight-HXMT, i.e., over the
broad energy range 1-250 keV. The fractional RMS of the QPO shows no significant
variation above 3 keV (~5%) but decreases at lower energies (~2%).
We show that this QPO is different from the type-A, -B, and -C QPOs usually
observed in black hole X-ray binaries. Comparing with QPOs at similar
frequencies previously detected in other persistent high mass X-ray binaries in
the soft state, we speculate that such QPOs might relate to some
local inhomogeneity rarely formed in the accretion flow of wind-fed accretion
systems.
| 2104.14307 | 737,909 |
Aberrations limit scanning fluorescence microscopy when imaging in scattering
materials such as biological tissue. Model-based approaches for adaptive optics
take advantage of a computational model of the optical setup. Such models can
be combined with the optimization techniques of machine learning frameworks to
find aberration corrections, as was demonstrated for focusing a laser beam
through aberrations onto a camera [arXiv:2007.13400]. Here, we extend this
approach to two-photon scanning microscopy. The developed sensorless technique
finds corrections for aberrations in scattering samples and will be useful for
a range of imaging applications, for example in brain tissue.
| 2104.14308 | 737,909 |
Improving the clock stability is of fundamental importance for the
development of quantum-enhanced metrology. One of the main limitations arises
from the randomly-fluctuating local oscillator (LO) frequency, which introduces
"phase slips" for long interrogation times and hence failure of the
frequency-feedback loop. Here we propose a strategy to improve the stability of
atomic clocks by interrogating two out-of-phase states sharing the same LO.
While standard Ramsey interrogation can only determine phases unambiguously in
the interval $[-\pi/2,\pi/2]$, the joint interrogation allows for an extension
to $[-\pi,\pi]$, resulting in a relaxed restriction of the Ramsey time and
improvement of absolute clock stability. Theoretical predictions are supported
by ab-initio numerical simulation for white and correlated LO noise. While our
basic protocol uses uncorrelated atoms, we have further extended it to include
spin-squeezing, which improves the scaling of clock stability with the
number of atoms. Our protocol can be readily tested in current state-of-the-art
experiments.
| 2104.14309 | 737,909 |
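The phase-range extension claimed in the abstract above can be illustrated with a toy signal model. Assuming (as a simplification) that the two out-of-phase interrogations yield noiseless signals proportional to sin(phi) and cos(phi), the joint estimate resolves the standard Ramsey ambiguity via a two-argument arctangent.

```python
import numpy as np

def estimate_phase(p_sin, p_cos):
    """Joint-interrogation phase estimate (illustrative model).

    A single Ramsey signal ~ sin(phi) only fixes phi within [-pi/2, pi/2];
    adding the out-of-phase signal ~ cos(phi) resolves the ambiguity over
    the full (-pi, pi] range via atan2."""
    return np.arctan2(p_sin, p_cos)
```

Real interrogations are noisy and the abstract's ab-initio simulations include correlated LO noise; this sketch only shows why the invertible phase interval doubles.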
We present a Dicke state preparation scheme which uses global control of $N$
spin qubits: our scheme is based on the standard phase estimation algorithm,
which estimates the eigenvalue of a unitary operator. The scheme prepares a
Dicke state non-deterministically by collectively coupling the spins to an
ancilla qubit via a $ZZ$-interaction, using $\lceil \log_2 N \rceil + 1$ ancilla
qubit measurements. The preparation of such Dicke states can be useful if the
spins in the ensemble are used for magnetic sensing: we discuss a possible
realization using an ensemble of electronic spins located at diamond
Nitrogen-Vacancy (NV) centers coupled to a single superconducting flux qubit.
We also analyze the effect of noise and limitations in our scheme.
| 2104.14310 | 737,909 |
We revisit large spectroscopic data sets for field stars from the literature
to derive the upper Li envelope in the high metallicity regime in our Galaxy.
We take advantage of Gaia EDR3 data and state-of-the-art stellar models to
precisely determine the position of the sample dwarf stars in the
Hertzsprung-Russell diagram. The highest Li abundances are found in field
metal-rich warm dwarfs from the GALAH survey, located on the hot side of the
Li-dip. Their mean Li value agrees with what was recently derived for warm
dwarfs in metal-rich clusters, pointing towards a continuous increase of Li up
to super-solar metallicity. However, if only cool dwarfs are considered in
GALAH, as done in the other literature surveys, it is found that the upper Li
envelope decreases at super-solar metallicities, blurring the actual Li
evolution picture. We confirm the suggestion that field and open cluster
surveys that found opposite Li behaviour in the high metallicity regime do not
sample the same types of stars: The first ones, with the exception of GALAH,
miss warm dwarfs that can potentially preserve their original Li content.
Although we can discard the bending of the Li upper envelope at high
metallicity derived from the analysis of cool star samples, we still need to
evaluate the effects of atomic diffusion on warm, metal-rich early-F and late-A
type dwarfs before deriving the actual Li abundance at high metallicity.
| 2104.14311 | 737,909 |
We consider the Navier-Stokes system in three dimensions perturbed by a
transport noise which is sufficiently smooth in space and rough in time. The
existence of a weak solution was proved recently, however, as in the
deterministic setting the question of uniqueness remains a major open problem.
An important feature of systems with uniqueness is the semigroup property
satisfied by their solutions. Without uniqueness, this property cannot hold
generally. We select a system of solutions satisfying the semigroup property
with appropriately shifted rough path. In addition, the selected solutions
respect the well-accepted admissibility criterion for physical solutions,
namely, maximization of the energy dissipation. Finally, under suitable
assumptions on the driving rough path, we show that the Navier-Stokes system
generates a measurable random dynamical system. To the best of our knowledge,
this is the first construction of a measurable single-valued random dynamical
system in the state space for an SPDE without uniqueness.
| 2104.14312 | 737,909 |
We develop a reliable, fully automatic method for the detection of coronal
holes, that provides consistent full-disk segmentation maps over the full solar
cycle and can perform in real-time. We use a convolutional neural network to
identify the boundaries of coronal holes from the seven EUV channels of the
Atmospheric Imaging Assembly (AIA) as well as from line-of-sight magnetograms
from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics
Observatory (SDO). For our primary model (Coronal Hole RecOgnition Neural
Network Over multi-Spectral-data; CHRONNOS) we use a progressively growing
network approach that allows for efficient training, provides detailed
segmentation maps and takes relations across the full solar-disk into account.
We provide a thorough evaluation for performance, reliability and consistency
by comparing the model results to an independent manually curated test set. Our
model shows good agreement to the manual labels with an intersection-over-union
(IoU) of 0.63. From the total of 261 coronal holes with an area
$>1.5\cdot10^{10}$ km$^2$ identified during the time range 11/2010 - 12/2016,
98.1% were correctly detected by our model. The evaluation over almost the full
solar cycle no. 24 shows that our model provides reliable coronal hole
detections, independent of the level of solar activity. From the direct
comparison over short time scales of days to weeks, we find that our model
exceeds human performance in terms of consistency and reliability. In addition,
we train our model to identify coronal holes from each channel separately and
show that the neural network provides the best performance with the combined
channel information, but that coronal hole segmentation maps can be also
obtained solely from line-of-sight magnetograms.
| 2104.14313 | 737,909 |
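The intersection-over-union score used above to compare predicted coronal-hole maps with manual labels is a standard segmentation metric; a minimal implementation on binary masks might look as follows (the empty-mask convention is an assumption).

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-union between two binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:      # both masks empty: define IoU as 1 (perfect agreement)
        return 1.0
    return float(np.logical_and(pred, target).sum() / union)
```

An IoU of 0.63, as reported in the abstract, means the overlap of predicted and labelled coronal-hole pixels covers 63% of their union.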
The new generation of pre-trained NLP models push the SOTA to the new limits,
but at the cost of computational resources, to the point that their use in real
production environments is often prohibitively expensive. We tackle this
problem by evaluating not only the standard quality metrics on downstream tasks
but also the memory footprint and inference time. We present MOROCCO, a
framework to compare language models compatible with the \texttt{jiant}
environment, which supports over 50 NLU tasks, including the SuperGLUE benchmark and multiple
probing suites. We demonstrate its applicability for two GLUE-like suites in
different languages.
| 2104.14314 | 737,909 |
In many multiagent environments, a designer has some, but limited control
over the game being played. In this paper, we formalize this by considering
incompletely specified games, in which some entries of the payoff matrices can
be chosen from a specified set. We show that it is NP-hard for the designer to
make these choices optimally, even in zero-sum games. In fact, it is already
intractable to decide whether a given action is (potentially or necessarily)
played in equilibrium. We also consider incompletely specified symmetric games
in which all completions are required to be symmetric. Here, hardness holds
even in weak tournament games (symmetric zero-sum games whose entries are all
-1, 0, or 1) and in tournament games (symmetric zero-sum games whose
non-diagonal entries are all -1 or 1). The latter result settles the complexity
of the possible and necessary winner problems for a social-choice-theoretic
solution concept known as the bipartisan set. We finally give a mixed-integer
linear programming formulation for weak tournament games and evaluate it
experimentally.
| 2104.14317 | 737,909 |
The decoupling of heavy fields as required by the Appelquist-Carazzone
theorem plays a fundamental role in the construction of any effective field
theory. However, it is not a trivial task to implement a renormalization
prescription that produces the expected decoupling of massive fields, and it is
even more difficult in curved spacetime. Focused on this idea, we consider the
renormalization of the one-loop effective action for the Yukawa interaction
with a background scalar field in curved space. We compute the beta functions
within a generalized DeWitt-Schwinger subtraction procedure and discuss the
decoupling in the running of the coupling constants. For the case of a
quantized scalar field, all the beta functions exhibit decoupling, including
also the gravitational ones. For a quantized Dirac field, decoupling appears
for almost all the beta functions. We obtain the anomalous result that the mass
of the background scalar field does not decouple.
| 2104.14318 | 737,909 |
Every x-adjustment in the so-called xVA financial risk management framework
relies on the computation of exposures. Considering thousands of Monte Carlo
paths and tens of simulation steps, a financial portfolio needs to be evaluated
numerous times during the lifetime of the underlying assets. This is the
bottleneck of every simulation of xVA. In this article, we explore numerical
techniques for improving the simulation of exposures. We aim to decimate the
number of portfolio evaluations, particularly for large portfolios involving
multiple, correlated risk factors. The usage of the Stochastic Collocation (SC)
method, together with Smolyak's sparse grid extension, allows for a significant
reduction in the number of portfolio evaluations, even when dealing with many
risk factors. The proposed model can be easily applied to any portfolio and
size. We report that for a realistic portfolio comprising linear derivatives,
the expected reduction in the portfolio evaluations may exceed 6000 times,
depending on the dimensionality and the required accuracy. We give illustrative
examples and examine the method with realistic multi-currency portfolios.
| 2104.14319 | 737,909 |
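The core idea above, replacing many expensive portfolio evaluations by a cheap interpolant built from a few collocation nodes, can be sketched in one risk factor. This is a bare Lagrange-interpolation surrogate under assumed names; the Smolyak sparse-grid extension to many correlated risk factors is not shown.

```python
import numpy as np

def collocation_surrogate(expensive_value_fn, nodes):
    """Stochastic-collocation surrogate for a costly pricer.

    The expensive function is called only once per collocation node;
    any other scenario is then priced by Lagrange interpolation."""
    vals = np.array([expensive_value_fn(x) for x in nodes])
    def surrogate(x):
        total = 0.0
        for j, xj in enumerate(nodes):
            # Lagrange basis polynomial l_j evaluated at x
            lj = np.prod([(x - xk) / (xj - xk)
                          for k, xk in enumerate(nodes) if k != j])
            total += vals[j] * lj
        return total
    return surrogate
```

With n nodes the surrogate is exact for polynomials of degree n-1, which is why a handful of evaluations can stand in for thousands of Monte Carlo re-pricings when the exposure profile is smooth.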
Recently, researchers have utilized neural networks to accurately solve
partial differential equations (PDEs), enabling the mesh-free method for
scientific computation. Unfortunately, the network performance drops when
encountering a high nonlinearity domain. To improve the generalizability, we
introduce the novel approach of employing multi-task learning techniques, the
uncertainty-weighting loss and the gradients surgery, in the context of
learning PDE solutions. The multi-task scheme exploits the benefits of learning
shared representations, controlled by cross-stitch modules, between multiple
related PDEs, which are obtainable by varying the PDE parameterization
coefficients, to generalize better on the original PDE. To encourage the
network to pay closer attention to the high-nonlinearity regions that are more
challenging to learn, we also propose adversarial training for generating
supplementary high-loss samples, similarly distributed to the original training
distribution. In the experiments, our proposed methods are found to be
effective and reduce the error on the unseen data points as compared to the
previous approaches in various PDE examples, including high-dimensional
stochastic PDEs.
| 2104.14320 | 737,909 |
The nanoscale mode volumes of surface plasmon polaritons have enabled
plasmonic lasers and condensates with ultrafast operation. Most plasmonic
lasers are based on noble metals, rendering the optical mode structure inert to
external fields. Here, we demonstrate active magnetic-field control over lasing
in a periodic array of Co/Pt multilayer nanodots immersed in an IR-140 dye
solution. We exploit magnetic circular dichroism (MCD) at the excitation
wavelength to modify the optical absorption of the nanodots as a function of
their magnetization. Under circularly polarized excitation, angle-resolved
photoluminescence measurements reveal a transition between lasing action and
non-lasing emission as the nanodot magnetization is reversed. Our results
introduce magnetization as a means of externally controlling plasmonic
nanolasers, complementary to the modulation by excitation, gain medium, or
substrate. Further, the results show how the effects of magnetization on light
that are inherently weak become prominent at the lasing regime, inspiring
studies of topological photonics.
| 2104.14321 | 737,909 |
In this paper we continue the discussion about relations between exponential
polynomials and generalized moment generating functions on a commutative
hypergroup. We are interested in the following problem: is it true that every
finite dimensional variety is spanned by moment functions? Let $m$ be an
exponential on $X$. In our former paper we have proved that if the linear space
of all $m$-sine functions in the variety of an $m$-exponential monomial is (at
most) one dimensional, then this variety is spanned by moment functions
generated by $m$. In this paper we show that this may happen also in cases
where the $m$-sine functions span a more than one dimensional subspace in the
variety. We recall the notion of a polynomial hypergroup in $d$ variables,
describe exponentials on it and give the characterization of the so called
$m$-sine functions. Next we show that the Fourier algebra of a polynomial
hypergroup in $d$ variables is the polynomial ring in $d$ variables. Finally,
using Ehrenpreis--Palamodov Theorem we show that every exponential polynomial
on the polynomial hypergroup in $d$ variables is a linear combination of moment
functions contained in its variety.
| 2104.14322 | 737,909 |
JSON is a popular file and data format that is precisely specified by the
IETF in RFC 8259. Yet, this specification implicitly and explicitly leaves room
for many design choices when it comes to parsing and generating JSON. This
yields the opportunity of diverse behavior among independent implementations of
JSON libraries. A thorough analysis of this diversity can be used by developers
to choose one implementation or to design a resilient multi-version
architecture.
We present the first systematic analysis and comparison of the input / output
behavior of 20 JSON libraries, in Java. We analyze the diversity of
architectural choices among libraries, and we execute each library with
well-formed and ill-formed JSON files to assess their behavior. We first find
that the data structure selected to represent JSON objects and the encoding of
numbers are the main design differences, which influence the behavior of the
libraries. Second, we observe that the libraries behave in a similar way with
regular, well-formed JSON files. However, there is a remarkable behavioral
diversity with ill-formed files, or corner cases such as large numbers or
duplicate data.
| 2104.14323 | 737,909 |
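The kind of behavioral probing described above, feeding well-formed and ill-formed inputs to each library and recording the outcome, can be sketched for any parser exposed as a callable. The corner cases below (duplicate keys, a huge number, trailing garbage) are illustrative; RFC 8259 leaves duplicate-key handling and number precision implementation-defined, which is exactly where libraries diverge.

```python
import json

# Corner-case inputs where independent JSON implementations legitimately differ.
CASES = {
    "duplicate": '{"a": 1, "a": 2}',
    "big_number": '{"n": 1e400}',
    "trailing": '{"a": 1} x',
}

def probe(parser):
    """Record how a parser (any callable str -> object) handles each case."""
    results = {}
    for name, text in CASES.items():
        try:
            results[name] = ("ok", parser(text))
        except Exception as exc:
            results[name] = ("error", type(exc).__name__)
    return results
```

Running `probe` over several libraries and diffing the result tables is a compact way to expose the behavioral diversity the study measures; Python's `json` module, for instance, silently keeps the last duplicate key and parses `1e400` as infinity.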
Motivated by the recent discovery of superconductivity in infinite-layer
nickelate thin films, we report on a synthesis and magnetization study on bulk
samples of the parent compounds ${R}$NiO$_{2}$ (${R}$=La, Pr, Nd). The
frequency-dependent peaks of the AC magnetic susceptibility, along with
remarkable memory effects, characterize spin-glass states. Furthermore, various
phenomenological parameters via different spin glass models show strong
similarity within these three compounds as well as with other rare-earth metal
nickelates. The universal spin-glass behaviour distinguishes the nickelates
from the parent compound CaCuO$_{2}$ of cuprate superconductors, which has the
same crystal structure and $d^9$ electronic configuration but undergoes a
long-range antiferromagnetic order. Our investigations may indicate a
distinctly different nature of magnetism and superconductivity in the bulk
nickelates than in the cuprates.
| 2104.14324 | 737,909 |
We present the far ultraviolet (FUV) imaging of the nearest Jellyfish or
Fireball galaxy IC3418/VCC 1217, in the Virgo cluster of galaxies, using
Ultraviolet Imaging Telescope (UVIT) onboard the ASTROSAT satellite. The young
star formation observed here in the 17 kpc long turbulent wake of IC3418, due
to ram pressure stripping of cold gas surrounded by hot intra-cluster medium,
is a unique laboratory that is unavailable in the Milkyway. We have tried to
resolve star forming clumps, seen compact to GALEX UV images, using better
resolution available with the UVIT and incorporated UV-optical images from
Hubble Space Telescope archive. For the first time, we resolve the compact star
forming clumps (fireballs) into sub-clumps and subsequently into a possibly
dozen isolated stars. We speculate that many of them could be blue supergiant
stars which are cousins of SDSS J122952.66+112227.8, the farthest star (~17
Mpc) we had found earlier surrounding one of these compact clumps. We found
evidence that the star formation rates (4-7.4 x 10^-4 M_sun per yr) in these
fireballs, estimated from UVIT flux densities, increase with the
distance from the parent galaxy. We propose a new dynamical model in which the
stripped gas may be developing vortex street where the vortices grow to compact
star forming clumps due to self-gravity. Gravity winning over turbulent force
with time or length along the trail can explain the puzzling trend of higher
star formation rate and bluer/younger stars observed in fireballs farther away
from the parent galaxy.
| 2104.14325 | 737,909 |
The polarization properties of the elastic electron scattering on H-like ions
are investigated within the framework of the relativistic QED theory. The
polarization properties are determined by a combination of relativistic effects
and spin exchange between the incident and bound electrons. The scattering of a
polarized electron on an initially unpolarized ion is fully described by five
parameters. We study these parameters for non-resonant scattering, as well as
in the vicinity of LL resonances, where scattering occurs through the formation
and subsequent decay of intermediate autoionizing states. The study was carried
out for ions from $\text{B}^{4+}$ to $\text{Xe}^{53+}$. Special attention was
paid to the study of asymmetry in electron scattering.
| 2104.14326 | 737,909 |
Cascade prediction estimates the size or the state of a cascade at either the
microscopic or the macroscopic level. It is of paramount importance for understanding the
information diffusion process such as the spread of rumors and the propagation
of new technologies in social networks. Recently, instead of extracting
hand-crafted features or embedding cascade sequences into feature vectors for
cascade prediction, graph neural networks (GNNs) are introduced to utilize the
network structure which governs the cascade effect. However, these models do
not take into account social factors such as personality traits which drive
human's participation in the information diffusion process. In this work, we
propose a novel multitask framework for enhancing cascade prediction with a
personality recognition task. Specifically, we design a general plug-and-play GNN
gate, named PersonalityGate, to couple into existing GNN-based cascade
prediction models to enhance their effectiveness and extract individuals'
personality traits jointly. Experimental results on two real-world datasets
demonstrate the effectiveness of our proposed framework in enhancing GNN-based
cascade prediction models and in predicting individuals' personality traits as
well.
| 2104.14327 | 737,909 |
In this paper, we establish the entropy-entropy production estimate for the
ES-BGK model, a generalized version of the BGK model of the Boltzmann equation
introduced for better approximation in the Navier-Stokes limit. Our result
improves the previous entropy production estimate [39] in that (1) the full
range of Prandtl parameters $-1/2\leq\nu <1$ including the critical case
$\nu=-1/2$ is covered, and (2) a sharper entropy production bound is obtained.
An explicit characterization of the coefficient of the entropy-entropy
production estimate is also presented.
| 2104.14328 | 737,909 |
A new method is proposed for the solution of the data-driven optimal
transport barycenter problem and of the more general distributional barycenter
problem that the article introduces. The method improves on previous approaches
based on adversarial games, by slaving the discriminator to the generator,
minimizing the need for parameterizations, and allowing the adoption of
general cost functions. It is applied to numerical examples, which include
analyzing the MNIST data set with a new cost function that penalizes
non-isometric maps.
| 2104.14329 | 737,909 |
Models of stellar structure and evolution can be constrained using accurate
measurements of the parameters of eclipsing binary members of open clusters.
Multiple binary stars provide the means to tighten the constraints and, in
turn, to improve the precision and accuracy of the age estimate of the host
cluster. In the previous two papers of this series, we have demonstrated the
use of measurements of multiple eclipsing binaries in the old open cluster
NGC6791 to set tighter constraints on the properties of stellar models than was
previously possible, thereby improving both the accuracy and precision of the
cluster age. We identify and measure the properties of a non-eclipsing cluster
member, V56, in NGC6791 and demonstrate how this provides additional model
constraints that support and strengthen our previous findings. We analyse
multi-epoch spectra of V56 from FLAMES in conjunction with the existing
photometry and measurements of eclipsing binaries in NGC6791. The parameters of
the V56 components are found to be $M_{\rm p}=1.103\pm 0.008 M_{\odot}$ and
$M_{\rm s}=0.974\pm 0.007 M_{\odot}$, $R_{\rm p}=1.764\pm0.099 R_{\odot}$ and
$R_{\rm s}=1.045\pm0.057 R_{\odot}$, $T_{\rm eff,p}=5447\pm125$ K and $T_{\rm
eff,s}=5552\pm125$ K, and surface [Fe/H]=$+0.29\pm0.06$ assuming that they have
the same abundance. The derived properties strengthen our previous best
estimate of the cluster age of $8.3\pm0.3$ Gyr and the mass of stars on the
lower red giant branch (RGB), which is $M_{\rm RGB} = 1.15\pm0.02M_{\odot}$ for
NGC6791. These numbers therefore continue to serve as verification points for
other methods of age and mass measures, such as asteroseismology.
| 2104.14330 | 737,909 |
In a previous paper, we computed the energy density and the non-linear energy
cascade rate for transverse kink waves using Elsasser variables. In this paper,
we focus on the standing kink waves, which are impulsively excited in coronal
loops by external perturbations. We present an analytical calculation to
compute the damping time due to the non-linear development of the
Kelvin-Helmholtz instability. The main result is that the damping time is
inversely proportional to the oscillation amplitude. We compare the damping
times from our formula with the results of numerical simulations and
observations. In both cases we find a reasonably good match. The comparison
with the simulations shows that the non-linear damping dominates in the high
amplitude regime, while the low amplitude regime shows damping by resonant
absorption. In the comparison with the observations, we find a power law
inversely proportional to the amplitude $\eta^{-1}$ as an outer envelope for
our Monte Carlo data points.
| 2104.14331 | 737,909 |
Network dismantling aims to degrade the connectivity of a network by removing
an optimal set of nodes and has been widely adopted in many real-world
applications such as epidemic control and rumor containment. However,
conventional methods usually focus on simple network modeling with only
pairwise interactions, while group-wise interactions modeled by hypernetwork
are ubiquitous and critical. In this work, we formulate the hypernetwork
dismantling problem as a node sequence decision problem and propose a deep
reinforcement learning (DRL)-based hypernetwork dismantling framework. Besides,
we design a novel inductive hypernetwork embedding method to ensure the
transferability to various real-world hypernetworks. In general, our framework
builds an agent that first generates small-scale synthetic hypernetworks and
embeds the nodes and hypernetworks into a low dimensional vector space to
represent the action and state space in DRL, respectively. Then trial-and-error
dismantling tasks are conducted by the agent on these synthetic hypernetworks,
and the dismantling strategy is continuously optimized. Finally, the
well-optimized strategy is applied to real-world hypernetwork dismantling
tasks. Experimental results on five real-world hypernetworks demonstrate the
effectiveness of our proposed framework.
| 2104.14332 | 737,909 |
We present MoonLight, a tool for monitoring temporal and spatio-temporal
properties of mobile and spatially distributed cyber-physical systems (CPS). In
the proposed framework, space is represented as a weighted graph, describing
the topological configurations in which the single CPS entities (nodes of the
graph) are arranged. Both nodes and edges have attributes modelling physical
and logical quantities that can change in time. MoonLight is implemented in
Java and supports the monitoring of Spatio-Temporal Reach and Escape Logic
(STREL). MoonLight can be used as a standalone command line tool, as a Java
API, or via a Matlab interface. We provide some examples using the Matlab
interface and evaluate the tool's performance, also comparing it with other
tools specialized in monitoring only temporal properties.
| 2104.14333 | 737,909 |
We extend the construction of equilibria for linear-quadratic and
mean-variance portfolio problems available in the literature to a large class
of mean-field time-inconsistent stochastic control problems in continuous time.
Our approach relies on a time discretization of the control problem via
n-person games, which are characterized via the maximum principle using
Backward Stochastic Differential Equations (BSDEs). The existence of equilibria
is proved by applying weak convergence arguments to the solutions of n-person
games. A numerical implementation is provided by approximating n-person games
using finite Markov chains.
| 2104.14334 | 737,909 |
The dramatic increase in sensitivity, spectral coverage and resolution of
radio astronomical facilities in recent years has opened new possibilities for
observation of chemical differentiation and isotopic fractionation in
protostellar sources to shed light on their spatial and temporal evolution. In
warm interstellar environments, methanol is an abundant species, hence spectral
data for its isotopic forms are of special interest. In the present work, the
millimeter-wave spectrum of the $^{13}$CH$_3$OD isotopologue has been
investigated over the region from 150$-$510 GHz to provide a set of transition
frequencies for potential astronomical application. The focus is on two types
of prominent $^{13}$CH$_3$OD spectral groupings, namely the $a$-type
$^qR$-branch multiplets and the $b$-type $Q$-branches. Line positions are
reported for the $^qR(J)$ clusters for $J = 3$ to 10 for the $v_{\rm t} = 0$
and 1 torsional states, and for a number of $v_{\rm t} = 0$ and 1 $^rQ(J)$ or
$^pQ(J)$ line series up to $J = 25$. The frequencies have been fitted to a
multi-parameter torsion-rotation Hamiltonian, and upper level excitation
energies have been calculated from the resulting molecular constants.
| 2104.14340 | 737,909 |
Impurities hosted in semiconducting solid matrices represent an extensively
studied platform for quantum computing applications. In this scenario, the
so-called flip-flop qubit emerges as a convenient choice for scalable
implementations in silicon. Flip-flop qubits are realized by implanting a
phosphorus donor in isotopically purified silicon and encoding the logical
states in the donor nuclear spin and in its bound electron. Electrically
modulating the hyperfine interaction by applying a vertical electric field
causes an Electron Dipole Spin Resonance (EDSR) transition between the states
with antiparallel spins
$\{|\downarrow\Uparrow\rangle,|\uparrow\Downarrow\rangle\}$, that are chosen as
the logical states. When two qubits are considered, the dipole-dipole
interaction is exploited, allowing long-range coupling between them. A universal
set of quantum gates for flip-flop qubits is here proposed and the effect of a
realistic 1/f noise on the gate fidelity is investigated for the single qubit
$R_z(-\frac{\pi}{2})$ and Hadamard gate and for the two-qubit $\sqrt{iSWAP}$
gate.
| 2104.14341 | 737,909 |
We study the problem of fairly allocating indivisible items to agents with
different entitlements, which captures, for example, the distribution of
ministries among political parties in a coalition government. Our focus is on
picking sequences derived from common apportionment methods, including five
traditional divisor methods and the quota method. We paint a complete picture
of these methods in relation to known envy-freeness and proportionality
relaxations for indivisible items as well as monotonicity properties with
respect to the resource, population, and weights. In addition, we provide
characterizations of picking sequences satisfying each of the fairness notions,
and show that the well-studied maximum Nash welfare solution fails resource-
and population-monotonicity even in the unweighted setting. Our results serve
as an argument in favor of using picking sequences in weighted fair division
problems.
| 2104.14347 | 737,909 |
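As an illustration of how a picking sequence can be derived from a divisor method, here is a minimal Python sketch of the Jefferson/D'Hondt rule; the function name and tie-breaking by lowest index are our own illustrative choices, not details taken from the paper.

```python
def dhondt_picking_sequence(weights, num_items):
    """Derive a picking sequence from the D'Hondt (Jefferson) divisor method.

    At each turn the next pick goes to the agent i maximizing
    weights[i] / (picks[i] + 1), i.e. the agent currently most
    under-served relative to its entitlement. Ties go to the lowest index.
    """
    picks = [0] * len(weights)
    sequence = []
    for _ in range(num_items):
        i = max(range(len(weights)), key=lambda j: weights[j] / (picks[j] + 1))
        picks[i] += 1
        sequence.append(i)
    return sequence
```

For entitlements 3:2:1 and six items this yields the sequence [0, 1, 0, 0, 1, 2], allocating picks exactly in proportion to the weights.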
We investigate the invariance of the Gibbs measure for the fractional
Schrodinger equation of exponential type (expNLS) $i\partial_t u +
(-\Delta)^{\frac{\alpha}2} u = 2\gamma\beta e^{\beta|u|^2}u$ on $d$-dimensional
compact Riemannian manifolds $\mathcal{M}$, for a dispersion parameter
$\alpha>d$, some coupling constant $\beta>0$, and $\gamma\neq 0$. (i) We first
study the construction of the Gibbs measure for (expNLS). We prove that in the
defocusing case $\gamma>0$, the measure is well-defined in the whole regime
$\alpha>d$ and $\beta>0$ (Theorem 1.1 (i)), while in the focusing case
$\gamma<0$ its partition function is always infinite for any $\alpha>d$ and
$\beta>0$, even with a mass cut-off of arbitrary small size (Theorem 1.1 (ii)).
(ii) We then study the dynamics (expNLS) with random initial data of low
regularity. We first use a compactness argument to prove weak invariance of the
Gibbs measure in the whole regime $\alpha>d$ and $0<\beta < \beta^\star_\alpha$
for some natural parameter $0<\beta^\star_\alpha\sim (\alpha-d)$ (Theorem 1.3
(i)). In the large dispersion regime $\alpha>2d$, we can improve this result by
constructing a local deterministic flow for (expNLS) for any $\beta>0$. Using
the Gibbs measure, we prove that solutions are almost surely global for
$0<\beta \ll\beta^\star_\alpha$, and that the Gibbs measure is invariant
(Theorem 1.3 (ii)). (iii) Finally, in the particular case $d=1$ and
$\mathcal{M}=\mathbb{T}$, we are able to exploit some probabilistic multilinear
smoothing effects to build a probabilistic flow for (expNLS) for
$1+\frac{\sqrt{2}}2<\alpha \leq 2$, locally for arbitrary $\beta>0$ and
globally for $0<\beta \ll \beta^\star_\alpha$ (Theorem 1.5).
| 2104.14348 | 737,909 |
Recognition of hand gestures is one of the most fundamental tasks in
human-robot interaction. Sparse representation based methods have been widely
used due to their efficiency and low requirements on the training data.
Recently, nonconvex regularization techniques including the $\ell_{1-2}$
regularization have been proposed in the image processing community to promote
sparsity while achieving efficient performance. In this paper, we propose a
vision-based human arm gesture recognition model based on the $\ell_{1-2}$
regularization, which is solved by the alternating direction method of
multipliers (ADMM). Numerical experiments on realistic data sets have shown the
effectiveness of this method in identifying arm gestures.
| 2104.14349 | 737,909 |
Recent years have seen tremendous progress in the theoretical understanding
of quantum systems driven dissipatively by coupling them to different baths at
their edges. This was possible because of the concurrent advances in the models
used to represent these systems, the methods employed, and the analysis of the
emerging phenomenology. Here we aim to give a comprehensive review of these
three integrated research directions. We first provide an overarching view of
the models of boundary driven open quantum systems, both in the weak and strong
coupling regimes. This is followed by a review of state-of-the-art analytical
and numerical methods: exact, perturbative, and approximate. Finally, we
discuss the transport properties of some paradigmatic one-dimensional chains,
with an emphasis on disordered and quasiperiodic systems, the emergence of
rectification and negative differential conductance, and the role of phase
transitions.
| 2104.14350 | 737,909 |
A remarkable consequence of the Hohenberg-Kohn theorem of density functional
theory is the existence of an injective map between the electronic density and
any observable of the many electron problem in an external potential. In this
work, we study the problem of predicting a particular observable, the band gap
of semiconductors and band insulators, from the knowledge of the local
electronic density. Using state-of-the-art machine learning techniques, we
predict the experimental band gaps from computationally inexpensive density
functional theory calculations. We propose a modified Behler-Parrinello (BP)
architecture that greatly improves the model capacity while maintaining the
symmetry properties of the BP architecture. Using this scheme, we obtain band
gaps at a level of accuracy comparable to those obtained with state of the art
and computationally intensive hybrid functionals, thus significantly reducing
the computational cost of the task.
| 2104.14351 | 737,909 |
In this paper, we investigate the problem of prescribing Webster scalar
curvatures on compact pseudo-Hermitian manifolds. In terms of the method of
upper and lower solutions and the perturbation theory of self-adjoint
operators, we can describe some sets of Webster scalar curvature functions
which can be realized through pointwise CR conformal deformations and CR
conformally equivalent deformations respectively from a given pseudo-Hermitian
structure.
| 2104.14358 | 737,909 |
Operational urban transport models require gathering heterogeneous sets of
data and often integrate different sub-models. Their systematic validation and
reproducible application therefore remains problematic. We propose in this
contribution to build transport models from the bottom-up using scientific
workflow systems with open-source components and data. These open models are
aimed in particular at estimating congestion of public transport in all UK
urban areas. This allows us to build health indicators related to public
transport density in the context of the COVID-19 crisis, and testing related
policies.
| 2104.14359 | 737,909 |
Video salient object detection (VSOD) is an important task in many vision
applications. Reliable VSOD requires to simultaneously exploit the information
from both the spatial domain and the temporal domain. Most of the existing
algorithms merely utilize simple fusion strategies, such as addition and
concatenation, to merge the information from different domains. Despite their
simplicity, such fusion strategies may introduce feature redundancy, and also
fail to fully exploit the relationship between multi-level features extracted
from both spatial and temporal domains. In this paper, we suggest an adaptive
local-global refinement framework for VSOD. Different from previous approaches,
we propose a local refinement architecture and a global one to refine the
simply fused features with different scopes, which can fully explore the local
dependence and the global dependence of multi-level features. In addition, to
emphasize the effective information and suppress the useless one, an adaptive
weighting mechanism is designed based on graph convolutional neural network
(GCN). We show that our weighting methodology can further exploit the feature
correlations, thus driving the network to learn more discriminative feature
representation. Extensive experimental results on public video datasets
demonstrate the superiority of our method over the existing ones.
| 2104.14360 | 737,909 |
This paper provides maximal function characterizations of anisotropic
Triebel-Lizorkin spaces associated to general expansive matrices for the full
range of parameters $p \in (0,\infty)$, $q \in (0,\infty]$ and $\alpha \in
\mathbb{R}$. The equivalent norm is defined in terms of the decay of wavelet
coefficients, quantified by a Peetre-type space over a one-parameter dilation
group. For the Banach space regime $p,q \geq 1$, we use this characterization
to prove the existence of frames and Riesz sequences of dual molecules for the
Triebel-Lizorkin spaces; the atoms are obtained by translations and anisotropic
dilations of a single function, where neither the translation nor dilation
parameters are required to belong to a discrete subgroup. Explicit criteria for
molecules are given in terms of smoothness, decay and moment conditions.
| 2104.14361 | 737,909 |
In recent years, data and computing resources are typically distributed in
the devices of end users, various regions or organizations. Because of laws or
regulations, the distributed data and computing resources cannot be directly
shared among different regions or organizations for machine learning tasks.
Federated learning (FL) emerges as an efficient approach to exploit distributed data
and computing resources, so as to collaboratively train machine learning
models, while obeying the laws and regulations and ensuring data security and
data privacy. In this paper, we provide a comprehensive survey of existing
works for federated learning. We propose a functional architecture of federated
learning systems and a taxonomy of related techniques. Furthermore, we present
the distributed training, data communication, and security of FL systems.
Finally, we analyze their limitations and propose future research directions.
| 2104.14362 | 737,909 |
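The collaborative training described above can be illustrated with the canonical federated averaging (FedAvg) aggregation step, sketched below under the simplifying assumption that all client models share one flat parameter vector; this is a generic textbook example, not a method proposed by the survey.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One federated-averaging round: combine locally trained parameter
    vectors into a global model, weighting each client by the number of
    samples it trained on."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))
```

For two clients holding 1 and 3 samples with local parameters [1, 1] and [3, 5], the aggregate is [2.5, 4.0].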
In collaborative robotic cells, a human operator and a robot share the
workspace in order to execute a common job, consisting of a set of tasks. A
proper allocation and scheduling of the tasks for the human and for the robot
is crucial for achieving an efficient human-robot collaboration. In order to
deal with the dynamic and unpredictable behavior of the human and for allowing
the human and the robot to negotiate about the tasks to be executed, a two
layers architecture for solving the task allocation and scheduling problem is
proposed. The first layer optimally solves the task allocation problem
considering nominal execution times. The second layer, which is reactive,
adapts online the sequence of tasks to be executed by the robot considering
deviations from the nominal behaviors and requests coming from the human and
from the robot. The proposed architecture is experimentally validated on a
collaborative assembly job.
| 2104.14363 | 737,909 |
How does your brain decide what you will do next? Over the past few decades
compelling evidence has emerged that the basal ganglia, a collection of nuclei
in the fore- and mid-brain of all vertebrates, are vital to action selection.
Gurney, Prescott, and Redgrave published an influential computational account
of this idea in Biological Cybernetics in 2001. Here we take a look back at
this pair of papers, outlining the "GPR" model contained therein, the context
of that model's development, and the influence it has had over the past twenty
years. Tracing its lineage into models and theories still emerging now, we are
encouraged that the GPR model is that rare thing, a computational model of a
brain circuit whose advances were directly built on by others.
| 2104.14364 | 737,909 |
This paper proposes a multivariable extremum seeking scheme using Fast
Fourier Transform (FFT) for a network of subsystems working towards optimizing
the sum of their local objectives, where the overall objective is the only
available measurement. Here, the different inputs are perturbed with different
dither frequencies, and the power spectrum of the overall output signal
obtained using FFT is used to estimate the steady-state cost gradient w.r.t.
each input. The inputs for the subsystems are then updated using integral
control in order to drive the respective gradients to zero. This paper provides
analytical rules for designing the FFT-based gradient estimation algorithm and
analyzes the stability properties of the resulting extremum seeking scheme for
the static map setting. The effectiveness of the proposed FFT-based
multivariable extremum seeking scheme is demonstrated using two examples,
namely, wind farm power optimization problem, and a heat exchanger network for
industrial waste-to-heat recovery.
| 2104.14365 | 737,909 |
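A hedged numerical sketch of the gradient-estimation idea described above: each input is dithered at a distinct frequency bin, and the slope with respect to that input is read off from the FFT of the scalar output. The bin choices, dither amplitude, and test map below are illustrative assumptions, not the paper's design rules.

```python
import numpy as np

def fft_gradient_estimate(f, u, dither_bins, amp=0.01, n=256):
    """Estimate the gradient of a static map f at input u by perturbing
    each input with a sinusoidal dither at a distinct frequency bin and
    reading off the corresponding FFT coefficient of the scalar output."""
    u = np.asarray(u, dtype=float)
    t = np.arange(n)
    dithers = np.array([amp * np.sin(2 * np.pi * k * t / n) for k in dither_bins])
    y = np.array([f(u + dithers[:, j]) for j in range(n)])
    Y = np.fft.rfft(y)
    # A unit-amplitude sine at bin k contributes -1j * n / 2 to the DFT at
    # bin k, so the slope w.r.t. input i is recovered from the imaginary part.
    return np.array([-2.0 * Y[k].imag / (n * amp) for k in dither_bins])
```

For a quadratic map the estimate is exact up to floating-point error, provided the dither bins and their sums, differences, and doubles do not collide.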
In this paper, we study the Erd\H{o}s-Falconer distance problem in five
dimensions for sets of Cartesian product structure. More precisely, we show
that for $A\subset \mathbb{F}_p$ with $|A|\gg p^{\frac{13}{22}}$, then
$\Delta(A^5)=\mathbb{F}_p$. When $|A-A|\sim |A|$, we obtain stronger statements
as follows:
If $|A|\gg p^{\frac{13}{22}}$, then $(A-A)^2+A^2+A^2+A^2+A^2=\mathbb{F}_p.$
If $|A|\gg p^{\frac{4}{7}}$, then
$(A-A)^2+(A-A)^2+A^2+A^2+A^2+A^2=\mathbb{F}_p.$
| 2104.14366 | 737,909 |
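The covering claims above can be checked by brute force for toy primes. The sketch below tests whether $(A-A)^2+A^2+A^2+A^2+A^2$ exhausts $\mathbb{F}_p$; it is a finite verification for small $p$, not a proof of the stated threshold $|A|\gg p^{13/22}$.

```python
def sumset(X, Y, p):
    """Sumset X + Y taken modulo p."""
    return {(x + y) % p for x in X for y in Y}

def covers_field(A, p):
    """Check whether (A-A)^2 + A^2 + A^2 + A^2 + A^2 covers all of F_p."""
    D = {((a - b) ** 2) % p for a in A for b in A}  # squared differences
    S = {(a * a) % p for a in A}                    # squares
    total = D
    for _ in range(4):  # add four copies of A^2
        total = sumset(total, S, p)
    return total == set(range(p))
```

For instance, $A=\{1,2,3\}$ already suffices in $\mathbb{F}_{11}$, while a single-element set does not.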
White dwarfs, the most abundant stellar remnants, provide a promising means
of probing dark matter interactions, complementary to terrestrial searches. The
scattering of dark matter from stellar constituents leads to gravitational
capture, with important observational consequences. In particular, white dwarf
heating occurs due to the energy transfer in the dark matter capture and
thermalisation processes, and the subsequent annihilation of captured dark
matter. We consider the capture of dark matter by scattering on either the ion
or the degenerate electron component of white dwarfs. For ions, we account for
the stellar structure, the star opacity, realistic nuclear form factors that go
beyond the simple Helm approach, and finite temperature effects pertinent to
sub-GeV dark matter. Electrons are treated as relativistic, degenerate targets,
with Pauli blocking, finite temperature and multiple scattering effects all
taken into account. We also estimate the dark matter evaporation rate. The dark
matter-nucleon/electron scattering cross sections can be constrained by
comparing the heating rate due to dark matter capture with observations of cold
white dwarfs in dark matter-rich environments. We apply this technique to
observations of old white dwarfs in the globular cluster Messier 4, which we
assume to be located in a DM subhalo. For dark matter-nucleon scattering, we
find that white dwarfs can probe the sub-GeV mass range inaccessible to direct
detection searches, with the low mass reach limited only by evaporation, and
can be competitive with direct detection in the $1-10^4$ GeV range. White dwarf
limits on dark matter-electron scattering are found to outperform current
electron recoil experiments over the full mass range considered, and extend
well beyond the $\sim 10$ GeV mass regime where the sensitivity of electron
recoil experiments is reduced.
| 2104.14367 | 737,909 |
Centrality measures identify the most important nodes in a complex network.
In recent years, multilayer networks have emerged as a flexible tool to create
increasingly realistic models of complex systems. In this paper, we generalize
matrix function-based centrality and communicability measures to the case of
layer-coupled multiplex networks. We use the supra-adjacency matrix as the
network representation, which has already been used to generalize eigenvector
centrality to temporal and multiplex networks. With this representation, the
definition of single-layer matrix function-based centrality measures in terms
of walks on the networks carries over naturally to the multilayer case. Several
aggregation techniques allow the ranking of nodes, layers, as well as
node-layer pairs in terms of their importance in the network. We present
efficient and scalable numerical methods based on Krylov subspace techniques
and Gauss quadrature rules, which provide a high accuracy in only a few
iterations and which scale linearly in the network size under the assumption of
sparsity in the supra-adjacency matrix. Finally, we present extensive numerical
studies for both directed and undirected as well as weighted and unweighted
multiplex networks. While we focus on social and transportation applications,
the networks' sizes range between $89$ and $2.28 \cdot 10^6$ nodes and between
$3$ and $37$ layers.
| 2104.14368 | 737,909 |
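A minimal sketch of the supra-adjacency construction and one matrix-function centrality, the total communicability $\exp(S)\mathbf{1}$, for a small node-aligned multiplex. The uniform inter-layer coupling $\omega$ and the dense eigendecomposition route are simplifying assumptions for a symmetric toy example, standing in for the Krylov and Gauss-quadrature methods the paper develops for large sparse networks.

```python
import numpy as np

def supra_adjacency(layers, omega):
    """Supra-adjacency matrix of a node-aligned multiplex: layer adjacency
    blocks on the diagonal, identity inter-layer coupling scaled by omega."""
    L, n = len(layers), layers[0].shape[0]
    S = np.zeros((L * n, L * n))
    for k, A in enumerate(layers):
        S[k * n:(k + 1) * n, k * n:(k + 1) * n] = A
    for k in range(L):
        for l in range(L):
            if k != l:
                S[k * n:(k + 1) * n, l * n:(l + 1) * n] = omega * np.eye(n)
    return S

def total_communicability(S):
    """Total communicability exp(S) @ 1, via eigendecomposition (S symmetric)."""
    w, V = np.linalg.eigh(S)
    return V @ (np.exp(w) * (V.T @ np.ones(S.shape[0])))
```

On a two-layer toy multiplex (a 3-node path in layer 1, a single edge in layer 2), the path's central node receives the largest layer-1 communicability score, as expected.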
Tuning the structural and electronic properties of two-dimensional
nanomaterials has opened a new research paradigm in electronic device applications. In
this work, the first principles density functional theory based methods are
used to investigate the structural, electronic, and transport properties of an
orthorhombic diboron dinitride based polymorph. Interestingly, it exhibits a
low-band-gap semiconducting nature with robustly anisotropic behaviour, in
contrast to hexagonal boron nitride, which is an isotropic insulator. We can also
tune the structural and electronic properties of the semiconducting B2N2 based
structure through an external in-plane mechanical strain. Further, by employing
the Landauer-Buttiker approach, the electronic transmission function and
electric current calculations reveal that the diboron dinitride based polymorph
shows a robust direction dependent anisotropy of the quantum transport
properties. We have demonstrated the direction dependence of the electric
current in two perpendicular directions, where we have observed an electric
current ratio of around 61.75 at 0.8 V. All these findings, such as the
direction-dependent anisotropy in the transmission function, current-voltage
characteristics, and band gap tuning, suggest that such a
B2N2 based monolayer can be promising for futuristic electronic device
applications.
| 2104.14369 | 737,909 |
In this paper, we propose a novel subspace learning framework for one-class
classification. The proposed framework presents the problem in the form of
graph embedding. It includes the previously proposed subspace one-class
techniques as its special cases and provides further insight on what these
techniques actually optimize. The framework allows incorporating other
meaningful optimization goals via the graph preserving criterion and reveals
spectral and spectral regression-based solutions as alternatives to the
previously used gradient-based technique. We combine the subspace learning
framework iteratively with Support Vector Data Description applied in the
subspace to formulate Graph-Embedded Subspace Support Vector Data Description.
We experimentally analyze the performance of the newly proposed
variants. We demonstrate improved performance against the baselines and the
recently proposed subspace learning methods for one-class classification.
| 2104.14370 | 737,909 |
In this paper, we introduce structured sparsity estimators in Generalized
Linear Models. Structured sparsity estimators in the least squares loss are
introduced by Stucky and van de Geer (2018) recently for fixed design and
normal errors. We extend their results to debiased structured sparsity
estimators with Generalized Linear Model based loss. Structured sparsity
estimation means penalized loss functions with a possible sparsity structure
used in the chosen norm. These include weighted group lasso, lasso and norms
generated from convex cones. The significant difficulty is that it is not clear
how to prove two oracle inequalities. The first one is for the initial
penalized Generalized Linear Model estimator. Since it is not clear how a
particular feasible-weighted nodewise regression may fit in an oracle
inequality for penalized Generalized Linear Model, we need a second oracle
inequality to get oracle bounds for the approximate inverse for the sample
estimate of second-order partial derivative of Generalized Linear Model.
Our contributions are fivefold: 1. We generalize the existing oracle
inequality results in penalized Generalized Linear Models by proving the
underlying conditions rather than assuming them. One of the key issues is the
proof of a sample one-point margin condition and its use in an oracle
inequality. 2. Our results cover even non-sub-Gaussian errors and regressors.
3. We provide a feasible weighted nodewise regression proof which generalizes
the results in the literature from a simple l_1 norm usage to norms generated
from convex cones. 4. We realize that norms used in feasible nodewise
regression proofs should be weaker or equal to the norms in penalized
Generalized Linear Model loss. 5. We can debias the first-step estimator by
obtaining an approximate inverse of the singular-sample second order partial
derivative of Generalized Linear Model loss.
| 2104.14371 | 737,909 |
The underspecification of most machine learning pipelines means that we
cannot rely solely on validation performance to assess the robustness of deep
learning systems to naturally occurring distribution shifts. Instead, making
sure that a neural network can generalize across a large number of different
situations requires understanding the specific way in which it solves a task.
In this work, we propose to study this problem from a geometric perspective
with the aim to understand two key characteristics of neural network solutions
in underspecified settings: how is the geometry of the learned function related
to the data representation? And, are deep networks always biased towards
simpler solutions, as conjectured in recent literature? We show that the way
neural networks handle the underspecification of these problems is highly
dependent on the data representation, affecting both the geometry and the
complexity of the learned predictors. Our results highlight that understanding
the architectural inductive bias in deep learning is fundamental to address the
fairness, robustness, and generalization of these systems.
| 2104.14372 | 737,909 |
Heat engines are fundamental physical objects in the development of
nonequilibrium thermodynamics. The thermodynamic performance of a heat engine
is determined by the choice of cycle and the time dependence of its parameters.
Here, we propose a systematic numerical method to find a heat-engine cycle that
optimizes a given target function. We apply the method to heat engines with
slowly varying parameters
and show that the method works well. Our numerical method is based on the
genetic algorithm which is widely applied to various optimization problems.
| 2104.14373 | 737,909 |
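A minimal sketch of a genetic algorithm of the kind the abstract describes, applied to a toy objective; the fitness function, population size, and mutation scale below are illustrative stand-ins, not a thermodynamic target or the authors' settings:

```python
import random

def genetic_search(fitness, n_params, pop_size=30, n_gen=50,
                   mutation_scale=0.1, seed=0):
    """Minimal genetic algorithm: evolve real-valued parameter
    protocols in [0, 1]^n_params to maximize a fitness function."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]            # selection: keep top half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_params)       # one-point crossover
            child = a[:cut] + b[cut:]
            # Gaussian mutation, clipped back into the parameter box.
            child = [min(1.0, max(0.0, x + rng.gauss(0, mutation_scale)))
                     for x in child]
            children.append(child)
        pop = elite + children                     # elites survive unmutated
    return max(pop, key=fitness)

# Toy stand-in objective: protocols should approach 0.7 in every coordinate.
best = genetic_search(lambda p: -sum((x - 0.7) ** 2 for x in p), n_params=5)
```

Because the elite individuals are carried over unchanged, the best candidate's fitness is monotonically non-decreasing across generations.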
Benefiting from their insensitivity to illumination and strong penetration of
fog, infrared cameras are widely used for sensing in nighttime traffic
scenes. However, the low contrast and lack of chromaticity of thermal infrared
(TIR) images hinder the human interpretation and portability of high-level
computer vision algorithms. Colorization to translate a nighttime TIR image
into a daytime color (NTIR2DC) image may be a promising way to facilitate
nighttime scene perception. Despite recent impressive advances in image
translation, semantic encoding entanglement and geometric distortion in the
NTIR2DC task remain under-addressed. Hence, we propose a toP-down attEntion And
gRadient aLignment based GAN, referred to as PearlGAN. A top-down guided
attention module and an elaborate attentional loss are first designed to reduce
the semantic encoding ambiguity during translation. Then, a structured gradient
alignment loss is introduced to encourage edge consistency between the
translated and input images. In addition, pixel-level annotation is carried out
on a subset of FLIR and KAIST datasets to evaluate the semantic preservation
performance of multiple translation methods. Furthermore, a new metric is
devised to evaluate the geometric consistency in the translation process.
Extensive experiments demonstrate the superiority of the proposed PearlGAN over
other image translation methods for the NTIR2DC task. The source code and
labeled segmentation masks will be available at
\url{https://github.com/FuyaLuo/PearlGAN/}.
| 2104.14374 | 737,909 |
One of the most common problems of weakly supervised object localization is
that of inaccurate object coverage. In the context of state-of-the-art methods
based on Class Activation Mapping, this is caused either by localization maps
which focus, exclusively, on the most discriminative region of the objects of
interest or by activations occurring in background regions. To address these
two problems, we propose two representation regularization mechanisms: Full
Region Regularization, which tries to maximize the coverage of the localization
map inside the object region, and Common Region Regularization, which minimizes
the activations occurring in background regions. We evaluate the two
regularizations on the ImageNet, CUB-200-2011 and OpenImages-segmentation
datasets, and show that the proposed regularizations tackle both problems,
outperforming the state-of-the-art by a significant margin.
| 2104.14375 | 737,909 |
Thermal dissociation and recombination of molecular hydrogen, H_2, in the
atmospheres of ultra-hot Jupiters (UHJs) have been shown to play an important
role in global heat redistribution. This, in turn, significantly impacts their
planetary emission, yet only limited investigations on the atmospheric effects
have so far been conducted. Here we investigate the heat redistribution caused
by this dissociation/recombination reaction, alongside feedback mechanisms
between the atmospheric chemistry and radiative transfer, for a planetary and
stellar configuration typical of UHJs. To do this, we have developed a
time-dependent pseudo-2D model, including a treatment of time-independent
equilibrium chemical effects. As a result of the reaction heat redistribution,
we find temperature changes of up to $\sim$400 K in the atmosphere. When TiO
and VO are additionally considered as opacity sources, these changes in
temperature increase to over $\sim$800 K in some areas. This heat
redistribution is found to significantly shift the region of peak atmospheric
temperature, or hotspot, towards the evening terminator in both cases. The
impact of varying the longitudinal wind speed on the reaction heat distribution
is also investigated. When excluding TiO/VO, increased wind speeds are shown to
increase the impact of the reaction heat redistribution up to a threshold wind
speed. When including TiO/VO there is no apparent wind speed threshold, due to
thermal stabilisation by these species. We also construct pseudo-2D phase
curves from our model, and highlight both significant spectral flux damping and
increased phase offset caused by the reaction heat redistribution.
| 2104.14376 | 737,909 |
We generalize solid-state tight-binding techniques for the spectral analysis
of large superconducting circuits. We find that tight-binding states can be
better suited for approximating the low-energy excitations than charge-basis
states, as illustrated for the interesting example of the current-mirror
circuit. The use of tight binding can dramatically lower the Hilbert space
dimension required for convergence to the true spectrum, and allows for the
accurate simulation of larger circuits that are out of reach of charge basis
diagonalization.
| 2104.14377 | 737,909 |
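To illustrate the generic tight-binding construction the abstract builds on, here is a textbook 1-D chain with uniform on-site energy and nearest-neighbour hopping (not the circuit-specific basis used in the paper), whose spectrum is known analytically:

```python
import numpy as np

def tight_binding_chain(n_sites, onsite=0.0, hopping=1.0, periodic=True):
    """Hamiltonian of a 1-D tight-binding chain in the localized basis:
    on-site energies on the diagonal, -hopping on nearest neighbours."""
    H = np.diag(np.full(n_sites, float(onsite)))
    for i in range(n_sites - 1):
        H[i, i + 1] = H[i + 1, i] = -hopping
    if periodic:
        H[0, -1] = H[-1, 0] = -hopping   # close the ring
    return H

H = tight_binding_chain(8)
evals = np.linalg.eigvalsh(H)            # ascending eigenvalues

# Analytic band for the periodic chain: E_k = -2 t cos(2 pi k / N).
k = np.arange(8)
analytic = np.sort(-2.0 * np.cos(2 * np.pi * k / 8))
```

The numerical spectrum reproduces the cosine band exactly; for circuit Hamiltonians the same localized-basis idea keeps the matrix small compared to a charge-basis truncation.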
In this study, the stability dependence of turbulent Prandtl number ($Pr_t$)
is quantified via a simple analytical approach. Based on the conventional
budget equations, a hybrid length scale formulation is first proposed and its
functional relationships to well-known length scales are established. Next, the
ratios of these length scales are utilized to derive an explicit relationship
between $Pr_t$ and gradient Richardson number. The results predicted by the
proposed formulation are compared against other competing formulations as well
as published datasets.
| 2104.14378 | 737,909 |
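For reference, the standard eddy-diffusivity definitions behind the abstract's quantities (the turbulent Prandtl number and the gradient Richardson number):

```latex
Pr_t \;=\; \frac{K_m}{K_h}
\;=\;
\frac{-\,\overline{u'w'}\big/(\partial U/\partial z)}
     {-\,\overline{w'\theta'}\big/(\partial \Theta/\partial z)},
\qquad
Ri_g \;=\; \frac{(g/\Theta_0)\,\partial \Theta/\partial z}
                {(\partial U/\partial z)^{2}},
```

where $K_m$ and $K_h$ are the eddy diffusivities of momentum and heat, and the overbars denote turbulent flux averages.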
We propose a new approach to train a variational information bottleneck (VIB)
that improves its robustness to adversarial perturbations. Unlike the
traditional methods where the hard labels are usually used for the
classification task, we refine the categorical class information in the
training phase with soft labels which are obtained from a pre-trained reference
neural network and can reflect the likelihood of the original class labels. We
also relax the Gaussian posterior assumption in the VIB implementation by using
the mutual information neural estimation. Extensive experiments have been
performed with the MNIST and CIFAR-10 datasets, and the results show that our
proposed approach significantly outperforms the benchmarked models.
| 2104.14379 | 737,909 |
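A minimal sketch of the soft-label refinement idea: classification targets come from a pre-trained reference network's tempered softmax rather than hard labels, as in knowledge distillation. The logits and temperature below are illustrative values, not from the paper:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax with a max-shift for stability."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_label_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the trained model's prediction and soft
    labels produced by a pre-trained reference ('teacher') network."""
    soft_targets = softmax(teacher_logits, T)      # refined class info
    log_probs = np.log(softmax(student_logits, T))
    return -(soft_targets * log_probs).sum(axis=-1).mean()

# Toy logits for a 3-class problem (hypothetical values).
teacher = np.array([[4.0, 1.0, 0.5], [0.2, 3.5, 0.1]])
student = np.array([[3.0, 1.5, 0.2], [0.5, 2.5, 0.3]])
loss = soft_label_loss(student, teacher)
```

Cross-entropy is minimized when the two distributions coincide, so the loss against the teacher's own logits lower-bounds the loss of any mismatched prediction.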
We propose and implement a Privacy-preserving Federated Learning (PPFL)
framework for mobile systems to limit privacy leakages in federated learning.
Leveraging the widespread presence of Trusted Execution Environments (TEEs) in
high-end and mobile devices, we utilize TEEs on clients for local training, and
on servers for secure aggregation, so that model/gradient updates are hidden
from adversaries. Challenged by the limited memory size of current TEEs, we
leverage greedy layer-wise training to train each model's layer inside the
trusted area until its convergence. The performance evaluation of our
implementation shows that PPFL can significantly improve privacy while
incurring small system overheads at the client-side. In particular, PPFL can
successfully defend the trained model against data reconstruction, property
inference, and membership inference attacks. Furthermore, it can achieve
comparable model utility with fewer communication rounds (0.54x) and a similar
amount of network traffic (1.002x) compared to the standard federated learning
of a complete model. This is achieved while only introducing up to ~15% CPU
time, ~18% memory usage, and ~21% energy consumption overhead on PPFL's
client side.
| 2104.14380 | 737,909 |
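The greedy layer-wise strategy can be sketched structurally. Here `train_step` and `converged` are hypothetical callables standing in for the TEE-resident training loop; the point is only the control flow (one layer trained at a time, earlier layers frozen):

```python
def greedy_layerwise_train(layers, train_step, converged):
    """Greedy layer-wise training sketch: only the current layer is
    held and updated inside the (memory-limited) trusted area; layers
    already trained are frozen and used only for forward passes."""
    frozen = []
    for layer in layers:
        while not converged(layer):
            train_step(layer, frozen)   # update the current layer only
        frozen.append(layer)            # freeze before moving on
    return frozen
```

This keeps the peak working set to a single layer, which is what makes training feasible under a small TEE memory budget.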
We study the probability that an $(n - m)$-dimensional linear subspace in
$\mathbb{P}^n$ or a collection of points spanning such a linear subspace is
contained in an $m$-dimensional variety $Y \subset \mathbb{P}^n$. This involves
a strategy used by Galkin--Shinder to connect properties of a cubic
hypersurface to its Fano variety of lines via cut and paste relations in the
Grothendieck ring of varieties. Generalizing this idea to varieties of higher
codimension and degree, we can measure growth rates of weighted probabilities
of $k$-planes contained in a sequence of varieties with varying initial
parameters over a finite field. In the course of doing this, we move an
identity motivated by rationality problems involving cubic hypersurfaces to a
motivic statistics setting associated with cohomological stability.
| 2104.14381 | 737,909 |
We investigate rotating quark matter in the three-flavor Nambu--Jona-Lasinio
(NJL) model. The chiral condensate, spin polarization, and number
susceptibility of the strange quark are carefully studied at finite
temperature, with and without a finite chemical potential. We find that
rotation suppresses the chiral condensate and enhances the first-order quark
spin polarization. The effects on the second-order quark spin polarization and
the quark number susceptibility are more intricate: at zero chemical potential,
both exhibit a jump when the first-order phase transition takes place.
Extending to finite chemical potential, we find that the angular velocity also
plays a crucial role. At small or sufficiently large angular velocity, the
chemical potential enhances the susceptibility; in the intermediate region of
angular velocity, however, the effect of the chemical potential is suppressed
and the susceptibility can change considerably, so that the quark number
susceptibility develops two maxima. Furthermore, at sufficiently large angular
velocity the contributions of the light quarks and the strange quark to these
phenomena become almost equal. We expect these studies to help in
understanding chiral symmetry breaking and restoration, as well as in probing
the QCD phase transition.
| 2104.14382 | 737,909 |
Real-world data is usually segmented by attributes and distributed across
different parties. Federated learning empowers collaborative training without
exposing local data or models. As we demonstrate through designed attacks, even
with a small proportion of corrupted data, an adversary can accurately infer
the input attributes. We introduce an adversarial learning based procedure
which tunes a local model to release privacy-preserving intermediate
representations. To alleviate the accuracy decline, we propose a defense method
based on the forward-backward splitting algorithm, which respectively deals
with the accuracy loss and privacy loss in the forward and backward gradient
descent steps, achieving the two objectives simultaneously. Extensive
experiments on a variety of datasets have shown that our defense significantly
mitigates privacy leakage with negligible impact on the federated learning
task.
| 2104.14383 | 737,909 |
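A generic forward-backward splitting iteration (gradient step on the smooth term, proximal step on the nonsmooth term), shown here on a toy lasso-type objective rather than the paper's accuracy/privacy losses:

```python
import numpy as np

def forward_backward(grad_f, prox_g, x0, step=0.1, n_iter=200):
    """Forward-backward splitting for min f(x) + g(x): a gradient
    (forward) step on smooth f, then a proximal (backward) step on g."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Toy instance: minimize 0.5*||x - b||^2 + lam*||x||_1.
# The prox of the l1 term is coordinate-wise soft-thresholding.
b = np.array([3.0, -0.2, 0.05])
lam = 0.5
grad_f = lambda x: x - b
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)
x_star = forward_backward(grad_f, prox_g, np.zeros(3))
```

For this separable objective the minimizer is the soft-threshold of `b` at level `lam`, i.e. `[2.5, 0, 0]`, and the iteration converges to it since the step size is below 2 over the Lipschitz constant of `grad_f`.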
Motivated by the quantum speedup for dynamic programming on the Boolean
hypercube by Ambainis et al. (2019), we investigate which graphs admit a
similar quantum advantage. In this paper, we examine a generalization of the
Boolean hypercube graph, the $n$-dimensional lattice graph $Q(D,n)$ with
vertices in $\{0,1,\ldots,D\}^n$. We study the complexity of the following
problem: given a subgraph $G$ of $Q(D,n)$ via query access to the edges,
determine whether there is a path from $0^n$ to $D^n$. While the classical
query complexity is $\widetilde{\Theta}((D+1)^n)$, we show a quantum algorithm
with complexity $\widetilde O(T_D^n)$, where $T_D < D+1$. The first few values
of $T_D$ are $T_1 \approx 1.817$, $T_2 \approx 2.660$, $T_3 \approx 3.529$,
$T_4 \approx 4.421$, $T_5 \approx 5.332$ (the $D=1$ case corresponds to the
hypercube and replicates the result of Ambainis et al.).
We then show an implementation of this algorithm with time complexity
$\text{poly}(n)^{\log n} T_D^n$, and apply it to the Set Multicover problem. In
this problem, $m$ subsets of $[n]$ are given, and the task is to find the
smallest number of these subsets that cover each element of $[n]$ at least $D$
times. While the time complexity of the best known classical algorithm is
$O(m(D+1)^n)$, the time complexity of our quantum algorithm is
$\text{poly}(m,n)^{\log n} T_D^n$.
| 2104.14384 | 737,909 |
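The classical $O(m(D+1)^n)$ baseline for Set Multicover can be sketched as a dynamic program over capped coverage states in $\{0,\ldots,D\}^n$ (a straightforward classical implementation, not the quantum algorithm):

```python
from itertools import product

def set_multicover(n, D, subsets):
    """Fewest subsets covering every element of [n] at least D times.
    State = tuple of per-element coverage counts, capped at D; there are
    (D+1)^n states and each tries all m subsets: O(m * (D+1)^n)."""
    INF = float("inf")
    best = {state: INF for state in product(range(D + 1), repeat=n)}
    best[tuple([0] * n)] = 0
    # Coverage never decreases, so processing states by total coverage
    # guarantees each state is final before it relaxes its successors.
    for state in sorted(best, key=sum):
        if best[state] == INF:
            continue
        for s in subsets:
            nxt = tuple(min(D, c + (i in s)) for i, c in enumerate(state))
            if best[state] + 1 < best[nxt]:
                best[nxt] = best[state] + 1
    return best[tuple([D] * n)]
```

Capping counts at D is what keeps the state space finite: covering an element beyond D times carries no extra information.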
Few-shot classification aims to recognize unseen classes with few labeled
samples from each class. Many meta-learning models for few-shot classification
elaborately design various task-shared inductive bias (meta-knowledge) to solve
such tasks, and achieve impressive performance. However, when there exists a
domain shift between the training tasks and the test tasks, the obtained
inductive bias fails to generalize across domains, which degrades the
performance of the meta-learning models. In this work, we aim to improve the
robustness of the inductive bias through task augmentation. Concretely, we
consider the worst-case problem around the source task distribution, and
propose the adversarial task augmentation method which can generate the
inductive bias-adaptive 'challenging' tasks. Our method can be used as a simple
plug-and-play module for various meta-learning models, and improve their
cross-domain generalization capability. We conduct extensive experiments under
the cross-domain setting, using nine few-shot classification datasets:
mini-ImageNet, CUB, Cars, Places, Plantae, CropDiseases, EuroSAT, ISIC and
ChestX. Experimental results show that our method can effectively improve the
few-shot classification performance of the meta-learning models under domain
shift, and outperforms the existing works.
| 2104.14385 | 737,909 |
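A much-simplified stand-in for worst-case training: instead of generating adversarial tasks around the source distribution, this toy loop repeatedly trains on whichever task in a fixed pool the current model finds hardest. The scalar "model" and update rule are purely illustrative:

```python
def worst_case_training(tasks, loss, update, n_rounds):
    """Each round, pick the task with the highest current loss and
    train on it -- a crude proxy for adversarial task augmentation."""
    picks = []
    for _ in range(n_rounds):
        hardest = max(tasks, key=loss)   # worst case over the pool
        picks.append(hardest)
        update(hardest)
    return picks

# Toy 1-D "model": scalar w pulled halfway toward the hardest target.
model = {"w": 0.0}
tasks = [1.0, -3.0, 2.0]
picks = worst_case_training(
    tasks,
    loss=lambda t: abs(model["w"] - t),
    update=lambda t: model.__setitem__("w", model["w"] + 0.5 * (t - model["w"])),
    n_rounds=3,
)
```

Note how the selected task changes between rounds as the model moves: training on the current worst case changes which task is worst next.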
Visual domain randomization in simulated environments is a widely used method
to transfer policies trained in simulation to real robots. However, domain
randomization and augmentation hamper the training of a policy. As
reinforcement learning struggles with a noisy training signal, this additional
nuisance can drastically impede training. For difficult tasks it can even
result in complete failure to learn. To overcome this problem we propose to
pre-train a perception encoder that already provides an embedding invariant to
the randomization. We demonstrate that this yields consistently improved
results on a randomized version of DeepMind control suite tasks and a stacking
environment on arbitrary backgrounds with zero-shot transfer to a physical
robot.
| 2104.14386 | 737,909 |
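The invariant-encoder idea can be illustrated on a toy two-channel problem: two "views" share a signal channel but carry independently randomized nuisance, and minimizing the embedding difference between views drives the encoder's weight on the randomized channel toward zero. Everything here (linear encoder, anti-collapse pinning) is a hypothetical illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def invariance_step(w, signal, lr=0.05):
    """One pre-training step: two views sharing the signal but with
    independently randomized nuisance should embed identically."""
    v1 = np.array([signal, rng.normal()])   # view 1
    v2 = np.array([signal, rng.normal()])   # view 2, different nuisance
    diff = w @ v1 - w @ v2                  # invariance residual
    w = w - lr * 2 * diff * (v1 - v2)       # gradient step on diff**2
    w[0] = 1.0                              # pin signal gain (crude anti-collapse)
    return w

w = np.array([1.0, 1.0])
for _ in range(500):
    w = invariance_step(w, signal=rng.normal())
# w[1], the weight on the randomized channel, is driven toward zero,
# so the learned embedding is (approximately) randomization-invariant.
```

A policy trained on top of such an embedding no longer sees the nuisance variation, which is the motivation for pre-training the encoder before reinforcement learning.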
The $\phi^4$ double-well theory admits a kink solution, whose rich
phenomenology is strongly affected by the existence of a single bound
excitation called the shape mode. We find that the leading quantum correction
to the energy needed to excite the shape mode is $-0.115567\lambda/m$ in terms
of the coupling $\lambda/4$ and the meson mass $m$ evaluated at the minimum of
the potential. On the other hand, the correction to the continuum threshold is
$-0.433\lambda/m$. A naive extrapolation to finite coupling then suggests that
the shape mode melts into the continuum at the modest coupling of
$\lambda/4\sim 0.106 m^2$, where the $\mathbb{Z}_2$ symmetry is still broken.
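The quoted melting coupling follows from the stated corrections by linear extrapolation, assuming the standard tree-level shape-mode frequency $\omega_1 = \sqrt{3}\,m/2$: the corrected excitation energy meets the corrected continuum threshold when

```latex
\frac{\sqrt{3}}{2}\,m \;-\; 0.115567\,\frac{\lambda}{m}
\;=\;
m \;-\; 0.433\,\frac{\lambda}{m}
\quad\Longrightarrow\quad
\lambda \;\simeq\; \frac{\left(1-\sqrt{3}/2\right)m^{2}}{0.433-0.115567}
\;\simeq\; 0.42\,m^{2},
```

i.e. $\lambda/4 \simeq 0.106\,m^{2}$, consistent with the value quoted above.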
| 2104.14387 | 737,909 |