In recent years, interest in the laser-driven acceleration of heavy
ions in the mass range of A ~ 200 has been growing due to promising
applications such as the fission-fusion nuclear reaction mechanism, which aims at
the production of neutron-rich isotopes relevant for the astrophysical
r-process nucleosynthesis. In this paper, we report on the laser acceleration
of gold ions to beyond 7 MeV/u, exceeding for the first time an important
prerequisite for this nuclear reaction scheme. Moreover, the gold ion charge
states have been detected with an unprecedented resolution, which enables the
separation of individual charge states up to 4 MeV/u. The recorded charge-state
distributions show a remarkable dependence on the target foil thickness and
differ from simulations; they lack a straightforward explanation within the
established ionization models.
arXiv:2104.14520
We report complex magnetic, magnetoresistance (MR) and magnetocaloric
properties of Gd4RhAl and Tb4RhAl forming in the Gd4RhIn type cubic structure.
Though the synthesis of the compounds was reported long ago, to our knowledge,
no attempt was made to investigate the properties of these compounds. The
present results of ac and dc magnetization, electrical resistivity and
heat-capacity measurements down to 1.8 K establish that these compounds undergo
antiferromagnetic order initially, followed by complex spin-glass features with
decreasing temperature. The characteristic temperatures are: for the Gd compound,
TN is about 46 K and TG is about 21 K; for the Tb compound, about 32 and 28 K,
respectively. Additionally, there are field-induced magnetic effects, interestingly leading
to non-monotonic variations in MR. There is a significant MR over a wide
temperature range above TN, similar to the behavior of magnetocaloric effect
(MCE) as measured by isothermal entropy change (DeltaS). An intriguing finding
is that DeltaS at the onset of magnetic order is significantly larger
for the Tb compound than for the Gd analogue near its TN. On the
basis of this observation in a cubic material, we raise the question of whether
the aspherical nature of the 4f orbital can play a role in enhancing the MCE under
favorable circumstances, a clue that could be useful in the search for
magnetocaloric materials.
arXiv:2104.14521
We address the problem of decoding video file fragments when the necessary
encoding parameters are missing. With this objective, we propose a method that
automatically generates H.264 video headers containing these parameters and
extracts coded pictures in the partially available compressed video data. To
accomplish this, we examined a very large corpus of videos to learn patterns of
encoding settings commonly used by encoders and created a parameter dictionary.
Further, to facilitate a more efficient search, our method identifies
characteristics of a coded bitstream to discriminate the entropy coding mode.
It also utilizes the application logs created by the decoder to identify
correct parameter values. Evaluation of the effectiveness of the proposed
method on more than 55K videos with diverse provenance shows that it can
generate valid headers on average in 11.3 decoding trials per video. This
result represents an improvement by more than a factor of 10 over the
conventional approach of video header stitching to recover video file
fragments.
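As a rough illustration of this search procedure, the sketch below shows dictionary-driven header trials. It is our own simplification, not the authors' code, and `build_sps_pps`, `guess_entropy_mode`, and `try_decode` are hypothetical helpers standing in for an H.264 header serializer, an entropy-coding-mode classifier, and a decoder wrapper that exposes its logs.

```python
# Hypothetical sketch of the dictionary-based header search described above.
from itertools import islice

def recover_pictures(fragment: bytes, param_dicts: list, max_trials: int = 100):
    """Try dictionary entries (most common first) until a header decodes the fragment."""
    # Narrow the dictionary by the inferred entropy-coding mode (CAVLC vs. CABAC),
    # as the abstract describes, to shorten the search.
    mode = guess_entropy_mode(fragment)            # hypothetical classifier
    candidates = (p for p in param_dicts if p["entropy_mode"] == mode)
    for trial, params in enumerate(islice(candidates, max_trials), start=1):
        header = build_sps_pps(params)             # hypothetical SPS/PPS serializer
        ok, log = try_decode(header + fragment)    # hypothetical decoder call
        if ok:                                     # the decoder log can also be
            return params, trial                   # mined for corrected values
    return None, max_trials
```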
arXiv:2104.14522
Finding an effective formula for the discriminant of a quadrinomial (a
formula which can be computed easily even for quadrinomials of high degree) is a
difficult problem. In 2018, Otake and Shaska, using advanced matrix
operations, found an explicit expression for $\Delta(x^n+t(x^2+ax+b))$. In
this paper we focus on deriving similar results, taking advantage of an
alternative elementary approach, for quadrinomials of the form $x^n+ax^k+bx+c$,
where $ k \in \{2,3,n-1\}$. Moreover, we make some notes about
$\Delta(x^{2n}+ax^n+bx^l+c)$ with $n>2l$.
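For small degrees, closed-form expressions of this kind can be cross-checked against a computer algebra system; a brief SymPy sketch (ours, purely illustrative):

```python
# Sanity-check a quadrinomial discriminant formula for concrete small n, k.
from sympy import symbols, discriminant, factor

x, a, b, c = symbols('x a b c')

def quadrinomial_disc(n: int, k: int):
    """Discriminant of x^n + a*x^k + b*x + c."""
    return factor(discriminant(x**n + a*x**k + b*x + c, x))

# e.g., the k = 2 family from the abstract at degree n = 5:
print(quadrinomial_disc(5, 2))
```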
arXiv:2104.14523
In 1957 Feynman suggested that the quantum/classical character of gravity can
be assessed by the presence/absence of entanglement between gravitationally
interacting test masses. However, in all proposed experimental realisations
using matter-wave interferometry the extreme weakness of this interaction
requires pure initial states with extreme squeezing to achieve measurable
entanglement for reasonable interaction times. In practice, the systems that
can be prepared in such nonclassical states are limited to small masses, which
in turn limits the rate at which they get entangled. Here we address this key
challenge - the weakness of gravitational interaction - by using a massive body
as an amplifying mediator of gravitational interaction between two
test-systems. Our analysis shows that this results in an effective interaction
between the two test-systems that grows with the mass of the mediator and is
independent of its initial state and, therefore, its temperature. This greatly
reduces the requirement on the mass and degree of delocalization of the test
systems and, while still highly challenging, brings experiments on
gravitational source masses a step closer to reality.
arXiv:2104.14524
We propose a change-point detection method for large scale multiple testing
problems with data having clustered signals. Unlike the classic change-point
setup, the signals can vary in size within a cluster. The clustering structure
on the signals enables us to effectively delineate the boundaries between
signal and non-signal segments. New test statistics are proposed for
observations from one and/or multiple realizations. Their asymptotic
distributions are derived. We also study the associated variance estimation
problem. We allow the variances to be heteroscedastic in the multiple
realization case, which substantially expands the applicability of the proposed
method. Simulation studies demonstrate that the proposed approach has a
favorable performance. Our procedure is applied to an array-based Comparative
Genomic Hybridization (aCGH) dataset.
arXiv:2104.14525
Tensors, which provide a powerful and flexible model for representing
multi-attribute data and multi-way interactions, play an indispensable role in
modern data science across various fields in science and engineering. A
fundamental task is to faithfully recover the tensor from highly incomplete
measurements in a statistically and computationally efficient manner.
Harnessing the low-rank structure of tensors in the Tucker decomposition, this
paper develops a scaled gradient descent (ScaledGD) algorithm to directly
recover the tensor factors with tailored spectral initializations, and shows
that it provably converges at a linear rate independent of the condition number
of the ground truth tensor for two canonical problems -- tensor completion and
tensor regression -- as soon as the sample size is above the order of $n^{3/2}$
ignoring other dependencies, where $n$ is the dimension of the tensor. This
leads to an extremely scalable approach to low-rank tensor estimation compared
with prior art, which suffers from at least one of the following drawbacks:
extreme sensitivity to ill-conditioning, high per-iteration costs in terms of
memory and computation, or poor sample complexity guarantees. To the best of
our knowledge, ScaledGD is the first algorithm that achieves near-optimal
statistical and computational complexities simultaneously for low-rank tensor
completion with the Tucker decomposition. Our algorithm highlights the power of
appropriate preconditioning in accelerating nonconvex statistical estimation,
where the iteration-varying preconditioners promote desirable invariance
properties of the trajectory with respect to the underlying symmetry in
low-rank tensor factorization.
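To convey the preconditioning idea in the simplest setting, here is a minimal sketch (ours) of scaled gradient descent for the low-rank matrix problem; the paper's contribution is the tensor/Tucker analogue with tailored spectral initializations, which this toy version does not capture.

```python
# ScaledGD sketch for min_{L,R} 0.5*||L R^T - Y||_F^2 (matrix analogue only).
import numpy as np

def scaled_gd(Y, r, eta=0.5, iters=200):
    # Spectral initialization: top-r SVD of the observation.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    L = U[:, :r] * np.sqrt(s[:r])
    R = Vt[:r].T * np.sqrt(s[:r])
    for _ in range(iters):
        G = L @ R.T - Y                              # residual
        # Plain GD would use G @ R; ScaledGD right-multiplies by (R^T R)^{-1},
        # which removes the dependence on the condition number of the factors.
        L_new = L - eta * G @ R @ np.linalg.inv(R.T @ R)
        R_new = R - eta * G.T @ L @ np.linalg.inv(L.T @ L)
        L, R = L_new, R_new
    return L, R
```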
arXiv:2104.14526
We propose to assess the fairness of personalized recommender systems in the
sense of envy-freeness: every (group of) user(s) should prefer their
recommendations to the recommendations of other (groups of) users. Auditing for
envy-freeness requires probing user preferences to detect potential blind
spots, which may deteriorate recommendation performance. To control the cost of
exploration, we propose an auditing algorithm based on pure exploration and
conservative constraints in multi-armed bandits. We study, both theoretically
and empirically, the trade-offs achieved by this algorithm.
arXiv:2104.14527
Among deep learning methods applied to the intelligent diagnosis of gastric
cancer, existing approaches concentrate on Convolutional Neural Networks (CNNs),
and none are available that use the Visual Transformer (VT). The VT is an
efficient and stable deep learning model with recent applications in the field of
computer vision, capable of improving the recognition of global information in
images. In this paper, a multi-scale visual
transformer model (GasHis-Transformer) is proposed for a gastric histopathology
image classification (GHIC) task, which enables the automatic classification of
abnormal and normal gastric histopathology images obtained by
optical microscopy, to facilitate the medical work of histopathologists. This
GasHis-Transformer model is built on two fundamental modules, including a
global information module (GIM) and a local information module (LIM). In the
experiment, an open source hematoxylin and eosin (H&E) stained gastric
histopathology dataset with 280 abnormal and normal images is first divided into
training, validation, and test sets at a ratio of 1:1:2. Then,
GasHis-Transformer achieves a precision, recall, F1-score, and accuracy on the
test set of 98.0%, 100.0%, 96.0%, and 98.0%, respectively. Furthermore, a comparative
experiment also tests the generalization ability of the proposed
GasHis-Transformer model with a lymphoma image dataset including 374 images and
a breast cancer dataset including 1390 images in two extended experiments and
achieves an accuracy of 83.9% and 89.4%, respectively. Finally,
the GasHis-Transformer model demonstrates high classification performance and shows
its effectiveness and enormous potential in GHIC tasks.
arXiv:2104.14528
We introduce a family of order $N\in \mathbb{N}$ Lax matrices that is indexed
by the natural number $k\in \{1,\ldots,N-1\}.$ For each value of $k$ they serve
as strong Lax matrices of a hierarchy of integrable difference systems in edge
variables that in turn lead to hierarchies of integrable difference systems in
vertex variables or in a combination of edge and vertex variables. Furthermore,
the entries of the Lax matrices are considered as elements of a division ring,
so we obtain hierarchies of discrete integrable systems extended to the
non-commutative domain.
arXiv:2104.14529
We introduce a two-parameter function $\phi_{q_+,q_-}$ on the infinite
hyperoctahedral group, which is a bivariate refinement of the reflection length
keeping track of the long and the short reflections separately. We provide a
complete characterization of the parameters $q_+,q_-$ when the signed
reflection function $\phi_{q_+,q_-}$ is positive definite and we prove that
this condition holds if and only if $\phi_{q_+,q_-}$ is an extreme character of
the infinite hyperoctahedral group. We construct the corresponding
representations as a natural action of the hyperoctahedral group $B(n)$ on the
tensor product of $n$ copies of a vector space, which gives a two-parameter
analog of the classical construction of Schur--Weyl.
We apply our characterization to construct a cyclic Fock space of type B
which generalizes the one-parameter construction in type A found previously by
Bożejko and Guta. We also construct a new cyclic Gaussian operator of type B
and we relate its moments with the Askey--Wilson--Kerov distribution by using
the notion of cycles on pair-partitions, which we introduce here.
arXiv:2104.14530
In this paper we address a variant of the Kazhdan-Lusztig non-degeneracy
conjecture posed by Gedeon, Proudfoot and Young. We prove that if $M$ has a
free basis (something that conjecturally asymptotically all matroids are
expected to possess), then $M$ is non-degenerate. To this end, we study the
behavior of Kazhdan-Lusztig polynomials of matroids with respect to the
operation of circuit-hyperplane relaxation. This yields a family of polynomials
that relate the Kazhdan-Lusztig, the inverse Kazhdan-Lusztig and the
$Z$-polynomial of a matroid with those of its relaxations and do not depend on
the matroid. As an application of our results, we deduce that uniform matroids
maximize coefficient-wise the Kazhdan-Lusztig polynomials, inverse
Kazhdan-Lusztig polynomials and the $Z$-polynomials, when restricted to sparse
paving matroids.
arXiv:2104.14531
The mass media play at least five basic functions which include news
dissemination, surveillance of the environment, correlation of the components
of the society, entertainment and transmission of social heritage. Sometimes,
disruptions and impairments do occur in the performance of these roles and some
of these basic functions become dysfunctions, which turn the media into
purveyors of negative values. The present study investigates how the popular
Nigerian TV reality show, Big Brother Naija (BBN), is perceived by its viewers.
Three hundred heavy viewers of the program were surveyed from Lagos and Ede,
South-West Nigeria, and their opinions and attitudes were sought regarding why
they like or dislike the program, the gratifications that those who like the
program derive, and whether BBN, as media content, is generally functional
or dysfunctional for society. Sixty-six (33.7 per cent) of the respondents like
the program because it entertains. Half of the respondents, 99 (50.5 per cent),
dislike the immoral aspects of the program. The viewers affirm that the eviction part of
the program was their highest form of gratification. Most respondents, despite
public outcry against the program, consider the program to be functional.
Findings reinforce the postulation that TV viewers are not passive consumers of
media content.
arXiv:2104.14532
We propose an optimization scheme for ground-state cooling of a mechanical
mode by coupling to a general three-level system. We formulate the optimization
scheme, using the master equation approach, over a broad range of system
parameters including detunings, decay rates, coupling strengths, and pumping
rate. We implement the optimization scheme on three physical systems: a
colloidal quantum dot coupled to its confined phonon mode, a polariton coupled
to a mechanical resonator mode, and a coupled-cavity system coupled to a
mechanical resonator mode. These three physical systems span a broad range of
mechanical mode frequencies, coupling rates, and decay rates. Our optimization
scheme lowers the steady-state phonon number in all three cases by orders of
magnitude. We also calculate the net cooling rate by estimating the phonon
decay rate and show that the optimized system parameters also result in
efficient cooling. The proposed optimization scheme can be readily extended to
any generic driven three-level system coupled to a mechanical mode.
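For a flavor of the underlying computation, here is a minimal QuTiP sketch (ours, with placeholder parameters and a generic level scheme, not the paper's specific systems) that evaluates the steady-state phonon number for a three-level system coupled to a mechanical mode; an optimizer would sweep the detunings, couplings, pump, and decay rates to minimize this quantity.

```python
# Steady-state phonon number of a generic three-level system + mechanical mode.
import numpy as np
from qutip import basis, destroy, qeye, tensor, steadystate, expect

Nf = 15                                        # phonon Fock-space cutoff
a = tensor(qeye(3), destroy(Nf))               # mechanical mode
s01 = tensor(basis(3, 0) * basis(3, 1).dag(), qeye(Nf))   # |0><1|
s12 = tensor(basis(3, 1) * basis(3, 2).dag(), qeye(Nf))   # |1><2|
p1 = tensor(basis(3, 1) * basis(3, 1).dag(), qeye(Nf))
p2 = tensor(basis(3, 2) * basis(3, 2).dag(), qeye(Nf))

delta1, delta2 = 1.0, 2.0                      # detunings (placeholder units)
g, Omega = 0.05, 0.2                           # phonon coupling, pump strength
H = (delta1 * p1 + delta2 * p2
     + Omega * (s01 + s01.dag())               # coherent pump on 0 <-> 1
     + g * (a * s12.dag() + a.dag() * s12))    # phonon-assisted 1 <-> 2

kappa, gamma1, gamma2, nth = 0.01, 0.1, 0.5, 2.0
c_ops = [np.sqrt(kappa * (nth + 1)) * a,       # phonon loss to a thermal bath
         np.sqrt(kappa * nth) * a.dag(),       # phonon heating
         np.sqrt(gamma1) * s01,                # decay 1 -> 0
         np.sqrt(gamma2) * s12]                # decay 2 -> 1

rho = steadystate(H, c_ops)
print("steady-state phonon number:", expect(a.dag() * a, rho))
```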
arXiv:2104.14533
Balancing and push-recovery are essential capabilities enabling humanoid
robots to solve complex locomotion tasks. In this context, classical control
systems tend to be based on simplified physical models and hard-coded
strategies. Although successful in specific scenarios, this approach requires
demanding tuning of parameters and switching logic between
specifically-designed controllers for handling more general perturbations. We
apply model-free Deep Reinforcement Learning for training a general and robust
humanoid push-recovery policy in a simulation environment. Our method targets
high-dimensional whole-body humanoid control and is validated on the iCub
humanoid. Reward components incorporating expert knowledge on humanoid control
enable fast learning of several robust behaviors by the same policy, spanning
the entire body. We validate our method with extensive quantitative analyses in
simulation, including out-of-sample tasks which demonstrate policy robustness
and generalization, both key requirements towards real-world robot deployment.
arXiv:2104.14534
Anomaly detection, the task of identifying unusual samples in data, often
relies on a large set of training samples. In this work, we consider the
setting of few-shot anomaly detection in images, where only a few images are
given at training. We devise a hierarchical generative model that captures the
multi-scale patch distribution of each training image. We further enhance the
representation of our model by using image transformations and optimize
scale-specific patch-discriminators to distinguish between real and fake
patches of the image, as well as between different transformations applied to
those patches. The anomaly score is obtained by aggregating the patch-based
votes of the correct transformation across scales and image regions. We
demonstrate the superiority of our method on both the one-shot and few-shot
settings, on the datasets of Paris, CIFAR10, MNIST and FashionMNIST as well as
in the setting of defect detection on MVTec. In all cases, our method
outperforms the recent baseline methods.
arXiv:2104.14535
#StopAsianHate and #StopAAPIHate are two of the most commonly used hashtags
that represent the current movement to end hate crimes against the Asian
American and Pacific Islander community. We conduct a social media study of
public opinion on the #StopAsianHate and #StopAAPIHate movement based on 46,058
Twitter users across 30 states in the United States ranging from March 18 to
April 11, 2021. The movement attracts more participation from women, younger
adults, and the Asian and Black communities. 51.56% of the Twitter users show direct
support, 18.38% share news about anti-Asian hate crimes, while 5.43% show a
negative attitude towards the movement. Public opinion varies across user
characteristics. Furthermore, among the states with the most racial-bias-motivated
hate crimes, the negative attitude towards the #StopAsianHate and #StopAAPIHate
movement is the weakest. To the best of our knowledge, this is the first large-scale
social media-based study to understand public opinion on the #StopAsianHate and
#StopAAPIHate movement. We hope our study can provide insights and promote
research on anti-Asian hate crimes, and ultimately help address such a serious
societal issue for the common benefits of all communities.
arXiv:2104.14536
While learned video codecs have demonstrated great promise, they have yet to
achieve sufficient efficiency for practical deployment. In this work, we
propose several novel ideas for learned video compression which allow for
improved performance for the low-latency mode (I- and P-frames only) along with
a considerable increase in computational efficiency. In this setting, for
natural videos our approach compares favorably across the entire R-D curve
under metrics PSNR, MS-SSIM and VMAF against all mainstream video standards
(H.264, H.265, AV1) and all ML codecs. At the same time, our approach runs at
least 5x faster and has fewer parameters than all ML codecs which report these
figures.
Our contributions include a flexible-rate framework allowing a single model
to cover a large and dense range of bitrates, at a negligible increase in
computation and parameter count; an efficient backbone optimized for ML-based
codecs; and a novel in-loop flow prediction scheme which leverages prior
information towards more efficient compression.
We benchmark our method, which we call ELF-VC (Efficient, Learned and
Flexible Video Coding) on popular video test sets UVG and MCL-JCV under metrics
PSNR, MS-SSIM and VMAF. For example, on UVG under PSNR, it reduces the BD-rate
by 44% against H.264, 26% against H.265, 15% against AV1, and 35% against the
current best ML codec. At the same time, on an NVIDIA Titan V GPU our approach
encodes/decodes VGA at 49/91 FPS, HD 720 at 19/35 FPS, and HD 1080 at 10/18
FPS.
arXiv:2104.14335
Though machine learning models are achieving great success, extensive
studies have exposed their disadvantage of inheriting latent discrimination and
societal bias from the training data, which hinders their adoption in
high-stakes applications. Thus, many efforts have been made to develop fair
machine learning models. Most of them require that sensitive attributes be
available during training to learn fair models. However, in many real-world
applications, it is usually infeasible to obtain the sensitive attribute due to
privacy or legal issues, which challenges existing fair classifiers. Though the
sensitive attribute of each data sample is unknown, we observe that there are
usually some non-sensitive features in the training data that are highly
correlated with sensitive attributes, which can be used to alleviate the bias.
Therefore, in this paper, we study a novel problem of exploring features that
are highly correlated with sensitive attributes for learning a fair and accurate
classifier without sensitive attributes. We theoretically show that by
minimizing the correlation between these related features and model prediction,
we can learn a fair classifier. Based on this motivation, we propose a novel
framework which simultaneously uses these related features for accurate
prediction and regularizing the model to be fair. In addition, the model can
dynamically adjust the importance weight of each related feature to balance the
contribution of the feature on model classification and fairness. Experimental
results on real-world datasets demonstrate the effectiveness of the proposed
model for learning fair models with high classification accuracy.
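As a rough sketch of this idea (our illustration, not the authors' implementation; it assumes a binary task and treats the feature importance weights as ordinary learnable parameters), the training loss could combine cross-entropy with a weighted decorrelation penalty:

```python
# Cross-entropy plus decorrelation from sensitive-related features (sketch).
import torch

def pearson_corr(u, v):
    u = u - u.mean()
    v = v - v.mean()
    return (u * v).mean() / (u.std() * v.std() + 1e-8)

def fair_loss(logits, labels, related_feats, weights, lam=1.0):
    """related_feats: (batch, d) non-sensitive features highly correlated with
    the unobserved sensitive attribute; weights: (d,) learnable importance
    weights (the paper adjusts such weights dynamically)."""
    ce = torch.nn.functional.cross_entropy(logits, labels)
    score = logits.softmax(dim=1)[:, 1]              # P(y = 1), binary task
    w = weights.softmax(dim=0)                       # keep weights normalized
    penalty = sum(w[j] * pearson_corr(score, related_feats[:, j]).abs()
                  for j in range(related_feats.shape[1]))
    return ce + lam * penalty
```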
arXiv:2104.14537
We consider the distributed training of large-scale neural networks that
serve as PDE solvers producing full field outputs. We specifically consider
neural solvers for the generalized 3D Poisson equation over megavoxel domains.
A scalable framework is presented that integrates two distinct advances. First,
we accelerate training a large model via a method analogous to the multigrid
technique used in numerical linear algebra. Here, the network is trained using
a hierarchy of increasing resolution inputs in sequence, analogous to the 'V',
'W', 'F', and 'Half-V' cycles used in multigrid approaches. In conjunction with
the multi-grid approach, we implement a distributed deep learning framework
which significantly reduces the time to solve. We show the scalability of this
approach on both GPU (Azure VMs on Cloud) and CPU clusters (PSC Bridges2). This
approach is deployed to train a generalized 3D Poisson solver that scales well
to predict output full-field solutions up to the resolution of 512x512x512 for
a high dimensional family of inputs.
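A schematic of the multigrid-style training schedule (our sketch; `model`, `make_loader`, and `train_epochs` are hypothetical stand-ins, and the stage list is a placeholder):

```python
# Train the same network on a sequence of input resolutions; 'V', 'W', 'F',
# and 'Half-V' cycles correspond to different orders of revisiting levels.
STAGES = [64, 128, 256, 512]          # increasing voxel resolutions ('Half-V')

def multigrid_train(model, make_loader, train_epochs, epochs_per_stage=5):
    for res in STAGES:
        loader = make_loader(resolution=res)   # Poisson problems sampled at `res`
        train_epochs(model, loader, epochs=epochs_per_stage)
    return model
```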
arXiv:2104.14538
Model bias is an inherent limitation of the current dominant approach to
optimal quantum control, which relies on a system simulation for optimization
of control policies. To overcome this limitation, we propose a circuit-based
approach for training a reinforcement learning agent on quantum control tasks
in a model-free way. Given a continuously parameterized control circuit, the
agent learns its parameters through trial-and-error interaction with the
quantum system, using measurements as the only source of information about the
quantum state. By focusing on the task of quantum state preparation in a
harmonic oscillator coupled to an ancilla qubit, we show how to reward the
learning agent using measurements of experimentally available observables. We
demonstrate by numerical simulations preparation of arbitrary states using both
open- and closed-loop control through adaptive quantum feedback. Our work is of
immediate relevance to superconducting-circuit and trapped-ion platforms,
where such training can be implemented in real time in an experiment, allowing
complete elimination of model bias and the adaptation of quantum control
policies to the specific system in which they are deployed.
arXiv:2104.14539
Self-supervised monocular depth estimation networks are trained to predict
scene depth using nearby frames as a supervision signal during training.
However, for many applications, sequence information in the form of video
frames is also available at test time. The vast majority of monocular networks
do not make use of this extra signal, thus ignoring valuable information that
could be used to improve the predicted depth. Those that do, either use
computationally expensive test-time refinement techniques or off-the-shelf
recurrent networks, which only indirectly make use of the geometric information
that is inherently available.
We propose ManyDepth, an adaptive approach to dense depth estimation that can
make use of sequence information at test time, when it is available. Taking
inspiration from multi-view stereo, we propose a deep end-to-end cost volume
based approach that is trained using self-supervision only. We present a novel
consistency loss that encourages the network to ignore the cost volume when it
is deemed unreliable, e.g. in the case of moving objects, and an augmentation
scheme to cope with static cameras. Our detailed experiments on both KITTI and
Cityscapes show that we outperform all published self-supervised baselines,
including those that use single or multiple frames at test time.
arXiv:2104.14540
An equation has been derived to predict the unit cell volume of high-entropy
alloys (HEAs) by two different methods. Both treatments led to the same
equation. For cubic HEAs, lattice parameters were calculated. The predicted
lattice parameters were compared with those reported for 68 HEAs. Lattice
parameters were also calculated using the equivalent of Vegard's law for these
alloys. Average errors were 0.52 and 0.42 when Vegard's law and the equation
derived in this work were used, respectively.
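For reference, the Vegard's-law baseline amounts to a composition-weighted average of elemental lattice parameters; a minimal sketch (ours, with placeholder values):

```python
# Vegard's-law estimate of a cubic alloy's lattice parameter.
def vegard_lattice_parameter(fractions, pure_a):
    """Composition-weighted average of elemental lattice parameters."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-6
    return sum(x * pure_a[el] for el, x in fractions.items())

# Equiatomic five-component example with placeholder values (angstroms):
a_est = vegard_lattice_parameter(
    {el: 0.2 for el in ("Co", "Cr", "Fe", "Mn", "Ni")},
    {"Co": 3.54, "Cr": 2.88, "Fe": 2.87, "Mn": 3.08, "Ni": 3.52},
)
print(a_est)
```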
arXiv:2104.14541
We discuss a model where mixed warm and hot keV neutrino dark matter arises
naturally. We arrange active and sterile neutrinos in the same $SU(3)_L$
multiplet, with the lightest sterile neutrino being dark matter. The other two
heavy sterile neutrinos, through their out-of-equilibrium decay, contribute
both to the dilution of dark matter density and its population, after
freeze-out. We show that this model features all ingredients to overcome the
overproduction of keV neutrino dark matter, and explore the phenomenological
implications for Big Bang Nucleosynthesis and the number of relativistic
degrees of freedom.
arXiv:2104.14542
Variational quantum algorithms (VQAs) promise efficient use of near-term
quantum computers. However, training these algorithms often requires an
extensive amount of time and suffers from the barren plateau problem where the
magnitude of the gradients vanishes with an increasing number of qubits. Here, we
show how to optimally train a VQA for learning quantum states. Parameterized
quantum circuits can form Gaussian kernels, which we use to derive optimal
adaptive learning rates for gradient ascent. We introduce the generalized
quantum natural gradient that features stability and optimized movement in
parameter space. Both methods together outperform other optimization routines
and can enhance VQAs as well as quantum control techniques. The gradients of
the VQA do not vanish when the fidelity between the initial state and the state
to be learned is bounded from below. We identify a VQA for quantum simulation
with such a constraint that can be trained free of barren plateaus. Finally, we
propose the application of Gaussian kernels for quantum machine learning.
arXiv:2104.14543
Synthetic datasets play a critical role in pre-training CNN models for
optical flow, but they are painstaking to generate and hard to adapt to new
applications. To automate the process, we present AutoFlow, a simple and
effective method to render training data for optical flow that optimizes the
performance of a model on a target dataset. AutoFlow takes a layered approach
to render synthetic data, where the motion, shape, and appearance of each layer
are controlled by learnable hyperparameters. Experimental results show that
AutoFlow achieves state-of-the-art accuracy in pre-training both PWC-Net and
RAFT. Our code and data are available at https://autoflow-google.github.io .
arXiv:2104.14544
Object tracking has achieved significant progress over the past few years.
However, state-of-the-art trackers become increasingly heavy and expensive,
which limits their deployments in resource-constrained applications. In this
work, we present LightTrack, which uses neural architecture search (NAS) to
design more lightweight and efficient object trackers. Comprehensive
experiments show that our LightTrack is effective. It can find trackers that
achieve superior performance compared to handcrafted SOTA trackers, such as
SiamRPN++ and Ocean, while using much fewer model Flops and parameters.
Moreover, when deployed on resource-constrained mobile chipsets, the discovered
trackers run much faster. For example, on Snapdragon 845 Adreno GPU, LightTrack
runs $12\times$ faster than Ocean, while using $13\times$ fewer parameters and
$38\times$ fewer Flops. Such improvements might narrow the gap between academic
models and industrial deployments in the object tracking task. LightTrack is
released at https://github.com/researchmm/LightTrack.
arXiv:2104.14545
Over-parametrized deep neural networks trained by stochastic gradient descent
are successful in performing many tasks of practical relevance. One aspect of
over-parametrization is the possibility that the student network has a larger
expressivity than the data generating process. In the context of a
student-teacher scenario, this corresponds to the so-called over-realizable
case, where the student network has a larger number of hidden units than the
teacher. For on-line learning of a two-layer soft committee machine in the
over-realizable case, we find that the approach to perfect learning occurs in a
power-law fashion rather than exponentially as in the realizable case. All
student nodes learn and replicate one of the teacher nodes if teacher and
student outputs are suitably rescaled.
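For reference, a sketch of the standard setting assumed in such studies (our notation, following the common soft-committee-machine setup; the paper's precise conventions may differ): a student with $K$ hidden units maps an $N$-dimensional input $\boldsymbol{\xi}$ to

```latex
\phi(\boldsymbol{\xi}) \;=\; \sum_{k=1}^{K} g\!\left(\frac{\mathbf{w}_k\cdot\boldsymbol{\xi}}{\sqrt{N}}\right),
\qquad
\epsilon_g \;=\; \left\langle \tfrac{1}{2}\,\bigl[\phi(\boldsymbol{\xi})-\tau(\boldsymbol{\xi})\bigr]^2 \right\rangle_{\boldsymbol{\xi}},
```

where $g$ is a sigmoidal activation and $\tau$ is a teacher of the same form with $M$ hidden units; the over-realizable case is $K > M$, and the finding above is that the generalization error $\epsilon_g$ then decays to zero as a power law in training time rather than exponentially.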
arXiv:2104.14546
Recent deep-learning-based techniques for the reconstruction of geometries
from different input representations such as images and point clouds have been
instrumental in advancing research in geometric machine learning. Most of these
techniques rely on a triangular mesh representation for representing the
geometry, with very recent attempts in using B-splines. While Non-Uniform
Rational B-splines (NURBS) are the de facto standard in the CAD industry,
minimal efforts have been made to bridge the gap between deep-learning
frameworks and the NURBS representation for geometry. The backbone of modern
deep learning techniques is the use of a fully automatic differentiable
definition for each mathematical operation to enable backpropagation of losses
while training. In order to integrate the NURBS representation of CAD models
with deep learning methods, we propose a differentiable NURBS layer for
evaluating the curve or surface given a set of NURBS parameters. We have
developed a NURBS layer defining the forward and backward pass required for
automatic differentiation. Our implementation is GPU accelerated and is
directly integrated with PyTorch, a popular deep learning framework. We
demonstrate the efficacy of our NURBS layer by automatically incorporating it
with the stochastic gradient descent algorithm and performing CAD operations
such as curve or surface fitting and surface offsetting. Further, we show its
utility in deep learning applications such as point cloud reconstruction and
structural modeling and analysis of shell structures such as heart valves.
These examples show that our layer has better performance for certain deep
learning frameworks and can be directly integrated with any CAD deep-learning
framework that requires the use of NURBS.
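To make the idea concrete, here is a minimal differentiable NURBS curve evaluator in PyTorch (our sketch, not the paper's GPU-accelerated layer): control points and weights are tensors, so autograd supplies the backward pass, while the knot vector is held fixed.

```python
import torch

def bspline_basis(knots, degree, u):
    """Cox-de Boor recursion: all degree-`degree` basis values at parameter u.
    Uses half-open knot intervals, so u must lie strictly inside the domain."""
    N = torch.tensor([1.0 if knots[i] <= u < knots[i + 1] else 0.0
                      for i in range(len(knots) - 1)])
    for p in range(1, degree + 1):
        N_new = torch.zeros(len(knots) - p - 1)
        for i in range(len(N_new)):
            if knots[i + p] > knots[i]:
                N_new[i] += (u - knots[i]) / (knots[i + p] - knots[i]) * N[i]
            if knots[i + p + 1] > knots[i + 1]:
                N_new[i] += ((knots[i + p + 1] - u)
                             / (knots[i + p + 1] - knots[i + 1]) * N[i + 1])
        N = N_new
    return N

def nurbs_point(ctrl, w, knots, degree, u):
    """C(u) = sum_i N_i(u) w_i P_i / sum_i N_i(u) w_i (rational combination)."""
    Nw = bspline_basis(knots, degree, u) * w
    return (Nw.unsqueeze(1) * ctrl).sum(0) / Nw.sum()

# Quadratic example: gradients w.r.t. control points and weights via autograd.
ctrl = torch.tensor([[0., 0.], [1., 1.], [2., 0.]], requires_grad=True)
w = torch.tensor([1.0, 0.7071, 1.0], requires_grad=True)
knots = [0., 0., 0., 1., 1., 1.]
pt = nurbs_point(ctrl, w, knots, 2, 0.5)
pt.sum().backward()          # populates ctrl.grad and w.grad
```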
arXiv:2104.14547
Self-supervised learning algorithms based on instance discrimination train
encoders to be invariant to pre-defined transformations of the same instance.
While most methods treat different views of the same image as positives for a
contrastive loss, we are interested in using positives from other instances in
the dataset. Our method, Nearest-Neighbor Contrastive Learning of visual
Representations (NNCLR), samples the nearest neighbors from the dataset in the
latent space, and treats them as positives. This provides more semantic
variations than pre-defined transformations.
We find that using the nearest-neighbor as positive in contrastive losses
improves performance significantly on ImageNet classification, from 71.7% to
75.6%, outperforming previous state-of-the-art methods. On semi-supervised
learning benchmarks we improve performance significantly when only 1% ImageNet
labels are available, from 53.8% to 56.5%. On transfer learning benchmarks our
method outperforms state-of-the-art methods (including supervised learning with
ImageNet) on 8 out of 12 downstream datasets. Furthermore, we demonstrate
empirically that our method is less reliant on complex data augmentations. We
see a relative reduction of only 2.1% ImageNet Top-1 accuracy when we train
using only random crops.
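A minimal sketch of the nearest-neighbor positive substitution at the heart of this approach (our simplification; the full method also uses a prediction head and maintains the support set as a queue):

```python
# NNCLR-style loss: swap each view-1 embedding for its support-set neighbor.
import torch
import torch.nn.functional as F

def nnclr_loss(z1, z2, support, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two augmented views;
    support: (queue, dim) support set of past embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    support = F.normalize(support, dim=1)
    nn_idx = (z1 @ support.T).argmax(dim=1)        # nearest neighbor per sample
    nn1 = support[nn_idx]                          # used as the positive
    logits = nn1 @ z2.T / temperature              # (batch, batch) similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```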
arXiv:2104.14548
This paper proposes a distributed Reinforcement Learning (RL) based framework
that can be used for synthesizing MAC layer wireless protocols in IoT networks
with low-complexity wireless transceivers. The proposed framework does not rely
on complex hardware capabilities such as carrier sensing and its associated
algorithmic complexities that are often not supported in wireless transceivers
of low-cost and low-energy IoT devices. In this framework, the access protocols
are first formulated as Markov Decision Processes (MDP) and then solved using
RL. A distributed and multi-Agent RL framework is used as the basis for
protocol synthesis. Distributed behavior makes the nodes independently learn
optimal transmission strategies without having to rely on full network level
information and direct knowledge of behavior of other nodes. The nodes learn to
minimize packet collisions such that optimal throughput can be attained and
maintained under loading conditions higher than those that known benchmark
protocols (such as ALOHA) can sustain on IoT devices without complex transceivers. In
addition, the nodes are observed to be able to learn to act optimally in the
presence of heterogeneous loading and network topological conditions. Finally,
the proposed learning approach allows the wireless bandwidth to be fairly
distributed among network nodes in a way that is not dependent on such
heterogeneities. Via simulation experiments, the paper demonstrates the
performance of the learning paradigm and its abilities to make nodes adapt
their optimal transmission strategies on the fly in response to various network
dynamics.
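A toy sketch of the protocol-synthesis idea (our illustration, not the paper's framework): each node runs an independent Q-learner that chooses between transmitting and waiting in each slot of a slotted-ALOHA-style channel, using only its own ACK/collision feedback as reward.

```python
# Independent per-node Q-learning over transmit/wait decisions.
import random

ACTIONS = ("TX", "WAIT")

class MacAgent:
    def __init__(self, eps=0.1, alpha=0.05, gamma=0.9):
        self.q = {}                                   # (state, action) -> value
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, state):
        if random.random() < self.eps:                # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

# Per slot: reward +1 for an acknowledged transmission, -1 for a collision,
# 0 for waiting; the state can encode the node's recent feedback history.
```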
arXiv:2104.14549
We show that for a model complete strongly minimal theory whose pregeometry
is flat, the recursive spectrum (SRM($T$)) is either of the form $[0,\alpha)$
for $\alpha\in \omega+2$ or $[0,n]\cup\{\omega\}$ for $n\in \omega$, or
$\{\omega\}$, or contained in $\{0,1,2\}$.
Combined with previous results, this leaves precisely 4 sets for which it is
not yet determined whether each is the spectrum of a model complete strongly
minimal theory with a flat pregeometry.
arXiv:2104.14550
Recent generative models can synthesize "views" of artificial images that
mimic real-world variations, such as changes in color or pose, simply by
learning from unlabeled image collections. Here, we investigate whether such
views can be applied to real images to benefit downstream analysis tasks such
as image classification. Using a pretrained generator, we first find the latent
code corresponding to a given real input image. Applying perturbations to the
code creates natural variations of the image, which can then be ensembled
together at test-time. We use StyleGAN2 as the source of generative
augmentations and investigate this setup on classification tasks involving
facial attributes, cat faces, and cars. Critically, we find that several design
decisions are required to make this process work: the perturbation
procedure, weighting between the augmentations and original image, and training
the classifier on synthesized images can all impact the result. Currently, we
find that while test-time ensembling with GAN-based augmentations can offer
some small improvements, the remaining bottlenecks are the efficiency and
accuracy of the GAN reconstructions, coupled with classifier sensitivities to
artifacts in GAN-generated images.
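A schematic sketch of the test-time ensembling loop (ours, not the authors' pipeline): `G`, `invert`, and `classifier` are hypothetical stand-ins for a pretrained generator, a GAN-inversion routine, and the downstream classifier, and `sigma`/`w_orig` are exactly the kind of design choices the abstract says must be tuned.

```python
# Ensemble classifier outputs over GAN-generated views of a real image.
import torch

def gan_ensemble_predict(x, G, invert, classifier,
                         n_views=8, sigma=0.2, w_orig=0.5):
    z = invert(G, x)                                  # latent code of the input
    logits = w_orig * classifier(x)                   # keep the original image
    for _ in range(n_views):
        view = G(z + sigma * torch.randn_like(z))     # perturbed reconstruction
        logits = logits + (1.0 - w_orig) / n_views * classifier(view)
    return logits.argmax(dim=1)
```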
arXiv:2104.14551
Quantum computing has the potential to revolutionize computing for certain
classes of problems with exponential scaling, and yet this potential is
accompanied by significant sensitivity to noise, requiring sophisticated error
correction and mitigation strategies. Here we simulate the relaxations of
stationary states at different frequencies on several quantum computers to
obtain unique spectroscopic fingerprints of their noise. Response functions
generated from the data reveal a clear signature of non-Markovian dynamics,
demonstrating that each of the quantum computers acts as a non-Markovian bath
with a unique colored noise profile. The study suggests that noisy
intermediate-scale quantum (NISQ) computers provide a built-in noisy bath that
can be analyzed through their simulation of closed quantum systems, with the
results potentially being harnessed for error mitigation or open-system
simulation.
arXiv:2104.14552
Visual content often contains recurring elements. Text is made up of glyphs
from the same font, animations, such as cartoons or video games, are composed
of sprites moving around the screen, and natural videos frequently have
repeated views of objects. In this paper, we propose a deep learning approach
for obtaining a graphically disentangled representation of recurring elements
in a completely self-supervised manner. By jointly learning a dictionary of
texture patches and training a network that places them onto a canvas, we
effectively deconstruct sprite-based content into a sparse, consistent, and
interpretable representation that can be easily used in downstream tasks. Our
framework offers a promising approach for discovering recurring patterns in
image collections without supervision.
arXiv:2104.14553
Recent advances in geometric deep-learning introduce complex computational
challenges for evaluating the distance between meshes. From a mesh model, point
clouds are necessary along with a robust distance metric to assess surface
quality or as part of the loss function for training models. Current methods
often rely on a uniform random mesh discretization, which yields irregular
sampling and noisy distance estimation. In this paper we introduce MongeNet, a
fast, optimal-transport-based sampler that allows for an accurate
discretization of a mesh with better approximation properties. We compare our
method to the ubiquitous random uniform sampling and show that the
approximation error is almost half with a very small computational overhead.
arXiv:2104.14554
We show how one may classify all semisimple algebras containing the
$\mathfrak{su}(3)\oplus \mathfrak{su}(2) \oplus \mathfrak{u}(1)$ symmetry of
the Standard Model and acting on some given matter sector, enabling theories
beyond the Standard Model with unification (partial or total) of symmetries
(gauged or global) to be catalogued. With just a single generation of Standard
Model fermions plus a singlet neutrino, the only gauged symmetries correspond
to the well-known algebras $\mathfrak{su}(5)$, $\mathfrak{so}(10),$ and
$\mathfrak{su}(4)\oplus \mathfrak{su}(2) \oplus \mathfrak{su}(2)$, but with two
or more generations a limited number of exotic symmetries mixing flavor, color,
and electroweak symmetries become possible. We provide a complete catalogue in
the case of 3 generations or fewer and describe how the method can be
generalized to include additional matter.
arXiv:2104.14555
Recent works find that AI algorithms learn biases from data. Therefore, it is
urgent and vital to identify biases in AI algorithms. However, the previous
bias identification pipeline overly relies on human experts to conjecture
potential biases (e.g., gender), which may neglect other underlying biases not
realized by humans. To help human experts better find the AI algorithms'
biases, we study a new problem in this work -- for a classifier that predicts a
target attribute of the input image, discover its unknown biased attribute.
To solve this challenging problem, we use a hyperplane in the generative
model's latent space to represent an image attribute; thus, the original
problem is transformed to optimizing the hyperplane's normal vector and offset.
We propose a novel total-variation loss within this framework as the objective
function and a new orthogonalization penalty as a constraint. The latter
prevents trivial solutions in which the discovered biased attribute is
identical with the target or one of the known-biased attributes. Extensive
experiments on both disentanglement datasets and real-world datasets show that
our method can discover biased attributes and achieve better disentanglement
w.r.t. target attributes. Furthermore, the qualitative results show that our
method can discover unnoticeable biased attributes for various object and scene
classifiers, proving our method's generalizability for detecting biased
attributes in diverse domains of images. The code is available at
https://git.io/J3kMh.
arXiv:2104.14556
We propose a novel approach for few-shot talking-head synthesis. While recent
works in neural talking heads have produced promising results, they can still
produce images that do not preserve the identity of the subject in source
images. We posit this is a result of the entangled representation of each
subject in a single latent code that models 3D shape information, identity
cues, colors, lighting and even background details. In contrast, we propose to
factorize the representation of a subject into its spatial and style
components. Our method generates a target frame in two steps. First, it
predicts a dense spatial layout for the target image. Second, an image
generator utilizes the predicted layout for spatial denormalization and
synthesizes the target frame. We experimentally show that this disentangled
representation leads to a significant improvement over previous methods, both
quantitatively and qualitatively.
arXiv:2104.14557
We present a large-scale study on unsupervised spatiotemporal representation
learning from videos. With a unified perspective on four recent image-based
frameworks, we study a simple objective that can easily generalize all these
methods to space-time. Our objective encourages temporally-persistent features
in the same video, and in spite of its simplicity, it works surprisingly well
across: (i) different unsupervised frameworks, (ii) pre-training datasets,
(iii) downstream datasets, and (iv) backbone architectures. We draw a series of
intriguing observations from this study, e.g., we discover that encouraging
long-spanned persistency can be effective even if the timespan is 60 seconds.
In addition to state-of-the-art results in multiple benchmarks, we report a few
promising cases in which unsupervised pre-training can outperform its
supervised counterpart. Code is made available at
https://github.com/facebookresearch/SlowFast.
arXiv:2104.14558
Exemplar-based portrait stylization is widely attractive and highly desired.
Despite recent successes, it remains challenging, especially when considering
both texture and geometric styles. In this paper, we present the first
framework for one-shot 3D portrait style transfer, which can generate 3D face
models with both the geometry exaggerated and the texture stylized while
preserving the identity from the original content. It requires only one
arbitrary style image instead of a large set of training examples for a
particular style, provides geometry and texture outputs that are fully
parameterized and disentangled, and enables further graphics applications with
the 3D representations. The framework consists of two stages. In the first
geometric style transfer stage, we use facial landmark translation to capture
the coarse geometry style and guide the deformation of the dense 3D face
geometry. In the second texture style transfer stage, we focus on performing
style transfer on the canonical texture by adopting a differentiable renderer
to optimize the texture in a multi-view framework. Experiments show that our
method achieves robustly good results on different artistic styles and
outperforms existing methods. We also demonstrate the advantages of our method
via various 2D and 3D graphics applications. Project page is
https://halfjoe.github.io/projs/3DPS/index.html.
arXiv:2104.14559