text (string, lengths 29-3.31k) | label (sequence, lengths 1-11)
---|---|
An important problem in geostatistics is to build models of the subsurface of
the Earth given physical measurements at sparse spatial locations. Typically,
this is done using spatial interpolation methods or by reproducing patterns
from a reference image. However, these algorithms fail to produce realistic
patterns and do not exhibit the wide range of uncertainty inherent in the
prediction of geology. In this paper, we show how semantic inpainting with
Generative Adversarial Networks can be used to generate varied realizations of
geology which honor physical measurements while matching the expected
geological patterns. In contrast to other algorithms, our method scales well
with the number of data points and mimics a distribution of patterns as opposed
to a single pattern or image. The generated conditional samples are state of
the art. | [
"stat.ML",
"physics.comp-ph",
"physics.geo-ph"
] |
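As a concrete illustration of the semantic-inpainting idea above, the following is a minimal sketch of conditioning a pretrained GAN on sparse measurements by optimizing over the latent space. The generator `G`, discriminator `D`, and the attribute `G.latent_dim` are assumed placeholders, not the authors' released code.

```python
import torch

def conditional_sample(G, D, measurements, mask, steps=500, lam=0.1):
    """Find a latent code whose generated realization honors sparse
    physical measurements (semantic inpainting via latent optimization).

    measurements: tensor with observed values where mask == 1, zeros elsewhere
    mask:         binary tensor marking the measured spatial locations
    """
    z = torch.randn(1, G.latent_dim, requires_grad=True)  # G.latent_dim assumed
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(steps):
        x = G(z)
        context = ((x - measurements) * mask).pow(2).sum()  # honor the data
        prior = -D(x).mean()            # stay on the learned geology manifold
        loss = context + lam * prior
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()

# Re-running from different random z yields varied realizations that all
# honor the same measurements, reflecting geological uncertainty.
```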
Deep generative models have been enjoying success in modeling continuous
data. However, it remains challenging to capture the representations for
discrete structures with formal grammars and semantics, e.g., computer programs
and molecular structures. Generating data that is both syntactically and
semantically correct remains largely an open problem. Inspired by compiler
theory, where syntax and semantics checks are performed via syntax-directed
translation (SDT), we propose a novel syntax-directed variational autoencoder
(SD-VAE) by introducing stochastic lazy attributes. This approach converts the
offline SDT check into on-the-fly generated guidance for constraining the
decoder. Compared to state-of-the-art methods, our approach enforces
constraints on the output space so that the output is not only syntactically
valid but also semantically reasonable. We evaluate the proposed model with
applications to programming languages and molecules, including reconstruction
and program/molecule optimization. The results demonstrate the effectiveness
of incorporating syntactic and semantic constraints in discrete generative
models, with significant improvements over current state-of-the-art
approaches. | [
"cs.LG",
"cs.CL"
] |
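To make the constrained-decoding idea concrete, here is a minimal sketch of syntax-only masking during sequential generation. This is a simplification: SD-VAE additionally propagates semantic attributes on the fly, and the toy grammar and names below are illustrative, not the paper's.

```python
import numpy as np

# Toy grammar: production index -> (left-hand side, nonterminals to push).
GRAMMAR = {0: ('S', ['S', 'S']), 1: ('S', ['T']), 2: ('T', []), 3: ('T', [])}

def constrained_decode(logits_per_step, start='S'):
    """Greedily decode a production sequence, masking out productions whose
    left-hand side does not match the nonterminal on top of the parse stack,
    so every output is syntactically valid by construction."""
    stack, out = [start], []
    for logits in logits_per_step:
        if not stack:
            break
        lhs = stack.pop()
        valid = np.array([GRAMMAR[i][0] == lhs for i in range(len(GRAMMAR))])
        choice = int(np.where(valid, logits, -np.inf).argmax())
        out.append(choice)
        stack.extend(reversed(GRAMMAR[choice][1]))  # expand leftmost-first
    return out

print(constrained_decode(np.random.randn(10, 4)))  # e.g. [0, 1, 2, 1, 3]
```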
The convolution operation suffers from a limited receptive field, while
global modeling is fundamental to dense prediction tasks such as semantic
segmentation. In this paper, we apply graph convolution to the semantic
segmentation task and propose an improved Laplacian. Graph reasoning is
directly performed in the original feature space organized as a spatial
pyramid. Unlike existing methods, our Laplacian is data-dependent, and we
introduce an attention diagonal matrix to learn a better distance metric. Our
formulation removes the projection and re-projection steps, making the
proposed method a lightweight module that can be easily plugged into current
computer vision architectures. More importantly, performing graph reasoning
directly in the feature space retains spatial relationships and enables the
spatial pyramid to explore multiple long-range contextual patterns at
different scales. Experiments on Cityscapes, COCO Stuff, PASCAL Context and
PASCAL VOC demonstrate the effectiveness of the proposed method on semantic
segmentation: we achieve comparable performance with lower computational and
memory overhead. | [
"cs.CV"
] |
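A minimal sketch of the kind of data-dependent graph reasoning described above: the affinity between pixel features is computed under a learned diagonal attention metric, row-normalized, and used for one propagation step, with no projection or re-projection. This is illustrative, not the paper's exact layer.

```python
import torch
import torch.nn as nn

class DataDependentGraphReasoning(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.att = nn.Parameter(torch.ones(channels))  # diagonal attention matrix
        self.proj = nn.Linear(channels, channels)

    def forward(self, x):                 # x: (B, N, C) flattened pixel features
        # Data-dependent affinity under the learned metric: A = X diag(att) X^T
        a = torch.einsum('bnc,c,bmc->bnm', x, self.att, x)
        a = torch.softmax(a, dim=-1)      # row-normalized adjacency
        return torch.relu(self.proj(torch.bmm(a, x))) + x  # propagate + residual
```

Because the module operates directly on the (B, N, C) feature map, it can be applied at each level of a spatial pyramid to capture long-range context at multiple scales.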
The world has transitioned into a new phase of online learning in response to
the recent COVID-19 pandemic. Now more than ever, it has become paramount to
push the limits of online learning to keep the education system flourishing.
One crucial component of online learning is Knowledge Tracing (KT). The aim of
KT is to model a student's knowledge level based on their answers to a
sequence of exercises, referred to as interactions. Students acquire their
skills while solving exercises, and each such interaction has a distinct
impact on the student's ability to solve a future exercise. This impact is
characterized by 1) the relation between the exercises involved in the
interactions and 2) the student's forgetting behavior. Traditional studies on
knowledge tracing do not explicitly model both components jointly to estimate
the impact of these interactions. In this paper, we propose a novel Relation-aware self-attention
model for Knowledge Tracing (RKT). We introduce a relation-aware self-attention
layer that incorporates contextual information. This contextual information
integrates the exercise relation information, obtained from exercise textual
content and student performance data, with the student's forgetting behavior,
modeled by an exponentially decaying kernel function. Extensive experiments on
three real-world datasets, among which two new collections are released to the
public, show that our model outperforms state-of-the-art knowledge tracing
methods. Furthermore, the interpretable attention weights help visualize the
relation between interactions and temporal patterns in the human learning
process. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
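A small sketch of the mechanism described above: attention scores over past interactions are modulated by an exponentially decaying kernel (forgetting) and by exercise-relation coefficients. This is a simplified, illustrative version; names and shapes are assumptions, not the released RKT code.

```python
import numpy as np

def relation_aware_attention(q, k, v, relation, times, theta=0.1):
    """q, k, v:  (T, d) query/key/value matrices over T past interactions
    relation: (T,) relation coefficients of past exercises to the current one
    times:    (T,) interaction timestamps"""
    scores = q @ k.T / np.sqrt(q.shape[1])                 # (T, T) attention
    decay = np.exp(-theta * np.abs(times[:, None] - times[None, :]))
    scores = scores * decay + relation[None, :]  # forgetting + exercise relations
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                     # row-wise softmax
    return w @ v
```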
Deep learning-based models have recently outperformed state-of-the-art
seasonal forecasting models, for example in predicting the El Niño-Southern
Oscillation (ENSO). However, current deep learning models are based on
convolutional neural networks which are difficult to interpret and can fail to
model large-scale atmospheric patterns. In comparison, graph neural networks
(GNNs) are capable of modeling large-scale spatial dependencies and are more
interpretable due to the explicit modeling of information flow through edge
connections. We propose the first application of graph neural networks to
seasonal forecasting. We design a novel graph connectivity learning module that
enables our GNN model to learn large-scale spatial interactions jointly with
the actual ENSO forecasting task. Our model, Graphino, outperforms
state-of-the-art deep learning-based models for forecasts up to six months
ahead. Additionally, we show that our model is more interpretable as it learns
sensible connectivity structures that correlate with the ENSO anomaly pattern. | [
"cs.LG",
"cs.NE",
"physics.ao-ph",
"stat.ML"
] |
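A minimal sketch of a graph-connectivity learning module of the kind described above: node embeddings are trained jointly with the forecasting objective and define a dense, row-normalized adjacency. This is illustrative and not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class LearnedConnectivity(nn.Module):
    def __init__(self, num_nodes, emb_dim=16):
        super().__init__()
        self.src = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.dst = nn.Parameter(torch.randn(num_nodes, emb_dim))

    def forward(self):
        logits = self.src @ self.dst.T        # (N, N) learned edge scores
        return torch.softmax(logits, dim=-1)  # row-normalized adjacency

adj = LearnedConnectivity(num_nodes=100)()    # feed into the GNN forward pass
# Inspecting `adj` after training is what makes the learned structure interpretable.
```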
In this work, we propose CARLS, a novel framework for augmenting the capacity
of existing deep learning frameworks by enabling multiple components -- model
trainers, knowledge makers and knowledge banks -- to concertedly work together
in an asynchronous fashion across hardware platforms. The proposed CARLS is
particularly suitable for learning paradigms where model training benefits from
additional knowledge inferred or discovered during training, such as node
embeddings for graph neural networks or reliable pseudo labels from model
predictions. We also describe three learning paradigms -- semi-supervised
learning, curriculum learning and multimodal learning -- as examples that can
be scaled up efficiently by CARLS. One version of CARLS has been open-sourced
and is available for download at:
https://github.com/tensorflow/neural-structured-learning/tree/master/research/carls | [
"cs.LG"
] |
The problem of segmenting a given image into coherent regions is important in
Computer Vision, and many industrial applications require segmenting a known
object into its components. Examples include identifying individual parts of a
component for process control work in a manufacturing plant and identifying
parts of a car from a photo for automatic damage detection. Unfortunately, most
of an object's parts of interest in such applications share the same pixel
characteristics, having similar colour and texture. This makes segmenting the
object into its components a non-trivial task for conventional image
segmentation algorithms. In this paper, we propose a "Model Assisted
Segmentation" method to tackle this problem. A 3D model of the object is
registered over the given image by optimising a novel gradient-based loss
function. This registration obtains the full 3D pose from an image of the
object. The image can have an arbitrary view of the object and is not limited
to a particular set of views. The segmentation is subsequently performed using
a level-set based method, using the projected contours of the registered 3D
model as initialisation curves. The method is fully automatic and requires no
user interaction. Also, the system does not require any prior training. We
present our results on photographs of a real car. | [
"cs.CV"
] |
Decision forests, including Random Forests and Gradient Boosting Trees, have
recently demonstrated state-of-the-art performance in a variety of machine
learning settings. Decision forests are typically ensembles of axis-aligned
decision trees; that is, trees that split only along feature dimensions. In
contrast, many recent extensions to decision forests are based on axis-oblique
splits. Unfortunately, these extensions forfeit one or more of the favorable
properties of decision forests based on axis-aligned splits, such as robustness
to many noise dimensions, interpretability, or computational efficiency. We
introduce yet another decision forest, called "Sparse Projection Oblique
Randomer Forests" (SPORF). SPORF uses very sparse random projections, i.e.,
linear combinations of a small subset of features. SPORF significantly improves
accuracy over existing state-of-the-art algorithms on a standard benchmark
suite for classification with >100 problems of varying dimension, sample size,
and number of classes. To illustrate how SPORF addresses the limitations of
both axis-aligned and existing oblique decision forest methods, we conduct
extensive simulated experiments. SPORF typically yields improved performance
over existing decision forests while maintaining computational efficiency,
scalability, and interpretability. SPORF can easily be incorporated
into other ensemble methods such as boosting to obtain potentially similar
gains. | [
"stat.ML",
"cs.LG",
"68T10",
"I.5.2"
] |
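The core primitive is easy to sketch: each candidate split direction is a signed linear combination of a small random subset of features. A minimal illustration (parameter names are mine, not the SPORF package's API):

```python
import numpy as np

def sparse_random_projections(n_features, n_proj, density=0.05, rng=None):
    """Very sparse random projection matrix: each column mixes a small
    random subset of features with random +/-1 weights."""
    rng = np.random.default_rng(rng)
    A = np.zeros((n_features, n_proj))
    for j in range(n_proj):
        k = max(1, rng.binomial(n_features, density))       # subset size
        idx = rng.choice(n_features, size=k, replace=False)
        A[idx, j] = rng.choice([-1.0, 1.0], size=k)         # random signs
    return A

X = np.random.randn(200, 30)
candidates = X @ sparse_random_projections(30, n_proj=10)  # oblique split scores
```

Axis-aligned trees are the special case where each column of A has exactly one nonzero entry, which is why sparsity preserves much of their efficiency and interpretability.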
Current state-of-the-art visual recognition systems usually rely on the
following pipeline: (a) pretraining a neural network on a large-scale dataset
(e.g., ImageNet) and (b) finetuning the network weights on a smaller,
task-specific dataset. Such a pipeline assumes that weight adaptation alone
can transfer the network's capability from one domain to another, resting on
the strong assumption that a fixed architecture is appropriate for all
domains. However, each domain with a distinct recognition target may need
different levels/paths of feature hierarchy, where some neurons may become
redundant, and some others are re-activated to form new network structures. In
this work, we show that dynamically adapting the network architecture to each
domain task, together with weight finetuning, improves both efficiency and
effectiveness compared to the existing image recognition pipeline, which tunes
only the weights regardless of the architecture. Our method can be easily
generalized to an unsupervised paradigm by replacing supernet training with
self-supervised learning in the source domain tasks and performing linear
evaluation in the downstream tasks. This further improves the search efficiency
of our method. Moreover, we also provide principled and empirical analysis to
explain why our approach works by investigating the ineffectiveness of existing
neural architecture search. We find that preserving the joint distribution of
the network architecture and weights is of importance. This analysis not only
benefits image recognition but also provides insights for crafting neural
networks. Experiments on five representative image recognition tasks (person
re-identification, age estimation, gender recognition, image classification,
and unsupervised domain adaptation) demonstrate the effectiveness of our
method. | [
"cs.CV"
] |
The increasing number of regulations and expectations placed on predictive
machine learning models, such as the so-called right to explanation, has led
to a large number of methods promising greater interpretability. High demand
has led to the
widespread adoption of XAI techniques like Shapley values, Partial Dependence
profiles or permutational variable importance. However, we still do not know
enough about their properties and how they manifest in the context in which
explanations are created by analysts, reviewed by auditors, and interpreted by
various stakeholders. This paper highlights a blind spot which, although
critical, is often overlooked when monitoring and auditing machine learning
models: the effect of the reference data on the explanation calculation. We
discuss that many model explanations depend directly or indirectly on the
choice of the reference data distribution. We showcase examples where small
changes in the distribution lead to drastic changes in the explanations, such
as a reversal of a trend or, alarmingly, of a conclusion. Consequently, we postulate
that obtaining robust and useful explanations always requires supporting them
with a broader context. | [
"cs.LG"
] |
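A tiny worked example of the reference-data effect (hypothetical numbers): for a linear model, the Shapley value of feature i is w_i (x_i - E[x_i]) with the expectation taken over the reference distribution, so shifting the reference data directly changes the attributions.

```python
import numpy as np

w = np.array([2.0, -1.0])                    # linear model f(x) = w @ x
x = np.array([1.0, 1.0])                     # instance being explained
ref_a = np.array([[0.0, 0.0], [0.2, 0.1]])   # one choice of reference data
ref_b = np.array([[2.0, 3.0], [1.8, 2.9]])   # a shifted reference set

for ref in (ref_a, ref_b):
    phi = w * (x - ref.mean(axis=0))         # exact Shapley values (linear case)
    print(phi)   # the sign of the attributions flips between the two references
```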
Reinforcement learning algorithms are typically geared towards optimizing the
expected return of an agent. However, in many practical applications, low
variance in the return is desired to ensure the reliability of an algorithm. In
this paper, we propose on-policy and off-policy actor-critic algorithms that
optimize a performance criterion involving both mean and variance in the
return. Previous work uses the second moment of the return to estimate the
variance indirectly. Instead, we use a much simpler, recently proposed direct variance
estimator which updates the estimates incrementally using temporal difference
methods. Using the variance-penalized criterion, we guarantee the convergence
of our algorithm to locally optimal policies for finite state-action Markov
decision processes. We demonstrate the utility of our algorithm in tabular and
continuous MuJoCo domains. Our approach not only performs on par with
actor-critic and prior variance-penalization baselines in terms of expected
return, but also generates trajectories which have lower variance in the
return. | [
"cs.LG",
"cs.AI"
] |
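A minimal tabular sketch of the direct variance estimator mentioned above: the squared TD error acts as a meta-reward whose value function, discounted by gamma squared, tracks the variance of the return (simplified and illustrative).

```python
def td_step(V, VarV, s, r, s_next, alpha=0.1, gamma=0.99):
    """One incremental update of the value and return-variance estimates."""
    delta = r + gamma * V[s_next] - V[s]       # ordinary TD error
    V[s] += alpha * delta
    # Direct variance estimate: delta^2 is the meta-reward, discount is gamma^2.
    var_delta = delta ** 2 + gamma ** 2 * VarV[s_next] - VarV[s]
    VarV[s] += alpha * var_delta
    return V, VarV
```

The variance-penalized criterion then optimizes the mean return minus a multiple of `VarV` at the start state.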
Graph Neural Networks (GNNs) have recently received significant research
attention due to their superior performance on a variety of graph-related
learning tasks. Most of the current works focus on either static or dynamic
graph settings, addressing a single particular task, e.g., node/graph
classification or link prediction. In this work, we investigate the question:
can GNNs continually learn a sequence of tasks? Towards that end, we
explore the Continual Graph Learning (CGL) paradigm and present the Experience
Replay based framework ER-GNN for CGL to alleviate the catastrophic
forgetting problem in existing GNNs. ER-GNN stores knowledge from previous
tasks as experiences and replays them when learning new tasks. We propose
three strategies for selecting experience nodes: mean of feature, coverage
maximization, and influence maximization. Extensive experiments on three
benchmark datasets demonstrate the effectiveness of our ER-GNN and shed light
on incremental learning over graph (non-Euclidean) structures. | [
"cs.LG",
"stat.ML"
] |
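The mean-of-feature strategy is simple to sketch: keep the nodes whose features lie closest to their class centroid as the replay buffer (illustrative helper, not the authors' code):

```python
import numpy as np

def mean_of_feature_selection(features, labels, per_class=5):
    buffer = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)          # class prototype
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        buffer.extend(idx[np.argsort(dists)[:per_class]].tolist())
    return buffer  # node ids replayed alongside each new task's training data
```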
Existing attention mechanisms for Visual Question Answering (VQA) attend to
either local image-grid or object-level features. Motivated by the
observation that questions can relate to both object instances and their parts,
we propose a novel attention mechanism that jointly considers reciprocal
relationships between the two levels of visual details. The bottom-up attention
thus generated is further coalesced with the top-down information to only focus
on the scene elements that are most relevant to a given question. Our design
hierarchically fuses multi-modal information, i.e., language, object-level
and grid-level features, through an efficient tensor decomposition scheme. The
proposed model improves the state-of-the-art single model performances from
67.9% to 68.2% on VQAv1 and from 65.7% to 67.4% on VQAv2, demonstrating a
significant boost. | [
"cs.CV",
"cs.AI",
"cs.CL"
] |
We introduce SharpNet, a method that predicts an accurate depth map for an
input color image, with particular attention to the reconstruction of
occluding contours: Occluding contours are an important cue for object
recognition, and for realistic integration of virtual objects in Augmented
Reality, but they are also notoriously difficult to reconstruct accurately. For
example, they are a challenge for stereo-based reconstruction methods, as
points around an occluding contour are visible in only one image. Inspired by
recent methods that introduce normal estimation to improve depth prediction, we
introduce a novel term that jointly constrains the depth and occluding-contour
predictions. Since ground truth depth is difficult to obtain with pixel-perfect
accuracy along occluding contours, we use synthetic images for training,
followed by fine-tuning on real data. We demonstrate our approach on the
challenging NYUv2-Depth dataset, and show that our method outperforms the
state-of-the-art along occluding contours, while performing on par with the
best recent methods for the rest of the images. Its accuracy along the
occluding contours is actually better than the 'ground truth' acquired by a
depth camera based on structured light. We show this by introducing a new
benchmark based on NYUv2-Depth for evaluating occluding contours in monocular
reconstruction, which is our second contribution. | [
"cs.CV"
] |
Density estimation, compression and data generation are crucial tasks in
artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single
framework to achieve these goals. Here, we present a novel class of generative
models, called self-supervised Variational Auto-Encoder (selfVAE), that
utilizes deterministic and discrete variational posteriors. This class of
models allows us to perform both conditional and unconditional sampling while
simplifying the objective function. First, we use a single self-supervised
transformation as a latent variable, where a transformation is either
downscaling or edge detection. Next, we consider a hierarchical architecture,
i.e., multiple transformations, and we show its benefits compared to the VAE.
The flexibility of selfVAE in data reconstruction finds a particularly
interesting use case in data compression tasks, where we can trade-off memory
for better data quality, and vice versa. We present the performance of our
approach on three benchmark image datasets (CIFAR-10, Imagenette64, and
CelebA). | [
"stat.ML",
"cs.LG"
] |
Wide area network (WAN) infrastructures, particularly science and
research WANs, are the backbone for moving large volumes of scientific data
between experimental facilities and data centers. With demands growing at
exponential rates, these networks are struggling to cope with large data
volumes, real-time responses, and overall network performance. Network
operators are increasingly looking for innovative ways to manage the limited
underlying network resources. Forecasting network traffic is a critical
capability for proactive resource management, congestion mitigation, and
dedicated transfer provisioning. To this end, we propose a nonautoregressive
graph-based neural network for multistep network traffic forecasting.
Specifically, we develop a dynamic variant of diffusion convolutional recurrent
neural networks to forecast traffic in research WANs. We evaluate the efficacy
of our approach on real traffic from ESnet, the U.S. Department of Energy's
dedicated science network. Our results show that compared to classical
forecasting methods, our approach explicitly learns the dynamic nature of
spatiotemporal traffic patterns, showing significant improvements in
forecasting accuracy. Our technique surpasses existing statistical and deep
learning approaches, achieving approximately 20% mean absolute percentage
error for forecasts multiple hours ahead despite dynamic network traffic
conditions. | [
"cs.LG",
"cs.NI",
"eess.SP",
"stat.ML"
] |
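For reference, a single diffusion-convolution step of the kind DCRNN-style models build on can be sketched as follows (simplified, single block; coefficient shapes are assumptions):

```python
import numpy as np

def diffusion_conv(X, A, theta_fwd, theta_bwd):
    """X: (N, F) node signals; A: (N, N) weighted adjacency with nonzero
    row sums; theta_*: (K+1,) filter coefficients for each diffusion order."""
    P_fwd = A / A.sum(axis=1, keepdims=True)      # forward random-walk matrix
    P_bwd = A.T / A.T.sum(axis=1, keepdims=True)  # backward random-walk matrix
    out = np.zeros_like(X)
    Zf, Zb = X.copy(), X.copy()
    for k in range(len(theta_fwd)):
        out += theta_fwd[k] * Zf + theta_bwd[k] * Zb
        Zf, Zb = P_fwd @ Zf, P_bwd @ Zb           # next diffusion order
    return out
```

The dynamic variant referenced above additionally lets the adjacency evolve over time rather than staying fixed.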
Person detection and pose estimation are key requirements for developing
intelligent context-aware assistance systems. To foster the development of
human pose estimation methods and their applications in the Operating Room
(OR), we release the Multi-View Operating Room (MVOR) dataset, the first public
dataset recorded during real clinical interventions. It consists of 732
synchronized multi-view frames recorded by three RGB-D cameras in a hybrid OR.
It also includes the visual challenges present in such environments, such as
occlusions and clutter. We provide camera calibration parameters, color and
depth frames, human bounding boxes, and 2D/3D pose annotations. In this paper,
we present the dataset, its annotations, as well as baseline results from
several recent person detection and 2D/3D pose estimation methods. Since we
need to blur some parts of the images to hide identity and nudity in the
released dataset, we also present a comparative study of how the baselines have
been impacted by the blurring. Results show a large margin for improvement and
suggest that the MVOR dataset can be useful to compare the performance of the
different methods. | [
"cs.CV"
] |
Developing successful sign language recognition, generation, and translation
systems requires expertise in a wide range of fields, including computer
vision, computer graphics, natural language processing, human-computer
interaction, linguistics, and Deaf culture. Despite the need for deep
interdisciplinary knowledge, existing research occurs in separate disciplinary
silos, and tackles separate portions of the sign language processing pipeline.
This leads to three key questions: 1) What does an interdisciplinary view of
the current landscape reveal? 2) What are the biggest challenges facing the
field? and 3) What are the calls to action for people working in the field? To
help answer these questions, we brought together a diverse group of experts for
a two-day workshop. This paper presents the results of that interdisciplinary
workshop, providing key background that is often overlooked by computer
scientists, a review of the state-of-the-art, a set of pressing challenges, and
a call to action for the research community. | [
"cs.CV",
"cs.CL",
"cs.CY",
"cs.GR",
"cs.HC"
] |
In vision-based reinforcement learning (RL) tasks, it is common to pair the
agent with an auxiliary task with a surrogate self-supervised loss so as to
obtain more semantic representations and improve sample efficiency. However,
abundant information in self-supervised auxiliary tasks is disregarded
because the representation learning part and the decision-making part are
separated. To fully utilize information in the auxiliary task, we present a
simple yet effective idea: employ the self-supervised loss as an intrinsic
reward, called Intrinsically Motivated Self-Supervised learning in
Reinforcement learning (IM-SSR). We formally show that the self-supervised
loss can be decomposed into exploration of novel states and robustness
improvement from nuisance elimination. IM-SSR can be effortlessly plugged
into any reinforcement learning algorithm with self-supervised auxiliary
objectives at nearly no additional cost.
Combined with IM-SSR, the underlying algorithms achieve notable improvements
in both sample efficiency and generalization in various
vision-based robotics tasks from the DeepMind Control Suite, especially when
the reward signal is sparse. | [
"cs.LG",
"cs.AI"
] |
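The core idea reduces to a one-line reward shaping rule, sketched here with an assumed callable `ssl_loss` (e.g. a contrastive or reconstruction loss):

```python
def intrinsic_reward_step(env_reward, obs, ssl_loss, beta=0.1):
    bonus = float(ssl_loss(obs))       # high auxiliary loss = novel or hard state
    return env_reward + beta * bonus   # shaped reward consumed by the RL update
```

Because the auxiliary loss is already computed for representation learning, the bonus adds essentially no extra cost.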
Deep convolutional networks have achieved the state-of-the-art for semantic
image segmentation tasks. However, training these networks requires access to
densely labeled images, which are known to be very expensive to obtain. On the
other hand, the web provides an almost unlimited source of images annotated at
the image level. How can one utilize this much larger weakly annotated set for
tasks that require dense labeling? Prior work often relied on localization
cues, such as saliency maps, objectness priors, bounding boxes etc., to address
this challenging problem. In this paper, we propose a model that generates
auxiliary labels for each image, while simultaneously forcing the output of the
CNN to satisfy the mean-field constraints imposed by a conditional random
field. We show that one can enforce the CRF constraints by forcing the
distribution at each pixel to be close to the distribution of its neighbors.
This is in stark contrast with methods that compute a recursive expansion of
the mean-field distribution using a recurrent architecture and train the
resultant distribution. Instead, the proposed model adds an extra loss term to
the output of the CNN, and hence, is faster than recursive implementations. We
achieve the state-of-the-art for weakly supervised semantic image segmentation
on the VOC 2012 dataset, assuming no manually labeled pixel-level information
is available. Furthermore, incorporating the conditional random field into
the CNN incurs little extra time during training. | [
"cs.CV"
] |
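A minimal sketch of a neighbor-consistency loss in this spirit: push each pixel's label distribution toward the average of its 4-neighborhood. This omits the pairwise appearance kernels of a full CRF and is an illustration, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(logits):
    p = F.softmax(logits, dim=1)                          # (B, C, H, W)
    kern = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]]) / 4.0
    kern = kern.view(1, 1, 3, 3).repeat(p.size(1), 1, 1, 1)
    neigh = F.conv2d(F.pad(p, (1, 1, 1, 1), mode='replicate'),
                     kern, groups=p.size(1))              # neighborhood average
    # KL(p || neighborhood average), added as an extra loss on the CNN output
    return F.kl_div(neigh.clamp_min(1e-8).log(), p, reduction='batchmean')
```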
We propose a scalable approach to learn video-based question answering (QA):
answer a "free-form natural language question" about a video content. Our
approach automatically harvests a large number of videos and descriptions
freely available online. Then, a large number of candidate QA pairs are
automatically generated from descriptions rather than manually annotated. Next,
we use these candidate QA pairs to train a number of video-based QA methods
extended from MN (Sukhbaatar et al. 2015), VQA (Antol et al. 2015), SA (Yao et
al. 2015), and SS (Venugopalan et al. 2015). In order to handle imperfect
candidate QA pairs, we propose a self-paced learning procedure to iteratively
identify them and mitigate their effects in training. Finally, we evaluate
performance on manually generated video-based QA pairs. The results show that
our self-paced learning procedure is effective, and the extended SS model
outperforms various baselines. | [
"cs.CV",
"cs.AI",
"cs.MM"
] |
Recent research on reinforcement learning (RL) has suggested that trained
agents are vulnerable to maliciously crafted adversarial samples. In this work,
we show how such samples can be generalised from White-box and Grey-box attacks
to a strong Black-box case, where the attacker has no knowledge of the agents,
their training parameters, or their training methods. We use
sequence-to-sequence models to predict a single action or a sequence of future
actions that a trained agent will make. First, we show our approximation model,
based on time-series information from the agent, consistently predicts RL
agents' future actions with high accuracy in a Black-box setup on a wide range
of games and RL algorithms. Second, we find that although adversarial samples
are transferable from the target model to our RL agents, they often outperform
random Gaussian noise only marginally. This highlights a serious methodological
deficiency in previous work on such agents; random jamming should have been
taken as the baseline for evaluation. Third, we propose a novel use for
adversarial samples in Black-box attacks on RL agents: they can be used to
trigger a trained agent to misbehave after a specific time delay. This appears
to be a genuinely new type of attack. It potentially enables an attacker to use
devices controlled by RL agents as time bombs. | [
"cs.LG",
"cs.CR",
"cs.CV",
"stat.ML"
] |
Marginal Structural Models (MSMs) are the most popular models for causal
inference from time-series observational data. However, they have two main
drawbacks: (a) they do not capture subject heterogeneity, and (b) they only
consider fixed time intervals and do not scale gracefully with longer
intervals. In this work, we propose a new family of MSMs to address these two
concerns. We model the potential outcomes as a three-dimensional tensor of low
rank, where the three dimensions correspond to the agents, time periods and the
set of possible histories. Unlike the traditional MSM, we allow the dimensions
of the tensor to increase with the number of agents and time periods. We set up
a weighted tensor completion problem as our estimation procedure, and show that
the solution to this problem converges to the true model in an appropriate
sense. We then provide conditions under which the estimation problem can be
solved approximately and efficiently. Finally, we propose an algorithm based
on projected gradient descent, which is
easy to implement, and evaluate its performance on a simulated dataset. | [
"cs.LG",
"stat.ML"
] |
In this paper, we propose a simple yet effective method to endow deep 3D
models with rotation invariance by expressing the coordinates in an intrinsic
frame determined by the object shape itself. Key to our approach is to find
such an intrinsic frame which should be unique to the identical object shape
and consistent across different instances of the same category, e.g. the frame
axes of desks should all be roughly aligned with the edges. Interestingly,
principal component analysis provides exactly such an effective way to define
the frame, i.e., setting the principal components as the frame axes. As the
principal components have direction ambiguity caused by the sign-ambiguity of
eigenvector computation, there exist several intrinsic frames for each object.
In order to achieve absolute rotation invariance for a deep model, we adopt the
coordinates expressed in all intrinsic frames as inputs to obtain multiple
output features, which will be further aggregated as a final feature via a
self-attention module. Our method is theoretically rotation-invariant and can
be flexibly embedded into the current network architectures. Comprehensive
experiments demonstrate that our approach achieves near state-of-the-art
performance on the rotation-augmented ModelNet40 classification dataset and
outperforms other models on the SHREC'17 perturbed retrieval task. | [
"cs.CV"
] |
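A compact sketch of the PCA-based intrinsic frames (illustrative; the aggregation network is omitted): compute the principal axes, enumerate the sign-resolved variants, and express the points in each frame.

```python
import numpy as np
from itertools import product

def intrinsic_frames(points):
    """Return the point cloud expressed in every right-handed PCA frame
    arising from the sign ambiguity of the eigenvectors."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows = principal axes
    frames = []
    for signs in product((1.0, -1.0), repeat=3):
        axes = vt * np.array(signs)[:, None]
        if np.linalg.det(axes) > 0:            # keep right-handed frames only
            frames.append(centered @ axes.T)   # coordinates in this frame
    return frames

views = intrinsic_frames(np.random.randn(1024, 3))  # each view feeds the network
```

Since any rotation of the input leaves the set of frame-relative coordinates unchanged, aggregating the per-frame features (e.g. with self-attention, as above) is rotation-invariant by construction.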
Graph structured data provide two-fold information: graph structures and node
attributes. Numerous graph-based algorithms rely on both sources of
information to succeed in supervised tasks such as node classification and
link prediction.
However, node attributes could be missing or incomplete, which significantly
deteriorates the performance. The task of node attribute generation aims to
generate attributes for those nodes whose attributes are completely unobserved.
This task benefits many real-world problems like profiling, node classification
and graph data augmentation. To tackle this task, we propose a deep
adversarial learning based method to generate node attributes, called the
node attribute neural generator (NANG). NANG learns a unifying latent
representation which is shared
by both node attributes and graph structures and can be translated to different
modalities. We thus use this latent representation as a bridge to convert
information from one modality to another. We further introduce practical
applications to quantify the performance of node attribute generation.
Extensive experiments are conducted on four real-world datasets and the
empirical results show that node attributes generated by the proposed method
are of high quality and beneficial to other applications. The datasets and
codes are available online. | [
"stat.ML",
"cs.LG"
] |
Reinforcement learning (RL) makes it possible to solve complex tasks, such as
Go, often with stronger performance than humans. However, the learned behaviors are
usually fixed to specific tasks and unable to adapt to different contexts. Here
we consider the case of adapting RL agents to different time restrictions, such
as finishing a task with a given time limit that might change from one task
execution to the next. We define such problems as Time Adaptive Markov Decision
Processes and introduce two model-free, value-based algorithms: the Independent
Gamma-Ensemble and the n-Step Ensemble. In contrast to classical approaches,
they allow a zero-shot adaptation between different time restrictions. The
proposed approaches represent general mechanisms for handling time-adaptive
tasks, making them compatible with many existing RL methods, algorithms, and
scenarios. | [
"cs.LG",
"cs.AI",
"stat.ML",
"I.2.6"
] |
Strong regulations in the financial industry mean that any decisions based on
machine learning need to be explained. This precludes the use of powerful
supervised techniques such as neural networks. In this study we propose a new
unsupervised and semi-supervised technique known as the topological
hierarchical decomposition (THD). This process breaks a dataset down into
ever smaller groups, where each group is associated with a simplicial complex
that approximates the underlying topology of the dataset. We apply THD to the FICO
machine learning challenge dataset, which consists of anonymized home equity
loan applications, using the MAPPER algorithm to build simplicial complexes. We
identify different groups of individuals unable to pay back loans, and
illustrate how the distribution of feature values in a simplicial complex can
be used to explain the decision to grant or deny a loan by extracting
illustrative explanations from two THDs on the dataset. | [
"cs.LG",
"stat.ML"
] |
Most deep neural network (DNN) based ultrasound (US) medical image analysis
models use pretrained backbones (e.g., ImageNet) for better model
generalization. However, the domain gap between natural and medical images
causes an inevitable performance bottleneck. To alleviate this problem, an US
dataset named US-4 is constructed for direct pretraining on the same domain. It
contains over 23,000 images from four US video sub-datasets. To learn robust
features from US-4, we propose an US semi-supervised contrastive learning
method, named USCL, for pretraining. In order to avoid high similarities
between negative pairs as well as mine abundant visual features from limited US
videos, USCL adopts a sample pair generation method to enrich the feature
involved in a single step of contrastive optimization. Extensive experiments on
several downstream tasks show the superiority of USCL pretraining against
ImageNet pretraining and other state-of-the-art (SOTA) pretraining approaches.
In particular, the USCL-pretrained backbone achieves a fine-tuning accuracy of
over 94% on the POCUS dataset, roughly 10 points higher than the 84% of the
ImageNet-pretrained model. The source code of this work is available at
https://github.com/983632847/USCL. | [
"cs.CV",
"cs.AI"
] |
Self-supervised representation learning has achieved remarkable success in
recent years. By obviating the need for supervised labels, such approaches are
able to utilize the numerous unlabeled images that exist on the Internet and in
photographic datasets. Yet to build truly intelligent agents, we must construct
representation learning algorithms that can learn not only from datasets but
also learn from environments. An agent in a natural environment will not
typically be fed curated data. Instead, it must explore its environment to
acquire the data it will learn from. We propose a framework, curious
representation learning (CRL), which jointly learns a reinforcement learning
policy and a visual representation model. The policy is trained to maximize the
error of the representation learner, and in doing so is incentivized to explore
its environment. At the same time, the learned representation becomes stronger
and stronger as the policy feeds it ever harder data to learn from. Our learned
representations enable promising transfer to downstream navigation tasks,
performing better than or comparably to ImageNet pretraining without using any
supervision at all. In addition, despite being trained in simulation, our
learned representations can obtain interpretable results on real images. Code
is available at https://yilundu.github.io/crl/. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.RO"
] |
Matching people across multiple camera views, known as person
re-identification, is a challenging problem due to the change in visual
appearance caused by varying lighting conditions. The perceived color of the
subject changes with the illumination. Previous works
use color as it is or address these challenges by designing color spaces
focusing on a specific cue. In this paper, we propose a data-driven approach
for learning color patterns from pixels sampled from images across two camera
views. The intuition behind this work is that, even though pixel values of
the same color differ across views, they should be encoded with the same
values. We model color feature generation as a learning problem by jointly
learning a linear transformation and a dictionary to encode pixel values. We
also analyze different photometric invariant color spaces. Using color as the
only cue, we compare our approach with all the photometric invariant color
spaces and show superior performance over all of them. Combining with other
learned low-level and high-level features, we obtain promising results on the
VIPeR, Person Re-ID 2011, and CAVIAR4REID datasets. | [
"cs.CV"
] |
The accuracy of deep convolutional neural networks (CNNs) generally improves
when fueled with high-resolution images. However, this often comes at a high
computational cost and high memory footprint. Inspired by the fact that not all
regions in an image are task-relevant, we propose a novel framework that
performs efficient image classification by processing a sequence of relatively
small inputs, which are strategically selected from the original image with
reinforcement learning. Such a dynamic decision process naturally facilitates
adaptive inference at test time, i.e., it can be terminated once the model is
sufficiently confident about its prediction and thus avoids further redundant
computation. Notably, our framework is general and flexible as it is compatible
with most of the state-of-the-art light-weighted CNNs (such as MobileNets,
EfficientNets and RegNets), which can be conveniently deployed as the backbone
feature extractor. Experiments on ImageNet show that our method consistently
improves the computational efficiency of a wide variety of deep models. For
example, it further reduces the average latency of the highly efficient
MobileNet-V3 on an iPhone XS Max by 20% without sacrificing accuracy. Code and
pre-trained models are available at
https://github.com/blackfeather-wang/GFNet-Pytorch. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Processing input signals with arbitrary structure, e.g., superpixels and
point clouds, remains a major challenge in computer vision.
Linear diffusion, an effective model for image processing, has been recently
integrated with deep learning algorithms. In this paper, we propose to learn
pairwise relations among data points in a global fashion to improve semantic
segmentation with arbitrarily-structured data, through spatial generalized
propagation networks (SGPN). The network propagates information on a group of
graphs, which represent the arbitrarily-structured data, through a learned,
linear diffusion process. The module can be flexibly embedded into and
jointly trained with many types of networks, e.g., CNNs. We experiment with semantic
segmentation networks, where we use our propagation module to jointly train on
different data -- images, superpixels and point clouds. We show that SGPN
consistently improves the performance of both pixel and point cloud
segmentation, compared to networks that do not contain this module. Our method
suggests an effective way to model the global pairwise relations for
arbitrarily-structured data. | [
"cs.CV"
] |
We take a Bayesian perspective to illustrate a connection between training
speed and the marginal likelihood in linear models. This provides two major
insights: first, that a measure of a model's training speed can be used to
estimate its marginal likelihood. Second, that this measure, under certain
conditions, predicts the relative weighting of models in linear model
combinations trained to minimize a regression loss. We verify our results in
model selection tasks for linear models and for the infinite-width limit of
deep neural networks. We further provide encouraging empirical evidence that
the intuition developed in these settings also holds for deep neural networks
trained with stochastic gradient descent. Our results suggest a promising new
direction towards explaining why neural networks trained with stochastic
gradient descent are biased towards functions that generalize well. | [
"cs.LG"
] |
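The connection rests on the chain-rule decomposition of the marginal likelihood (a standard identity; the paper's estimator is more involved):

$$\log p(\mathcal{D}) \;=\; \sum_{i=1}^{n} \log p(d_i \mid d_1, \dots, d_{i-1}).$$

A model that "trains fast", assigning high predictive probability to each new data point given those seen so far, accumulates a large sum, so the running total of per-sample predictive log-losses during training serves as an estimate of the log marginal likelihood.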
Gesture recognition has attracted considerable attention owing to its great
potential in applications. Although great progress has been made recently in
multi-modal learning, existing methods still lack effective integration to
fully explore synergies among spatio-temporal modalities for gesture
recognition. The problem is partly due to the fact
that the existing manually designed network architectures have low efficiency
in the joint learning of multi-modalities. In this paper, we propose the first
neural architecture search (NAS)-based method for RGB-D gesture recognition.
The proposed method includes two key components: 1) enhanced temporal
representation via the proposed 3D Central Difference Convolution (3D-CDC)
family, which is able to capture rich temporal context via aggregating temporal
difference information; and 2) optimized backbones for multi-sampling-rate
branches and lateral connections among varied modalities. The resultant
multi-modal multi-rate network provides a new perspective to understand the
relationship between RGB and depth modalities and their temporal dynamics.
Comprehensive experiments are performed on three benchmark datasets (IsoGD,
NvGesture, and EgoGesture), demonstrating the state-of-the-art performance in
both single- and multi-modality settings. The code is available at
https://github.com/ZitongYu/3DCDC-NAS | [
"cs.CV"
] |
We report a method to convert discrete representations of molecules to and
from a multidimensional continuous representation. This model allows us to
generate new molecules for efficient exploration and optimization through
open-ended spaces of chemical compounds. A deep neural network was trained on
hundreds of thousands of existing chemical structures to construct three
coupled functions: an encoder, a decoder and a predictor. The encoder converts
the discrete representation of a molecule into a real-valued continuous vector,
and the decoder converts these continuous vectors back to discrete molecular
representations. The predictor estimates chemical properties from the latent
continuous vector representation of the molecule. Continuous representations
allow us to automatically generate novel chemical structures by performing
simple operations in the latent space, such as decoding random vectors,
perturbing known chemical structures, or interpolating between molecules.
Continuous representations also allow the use of powerful gradient-based
optimization to efficiently guide the search for optimized functional
compounds. We demonstrate our method in the domain of drug-like molecules and
also in the set of molecules with fewer than nine heavy atoms. | [
"cs.LG",
"physics.chem-ph"
] |
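A minimal sketch of the gradient-based optimization step enabled by the continuous representation, assuming pretrained `encoder`, `decoder`, and differentiable `predictor` modules (names are placeholders, not the authors' released code):

```python
import torch

def optimize_in_latent_space(encoder, decoder, predictor, mol, steps=100, lr=0.05):
    z = encoder(mol).detach().requires_grad_(True)  # start from a known molecule
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = -predictor(z).sum()   # ascend the predicted-property surface
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z)                # decode the optimized latent vector
```

Decoding random vectors, perturbing an encoded molecule, or interpolating between two latent codes are the other operations the continuous space makes trivial.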
The integration of deep learning into reinforcement learning (RL) has enabled
RL to perform efficiently in high-dimensional environments. Deep RL methods
have been applied to solve many complex real-world problems in recent years.
However, development of a deep RL-based system is challenging because of
various issues such as the selection of a suitable deep RL algorithm, its
network configuration, training time, training methods, and so on. This paper
proposes a comprehensive software framework that not only plays a vital role in
designing a connect-the-dots deep RL architecture but also provides a guideline
to develop a realistic RL application in a short time span. We have designed
and developed a deep RL-based software framework that strictly ensures
flexibility, robustness, and scalability. By inheriting the proposed
architecture, software managers can foresee any challenges when designing a
deep RL-based system. As a result, they can expedite the design process and
actively control every stage of software development, which is especially
critical in agile development environments. To enforce generalization, the
proposed architecture does not depend on a specific RL algorithm, a network
configuration, the number of agents, or the type of agents. Using our
framework, software developers can develop and integrate new RL algorithms or
new types of agents, and can flexibly change network configuration or the
number of agents. | [
"cs.LG",
"cs.AI",
"cs.GT",
"cs.RO"
] |
We present a novel method for local image feature matching. Instead of
performing image feature detection, description, and matching sequentially, we
propose to first establish pixel-wise dense matches at a coarse level and later
refine the good matches at a fine level. In contrast to dense methods that use
a cost volume to search correspondences, we use self and cross attention layers
in Transformer to obtain feature descriptors that are conditioned on both
images. The global receptive field provided by Transformer enables our method
to produce dense matches in low-texture areas, where feature detectors usually
struggle to produce repeatable interest points. Experiments on indoor and
outdoor datasets show that our method, LoFTR, outperforms state-of-the-art
methods by a large margin. LoFTR also ranks first on two public benchmarks of visual
localization among the published methods. | [
"cs.CV",
"cs.RO"
] |
The heavy traffic congestion problem has always been a concern for modern
cities. To alleviate traffic congestion, researchers have in recent years
used reinforcement learning (RL) to develop better traffic signal control
(TSC) algorithms.
However, most RL models are trained and tested in the same traffic flow
environment, which results in a serious overfitting problem. Since the traffic
flow environment in the real world keeps varying, these models can hardly be
applied due to the lack of generalization ability. Besides, the limited number
of accessible traffic flow data brings extra difficulty in testing the
generalization ability of the models. In this paper, we design a novel traffic
flow generator based on a Wasserstein generative adversarial network to
generate sufficiently diverse, high-quality traffic flows, and use them to build proper
training and testing environments. Then we propose a meta-RL TSC framework
GeneraLight to improve the generalization ability of TSC models. GeneraLight
boosts the generalization performance by combining the idea of flow clustering
and model-agnostic meta-learning. We conduct extensive experiments on multiple
real-world datasets to show the superior performance of GeneraLight on
generalizing to different traffic flows. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
We study model-based offline Reinforcement Learning with general function
approximation. We present an algorithm named Constrained Pessimistic Policy
Optimization (CPPO) which leverages a general function class and uses a
constraint to encode pessimism. Under the assumption that the ground truth
model belongs to our function class, CPPO can learn with offline data that
provides only partial coverage, i.e., it can learn a policy that competes
against any policy covered by the offline data, with polynomial sample complexity
with respect to the statistical complexity of the function class. We then
demonstrate that this algorithmic framework can be applied to many specialized
Markov Decision Processes where the additional structural assumptions can
further refine the concept of partial coverage. One notable example is low-rank
MDP with representation learning where the partial coverage is defined using
the concept of relative condition number measured by the underlying unknown
ground truth feature representation. Finally, we introduce and study the
Bayesian setting in offline RL. The key benefit of Bayesian offline RL is that
algorithmically, we do not need to explicitly construct pessimism or reward
penalty which could be hard beyond models with linear structures. We present a
posterior sampling-based incremental policy optimization algorithm (PS-PO)
which proceeds by iteratively sampling a model from the posterior distribution
and performing one-step incremental policy optimization inside the sampled
model. Theoretically, in expectation with respect to the prior distribution,
PS-PO can learn a near optimal policy under partial coverage with polynomial
sample complexity. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Sketches have been employed as an effective communicative tool to express the
abstract and intuitive meanings of objects. Recognizing free-hand sketch
drawings is extremely useful in many real-world applications. While
content-based sketch recognition has been studied for several decades, the
instance-level Sketch-Based Image Retrieval (SBIR) tasks have attracted
significant research attention recently. Existing datasets, such as
QMUL-Chair and QMUL-Shoe, focus on the retrieval of chairs and shoes.
However, there are several key limitations in previous instance-level SBIR
works. The state-of-the-art works have to heavily rely on the pre-training
process, quality of edge maps, multi-cropping testing strategy, and augmenting
sketch images. To efficiently solve the instance-level SBIR, we propose a new
Deep Triplet Classification Siamese Network (DeepTCNet) which employs
DenseNet-169 as the basic feature extractor and is optimized by the triplet
loss and classification loss. Critically, our proposed DeepTCNet overcomes
the limitations of previous works. Extensive experiments on five
benchmark sketch datasets validate the effectiveness of the proposed model.
Additionally, to study the tasks of sketch-based hairstyle retrieval, this
paper contributes a new instance-level photo-sketch dataset - Hairstyle
Photo-Sketch dataset, which is composed of 3600 sketches and photos, and 2400
sketch-photo pairs. | [
"cs.CV"
] |
In this paper, we investigate the use of generative adversarial networks in
the task of image generation according to subjective measures of semantic
attributes. Unlike the standard conditional GAN (CGAN), which generates
images from discrete categorical labels, our architecture handles both
continuous and discrete
scales. Given pairwise comparisons of images, our model, called RankCGAN,
performs two tasks: it learns to rank images using a subjective measure; and it
learns a generative model that can be controlled by that measure. RankCGAN
associates each subjective measure of interest to a distinct dimension of some
latent space. We perform experiments on UT-Zap50K, PubFig and OSR datasets and
demonstrate that the model is expressive and diverse enough to conduct
two-attribute exploration and image editing. | [
"cs.CV"
] |
We present methods for online linear optimization that take advantage of
benign (as opposed to worst-case) sequences. Specifically if the sequence
encountered by the learner is described well by a known "predictable process",
the algorithms presented enjoy tighter bounds as compared to the typical worst
case bounds. Additionally, the methods achieve the usual worst-case regret
bounds if the sequence is not benign. Our approach can be seen as a way of
adding prior knowledge about the sequence within the paradigm of online
learning. The setting is shown to encompass partial and side information.
Variance and path-length bounds can be seen as particular examples of online
learning with simple predictable sequences.
We further extend our methods and results to include competing with a set of
possible predictable processes (models), that is "learning" the predictable
process itself concurrently with using it to obtain better regret guarantees.
We show that such model selection is possible under various assumptions on the
available feedback. Our results suggest a promising direction of further
research with potential applications to stock market and time series
prediction. | [
"stat.ML",
"cs.LG"
] |
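One standard instantiation of this idea is an optimistic update that plays against the sum of past losses plus a prediction M_t of the next one; sketched schematically below (notation mine, stated from the general optimistic-learning template rather than quoted from the paper):

$$f_{t} = \arg\min_{f \in \mathcal{F}} \; \eta \Big\langle f,\; M_t + \sum_{s<t} x_s \Big\rangle + \mathcal{R}(f), \qquad \mathrm{Regret}_T \lesssim \sqrt{\sum_{t=1}^{T} \| x_t - M_t \|_*^2}.$$

When the predictable process tracks the true sequence (x_t close to M_t), the bound collapses well below the worst case; when it fails, the usual square-root-in-T guarantee is recovered.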
Understanding driving situations regardless of the conditions of the traffic
scene is a cornerstone on the path towards autonomous vehicles; however,
although common sensor setups already include complementary devices such as
LiDAR or radar, most of the research on perception systems has traditionally
focused on computer vision. We present a LiDAR-based 3D object detection
pipeline entailing three stages. First, laser information is projected into a
novel cell encoding for bird's-eye-view projection. Next, both the object's
location on the plane and its heading are estimated through a convolutional
neural network originally designed for image processing. Finally, 3D oriented
detections are computed in a post-processing phase. Experiments on KITTI
dataset show that the proposed framework achieves state-of-the-art results
among comparable methods. Further tests with different LiDAR sensors in real
scenarios assess the multi-device capabilities of the approach. | [
"cs.CV"
] |
Arguably one of the top success stories of deep learning is transfer
learning. The finding that pre-training a network on a rich source set (e.g.,
ImageNet) can help boost performance once fine-tuned on a usually much smaller
target set, has been instrumental to many applications in language and vision.
Yet, very little is known about its usefulness in 3D point cloud understanding.
We see this as an opportunity considering the effort required for annotating
data in 3D. In this work, we aim at facilitating research on 3D representation
learning. Different from previous works, we focus on high-level scene
understanding tasks. To this end, we select a suite of diverse datasets and
tasks to measure the effect of unsupervised pre-training on a large source set
of 3D scenes. Our findings are extremely encouraging: using a unified triplet
of architecture, source dataset, and contrastive loss for pre-training, we
achieve improvement over recent best results in segmentation and detection
across 6 different benchmarks for indoor and outdoor, real and synthetic
datasets -- demonstrating that the learned representation can generalize across
domains. Furthermore, the improvement was similar to supervised pre-training,
suggesting that future efforts should favor scaling data collection over more
detailed annotation. We hope these findings will encourage more research on
unsupervised pretext task design for 3D deep learning. | [
"cs.CV"
] |
We introduce a general framework for leveraging graph stream data for
temporal prediction-based applications. Our proposed framework includes novel
methods for learning an appropriate graph time-series representation, modeling
and weighting the temporal dependencies, and generalizing existing embedding
methods for such data. While previous work on dynamic modeling and embedding
has focused on representing a stream of timestamped edges using a time-series
of graphs based on a specific time-scale (e.g., 1 month), we propose the notion
of an $\epsilon$-graph time-series that uses a fixed number of edges for each
graph, and show its superiority over the time-scale representation used in
previous work. In addition, we propose a number of new temporal models based on
the notion of temporal reachability graphs and weighted temporal summary
graphs. These temporal models are then used to generalize existing base
(static) embedding methods by enabling them to incorporate and appropriately
model temporal dependencies in the data. From the 6 temporal network models
investigated (for each of the 7 base embedding methods), we find that the top-3
temporal models are always those that leverage the new $\epsilon$-graph
time-series representation. Furthermore, the dynamic embedding methods from the
framework almost always achieve better predictive performance than existing
state-of-the-art dynamic node embedding methods that are developed specifically
for such temporal prediction tasks. Finally, the findings of this work are
useful for designing better dynamic embedding methods. | [
"cs.LG",
"cs.AI",
"cs.SI",
"stat.ML"
] |
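The ε-graph representation itself is a one-screen transform: split the time-ordered edge stream into snapshots of exactly ε edges each, rather than fixed wall-clock windows (illustrative helper; the name is mine):

```python
def epsilon_graph_series(edge_stream, eps):
    """edge_stream: iterable of (src, dst, t) tuples sorted by timestamp t."""
    snapshots, current = [], []
    for edge in edge_stream:
        current.append(edge)
        if len(current) == eps:      # every snapshot has the same edge mass
            snapshots.append(current)
            current = []
    if current:
        snapshots.append(current)    # trailing partial snapshot
    return snapshots
```

Fixing the edge count per graph keeps the statistical size of each snapshot stable even when event rates vary wildly over time, which a fixed time-scale cannot guarantee.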
Monocular depth estimation has become one of the most studied applications in
computer vision, where the most accurate approaches are based on fully
supervised learning models. However, the acquisition of accurate and large
ground truth data sets to model these fully supervised methods is a major
challenge for the further development of the area. Self-supervised methods
trained with monocular videos constitute one of the most promising approaches
to mitigate the challenge mentioned above due to the widespread availability
of training data. Consequently, they have been intensively studied, where the main
ideas explored consist of different types of model architectures, loss
functions, and occlusion masks to address non-rigid motion. In this paper, we
propose two new ideas to improve self-supervised monocular trained depth
estimation: 1) self-attention, and 2) discrete disparity prediction. Compared
with the usual localised convolution operation, self-attention can explore a
more general contextual information that allows the inference of similar
disparity values at non-contiguous regions of the image. Discrete disparity
prediction has been shown by fully supervised methods to provide a more robust
and sharper depth estimation than the more common continuous disparity
prediction, besides enabling the estimation of depth uncertainty. We show that
the extension of the state-of-the-art self-supervised monocular trained depth
estimator Monodepth2 with these two ideas allows us to design a model that
produces the best results in the field on KITTI 2015 and Make3D, closing the
gap with respect to self-supervised stereo training and fully supervised
approaches. | [
"cs.CV",
"cs.LG"
] |
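Discrete disparity prediction, as described above, can be sketched as a softmax over disparity bins whose expectation yields a continuous disparity and whose spread yields an uncertainty estimate. Shapes and the bin range below are illustrative assumptions:

```python
# Sketch of discrete disparity prediction with an uncertainty proxy.
import torch

def expected_disparity(logits, d_min=0.01, d_max=10.0):
    """logits: (B, K, H, W) per-pixel scores over K disparity bins."""
    K = logits.size(1)
    probs = torch.softmax(logits, dim=1)                       # (B, K, H, W)
    bins = torch.linspace(d_min, d_max, K, device=logits.device).view(1, K, 1, 1)
    disparity = (probs * bins).sum(dim=1)                      # (B, H, W)
    variance = (probs * (bins - disparity.unsqueeze(1)) ** 2).sum(dim=1)
    return disparity, variance   # variance serves as a depth-uncertainty estimate
```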
Scenario generation is an important step in the operation and planning of
power systems with high renewable penetration. In this work, we propose a
data-driven approach for scenario generation using generative adversarial
networks, which is based on two interconnected deep neural networks. Compared
with existing methods based on probabilistic models that are often hard to
scale or sample from, our method is data-driven, and captures renewable energy
production patterns in both temporal and spatial dimensions for a large number
of correlated resources. For validation, we use wind and solar time-series
data from NREL integration data sets. We demonstrate that the proposed method
is able to generate realistic wind and photovoltaic power profiles with full
diversity of behaviors. We also illustrate how to generate scenarios based on
different conditions of interest by using labeled data during training. For
example, scenarios can be conditioned on weather events (e.g., a high wind day)
or time of the year (e.g., solar generation for a day in July). Because of the
feedforward nature of the neural networks, scenarios can be generated extremely
efficiently without sophisticated sampling techniques. | [
"cs.LG",
"cs.SY",
"math.OC"
] |
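Conditioning on labeled events, as mentioned above, is commonly implemented by feeding a condition code alongside the noise vector. A hypothetical sampling sketch follows; the trained generator and all names are placeholders:

```python
# Sample scenarios from a trained conditional generator; the condition code
# (e.g., "high wind day") is one-hot encoded and concatenated with the noise.
import torch
import torch.nn.functional as F

def sample_scenarios(generator, n, condition_id, n_conditions, latent_dim=100):
    z = torch.randn(n, latent_dim)
    cond = F.one_hot(torch.full((n,), condition_id), num_classes=n_conditions).float()
    with torch.no_grad():
        return generator(torch.cat([z, cond], dim=1))  # (n, horizon) power profiles
```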
Video scene parsing is a long-standing challenging task in computer vision,
aiming to assign pre-defined semantic labels to pixels of all frames in a given
video. Compared with image semantic segmentation, this task focuses on how to
exploit temporal information to obtain higher predictive accuracy. In this
report, we introduce our solution for the 1st Video Scene Parsing in the Wild
Challenge, which achieved a mIoU of 57.44 and obtained 2nd place (our team name
is CharlesBLWX). | [
"cs.CV"
] |
Small area change detection from synthetic aperture radar (SAR) is a highly
challenging task. In this paper, a robust unsupervised approach is proposed for
small area change detection from multi-temporal SAR images using deep learning.
First, a multi-scale superpixel reconstruction method is developed to generate
a difference image (DI), which can suppress the speckle noise effectively and
enhance edges by exploiting local, spatially homogeneous information. Second, a
two-stage centre-constrained fuzzy c-means clustering algorithm is proposed to
divide the pixels of the DI into changed, unchanged and intermediate classes
with a parallel clustering strategy. Image patches belonging to the first two
classes are then constructed as pseudo-label training samples, and image
patches of the intermediate class are treated as testing samples. Finally, a
convolutional wavelet neural network (CWNN) is designed and trained to classify
testing samples into changed or unchanged classes, coupled with a deep
convolutional generative adversarial network (DCGAN) to increase the number of
changed-class samples within the pseudo-label training set. Numerical experiments
on four real SAR datasets demonstrate the validity and robustness of the
proposed approach, achieving up to 99.61% accuracy for small area change
detection. | [
"cs.CV",
"eess.IV"
] |
Demosaicking introduces spatial and color correlations in the noise, which
are subsequently amplified by the imaging pipeline. Removing this noise
correctly, either before or simultaneously with the demosaicking process, is
not usually considered in the literature. We present a novel imaging chain
including a denoising of the Bayer
CFA and a demosaicking method for image sequences. The proposed algorithm uses
a spatio-temporal patch method for the noise removal and demosaicking of the
CFA. The experimentation, including real examples, illustrates the superior
performance of the proposed chain, avoiding the creation of artifacts and
colored spots in the final image. | [
"cs.CV"
] |
A key challenge of learning the geometry of dressed humans lies in the
limited availability of the ground truth data (e.g., 3D scanned models), which
results in performance degradation of 3D human reconstruction when applied to
real-world imagery. We address this challenge by leveraging a new data
resource: a number of social media dance videos that span diverse appearance,
clothing styles, performances, and identities. Each video depicts dynamic
movements of the body and clothes of a single person while lacking the 3D
ground truth geometry. To utilize these videos, we present a new method to use
the local transformation that warps the predicted local geometry of the person
from an image to that of another image at a different time instant. This allows
self-supervision by enforcing temporal coherence over the predictions. In
addition, we jointly learn the depth along with the surface normals that are
highly responsive to local texture, wrinkle, and shade by maximizing their
geometric consistency. Our method is end-to-end trainable, resulting in high
fidelity depth estimation that predicts fine geometry faithful to the input
real image. We demonstrate that our method outperforms the state-of-the-art
human depth estimation and human shape recovery approaches on both real and
rendered images. | [
"cs.CV"
] |
Although Generative Adversarial Networks have shown remarkable performance in
image generation, there are some challenges in image realism and convergence
speed. The outputs of some models exhibit quality imbalances within a single
generated image, where defective parts appear alongside well-generated regions.
Different from general single global optimization methods, we introduce an
adaptive global and local bilevel optimization model (GL-GAN). The
model achieves the generation of high-resolution images in a complementary and
promoting way, where global optimization optimizes the whole image and local
optimization targets only the low-quality areas. With a simple network
structure, GL-GAN effectively avoids this imbalance via
local bilevel optimization, which is accomplished by first locating low-quality
areas and then optimizing them. Moreover, by using feature map cues from
discriminator output, we propose the adaptive local and global optimization
method (Ada-OP) for specific implementation and find that it boosts the
convergence speed. Compared with the current GAN methods, our model has shown
impressive performance on CelebA, CelebA-HQ and LSUN datasets. | [
"cs.CV",
"eess.IV"
] |
Recent explainability related studies have shown that state-of-the-art DNNs
do not always rely on correct evidence to make decisions. This not only hampers
their generalization but also makes them less likely to be trusted by
end-users. In pursuit of developing more credible DNNs, in this paper we
propose CREX, which encourages DNN models to focus more on the evidence that
actually matters for the task at hand, and to avoid overfitting to
data-dependent bias and artifacts. Specifically, CREX regularizes the training
process of DNNs with rationales, i.e., a subset of features highlighted by
domain experts as justifications for predictions, to enforce DNNs to generate
local explanations that conform with expert rationales. Even when rationales
are not available, CREX still could be useful by requiring the generated
explanations to be sparse. Experimental results on two text classification
datasets demonstrate the increased credibility of DNNs trained with CREX.
Comprehensive analysis further shows that while CREX does not always improve
prediction accuracy on the held-out test set, it significantly increases DNN
accuracy on new and previously unseen data beyond test set, highlighting the
advantage of the increased credibility. | [
"cs.LG",
"cs.IR",
"stat.ML"
] |
For any positive integer $k$, there exist neural networks with $\Theta(k^3)$
layers, $\Theta(1)$ nodes per layer, and $\Theta(1)$ distinct parameters which
cannot be approximated by networks with $\mathcal{O}(k)$ layers unless they
are exponentially large: they must possess $\Omega(2^k)$ nodes. This result
is proved here for a class of nodes termed "semi-algebraic gates" which
includes the common choices of ReLU, maximum, indicator, and piecewise
polynomial functions, therefore establishing benefits of depth against not just
standard networks with ReLU gates, but also convolutional networks with ReLU
and maximization gates, sum-product networks, and boosted decision trees (in
this last case with a stronger separation: $\Omega(2^{k^3})$ total tree nodes
are required). | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
The success of deep learning based models for computer vision applications
requires large scale human annotated data which are often expensive to
generate. Self-supervised learning, a subset of unsupervised learning, handles
this problem by learning meaningful features from unlabeled image or video
data. In this paper, we propose a self-supervised learning approach to learn
transferable features from MR video clips by enforcing the model to learn
anatomical features. The pretext task models are designed to predict the
correct ordering of the jumbled image patches that the MR video frames are
divided into. To the best of our knowledge, none of the supervised learning
models performing the injury classification task from MR video provides any
explanation for its decisions, which makes our work the first of its kind on MR
video data. Experiments on the pretext task show that
this proposed approach enables the model to learn spatial context invariant
features which help for reliable and explainable performance in downstream
tasks like classification of Anterior Cruciate Ligament tear injury from knee
MRI. The efficiency of the novel Convolutional Neural Network proposed in this
paper is reflected in the experimental results obtained in the downstream task. | [
"cs.CV"
] |
Nonlinear optimal control problems are often solved with numerical methods
that require knowledge of system's dynamics which may be difficult to infer,
and that carry a large computational cost associated with iterative
calculations. We present a novel neurobiologically inspired hierarchical
learning framework, Reinforcement Learning Optimal Control, which operates on
two levels of abstraction and utilises a reduced number of controllers to solve
nonlinear systems with unknown dynamics in continuous state and action spaces.
Our approach is inspired by research at two levels of abstraction: first, at
the level of limb coordination human behaviour is explained by linear optimal
feedback control theory. Second, in cognitive tasks involving learning symbolic
level action selection, humans learn such problems using model-free and
model-based reinforcement learning algorithms. We propose that combining these
two levels of abstraction leads to a fast global solution of nonlinear control
problems using reduced number of controllers. Our framework learns the local
task dynamics from naive experience and forms locally optimal infinite horizon
Linear Quadratic Regulators which produce continuous low-level control. A
top-level reinforcement learner uses the controllers as actions and learns how
to best combine them in state space while maximising a long-term reward. A
single optimal control objective function drives high-level symbolic learning
by providing training signals on desirability of each selected controller. We
show that a small number of locally optimal linear controllers are able to
solve global nonlinear control problems with unknown dynamics when combined
with a reinforcement learner in this hierarchical framework. Our algorithm
competes in terms of computational cost and solution quality with sophisticated
control algorithms and we illustrate this with solutions to benchmark problems. | [
"cs.LG",
"stat.ML"
] |
Recently, neural architecture search (NAS) has been applied to automatically
search high-performance networks for medical image segmentation. The NAS search
space usually contains a network topology level (controlling connections among
cells with different spatial scales) and a cell level (operations within each
cell). Existing methods either require long searching time for large-scale 3D
image datasets, or are limited to pre-defined topologies (such as U-shaped or
single-path). In this work, we focus on three important aspects of NAS in 3D
medical image segmentation: flexible multi-path network topology, high search
efficiency, and budgeted GPU memory usage. A novel differentiable search
framework is proposed to support fast gradient-based search within a highly
flexible network topology search space. The discretization of the searched
optimal continuous model in differentiable scheme may produce a sub-optimal
final discrete model (discretization gap). Therefore, we propose a topology
loss to alleviate this problem. In addition, the GPU memory usage for the
searched 3D model is limited with budget constraints during search. Our
Differentiable Network Topology Search scheme (DiNTS) is evaluated on the
Medical Segmentation Decathlon (MSD) challenge, which contains ten challenging
segmentation tasks. Our method achieves the state-of-the-art performance and
the top ranking on the MSD challenge leaderboard. | [
"cs.CV"
] |
The Learnable Tree Filter presents a remarkable approach to model
structure-preserving relations for semantic segmentation. Nevertheless, the
intrinsic geometric constraint forces it to focus on the regions with close
spatial distance, hindering the effective long-range interactions. To relax the
geometric constraint, we analyze it by reformulating it as a Markov
Random Field and introduce a learnable unary term. Besides, we propose a
learnable spanning tree algorithm to replace the original non-differentiable
one, which further improves the flexibility and robustness. With the above
improvements, our method can better capture long-range dependencies and
preserve structural details with linear complexity, which is extended to
several vision tasks for more generic feature transform. Extensive experiments
on object detection/instance segmentation demonstrate the consistent
improvements over the original version. For semantic segmentation, we achieve
leading performance (82.1% mIoU) on the Cityscapes benchmark without
bells-and-whistles. Code is available at
https://github.com/StevenGrove/LearnableTreeFilterV2. | [
"cs.CV",
"cs.AI",
"68T45"
] |
Canonical Correlation Analysis (CCA) models are powerful for studying the
associations between two sets of variables. The canonically correlated
representations, termed \textit{canonical variates} are widely used in
unsupervised learning to analyze unlabeled multi-modal registered datasets.
Despite their success, CCA models may break (or overfit) if the number of
variables in either of the modalities exceeds the number of samples. Moreover,
often a significant fraction of the variables measures modality-specific
information, and thus removing them is beneficial for identifying the
\textit{canonically correlated variates}. Here, we propose $\ell_0$-CCA, a
method for learning correlated representations based on sparse subsets of
variables from two observed modalities. Sparsity is obtained by multiplying the
input variables by stochastic gates, whose parameters are learned together with
the CCA weights via an $\ell_0$-regularized correlation loss. We further
propose $\ell_0$-Deep CCA for solving the problem of non-linear sparse CCA by
modeling the correlated representations using deep nets. We demonstrate the
efficacy of the method using several synthetic and real examples. Most notably,
by gating nuisance input variables, our approach improves the extracted
representations compared to other linear, non-linear and sparse CCA-based
models. | [
"cs.LG",
"stat.ML"
] |
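The stochastic-gate mechanism can be sketched as follows. This is a simplified single-modality illustration of gating with an expected-$\ell_0$ penalty (the clipped-Gaussian relaxation and all names are assumptions), not the full $\ell_0$-CCA objective:

```python
# Clipped-Gaussian stochastic gates: each input variable is multiplied by a
# gate in [0, 1]; the expected number of open gates (a Gaussian CDF term)
# is penalized alongside the correlation loss.
import math
import torch

class StochasticGates(torch.nn.Module):
    def __init__(self, n_features, sigma=0.5):
        super().__init__()
        self.mu = torch.nn.Parameter(0.5 * torch.ones(n_features))
        self.sigma = sigma

    def forward(self, x):
        noise = self.sigma * torch.randn_like(self.mu) if self.training else 0.0
        z = torch.clamp(self.mu + noise, 0.0, 1.0)   # gate value per feature
        return x * z

    def expected_l0(self):
        # Sum over features of P(gate > 0) under the Gaussian relaxation.
        return torch.sum(0.5 * (1 + torch.erf(self.mu / (self.sigma * math.sqrt(2)))))
```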
Blind face restoration (BFR) from severely degraded face images in the wild
is a very challenging problem. Due to the severe ill-posedness of the problem and the
complex unknown degradation, directly training a deep neural network (DNN)
usually cannot lead to acceptable results. Existing generative adversarial
network (GAN) based methods can produce better results but tend to generate
over-smoothed restorations. In this work, we propose a new method by first
learning a GAN for high-quality face image generation and embedding it into a
U-shaped DNN as a prior decoder, then fine-tuning the GAN prior embedded DNN
with a set of synthesized low-quality face images. The GAN blocks are designed
to ensure that the latent code and noise input to the GAN can be respectively
generated from the deep and shallow features of the DNN, controlling the global
face structure, local face details and background of the reconstructed image.
The proposed GAN prior embedded network (GPEN) is easy-to-implement, and it can
generate visually photo-realistic results. Our experiments demonstrated that
the proposed GPEN achieves significantly superior results to state-of-the-art
BFR methods both quantitatively and qualitatively, especially for the
restoration of severely degraded face images in the wild. The source code and
models can be found at https://github.com/yangxy/GPEN. | [
"cs.CV"
] |
Probabilistic methods for point set registration have demonstrated
competitive results in recent years. These techniques estimate a probability
distribution model of the point clouds. While such a representation has shown
promise, it is highly sensitive to variations in the density of 3D points. This
fundamental problem is primarily caused by changes in the sensor location
across point sets. We revisit the foundations of the probabilistic registration
paradigm. Contrary to previous works, we model the underlying structure of the
scene as a latent probability distribution, and thereby induce invariance to
point set density changes. Both the probabilistic model of the scene and the
registration parameters are inferred by minimizing the Kullback-Leibler
divergence in an Expectation Maximization based framework. Our density-adaptive
registration successfully handles severe density variations commonly
encountered in terrestrial Lidar applications. We perform extensive experiments
on several challenging real-world Lidar datasets. The results demonstrate that
our approach outperforms state-of-the-art probabilistic methods for multi-view
registration, without the need of re-sampling. Code is available at
https://github.com/felja633/DARE. | [
"cs.CV"
] |
The task of classifying mammograms is very challenging because the lesion is
usually small in the high resolution image. The current state-of-the-art
approaches for medical image classification rely on using the de-facto method
for ConvNets: fine-tuning. However, there are fundamental differences between
natural images and medical images which, based on existing evidence from the
literature, limit the overall performance gain achievable with such algorithmic
approaches. In this paper, we propose to go beyond fine-tuning by introducing a
novel framework called MorphHR, in which we highlight a new transfer learning
scheme. The idea behind the proposed framework is to integrate
function-preserving transformations, for any continuous non-linear activation
neurons, to internally regularise the network for improving mammograms
classification. The proposed solution offers two major advantages over the
existing techniques. Firstly and unlike fine-tuning, the proposed approach
allows for modifying not only the last few layers but also several of the first
ones on a deep ConvNet. By doing this, we can design the network front to be
suitable for learning domain specific features. Secondly, the proposed scheme
is scalable to hardware. Therefore, one can fit high resolution images on
standard GPU memory. We show that by using high resolution images, one prevents
losing relevant information. We demonstrate, through numerical and visual
experiments, that the proposed approach yields a significant improvement in
the classification performance over state-of-the-art techniques, and is indeed
on a par with radiology experts. Moreover and for generalisation purposes, we
show the effectiveness of the proposed learning scheme on another large
dataset, the ChestX-ray14, surpassing current state-of-the-art techniques. | [
"cs.CV"
] |
A novel locally statistical active contour model (ACM) for image segmentation
in the presence of intensity inhomogeneity is presented in this paper. The
inhomogeneous objects are modeled as Gaussian distributions of different means
and variances, and a moving window is used to map the original image into
another domain, where the intensity distributions of inhomogeneous objects are
still Gaussian but are better separated. The means of the Gaussian
distributions in the transformed domain can be adaptively estimated by
multiplying a bias field with the original signal within the window. A
statistical energy functional is then defined for each local region, which
combines the bias field, the level set function, and the constant approximating
the true signal of the corresponding object. Experiments on both synthetic and
real images demonstrate the superiority of our proposed algorithm to
state-of-the-art and representative methods. | [
"cs.CV"
] |
In this paper we propose several novel distributed gradient-based temporal
difference algorithms for multi-agent off-policy learning of linear
approximation of the value function in Markov decision processes with strict
information structure constraints, limiting inter-agent communications to small
neighborhoods. The algorithms are composed of: 1) local parameter updates based
on single-agent off-policy gradient temporal difference learning algorithms,
including eligibility traces with state dependent parameters, and 2) linear
stochastic time varying consensus schemes, represented by directed graphs. The
proposed algorithms differ by their form, definition of eligibility traces,
selection of time scales and the way of incorporating consensus iterations. The
main contribution of the paper is a convergence analysis based on the general
properties of the underlying Feller-Markov processes and the stochastic time
varying consensus model. We prove, under general assumptions, that the
parameter estimates generated by all the proposed algorithms weakly converge to
the corresponding ordinary differential equations (ODE) with precisely defined
invariant sets. It is demonstrated how the adopted methodology can be applied
to temporal-difference algorithms under weaker information structure
constraints. The variance reduction effect of the proposed algorithms is
demonstrated by formulating and analyzing an asymptotic stochastic differential
equation. Specific guidelines for communication network design are provided.
The algorithms' superior properties are illustrated by characteristic
simulation results. | [
"cs.LG",
"cs.DC",
"cs.SY",
"eess.SY",
"stat.ML"
] |
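A toy sketch of the two ingredients these algorithms combine: a linear consensus step over a communication matrix, followed by local TD(0)-style updates. Eligibility traces and the off-policy corrections of the paper are omitted, and all names are illustrative:

```python
# One step of consensus + local TD(0) updates for n agents with linear
# value-function approximation. W is a row-stochastic mixing matrix that
# encodes the (sparse) communication graph.
import numpy as np

def consensus_td_step(thetas, W, feats, feats_next, rewards, gamma=0.95, alpha=0.01):
    """thetas: (n_agents, d); feats, feats_next: (n_agents, d); rewards: (n_agents,)."""
    mixed = W @ thetas                                # consensus over neighbors
    for i in range(thetas.shape[0]):                  # local TD(0) update
        td_error = rewards[i] + gamma * feats_next[i] @ mixed[i] - feats[i] @ mixed[i]
        mixed[i] = mixed[i] + alpha * td_error * feats[i]
    return mixed
```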
Equations governing physico-chemical processes are usually known at
microscopic spatial scales, yet one suspects that there exist equations, e.g.
in the form of Partial Differential Equations (PDEs), that can explain the
system evolution at much coarser, meso- or macroscopic length scales.
Discovering those coarse-grained effective PDEs can lead to considerable
savings in computation-intensive tasks like prediction or control. We propose a
framework combining artificial neural networks with multiscale computation, in
the form of equation-free numerics, for efficient discovery of such macro-scale
PDEs directly from microscopic simulations. Gathering sufficient microscopic
data for training neural networks can be computationally prohibitive;
equation-free numerics enable a more parsimonious collection of training data
by only operating in a sparse subset of the space-time domain. We also propose
using a data-driven approach, based on manifold learning and unnormalized
optimal transport of distributions, to identify macro-scale dependent
variable(s) suitable for the data-driven discovery of said PDEs. This approach
can corroborate physically motivated candidate variables, or introduce new
data-driven variables, in terms of which the coarse-grained effective PDE can
be formulated. We illustrate our approach by extracting coarse-grained
evolution equations from particle-based simulations with a priori unknown
macro-scale variable(s), while significantly reducing the requisite data
collection computational effort. | [
"stat.ML",
"cs.LG",
"physics.comp-ph",
"physics.data-an",
"49Q22, 35-XX,"
] |
Recently, there has been an increasing number of efforts to introduce models
capable of generating natural language explanations (NLEs) for their
predictions on vision-language (VL) tasks. Such models are appealing, because
they can provide human-friendly and comprehensive explanations. However, there
is a lack of comparison between existing methods, which is due to a lack of
re-usable evaluation frameworks and a scarcity of datasets. In this work, we
introduce e-ViL and e-SNLI-VE. e-ViL is a benchmark for explainable
vision-language tasks that establishes a unified evaluation framework and
provides the first comprehensive comparison of existing approaches that
generate NLEs for VL tasks. It spans four models and three datasets and both
automatic metrics and human evaluation are used to assess model-generated
explanations. e-SNLI-VE is currently the largest existing VL dataset with NLEs
(over 430k instances). We also propose a new model that combines UNITER, which
learns joint embeddings of images and text, and GPT-2, a pre-trained language
model that is well-suited for text generation. It surpasses the previous state
of the art by a large margin across all datasets. Code and data are available
here: https://github.com/maximek3/e-ViL. | [
"cs.CV",
"cs.CL",
"cs.LG"
] |
Empirical evidence shows that ensembles, such as bagging, boosting, random
and rotation forests, generally perform better in terms of their generalization
error than individual classifiers. To explain this performance, Schapire et al.
(1998) developed an upper bound on the generalization error of an ensemble
based on the margins of the training data, from which it was concluded that
larger margins should lead to lower generalization error, everything else being
equal. Many other researchers have backed this assumption and presented tighter
bounds on the generalization error based on either the margins or functions of
the margins. For instance, Shen and Li (2010) provide evidence suggesting that
the generalization error of a voting classifier might be reduced by increasing
the mean and decreasing the variance of the margins. In this article we propose
several techniques and empirically test whether the current state of research
in explaining ensemble performance holds. We evaluate the proposed methods
through experiments with real and simulated data sets. | [
"stat.ML",
"cs.LG",
"stat.CO"
] |
This paper presents a new approach for code similarity on high-level
programs. Our technique is based on Fast Dynamic Time Warping, which builds a
warp path, or point-to-point relation, under local restrictions. The source
code is represented as Time Series using the operators of the programming
language, which makes the comparison possible and enables the detection of
subsequences that represent similar code instructions. In contrast with other
code similarity algorithms, we do not perform feature extraction. The
experiments show that two source codes are similar when their respective Time
Series are similar. | [
"cs.CV",
"cs.DS",
"I.5.2"
] |
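The pipeline above can be sketched in a few lines: map language operators to numeric values to obtain a time series, then compare two series with DTW. The paper uses FastDTW, an approximate variant; the quadratic-time version below and the operator-to-value mapping are illustrative assumptions:

```python
import numpy as np

OP_VALUES = {"=": 1, "+": 2, "-": 3, "*": 4, "/": 5, "if": 6, "for": 7, "while": 8}

def code_to_series(tokens):
    # Keep only operator tokens and map each to a numeric value.
    return np.array([OP_VALUES[t] for t in tokens if t in OP_VALUES], dtype=float)

def dtw_distance(a, b):
    # Classic O(nm) dynamic-programming DTW between two 1-D series.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```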
When working to understand usage of a data format, examples of the data
format are often more representative than the format's specification. For
example, two different applications might use very different JSON
representations, or two PDF-writing applications might make use of very
different areas of the PDF specification to realize the same rendered content.
The complexity arising from these distinct origins can lead to large,
difficult-to-understand attack surfaces, presenting a security concern when
considering both exfiltration and data schizophrenia. Grammar inference can aid
in describing the practical language generator behind examples of a data
format. However, most grammar inference research focuses on natural language,
not data formats, and fails to support crucial features such as type recursion.
We propose a novel set of mechanisms for grammar inference, RL-GRIT, and apply
them to understanding de facto data formats. After reviewing existing grammar
inference solutions, it was determined that a new, more flexible scaffold could
be found in Reinforcement Learning (RL). Within this work, we lay out the many
algorithmic changes required to adapt RL from its traditional, sequential-time
environment to the highly interdependent environment of parsing. The result is
an algorithm which can demonstrably learn recursive control structures in
simple data formats, and can extract meaningful structure from fragments of the
PDF format. Whereas prior work in grammar inference focused on either regular
languages or constituency parsing, we show that RL can be used to surpass the
expressiveness of both classes, and offers a clear path to learning
context-sensitive languages. The proposed algorithm can serve as a building
block for understanding the ecosystems of de facto data formats. | [
"cs.LG",
"cs.CR",
"cs.PL"
] |
Although reinforcement learning has become very popular in recent years, the
number of successful applications to different kinds of operations research
problems is rather small. Reinforcement learning is based on the well-studied
dynamic programming technique and thus also aims at finding the best stationary
policy for a given Markov Decision Process, but in contrast does not require
any model knowledge. The policy is assessed solely on consecutive states (or
state-action pairs), which are observed while an agent explores the solution
space. The contributions of this paper are manifold. First we provide deep
theoretical insights to the widely applied standard discounted reinforcement
learning framework, which give rise to the understanding of why these
algorithms are inappropriate when permanently provided with non-zero rewards,
such as costs or profit. Second, we establish a novel near-Blackwell-optimal
reinforcement learning algorithm. In contrast to former methods, it assesses the
average reward per step separately and thus prevents the incautious combination
of different types of state values. Thereby, the Laurent Series expansion of
the discounted state values forms the foundation for this development and also
provides the connection between the two approaches. Finally, we prove the
viability of our algorithm on a challenging problem set, which includes a
well-studied M/M/1 admission control queuing system. In contrast to standard
discounted reinforcement learning our algorithm infers the optimal policy on
all tested problems. The insight is that, in the operations research domain,
machine learning techniques have to be adapted and advanced to be applied
successfully in such settings. | [
"cs.LG",
"stat.ML"
] |
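The Laurent series mentioned above is the standard expansion of the discounted value function as $\gamma \to 1$; written out (with $\bar\rho$ the average reward per step and $v(s)$ the bias term), it makes explicit why the average reward should be assessed separately:

```latex
V_\gamma(s) \;=\; \frac{\bar{\rho}}{1-\gamma} \;+\; v(s) \;+\; e_\gamma(s),
\qquad \lim_{\gamma \to 1} e_\gamma(s) = 0 .
```

For rewards with non-zero mean, the $\bar{\rho}/(1-\gamma)$ term dominates the discounted values and swamps the state-dependent bias $v(s)$.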
We study the problem of semi-supervised anomaly detection with domain
adaptation. Given a set of normal data from a source domain and a limited
amount of normal examples from a target domain, the goal is to have a
well-performing anomaly detector in the target domain. We propose the Invariant
Representation Anomaly Detection (IRAD) to solve this problem where we first
learn to extract a domain-invariant representation. The extraction is achieved
by an across-domain encoder trained together with source-specific encoders and
generators by adversarial learning. An anomaly detector is then trained using
the learnt representations. We evaluate IRAD extensively on digits images
datasets (MNIST, USPS and SVHN) and object recognition datasets (Office-Home).
Experimental results show that IRAD outperforms baseline models by a wide
margin across different datasets. We derive a theoretical lower bound for the
joint error that explains the performance decay from overtraining and also an
upper bound for the generalization error. | [
"cs.LG",
"stat.ML"
] |
As neural networks gain widespread adoption in embedded devices, there is a
need for model compression techniques to facilitate deployment in
resource-constrained environments. Quantization is one of the go-to methods
yielding state-of-the-art model compression. Most approaches take a fully
trained model, apply different heuristics to determine the optimal
bit-precision for different layers of the network, and retrain the network to
regain any drop in accuracy. Based on Activation Density (AD), the proportion
of non-zero activations in a layer, we propose an in-training quantization
method.
Our method calculates bit-width for each layer during training yielding a mixed
precision model with competitive accuracy. Since we train lower precision
models during training, our approach yields the final quantized model at lower
training complexity and also eliminates the need for re-training. We run
experiments on benchmark datasets like CIFAR-10, CIFAR-100, TinyImagenet on
VGG19/ResNet18 architectures and report the accuracy and energy estimates for
the same. We achieve ~4.5x benefit in terms of estimated
multiply-and-accumulate (MAC) reduction while reducing the training complexity
by 50% in our experiments. To further evaluate the energy benefits of our
proposed method, we develop a mixed-precision scalable Process In Memory (PIM)
hardware accelerator platform. The hardware platform incorporates shift-add
functionality for handling multi-bit precision neural network models.
Evaluating the quantized models obtained with our proposed method on the PIM
platform yields ~5x energy reduction compared to 16-bit models. Additionally,
we find that integrating AD based quantization with AD based pruning (both
conducted during training) yields up to ~198x and ~44x energy reductions for
VGG19 and ResNet18 architectures respectively on PIM platform compared to
baseline 16-bit precision, unpruned models. | [
"cs.LG",
"cs.NE"
] |
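A minimal sketch of an activation-density-driven bit-width rule; the exact mapping used by the method may differ, so the linear rule and names here are illustrative assumptions:

```python
# Compute a layer's activation density and derive a bit-width from it.
import math
import torch

def activation_density(act):
    return (act != 0).float().mean().item()   # fraction of non-zero activations

def bitwidth_from_density(act, max_bits=16, min_bits=2):
    return max(min_bits, math.ceil(activation_density(act) * max_bits))
```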
Clustering is one of the fundamental tasks in computer vision and pattern
recognition. Recently, deep clustering methods (algorithms based on deep
learning) have attracted wide attention with their impressive performance. Most
of these algorithms combine deep unsupervised representation learning and
standard clustering together. However, the separation of representation
learning and clustering will lead to suboptimal solutions because the two-stage
strategy prevents representation learning from adapting to subsequent tasks
(e.g., clustering according to specific cues). To overcome this issue, efforts
have been made in the dynamic adaption of representation and cluster
assignment, whereas current state-of-the-art methods suffer from heuristically
constructed objectives with representation and cluster assignment alternatively
optimized. To further standardize the clustering problem, we audaciously
formulate the objective of clustering as finding a precise feature as the cue
for cluster assignment. Based on this, we propose a general-purpose deep
clustering framework which radically integrates representation learning and
clustering into a single pipeline for the first time. The proposed framework
exploits the powerful ability of recently developed generative models for
learning intrinsic features, and imposes an entropy minimization on the
distribution of the cluster assignment by a dedicated variational algorithm.
Experimental results show that the performance of the proposed method is
superior, or at least comparable to, the state-of-the-art methods on the
handwritten digit recognition, fashion recognition, face recognition and object
recognition benchmark datasets. | [
"cs.CV",
"cs.LG"
] |
Many real-world applications involve multivariate, geo-tagged time series
data: at each location, multiple sensors record corresponding measurements. For
example, air quality monitoring system records PM2.5, CO, etc. The resulting
time-series data often has missing values due to device outages or
communication errors. In order to impute the missing values, state-of-the-art
methods are built on Recurrent Neural Networks (RNN), which process each time
stamp sequentially, prohibiting the direct modeling of the relationship between
distant time stamps. Recently, the self-attention mechanism has been proposed
for sequence modeling tasks such as machine translation, significantly
outperforming RNNs because the relationship between any two time stamps can be
modeled explicitly. In this paper, we are the first to adapt the self-attention
mechanism for multivariate, geo-tagged time series data. In order to jointly
capture the self-attention across multiple dimensions, including time, location
and the sensor measurements, while maintaining low computational complexity, we
propose a novel approach called Cross-Dimensional Self-Attention (CDSA) to
process each dimension sequentially, yet in an order-independent manner. Our
extensive experiments on four real-world datasets, including three standard
benchmarks and our newly collected NYC-traffic dataset, demonstrate that our
approach outperforms the state-of-the-art imputation and forecasting methods. A
detailed systematic analysis confirms the effectiveness of our design choices. | [
"cs.LG",
"stat.ML"
] |
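Processing each dimension sequentially, as described above, can be sketched by letting each axis of the (time, location, measurement) tensor play the role of the sequence axis in turn. Single-head attention without learned projections is used for brevity; all names are assumptions:

```python
import torch

def attend(x):  # x: (batch, seq, d)
    attn = torch.softmax(x @ x.transpose(1, 2) / x.size(-1) ** 0.5, dim=-1)
    return attn @ x

def cdsa(x):
    """x: (T, L, M, d); applies self-attention along T, L and M in turn."""
    for _ in range(3):
        a, b, c, d = x.shape                 # attend along the leading axis
        x = attend(x.permute(1, 2, 0, 3).reshape(b * c, a, d)).reshape(b, c, a, d)
    return x                                 # back in the original (T, L, M, d) order
```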
Skin cancer is among the most common cancer types. Dermoscopic image analysis
improves the diagnostic accuracy for detection of malignant melanoma and other
pigmented skin lesions when compared to unaided visual inspection. Hence,
computer-based methods to support medical experts in the diagnostic procedure
are of great interest. Fine-tuning pre-trained convolutional neural networks
(CNNs) has been shown to work well for skin lesion classification. Pre-trained
CNNs are usually trained with natural images of a fixed image size which is
typically significantly smaller than captured skin lesion images and
consequently dermoscopic images are downsampled for fine-tuning. However,
useful medical information may be lost during this transformation. In this
paper, we explore the effect of input image size on skin lesion classification
performance of fine-tuned CNNs. For this, we resize dermoscopic images to
different resolutions, ranging from 64x64 to 768x768 pixels and investigate the
resulting classification performance of three well-established CNNs, namely
DenseNet-121, ResNet-18, and ResNet-50. Our results show that using very small
images (of size 64x64 pixels) degrades the classification performance, while
images of size 128x128 pixels and above support good performance with larger
image sizes leading to slightly improved classification. We further propose a
novel fusion approach based on a three-level ensemble strategy that exploits
multiple fine-tuned networks trained with dermoscopic images at various sizes.
When applied on the ISIC 2017 skin lesion classification challenge, our fusion
approach yields an area under the receiver operating characteristic curve of
89.2% and 96.6% for melanoma classification and seborrheic keratosis
classification, respectively, outperforming state-of-the-art algorithms. | [
"cs.CV"
] |
Weakly-supervised instance segmentation aims to detect and segment object
instances precisely, given image-level labels only. Unlike previous methods
which are composed of multiple offline stages, we propose Sequential Label
Propagation and Enhancement Networks (referred as Label-PEnet) that
progressively transform image-level labels to pixel-wise labels in a
coarse-to-fine manner. We design four cascaded modules including multi-label
classification, object detection, instance refinement and instance
segmentation, which are implemented sequentially by sharing the same backbone.
The cascaded pipeline is trained alternatively with a curriculum learning
strategy that generalizes labels from high-level images to low-level pixels
gradually with increasing accuracy. In addition, we design a proposal
calibration module to explore the ability of classification networks to find
key pixels that identify object parts, which serves as a post validation
strategy running in the inverse order. We evaluate the efficiency of our
Label-PEnet in mining instance masks on standard benchmarks: PASCAL VOC 2007
and 2012. Experimental results show that Label-PEnet outperforms the
state-of-the-art algorithms by a clear margin, and obtains comparable
performance even with the fully-supervised approaches. | [
"cs.CV"
] |
In this work, we propose a novel straightforward method for medical volume
and sequence segmentation with limited annotations. To avert laborious
annotating, the recent success of self-supervised learning (SSL) motivates the
pre-training on unlabeled data. Despite its success, it is still challenging to
adapt typical SSL methods to volume/sequence segmentation, due to their lack of
mining on local semantic discrimination and rare exploitation on volume and
sequence structures. Based on the continuity between slices/frames and the
common spatial layout of organs across volumes/sequences, we introduced a novel
bootstrap self-supervised representation learning method by leveraging the
predictability of neighboring slices. At the core of our method is a
simple and straightforward dense self-supervision on the predictions of local
representations and a strategy of predicting locals based on global context,
which enables stable and reliable supervision for both global and local
representation mining among volumes. Specifically, we first proposed an
asymmetric network with an attention-guided predictor to enforce
distance-specific prediction and supervision on slices within and across
volumes/sequences. Secondly, we introduced a novel prototype-based
foreground-background calibration module to enhance representation consistency.
The two parts are trained jointly on labeled and unlabeled data. When evaluated
on three benchmark datasets of medical volumes and sequences, our model
outperforms existing methods by a large margin of 4.5\% DSC on ACDC, 1.7\% on
Prostate, and 2.3\% on CAMUS. Intensive evaluations reveal the effectiveness
and superiority of our method. | [
"cs.CV"
] |
Deep convolutional neural networks (DCNN) aided high dynamic range (HDR)
imaging has recently received a lot of attention. The quality of DCNN-generated
HDR images has surpassed that of traditional counterparts. However, DCNNs tend
to be computationally intensive and power-hungry. To address this challenge, we
propose LightFuse, a light-weight CNN-based algorithm for extreme dual-exposure
image fusion, which can be implemented on various embedded computing platforms
with limited power and hardware resources. Two sub-networks are utilized: a
GlobalNet (G) and a DetailNet (D). The goal of G is to learn the global
illumination information on the spatial dimension, whereas D aims to enhance
local details on the channel dimension. Both G and D are based solely on
depthwise convolution (D Conv) and pointwise convolution (P Conv) to reduce
required parameters and computations. Experimental results show that the
proposed technique can generate HDR images with plausible details in extremely
exposed regions. Our PSNR score exceeds that of the other state-of-the-art
approaches by 1.2 to 1.6 times, with a 1.4 to 20 times reduction in FLOPs and
parameters compared with others. | [
"cs.CV",
"eess.IV"
] |
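The D Conv + P Conv factorization mentioned above can be sketched as a standard depthwise-separable block; channel counts are illustrative, and the actual LightFuse topology is more elaborate:

```python
import torch.nn as nn

def depthwise_separable(in_ch, out_ch):
    # Depthwise 3x3 (one filter per channel) followed by a pointwise 1x1 that
    # mixes channels; roughly in_ch*9 + in_ch*out_ch weights versus
    # in_ch*out_ch*9 for a standard 3x3 convolution.
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),  # D Conv
        nn.Conv2d(in_ch, out_ch, kernel_size=1),                          # P Conv
    )
```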
Visual localization has become a key enabling component of many place
recognition and SLAM systems. Contemporary research has primarily focused on
improving accuracy and precision-recall type metrics, with relatively little
attention paid to a system's absolute storage scaling characteristics, its
flexibility to adapt to available computational resources, and its longevity
with respect to easily incorporating newly learned or hand-crafted image
descriptors. Most significantly, improvement in one of these aspects typically
comes at the cost of others: for example, a snapshot-based system that achieves
sub-linear storage cost typically provides no metric pose estimation, or, a
highly accurate pose estimation technique is often ossified in adapting to
recent advances in appearance-invariant features. In this paper, we present a
novel 6-DOF localization system that for the first time simultaneously achieves
all the three characteristics: significantly sub-linear storage growth,
agnosticism to image descriptors, and customizability to available storage and
computational resources. The key features of our method are developed based on
a novel adaptation of multiple-label learning, together with effective
dimensional reduction and learning techniques that enable simple and efficient
optimization. We evaluate our system on several large benchmarking datasets and
provide detailed comparisons to state-of-the-art systems. The proposed method
demonstrates competitive accuracy with existing pose estimation methods while
achieving better sub-linear storage scaling, significantly reduced absolute
storage requirements, and faster training and deployment speeds. | [
"cs.CV"
] |
We introduce a simple and versatile framework for image-to-image translation.
We unearth the importance of normalization layers, and provide a carefully
designed two-stream generative model with newly proposed feature
transformations in a coarse-to-fine fashion. This allows multi-scale semantic
structure information and style representation to be effectively captured and
fused by the network, permitting our method to scale to various tasks in both
unsupervised and supervised settings. No additional constraints (e.g., cycle
consistency) are needed, contributing to a very clean and simple method.
Multi-modal image synthesis with arbitrary style control is made possible. A
systematic study compares the proposed method with several state-of-the-art
task-specific baselines, verifying its effectiveness in both perceptual quality
and quantitative evaluations. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Pose-guided person image generation is to transform a source person image to
a target pose. This task requires spatial manipulations of source data.
However, Convolutional Neural Networks are limited by the lack of ability to
spatially transform the inputs. In this paper, we propose a differentiable
global-flow local-attention framework to reassemble the inputs at the feature
level. Specifically, our model first calculates the global correlations between
sources and targets to predict flow fields. Then, the flowed local patch pairs
are extracted from the feature maps to calculate the local attention
coefficients. Finally, we warp the source features using a content-aware
sampling method with the obtained local attention coefficients. The results of
both subjective and objective experiments demonstrate the superiority of our
model. Besides, additional results in video animation and view synthesis show
that our model is applicable to other tasks requiring spatial transformation.
Our source code is available at
https://github.com/RenYurui/Global-Flow-Local-Attention. | [
"cs.CV",
"cs.AI"
] |
The matching function for the problem of stereo reconstruction or optical
flow has been traditionally designed as a function of the distance between the
features describing matched pixels. This approach works under the assumption
that the appearance of pixels in two stereo cameras or in two consecutive video
frames does not change dramatically. However, this might not be the case if we
try to match pixels over a large interval of time.
In this paper we propose a method, which learns the matching function, that
automatically finds the space of allowed changes in visual appearance, such as
due to the motion blur, chromatic distortions, different colour calibration or
seasonal changes. Furthermore, it automatically learns the importance of
matching scores of contextual features at different relative locations and
scales. The proposed classifier gives reliable estimates of pixel disparities
even without any form of regularization.
We evaluated our method on two standard problems, stereo matching on the KITTI
outdoor dataset and optical flow on the Sintel dataset, as well as on the newly
introduced TimeLapse change detection dataset. Our algorithm obtained very promising
results comparable to the state-of-the-art. | [
"cs.CV"
] |
Self-supervised learning approaches leverage unlabeled samples to acquire
generic knowledge about different concepts, hence allowing for
annotation-efficient downstream task learning. In this paper, we propose a
novel self-supervised method that leverages multiple imaging modalities. We
introduce the multimodal puzzle task, which facilitates rich representation
learning from multiple image modalities. The learned representations allow for
subsequent fine-tuning on different downstream tasks. To achieve that, we learn
a modality-agnostic feature embedding by confusing image modalities at the
data-level. Together with the Sinkhorn operator, with which we formulate the
puzzle solving optimization as permutation matrix inference instead of
classification, they allow for efficient solving of multimodal puzzles with
varying levels of complexity. In addition, we also propose to utilize
cross-modal generation techniques for multimodal data augmentation used for
training self-supervised tasks. In other words, we exploit synthetic images for
self-supervised pretraining, instead of downstream tasks directly, in order to
circumvent quality issues associated with synthetic images, while improving
data-efficiency and representation quality. Our experimental results, which
assess the gains in downstream performance and data-efficiency, show that
solving our multimodal puzzles yields better semantic representations, compared
to treating each modality independently. Our results also highlight the
benefits of exploiting synthetic images for self-supervised pretraining. We
showcase our approach on four downstream tasks: Brain tumor segmentation and
survival days prediction using four MRI modalities, Prostate segmentation using
two MRI modalities, and Liver segmentation using unregistered CT and MRI
modalities. We outperform many previous solutions, and achieve results
competitive to state-of-the-art. | [
"cs.CV",
"cs.LG"
] |
Graph Neural Networks (GNNs) have attracted increasing attention due to their
successful applications on various graph-structured data. However, recent
studies have shown that adversarial attacks are threatening the functionality
of GNNs. Although numerous works have been proposed to defend adversarial
attacks from various perspectives, most of them are robust against the attacks
only in specific scenarios. To address this lack of robust
generalization, we propose to defend the adversarial attacks on GNN through
applying the Spatio-Temporal sparsification (called ST-Sparse) on the GNN
hidden node representation. ST-Sparse is similar to the Dropout regularization
in spirit. Through intensive experimental evaluation with GCN as the target GNN
model, we identify the benefits of ST-Sparse as follows: (1) ST-Sparse shows
the defense performance improvement in most cases, as it can effectively
increase the robust accuracy by up to 6\%; (2) ST-Sparse
illustrates its robust generalization capability by integrating with the
existing defense methods, similar to the integration of Dropout into various
deep learning models as a standard regularization technique; (3) ST-Sparse also
shows its ordinary generalization capability on clean datasets, in that
ST-SparseGCN (the integration of ST-Sparse and the original GCN) even
outperforms the original GCN, while the other three representative defense
methods are inferior to the original GCN. | [
"cs.LG",
"cs.AI"
] |
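As a rough illustration of sparsifying hidden node representations (the precise ST-Sparse rule is not spelled out in the abstract, so the magnitude-based top-k rule below is an assumption), one can zero all but the largest entries of each node's hidden vector:

```python
import torch

def sparsify_hidden(h, k):
    """h: (n_nodes, d) hidden representations; keep the top-k entries per node."""
    topk = torch.topk(h.abs(), k, dim=1)
    mask = torch.zeros_like(h).scatter_(1, topk.indices, 1.0)
    return h * mask   # deterministic, magnitude-driven analogue of Dropout
```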
We consider the problem of estimating a function from $n$ noisy samples whose
discrete Total Variation (TV) is bounded by $C_n$. We reveal a deep connection
to the seemingly disparate problem of Strongly Adaptive online learning
(Daniely et al, 2015) and provide an $O(n \log n)$ time algorithm that attains
the near minimax optimal rate of $\tilde O (n^{1/3}C_n^{2/3})$ under squared
error loss. The resulting algorithm runs online and optimally adapts to the
unknown smoothness parameter $C_n$. This leads to a new and more versatile
alternative to wavelets-based methods for (1) adaptively estimating TV bounded
functions; (2) online forecasting of TV bounded trends in time series. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
In this paper, we demonstrate how to do automated theorem proving in the
presence of a large knowledge base of potential premises without learning from
human proofs. We suggest an exploration mechanism that mixes in additional
premises selected by a tf-idf (term frequency-inverse document frequency) based
lookup in a deep reinforcement learning scenario. This helps with exploring and
learning which premises are relevant for proving a new theorem. Our experiments
show that the theorem prover trained with this exploration mechanism
outperforms provers that are trained only on human proofs. It approaches the
performance of a prover trained by a combination of imitation and reinforcement
learning. We perform multiple experiments to understand the importance of the
underlying assumptions that make our exploration approach work, thus explaining
our design choices. | [
"cs.LG",
"cs.AI",
"cs.LO",
"stat.ML"
] |
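The tf-idf lookup can be sketched with off-the-shelf tooling: rank knowledge-base premises by cosine similarity against the current goal and mix the top hits into the prover's candidates (function and variable names are illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_premises(goal, premises, top_k=16):
    """Rank knowledge-base premises by tf-idf similarity to the goal statement."""
    matrix = TfidfVectorizer().fit_transform(premises + [goal])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [premises[i] for i in sims.argsort()[::-1][:top_k]]
```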
Traditional convolution-based generative adversarial networks synthesize
images based on hierarchical local operations, where long-range dependency
relation is implicitly modeled with a Markov chain. It is still not sufficient
for categories with complicated structures. In this paper, we characterize
long-range dependence with attentive normalization (AN), which is an extension
to traditional instance normalization. Specifically, the input feature map is
softly divided into several regions based on its internal semantic similarity,
which are respectively normalized. It enhances consistency between distant
regions with semantic correspondence. Compared with self-attention GAN, our
attentive normalization does not need to measure the correlation of all
locations, and thus can be directly applied to large-size feature maps without
much computational burden. Extensive experiments on class-conditional image
generation and semantic inpainting verify the efficacy of our proposed module. | [
"cs.CV"
] |
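Attentive normalization can be sketched as a soft region assignment followed by per-region normalization. The layout below (a 1x1 convolution producing region logits, soft means and variances per region) is an illustrative reading of the description above, not the paper's exact module:

```python
import torch
import torch.nn as nn

class AttentiveNorm(nn.Module):
    def __init__(self, channels, regions=8, eps=1e-5):
        super().__init__()
        self.assign = nn.Conv2d(channels, regions, kernel_size=1)
        self.eps = eps

    def forward(self, x):                                   # x: (B, C, H, W)
        w = torch.softmax(self.assign(x), dim=1)            # soft regions (B, R, H, W)
        wx = w.unsqueeze(2) * x.unsqueeze(1)                # (B, R, C, H, W)
        mass = w.sum(dim=(2, 3)).clamp_min(self.eps)        # (B, R)
        mean = wx.sum(dim=(3, 4)) / mass.unsqueeze(-1)      # per-region mean (B, R, C)
        var = (w.unsqueeze(2) * (x.unsqueeze(1) - mean[..., None, None]) ** 2
               ).sum(dim=(3, 4)) / mass.unsqueeze(-1)       # per-region variance
        out = (x.unsqueeze(1) - mean[..., None, None]) / (var[..., None, None] + self.eps).sqrt()
        return (w.unsqueeze(2) * out).sum(dim=1)            # blend regions -> (B, C, H, W)
```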
Time series data play an important role in many applications and their
analysis reveals crucial information for understanding the underlying
processes. Among the many time series learning tasks of great importance, we
here focus on semi-supervised learning based on a graph representation of the
data. Two main aspects are involved in this task. A suitable distance measure
to evaluate the similarities between time series, and a learning method to make
predictions based on these distances. However, the relationship between the two
aspects has never been studied systematically in the context of graph-based
learning. We describe four different distance measures, including (Soft) DTW
and MPDist, a distance measure based on the Matrix Profile, as well as four
successful semi-supervised learning methods, including the graph Allen--Cahn
method and a Graph Convolutional Neural Network. We then compare the
performance of the algorithms on binary classification data sets. In our
findings we compare the chosen graph-based methods using all distance measures
and observe that the results vary strongly with respect to the accuracy. As
predicted by the ``no free lunch'' theorem, no clear best combination to employ
in all cases is found. Our study provides a reproducible framework for future
work in the direction of semi-supervised learning for time series with a focus
on graph representations. | [
"cs.LG",
"cs.NA",
"math.NA"
] |
With large quantities of data typically available nowadays, forecasting
models that are trained across sets of time series, known as Global Forecasting
Models (GFM), are regularly outperforming traditional univariate forecasting
models that work on isolated series. As GFMs usually share the same set of
parameters across all time series, they often have the problem of not being
localised enough to a particular series, especially in situations where
datasets are heterogeneous. We study how ensembling techniques can be used with
generic GFMs and univariate models to solve this issue. Our work systematises
and compares relevant current approaches, namely clustering series and training
separate submodels per cluster, the so-called ensemble of specialists approach,
and building heterogeneous ensembles of global and local models. We fill some
gaps in the existing GFM localisation approaches, in particular by
incorporating varied clustering techniques such as feature-based clustering,
distance-based clustering and random clustering, and generalise them to use
different underlying GFM model types. We then propose a new methodology of
clustered ensembles where we train multiple GFMs on different clusters of
series, obtained by changing the number of clusters and cluster seeds. Using
Feed-forward Neural Networks, Recurrent Neural Networks, and Pooled Regression
models as the underlying GFMs, in our evaluation on eight publicly available
datasets, the proposed models are able to achieve significantly higher accuracy
than baseline GFM models and univariate forecasting methods. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
Structured weight pruning is a representative model compression technique of
DNNs to reduce the storage and computation requirements and accelerate
inference. An automatic hyperparameter determination process is necessary due
to the large number of flexible hyperparameters. This work proposes
AutoCompress, an automatic structured pruning framework with the following key
performance improvements: (i) effectively incorporate the combination of
structured pruning schemes in the automatic process; (ii) adopt the
state-of-the-art ADMM-based structured weight pruning as the core algorithm, and
propose an innovative additional purification step for further weight reduction
without accuracy loss; and (iii) develop an effective heuristic search method
enhanced by experience-based guided search, replacing the prior deep
reinforcement learning technique which has underlying incompatibility with the
target pruning problem. Extensive experiments on CIFAR-10 and ImageNet datasets
demonstrate that AutoCompress is the key to achieve ultra-high pruning rates on
the number of weights and FLOPs that could not be achieved before. As an example,
AutoCompress outperforms the prior work on automatic model compression by up to
33x in pruning rate (120x reduction in the actual parameter count) under the
same accuracy. Significant inference speedup has been observed in actual
measurements of the AutoCompress framework on a smartphone. We release all
models of this work at anonymous link: http://bit.ly/2VZ63dS. | [
"cs.LG",
"cs.AI",
"cs.CV",
"cs.NE",
"stat.ML"
] |
Exterior contour and interior structure are both vital features for
classifying objects. However, most of the existing methods consider exterior
contour features and internal structure features separately, and thus fail to
function when classifying patchy image structures that have similar contours
and flexible structures. To address the above limitations, this paper proposes a
novel Multi-Orientation Region Transform (MORT), which can effectively
characterize both contour and structure features simultaneously, for patchy
image structure classification. MORT is performed over multiple orientation
regions at multiple scales to effectively integrate patchy features, and thus
enables a better description of the shape in a coarse-to-fine manner. Moreover,
the proposed MORT can be extended to combine with the deep convolutional neural
network techniques, for further enhancement of classification accuracy. Very
encouraging experimental results are obtained on the challenging
ultra-fine-grained cultivar recognition, insect wing recognition, and
large-variation butterfly recognition tasks, which demonstrate the effectiveness and
superiority of the proposed MORT over the state-of-the-art methods in
classifying patchy image structures. Our code and three patchy image structure
datasets are available at: https://github.com/XiaohanYu-GU/MReT2019. | [
"cs.CV",
"cs.AI"
] |
Property inference attacks reveal statistical properties about a training set
but are difficult to distinguish from the primary purpose of statistical
machine learning, which is to produce models that capture statistical
properties of a distribution. Motivated by Yeom et al.'s membership
inference framework, we propose a formal and generic definition of property
inference attacks. The proposed notion describes attacks that can distinguish
between possible training distributions, extending beyond previous property
inference attacks that infer the ratio of a particular type of data in the
training data set. In this paper, we show how our definition captures previous
property inference attacks as well as a new attack that reveals the average
degree of nodes of a training graph and report on experiments giving insight
into the potential risks of property inference attacks. | [
"cs.LG",
"cs.AI",
"cs.CR"
] |
We present Border-SegGCN, a novel architecture to improve semantic
segmentation by refining the border outline using graph convolutional networks
(GCN). A semantic segmentation network such as Unet or DeepLabV3+ is used as
the base network to produce a pre-segmented output. This output is converted into a
graph structure and fed into the GCN to improve the border pixel prediction
of the pre-segmented output. We experimentally study factors such as the border
thickness, the number of edges per node, and the number of features fed
into the GCN. We demonstrate the effectiveness of
Border-SegGCN on the CamVid and Carla datasets, achieving a test set performance
of 81.96% on the CamVid dataset without any post-processing, which is 0.404%
higher than the reported state-of-the-art mIoU on CamVid. | [
"cs.CV",
"cs.AI"
] |
Federated Learning (FL) is a framework which enables distributed model
training using a large corpus of decentralized training data. Existing methods
aggregate models disregarding their internal representations, which are crucial
for training models in vision tasks. System and statistical heterogeneity
(e.g., highly imbalanced and non-i.i.d. data) further harm model training. To
this end, we introduce a method, called FedProto, which computes client
deviations using margins of prototypical representations learned on distributed
data, and applies them to drive federated optimization via an attention
mechanism. In addition, we propose three methods to analyse statistical
properties of feature representations learned in FL, in order to elucidate the
relationship between accuracy, margins and feature discrepancy of FL models. In
experimental analyses, FedProto demonstrates state-of-the-art accuracy and
convergence rate across image classification and semantic segmentation
benchmarks by enabling maximum margin training of FL models. Moreover, FedProto
reduces uncertainty of predictions of FL models compared to the baseline. To
our knowledge, this is the first work evaluating FL models in dense prediction
tasks, such as semantic segmentation. | [
"cs.LG",
"cs.CV"
] |
Image-to-image translation is significant to many computer vision and machine
learning tasks such as image synthesis and video synthesis. It has primary
applications in the graphics editing and animation industries. With the
development of generative adversarial networks, a lot of attention has been
drawn to image-to-image translation tasks. In this paper, we propose and
investigate a novel task named panoptic-level image-to-image translation and
a naive baseline for solving this task. Panoptic-level image translation extends
the current image translation task to two separate objectives of semantic style
translation (adjust the style of objects to that of different domains) and
instance transfiguration (swap between different types of objects). The
proposed task generates an image from a complete and detailed panoptic
perspective which can enrich the context of real-world vision synthesis. Our
contribution consists of the proposal of a significant task worth investigating
and a naive baseline for solving it. The proposed baseline consists of
multiple-instance sequential translation and semantic-level translation with a
domain-invariant content code. | [
"cs.CV"
] |
Finding the optimal signal timing strategy is a difficult task for the
problem of large-scale traffic signal control (TSC). Multi-Agent Reinforcement
Learning (MARL) is a promising method to solve this problem. However, there is
still room for improvement in scaling to large problems and in letting each
individual agent model the behaviors of the other agents. In this paper, a new
MARL, called Cooperative double Q-learning (Co-DQL), is proposed, which has
several prominent features. It uses a highly scalable independent double
Q-learning method based on double estimators and the UCB policy, which can
eliminate the over-estimation problem existing in traditional independent
Q-learning while ensuring exploration. It uses mean field approximation to
model the interaction among agents, thereby making agents learn a better
cooperative strategy. In order to improve the stability and robustness of the
learning process, we introduce a new reward allocation mechanism and a local
state sharing method. In addition, we analyze the convergence properties of the
proposed algorithm. Co-DQL is applied to TSC and tested on a multi-traffic
signal simulator. According to the results obtained on several traffic
scenarios, Co-DQL outperforms several state-of-the-art decentralized MARL
algorithms. It can effectively shorten the average waiting time of the vehicles
in the whole road system. | [
"cs.LG",
"cs.MA",
"stat.ML"
] |
Monocular 3D object detection aims to detect objects in a 3D physical world
from a single camera. However, recent approaches either rely on expensive LiDAR
devices, or resort to dense pixel-wise depth estimation that causes prohibitive
computational cost. In this paper, we propose an end-to-end trainable monocular
3D object detector without learning the dense depth. Specifically, the grid
coordinates of a 2D box are first projected back to 3D space with the pinhole
model as 3D centroid proposals. Then, a novel object-aware voting approach is
introduced, which considers both the region-wise appearance attention and the
geometric projection distribution, to vote on the 3D centroid proposals for 3D
object localization. With the late fusion and the predicted 3D orientation and
dimension, the 3D bounding boxes of objects can be detected from a single RGB
image. The method is straightforward yet significantly superior to other
monocular-based methods. Extensive experimental results on the challenging
KITTI benchmark validate the effectiveness of the proposed method. | [
"cs.CV"
] |
This paper introduces a procedure for testing the identifiability of Bayesian
models for causal inference. Although the do-calculus is sound and complete
given a causal graph, many practical assumptions cannot be expressed in terms
of graph structure alone, such as the assumptions required by instrumental
variable designs, regression discontinuity designs, and within-subjects
designs. We present simulation-based identifiability (SBI), a fully automated
identification test based on a particle optimization scheme with simulated
observations. This approach expresses causal assumptions as priors over
functions in a structural causal model, including flexible priors using
Gaussian processes. We prove that SBI is asymptotically sound and complete, and
produces practical finite-sample bounds. We also show empirically that SBI
agrees with known results in graph-based identification as well as with
widely-held intuitions for designs in which graph-based methods are
inconclusive. | [
"cs.LG",
"cs.AI",
"stat.ME"
] |
In the era of end-to-end deep learning, many advances in computer vision are
driven by large amounts of labeled data. In the optical flow setting, however,
obtaining dense per-pixel ground truth for real scenes is difficult and thus
such data is rare. Therefore, recent end-to-end convolutional networks for
optical flow rely on synthetic datasets for supervision, but the domain
mismatch between training and test scenarios continues to be a challenge.
Inspired by classical energy-based optical flow methods, we design an
unsupervised loss based on occlusion-aware bidirectional flow estimation and
the robust census transform to circumvent the need for ground truth flow. On
the KITTI benchmarks, our unsupervised approach outperforms previous
unsupervised deep networks by a large margin, and is even more accurate than
similar supervised methods trained on synthetic datasets alone. By optionally
fine-tuning on the KITTI training data, our method achieves competitive optical
flow accuracy on the KITTI 2012 and 2015 benchmarks, thus in addition enabling
generic pre-training of supervised networks for datasets with limited amounts
of ground truth. | [
"cs.CV"
] |
Training deep neural networks is challenging when large and annotated
datasets are unavailable. Extensive manual annotation of data samples is
time-consuming, expensive, and error-prone, notably when it needs to be done by
experts. To address this issue, increased attention has been devoted to
techniques that propagate uncertain labels (also called pseudo labels) to large
amounts of unsupervised samples and use them for training the model. However,
these techniques still need hundreds of supervised samples per class in the
training set and a validation set with extra supervised samples to tune the
model. We improve a recent iterative pseudo-labeling technique, Deep Feature
Annotation (DeepFA), by selecting the most confident unsupervised samples to
iteratively train a deep neural network. Our confidence-based sampling strategy
relies on only dozens of annotated training samples per class with no
validation set, considerably reducing user effort in data annotation. We first
ascertain the best configuration for the baseline -- a self-trained deep neural
network -- and then evaluate our confidence DeepFA for different confidence
thresholds. Experiments on six datasets show that DeepFA already outperforms
the self-trained baseline, but confidence DeepFA can considerably outperform
the original DeepFA and the baseline. | [
"cs.LG"
] |