aid | mid | abstract | related_work | ref_abstract
---|---|---|---|---
1901.09590 | 2914592219 | Knowledge graphs are structured representations of real world facts. However, they typically contain only a small subset of all possible facts. Link prediction is a task of inferring missing facts based on existing ones. We propose TuckER, a relatively simple but powerful linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples. TuckER outperforms all previous state-of-the-art models across standard link prediction datasets. We prove that TuckER is a fully expressive model, deriving the bound on its entity and relation embedding dimensionality for full expressiveness which is several orders of magnitude smaller than the bound of previous state-of-the-art models ComplEx and SimplE. We further show that several previously introduced linear models can be viewed as special cases of TuckER. | DistMult DistMult @cite_15 is a special case of RESCAL with a diagonal matrix per relation, so the number of parameters of DistMult grows linearly with respect to the embedding dimension, reducing overfitting. However, the linear transformation performed on subject entity embedding vectors in DistMult is limited to a stretch. Given the equivalence of subject and object entity embeddings for the same entity, the third-order binary tensor learned by DistMult is symmetric in the subject and object entity mode, and thus DistMult cannot model asymmetric relations. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1533230146"
],
"abstract": [
"Abstract: We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (, 2013) and TransE (, 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2 vs. 54.7 by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as \"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-of-the-art confidence-based rule mining approach in mining Horn rules that involve compositional reasoning."
]
} |
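To make the diagonal-relation restriction above concrete, the following is a minimal NumPy sketch of the DistMult scoring function; variable names and dimensions are illustrative, not taken from any reference implementation.

```python
import numpy as np

def distmult_score(e_s, w_r, e_o):
    """Bilinear score with a diagonal relation matrix: an element-wise product
    of subject, relation and object embeddings, summed over dimensions."""
    return np.sum(e_s * w_r * e_o)

rng = np.random.default_rng(0)
e_s, w_r, e_o = rng.normal(size=(3, 50))

# The score is invariant to swapping subject and object, which is exactly
# why DistMult cannot model asymmetric relations:
assert np.isclose(distmult_score(e_s, w_r, e_o), distmult_score(e_o, w_r, e_s))
```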
1901.09590 | 2914592219 | Knowledge graphs are structured representations of real world facts. However, they typically contain only a small subset of all possible facts. Link prediction is a task of inferring missing facts based on existing ones. We propose TuckER, a relatively simple but powerful linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples. TuckER outperforms all previous state-of-the-art models across standard link prediction datasets. We prove that TuckER is a fully expressive model, deriving the bound on its entity and relation embedding dimensionality for full expressiveness which is several orders of magnitude smaller than the bound of previous state-of-the-art models ComplEx and SimplE. We further show that several previously introduced linear models can be viewed as special cases of TuckER. | ComplEx ComplEx @cite_1 extends DistMult to the complex domain. Even though each relation matrix of ComplEx is still diagonal, subject and object entity embeddings for the same entity are no longer equivalent, but complex conjugates, which introduces asymmetry into the tensor decomposition and thus enables ComplEx to model asymmetric relations. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2963432357"
],
"abstract": [
"In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks."
]
} |
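For contrast with DistMult, here is a minimal NumPy sketch of the ComplEx score under the same illustrative conventions; conjugating the object embedding in the Hermitian dot product is what introduces the asymmetry described above.

```python
import numpy as np

def complex_score(e_s, w_r, e_o):
    """Real part of the trilinear product with the object embedding conjugated
    (the Hermitian dot product), so subject and object roles are no longer
    interchangeable."""
    return np.real(np.sum(e_s * w_r * np.conj(e_o)))

rng = np.random.default_rng(0)
e_s, w_r, e_o = rng.normal(size=(3, 50)) + 1j * rng.normal(size=(3, 50))

# (s, r, o) and (o, r, s) generally receive different scores:
print(complex_score(e_s, w_r, e_o), complex_score(e_o, w_r, e_s))
```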
1901.09590 | 2914592219 | Knowledge graphs are structured representations of real world facts. However, they typically contain only a small subset of all possible facts. Link prediction is a task of inferring missing facts based on existing ones. We propose TuckER, a relatively simple but powerful linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples. TuckER outperforms all previous state-of-the-art models across standard link prediction datasets. We prove that TuckER is a fully expressive model, deriving the bound on its entity and relation embedding dimensionality for full expressiveness which is several orders of magnitude smaller than the bound of previous state-of-the-art models ComplEx and SimplE. We further show that several previously introduced linear models can be viewed as special cases of TuckER. | ConvE ConvE @cite_32 is the first non-linear model that significantly outperformed the preceding linear models. In ConvE, a global 2D convolution operation is performed on the subject entity and relation embedding vectors, after they are reshaped to matrices and concatenated. The obtained feature maps are flattened, transformed through a fully connected layer, and the inner product is taken with all object entity vectors to generate a score for each triple. Whilst the results achieved by ConvE are impressive, its reshaping and concatenation of vectors, as well as its use of 2D convolution on word embeddings, are unintuitive. | {
"cite_N": [
"@cite_32"
],
"mid": [
"2964116313"
],
"abstract": [
"Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models - which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree - which are common in highly-connected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set - however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets - deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets."
]
} |
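The ConvE pipeline described above — reshape, concatenate, 2D convolution, fully connected projection, then an inner product with all object embeddings — can be sketched in a few lines of PyTorch; the filter counts and embedding shapes here are illustrative rather than the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn

class ConvEScorer(nn.Module):
    """Sketch of the ConvE scoring pipeline (dropout and batch norm omitted)."""
    def __init__(self, n_entities, dim=200, h=10, w=20):
        super().__init__()
        assert h * w == dim
        self.h, self.w = h, w
        self.entity = nn.Embedding(n_entities, dim)
        self.conv = nn.Conv2d(1, 32, kernel_size=3)            # global 2D convolution
        self.fc = nn.Linear(32 * (2 * h - 2) * (w - 2), dim)   # flatten, then project

    def forward(self, e_s, w_r):
        # Reshape subject and relation embeddings to 2D "images" and stack them.
        x = torch.cat([e_s.view(-1, 1, self.h, self.w),
                       w_r.view(-1, 1, self.h, self.w)], dim=2)
        x = torch.relu(self.conv(x))
        x = self.fc(x.flatten(start_dim=1))
        # Inner product with all object entity embeddings: one score per entity.
        return x @ self.entity.weight.t()
```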
1901.09590 | 2914592219 | Knowledge graphs are structured representations of real world facts. However, they typically contain only a small subset of all possible facts. Link prediction is a task of inferring missing facts based on existing ones. We propose TuckER, a relatively simple but powerful linear model based on Tucker decomposition of the binary tensor representation of knowledge graph triples. TuckER outperforms all previous state-of-the-art models across standard link prediction datasets. We prove that TuckER is a fully expressive model, deriving the bound on its entity and relation embedding dimensionality for full expressiveness which is several orders of magnitude smaller than the bound of previous state-of-the-art models ComplEx and SimplE. We further show that several previously introduced linear models can be viewed as special cases of TuckER. | HypER HypER @cite_9 is a simplified convolutional model that uses a hypernetwork to generate 1D convolutional filters for each relation, extracting relation-specific features from subject entity embeddings. The authors show that convolution is a way of introducing sparsity and parameter tying, and that HypER can be understood in terms of tensor factorization up to a non-linearity, thus placing HypER closer to the well-established family of factorization models. The drawback of HypER is that it sets most elements of the core weight tensor to 0, which amounts to hard regularization, rather than letting the model learn which parameters to use via a soft regularization approach. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2888572441"
],
"abstract": [
"Knowledge graphs are graphical representations of large databases of facts, which typically suffer from incompleteness. Inferring missing relations (links) between entities (nodes) is the task of link prediction. A recent state-of-the-art approach to link prediction, ConvE, implements a convolutional neural network to extract features from concatenated subject and relation vectors. Whilst results are impressive, the method is unintuitive and poorly understood. We propose a hypernetwork architecture that generates simplified relation-specific convolutional filters that (i) outperforms ConvE and all previous approaches across standard datasets; and (ii) can be framed as tensor factorization and thus set within a well established family of factorization models for link prediction. We thus demonstrate that convolution simply offers a convenient computational means of introducing sparsity and parameter tying to find an effective trade-off between non-linear expressiveness and the number of parameters to learn."
]
} |
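A compact PyTorch sketch of the hypernetwork idea described above, assuming illustrative shapes; it scores a single subject-relation pair against all object entities and omits details such as batching, dropout and batch normalization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypERScorer(nn.Module):
    """Sketch of HypER: a hypernetwork maps each relation embedding to a set
    of relation-specific 1D convolutional filters that are applied to the
    subject entity embedding. Shapes are illustrative assumptions."""
    def __init__(self, n_entities, dim=200, n_filters=32, k=9):
        super().__init__()
        self.n_filters, self.k = n_filters, k
        self.entity = nn.Embedding(n_entities, dim)
        self.hypernet = nn.Linear(dim, n_filters * k)             # generates filters
        self.project = nn.Linear(n_filters * (dim - k + 1), dim)  # back to embedding space

    def forward(self, e_s, w_r):
        filters = self.hypernet(w_r).view(self.n_filters, 1, self.k)
        feats = F.conv1d(e_s.view(1, 1, -1), filters)   # relation-specific features
        x = self.project(feats.flatten())
        return x @ self.entity.weight.t()               # score against all objects
```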
1901.09891 | 2913012226 | Data augmentation is usually adopted to increase the amount of training data, prevent overfitting and improve the performance of deep models. However, in practice, random data augmentation, such as random image cropping, is low-efficiency and might introduce many uncontrolled background noises. In this paper, we propose Weakly Supervised Data Augmentation Network (WS-DAN) to explore the potential of data augmentation. Specifically, for each training image, we first generate attention maps to represent the object's discriminative parts by weakly supervised learning. Next, we augment the image guided by these attention maps, including attention cropping and attention dropping. The proposed WS-DAN improves the classification accuracy in two folds. In the first stage, images can be seen better since more discriminative parts' features will be extracted. In the second stage, attention regions provide accurate location of object, which ensures our model to look at the object closer and further improve the performance. Comprehensive experiments in common fine-grained visual classification datasets show that our WS-DAN surpasses the state-of-the-art methods, which demonstrates its effectiveness. | Random data augmentation suffers from low efficiency and generates much uncontrolled noisy data. To overcome these issues, a few methods have been proposed that take the dataset distribution into consideration and augment data according to feedback from the training dataset, which is more effective than a random distribution. Cubuk et al. proposed AutoAugment @cite_16 to create a search space of data augmentation policies. It can automatically design a specific policy so as to obtain state-of-the-art validation accuracy on the target dataset. Peng et al. proposed Adversarial Data Augmentation @cite_43 to jointly optimize data augmentation and the deep model. They designed an augmentation network to generate data online and improve the robustness of the deep model. However, their data-specific augmentation design process is significantly more complicated than random augmentation. Our attention-guided data augmentation is simpler and more direct, and can easily be trained end-to-end. | {
"cite_N": [
"@cite_43",
"@cite_16"
],
"mid": [
"2963468256",
"2804047946"
],
"abstract": [
"Random data augmentation is a critical technique to avoid overfitting in training deep models. Yet, data augmentation and network training are often two isolated processes in most settings, yielding to a suboptimal training. Why not jointly optimize the two? We propose adversarial data augmentation to address this limitation. The key idea is to design a generator (e.g. an augmentation network) that competes against a discriminator (e.g. a target network) by generating hard examples online. The generator explores weaknesses of the discriminator, while the discriminator learns from hard augmentations to achieve better performance. A reward penalty strategy is also proposed for efficient joint training. We investigate human pose estimation and carry out comprehensive ablation studies to validate our method. The results prove that our method can effectively improve state-of-the-art models without additional data effort.",
"Data augmentation is an effective technique for improving the accuracy of modern image classifiers. However, current data augmentation implementations are manually designed. In this paper, we describe a simple procedure called AutoAugment to automatically search for improved data augmentation policies. In our implementation, we have designed a search space where a policy consists of many sub-policies, one of which is randomly chosen for each image in each mini-batch. A sub-policy consists of two operations, each operation being an image processing function such as translation, rotation, or shearing, and the probabilities and magnitudes with which the functions are applied. We use a search algorithm to find the best policy such that the neural network yields the highest validation accuracy on a target dataset. Our method achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data). On ImageNet, we attain a Top-1 accuracy of 83.5 which is 0.4 better than the previous record of 83.1 . On CIFAR-10, we achieve an error rate of 1.5 , which is 0.6 better than the previous state-of-the-art. Augmentation policies we find are transferable between datasets. The policy learned on ImageNet transfers well to achieve significant improvements on other datasets, such as Oxford Flowers, Caltech-101, Oxford-IIT Pets, FGVC Aircraft, and Stanford Cars."
]
} |
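As a rough illustration of the attention-guided alternative advocated above (and described in this row's abstract), the sketch below crops and drops image regions according to a normalized attention map. The thresholding scheme and all names are assumptions for illustration, not the paper's exact procedure; the attention map is assumed non-negative and of the same spatial size as the image.

```python
import numpy as np

def attention_crop_drop(image, attn, crop_thresh=0.5, drop_thresh=0.5):
    """Sketch of attention-guided augmentation: crop the region where the
    attention map is high, and separately drop (zero out) that region to
    force the model toward other discriminative parts. Thresholds are
    illustrative, not the paper's values."""
    mask = attn >= crop_thresh * attn.max()
    ys, xs = np.where(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    cropped = image[y0:y1, x0:x1]                   # attention cropping
    dropped = image.copy()
    dropped[attn >= drop_thresh * attn.max()] = 0   # attention dropping
    return cropped, dropped
```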
1901.09891 | 2913012226 | Data augmentation is usually adopted to increase the amount of training data, prevent overfitting and improve the performance of deep models. However, in practice, random data augmentation, such as random image cropping, is low-efficiency and might introduce many uncontrolled background noises. In this paper, we propose Weakly Supervised Data Augmentation Network (WS-DAN) to explore the potential of data augmentation. Specifically, for each training image, we first generate attention maps to represent the object's discriminative parts by weakly supervised learning. Next, we augment the image guided by these attention maps, including attention cropping and attention dropping. The proposed WS-DAN improves the classification accuracy in two folds. In the first stage, images can be seen better since more discriminative parts' features will be extracted. In the second stage, attention regions provide accurate location of object, which ensures our model to look at the object closer and further improve the performance. Comprehensive experiments in common fine-grained visual classification datasets show that our WS-DAN surpasses the state-of-the-art methods, which demonstrates its effectiveness. | To focus on local features, many methods rely on annotations of part locations or attributes. Part R-CNN @cite_44 extended R-CNN @cite_32 to detect objects and localize their parts under a geometric prior, then predicted a fine-grained category from a pose-normalized representation. @cite_33 proposed a feedback-control framework, Deep LAC, to back-propagate alignment and classification errors to localization; they also proposed a valve linkage function (VLF) to connect the localization and classification modules. | {
"cite_N": [
"@cite_44",
"@cite_32",
"@cite_33"
],
"mid": [
"56385144",
"",
"2891951760"
],
"abstract": [
"Semantic part localization can facilitate fine-grained categorization by explicitly isolating subtle appearance differences associated with specific object parts. Methods for pose-normalized representations have been proposed, but generally presume bounding box annotations at test time due to the difficulty of object detection. We propose a model for fine-grained categorization that overcomes these limitations by leveraging deep convolutional features computed on bottom-up region proposals. Our method learns whole-object and part detectors, enforces learned geometric constraints between them, and predicts a fine-grained category from a pose-normalized representation. Experiments on the Caltech-UCSD bird dataset confirm that our method outperforms state-of-the-art fine-grained categorization methods in an end-to-end evaluation without requiring a bounding box at test time.",
"",
"Fine-grained classification is challenging due to the difficulty of finding discriminative features. Finding those subtle traits that fully characterize the object is not straightforward. To handle this circumstance, we propose a novel self-supervision mechanism to effectively localize informative regions without the need of bounding-box part annotations. Our model, termed NTS-Net for Navigator-Teacher-Scrutinizer Network, consists of a Navigator agent, a Teacher agent and a Scrutinizer agent. In consideration of intrinsic consistency between informativeness of the regions and their probability being ground-truth class, we design a novel training paradigm, which enables Navigator to detect most informative regions under the guidance from Teacher. After that, the Scrutinizer scrutinizes the proposed regions from Navigator and makes predictions. Our model can be viewed as a multi-agent cooperation, wherein agents benefit from each other, and make progress together. NTS-Net can be trained end-to-end, while provides accurate fine-grained classification predictions as well as highly informative regions during inference. We achieve state-of-the-art performance in extensive benchmark datasets."
]
} |
1901.09891 | 2913012226 | Data augmentation is usually adopted to increase the amount of training data, prevent overfitting and improve the performance of deep models. However, in practice, random data augmentation, such as random image cropping, is low-efficiency and might introduce many uncontrolled background noises. In this paper, we propose Weakly Supervised Data Augmentation Network (WS-DAN) to explore the potential of data augmentation. Specifically, for each training image, we first generate attention maps to represent the object's discriminative parts by weakly supervised learning. Next, we augment the image guided by these attention maps, including attention cropping and attention dropping. The proposed WS-DAN improves the classification accuracy in two folds. In the first stage, images can be seen better since more discriminative parts' features will be extracted. In the second stage, attention regions provide accurate location of object, which ensures our model to look at the object closer and further improve the performance. Comprehensive experiments in common fine-grained visual classification datasets show that our WS-DAN surpasses the state-of-the-art methods, which demonstrates its effectiveness. | Weakly supervised learning is an umbrella term that covers a variety of studies that attempt to construct predictive models by learning with weak supervision @cite_0 , which mainly consists of incomplete, inexact and inaccurate supervision. Localizing object or its parts only by image-level annotation belongs to the inexact supervision. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2746791238"
],
"abstract": [
"Supervised learning techniques construct predictive models by learning from a large number of training examples, where each training example has a label indicating its ground-truth output. Though current techniques have achieved great success, it is noteworthy that in many tasks it is difficult to get strong supervision information like fully ground-truth labels due to the high cost of the data-labeling process. Thus, it is desirable for machine-learning techniques to work with weak supervision. This article reviews some research progress of weakly supervised learning, focusing on three typical types of weak supervision: incomplete supervision, where only a subset of training data is given with labels; inexact supervision, where the training data are given with only coarse-grained labels; and inaccurate supervision, where the given labels are not always ground-truth."
]
} |
1901.09891 | 2913012226 | Data augmentation is usually adopted to increase the amount of training data, prevent overfitting and improve the performance of deep models. However, in practice, random data augmentation, such as random image cropping, is low-efficiency and might introduce many uncontrolled background noises. In this paper, we propose Weakly Supervised Data Augmentation Network (WS-DAN) to explore the potential of data augmentation. Specifically, for each training image, we first generate attention maps to represent the object's discriminative parts by weakly supervised learning. Next, we augment the image guided by these attention maps, including attention cropping and attention dropping. The proposed WS-DAN improves the classification accuracy in two folds. In the first stage, images can be seen better since more discriminative parts' features will be extracted. In the second stage, attention regions provide accurate location of object, which ensures our model to look at the object closer and further improve the performance. Comprehensive experiments in common fine-grained visual classification datasets show that our WS-DAN surpasses the state-of-the-art methods, which demonstrates its effectiveness. | Accurately locating the object or its parts with only image-level supervision is very challenging. Early works @cite_7 @cite_13 usually generate class-specific localization maps by Global Average Pooling (GAP) @cite_3 . The activation area can reflect the location of an object. However, training with a softmax cross-entropy loss usually leads the model to pay attention to only the most discriminative location, whose output bounding box covers just part of the object. To locate the whole object, Singh et al. @cite_22 randomly hide patches of input images so as to force the network to find other discriminative parts. However, the process is inefficient due to the lack of high-level guidance. Zhang et al. proposed the Adversarial Complementary Learning (ACoL) @cite_4 approach to discover entire objects by training two adversarial complementary classifiers, which can locate different object parts and discover complementary regions that belong to the same object. Nevertheless, there are only two complementary regions in their implementation, which limits accuracy. Our attention-guided data augmentation encourages the model to pay attention to multiple object parts, extract more discriminative features and achieve better performance in object localization. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_3",
"@cite_13"
],
"mid": [
"2964274719",
"2963045696",
"2295107390",
"",
"2962761264"
],
"abstract": [
"In this work, we propose Adversarial Complementary Learning (ACoL) to automatically localize integral objects of semantic interest with weak supervision. We first mathematically prove that class localization maps can be obtained by directly selecting the class-specific feature maps of the last convolutional layer, which paves a simple way to identify object regions. We then present a simple network architecture including two parallel-classifiers for object localization. Specifically, we leverage one classification branch to dynamically localize some discriminative object regions during the forward pass. Although it is usually responsive to sparse parts of the target objects, this classifier can drive the counterpart classifier to discover new and complementary object regions by erasing its discovered regions from the feature maps. With such an adversarial learning, the two parallel-classifiers are forced to leverage complementary object regions for classification and can finally generate integral object localization together. The merits of ACoL are mainly two-fold: 1) it can be trained in an end-to-end manner; 2) dynamically erasing enables the counterpart classifier to discover complementary object regions more effectively. We demonstrate the superiority of our ACoL approach in a variety of experiments. In particular, the Top-1 localization error rate on the ILSVRC dataset is 45.14 , which is the new state-of-the-art.",
"We propose ‘Hide-and-Seek’, a weakly-supervised framework that aims to improve object localization in images and action localization in videos. Most existing weakly-supervised methods localize only the most discriminative parts of an object rather than all relevant parts, which leads to suboptimal performance. Our key idea is to hide patches in a training image randomly, forcing the network to seek other relevant parts when the most discriminative part is hidden. Our approach only needs to modify the input image and can work with any network designed for object localization. During testing, we do not need to hide any patches. Our Hide-and-Seek approach obtains superior performance compared to previous methods for weakly-supervised object localization on the ILSVRC dataset. We also demonstrate that our framework can be easily extended to weakly-supervised action localization.",
"In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability despite being trained on imagelevel labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014 without training on any bounding box annotation. We demonstrate in a variety of experiments that our network is able to localize the discriminative image regions despite just being trained for solving classification task1.",
"",
"Global covariance pooling in convolutional neural networks has achieved impressive improvement over the classical first-order pooling. Recent works have shown matrix square root normalization plays a central role in achieving state-of-the-art performance. However, existing methods depend heavily on eigendecomposition (EIG) or singular value decomposition (SVD), suffering from inefficient training due to limited support of EIG and SVD on GPU. Towards addressing this problem, we propose an iterative matrix square root normalization method for fast end-to-end training of global covariance pooling networks. At the core of our method is a meta-layer designed with loop-embedded directed graph structure. The meta-layer consists of three consecutive nonlinear structured layers, which perform pre-normalization, coupled matrix iteration and post-compensation, respectively. Our method is much faster than EIG or SVD based ones, since it involves only matrix multiplications, suitable for parallel implementation on GPU. Moreover, the proposed network with ResNet architecture can converge in much less epochs, further accelerating network training. On large-scale ImageNet, we achieve competitive performance superior to existing counterparts. By finetuning our models pre-trained on ImageNet, we establish state-of-the-art results on three challenging fine-grained benchmarks. The source code and network models will be available at http: www.peihuali.org iSQRT-COV."
]
} |
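The GAP-based localization map mentioned above (the class activation map of @cite_7) reduces to weighting the last convolutional feature maps by the classifier weights of the target class; a minimal NumPy sketch, with illustrative shapes, follows.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Sketch of the GAP-based class activation map (CAM): weight the last
    convolutional feature maps by the classifier weights of the target class.
    feature_maps: (C, H, W) activations; fc_weights: (n_classes, C)."""
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    return cam / (cam.max() + 1e-8)  # normalize to [0, 1] for thresholding
```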
1901.09888 | 2913777072 | The increasing interest in user privacy is leading to new privacy preserving machine learning paradigms. In the Federated Learning paradigm, a master machine learning model is distributed to user clients, the clients use their locally stored data and model for both inference and calculating model updates. The model updates are sent back and aggregated on the server to update the master model, then redistributed to the clients. In this paradigm, the user data never leaves the client, greatly enhancing the user's privacy, in contrast to the traditional paradigm of collecting, storing and processing user data on a backend server beyond the user's control. In this paper we introduce, as far as we are aware, the first federated implementation of a Collaborative Filter. The federated updates to the model are based on a stochastic gradient approach. As a classical case study in machine learning, we explore a personalized recommendation system based on users' implicit feedback and demonstrate the method's applicability to both the MovieLens and an in-house dataset. Empirical validation confirms a collaborative filter can be federated without a loss of accuracy compared to a standard implementation, hence enhancing the user's privacy in a widely used recommender application while maintaining recommender performance. | This work lies at the intersection of three research topics: (i) matrix factorization, (ii) parallel & distributed, and (iii) federated learning. Recently, Alternating Least Squares (ALS) and Stochastic Gradient Descent (SGD) have gained much interest and have become the most popular algorithms for matrix factorization in recommender systems @cite_25 . The ALS algorithm learns the latent factor matrices by alternating between updates to one factor matrix while holding the other factor matrix fixed. Each iteration of updates to the latent factor matrices is referred to as an epoch. Although the time complexity per epoch is cubic in the number of factors, numerous studies show that ALS is well suited for parallelization @cite_13 @cite_12 @cite_9 @cite_10 @cite_4 . It is not merely a coincidence that ALS is the premier parallel matrix factorization implementation for CF in Apache Spark (https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html). | {
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_9",
"@cite_10",
"@cite_25",
"@cite_12"
],
"mid": [
"1511814458",
"2142466236",
"",
"2029463952",
"2054141820",
"2113802117"
],
"abstract": [
"Many recommendation systems suggest items to users by utilizing the techniques of collaborative filtering(CF) based on historical records of items that the users have viewed, purchased, or rated. Two major problems that most CF approaches have to contend with are scalability and sparseness of the user profiles. To tackle these issues, in this paper, we describe a CF algorithm alternating-least-squares with weighted-?-regularization(ALS-WR), which is implemented on a parallel Matlab platform. We show empirically that the performance of ALS-WR (in terms of root mean squared error(RMSE)) monotonically improves with both the number of features and the number of ALS iterations. We applied the ALS-WR algorithm on a large-scale CF problem, the Netflix Challenge, with 1000 hidden features and obtained a RMSE score of 0.8985, which is one of the best results based on a pure method. In addition, combining with the parallel version of other known methods, we achieved a performance improvement of 5.91 over Netflix's own CineMatch recommendation system. Our method is simple and scales well to very large datasets.",
"Matrix factorization, when the matrix has missing values, has become one of the leading techniques for recommender systems. To handle web-scale datasets with millions of users and billions of ratings, scalability becomes an important issue. Alternating least squares (ALS) and stochastic gradient descent (SGD) are two popular approaches to compute matrix factorization, and there has been a recent flurry of activity to parallelize these algorithms. However, due to the cubic time complexity in the target rank, ALS is not scalable to large-scale datasets. On the other hand, SGD conducts efficient updates but usually suffers from slow convergence that is sensitive to the parameters. Coordinate descent, a classical optimization approach, has been used for many other large-scale problems, but its application to matrix factorization for recommender systems has not been thoroughly explored. In this paper, we show that coordinate descent-based methods have a more efficient update rule compared to ALS and have faster and more stable convergence than SGD. We study different update sequences and propose the CCD++ algorithm, which updates rank-one factors one by one. In addition, CCD++ can be easily parallelized on both multi-core and distributed systems. We empirically show that CCD++ is much faster than ALS and SGD in both settings. As an example, with a synthetic dataset containing 14.6 billion ratings, on a distributed memory cluster with 64 processors, to deliver the desired test RMSE, CCD++ is 49 times faster than SGD and 20 times faster than ALS. When the number of processors is increased to 256, CCD++ takes only 16 s and is still 40 times faster than SGD and 20 times faster than ALS.",
"",
"The efficient, distributed factorization of large matrices on clusters of commodity machines is crucial to applying latent factor models in industrial-scale recommender systems. We propose an efficient, data-parallel low-rank matrix factorization with Alternating Least Squares which uses a series of broadcast-joins that can be efficiently executed with MapReduce. We empirically show that the performance of our solution is suitable for real-world use cases. We present experiments on two publicly available datasets and on a synthetic dataset termed Bigflix, generated from the Netflix dataset. Bigflix contains 25 million users and more than 5 billion ratings, mimicking data sizes recently reported as Netflix' production workload. We demonstrate that our approach is able to run an iteration of Alternating Least Squares in six minutes on this dataset. Our implementation has been contributed to the open source machine learning library Apache Mahout.",
"As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.",
"The collaborative filtering (CF) using known user ratings of items has proved to be effective for predicting user preferences in item selection. This thriving subfield of machine learning became popular in the late 1990s with the spread of online services that use recommender systems, such as Amazon, Yahoo! Music, and Netflix. CF approaches are usually designed to work on very large data sets. Therefore the scalability of the methods is crucial. In this work, we propose various scalable solutions that are validated against the Netflix Prize data set, currently the largest publicly available collection. First, we propose various matrix factorization (MF) based techniques. Second, a neighbor correction method for MF is outlined, which alloys the global perspective of MF and the localized property of neighbor based approaches efficiently. In the experimentation section, we first report on some implementation issues, and we suggest on how parameter optimization can be performed efficiently for MFs. We then show that the proposed scalable approaches compare favorably with existing ones in terms of prediction accuracy and or required training time. Finally, we report on some experiments performed on MovieLens and Jester data sets."
]
} |
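For reference, one ALS epoch for the factorization X ≈ UVᵀ with L2 regularization can be sketched as two regularized least-squares solves; the per-epoch cost is cubic in the number of factors k, as noted above. This dense-matrix version is a readability sketch only — production implementations (e.g. the Spark one cited above) iterate over observed ratings and exploit sparsity.

```python
import numpy as np

def als_epoch(X, U, V, lam=0.1):
    """One ALS epoch: solve for U with V fixed, then for V with U fixed.
    X: (n_users, n_items) ratings; U: (n_users, k); V: (n_items, k).
    Dense for readability; real code handles missing entries explicitly."""
    k = U.shape[1]
    # U = X V (V^T V + lam I)^{-1}, solved via the k x k normal equations.
    U[:] = np.linalg.solve(V.T @ V + lam * np.eye(k), V.T @ X.T).T
    # V = X^T U (U^T U + lam I)^{-1}.
    V[:] = np.linalg.solve(U.T @ U + lam * np.eye(k), U.T @ X).T
    return U, V
```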
1901.09888 | 2913777072 | The increasing interest in user privacy is leading to new privacy preserving machine learning paradigms. In the Federated Learning paradigm, a master machine learning model is distributed to user clients, the clients use their locally stored data and model for both inference and calculating model updates. The model updates are sent back and aggregated on the server to update the master model, then redistributed to the clients. In this paradigm, the user data never leaves the client, greatly enhancing the user's privacy, in contrast to the traditional paradigm of collecting, storing and processing user data on a backend server beyond the user's control. In this paper we introduce, as far as we are aware, the first federated implementation of a Collaborative Filter. The federated updates to the model are based on a stochastic gradient approach. As a classical case study in machine learning, we explore a personalized recommendation system based on users' implicit feedback and demonstrate the method's applicability to both the MovieLens and an in-house dataset. Empirical validation confirms a collaborative filter can be federated without a loss of accuracy compared to a standard implementation, hence enhancing the user's privacy in a widely used recommender application while maintaining recommender performance. | Federated Learning, on the other hand, is a distributed learning paradigm that essentially assumes user data is not available on central servers and is private and confidential. A prominent direction of research in this domain is based on the weighted averaging of model parameters @cite_17 @cite_24 . In practice, a master machine learning model is distributed to user clients. Each client updates its local copy of the model weights using the user's personal data and sends the updated weights to the server, which uses the weighted average of the clients' local model weights to update the master model. This federated averaging approach has recently attracted much attention for deep neural networks; however, the same approach may not be applicable to a wide class of other machine learning models such as matrix factorization. Classical studies based on federated averaging used CNNs trained on benchmark image recognition tasks @cite_17 , and LSTMs on language modeling tasks @cite_2 @cite_15 . As a follow-up analysis on federating deep learning models, numerous studies have been proposed addressing the optimization of communication payloads and the handling of noisy, unbalanced @cite_23 , non-IID and massively distributed data @cite_33 . | {
"cite_N": [
"@cite_33",
"@cite_24",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_17"
],
"mid": [
"2807006176",
"2900594532",
"",
"2900120080",
"2904190483",
""
],
"abstract": [
"Federated learning enables resource-constrained edge compute devices, such as mobile phones and IoT devices, to learn a shared model for prediction, while keeping the training data local. This decentralized approach to train models provides privacy, security, regulatory and economic benefits. In this work, we focus on the statistical challenge of federated learning when local data is non-IID. We first show that the accuracy of federated learning reduces significantly, by up to 55 for neural networks trained for highly skewed non-IID data, where each client device trains only on a single class of data. We further show that this accuracy reduction can be explained by the weight divergence, which can be quantified by the earth mover's distance (EMD) between the distribution over classes on each device and the population distribution. As a solution, we propose a strategy to improve training on non-IID data by creating a small subset of data which is globally shared between all the edge devices. Experiments show that accuracy can be increased by 30 for the CIFAR-10 dataset with only 5 globally shared data.",
"Federated learning is an approach to distributed machine learning where a global model is learned by aggregating models that have been trained locally on data-generating clients. Contrary to centralized optimization, clients can be very large in number and face challenges of data and network heterogeneity. Examples of clients include smartphones and connected vehicles, which highlights the practical relevance of federated learning. We benchmark three federated learning algorithms and compare their performance against a centralized approach where data resides on the server. The algorithms Federated Averaging (FedAvg), Federated Stochastic Variance Reduced Gradient, and CO-OP are evaluated on the MNIST dataset, using both i.i.d. and non-i.i.d. partitionings of the data. Our results show that FedAvg achieves the highest accuracy among the federated algorithms, regardless of how data was partitioned. Our comparison between FedAvg and centralized learning shows that they are practically equivalent when i.i.d. data is used. However, the centralized approach outperforms FedAvg with non-i.i.d. data.",
"",
"We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones. Server-based training using stochastic gradient descent is compared with training on client devices using the Federated Averaging algorithm. The federated algorithm, which enables training on a higher-quality dataset for this use case, is shown to achieve better prediction recall. This work demonstrates the feasibility and benefit of training language models on client devices without exporting sensitive user data to servers. The federated learning environment gives users greater control over the use of their data and simplifies the task of incorporating privacy by default with distributed training and aggregation across a population of client devices.",
"Federated learning is a distributed form of machine learning where both the training data and model training are decentralized. In this paper, we use federated learning in a commercial, global-scale setting to train, evaluate and deploy a model to improve virtual keyboard search suggestion quality without direct access to the underlying user data. We describe our observations in federated training, compare metrics to live deployments, and present resulting quality increases. In whole, we demonstrate how federated learning can be applied end-to-end to both improve user experiences and enhance user privacy.",
""
]
} |
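The weighted averaging step at the heart of the federated averaging scheme described above can be sketched in a few lines; the function and argument names are illustrative, not from any particular library.

```python
import numpy as np

def federated_averaging(client_weights, client_sizes):
    """Sketch of the FedAvg aggregation step: the master model is the average
    of the client models, weighted by each client's number of local samples.
    client_weights: one list of np.ndarray layer weights per client."""
    total = float(sum(client_sizes))
    return [
        sum(n / total * w[i] for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]
```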
1901.09774 | 2912208857 | Facial attributes are important since they provide a detailed description and determine the visual appearance of human faces. In this paper, we aim at converting a face image to a sketch while simultaneously generating facial attributes. To this end, we propose a novel Attribute-Guided Sketch Generative Adversarial Network (ASGAN) which is an end-to-end framework and contains two pairs of generators and discriminators, one of which is used to generate faces with attributes while the other one is employed for image-to-sketch translation. The two generators form a W-shaped network (W-net) and they are trained jointly with a weight-sharing constraint. Additionally, we also propose two novel discriminators, the residual one focusing on attribute generation and the triplex one helping to generate realistic looking sketches. To validate our model, we have created a new large dataset with 8,804 images, named the Attribute Face Photo & Sketch (AFPS) dataset which is the first dataset containing attributes associated to face sketch images. The experimental results demonstrate that the proposed network (i) generates more photo-realistic faces with sharper facial attributes than baselines and (ii) has good generalization capability on different generative tasks. | Based on CGANs, @cite_46 developed a generic framework, "Pix2pix", which is suitable for different generative tasks. In Pix2pix, a conditional image is adopted as a reference during training. The generator in Pix2pix is a U-net, which tries to synthesize a fake image conditioned on the given conditional image in order to fool the discriminator, while the discriminator tries to identify the fake image by comparing it with the corresponding target image. Under these settings, the discriminator takes pairs of images as input. The U-net is an Encoder-Decoder network with skip connections, in which the encoder consists of multiple convolution layers and the decoder consists of multiple deconvolution layers. @cite_46 added skip connections between each layer @math and layer @math , which allows feature sharing between the encoder and decoder, where @math is the total number of layers. All channels at layer @math are simply concatenated with those at layer @math by the skip connections. @cite_46 shares a similar goal with us, but it cannot solve the face-to-attributed-sketch translation task since it cannot convert a face image to a sketch conditioned on external facial attributes, while our ASGAN is specifically designed to tackle this task. | {
"cite_N": [
"@cite_46"
],
"mid": [
"2963073614"
],
"abstract": [
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either."
]
} |
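A minimal PyTorch sketch of the U-net skip-connection pattern described above, in which channels from an encoder layer are concatenated with the mirrored decoder layer; depths and channel counts are illustrative, far shallower than Pix2pix's actual generator.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Two-level encoder-decoder with one skip connection, illustrating the
    concatenation of encoder features into the decoder."""
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Conv2d(3, 64, 4, stride=2, padding=1)
        self.enc2 = nn.Conv2d(64, 128, 4, stride=2, padding=1)
        self.dec2 = nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1)
        self.dec1 = nn.ConvTranspose2d(64 + 64, 3, 4, stride=2, padding=1)

    def forward(self, x):
        e1 = torch.relu(self.enc1(x))
        e2 = torch.relu(self.enc2(e1))
        d2 = torch.relu(self.dec2(e2))
        # Skip connection: concatenate encoder channels with decoder channels.
        d1 = self.dec1(torch.cat([d2, e1], dim=1))
        return torch.tanh(d1)
```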
1901.09892 | 2914520039 | Neural networks play an increasingly important role in the field of machine learning and are included in many applications in society. Unfortunately, neural networks suffer from adversarial samples generated to attack them. However, most of the generation approaches either assume that the attacker has full knowledge of the neural network model or are limited by the type of attacked model. In this paper, we propose a new approach that generates a black-box attack to neural networks based on the swarm evolutionary algorithm. Benefiting from the improvements in the technology and theoretical characteristics of evolutionary algorithms, our approach has the advantages of effectiveness, black-box attack, generality, and randomness. Our experimental results show that both the MNIST images and the CIFAR-10 images can be perturbed to successfully generate a black-box attack with 100% probability on average. In addition, the proposed attack, which is successful on distilled neural networks with almost 100% probability, is resistant to defensive distillation. The experimental results also indicate that the robustness of the artificial intelligence algorithm is related to the complexity of the model and the data set. In addition, we find that the adversarial samples to some extent reproduce the characteristics of the sample data learned by the neural network model. | Some recent research has aimed to defend against adversarial samples and proposed approaches such as defensive distillation @cite_20 @cite_10 @cite_31 @cite_28 . However, experimental results show that these approaches do not perform well in particular situations, as they are unable to defend against high-quality adversarial samples @cite_3 . | {
"cite_N": [
"@cite_28",
"@cite_3",
"@cite_31",
"@cite_10",
"@cite_20"
],
"mid": [
"2950468330",
"2625220439",
"2594867206",
"2174868984",
"2606529538"
],
"abstract": [
"Machine learning and deep learning in particular has advanced tremendously on perceptual tasks in recent years. However, it remains vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to a human. In this work, we propose to augment deep neural networks with a small \"detector\" subnetwork which is trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. Our method is orthogonal to prior work on addressing adversarial perturbations, which has mostly focused on making the classification network itself more robust. We show empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans. Moreover, while the detectors have been trained to detect only a specific adversary, they generalize to similar and weaker adversaries. In addition, we propose an adversarial attack that fools both the classifier and the detector and a novel training procedure for the detector that counteracts this attack.",
"Ongoing research has proposed several methods to defend neural networks against adversarial examples, many of which researchers have shown to be ineffective. We ask whether a strong defense can be created by combining multiple (possibly weak) defenses. To answer this question, we study three defenses that follow this approach. Two of these are recently proposed defenses that intentionally combine components designed to work well together. A third defense combines three independent defenses. For all the components of these defenses and the combined defenses themselves, we show that an adaptive adversary can create adversarial examples successfully with low distortion. Thus, our work implies that ensemble of weak defenses is not sufficient to provide strong defense against adversarial examples.",
"Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. However, these models are vulnerable to adversarial perturbations--small input changes crafted explicitly to fool the model. In this paper, we ask whether a DNN can distinguish adversarial samples from their normal and noisy counterparts. We investigate model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model. The result is a method for implicit adversarial detection that is oblivious to the attack algorithm. We evaluate this method on a variety of standard datasets including MNIST and CIFAR-10 and show that it generalizes well across different architectures and attacks. Our findings report that 85-93 ROC-AUC can be achieved on a number of standard classification tasks with a negative class that consists of both normal and noisy samples.",
"Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce effectiveness of sample creation from 95 to less than 0.5 on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800 on one of the DNNs we tested.",
"Many machine learning classifiers are vulnerable to adversarial perturbations. An adversarial perturbation modifies an input to change a classifier's prediction without causing the input to seem substantially different to human perception. We deploy three methods to detect adversarial images. Adversaries trying to bypass our detectors must make the adversarial image less pathological or they will fail trying. Our best detection method reveals that adversarial images place abnormal emphasis on the lower-ranked principal components from PCA. Other detectors and a colorful saliency map are in an appendix."
]
} |
1907.09987 | 2963313981 | Bayesian inference is used extensively to infer and to quantify the uncertainty in a field of interest from a measurement of a related field when the two are linked by a physical model. Despite its many applications, Bayesian inference faces challenges when inferring fields that have discrete representations of large dimension, and/or have prior distributions that are difficult to represent mathematically. In this manuscript we consider the use of Generative Adversarial Networks (GANs) in addressing these challenges. A GAN is a type of deep neural network equipped with the ability to learn the distribution implied by multiple samples of a given field. Once trained on these samples, the generator component of a GAN maps the iid components of a low-dimensional latent vector to an approximation of the distribution of the field of interest. In this work we demonstrate how this approximate distribution may be used as a prior in a Bayesian update, and how it addresses the challenges associated with characterizing complex prior distributions and the large dimension of the inferred field. We demonstrate the efficacy of this approach by applying it to the problem of inferring and quantifying uncertainty in the initial temperature field in a heat conduction problem from a noisy measurement of the temperature at later time. | The solution of an inverse problem using sample-based priors has a rich history (see @cite_6 @cite_33 for example), as does the idea of reducing the dimension of the parameter space by mapping it to a lower-dimensional space @cite_17 @cite_27 . However, the use of GANs in these tasks is novel. | {
"cite_N": [
"@cite_27",
"@cite_33",
"@cite_6",
"@cite_17"
],
"mid": [
"2075385817",
"2019588162",
"1983546683",
"2074686342"
],
"abstract": [
"A greedy algorithm for the construction of a reduced model with reduction in both parameter and state is developed for an efficient solution of statistical inverse problems governed by partial differential equations with distributed parameters. Large-scale models are too costly to evaluate repeatedly, as is required in the statistical setting. Furthermore, these models often have high-dimensional parametric input spaces, which compounds the difficulty of effectively exploring the uncertainty space. We simultaneously address both challenges by constructing a projection-based reduced model that accepts low-dimensional parameter inputs and whose model evaluations are inexpensive. The associated parameter and state bases are obtained through a greedy procedure that targets the governing equations, model outputs, and prior information. The methodology and results are presented for groundwater inverse problems in one and two dimensions.",
"The construction of suitable preconditioners for the solution of linear systems by iterative methods continues to receive a lot of interest. Traditionally, preconditioners are designed to accelerate convergence of iterative methods to the solution of the linear system. However, when truncated iterative methods are used as regularized solvers of ill-posed problems, the rate of convergence is seldom an issue, and traditional preconditioners are of little use. Here, we present a new approach to the design of preconditioners for ill-posed linear systems, suitable when statistical information about the desired solution or a collection of typical solutions is available. The preconditioners are constructed from the covariance matrix of the solution viewed as a random variable. Since the construction is based on available prior information, these preconditioners are called priorconditioners. A statistical truncation index selection is also presented. Computed examples illustrate how effective such priorconditioners can be.",
"In this paper, we consider the impedance tomography problem of estimating the conductivity distribution within the body from static current voltage measurements on the body's surface. We present a new method of implementing prior information of the conductivities in the optimization algorithm. The method is based on the approximation of the prior covariance matrix by simulated samples of feasible conductivities. The reduction of the dimensionality of the optimization problem is performed by principal component analysis (PCA).",
"We consider a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a spatial or temporal field, endowed with a hierarchical Gaussian process prior. Computational challenges in this construction arise from the need for repeated evaluations of the forward model (e.g., in the context of Markov chain Monte Carlo) and are compounded by high dimensionality of the posterior. We address these challenges by introducing truncated Karhunen-Loeve expansions, based on the prior distribution, to efficiently parameterize the unknown field and to specify a stochastic forward problem whose solution captures that of the deterministic forward model over the support of the prior. We seek a solution of this problem using Galerkin projection on a polynomial chaos basis, and use the solution to construct a reduced-dimensionality surrogate posterior density that is inexpensive to evaluate. We demonstrate the formulation on a transient diffusion equation with prescribed source terms, inferring the spatially-varying diffusivity of the medium from limited and noisy data."
]
} |
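A minimal sketch of how the learned GAN prior above can be used in practice: since the generator maps a low-dimensional latent vector with a simple prior to the field of interest, inference can be performed over the latent vector instead of the high-dimensional field. The generator, forward operator, and dimensions below are untrained stand-ins, and the sketch computes only a MAP point in latent space rather than the full Bayesian update.

```python
import torch

latent_dim, field_dim, data_dim, sigma = 8, 100, 32, 0.1
# Stand-ins: `g` plays the role of a trained GAN generator and `forward_op`
# the role of a differentiable physics model (e.g. a heat-conduction solve).
g = torch.nn.Sequential(torch.nn.Linear(latent_dim, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, field_dim))
forward_op = torch.nn.Linear(field_dim, data_dim)
y = torch.randn(data_dim)                      # noisy measurement

z = torch.zeros(latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    # Negative log-posterior over z: Gaussian likelihood plus N(0, I) latent prior.
    misfit = ((forward_op(g(z)) - y) ** 2).sum() / (2 * sigma ** 2)
    (misfit + 0.5 * (z ** 2).sum()).backward()
    opt.step()

x_map = g(z).detach()                          # MAP estimate of the inferred field
```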
1907.09987 | 2963313981 | Bayesian inference is used extensively to infer and to quantify the uncertainty in a field of interest from a measurement of a related field when the two are linked by a physical model. Despite its many applications, Bayesian inference faces challenges when inferring fields that have discrete representations of large dimension, and/or have prior distributions that are difficult to represent mathematically. In this manuscript we consider the use of Generative Adversarial Networks (GANs) in addressing these challenges. A GAN is a type of deep neural network equipped with the ability to learn the distribution implied by multiple samples of a given field. Once trained on these samples, the generator component of a GAN maps the iid components of a low-dimensional latent vector to an approximation of the distribution of the field of interest. In this work we demonstrate how this approximate distribution may be used as a prior in a Bayesian update, and how it addresses the challenges associated with characterizing complex prior distributions and the large dimension of the inferred field. We demonstrate the efficacy of this approach by applying it to the problem of inferring and quantifying uncertainty in the initial temperature field in a heat conduction problem from a noisy measurement of the temperature at later time. | Recently, a number of authors have considered the use of machine learning-based methods for solving inverse problems. These include the use of convolutional neural networks (CNNs) to solve physics-driven inverse problems @cite_18 @cite_23 @cite_46 , and GANs to solve problems in computer vision @cite_57 @cite_65 @cite_29 @cite_54 @cite_49 @cite_15 @cite_51 @cite_47 . There is also a growing body of work dedicated to using GANs to learn regularizers in solving inverse problems @cite_31 and in compressed sensing @cite_32 @cite_58 @cite_19 @cite_13 @cite_30 . However, these approaches differ from ours in at least two significant ways. First, they solve the inverse problem as an optimization problem and do not rely on Bayesian inference; as a result, regularization is added in an ad-hoc manner, and no attempt is made to quantify the uncertainty in the inferred field. Second, the forward map is assumed to satisfy an extension of the restricted isometry property, which may not be the case for forward maps induced by physics-based operators. | {
"cite_N": [
"@cite_13",
"@cite_30",
"@cite_18",
"@cite_47",
"@cite_31",
"@cite_15",
"@cite_29",
"@cite_54",
"@cite_65",
"@cite_32",
"@cite_57",
"@cite_19",
"@cite_23",
"@cite_49",
"@cite_46",
"@cite_58",
"@cite_51"
],
"mid": [
"",
"",
"2607406448",
"",
"2963226480",
"",
"",
"",
"",
"2595294663",
"2604885021",
"",
"2574952845",
"",
"2946534073",
"",
""
],
"abstract": [
"",
"",
"We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the 'gradient' component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Sheep-Logan phantom as well as a head CT. The outcome is compared against filtered backprojection and total variation reconstruction and the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of pixel images in about 0.4 s using a single graphics processing unit (GPU).",
"",
"Inverse Problems in medical imaging and computer vision are traditionally solved using purely model-based methods. Among those variational regularization models are one of the most popular approaches. We propose a new framework for applying data-driven approaches to inverse problems, using a neural network as a regularization functional. The network learns to discriminate between the distribution of ground truth images and the distribution of unregularized reconstructions. Once trained, the network is applied to the inverse problem by solving the corresponding variational problem. Unlike other data-based approaches for inverse problems, the algorithm can be applied even if only unsupervised training data is available. Experiments demonstrate the potential of the framework for denoising on the BSDS dataset and for computer tomography reconstruction on the LIDC dataset.",
"",
"",
"",
"",
"The goal of compressed sensing is to estimate a vector from an underdetermined system of noisy linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain. For almost all results in this literature, the structure is represented by sparsity in a well-chosen basis. We show how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all. Instead, we suppose that vectors lie near the range of a generative model G : ℝk → ℝn. Our main theorem is that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice for an l2 l2 recovery guarantee. We demonstrate our results using generative models from published variational autoencoder and generative adversarial networks. Our method can use 5-10x fewer measurements than Lasso for the same accuracy.",
"While deep learning methods have achieved state-of-theart performance in many challenging inverse problems like image inpainting and super-resolution, they invariably involve problem-specific training of the networks. Under this approach, each inverse problem requires its own dedicated network. In scenarios where we need to solve a wide variety of problems, e.g., on a mobile camera, it is inefficient and expensive to use these problem-specific networks. On the other hand, traditional methods using analytic signal priors can be used to solve any linear inverse problem; this often comes with a performance that is worse than learning-based methods. In this work, we provide a middle ground between the two kinds of methods — we propose a general framework to train a single deep neural network that solves arbitrary linear inverse problems. We achieve this by training a network that acts as a quasi-projection operator for the set of natural images and show that any linear inverse problem involving natural images can be solved using iterative methods. We empirically show that the proposed framework demonstrates superior performance over traditional methods using wavelet sparsity prior while achieving performance comparable to specially-trained networks on tasks including compressive sensing and pixel-wise inpainting.",
"",
"In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise non-linearity) when the normal operator ( @math , where @math is the adjoint of the forward imaging operator, @math ) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a @math image on the GPU.",
"",
"Abstract The ability to make decisions based on quantities of interest that depend on variables inferred from measurement finds application in different fields of mechanics and physics. The evaluation of the inferred variables, and hence the quantities of interest, from the measurement typically requires the solution of an inverse problem. For example, in medical imaging the elastic heterogeneity of a tumor and its nonlinear elastic response can be used to distinguish benign tumors from their malignant counterparts. These images of linear and nonlinear elastic parameters of tissue are typically obtained by using a measured displacement field and solving a complex inverse elasticity problem. In this paper we consider circumventing the solution of the inverse problem by using measured displacements as input to a deep convolutional neural network (CNN) and training it to classify tumors on the basis of their elastic heterogeneity and nonlinearity. For a simple, 5-layer CNN trained with 8,000 samples for heterogeneity, and a 4-layer CNN trained with 4,000 samples for nonlinear elasticity we report classification accuracies in the range of 99 . 7 − 99 . 9 . The training and testing data are both obtained from the forward solution of finite element models of samples. We also analyze the weights of the trained model to understand the process through which the network extracts features of elastic moduli from the input displacement images. Finally, we apply the nonlinear elasticity classifier, which is trained entirely using simulated data, to displacement images obtained from ten patients with breast lesions and note that it correctly classifies eight out of ten cases. This application illustrates how data from physics-based models can be used in improving the performance of a data-driven algorithm in data-sparse scenarios.",
"",
""
]
} |
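A minimal sketch of the optimization-style recovery contrasted with the Bayesian approach above, in the spirit of compressed sensing with generative models: minimize the data misfit over the latent input of a generator, with no prior term and hence no uncertainty estimate. The generator, dimensions, and hyperparameters below are hypothetical.

```python
import torch

n, m, k = 100, 30, 8                            # signal dim, measurements, latent dim
G = torch.nn.Sequential(torch.nn.Linear(k, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, n))  # stand-in for a trained generator
A = torch.randn(m, n) / m ** 0.5                 # random Gaussian measurement matrix
y = A @ torch.randn(n)                           # synthetic measurement

best_z, best_loss = None, float("inf")
for _ in range(5):                               # random restarts for the non-convex search
    z = torch.randn(k, requires_grad=True)
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        loss = ((A @ G(z) - y) ** 2).sum()       # pure data misfit, no prior term
        loss.backward()
        opt.step()
    if loss.item() < best_loss:
        best_z, best_loss = z.detach(), loss.item()

x_hat = G(best_z)                                # recovered signal lies in range(G)
```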
1907.09987 | 2963313981 | Bayesian inference is used extensively to infer and to quantify the uncertainty in a field of interest from a measurement of a related field when the two are linked by a physical model. Despite its many applications, Bayesian inference faces challenges when inferring fields that have discrete representations of large dimension, and/or have prior distributions that are difficult to represent mathematically. In this manuscript we consider the use of Generative Adversarial Networks (GANs) in addressing these challenges. A GAN is a type of deep neural network equipped with the ability to learn the distribution implied by multiple samples of a given field. Once trained on these samples, the generator component of a GAN maps the iid components of a low-dimensional latent vector to an approximation of the distribution of the field of interest. In this work we demonstrate how this approximate distribution may be used as a prior in a Bayesian update, and how it addresses the challenges associated with characterizing complex prior distributions and the large dimension of the inferred field. We demonstrate the efficacy of this approach by applying it to the problem of inferring and quantifying uncertainty in the initial temperature field in a heat conduction problem from a noisy measurement of the temperature at later time. | Recently, the approach described in @cite_36 utilizes GANs in a Bayesian setting; however, the GAN is trained to approximate the posterior distribution (and not the prior, as in our case), and training is done in a supervised fashion. That is, paired samples of the measurement @math and the corresponding true solution @math are required. In contrast, our approach is unsupervised, where we require only samples of the true solution @math to train the GAN prior. | {
"cite_N": [
"@cite_36"
],
"mid": [
"2901093491"
],
"abstract": [
"Characterizing statistical properties of solutions of inverse problems is essential for decision making. Bayesian inversion offers a tractable framework for this purpose, but current approaches are computationally unfeasible for most realistic imaging applications in the clinic. We introduce two novel deep learning based methods for solving large-scale inverse problems using Bayesian inversion: a sampling based method using a WGAN with a novel mini-discriminator and a direct approach that trains a neural network using a novel loss function. The performance of both methods is demonstrated on image reconstruction in ultra low dose 3D helical CT. We compute the posterior mean and standard deviation of the 3D images followed by a hypothesis test to assess whether a \"dark spot\" in the liver of a cancer stricken patient is present. Both methods are computationally efficient and our evaluation shows very promising performance that clearly supports the claim that Bayesian inversion is usable for 3D imaging in time critical applications."
]
} |
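A minimal sketch of the supervised alternative discussed above: a conditional generator, trained on paired (measurement, solution) samples, emits approximate posterior samples directly, and pixelwise mean and standard deviation follow by Monte Carlo. The generator below is an untrained stand-in with hypothetical dimensions.

```python
import torch

latent_dim, meas_dim, field_dim, n_samples = 8, 16, 64, 1000
# Stand-in for a conditional generator G(z, y) trained on paired data.
G = torch.nn.Sequential(torch.nn.Linear(latent_dim + meas_dim, 128),
                        torch.nn.ReLU(), torch.nn.Linear(128, field_dim))
y = torch.randn(meas_dim)                       # the observed measurement

with torch.no_grad():
    zs = torch.randn(n_samples, latent_dim)
    ys = y.expand(n_samples, -1)                # repeat the measurement per sample
    samples = G(torch.cat([zs, ys], dim=1))     # approximate posterior samples
    post_mean = samples.mean(dim=0)             # pixelwise posterior mean
    post_std = samples.std(dim=0)               # pixelwise posterior std. deviation
```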
1907.09798 | 2972934250 | Motivated by the success of encoding multi-scale contextual information for image analysis, we propose our PointAtrousGraph (PAG) - a deep permutation-invariant hierarchical encoder-decoder for efficiently exploiting multi-scale edge features in point clouds. Our PAG is constructed by several novel modules, such as Point Atrous Convolution (PAC), Edge-preserved Pooling (EP) and Edge-preserved Unpooling (EU). Similar to atrous convolution, our PAC can effectively enlarge receptive fields of filters and thus densely learn multi-scale point features. Following the idea of non-overlapping max-pooling operations, we propose our EP to preserve critical edge features during subsampling. Correspondingly, our EU modules gradually recover spatial information for edge features. In addition, we introduce chained skip subsampling/upsampling modules that directly propagate edge features to the final stage. Particularly, our proposed auxiliary loss functions can further improve our performance. Experimental results show that our PAG outperforms previous state-of-the-art methods on various 3D semantic perception applications. | Deep hierarchical encoder-decoder architectures are widely and successfully used for many image-based tasks, such as human pose estimation @cite_41 @cite_80 , semantic segmentation @cite_66 @cite_3 @cite_11 @cite_60 @cite_48 @cite_73 @cite_64 @cite_33 @cite_46 @cite_72 , optical flow estimation @cite_53 @cite_49 , and object detection @cite_6 @cite_74 @cite_56 . The stacked hourglass module, an encoder-decoder architecture based on successive steps of pooling and upsampling, produces impressive results on human pose estimation @cite_41 . @cite_6 introduced the feature pyramid network for object detection. As for semantic segmentation tasks, U-Net @cite_66 , SegNet @cite_3 and DeconvNet @cite_72 follow symmetric encoder-decoder architectures, and they refine the segmentation masks by utilizing features in low-level layers. DeepLabv3+ @cite_15 takes advantage of both the encoder-decoder architecture and the atrous convolution modules to effectively change the fields-of-view of filters to capture multi-scale contextual information, which provides new state-of-the-art performance on many semantic segmentation benchmarks. Such an architecture typically consists of: (1) an encoder that progressively reduces the feature resolution, enlarges the receptive fields of filters and captures higher semantic information; and (2) a decoder that gradually recovers the spatial information @cite_15 . | {
"cite_N": [
"@cite_64",
"@cite_33",
"@cite_60",
"@cite_41",
"@cite_48",
"@cite_53",
"@cite_15",
"@cite_3",
"@cite_6",
"@cite_56",
"@cite_72",
"@cite_49",
"@cite_74",
"@cite_80",
"@cite_46",
"@cite_73",
"@cite_66",
"@cite_11"
],
"mid": [
"2738804062",
"2963728677",
"2563705555",
"2307770531",
"2598666589",
"764651262",
"2964309882",
"2963881378",
"2565639579",
"2579985080",
"1745334888",
"2560474170",
"2572745118",
"",
"2963815618",
"2753588254",
"1901129140",
"2557406251"
],
"abstract": [
"Many machine vision applications require predictions for every pixel of the input image (for example semantic segmentation, boundary detection). Models for such problems usually consist of encoders which decreases spatial resolution while learning a high-dimensional representation, followed by decoders who recover the original input resolution and result in low-dimensional predictions. While encoders have been studied rigorously, relatively few studies address the decoder side. Therefore this paper presents an extensive comparison of a variety of decoders for a variety of pixel-wise prediction tasks. Our contributions are: (1) Decoders matter: we observe significant variance in results between different types of decoders on various problems. (2) We introduce a novel decoder: bilinear additive upsampling. (3) We introduce new residual-like connections for decoders. (4) We identify two decoder types which give a consistently high performance.",
"Recent progress in semantic segmentation has been driven by improving the spatial resolution under Fully Convolutional Networks (FCNs). To address this problem, we propose a Stacked Deconvolutional Network (SDN) for semantic segmentation. In SDN, multiple shallow deconvolutional networks, which are called as SDN units, are stacked one by one to integrate contextual information and bring the fine recovery of localization information. Meanwhile, inter-unit and intra-unit connections are designed to assist network training and enhance feature fusion since the connections improve the flow of information and gradient propagation throughout the network. Besides, hierarchical supervision is applied during the upsampling process of each SDN unit, which enhances the discrimination of feature representations and benefits the network optimization. We carry out comprehensive experiments and achieve the new state-ofthe- art results on four datasets, including PASCAL VOC 2012, CamVid, GATECH, COCO Stuff. In particular, our best model without CRF post-processing achieves an intersection-over-union score of 86.6 in the test set.",
"Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.",
"This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"One of recent trends [31, 32, 14] in network architecture design is stacking small filters (e.g., 1x1 or 3x3) in the entire network because the stacked small filters is more efficient than a large kernel, given the same computational complexity. However, in the field of semantic segmentation, where we need to perform dense per-pixel prediction, we find that the large kernel (and effective receptive field) plays an important role when we have to perform the classification and localization tasks simultaneously. Following our design principle, we propose a Global Convolutional Network to address both the classification and localization issues for the semantic segmentation. We also suggest a residual-based boundary refinement to further refine the object boundaries. Our approach achieves state-of-art performance on two public benchmarks and significantly outperforms previous results, 82.2 (vs 80.2 ) on PASCAL VOC 2012 dataset and 76.9 (vs 71.8 ) on Cityscapes dataset.",
"Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks CNNs succeeded at. In this paper we construct CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a large synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.",
"Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89 and 82.1 without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at https: github.com tensorflow models tree master research deeplab.",
"We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http: mi.eng.cam.ac.uk projects segnet .",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with @math input achieves 81.5 mAP on VOC2007 test, 80.0 mAP on VOC2012 test, and 33.2 mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset.",
"We propose a novel semantic segmentation algorithm by learning a deep deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixelwise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction, our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5 ) among the methods trained without using Microsoft COCO dataset through ensemble with the fully convolutional network.",
"The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50 . It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.",
"In recent years, we have seen tremendous progress in the field of object detection. Most of the recent improvements have been achieved by targeting deeper feedforward networks. However, many hard object categories such as bottle, remote, etc. require representation of fine details and not just coarse, semantic representations. But most of these fine details are lost in the early convolutional layers. What we need is a way to incorporate finer details from lower layers into the detection architecture. Skip connections have been proposed to combine high-level and low-level features, but we argue that selecting the right features from low-level requires top-down contextual information. Inspired by the human visual pathway, in this paper we propose top-down modulations as a way to incorporate fine details into the detection framework. Our approach supplements the standard bottom-up, feedforward ConvNet with a top-down modulation (TDM) network, connected using lateral connections. These connections are responsible for the modulation of lower layer filters, and the top-down network handles the selection and integration of contextual information and low-level features. The proposed TDM architecture provides a significant boost on the COCO testdev benchmark, achieving 28.6 AP for VGG16, 35.2 AP for ResNet101, and 37.3 for InceptionResNetv2 network, without any bells and whistles (e.g., multi-scale, iterative box refinement, etc.).",
"",
"Modern semantic segmentation frameworks usually combine low-level and high-level features from pre-trained backbone convolutional models to boost performance. In this paper, we first point out that a simple fusion of low-level and high-level features could be less effective because of the gap in semantic levels and spatial resolution. We find that introducing semantic information into low-level features and high-resolution details into high-level features is more effective for the later fusion. Based on this observation, we propose a new framework, named ExFuse, to bridge the gap between low-level and high-level features thus significantly improve the segmentation quality by 4.0 in total. Furthermore, we evaluate our approach on the challenging PASCAL VOC 2012 segmentation benchmark and achieve 87.9 mean IoU, which outperforms the previous state-of-the-art results.",
"Effective integration of local and global contextual information is crucial for dense labeling problems. Most existing methods based on an encoder-decoder architecture simply concatenate features from earlier layers to obtain higher-frequency details in the refinement stages. However, there are limits to the quality of refinement possible if ambiguous information is passed forward. In this paper we propose Gated Feedback Refinement Network (G-FRNet), an end-to-end deep learning framework for dense labeling tasks that addresses this limitation of existing methods. Initially, G-FRNet makes a coarse prediction and then it progressively refines the details by efficiently integrating local and global contextual information during the refinement stages. We introduce gate units that control the information passed forward in order to filter out ambiguity. Experiments on three challenging dense labeling datasets (CamVid, PASCAL VOC 2012, and Horse-Cow Parsing) show the effectiveness of our method. Our proposed approach achieves state-of-the-art results on the CamVid and Horse-Cow Parsing datasets, and produces competitive results on the PASCAL VOC 2012 dataset.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .",
"Semantic image segmentation is an essential component of modern autonomous driving systems, as an accurate understanding of the surrounding scene is crucial to navigation and action planning. Current state-of-the-art approaches in semantic image segmentation rely on pre-trained networks that were initially developed for classifying images as a whole. While these networks exhibit outstanding recognition performance (i.e., what is visible?), they lack localization accuracy (i.e., where precisely is something located?). Therefore, additional processing steps have to be performed in order to obtain pixel-accurate segmentation masks at the full image resolution. To alleviate this problem we propose a novel ResNet-like architecture that exhibits strong localization and recognition performance. We combine multi-scale context with pixel-level accuracy by using two processing streams within our network: One stream carries information at the full image resolution, enabling precise adherence to segment boundaries. The other stream undergoes a sequence of pooling operations to obtain robust features for recognition. The two streams are coupled at the full image resolution using residuals. Without additional processing steps and without pre-training, our approach achieves an intersection-over-union score of 71.8 on the Cityscapes dataset."
]
} |
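A minimal sketch of the atrous-convolution idea that runs through the entry above: parallel dilated 3x3 convolutions with different rates probe the same feature map at several effective fields-of-view without reducing resolution, loosely in the spirit of DeepLab's spatial pyramid pooling. Channel counts and rates are illustrative only.

```python
import torch
import torch.nn as nn

class MiniASPP(nn.Module):
    """Parallel dilated convolutions fused by a 1x1 convolution."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12)):
        super().__init__()
        # padding == dilation keeps the spatial size for 3x3 kernels.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates)
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

feat = torch.randn(1, 32, 64, 64)       # a feature map from some encoder
out = MiniASPP(32, 16)(feat)            # same spatial size, multi-scale context
```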
1907.09798 | 2972934250 | Motivated by the success of encoding multi-scale contextual information for image analysis, we propose our PointAtrousGraph (PAG) - a deep permutation-invariant hierarchical encoder-decoder for efficiently exploiting multi-scale edge features in point clouds. Our PAG is constructed by several novel modules, such as Point Atrous Convolution (PAC), Edge-preserved Pooling (EP) and Edge-preserved Unpooling (EU). Similar to atrous convolution, our PAC can effectively enlarge receptive fields of filters and thus densely learn multi-scale point features. Following the idea of non-overlapping max-pooling operations, we propose our EP to preserve critical edge features during subsampling. Correspondingly, our EU modules gradually recover spatial information for edge features. In addition, we introduce chained skip subsampling/upsampling modules that directly propagate edge features to the final stage. Particularly, our proposed auxiliary loss functions can further improve our performance. Experimental results show that our PAG outperforms previous state-of-the-art methods on various 3D semantic perception applications. | An unorganized point cloud is a simple and straightforward representation of 3D structures @cite_45 . The pioneering work PointNet @cite_45 achieves permutation-invariance by applying symmetric functions. Inspired by PointNet, many following works @cite_65 @cite_82 @cite_59 @cite_21 propose more complicated symmetric operations to exploit local geometrical details in 3D points. Semantic labeling on point clouds is more challenging than classification and object-part segmentation. SPG @cite_0 and SGPN @cite_50 both construct super point graphs to refine their semantic labeling results. RSNet @cite_32 introduces a slice pooling layer, a recurrent neural network layer and a slice unpooling layer, which projects unordered point features onto an ordered sequence of feature vectors. However, unlike many networks for semantic labeling tasks on images, they @cite_0 @cite_76 @cite_70 do not have hierarchical encoder-decoder architectures. | {
"cite_N": [
"@cite_70",
"@cite_21",
"@cite_65",
"@cite_32",
"@cite_0",
"@cite_45",
"@cite_59",
"@cite_50",
"@cite_76",
"@cite_82"
],
"mid": [
"",
"",
"2963121255",
"2963517242",
"2769473888",
"2560609797",
"",
"2769312834",
"",
""
],
"abstract": [
"",
"",
"Few prior works study deep learning on point sets. PointNet is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.",
"Point clouds are an efficient data format for 3D data. However, existing 3D segmentation methods for point clouds either do not model local dependencies [21] or require added computations [14, 23]. This work presents a novel 3D segmentation framework, RSNet1, to efficiently model local structures in point clouds. The key component of the RSNet is a lightweight local dependency module. It is a combination of a novel slice pooling layer, Recurrent Neural Network (RNN) layers, and a slice unpooling layer. The slice pooling layer is designed to project features of unordered points onto an ordered sequence of feature vectors so that traditional end-to-end learning algorithms (RNNs) can be applied. The performance of RSNet is validated by comprehensive experiments on the S3DIS[1], ScanNet[3], and ShapeNet [34] datasets. In its simplest form, RSNets surpass all previous state-of-the-art methods on these benchmarks. And comparisons against previous state-of-the-art methods [21, 23] demonstrate the efficiency of RSNets.",
"We propose a novel deep learning-based framework to tackle the challenge of semantic segmentation of large-scale point clouds of millions of points. We argue that the organization of 3D point clouds can be efficiently captured by a structure called superpoint graph (SPG), derived from a partition of the scanned scene into geometrically homogeneous elements. SPGs offer a compact yet rich representation of contextual relationships between object parts, which is then exploited by a graph convolutional network. Our framework sets a new state of the art for segmenting outdoor LiDAR scans (+11.9 and +8.8 mIoU points for both Semantic3D test sets), as well as indoor scans (+12.4 mIoU points for the S3DIS dataset).",
"Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.",
"",
"We introduce Similarity Group Proposal Network (SGPN), a simple and intuitive deep learning framework for 3D object instance segmentation on point clouds. SGPN uses a single network to predict point grouping proposals and a corresponding semantic class for each proposal, from which we can directly extract instance segmentation results. Important to the effectiveness of SGPN is its novel representation of 3D instance segmentation results in the form of a similarity matrix that indicates the similarity between each pair of points in embedded feature space, thus producing an accurate grouping proposal for each point. Experimental results on various 3D scenes show the effectiveness of our method on 3D instance segmentation, and we also evaluate the capability of SGPN to improve 3D object detection and semantic segmentation results. We also demonstrate its flexibility by seamlessly incorporating 2D CNN features into the framework to boost performance.",
"",
""
]
} |
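A minimal sketch of the permutation-invariance mechanism attributed to PointNet above: a weight-shared per-point MLP followed by a symmetric max-pooling, so reordering the input points leaves the global feature unchanged. Layer sizes are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 32), nn.ReLU(),
                                 nn.Linear(32, feat_dim))

    def forward(self, pts):                    # pts: (batch, n_points, 3)
        per_point = self.mlp(pts)              # same weights for every point
        return per_point.max(dim=1).values     # symmetric (order-free) aggregation

pts = torch.rand(2, 1024, 3)
net = TinyPointNet()
g1 = net(pts)
g2 = net(pts[:, torch.randperm(1024)])         # shuffle the point order
assert torch.allclose(g1, g2)                  # identical global feature
```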
1907.09760 | 2963048191 | We present NOLBO, a variational observation model estimation for 3D multi-object from 2D single shot. Previous probabilistic instance-level understandings mainly consider the single-object image, not a single shot with multiple objects; relations between objects and the entire scene are out of their focus. The objectness of each observation also hardly joins their model. Therefore, we propose a method to approximate the Bayesian observation model of scene-level 3D multi-object understanding. By exploiting a variational auto-encoder (VAE), we estimate latent variables from the entire scene, which follow tractable distributions and concurrently imply 3D full shape and pose. To perform object-oriented data association and probabilistic simultaneous localization and mapping (SLAM), our observation models can easily be adopted for probabilistic inference by replacing object-oriented features with latent variables. | With the recent advent of neural networks, a number of single-object classification and detection methods with high performance have been proposed @cite_31 @cite_36 @cite_39 . Beyond obtaining one feature vector from one image for an object, several multi-object detection techniques from a single shot have been developed by introducing new network structures @cite_12 @cite_59 @cite_51 @cite_16 @cite_38 . In particular, some of these methods can be applied to various real-time tasks since the whole detection network is composed of a single network pipeline. | {
"cite_N": [
"@cite_38",
"@cite_36",
"@cite_39",
"@cite_59",
"@cite_31",
"@cite_16",
"@cite_51",
"@cite_12"
],
"mid": [
"2884561390",
"2194775991",
"1686810756",
"",
"2163605009",
"",
"",
"2613718673"
],
"abstract": [
"The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https: github.com facebookresearch Detectron.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.",
"",
"",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn."
]
} |
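A minimal sketch of the focal loss described in the entry above: binary cross-entropy rescaled by (1 - p_t)^gamma so that easy, well-classified examples contribute little under extreme class imbalance. The alpha = 0.25, gamma = 2 defaults follow the paper; the data here is synthetic.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss; targets are 0/1 floats, logits are raw scores."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = targets * p + (1 - targets) * (1 - p)            # prob. of the true class
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

logits = torch.randn(8)
targets = (torch.rand(8) > 0.9).float()                    # heavy class imbalance
loss = focal_loss(logits, targets)
```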
1907.09760 | 2963048191 | We present NOLBO, a variational observation model estimation for 3D multi-object from 2D single shot. Previous probabilistic instance-level understandings mainly consider the single-object image, not a single shot with multiple objects; relations between objects and the entire scene are out of their focus. The objectness of each observation also hardly joins their model. Therefore, we propose a method to approximate the Bayesian observation model of scene-level 3D multi-object understanding. By exploiting a variational auto-encoder (VAE), we estimate latent variables from the entire scene, which follow tractable distributions and concurrently imply 3D full shape and pose. To perform object-oriented data association and probabilistic simultaneous localization and mapping (SLAM), our observation models can easily be adopted for probabilistic inference by replacing object-oriented features with latent variables. | Various studies have also been conducted to understand instance-level representations from 2D images, such as object shape, orientation or bounding box. @cite_4 @cite_2 @cite_55 and @cite_48 estimate the orientation of the object by viewpoint classification with discretized bins. In addition, 3D bounding box regression has been carried out to obtain the object location and orientation @cite_49 @cite_6 @cite_10 @cite_40 . In order to estimate the distinct 3D shape of objects, @cite_20 aligns the prior shape to a single object image through key point matching and estimates its 3D shape and orientation together. @cite_0 estimates the 3D mesh with a linear combination of parameterized prior shapes. In @cite_23 @cite_37 @cite_17 @cite_41 , non-linear regression and latent variables of neural networks have been actively utilized for 3D reconstruction from 2D images. | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_41",
"@cite_48",
"@cite_55",
"@cite_6",
"@cite_0",
"@cite_40",
"@cite_49",
"@cite_2",
"@cite_23",
"@cite_10",
"@cite_20",
"@cite_17"
],
"mid": [
"2964137676",
"1591870335",
"",
"2799123546",
"1949483711",
"2768879211",
"2773522905",
"2560544142",
"2229637417",
"2518803647",
"2546066744",
"2600447016",
"1946609740",
"2963735494"
],
"abstract": [
"What is a good vector representation of an object? We believe that it should be generative in 3D, in the sense that it can produce new 3D objects; as well as be predictable from 2D, in the sense that it can be perceived from 2D images. We propose a novel architecture, called the TL-embedding network, to learn an embedding space with these properties. The network consists of two components: (a) an autoencoder that ensures the representation is generative; and (b) a convolutional network that ensures the representation is predictable. This enables tackling a number of tasks including voxel prediction from 2D images and 3D model retrieval. Extensive experimental analysis demonstrates the usefulness and versatility of this embedding.",
"Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.",
"",
"We present a fast inverse-graphics framework for instance-level 3D scene understanding. We train a deep convolutional network that learns to map image regions to the full 3D shape and pose of all object instances in the image. Our method produces a compact 3D representation of the scene, which can be readily used for applications like autonomous driving. Many traditional 2D vision outputs, like instance segmentations and depth-maps, can be obtained by simply rendering our output 3D scene model. We exploit class-specific shape priors by learning a low dimensional shape-space from collections of CAD models. We present novel representations of shape and pose, that strive towards better 3D equivariance and generalization. In order to exploit rich supervisory signals in the form of 2D annotations like segmentation, we propose a differentiable Render-and-Compare loss that allows 3D shape and pose to be learned with 2D supervision. We evaluate our method on the challenging real-world datasets of Pascal3D+ and KITTI, where we achieve state-of-the-art results.",
"We characterize the problem of pose estimation for rigid objects in terms of determining viewpoint to explain coarse pose and keypoint prediction to capture the finer details. We address both these tasks in two different settings - the constrained setting with known bounding boxes and the more challenging detection setting where the aim is to simultaneously detect and correctly estimate pose of objects. We present Convolutional Neural Network based architectures for these and demonstrate that leveraging viewpoint estimates can substantially improve local appearance based keypoint predictions. In addition to achieving significant improvements over state-of-the-art in the above tasks, we analyze the error modes and effect of object characteristics on performance to guide future efforts towards this goal.",
"We propose a single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses. Unlike a recently proposed single-shot technique for this task [10] that only predicts an approximate 6D pose that must then be refined, ours is accurate enough not to require additional post-processing. As a result, it is much faster - 50 fps on a Titan X (Pascal) GPU - and more suitable for real-time processing. The key component of our method is a new CNN architecture inspired by [27, 28] that directly predicts the 2D image locations of the projected vertices of the object's 3D bounding box. The object's 6D pose is then estimated using a PnP algorithm. For single object and multiple object pose estimation on the LINEMOD and OCCLUSION datasets, our approach substantially outperforms other recent CNN-based approaches [10, 25] when they are all used without postprocessing. During post-processing, a pose refinement step can be used to boost the accuracy of these two methods, but at 10 fps or less, they are much slower than our method.",
"One challenge that remains open in 3D deep learning is how to efficiently represent 3D data to feed deep networks. Recent works have relied on volumetric or point cloud representations, but such approaches suffer from a number of issues such as computational complexity, unordered data, and lack of finer geometry. This paper demonstrates that a mesh representation (i.e. vertices and faces to form polygonal surfaces) is able to capture fine-grained geometry for 3D reconstruction tasks. A mesh however is also unstructured data similar to point clouds. We address this problem by proposing a learning framework to infer the parameters of a compact mesh representation rather than learning from the mesh itself. This compact representation encodes a mesh using free-form deformation and a sparse linear combination of models allowing us to reconstruct 3D meshes from single images. In contrast to prior work, we do not rely on silhouettes and landmarks to perform 3D reconstruction. We evaluate our method on synthetic and real-world datasets with very promising results. Our framework efficiently reconstructs 3D objects in a low-dimensional way while preserving its important geometrical aspects.",
"We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23][24]. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset[26].",
"We focus on the task of amodal 3D object detection in RGB-D images, which aims to produce a 3D bounding box of an object in metric form at its full extent. We introduce Deep Sliding Shapes, a 3D ConvNet formulation that takes a 3D volumetric scene from a RGB-D image as input and outputs 3D object bounding boxes. In our approach, we propose the first 3D Region Proposal Network (RPN) to learn objectness from geometric shapes and the first joint Object Recognition Network (ORN) to extract geometric features in 3D and color features in 2D. In particular, we handle objects of various sizes by training an amodal RPN at two different scales and an ORN to regress 3D bounding boxes. Experiments show that our algorithm outperforms the state-of-the-art by 13.8 in mAP and is 200× faster than the original Sliding Shapes.",
"Convolutional Neural Networks (CNNs) were recently shown to provide state-of-the- art results for object category viewpoint estimation. However different ways of formulat- ing this problem have been proposed and the competing approaches have been explored with very different design choices. This paper presents a comparison of these approaches in a unified setting as well as a detailed analysis of the key factors that impact perfor- mance. Followingly, we present a new joint training method with the detection task and demonstrate its benefit. We also highlight the superiority of classification approaches over regression approaches, quantify the benefits of deeper architectures and extended training data, and demonstrate that synthetic data is beneficial even when using ImageNet training data. By combining all these elements, we demonstrate an improvement of ap- proximately 5 mAVP over previous state-of-the-art results on the Pascal3D+ dataset [28]. In particular for their most challenging 24 view classification task we improve the results from 31.1 to 36.1 mAVP.",
"We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.",
"This paper presents a novel approach to estimating the continuous six degree of freedom (6-DoF) pose (3D translation and rotation) of an object from a single RGB image. The approach combines semantic keypoints predicted by a convolutional network (convnet) with a deformable shape model. Unlike prior work, we are agnostic to whether the object is textured or textureless, as the convnet learns the optimal representation from the available training image data. Furthermore, the approach can be applied to instance- and class-based pose recovery. Empirically, we show that the proposed approach can accurately recover the 6-DoF object pose for both instance- and class-based scenarios with a cluttered background. For class-based object pose estimation, state-of-the-art accuracy is shown on the large-scale PASCAL3D+ dataset.",
"Despite the great progress achieved in recognizing objects as 2D bounding boxes in images, it is still very challenging to detect occluded objects and estimate the 3D properties of multiple objects from a single image. In this paper, we propose a novel object representation, 3D Voxel Pattern (3DVP), that jointly encodes the key properties of objects including appearance, 3D shape, viewpoint, occlusion and truncation. We discover 3DVPs in a data-driven way, and train a bank of specialized detectors for a dictionary of 3DVPs. The 3DVP detectors are capable of detecting objects with specific visibility patterns and transferring the meta-data from the 3DVPs to the detected objects, such as 2D segmentation mask, 3D pose as well as occlusion or truncation boundaries. The transferred meta-data allows us to infer the occlusion relationship among objects, which in turn provides improved object recognition results. Experiments are conducted on the KITTI detection benchmark [17] and the outdoor-scene dataset [41]. We improve state-of-the-art results on car detection and pose estimation with notable margins (6 in difficult data of KITTI). We also verify the ability of our method in accurately segmenting objects from the background and localizing them in 3D.",
"In this paper, we propose a novel 3D-RecGAN approach, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike the existing work which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN only takes the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid by filling in the occluded missing regions. The key idea is to combine the generative capabilities of autoencoders and the conditional Generative Adversarial Networks (GAN) framework, to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets show that the proposed 3D-RecGAN significantly outperforms the state of the art in single view 3D object reconstruction, and is able to reconstruct unseen types of objects. Our code and data are available at: https: github.com Yang7879 3D-RecGAN."
]
} |
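The following is a minimal, illustrative Python/NumPy sketch of the shape-as-linear-combination idea referenced in the record above. The basis size, the random "prior shapes", and the least-squares fitting step are assumptions for illustration only, not details taken from the cited paper.

```python
import numpy as np

# Illustrative shape basis: K prior shapes, each flattened to N vertex coordinates.
# Real systems learn such a basis (e.g., from CAD collections); here it is random.
rng = np.random.default_rng(0)
N, K = 3 * 100, 5                    # 100 vertices x 3 coordinates, 5 basis shapes
mean_shape = rng.normal(size=N)
basis = rng.normal(size=(N, K))      # columns are shape-deformation directions

def reconstruct(alpha):
    """Shape = mean + weighted sum of prior shapes (the linear-combination model)."""
    return mean_shape + basis @ alpha

# Fitting: given an observed shape, recover the low-dimensional coefficients by
# least squares -- this compact vector is what a network could regress instead.
true_alpha = rng.normal(size=K)
observed = reconstruct(true_alpha) + 0.01 * rng.normal(size=N)
alpha_hat, *_ = np.linalg.lstsq(basis, observed - mean_shape, rcond=None)
print("coefficient error:", np.linalg.norm(alpha_hat - true_alpha))
```

The design point is that a network need only regress the K coefficients rather than the full vertex set, which is what makes the representation compact.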
1907.09760 | 2963048191 | We present NOLBO, a variational observation model estimation for 3D multi-object understanding from a 2D single shot. Previous probabilistic instance-level methods mainly consider single-object images rather than a single shot containing multiple objects; relations between objects and the entire scene are outside their focus, and the objectness of each observation is rarely incorporated into their models. Therefore, we propose a method to approximate the Bayesian observation model of scene-level 3D multi-object understanding. By exploiting a variational auto-encoder (VAE), we estimate latent variables from the entire scene, which follow tractable distributions and concurrently imply full 3D shape and pose. To perform object-oriented data association and probabilistic simultaneous localization and mapping (SLAM), our observation models can easily be adopted for probabilistic inference by replacing object-oriented features with latent variables. | By combining multi-object detection with instance-level understanding, learning disentangled representations of multiple objects becomes achievable. @cite_6 exploits the YOLOv2 structure @cite_51 to estimate the 3D bounding boxes and centers of multiple objects and obtains their orientations. In @cite_48 , the 3D shape rendering and orientation are estimated under the Faster R-CNN structure @cite_12 ; the shape rendering is obtained via a weighted sum of parameterized prior shapes with PCL, and orientations are estimated by classifying bins that indicate the discretized object pose (a sketch of this bin classification follows this record). Similarly, @cite_15 designs an object observation factor to perform data association for pose SLAM, with RoIs for multiple objects obtained by @cite_51 . | {
"cite_N": [
"@cite_48",
"@cite_6",
"@cite_15",
"@cite_51",
"@cite_12"
],
"mid": [
"2799123546",
"2768879211",
"2789111492",
"",
"2613718673"
],
"abstract": [
"We present a fast inverse-graphics framework for instance-level 3D scene understanding. We train a deep convolutional network that learns to map image regions to the full 3D shape and pose of all object instances in the image. Our method produces a compact 3D representation of the scene, which can be readily used for applications like autonomous driving. Many traditional 2D vision outputs, like instance segmentations and depth-maps, can be obtained by simply rendering our output 3D scene model. We exploit class-specific shape priors by learning a low dimensional shape-space from collections of CAD models. We present novel representations of shape and pose, that strive towards better 3D equivariance and generalization. In order to exploit rich supervisory signals in the form of 2D annotations like segmentation, we propose a differentiable Render-and-Compare loss that allows 3D shape and pose to be learned with 2D supervision. We evaluate our method on the challenging real-world datasets of Pascal3D+ and KITTI, where we achieve state-of-the-art results.",
"We propose a single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses. Unlike a recently proposed single-shot technique for this task [10] that only predicts an approximate 6D pose that must then be refined, ours is accurate enough not to require additional post-processing. As a result, it is much faster - 50 fps on a Titan X (Pascal) GPU - and more suitable for real-time processing. The key component of our method is a new CNN architecture inspired by [27, 28] that directly predicts the 2D image locations of the projected vertices of the object's 3D bounding box. The object's 6D pose is then estimated using a PnP algorithm. For single object and multiple object pose estimation on the LINEMOD and OCCLUSION datasets, our approach substantially outperforms other recent CNN-based approaches [10, 25] when they are all used without postprocessing. During post-processing, a pose refinement step can be used to boost the accuracy of these two methods, but at 10 fps or less, they are much slower than our method.",
"We present a new paradigm for real-time object-oriented SLAM with a monocular camera. Contrary to previous approaches, that rely on object-level models, we construct category-level models from CAD collections which are now widely available. To alleviate the need for huge amounts of labeled data, we develop a rendering pipeline that enables synthesis of large datasets from a limited amount of manually labeled data. Using data thus synthesized, we learn category-level models for object deformations in 3D, as well as discriminative object features in 2D. These category models are instance-independent and aid in the design of object landmark observations that can be incorporated into a generic monocular SLAM framework. Where typical object-SLAM approaches usually solve only for object and camera poses, we also estimate object shape on-the-fly, allowing for a wide range of objects from the category to be present in the scene. Moreover, since our 2D object features are learned discriminatively, the proposed object-SLAM system succeeds in several scenarios where sparse feature-based monocular SLAM fails due to insufficient features or parallax. Also, the proposed category-models help in object instance retrieval, useful for Augmented Reality (AR) applications. We evaluate the proposed framework on multiple challenging real-world scenes and show --- to the best of our knowledge --- first results of an instance-independent monocular object-SLAM system and the benefits it enjoys over feature-based SLAM methods.",
"",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn."
]
} |
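Below is a small sketch of the discretized pose-bin classification mentioned in the record above; the bin count and the bin-plus-offset reconstruction are illustrative assumptions, not the cited papers' exact parameterization.

```python
import numpy as np

NUM_BINS = 8                           # assumed discretization of yaw into 8 bins
BIN_WIDTH = 2 * np.pi / NUM_BINS

def angle_to_bin(yaw):
    """Map a continuous yaw angle to a discrete pose-bin label (classification target)."""
    return int((yaw % (2 * np.pi)) // BIN_WIDTH)

def bin_to_angle(bin_id, offset=0.0):
    """Recover a continuous angle as the bin center plus an optional regressed offset."""
    return bin_id * BIN_WIDTH + BIN_WIDTH / 2 + offset

yaw = 1.9
b = angle_to_bin(yaw)
print(b, bin_to_angle(b))              # label used for training; coarse reconstruction
```

Framing orientation as bin classification (optionally refined by an in-bin offset) sidesteps the wrap-around discontinuity of direct angle regression.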
1907.09760 | 2963048191 | We present NOLBO, a variational observation model estimation for 3D multi-object understanding from a 2D single shot. Previous probabilistic instance-level methods mainly consider single-object images rather than a single shot containing multiple objects; relations between objects and the entire scene are outside their focus, and the objectness of each observation is rarely incorporated into their models. Therefore, we propose a method to approximate the Bayesian observation model of scene-level 3D multi-object understanding. By exploiting a variational auto-encoder (VAE), we estimate latent variables from the entire scene, which follow tractable distributions and concurrently imply full 3D shape and pose. To perform object-oriented data association and probabilistic simultaneous localization and mapping (SLAM), our observation models can easily be adopted for probabilistic inference by replacing object-oriented features with latent variables. | These studies are efficient because they mainly concern direct and accurate estimation of object characteristics through network modeling; on the other hand, probabilistic observation models receive comparatively little attention. Although these methods exploit neural networks for nonlinear regression, approximating the intractable distribution is rarely addressed. Therefore, Bayesian inference with the obtained features is challenging; for example, data association for SLAM is considered only in the front-end, and additional algorithms are necessary to perform loop closing and place recognition @cite_45 @cite_15 . | {
"cite_N": [
"@cite_15",
"@cite_45"
],
"mid": [
"2789111492",
"2097696373"
],
"abstract": [
"We present a new paradigm for real-time object-oriented SLAM with a monocular camera. Contrary to previous approaches, that rely on object-level models, we construct category-level models from CAD collections which are now widely available. To alleviate the need for huge amounts of labeled data, we develop a rendering pipeline that enables synthesis of large datasets from a limited amount of manually labeled data. Using data thus synthesized, we learn category-level models for object deformations in 3D, as well as discriminative object features in 2D. These category models are instance-independent and aid in the design of object landmark observations that can be incorporated into a generic monocular SLAM framework. Where typical object-SLAM approaches usually solve only for object and camera poses, we also estimate object shape on-the-fly, allowing for a wide range of objects from the category to be present in the scene. Moreover, since our 2D object features are learned discriminatively, the proposed object-SLAM system succeeds in several scenarios where sparse feature-based monocular SLAM fails due to insufficient features or parallax. Also, the proposed category-models help in object instance retrieval, useful for Augmented Reality (AR) applications. We evaluate the proposed framework on multiple challenging real-world scenes and show --- to the best of our knowledge --- first results of an instance-independent monocular object-SLAM system and the benefits it enjoys over feature-based SLAM methods.",
"We present the major advantages of a new 'object oriented' 3D SLAM paradigm, which takes full advantage in the loop of prior knowledge that many scenes consist of repeated, domain-specific objects and structures. As a hand-held depth camera browses a cluttered scene, real-time 3D object recognition and tracking provides 6DoF camera-object constraints which feed into an explicit graph of objects, continually refined by efficient pose-graph optimisation. This offers the descriptive and predictive power of SLAM systems which perform dense surface reconstruction, but with a huge representation compression. The object graph enables predictions for accurate ICP-based camera to model tracking at each live frame, and efficient active search for new objects in currently undescribed image regions. We demonstrate real-time incremental SLAM in large, cluttered environments, including loop closure, relocalisation and the detection of moved objects, and of course the generation of an object level scene description with the potential to enable interaction."
]
} |
1907.09760 | 2963048191 | We present NOLBO, a variational observation model estimation for 3D multi-object understanding from a 2D single shot. Previous probabilistic instance-level methods mainly consider single-object images rather than a single shot containing multiple objects; relations between objects and the entire scene are outside their focus, and the objectness of each observation is rarely incorporated into their models. Therefore, we propose a method to approximate the Bayesian observation model of scene-level 3D multi-object understanding. By exploiting a variational auto-encoder (VAE), we estimate latent variables from the entire scene, which follow tractable distributions and concurrently imply full 3D shape and pose. To perform object-oriented data association and probabilistic simultaneous localization and mapping (SLAM), our observation models can easily be adopted for probabilistic inference by replacing object-oriented features with latent variables. | To handle an intractable target distribution, latent variables can be adopted @cite_46 @cite_18 @cite_34 @cite_13 . In order to understand and utilize the latent space, @cite_58 @cite_9 have studied the relations between latent variables and object visualization using the VAE @cite_34 . However, it is still challenging to apply these methods to probabilistic model approximation, as they mainly concentrate on interpretable graphics codes. To approximate the observation probability, entropy and the variational likelihood are exploited in the field of active vision @cite_22 @cite_14 @cite_7 . Using the VAE, @cite_60 @cite_8 have proposed methods to approximate the observation model of 3D objects for Bayesian inference. Based on the evidence lower bound (ELBO), which approximates the observation model, they have shown how probabilistic SLAM with data association can be performed via the expectation-maximization (EM) algorithm (a minimal ELBO sketch follows this record). | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_60",
"@cite_8",
"@cite_9",
"@cite_34",
"@cite_46",
"@cite_58",
"@cite_13"
],
"mid": [
"2130428211",
"638212880",
"2144691386",
"2766469071",
"2889482274",
"2967631872",
"2238938802",
"2951004968",
"2107034620",
"1691728462",
"2099471712"
],
"abstract": [
"Latent Dirichlet allocation (LDA) is a Bayesian network that has recently gained much popularity in applications ranging from document modeling to computer vision. Due to the large scale nature of these applications, current inference procedures like variational Bayes and Gibbs sampling have been found lacking. In this paper we propose the collapsed variational Bayesian inference algorithm for LDA, and show that it is computationally efficient, easy to implement and significantly more accurate than standard variational Bayesian inference for LDA.",
"Directing robot attention to recognise activities and to anticipate events like goal-directed actions is a crucial skill for human-robot interaction. Unfortunately, issues like intrinsic time constraints, the spatially distributed nature of the entailed information sources, and the existence of a multitude of unobservable states affecting the system, like latent intentions, have long rendered achievement of such skills a rather elusive goal. The problem tests the limits of current attention control systems. It requires an integrated solution for tracking, exploration and recognition, which traditionally have been seen as separate problems in active vision. We propose a probabilistic generative framework based on information gain maximisation and a mixture of Kalman Filters that uses predictions in both recognition and attention-control. This framework can efficiently use the observations of one element in a dynamic environment to provide information on other elements, and consequently enables guided exploration. Interestingly, the sensors control policy, directly derived from first principles, represents the intuitive trade-off between finding the most discriminative clues and maintaining overall awareness. Experiments on a simulated humanoid robot observing a human executing goal-oriented actions demonstrated improvement on recognition time and precision over baseline systems.",
"We introduce a formalism for optimal sensor parameter selection for iterative state estimation in static systems. Our optimality criterion is the reduction of uncertainty in the state estimation process, rather than an estimator-specific metric (e.g., minimum mean squared estimate error). The claim is that state estimation becomes more reliable if the uncertainty and ambiguity in the estimation process can be reduced. We use Shannon's information theory to select information-gathering actions that maximize mutual information, thus optimizing the information that the data conveys about the true state of the system. The technique explicitly takes into account the a priori probabilities governing the computation of the mutual information. Thus, a sequential decision process can be formed by treating the a priori probability at a certain time step in the decision process as the a posteriori probability of the previous time step. We demonstrate the benefits of our approach in an object recognition application using an active camera for sequential gaze control and viewpoint selection. We describe experiments with discrete and continuous density representations that suggest the effectiveness of the approach.",
"We develop a comprehensive description of the active inference framework, as proposed by Friston (2010), under a machine-learning compliant perspective. Stemming from a biological inspiration and the auto-encoding principles, a sketch of a cognitive architecture is proposed that should provide ways to implement estimation-oriented control policies. Computer simulations illustrate the effectiveness of the approach through a foveated inspection of the input data. The pros and cons of the control policy are analyzed in detail, showing interesting promises in terms of processing compression. Though optimizing future posterior entropy over the actions set is shown enough to attain locally optimal action selection, offline calculation using class-specific saliency maps is shown better for it saves processing costs through saccades pathways pre-processing, with a negligible effect on the recognition compression rates.",
"This paper presents a feature encoding method of complex 3D objects for high-level semantic features. Recent approaches to object recognition methods become important for semantic simultaneous localization and mapping (SLAM). However, there is a lack of consideration of the probabilistic observation model for 3D objects, as the shape of a 3D object basically follows a complex probability distribution. Furthermore, since the mobile robot equipped with a range sensor observes only a single view, much information of the object shape is discarded. These limitations are the major obstacles to semantic SLAM and view-independent loop closure using 3D object shapes as features. In order to enable the numerical analysis for the Bayesian inference, we approximate the true observation model of 3D objects to tractable distributions. Since the observation likelihood can be obtained from the generative model, we formulate the true generative model for 3D object with the Bayesian networks. To capture these complex distributions, we apply a variational auto-encoder. To analyze the approximated distributions and encoded features, we perform classification with maximum likelihood estimation and shape retrieval.",
"We present a Bayesian object observation model for complete probabilistic semantic SLAM. Recent studies on object detection and feature extraction have become important for scene understanding and 3D mapping. However, 3D shape of the object is too complex to formulate the probabilistic observation model; therefore, performing the Bayesian inference of the object-oriented features as well as their pose is less considered. Besides, when the robot equipped with an RGB mono camera only observes the projected single view of an object, a significant amount of the 3D shape information is abandoned. Due to these limitations, semantic SLAM and viewpoint-independent loop closure using volumetric 3D object shape is challenging. In order to enable the complete formulation of probabilistic semantic SLAM, we approximate the observation model of a 3D object with a tractable distribution. We also estimate the variational likelihood from the 2D image of the object to exploit its observed single view. In order to evaluate the proposed method, we perform pose and feature estimation, and demonstrate that the automatic loop closure works seamlessly without additional loop detector in various environments.",
"Objects belonging to different categories evoke reliably different fMRI activity patterns in human occipitotemporal cortex, with the most prominent distinction being that between animate and inanimate objects. An unresolved question is whether these categorical distinctions reflect category-associated visual properties of objects or whether they genuinely reflect object category. Here, we addressed this question by measuring fMRI responses to animate and inanimate objects that were closely matched for shape and low-level visual features. Univariate contrasts revealed animate-and inanimate-preferring regions in ventral and lateral temporal cortex even for individually matched object pairs e.g., snake-rope. Using representational similarity analysis, we mapped out brain regions in which the pairwise dissimilarity of multivoxel activity patterns neural dissimilarity was predicted by the objects' pairwise visual dissimilarity and or their categorical dissimilarity. Visual dissimilarity was measured as the time it took participants to find a unique target among identical distractors in three visual search experiments, where we separately quantified overall dissimilarity, outline dissimilarity, and texture dissimilarity. All three visual dissimilarity structures predicted neural dissimilarity in regions of visual cortex. Interestingly, these analyses revealed several clusters in which categorical dissimilarity predicted neural dissimilarity after regressing out visual dissimilarity. Together, these results suggest that the animate-inanimate organization of human visual cortex is not fully explained by differences in the characteristic shape or texture properties of animals and inanimate objects. Instead, representations of visual object properties and object category may coexist in more anterior parts of the visual system.",
"How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.",
"We propose a novel approach to learn and recognize natural scene categories. Unlike previous work, it does not require experts to annotate the training set. We represent the image of a scene by a collection of local regions, denoted as codewords obtained by unsupervised learning. Each region is represented as part of a \"theme\". In previous work, such themes were learnt from hand-annotations of experts, while our method learns the theme distributions as well as the codewords distribution over the themes without supervision. We report satisfactory categorization performances on a large set of 13 categories of complex scenes.",
"This paper presents the Deep Convolution Inverse Graphics Network (DC-IGN), a model that aims to learn an interpretable representation of images, disentangled with respect to three-dimensional scene structure and viewing transformations such as depth rotations and lighting variations. The DC-IGN model is composed of multiple layers of convolution and de-convolution operators and is trained using the Stochastic Gradient Variational Bayes (SGVB) algorithm [10]. We propose a training procedure to encourage neurons in the graphics code layer to represent a specific transformation (e.g. pose or light). Given a single input image, our model can generate new images of the same object with variations in pose and lighting. We present qualitative and quantitative tests of the model's efficacy at learning a 3D rendering engine for varied object classes including faces and chairs.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."
]
} |
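A minimal single-sample Monte Carlo ELBO, sketched in Python/NumPy, to make the "ELBO which approximates the observation model" from the record above concrete; the placeholder encoder outputs and random linear decoder are assumptions, not the cited models.

```python
import numpy as np

rng = np.random.default_rng(1)

def elbo(x, enc_mu, enc_logvar, decode):
    """One-sample Monte Carlo ELBO for a VAE with a standard normal prior:
    E_q[log p(x|z)] - KL(q(z|x) || N(0, I)).  Such a bound serves as a tractable
    stand-in for the intractable observation likelihood log p(x)."""
    std = np.exp(0.5 * enc_logvar)
    z = enc_mu + std * rng.normal(size=enc_mu.shape)   # reparameterization trick
    x_hat = decode(z)
    recon = -0.5 * np.sum((x - x_hat) ** 2)            # Gaussian log-lik, up to a constant
    kl = 0.5 * np.sum(np.exp(enc_logvar) + enc_mu ** 2 - 1.0 - enc_logvar)
    return recon - kl

# Placeholder encoder outputs and a random linear decoder, for illustration only.
x = rng.normal(size=4)
W = rng.normal(size=(4, 2))
print(elbo(x, enc_mu=np.zeros(2), enc_logvar=np.zeros(2), decode=lambda z: W @ z))
```

Because the ELBO lower-bounds the log observation likelihood, it can replace the true (intractable) likelihood term inside EM-style data association, which is the usage the record above describes.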
1907.09825 | 2964205377 | Efficient behavior and trajectory planning is one of the major challenges for automated driving. Intersection scenarios are especially demanding due to their complexity, arising from the variety of possible maneuvers and other traffic participants. A key challenge is to generate behaviors which optimize the comfort and progress of the ego vehicle but at the same time are not too aggressive towards other traffic participants. In order to maintain real-time capability for courteous behavior and trajectory planning, an efficient formulation of the optimal control problem and corresponding solving algorithms are required. Consequently, a novel planning framework is presented which considers comfort and progress as well as the courtesy of actions in a graph-based behavior planning module. Utilizing the low-level trajectory generation, the behavior result can be further optimized for driving comfort while satisfying constraints over the whole planning horizon. Corresponding experiments show the practicality and real-time capability of the framework. | A method for producing courteous behavior is to use a game-theoretic interaction model @cite_15 . The problem is modeled such that all agents try to optimize their own behavior; by predicting and accounting for other agents' reactions to the ego trajectory, courteous behavior can be generated (a sketch of such a courtesy-augmented cost follows this record). It is shown that this courtesy leads to better imitation of human behavior @cite_15 . | {
"cite_N": [
"@cite_15"
],
"mid": [
"2963787234"
],
"abstract": [
"Typically, autonomous cars optimize for a combination of safety, efficiency, and driving quality. But as we get better at this optimization, we start seeing behavior go from too conservative to too aggressive. The car's behavior exposes the incentives we provide in its cost function. In this work, we argue for cars that are not optimizing a purely selfish cost, but also try to be courteous to other interactive drivers. We formalize courtesy as a term in the objective that measures the increase in another driver's cost induced by the autonomous car's behavior. Such a courtesy term enables the robot car to be aware of possible irrationality of the human behavior, and plan accordingly. We analyze the effect of courtesy in a variety of scenarios. We find, for example, that courteous robot cars leave more space when merging in front of a human driver. Moreover, we find that such a courtesy term can help explain real human driver behavior on the NGSIM dataset."
]
} |
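A hedged sketch of the courtesy-augmented objective described in the record above: the ego cost, the other driver's cost, the baseline plan, and the weight lam are all illustrative placeholders rather than the cited paper's actual formulation.

```python
import numpy as np

def ego_cost(ego_traj):
    """Placeholder selfish objective: penalize jerkiness, reward final progress."""
    return np.sum(np.diff(ego_traj) ** 2) - ego_traj[-1]

def other_cost(other_traj, ego_traj):
    """Placeholder cost incurred by the other driver; penalizes small gaps to the ego."""
    gap = np.abs(other_traj - ego_traj)
    return np.sum(np.maximum(0.0, 1.0 - gap))

def courteous_cost(ego_traj, other_traj, ego_baseline, lam=0.5):
    """Selfish cost plus a courtesy term: the increase in the other driver's cost
    induced by this ego plan relative to a baseline ego plan (lam is an assumed weight)."""
    induced = other_cost(other_traj, ego_traj) - other_cost(other_traj, ego_baseline)
    return ego_cost(ego_traj) + lam * max(0.0, induced)

t = np.linspace(0.0, 1.0, 6)
aggressive = 2.0 * t                 # makes fast progress but crowds the other car
gentle = 1.0 * t                     # slower, leaves more room
other = 1.5 * t + 1.0                # the other driver's predicted positions
print("aggressive:", courteous_cost(aggressive, other, ego_baseline=gentle))
print("gentle:    ", courteous_cost(gentle, other, ego_baseline=gentle))
```

In this toy example the gentle plan achieves a lower total cost than the aggressive one once the induced-cost term is included, which is the qualitative effect the cited work reports.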
1907.09945 | 2963509006 | Affective computing is a field of great interest in many computer vision applications, including video surveillance, behaviour analysis, and human-robot interaction. Most of the existing literature has addressed this field by analysing different sets of face features. However, in the last decade, several studies have shown that body movements can play a key role even in emotion recognition. The majority of these experiments on the body are performed by trained actors whose aim is to simulate emotional reactions; these unnatural expressions differ from the more challenging genuine emotions, thus invalidating the obtained results. In this paper, a solution for basic non-acted emotion recognition based on 3D skeleton data and Deep Neural Networks (DNNs) is provided. The proposed work introduces three major contributions. First, unlike the current state-of-the-art in non-acted body affect recognition, where only static or global body features are considered, this work also examines the temporal local movements performed by subjects in each frame. Second, an original set of global and time-dependent features for body movement description is provided. Third, to the best of our knowledge, this is the first attempt to use deep learning methods for non-acted body affect recognition. Due to the novelty of the topic, only the UCLIC dataset is currently considered the benchmark for comparative tests; on the latter, the proposed method outperforms all the competitors. | The most accurate affect recognition systems are based on electroencephalography @cite_27 . Although these systems achieve excellent results, they are limited by the use of dedicated sensors that require a controlled environment. Computer vision approaches, on the other hand, are better suited to real, uncontrolled scenarios thanks to the use of more versatile sensors, such as RGB, RGB-D, or thermal cameras. Most vision-based methods perform emotion recognition through the analysis of facial expressions @cite_18 @cite_15 @cite_14 , owing to the great deal of labelled data available in the state-of-the-art, organized in datasets such as @cite_4 . Although the face is one of the most discriminative cues for identifying people's emotions, it is not always possible to capture facial expressions in large and crowded environments. This aspect has motivated researchers to try other solutions, including poses and body movements. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_27",
"@cite_15"
],
"mid": [
"2345305417",
"2962711746",
"2745497104",
"2151069331",
"2141990597"
],
"abstract": [
"Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state of the art methods accordingly. We also present the important datasets and the bench-marking of most influential methods. We conclude with a general discussion about trends, important questions and future lines of research.",
"We have developed a convolutional neural network for the purpose of recognizing facial expressions in human beings. We have fine-tuned the existing convolutional neural network model trained on the visual recognition dataset used in the ILSVRC2012 to two widely used facial expression datasets - CFEE and RaFD, which when trained and tested independently yielded test accuracies of 74.79 and 95.71 , respectively. Generalization of results was evident by training on one dataset and testing on the other. Further, the image product of the cropped faces and their visual saliency maps were computed using Deep Multi-Layer Network for saliency prediction and were fed to the facial expression recognition CNN. In the most generalized experiment, we observed the top-1 accuracy in the test set to be 65.39 . General confusion trends between different facial expressions as exhibited by humans were also observed.",
"Automated affective computing in the wild setting is a challenging problem in computer vision. Existing annotated databases of facial expressions in the wild are small and mostly cover discrete emotions (aka the categorical model). There are very limited annotated facial databases for affective computing in the continuous dimensional model (e.g., valence and arousal). To meet this need, we collected, annotated, and prepared for public distribution a new database of facial emotions in the wild (called AffectNet). AffectNet contains more than 1,000,000 facial images from the Internet by querying three major search engines using 1,250 emotion related keywords in six different languages. About half of the retrieved images were manually annotated for the presence of seven discrete facial expressions and the intensity of valence and arousal. AffectNet is by far the largest database of facial expression, valence, and arousal in the wild enabling research in automated facial expression recognition in two different emotion models. Two baseline deep neural networks are used to classify images in the categorical model and predict the intensity of valence and arousal. Various evaluation metrics show that our deep neural network baselines can perform better than conventional machine learning methods and off-the-shelf facial expression recognition systems.",
"During the last decades, information about the emotional state of users has become more and more important in human-computer interaction. Automatic emotion recognition enables the computer to recognize a user's emotional state and thus allows for appropriate reaction, which may pave the way for computers to act emotionally in the future. In the current study, we investigate different feature sets to build an emotion recognition system from electroencephalo-graphic signals. We used pictures from the International Affective Picture System to induce three emotional states: pleasant, neutral, and unpleasant. We designed a headband with four build-in electrodes at the forehead, which was used to record data from five subjects. Compared to standard EEG-caps, the headband is comfortable to wear and easy to attach, which makes it more suitable for everyday life conditions. To solve the recognition task we developed a system based on support vector machines. With this system we were able to achieve an average recognition rate up to 66.7 on subject dependent recognition, solely based on EEG signals.",
"Conditional Random Fields (CRFs) can be used as a discriminative approach for simultaneous sequence segmentation and frame labeling. Latent-Dynamic Conditional Random Fields (LDCRFs) incorporates hidden state variables within CRFs which model sub-structure motion patterns and dynamics between labels. Motivated by the success of LDCRFs in gesture recognition, we propose a framework for automatic facial expression recognition from continuous video sequence by modeling temporal variations within shapes using LDCRFs. We show that the proposed approach outperforms CRFs for recognizing facial expressions. Using Principal Component Analysis (PCA) we study the separability of various expression classes in lower dimension projected spaces. By comparing the performance of CRFs and LDCRFs against that of Support Vector Machines (SVMs), we demonstrate that temporal variations within shapes are crucial in classifying expressions especially for those with a small range of facial motion like anger and sadness. We also show empirically that only using changes in facial appearance over time, without using shape variations, is not sufficient to obtain high performance for facial expression recognition."
]
} |
1907.09883 | 2963801192 | All public blockchains are secured by a proof of opportunity cost among block producers. For example, the security offered by proof-of-work (PoW) systems, like Bitcoin, is due to spent computation; it is work precisely because it cannot be performed for free. In general, more resources provably lost in producing blocks yield more security for the blockchain. When two blockchains share the same mechanism for providing opportunity cost, as is the case when they share the same PoW algorithm, the two chains compete for resources from block producers. Indeed, if there exists a liquid market between resource types, then theoretically all blockchains will compete for resources. In this paper, we show that there exists a resource allocation equilibrium between any two blockchains, which is essentially driven by the fiat value of the reward that each chain offers in return for providing security. We go on to prove that this equilibrium is singular and always achieved provided that block producers behave in a greedy, but cautious, fashion. The opposite is true when they are overly greedy: resource allocation oscillates between extremes on the two chains. We show that these results hold both in practice and in a block generation simulation. Finally, we demonstrate several applications of this theory, including a trustless price-ratio oracle, increased security for blockchains whose coins have lower fiat value, and a quantification of the cost of allocating resources away from the equilibrium. | Also closely related is the work of @cite_6 , who apply the theory of Potential Games @cite_23 to the problem of miner hash rate allocation across multiple blockchains. They prove that, regardless of individual hash rates and coinbase rewards for each of the blockchains, hash rate allocation will converge to a pure equilibrium provided that miners follow better-response dynamics (a minimal sketch follows this record). The model assumes minimal rationality on behalf of the players, i.e., that they follow "an arbitrary better response step improving their individual payoffs." @cite_6 do not identify a specific equilibrium point, nor do they specify what the better response should be, but their work anticipates some of the theoretical results we present. Furthermore, they show that the equilibrium point can be changed by changing a blockchain's coinbase reward, a property that is emergent from the properties of the equilibrium and one that we exploit to increase security. @cite_16 reached similar conclusions to @cite_6 using a slightly different game-theoretic model of hash rate allocation across cryptocurrencies and mining pools. | {
"cite_N": [
"@cite_16",
"@cite_23",
"@cite_6"
],
"mid": [
"2911474632",
"",
"2804529604"
],
"abstract": [
"We model the competition over several blockchains characterizing multiple cryptocurrencies as a non-cooperative game. Then, we specialize our results to two instances of the general game, showing properties of the Nash equilibrium. In particular, leveraging results about congestion games, we establish the existence of pure Nash equilibria and provide efficient algorithms for finding such equilibria.",
"",
"We formalize the current practice of strategic mining in multi-cryptocurrency markets as a game, and prove that any better-response learning in such games converges to equilibrium. We then offer a reward design scheme that moves the system configuration from any initial equilibrium to a desired one for any better-response learning of the miners. Our work introduces the first multi-coin strategic attack for adaptive and learning miners, as well as the study of reward design in a multi-agent system of learning agents."
]
} |
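To make the better-response dynamics from the record above concrete, here is a minimal Python sketch of two chains competing for a single miner's hash rate; the reward values, competitor hash rates, and step size are assumed numbers, and the payout model is the usual share-proportional idealization rather than anything specified by the cited works.

```python
def per_hash_payout(reward_value, own, others):
    """Fiat reward per unit hash rate on a chain: with difficulty adjustment, a
    miner's expected reward is proportional to its share of the chain's total rate."""
    return reward_value / (own + others)

def better_response(alloc, total, others_a, others_b, value_a, value_b, step):
    """Greedy-but-cautious step: shift a small fraction of hash rate toward the
    chain that currently pays more per hash (all parameters are illustrative)."""
    pay_a = per_hash_payout(value_a, alloc, others_a)
    pay_b = per_hash_payout(value_b, total - alloc, others_b)
    if pay_a > pay_b:
        return min(total, alloc + step * total)
    return max(0.0, alloc - step * total)

alloc, total = 0.5, 1.0
for _ in range(500):
    alloc = better_response(alloc, total, others_a=4.0, others_b=5.0,
                            value_a=10.0, value_b=12.0, step=0.01)
print("share on chain A:", round(alloc, 2))  # settles near the payout-equalizing point
```

With a small (cautious) step the allocation settles near the point where per-hash payouts equalize; a very large step would overshoot on every iteration and oscillate between the extremes, mirroring the overly-greedy behavior described in the abstract.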
1907.09883 | 2963801192 | All public blockchains are secured by a proof of opportunity cost among block producers. For example, the security offered by proof-of-work (PoW) systems, like Bitcoin, is due to spent computation; it is work precisely because it cannot be performed for free. In general, more resources provably lost in producing blocks yield more security for the blockchain. When two blockchains share the same mechanism for providing opportunity cost, as is the case when they share the same PoW algorithm, the two chains compete for resources from block producers. Indeed, if there exists a liquid market between resource types, then theoretically all blockchains will compete for resources. In this paper, we show that there exists a resource allocation equilibrium between any two blockchains, which is essentially driven by the fiat value of the reward that each chain offers in return for providing security. We go on to prove that this equilibrium is singular and always achieved provided that block producers behave in a greedy, but cautious, fashion. The opposite is true when they are overly greedy: resource allocation oscillates between extremes on the two chains. We show that these results hold both in practice and in a block generation simulation. Finally, we demonstrate several applications of this theory, including a trustless price-ratio oracle, increased security for blockchains whose coins have lower fiat value, and a quantification of the cost of allocating resources away from the equilibrium. | Several authors have sought to determine the optimal hash rate allocation between blockchains for miners or mining pools. @cite_7 argue that miners allocate their hash rate between multiple blockchains so as to minimize the risk associated with fluctuations in coin price. @cite_25 make a similar argument, except that their measure of risk is volatility in the payout rate between mining pools. @cite_1 extend this model to mining across blockchains with different PoW algorithms (a mean-variance sketch of this portfolio view follows this record). All of the above approaches are complementary to the present work, which seeks only to explain miner behavior. In fact, miner-specific behavioral choices help to explain why the aggregate hash rate allocation does not fully shift to one chain over another (see below for details). | {
"cite_N": [
"@cite_1",
"@cite_25",
"@cite_7"
],
"mid": [
"2944490618",
"2789931105",
"2809512071"
],
"abstract": [
"Mining is a central operation of all proof-of-work (PoW) based cryptocurrencies. The vast majority of miners today participate in \"mining pools\" instead of \"solo mining\" in order to lower risk and achieve a more steady income. However, this rise of participation in mining pools negatively affects the decentralization levels of most cryptocurrencies. In this work, we look into mining pools from the point of view of a miner: We present an analytical model and implement a computational tool that allows miners to optimally distribute their computational power over multiple pools and PoW cryptocurrencies (i.e. build a mining portfolio), taking into account their risk aversion levels. Our tool allows miners to maximize their risk-adjusted earnings by diversifying across multiple mining pools which enhances PoW decentralization. Finally, we run an experiment in Bitcoin historical data and demonstrate that a miner diversifying over multiple pools, as instructed by our model tool, receives a higher overall Sharpe ratio (i.e. average excess reward over its standard deviation volatility).",
"The rise of centralized mining pools for risk sharing does not necessarily undermine the decentralization required for public blockchains. However, mining pools as a financial innovation significantly escalates the arms race among competing miners and thus increases the energy consumption of proof-of-work-based blockchains. Each individual miner's cross-pool diversification and endogenous fees charged by pools generally sustain decentralization --- larger pools better internalize their externality on global hash rates, charge higher fees, attract disproportionately fewer miners, and thus grow slower. Empirical evidence from Bitcoin mining supports our model predictions, and the economic insights apply to many other blockchain protocols, as well as mainstream industries with similar characteristics.",
"Abrupt changes in the miner hash rate applied to a proof-of-work (PoW) blockchain can adversely affect user experience and security. Because different PoW blockchains often share hashing algorithms, miners face a complex choice in deciding how to allocate their hash power among chains. We present an economic model that leverages Modern Portfolio Theory to predict a miner’s allocation over time using price data and inferred risk tolerance. The model matches actual allocations with mean absolute error within 20 for four out of the top five miners active on both Bitcoin (BTC) and Bitcoin Cash (BCH) blockchains. A model of aggregate allocation across those four miners shows excellent agreement in magnitude with the actual aggregate as well a correlation coefficient of 0.649. The accuracy of the aggregate allocation model is also sufficient to explain major historical changes in inter-block time (IBT) for BCH. Because estimates of miner risk are not time-dependent and our model is otherwise price-driven, we are able to use it to anticipate the effect of a major price shock on hash allocation and IBT in the BCH blockchain. Using a Monte Carlo simulation, we show that, despite mitigation by the new difficulty adjustment algorithm, a price drop of 50 could increase the IBT by 50 for at least a day, with a peak delay of 100 ."
]
} |
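A brief sketch of the mean-variance (Modern Portfolio Theory) view of hash allocation discussed in the record above; the expected returns, covariance matrix, and risk-aversion values are assumed placeholders for illustration, not estimates from the cited papers.

```python
import numpy as np

def mpt_allocation(mu, cov, risk_aversion):
    """Unconstrained mean-variance solution w = (1/lambda) * inv(Sigma) @ mu.
    mu: expected fiat payout rates per chain; cov: payout covariance;
    risk_aversion: the miner's lambda (higher = more cautious)."""
    return np.linalg.solve(cov, mu) / risk_aversion

mu = np.array([0.10, 0.12])                    # assumed expected payout rates
cov = np.array([[0.040, 0.012],
                [0.012, 0.090]])               # assumed payout covariance
for lam in (1.0, 5.0):
    w = mpt_allocation(mu, cov, lam)
    print(f"lambda={lam}: raw weights={np.round(w, 3)}, mix={np.round(w / w.sum(), 3)}")
```

Note that in the unconstrained solution the cross-chain mix is independent of lambda; risk aversion only scales how much total hash power the miner deploys at all.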
1907.09883 | 2963801192 | All public blockchains are secured by a proof of opportunity cost among block producers. For example, the security offered by proof-of-work (PoW) systems, like Bitcoin, is due to spent computation; it is work precisely because it cannot be performed for free. In general, more resources provably lost in producing blocks yield more security for the blockchain. When two blockchains share the same mechanism for providing opportunity cost, as is the case when they share the same PoW algorithm, the two chains compete for resources from block producers. Indeed, if there exists a liquid market between resource types, then theoretically all blockchains will compete for resources. In this paper, we show that there exists a resource allocation equilibrium between any two blockchains, which is essentially driven by the fiat value of the reward that each chain offers in return for providing security. We go on to prove that this equilibrium is singular and always achieved provided that block producers behave in a greedy, but cautious, fashion. The opposite is true when they are overly greedy: resource allocation oscillates between extremes on the two chains. We show that these results hold both in practice and in a block generation simulation. Finally, we demonstrate several applications of this theory, including a trustless price-ratio oracle, increased security for blockchains whose coins have lower fiat value, and a quantification of the cost of allocating resources away from the equilibrium. | @cite_22 devised a Markov Decision Process (MDP) for discovering optimal selfish mining @cite_15 strategies. @cite_12 expanded the model to incorporate adjustable network parameters and to include analysis of double-spend attacks. @cite_21 extend the MDP of @cite_12 to model mining difficulty adjustment. The biggest difference between these approaches and the present work is that the former analyze optimal behavior on a single blockchain, while the present work attempts to explain behavior across multiple blockchains. | {
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_22",
"@cite_12"
],
"mid": [
"2886706106",
"2952115223",
"1523833175",
"2536325433"
],
"abstract": [
"The Bitcoin cryptocurrency records its transactions in a public log called the blockchain. Its security rests critically on the distributed protocol that maintains the blockchain, run by participants called miners. Conventional wisdom asserts that the mining protocol is incentive-compatible and secure against colluding minority groups, that is, it incentivizes miners to follow the protocol as prescribed.",
"Abstract Cryptographic assets such as Bitcoin and Ethereum provide distributed consensus with a Proof-of-Work protocol and incentive-based engineering. The consensus is inherently dependent on the value of the asset due to the incentives. The value of these assets frequently fluctuates, which in turn influences the incentive component of the consensus mechanism. For a proof-of-work consensus to be secure, the participation reward must have a perceived real-world value. The future of this perception is not at all clear. The recent 70 drop in the value of Bitcoin versus the US Dollar may be precipitating a circle of declining security of the platform which we explore in depth in this paper. In this paper, we analyze the impact of fluctuations on the security of Bitcoin now, and in the future. We introduce a novel method to examine the rationale of a miner based on the price fluctuations. We integrate our method with an existing security evaluation framework and simulator. Using our approach, we determine and report on the impact of the value of the cryptographic asset on the security of the blockchain given the miner’s rationale. Our method allows us to evaluate the impact of different methods of incentive manipulation such as reduced block-reward and transaction fees, by simulation.",
"The Bitcoin protocol requires nodes to quickly distribute newly created blocks. Strong nodes can, however, gain higher payoffs by withholding blocks they create and selectively postponing their publication. The existence of such selfish mining attacks was first reported by Eyal and Sirer, who have demonstrated a specific deviation from the standard protocol (a strategy that we name SM1).",
"Proof of Work (PoW) powered blockchains currently account for more than 90 of the total market capitalization of existing digital cryptocurrencies. Although the security provisions of Bitcoin have been thoroughly analysed, the security guarantees of variant (forked) PoW blockchains (which were instantiated with different parameters) have not received much attention in the literature. This opens the question whether existing security analysis of Bitcoin's PoW applies to other implementations which have been instantiated with different consensus and or network parameters. In this paper, we introduce a novel quantitative framework to analyse the security and performance implications of various consensus and network parameters of PoW blockchains. Based on our framework, we devise optimal adversarial strategies for double-spending and selfish mining while taking into account real world constraints such as network propagation, different block sizes, block generation intervals, information propagation mechanism, and the impact of eclipse attacks. Our framework therefore allows us to capture existing PoW-based deployments as well as PoW blockchain variants that are instantiated with different parameters, and to objectively compare the tradeoffs between their performance and security provisions."
]
} |
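The equilibrium and oscillation behavior claimed in the abstract above can be made concrete with a small simulation. The sketch below is purely illustrative: the fiat reward values, the pro-rata per-hash profitability, and the step-based reallocation rule are assumptions for this example, not the model of the cited paper.

```python
# Toy hash-allocation dynamics between two PoW chains (illustrative
# assumptions only; not the model of the record above).

def reward_per_hash(chain_hash, reward_value):
    # A chain pays a fixed fiat-valued reward per unit time, split
    # pro rata over contributed hash, so per-hash profit falls as
    # more hash joins the chain.
    return reward_value / max(chain_hash, 1e-9)

def simulate(reward_a=60.0, reward_b=40.0, step=0.05, rounds=500):
    """Each round, shift a fraction `step` of the total hash toward
    whichever chain currently pays more per hash."""
    hash_a, hash_b = 50.0, 50.0
    total = hash_a + hash_b
    for _ in range(rounds):
        gap = reward_per_hash(hash_a, reward_a) - reward_per_hash(hash_b, reward_b)
        if gap > 0:
            delta = min(step * total, hash_b)
            hash_a, hash_b = hash_a + delta, hash_b - delta
        elif gap < 0:
            delta = min(step * total, hash_a)
            hash_a, hash_b = hash_a - delta, hash_b + delta
    return hash_a, hash_b

print("cautious (step=0.05):", simulate(step=0.05))  # settles near 60/40
print("greedy   (step=0.90):", simulate(step=0.90))  # keeps oscillating
```

With a small step the split converges to the reward-value ratio and stays there; pushing `step` toward 1.0 reproduces the oscillation-in-extremes behavior.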
1907.09786 | 2963372556 | We propose a novel single-step training strategy that allows convolutional encoder-decoder networks that use skip connections to complete partially observed data by means of hallucination. This strategy is demonstrated for the task of completing 2-D road layouts as well as 3-D vehicle shapes. As input, it takes data from a partially observed domain, for which no ground truth is available, and data from an unpaired prior knowledge domain and trains the network in an end-to-end manner. Our single-step training strategy is compared against two state-of-the-art baselines, one using a two-step auto-encoder training strategy and one using an adversarial strategy. Our novel strategy achieves an improvement of up to +12.2% F-measure on the Cityscapes dataset. The learned network intrinsically generalizes better than the baselines on unseen datasets, which is demonstrated by an improvement of up to +23.8% F-measure on the unseen KITTI dataset. Moreover, our approach outperforms the baselines using the same backbone network on the 3-D shape completion benchmark by a margin of 0.006 Hamming distance. | Closely related to our task of hallucinating the road layout is image inpainting. In recent years, deep convolutional neural networks (CNNs) have enabled image inpainting with large missing areas, as CNNs can extract abstract semantic information from the observable context. The Context Encoder (CE) @cite_12 network was proposed to inpaint images with a large rectangular area missing at the image center by applying a reconstruction loss and an adversarial loss @cite_13 during training. CE-like networks @cite_21 @cite_8 @cite_5 were proposed with additional discriminative networks applied to the locally missing regions, or to the entire image in a patch-wise manner, and are able to perform inpainting with regions missing at arbitrary positions. Given a trained generative network, Yeh et al. @cite_20 perform inpainting by searching, via back-propagation, for the input embedding vector that minimizes the reconstruction loss. The main difference between inpainting and the tasks addressed by us and by @cite_1 @cite_17 is that the previously introduced image inpainting methods all assume a setting in which the complete ground truth is available. | {
"cite_N": [
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_5",
"@cite_20",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2791184993",
"2964294967",
"",
"2963917315",
"1710476689",
"2963420272",
""
],
"abstract": [
"",
"Area of image inpainting over relatively large missing regions recently advanced substantially through adaptation of dedicated deep neural networks. However, current network solutions still introduce undesired artifacts and noise to the repaired regions. We present an image inpainting method that is based on the celebrated generative adversarial network (GAN) framework. The proposed PGGAN method includes a discriminator network that combines a global GAN (G-GAN) architecture with a patchGAN approach. PGGAN first shares network layers between G-GAN and patchGAN, then splits paths to produce two adversarial losses that feed the generator network in order to capture both local continuity of image texture and pervasive global features in images. The proposed framework is evaluated extensively, and the results including comparison to recent state-of-the-art demonstrate that it achieves considerable improvements on both visual and quantitative evaluations.",
"Given a single RGB image of a complex outdoor road scene in the perspective view, we address the novel problem of estimating an occlusion-reasoned semantic scene layout in the top-view. This challenging problem not only requires an accurate understanding of both the 3D geometry and the semantics of the visible scene, but also of occluded areas. We propose a convolutional neural network that learns to predict occluded portions of the scene layout by looking around foreground objects like cars or pedestrians. But instead of hallucinating RGB values, we show that directly predicting the semantics and depths in the occluded areas enables a better transformation into the top-view. We further show that this initial top-view representation can be significantly enhanced by learning priors and rules about typical road layouts from simulated or, if available, map data. Crucially, training our model does not require costly or subjective human annotations for occluded areas or the top-view, but rather uses readily available annotations for standard semantic segmentation in the perspective view. We extensively evaluate and analyze our approach on the KITTI and Cityscapes data sets.",
"",
"Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Existing methods which extract information from only a single image generally produce unsatisfactory results due to the lack of high level context. In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, while the state-of-the-art learning based method requires specific information about the holes in the training phase. Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.",
"For many AI projects, deep learning techniques are increasingly being used as the building blocks for innovative solutions ranging from image classification to object detection, image segmentation, image similarity, and text analytics (e.g., sentiment analysis, key phrase extraction). GANs, first introduced by (2014), are emerging as a powerful new approach toward teaching computers how to do complex tasks through a generative process. As noted by Yann LeCun (at http: bit.ly LeCunGANs ), GANs are truly the “coolest idea in machine learning in the last 20 years.”",
"We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders – a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.",
""
]
} |
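As a concrete sketch of the Context-Encoder-style training summarized in the related work above, the following PyTorch fragment trains an encoder-decoder to reconstruct a masked central region. Layer sizes, the central mask, and the reconstruction-only loss are illustrative assumptions; the published CE additionally uses an adversarial loss from a discriminator.

```python
import torch
import torch.nn as nn

# Context-Encoder-style inpainting in miniature: encode a masked image,
# decode a full image, and penalize reconstruction error on the hole.

class ContextEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(   # 3x64x64 -> 256x8x8
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(   # 256x8x8 -> 3x64x64
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

net = ContextEncoder()
opt = torch.optim.Adam(net.parameters(), lr=2e-4)

image = torch.rand(8, 3, 64, 64) * 2 - 1   # stand-in batch in [-1, 1]
mask = torch.zeros(8, 1, 64, 64)           # 1 marks the missing region
mask[:, :, 16:48, 16:48] = 1.0             # central hole, as in CE

pred = net(image * (1 - mask))             # network sees masked input
loss = ((pred - image) ** 2 * mask).mean() # reconstruct only the hole
opt.zero_grad()
loss.backward()
opt.step()
```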
1907.09786 | 2963372556 | We propose a novel single-step training strategy that allows convolutional encoder-decoder networks that use skip connections to complete partially observed data by means of hallucination. This strategy is demonstrated for the task of completing 2-D road layouts as well as 3-D vehicle shapes. As input, it takes data from a partially observed domain, for which no ground truth is available, and data from an unpaired prior knowledge domain and trains the network in an end-to-end manner. Our single-step training strategy is compared against two state-of-the-art baselines, one using a two-step auto-encoder training strategy and one using an adversarial strategy. Our novel strategy achieves an improvement of up to +12.2% F-measure on the Cityscapes dataset. The learned network intrinsically generalizes better than the baselines on unseen datasets, which is demonstrated by an improvement of up to +23.8% F-measure on the unseen KITTI dataset. Moreover, our approach outperforms the baselines using the same backbone network on the 3-D shape completion benchmark by a margin of 0.006 Hamming distance. | If the ground truth is not available, one has to hallucinate the region to be completed. Srikantha and Gall @cite_23 propose a system to hallucinate a depth map and a semantic map, given an RGB image and a noisy, incomplete depth map, which is able to remove the foreground objects. Schulter et al. @cite_1 propose a CNN that conducts a similar task without depth, by intentionally adding random foreground masks during training. Recently, a VAE with two-step training @cite_4 was proposed for shape completion. In the first step, a canonical VAE is trained on a complete shape prior dataset which has no direct correspondence to the incomplete shape dataset. Then, in the second training step, amortized maximum likelihood (AML) is applied as supervision on the incomplete shape data with the decoder fixed. This VAE approach was originally used for 3-D vehicle shape completion, but it can be generalized to similar tasks such as our 2-D road layout hallucinating. Therefore, we use this approach as a baseline, which is referred to as . | {
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_23"
],
"mid": [
"2798314605",
"2964294967",
"2520145460"
],
"abstract": [
"3D shape completion from partial point clouds is a fundamental problem in computer vision and computer graphics. Recent approaches can be characterized as either data-driven or learning-based. Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations. Learning-based approaches, in contrast, avoid the expensive optimization step and instead directly predict the complete shape from the incomplete observations using deep neural networks. However, full supervision is required which is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. Tackling 3D shape completion of cars on ShapeNet [5] and KITTI [18], we demonstrate that the proposed amortized maximum likelihood approach is able to compete with a fully supervised baseline and a state-of-the-art data-driven approach while being significantly faster. On ModelNet [49], we additionally show that the approach is able to generalize to other object categories as well.",
"Given a single RGB image of a complex outdoor road scene in the perspective view, we address the novel problem of estimating an occlusion-reasoned semantic scene layout in the top-view. This challenging problem not only requires an accurate understanding of both the 3D geometry and the semantics of the visible scene, but also of occluded areas. We propose a convolutional neural network that learns to predict occluded portions of the scene layout by looking around foreground objects like cars or pedestrians. But instead of hallucinating RGB values, we show that directly predicting the semantics and depths in the occluded areas enables a better transformation into the top-view. We further show that this initial top-view representation can be significantly enhanced by learning priors and rules about typical road layouts from simulated or, if available, map data. Crucially, training our model does not require costly or subjective human annotations for occluded areas or the top-view, but rather uses readily available annotations for standard semantic segmentation in the perspective view. We extensively evaluate and analyze our approach on the KITTI and Cityscapes data sets.",
"Building 3D scene models has been a longstanding goal of computer vision. The great progress in depth sensors brings us one step closer to achieving this in a single shot. However, depth sensors still produce imperfect measurements that are sparse and contain holes. While depth completion aims at tackling this issue, it ignores the fact that some regions of the scene are occluded by the foreground objects. Building a scene model would therefore require to hallucinate the depth behind these objects. In contrast with existing methods that either rely on manual input, or focus on the indoor scenario, we introduce a fully-automatic method to jointly complete and hallucinate depth and semantics in challenging outdoor scenes. To this end, we develop a two-layer model representing both the visible information and the hidden one. At the heart of our approach lies a formulation based on the Mumford-Shah functional, for which we derive an effective optimization strategy. Our experiments evidence that our approach can accurately fill the large holes in the input depth maps, segment the different kinds of objects in the scene, and hallucinate the depth and semantics behind the foreground objects."
]
} |
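The two-step strategy described above (a VAE fitted on a complete-shape prior, then amortized maximum-likelihood fitting with the decoder frozen) can be sketched as follows. The tiny fully connected networks, the Bernoulli likelihood, and the random visibility mask are simplifying assumptions for illustration, not the architecture of the cited work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Step 1: fit a VAE on a complete-shape prior with no correspondence to
# the partial data. Step 2: freeze the decoder and supervise the encoder
# only on observed voxels (amortized maximum likelihood).

enc = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64))  # -> (mu|logvar)
dec = nn.Sequential(nn.Linear(32, 32 * 32), nn.Sigmoid())

def vae_loss(x):
    h = enc(x)
    mu, logvar = h[:, :32], h[:, 32:]
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    rec = F.binary_cross_entropy(dec(z), x.flatten(1))
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean()
    return rec + kl

# Step 1: train encoder and decoder on complete prior shapes.
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()))
complete = torch.rand(16, 1, 32, 32).round()
opt.zero_grad()
vae_loss(complete).backward()
opt.step()

# Step 2: freeze the decoder; fit the encoder on partial observations.
for p in dec.parameters():
    p.requires_grad = False
opt2 = torch.optim.Adam(enc.parameters())
partial = torch.rand(16, 1, 32, 32).round()
observed = (torch.rand(16, 32 * 32) > 0.5).float()   # visibility mask
mu = enc(partial)[:, :32]                            # MAP-style code
per_voxel = F.binary_cross_entropy(dec(mu), partial.flatten(1),
                                   reduction="none")
aml = (per_voxel * observed).mean()                  # observed cells only
opt2.zero_grad()
aml.backward()
opt2.step()
```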
1907.09786 | 2963372556 | We propose a novel single-step training strategy that allows convolutional encoder-decoder networks that use skip connections to complete partially observed data by means of hallucination. This strategy is demonstrated for the task of completing 2-D road layouts as well as 3-D vehicle shapes. As input, it takes data from a partially observed domain, for which no ground truth is available, and data from an unpaired prior knowledge domain and trains the network in an end-to-end manner. Our single-step training strategy is compared against two state-of-the-art baselines, one using a two-step auto-encoder training strategy and one using an adversarial strategy. Our novel strategy achieves an improvement of up to +12.2% F-measure on the Cityscapes dataset. The learned network intrinsically generalizes better than the baselines on unseen datasets, which is demonstrated by an improvement of up to +23.8% F-measure on the unseen KITTI dataset. Moreover, our approach outperforms the baselines using the same backbone network on the 3-D shape completion benchmark by a margin of 0.006 Hamming distance. | Road layout hallucinating can be seen as a specific approach to road layout understanding, which is an important task for robot and intelligent vehicle navigation. One challenge is that ego-centric sensory data usually contains occlusions caused by foreground objects, which makes roads visually incomplete. Many works tackling occlusions focus on front-view images, such as road boundary detection @cite_9 and road segmentation @cite_22 . Less work has been carried out on top-view ego-centric sensing. In @cite_1 , the proposed system can produce a top-view road layout, while the occlusion is still addressed on the front-view image by pre-processing. In the later top-view refinement, GPS is heavily relied upon for paired reconstruction supervision. We use a variant of this method, which only uses unpaired prior knowledge and thus no GPS pairing, as the second baseline. It is referred to as . | {
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_22"
],
"mid": [
"2904840670",
"2964294967",
"2807134386"
],
"abstract": [
"This paper is about the detection and inference of road boundaries from mono-images. Our goal is to trace out, in an image, the projection of road boundaries irrespective of whether or not the boundary is actually visible. Large scale occlusion by vehicles prohibits direct approaches - many scenes present 100 occlusion and so we must infer the boundary location using scene context. Such a problem is well suited to CNN derived approaches but the sinuous structure of a hidden narrow continuous curve running through the image presents challenges for conventional NN-architectures. We approach this as a coupled, two class detection problem-solving for occluded and non-occluded curve partitions with a continuity constraint. Our network output is in a hybrid discrete-continuous form which we interpret as measurements of segments of the true road boundary. These measurements are passed to a model selection stage which associates measurements to minimal number of a-priori unknown set of geometric primitives (cubic curves) representing road boundaries. We present a semi-supervised method which leverages a visual localisation to generate 25 thousand labelled images for training and testing - the results of which are presented in the conclusion of the paper.",
"Given a single RGB image of a complex outdoor road scene in the perspective view, we address the novel problem of estimating an occlusion-reasoned semantic scene layout in the top-view. This challenging problem not only requires an accurate understanding of both the 3D geometry and the semantics of the visible scene, but also of occluded areas. We propose a convolutional neural network that learns to predict occluded portions of the scene layout by looking around foreground objects like cars or pedestrians. But instead of hallucinating RGB values, we show that directly predicting the semantics and depths in the occluded areas enables a better transformation into the top-view. We further show that this initial top-view representation can be significantly enhanced by learning priors and rules about typical road layouts from simulated or, if available, map data. Crucially, training our model does not require costly or subjective human annotations for occluded areas or the top-view, but rather uses readily available annotations for standard semantic segmentation in the perspective view. We extensively evaluate and analyze our approach on the KITTI and Cityscapes data sets.",
"Autonomous driving is becoming a reality, yet vehicles still need to rely on complex sensor fusion to understand the scene they act in. The ability to discern static environment and dynamic entities provides a comprehension of the road layout that poses constraints to the reasoning process about moving objects. We pursue this through a GAN-based semantic segmentation inpainting model to remove all dynamic objects from the scene and focus on understanding its static components such as streets, sidewalks and buildings. We evaluate this task on the Cityscapes dataset and on a novel synthetically generated dataset obtained with the CARLA simulator and specifically designed to quantitatively evaluate semantic segmentation inpaintings. We compare our methods with a variety of baselines working both in the RGB and segmentation domains."
]
} |
1907.09815 | 2963714798 | The interaction between language and visual information has been emphasized in visual question answering (VQA) with the help of the attention mechanism. However, the relationship between words in the question has been underestimated, which makes it hard to answer questions that involve the relationship between multiple entities, such as comparison and counting. In this paper, we develop graph reasoning networks to tackle this problem. Two kinds of graphs are investigated, namely inter-graph and intra-graph. The inter-graph transfers features of the detected objects to their related query words, enabling the output nodes to have both semantic and factual information. The intra-graph exchanges information between these output nodes from the inter-graph to amplify implicit yet important relationships between objects. These two kinds of graphs cooperate with each other, and thus our resulting model can reason about the relationship and dependence between objects, which leads to the realization of multi-step reasoning. Experimental results on the GQA v1.1 dataset demonstrate the reasoning ability of our method to handle compositional questions about real-world images. We achieve state-of-the-art performance, boosting accuracy to 57.04%. On the VQA 2.0 dataset, we also obtain a promising improvement on overall accuracy, especially on counting problems. | VQA is a task to answer a given question based on an input image. The question is usually embedded into a vector with an LSTM @cite_6 , and the image is represented by fixed-size grid features from a ResNet @cite_4 . Recently, @cite_28 focuses on bottom-up attention over image features and proposes a set of salient image regions, with natural-language expressions and additional attributes, detected by Faster R-CNN @cite_3 . Furthermore, its training set contains 1,600 object classes and 400 attribute classes, far more than the original 80 object classes. | {
"cite_N": [
"@cite_28",
"@cite_3",
"@cite_4",
"@cite_6"
],
"mid": [
"2745461083",
"2613718673",
"2194775991",
""
],
"abstract": [
"Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr SPICE BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
""
]
} |
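A minimal sketch of the feature pipeline described above — an LSTM question embedding attending over bottom-up region features — is given below. All sizes are assumptions: 36 regions of 2048-d (a common Faster R-CNN setting), a 512-d question state, and a 3,129-way answer classifier (a typical VQA 2.0 answer vocabulary), not values mandated by the cited papers.

```python
import torch
import torch.nn as nn

# Top-down attention over bottom-up region features, in the spirit of
# the pipeline described above (illustrative sizes only).

regions = torch.randn(8, 36, 2048)          # detected region features
q_tokens = torch.randint(0, 1000, (8, 14))  # toy question token ids

embed = nn.Embedding(1000, 300)
lstm = nn.LSTM(300, 512, batch_first=True)
_, (h_n, _) = lstm(embed(q_tokens))
q = h_n.squeeze(0)                          # (8, 512) question vector

proj_v = nn.Linear(2048, 512)
att = nn.Linear(512, 1)
v_proj = proj_v(regions)                              # (8, 36, 512)
scores = att(torch.tanh(v_proj + q.unsqueeze(1)))     # (8, 36, 1)
weights = torch.softmax(scores, dim=1)                # one weight/region
v = (weights * v_proj).sum(dim=1)                     # attended feature

logits = nn.Linear(512, 3129)(v * q)                  # fused prediction
```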
1907.09815 | 2963714798 | The interaction between language and visual information has been emphasized in visual question answering (VQA) with the help of the attention mechanism. However, the relationship between words in the question has been underestimated, which makes it hard to answer questions that involve the relationship between multiple entities, such as comparison and counting. In this paper, we develop graph reasoning networks to tackle this problem. Two kinds of graphs are investigated, namely inter-graph and intra-graph. The inter-graph transfers features of the detected objects to their related query words, enabling the output nodes to have both semantic and factual information. The intra-graph exchanges information between these output nodes from the inter-graph to amplify implicit yet important relationships between objects. These two kinds of graphs cooperate with each other, and thus our resulting model can reason about the relationship and dependence between objects, which leads to the realization of multi-step reasoning. Experimental results on the GQA v1.1 dataset demonstrate the reasoning ability of our method to handle compositional questions about real-world images. We achieve state-of-the-art performance, boosting accuracy to 57.04%. On the VQA 2.0 dataset, we also obtain a promising improvement on overall accuracy, especially on counting problems. | Based on the fusion methods of the two features, we can classify VQA models into two categories: early fusion models and late fusion models. Early fusion models try to fine-tune the image classification network with the intervention of the question: they insert the question embedding into the batch normalization layers @cite_23 @cite_31 , yielding the MODERN architecture. These models have less risk of over-fitting because they affect less than 1% of the network's parameters. However, sometimes the information given in the image is not enough to infer the right answer, and common sense is required in external knowledge-based models. @cite_9 builds the FVQA dataset based on DBpedia @cite_30 , ConceptNet @cite_14 and WebChild @cite_34 . @cite_1 queries the triplet (visual concept, relation, attribute) in this dataset to score the retrieved facts. And @cite_32 builds a relation graph based on the retrieved facts, regarding the visual concepts and attributes as nodes and the relations as links, to exchange information. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_32",
"@cite_23",
"@cite_31",
"@cite_34"
],
"mid": [
"102708294",
"2016089260",
"2964303913",
"2891394954",
"2962967746",
"2963245493",
"",
"1862719289"
],
"abstract": [
"DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human-andmachine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.",
"ConceptNet is a freely available commonsense knowledge base and natural-language-processing tool-kit which supports many practical textual-reasoning tasks over real-world documents including topic-gisting, analogy-making, and other context oriented inferences. The knowledge base is a semantic network presently consisting of over 1.6 million assertions of commonsense knowledge encompassing the spatial, physical, social, temporal, and psychological aspects of everyday life. ConceptNet is generated automatically from the 700 000 sentences of the Open Mind Common Sense Project — a World Wide Web based collaboration with over 14 000 authors.",
"Visual Question Answering (VQA) has attracted much attention in both computer vision and natural language processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone. The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense, or basic factual knowledge to answer, for example. Here we introduce FVQA (Fact-based VQA), a VQA dataset which requires, and supports, much deeper reasoning. FVQA primarily contains questions that require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answer triplets, through additional image-question-answer-supporting fact tuples. Each supporting-fact is represented as a structural triplet, such as . We evaluate several baseline models on the FVQA dataset, and describe a novel model which is capable of reasoning about an image on the basis of supporting-facts.",
"Question answering is an important task for autonomous agents and virtual assistants alike and was shown to support the disabled in efficiently navigating an overwhelming environment. Many existing methods focus on observation-based questions, ignoring our ability to seamlessly combine observed content with general knowledge. To understand interactions with a knowledge base, a dataset has been introduced recently and keyword matching techniques were shown to yield compelling results despite being vulnerable to misconceptions due to synonyms and homographs. To address this issue, we develop a learning-based approach which goes straight to the facts via a learned embedding space. We demonstrate state-of-the-art results on the challenging recently introduced fact-based visual question answering dataset, outperforming competing methods by more than (5 ).",
"Accurately answering a question about a given image requires combining observations with general knowledge. While this is effortless for humans, reasoning with general knowledge remains an algorithmic challenge. To advance research in this direction, a novel fact-based' visual question answering (FVQA) task has been introduced recently along with a large set of curated facts which link two entities, i.e., two possible answers, via a relation. Given a question-image pair, deep net techniques have been employed to successively reduce the large set of facts until one of the two entities of the final remaining fact is predicted as the answer. We observe that a successive process which considers one fact at a time to form a local decision is sub-optimal. Instead, we develop an entity graph and a graph convolutional net method toreason' about the correct answer by jointly considering all entities. We show on the challenging FVQA dataset that this leads to an improvement in accuracy of around 10 compared to the state-of-the-art.",
"It is commonly assumed that language refers to high-level visual concepts while leaving low-level visual processing unaffected. This view dominates the current literature in computational models for language-vision tasks, where visual and linguistic inputs are mostly processed independently before being fused into a single representation. In this paper, we deviate from this classic pipeline and propose to modulate the by a linguistic input. Specifically, we introduce Conditional Batch Normalization (CBN) as an efficient mechanism to modulate convolutional feature maps by a linguistic embedding. We apply CBN to a pre-trained Residual Network (ResNet), leading to the MODulatEd ResNet ( ) architecture, and show that this significantly improves strong baselines on two visual question answering tasks. Our ablation study confirms that modulating from the early stages of the visual processing is beneficial.",
"",
"Applications are increasingly expected to make smart decisions based on what humans consider basic commonsense. An often overlooked but essential form of commonsense involves comparisons, e.g. the fact that bears are typically more dangerous than dogs, that tables are heavier than chairs, or that ice is colder than water. In this paper, we first rely on open information extraction methods to obtain large amounts of comparisons from the Web. We then develop a joint optimization model for cleaning and disambiguating this knowledge with respect to WordNet. This model relies on integer linear programming and semantic coherence scores. Experiments show that our model outperforms strong baselines and allows us to obtain a large knowledge base of disambiguated commonsense assertions."
]
} |
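The early-fusion idea of inserting the question into batch normalization, described above, can be sketched as conditional batch normalization: the question embedding predicts per-channel scale and shift deltas around the identity, so language modulates visual features while only a small fraction of parameters is question-driven. The sketch below is an assumption-laden simplification, not the exact MODERN architecture.

```python
import torch
import torch.nn as nn

# Conditional batch normalization: language predicts small per-channel
# deltas for the BN scale and shift (illustrative sizes only).

class CondBatchNorm2d(nn.Module):
    def __init__(self, channels, q_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=False)
        self.to_gamma = nn.Linear(q_dim, channels)  # delta-scale head
        self.to_beta = nn.Linear(q_dim, channels)   # delta-shift head

    def forward(self, x, q):
        gamma = 1.0 + self.to_gamma(q)   # stay close to identity scale
        beta = self.to_beta(q)
        x = self.bn(x)
        return gamma[:, :, None, None] * x + beta[:, :, None, None]

feat = torch.randn(8, 256, 14, 14)   # e.g., a frozen CNN feature map
q = torch.randn(8, 512)              # question embedding
out = CondBatchNorm2d(256, 512)(feat, q)
```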
1907.09871 | 2963864879 | The paper details the first successful attempt at using model-checking techniques to verify the correctness of distributed algorithms for robots evolving in a continuous environment. The study focuses on the problem of rendezvous of two robots with lights. There exist many different rendezvous algorithms that aim at finding the minimal number of colors needed to solve rendezvous in various synchrony models (e.g., FSYNC, SSYNC, ASYNC). While these rendezvous algorithms are typically very simple, their analysis and proof of correctness tend to be extremely complex, tedious, and error-prone, as impossibility results are based on subtle interactions between robots' activation schedules. The paper presents a generic verification model written for the SPIN model-checker. In particular, we explain the subtle design decisions that allow us to keep the search space finite and tractable, as well as prove several important theorems that support them. As a sanity check, we use the model to verify several known rendezvous algorithms in six different models of synchrony. In each case, we find that the results obtained from the model-checker are consistent with the results known in the literature. The model-checker outputs a counter-example execution in every case that is known to fail. In the course of developing and proving the validity of the model, we identified several fundamental theorems, including the ability for a well-chosen algorithm and ASYNC scheduler to produce an emerging property of memory in a system of oblivious mobile robots, and why it is not a problem for luminous rendezvous algorithms. | Designing and proving mobile robot protocols is notoriously difficult. Formal methods encompass a long-lasting path of research that is meant to overcome errors of human origin. Unsurprisingly, this mechanized approach to protocol correctness was used in the context of mobile robots @cite_11 @cite_32 @cite_5 @cite_18 @cite_1 @cite_27 @cite_15 @cite_28 @cite_21 @cite_2 . | {
"cite_N": [
"@cite_18",
"@cite_28",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_11"
],
"mid": [
"1750856813",
"2264618208",
"2963688871",
"179805188",
"1662376798",
"2963504389",
"2782960731",
"2338820189",
"",
"2538025500"
],
"abstract": [
"We propose a framework to build formal developments for robot networks using the Coq proof assistant, to state and prove formally various properties. We focus in this paper on impossibility proofs, as it is natural to take advantage of the Coq higher order calculus to reason about algorithms as abstract objects. We present in particular formal proofs of two impossibility results for convergence of oblivious mobile robots if respectively more than one half and more than one third of the robots exhibit Byzantine failures, starting from the original theorems by . Thanks to our formalisation, the corresponding Coq developments are quite compact. To our knowledge, these are the first certified (in the sense of formally proved) impossibility results for robot networks.",
"This paper establishes a framework based on logic and automata theory in which to model and automatically verify that multiple mobile robots, with sensing abilities, moving asynchronously, correctly perform their tasks. The motivation is from practical scenarios in which the environment is not completely know to the robots, e.g., physical robots exploring a maze, or software agents exploring a hostile network. The framework shows how to express tasks in a logical language, and exhibits an algorithm solving the parameterised verification problem, where the graphs are treated as the parameter. The main assumption that yields decidability is that the robots take a bounded number of turns. We prove that dropping this assumption results in undecidability, even for robots with very limited (“local”) sensing abilities.",
"We study verification problems for autonomous swarms of mobile robots that self-organize and cooperate to solve global objectives. In particular, we focus in this paper on the model proposed by Suzuki and Yamashita of anonymous robots evolving in a discrete space with a finite number of locations (here, a ring). A large number of algorithms have been proposed working for rings whose size is not a priori fixed and can be hence considered as a parameter. Handmade correctness proofs of these algorithms have been shown to be error-prone, and recent attention had been given to the application of formal methods to automatically prove those. Our work is the first to study the verification problem of such algorithms in the parameterized case. We show that safety and reachability problems are undecidable for robots evolving asynchronously. On the positive side, we show that safety properties are decidable in the synchronous case, as well as in the asynchronous case for a particular class of algorithms. Several properties on the protocol can be decided as well. Decision procedures rely on an encoding in Presburger arithmetics formulae that can be verified by an SMT-solver. Feasibility of our approach is demonstrated by the encoding of several case studies.",
"Recent advances in Distributed Computing highlight models and algorithms for autonomous swarms of mobile robots that self-organize and cooperate to solve global objectives. The overwhelming majority of works so far considers handmade algorithms and correctness proofs.",
"We consider deterministic terminating exploration of a grid by a team of asynchronous oblivious robots. We first consider the semi-synchronous atomic model ATOM. In this model, we exhibit the minimal number of robots to solve the problem w.r.t. the size of the grid. We then consider the asynchronous non-atomic model CORDA. ATOM being strictly stronger than CORDA, the previous bounds also hold in CORDA, and we propose deterministic algorithms in CORDA that matches these bounds. The above results show that except in two particular cases, 3 robots are necessary and sufficient to deterministically explore a grid of at least three nodes. The optimal number of robots for the two remaining cases is: 4 for the (2,2)-Grid and 5 for the (3,3)-Grid, respectively.",
"Recent advances in Distributed Computing highlight models and algorithms for autonomous swarms of mobile robots that self-organise and cooperate to solve global objectives. The overwhelming majority of works so far considers handmade algorithms and proofs of correctness. This paper builds upon a previously proposed formal framework to certify the correctness of impossibility results regarding distributed algorithms that are dedicated to autonomous mobile robots evolving in a continuous space. As a case study, we consider the problem of gathering all robots at a particular location, not known beforehand. A fundamental (but not yet formally certified) result, due to Suzuki and Yamashita, states that this simple task is impossible for two robots executing deterministic code and initially located at distinct positions. Not only do we obtain a certified proof of the original impossibility result, we also get the more general impossibility of gathering with an even number of robots, when any two robots are possibly initially at the same exact location.",
"Swarms of mobile robots recently attracted the focus of the Distributed Computing community. One of the fundamental problems in this context is that of exploration: the robots must coordinate to visit all locations that are reachable from their initial positions. Despite its apparent simplicity, this problem proved quite hard to characterise fully, due to many model variants, leading to informal error-prone reasoning. Over the past few years, a significant effort permitted to set up a formal framework, relying on the Coq proof assistant, which was used to provide certified results when robots evolve in a continuous bi-dimensional Euclidean space. However, the most challenging issues with exploration arise in the discrete setting (a.k.a. graph), where locations are modeled as vertices and where edges between vertices denote the ability for a robot to move from one location to the next. We present a formal model to tackle problems and reason about robot algorithms arising in the discrete setting. Our approach extends and generalises previous research efforts focusing on the continuous model. As case studies, we consider fundamental impossibility results for exploration with stop in the discrete model. To our knowledge, those are the first certified results in this context. This framework paves the way for a general certification workflow dedicated to mobile robots on graphs.",
"Mobile robot networks emerged in the past few years as a promising distributed computing model. Existing work in the literature typically ensures the correctness of mobile robot protocols via ad hoc handwritten proofs, which, in the case of asynchronous execution models, are both cumbersome and error-prone. Our contribution is twofold. We first propose a formal model to describe mobile robot protocols operating in a discrete space i.e., with a finite set of possible robot positions, under synchrony and asynchrony assumptions. We translate this formal model into the DVE language, which is the input format of the model-checkers DiVinE and ITS tools, and formally prove the equivalence of the two models. We then verify several instances of two existing protocols for variants of the ring exploration in an asynchronous setting: exploration with stop and perpetual exclusive exploration. For the first protocol we refine the correctness bounds and for the second one, we exhibit a counter-example. This protocol is then modified and we establish the correctness of the new version with an inductive proof.",
"",
"The model of autonomous oblivious and anonymous mobile robots recently emerged as an attractive distributed computing abstraction that permits to assess the intrinsic difficulties of many fundamentals tasks, such as exploring or gathering in a discrete space. We present and implement a generic method for obtaining all possible protocols for a swarm of mobile robots operating in a particular discrete space. We use the exclusive perpetual exploration of anonymous rings as a case study. Our method permits to discover new protocols that solve the problem, and to assess specific optimization criteria (such as individual coverage, visits frequency, etc.) that are met by those protocols. To our knowledge, this is the first attempt to mechanize the discovery and fine-grained property testing of distributed mobile robot protocols."
]
} |
1907.09871 | 2963864879 | The paper details the first successful attempt at using model-checking techniques to verify the correctness of distributed algorithms for robots evolving in a continuous environment. The study focuses on the problem of rendezvous of two robots with lights. There exist many different rendezvous algorithms that aim at finding the minimal number of colors needed to solve rendezvous in various synchrony models (e.g., FSYNC, SSYNC, ASYNC). While these rendezvous algorithms are typically very simple, their analysis and proof of correctness tend to be extremely complex, tedious, and error-prone, as impossibility results are based on subtle interactions between robots' activation schedules. The paper presents a generic verification model written for the SPIN model-checker. In particular, we explain the subtle design decisions that allow us to keep the search space finite and tractable, as well as prove several important theorems that support them. As a sanity check, we use the model to verify several known rendezvous algorithms in six different models of synchrony. In each case, we find that the results obtained from the model-checker are consistent with the results known in the literature. The model-checker outputs a counter-example execution in every case that is known to fail. In the course of developing and proving the validity of the model, we identified several fundamental theorems, including the ability for a well-chosen algorithm and ASYNC scheduler to produce an emerging property of memory in a system of oblivious mobile robots, and why it is not a problem for luminous rendezvous algorithms. | When robots move freely in a continuous two-dimensional Euclidean space (as considered in this paper), to the best of our knowledge the only formal framework available is Pactole. http://pactole.lri.fr It relies on higher-order logic to certify impossibility results @cite_18 @cite_27 @cite_2 , as well as the correctness of algorithms @cite_25 @cite_32 in the and models, possibly for an arbitrary number of robots (hence in a scalable manner). Pactole was recently extended by Balabonski et al. @cite_29 to handle the model, thanks to its modular design. However, in its current form, Pactole lacks automation; that is, in order to prove a result formally, one still has to write the proof (which is then automatically verified), which requires expertise both in Coq (the language Pactole is based upon) and in the mathematical and logical arguments one should use to complete the proof. | {
"cite_N": [
"@cite_18",
"@cite_29",
"@cite_32",
"@cite_27",
"@cite_2",
"@cite_25"
],
"mid": [
"1750856813",
"2896453912",
"1662376798",
"2963504389",
"2782960731",
""
],
"abstract": [
"We propose a framework to build formal developments for robot networks using the Coq proof assistant, to state and prove formally various properties. We focus in this paper on impossibility proofs, as it is natural to take advantage of the Coq higher order calculus to reason about algorithms as abstract objects. We present in particular formal proofs of two impossibility results for convergence of oblivious mobile robots if respectively more than one half and more than one third of the robots exhibit Byzantine failures, starting from the original theorems by . Thanks to our formalisation, the corresponding Coq developments are quite compact. To our knowledge, these are the first certified (in the sense of formally proved) impossibility results for robot networks.",
"Networks of mobile robots captured the attention of the distributed computing community, as they promise new application (rescue, exploration, surveillance) in potentially harmful environments.",
"We consider deterministic terminating exploration of a grid by a team of asynchronous oblivious robots. We first consider the semi-synchronous atomic model ATOM. In this model, we exhibit the minimal number of robots to solve the problem w.r.t. the size of the grid. We then consider the asynchronous non-atomic model CORDA. ATOM being strictly stronger than CORDA, the previous bounds also hold in CORDA, and we propose deterministic algorithms in CORDA that matches these bounds. The above results show that except in two particular cases, 3 robots are necessary and sufficient to deterministically explore a grid of at least three nodes. The optimal number of robots for the two remaining cases is: 4 for the (2,2)-Grid and 5 for the (3,3)-Grid, respectively.",
"Recent advances in Distributed Computing highlight models and algorithms for autonomous swarms of mobile robots that self-organise and cooperate to solve global objectives. The overwhelming majority of works so far considers handmade algorithms and proofs of correctness. This paper builds upon a previously proposed formal framework to certify the correctness of impossibility results regarding distributed algorithms that are dedicated to autonomous mobile robots evolving in a continuous space. As a case study, we consider the problem of gathering all robots at a particular location, not known beforehand. A fundamental (but not yet formally certified) result, due to Suzuki and Yamashita, states that this simple task is impossible for two robots executing deterministic code and initially located at distinct positions. Not only do we obtain a certified proof of the original impossibility result, we also get the more general impossibility of gathering with an even number of robots, when any two robots are possibly initially at the same exact location.",
"Swarms of mobile robots recently attracted the focus of the Distributed Computing community. One of the fundamental problems in this context is that of exploration: the robots must coordinate to visit all locations that are reachable from their initial positions. Despite its apparent simplicity, this problem proved quite hard to characterise fully, due to many model variants, leading to informal error-prone reasoning. Over the past few years, a significant effort permitted to set up a formal framework, relying on the Coq proof assistant, which was used to provide certified results when robots evolve in a continuous bi-dimensional Euclidean space. However, the most challenging issues with exploration arise in the discrete setting (a.k.a. graph), where locations are modeled as vertices and where edges between vertices denote the ability for a robot to move from one location to the next. We present a formal model to tackle problems and reason about robot algorithms arising in the discrete setting. Our approach extends and generalises previous research efforts focusing on the continuous model. As case studies, we consider fundamental impossibility results for exploration with stop in the discrete model. To our knowledge, those are the first certified results in this context. This framework paves the way for a general certification workflow dedicated to mobile robots on graphs.",
""
]
} |
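To give intuition for the kind of exhaustive search the rendezvous verification in this record performs, here is a toy Python analogue (not SPIN or Promela, and not the paper's model): it explores a deliberately crude finite abstraction of two-robot rendezvous with lights and searches for a scheduler that keeps the robots apart forever. The color rule below is hypothetical, chosen so that the search actually finds a livelock.

```python
from collections import deque

# Toy analogue of a model checker's exhaustive search, for intuition
# only. A state is (apart?, color_1, color_2); moves are rigid, so any
# robot that moves toward the other closes the gap; the scheduler may
# activate robot 1, robot 2, or both (SSYNC-style).

def algorithm(own, other, apart):
    """Hypothetical 2-color rule: always toggle the light, and move
    toward the other robot only when the two lights differ."""
    return ("B" if own == "A" else "A"), (apart and own != other)

def successors(state):
    apart, c1, c2 = state
    for active in ((0,), (1,), (0, 1)):
        cols, moved = [c1, c2], False
        for i in active:
            cols[i], m = algorithm((c1, c2)[i], (c1, c2)[1 - i], apart)
            moved = moved or m
        yield (apart and not moved, cols[0], cols[1])

start = (True, "A", "A")
seen, frontier = {start}, deque([start])
while frontier:
    for t in successors(frontier.popleft()):
        if t not in seen:
            seen.add(t)
            frontier.append(t)

# Livelock check: greatest fixpoint of "apart, and some successor stays
# in the set" -- the scheduler can keep the robots apart forever.
avoid = {s for s in seen if s[0]}
changed = True
while changed:
    changed = False
    for s in list(avoid):
        if not any(t in avoid for t in successors(s)):
            avoid.discard(s)
            changed = True

print("reachable states:", len(seen))
print("livelock witnesses:", sorted(avoid) or "none")
```

For this particular rule the search finds a genuine livelock: a scheduler that always activates both robots keeps the two lights equal, so neither robot ever moves — exactly the kind of counter-example schedule a model checker would output.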
1907.09871 | 2963864879 | The paper details the first successful attempt at using model-checking techniques to verify the correctness of distributed algorithms for robots evolving in a continuous environment. The study focuses on the problem of rendezvous of two robots with lights. There exist many different rendezvous algorithms that aim at finding the minimal number of colors needed to solve rendezvous in various synchrony models (e.g., FSYNC, SSYNC, ASYNC). While these rendezvous algorithms are typically very simple, their analysis and proof of correctness tend to be extremely complex, tedious, and error-prone, as impossibility results are based on subtle interactions between robots' activation schedules. The paper presents a generic verification model written for the SPIN model-checker. In particular, we explain the subtle design decisions that allow us to keep the search space finite and tractable, as well as prove several important theorems that support them. As a sanity check, we use the model to verify several known rendezvous algorithms in six different models of synchrony. In each case, we find that the results obtained from the model-checker are consistent with the results known in the literature. The model-checker outputs a counter-example execution in every case that is known to fail. In the course of developing and proving the validity of the model, we identified several fundamental theorems, including the ability for a well-chosen algorithm and ASYNC scheduler to produce an emerging property of memory in a system of oblivious mobile robots, and why it is not a problem for luminous rendezvous algorithms. | On the other side, model checking and its derivatives (automatic program synthesis, parameterized model checking) hint at more automation once a suitable model has been defined with the input language of the model checker. In particular, model checking proved useful to find bugs (usually in the setting) @cite_5 @cite_13 @cite_6 and to formally check the correctness of published algorithms @cite_32 @cite_5 @cite_28 . Automatic program synthesis @cite_11 @cite_1 was used to automatically obtain algorithms that are "correct-by-design". However, those approaches are limited to instances with few robots. Generalizing them to an arbitrary number of robots with similar models is doubtful, as Sangnier et al. @cite_21 proved that safety and reachability problems are undecidable in the parameterized case. Another limitation of the above approaches is that they consider cases where mobile robots evolve in a discrete space (i.e., a graph). This limitation is due to the model used, which closely matches the original execution model by Suzuki and Yamashita @cite_19 . As a computer can only model a finite set of locations, a continuous 2D Euclidean space cannot be expressed in this model. | {
"cite_N": [
"@cite_28",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_6",
"@cite_19",
"@cite_5",
"@cite_13",
"@cite_11"
],
"mid": [
"2264618208",
"2963688871",
"179805188",
"1662376798",
"2795221731",
"2044484214",
"2338820189",
"2606351206",
"2538025500"
],
"abstract": [
"This paper establishes a framework based on logic and automata theory in which to model and automatically verify that multiple mobile robots, with sensing abilities, moving asynchronously, correctly perform their tasks. The motivation is from practical scenarios in which the environment is not completely know to the robots, e.g., physical robots exploring a maze, or software agents exploring a hostile network. The framework shows how to express tasks in a logical language, and exhibits an algorithm solving the parameterised verification problem, where the graphs are treated as the parameter. The main assumption that yields decidability is that the robots take a bounded number of turns. We prove that dropping this assumption results in undecidability, even for robots with very limited (“local”) sensing abilities.",
"We study verification problems for autonomous swarms of mobile robots that self-organize and cooperate to solve global objectives. In particular, we focus in this paper on the model proposed by Suzuki and Yamashita of anonymous robots evolving in a discrete space with a finite number of locations (here, a ring). A large number of algorithms have been proposed working for rings whose size is not a priori fixed and can be hence considered as a parameter. Handmade correctness proofs of these algorithms have been shown to be error-prone, and recent attention had been given to the application of formal methods to automatically prove those. Our work is the first to study the verification problem of such algorithms in the parameterized case. We show that safety and reachability problems are undecidable for robots evolving asynchronously. On the positive side, we show that safety properties are decidable in the synchronous case, as well as in the asynchronous case for a particular class of algorithms. Several properties on the protocol can be decided as well. Decision procedures rely on an encoding in Presburger arithmetics formulae that can be verified by an SMT-solver. Feasibility of our approach is demonstrated by the encoding of several case studies.",
"Recent advances in Distributed Computing highlight models and algorithms for autonomous swarms of mobile robots that self-organize and cooperate to solve global objectives. The overwhelming majority of works so far considers handmade algorithms and correctness proofs.",
"We consider deterministic terminating exploration of a grid by a team of asynchronous oblivious robots. We first consider the semi-synchronous atomic model ATOM. In this model, we exhibit the minimal number of robots to solve the problem w.r.t. the size of the grid. We then consider the asynchronous non-atomic model CORDA. ATOM being strictly stronger than CORDA, the previous bounds also hold in CORDA, and we propose deterministic algorithms in CORDA that matches these bounds. The above results show that except in two particular cases, 3 robots are necessary and sufficient to deterministically explore a grid of at least three nodes. The optimal number of robots for the two remaining cases is: 4 for the (2,2)-Grid and 5 for the (3,3)-Grid, respectively.",
"Recent advances in distributed computing highlight models and algorithms for autonomous mo- bile robots that self-organize and cooperate together in order to solve a global objective. As results, a large number of algorithms have been proposed. These algorithms are given together with proofs to assess their correctness. However, those proofs are informal, which are error prone. This paper presents our study on formal verification of mobile robot algorithms. We first propose a formal model for mobile robot algorithms on anonymous ring shape network under multiplicity and asynchrony assumptions. We specify this formal model in Maude, a specification and pro- gramming language based on rewriting logic. We then use its model checker to formally verify an algorithm for robot gathering problem on ring enjoys some desired properties. As the result of the model checking, counterexamples have been found. We detect the sources of some unforeseen design errors. We, furthermore, give our interpretations of these errors.",
"In this note we make a minor correction to a scheme for robots to broadcast their private information. All major results of the paper [I. Suzuki and M. Yamashita, SIAM J. Comput., 28 (1999), pp. 1347-1363] hold with this correction.",
"Mobile robot networks emerged in the past few years as a promising distributed computing model. Existing work in the literature typically ensures the correctness of mobile robot protocols via ad hoc handwritten proofs, which, in the case of asynchronous execution models, are both cumbersome and error-prone. Our contribution is twofold. We first propose a formal model to describe mobile robot protocols operating in a discrete space i.e., with a finite set of possible robot positions, under synchrony and asynchrony assumptions. We translate this formal model into the DVE language, which is the input format of the model-checkers DiVinE and ITS tools, and formally prove the equivalence of the two models. We then verify several instances of two existing protocols for variants of the ring exploration in an asynchronous setting: exploration with stop and perpetual exclusive exploration. For the first protocol we refine the correctness bounds and for the second one, we exhibit a counter-example. This protocol is then modified and we establish the correctness of the new version with an inductive proof.",
"Distributed mobile computing has been recently an active field of research, resulting in a large number of algorithms. However, to the best of our knowledge, few of the designed algorithms have been formally model checked. This paper presents a case study of how to specify and model check a given robot algorithm. We specify the system in Maude, a rewriting logic-based programming and specification language. To check the correctness of the algorithm, we express in LTL the properties it should enjoy. Our analysis leads to a counterexample which implies that the proposed algorithm is not correct.",
"The model of autonomous oblivious and anonymous mobile robots recently emerged as an attractive distributed computing abstraction that permits to assess the intrinsic difficulties of many fundamentals tasks, such as exploring or gathering in a discrete space. We present and implement a generic method for obtaining all possible protocols for a swarm of mobile robots operating in a particular discrete space. We use the exclusive perpetual exploration of anonymous rings as a case study. Our method permits to discover new protocols that solve the problem, and to assess specific optimization criteria (such as individual coverage, visits frequency, etc.) that are met by those protocols. To our knowledge, this is the first attempt to mechanize the discovery and fine-grained property testing of distributed mobile robot protocols."
]
} |
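To make the explicit-state idea behind the record above concrete, here is a minimal, self-contained Python sketch — not the paper's SPIN/Promela model — that exhaustively checks a toy two-robot rendezvous algorithm with lights under the FSYNC scheduler. The two-color rule in `algo`, the (gathered, color, color) state abstraction, and the move vocabulary are all hypothetical simplifications.

```python
# Hedged sketch of explicit-state model checking for luminous rendezvous
# under FSYNC. All modeling choices below are illustrative assumptions.
from itertools import product

COLORS = ("A", "B")

def algo(same_position, my_color, other_color):
    """Hypothetical 2-color rule: returns (new_color, move), where move is
    'stay', 'to_other' (jump onto the other robot) or 'half' (midpoint)."""
    if same_position:
        return my_color, "stay"
    if my_color == "A" and other_color == "A":
        return "B", "half"
    if my_color == "B" and other_color == "B":
        return "A", "to_other"
    return my_color, "stay"

def fsync_step(state):
    """One synchronous round on the abstract state (gathered, c1, c2)."""
    gathered, c1, c2 = state
    n1, m1 = algo(gathered, c1, c2)
    n2, m2 = algo(gathered, c2, c1)
    if gathered:
        return True, n1, n2
    moves = {m1, m2}
    # The robots meet if both move to the midpoint, or exactly one jumps
    # onto the other while that other one stays.
    if moves == {"half"} or moves == {"to_other", "stay"}:
        return True, n1, n2
    # Both jumping swaps positions; the distance abstraction is unchanged.
    return False, n1, n2

def check_rendezvous():
    """Explore every initial light assignment; detect cycles that never
    gather -- the analogue of a model checker's counter-example."""
    for c1, c2 in product(COLORS, repeat=2):
        state, seen = (False, c1, c2), set()
        while not state[0]:
            if state in seen:
                return False, (c1, c2)  # looping execution: rendezvous fails
            seen.add(state)
            state = fsync_step(state)
    return True, None

print(check_rendezvous())  # -> (False, ('A', 'B')): a counter-example
```

Running the sketch reports the mixed initial light assignment as a looping counter-example, mirroring how the model checker in the record above emits a counter-example execution whenever an algorithm is known to fail; a real model would of course also have to cover the SSYNC/ASYNC schedulers and actual robot positions.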
1907.09939 | 2963333474 | Understanding large amounts of spatiotemporal data from particle-based simulations, such as molecular dynamics, often relies on the computation and analysis of aggregate measures. These, however, by virtue of aggregation, hide structural information about the space time localization of the studied phenomena. This leads to degenerate cases where the measures fail to capture distinct behaviour. In order to drill into these aggregate values, we propose a multi-scale visual exploration technique. Our novel representation, based on partial domain aggregation, enables the construction of a continuous scale-space for discrete datasets and the simultaneous exploration of scales in both space and time. We link these two scale-spaces in a scale-space space-time cube and model linked views as orthogonal slices through this cube, thus enabling the rapid identification of spatio-temporal patterns at multiple scales. To demonstrate the effectiveness of our approach, we showcase an advanced exploration of a protein-ligand simulation. | This paper contributes to the analysis of data that usually would be treated as a time series of spatially aggregated values. A good overview of visualization techniques for time-dependent data can be found in a book by @cite_57 . They focus primarily on time-dependent data in general, without specifically addressing spatial localization or scale-space approaches. @cite_28 proposed an analysis based on multivariate trend identification in time dependent data without spatial dependence. @cite_9 explore the use of parallel coordinates for the visualization of trajectories in multi-dimensional dynamical systems. | {
"cite_N": [
"@cite_57",
"@cite_9",
"@cite_28"
],
"mid": [
"1576209423",
"2042862761",
"2105832013"
],
"abstract": [
"Time is an exceptional dimension that is common to many application domains such as medicine, engineering, business, or science. Due to the distinct characteristics of time, appropriate visual and analytical methods are required to explore and analyze them. This book starts with an introduction to visualization and historical examples of visual representations. At its core, the book presents and discusses a systematic view of the visualization of time-oriented data along three key questions: what is being visualized (data), why something is visualized (user tasks), and how it is presented (visual representation). To support visual exploration, interaction techniques and analytical methods are required that are discussed in separate chapters. A large part of this book is devoted to a structured survey of 101 different visualization techniques as a reference for scientists conducting related research as well as for practitioners seeking information on how their time-oriented data can best be visualized.",
"In recent years scientific visualization has been driven by the need to visualize high-dimensional data sets within high-dimensional spaces. However most visualization methods are designed to show only some statistical features of the data set. The paper deals with the visualization of trajectories of high-dimensional dynamical systems which form a L sub n sup n data set of a smooth n-dimensional flow. Three methods that are based on the idea of parallel coordinates are presented and discussed. Visualizations done with these new methods are shown and an interactive visualization tool for the exploration of high-dimensional dynamical systems is proposed.",
"We present a new algorithm to explore and visualize multivariate time-varying data sets. We identify important trend relationships among the variables based on how the values of the variables change over time and how those changes are related to each other in different spatial regions and time intervals. The trend relationships can be used to describe the correlation and causal effects among the different variables. To identify the temporal trends from a local region, we design a new algorithm called SUBDTW to estimate when a trend appears and vanishes in a given time series. Based on the beginning and ending times of the trends, their temporal relationships can be modeled as a state machine representing the trend sequence. Since a scientific data set usually contains millions of data points, we propose an algorithm to extract important trend relationships in linear time complexity. We design novel user interfaces to explore the trend relationships, to visualize their temporal characteristics, and to display their spatial distributions. We use several scientific data sets to test our algorithm and demonstrate its utilities."
]
} |
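The record above treats the data as a time series of aggregated values; the paper's own contribution is to explore such series across temporal scales. As a rough, hedged illustration of that idea (not the authors' partial-domain-aggregation implementation), the following Python sketch stacks box-filtered copies of a per-frame measure into a simple temporal scale-space; the window sizes and the box kernel are arbitrary choices.

```python
# Toy temporal scale-space: row s holds the series aggregated at scale s.
import numpy as np

def temporal_scale_space(series, num_scales=6):
    series = np.asarray(series, dtype=float)
    rows = [series]                              # scale 0: original series
    for s in range(1, num_scales):
        w = 2 * s + 1                            # window grows with scale
        kernel = np.ones(w) / w                  # box aggregation; a Gaussian
        rows.append(np.convolve(series, kernel, mode="same"))  # also works
    return np.stack(rows)                        # (num_scales, len(series))

t = np.linspace(0, 8 * np.pi, 400)
signal = np.sin(t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)
print(temporal_scale_space(signal).shape)        # (6, 400)
```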
1907.09939 | 2963333474 | Understanding large amounts of spatiotemporal data from particle-based simulations, such as molecular dynamics, often relies on the computation and analysis of aggregate measures. These, however, by virtue of aggregation, hide structural information about the space time localization of the studied phenomena. This leads to degenerate cases where the measures fail to capture distinct behaviour. In order to drill into these aggregate values, we propose a multi-scale visual exploration technique. Our novel representation, based on partial domain aggregation, enables the construction of a continuous scale-space for discrete datasets and the simultaneous exploration of scales in both space and time. We link these two scale-spaces in a scale-space space-time cube and model linked views as orthogonal slices through this cube, thus enabling the rapid identification of spatio-temporal patterns at multiple scales. To demonstrate the effectiveness of our approach, we showcase an advanced exploration of a protein-ligand simulation. | Our approach is based on the concept of the space-time cube, coined by Hägerstrand @cite_16 in 1970. Since then, it has been repeatedly used, either explicitly or as an underlying concept. Recently, @cite_35 presented a useful overview of related techniques. They describe the theoretical concept of a generalized space-time cube together with a taxonomy of all elementary space-time operations and their combinations that can be performed on such a space-time cube. A more abstract approach to time-dependent volume data analysis has been proposed by @cite_41 , based on hyperplane slicing of a four-dimensional space-time hypercube. Other examples of slicing higher-dimensional data (not necessarily time dependent) include Sliceplorer @cite_0 , Hyperslice @cite_19 , Hypersliceplorer @cite_26 and HyperMoVal @cite_27 . | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_41",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_16"
],
"mid": [
"2393302563",
"2815781823",
"2097621063",
"2626618469",
"2103111128",
"1979686514",
"2133649345"
],
"abstract": [
"We present the generalized space-time cube, a descriptive model for visualizations of temporal data. Visualizations are described as operations on the cube, which transform the cube's 3D shape into readable 2D visualizations. Operations include extracting subparts of the cube, flattening it across space or time or transforming the cubes geometry and content. We introduce a taxonomy of elementary space-time cube operations and explain how these operations can be combined and parameterized. The generalized space-time cube has two properties: 1 it is purely conceptual without the need to be implemented, and 2 it applies to all datasets that can be represented in two dimensions plus time e.g. geo-spatial, videos, networks, multivariate data. The proper choice of space-time cube operations depends on many factors, for example, density or sparsity of a cube. Hence, we propose a characterization of structures within space-time cubes, which allows us to discuss strengths and limitations of operations. We finally review interactive systems that support multiple operations, allowing a user to customize his view on the data. With this framework, we hope to facilitate the description, criticism and comparison of temporal data visualizations, as well as encourage the exploration of new techniques and systems. This paper is an extension ofBach etal.'s 2014 work.",
"",
"We present an alternative method for viewing time-varying volumetric data. We consider such data as a four-dimensional data field, rather than considering space and time as separate entities. If we treat the data in this manner, we can apply high dimensional slicing and projection techniques to generate an image hyperplane. The user is provided with an intuitive user interface to specify arbitrary hyperplanes in 4D, which can be displayed with standard volume rendering techniques. From the volume specification, we are able to extract arbitrary hyperslices, combine slices together into a hyperprojection volume, or apply a 4D raycasting method to generate the same results. In combination with appropriate integration operators and transfer functions, we are able to extract and present different space-time features to the user.",
"Multi-dimensional continuous functions are commonly visualized with 2D slices or topological views. Here, we explore 1D slices as an alternative approach to show such functions. Our goal with 1D slices is to combine the benefits of topological views, that is, screen space efficiency, with those of slices, that is a close resemblance of the underlying function. We compare 1D slices to 2D slices and topological views, first, by looking at their performance with respect to common function analysis tasks. We also demonstrate 3 usage scenarios: the 2D sinc function, neural network regression, and optimization traces. Based on this evaluation, we characterize the advantages and drawbacks of each of these approaches, and show how interaction can be used to overcome some of the shortcomings.",
"HyperSlice is a new method for the visualization of scalar functions of many variables. With this method the multi-dimensional function is presented in a simple and easy to understand way in which all dimensions are treated identically. The central concept is the representation of a multi-dimensional function as a matrix of orthogonal two-dimensional slices. These two-dimensional slices lend themselves very well to interaction via direct manipulation, due to a one to one relation between screen space and variable space. Several interaction techniques, for navigation, the location of maxima, and the use of user-defined paths, are presented.",
"During the development of car engines, regression models that are based on machine learning techniques are increasingly important for tasks which require a prediction of results in real-time. While the validation of a model is a key part of its identification process, existing computation- or visualization-based techniques do not adequately support all aspects of model validation. The main contribution of this paper is an interactive approach called HyperMoVal that is designed to support multiple tasks related to model validation: 1) comparing known and predicted results, 2) analyzing regions with a bad fit, 3) assessing the physical plausibility of models also outside regions covered by validation data, and 4) comparing multiple models. The key idea is to visually relate one or more n-dimensional scalar functions to known validation data within a combined visualization. HyperMoVal lays out multiple 2D and 3D sub-projections of the n-dimensional function space around a focal point. We describe how linking HyperMoVal to other views further extends the possibilities for model validation. Based on this integration, we discuss steps towards supporting the entire workflow of identifying regression models. An evaluation illustrates a typical workflow in the application context of car-engine design and reports general feedback of domain experts and users of our approach. These results indicate that our approach significantly accelerates the identification of regression models and increases the confidence in the overall engineering process.",
"The described locking mechanism permits two or more rods to be quickly fitted together and adjusted to a fixed, predetermined length. The locking mechanism is of a type such that the rotation of the rods by approximately 45 DEG -90 DEG automatically fixes the rods in place."
]
} |
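A hedged sketch of the "linked views as orthogonal slices" idea from the abstract above: if the scale-space space-time cube is stored as a 3D array indexed by (temporal scale, spatial scale, time) — an assumed layout, not the paper's data structure — then each linked view is just a 2D slice with one axis fixed.

```python
# Orthogonal slices through a toy (temporal scale, spatial scale, time) cube.
import numpy as np

cube = np.random.default_rng(1).random((8, 8, 100))  # placeholder data

def slice_cube(cube, axis, index):
    """Fix one axis; the returned 2D array is what a linked view would show."""
    return np.take(cube, index, axis=axis)

time_vs_spatial = slice_cube(cube, axis=0, index=3)   # fixed temporal scale
time_vs_temporal = slice_cube(cube, axis=1, index=5)  # fixed spatial scale
scale_vs_scale = slice_cube(cube, axis=2, index=42)   # fixed time step
print(time_vs_spatial.shape, scale_vs_scale.shape)    # (8, 100) (8, 8)
```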
1907.09939 | 2963333474 | Understanding large amounts of spatiotemporal data from particle-based simulations, such as molecular dynamics, often relies on the computation and analysis of aggregate measures. These, however, by virtue of aggregation, hide structural information about the space time localization of the studied phenomena. This leads to degenerate cases where the measures fail to capture distinct behaviour. In order to drill into these aggregate values, we propose a multi-scale visual exploration technique. Our novel representation, based on partial domain aggregation, enables the construction of a continuous scale-space for discrete datasets and the simultaneous exploration of scales in both space and time. We link these two scale-spaces in a scale-space space-time cube and model linked views as orthogonal slices through this cube, thus enabling the rapid identification of spatio-temporal patterns at multiple scales. To demonstrate the effectiveness of our approach, we showcase an advanced exploration of a protein-ligand simulation. | Space reformation techniques are valuable tools for gaining insight into the spatial structure of the data. Recent publications, MotionRugs @cite_53 and Dynamic Volume Lines @cite_1 , present methods for a decomposition using a space-filling curve. These works transform continuous volume data into one-dimensional representations by following a space-filling curve through the volume. As this kind of space reformation suffers from delocalization, where points close to each other can end up far apart in the representation, @cite_30 tried to overcome this limitation by context-based space-filling curves. A space-filling curve is not applicable to our case as it does not provide the desired aggregation required for the statistical treatment of particle data. Several works address space reformation techniques for volume data, of which we highlight two: a general approach to volume transformation by @cite_6 , via the use of spatial transfer functions for 3D volume warping, and a curved planar reformation by @cite_17 , which was used in medical visualization. For more examples, @cite_38 provide a survey of flattening-based techniques in medical visualization. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_53",
"@cite_1",
"@cite_6",
"@cite_17"
],
"mid": [
"2146432373",
"2860633713",
"2888207115",
"2888322672",
"2159162381",
"2119570434"
],
"abstract": [
"A context-based scanning technique for images is presented. An image is scanned along a context-based space filling curve that is computed so as to exploit inherent coherence in the image. The resulting one-dimensi onal representation of the image has improved autocorrelation compared with universal scans such as the PeanoHilbert space filling curve. An efficient algorithm for computing context-based space filling curves is presented. We also discuss the potential of improved autocorrelation of context-based space filling curves for image and video lossless compression.",
"",
"Understanding the movement patterns of collectives, such as flocks of birds or fish swarms, is an interesting open research question. The collectives are driven by mutual objectives or react to individual direction changes and external influence factors and stimuli. The challenge in visualizing collective movement data is to show space and time of hundreds of movements at the same time to enable the detection of spatiotemporal patterns. In this paper, we propose MotionRugs, a novel space efficient technique for visualizing moving groups of entities. Building upon established space-partitioning strategies, our approach reduces the spatial dimensions in each time step to a one-dimensional ordered representation of the individual entities. By design, MotionRugs provides an overlap-free, compact overview of the development of group movements over time and thus, enables analysts to visually identify and explore group-specific temporal patterns. We demonstrate the usefulness of our approach in the field of fish swarm analysis and report on initial feedback of domain experts from the field of collective behavior.",
"The comparison of many members of an ensemble is difficult, tedious, and error-prone, which is aggravated by often just subtle differences. In this paper, we introduce Dynamic Volume Lines for the interactive visual analysis and comparison of sets of 3D volumes. Each volume is linearized along a Hilbert space-filling curve into a 1D Hilbert line plot, which depicts the intensities over the Hilbert indices. We present a nonlinear scaling of these 1D Hilbert line plots based on the intensity variations in the ensemble of 3D volumes, which enables a more effective use of the available screen space. The nonlinear scaling builds the basis for our interactive visualization techniques. An interactive histogram heatmap of the intensity frequencies serves as overview visualization. When zooming in, the frequencies are replaced by detailed 1D Hilbert line plots and optional functional boxplots. To focus on important regions of the volume ensemble, nonlinear scaling is incorporated into the plots. An interactive scaling widget depicts the local ensemble variations. Our brushing and linking interface reveals, for example, regions with a high ensemble variation by showing the affected voxels in a 3D spatial view. We show the applicability of our concepts using two case studies on ensembles of 3D volumes resulting from tomographic reconstruction. In the first case study, we evaluate an artificial specimen from simulated industrial 3D X-ray computed tomography (XCT). In the second case study, a real-world XCT foam specimen is investigated. Our results show that Dynamic Volume Lines can identify regions with high local intensity variations, allowing the user to draw conclusions, for example, about the choice of reconstruction parameters. Furthermore, it is possible to detect ring artifacts in reconstructions volumes.",
"In this paper, we introduce the concept of spatial transfer functions as a unified approach to volume modeling and animation. A spatial transfer function is a function that defines the geometrical transformation of a scalar field in space, and is a generalization and abstraction of a variety of deformation methods. It facilitates a field based representation, and can thus be embedded into a volumetric scene graph under the algebraic framework of constructive volume geometry. We show that when spatial transfer functions are treated as spatial objects, constructive operations and conventional transfer functions can be applied to such spatial objects. We demonstrate spatial transfer functions in action with the aid of a collection of examples in volume visualization, sweeping, deformation and animation. In association with these examples, we describe methods for modeling and realizing spatial transfer functions, including simple procedural functions, operational decomposition of complex functions, large scale domain decomposition and temporal spatial transfer functions. We also discuss the implementation of spatial transfer functions in the vlib API and our efforts in deploying the technique in volume animation.",
"Traditional volume visualization techniques may provide incomplete clinical information needed for applications in medical visualization. In the area of vascular visualization important features such as the lumen of a diseased vessel segment may not be visible. Curved planar reformation (CPR) has proven to be an acceptable practical solution. Existing CPR techniques, however, still have diagnostically relevant limitations. In this paper, we introduce two advances methods for efficient vessel visualization, based on the concept of CPR. Both methods benefit from relaxation of spatial coherence in favor of improved feature perception. We present a new technique to visualize the interior of a vessel in a single image. A vessel is resampled along a spiral around its central axis. The helical spiral depicts the vessel volume. Furthermore, a method to display an entire vascular tree without mutually occluding vessels is presented. Minimal rotations at the bifurcations avoid occlusions. For each viewing direction the entire vessel structure is visible."
]
} |
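To make the linearization idea used by MotionRugs and Dynamic Volume Lines concrete, here is a small Python sketch in their spirit (not code from either paper): the classic index-to-coordinate mapping of the Hilbert curve is used to unroll a 2D field into a 1D profile, which preserves much — though, as the related work notes, not all — spatial locality.

```python
# Linearize a 2D field along a Hilbert curve (n must be a power of two).
import numpy as np

def d2xy(n, d):
    """Standard Hilbert mapping: curve index d on an n x n grid -> (x, y)."""
    x = y = 0
    t, s = d, 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

n = 8
field = np.random.default_rng(2).random((n, n))      # toy 2D density field
line = np.array([field[y, x] for x, y in (d2xy(n, d) for d in range(n * n))])
print(line.shape)  # (64,): the 1D representation of the field
```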
1907.09939 | 2963333474 | Understanding large amounts of spatiotemporal data from particle-based simulations, such as molecular dynamics, often relies on the computation and analysis of aggregate measures. These, however, by virtue of aggregation, hide structural information about the space time localization of the studied phenomena. This leads to degenerate cases where the measures fail to capture distinct behaviour. In order to drill into these aggregate values, we propose a multi-scale visual exploration technique. Our novel representation, based on partial domain aggregation, enables the construction of a continuous scale-space for discrete datasets and the simultaneous exploration of scales in both space and time. We link these two scale-spaces in a scale-space space-time cube and model linked views as orthogonal slices through this cube, thus enabling the rapid identification of spatio-temporal patterns at multiple scales. To demonstrate the effectiveness of our approach, we showcase an advanced exploration of a protein-ligand simulation. | Substantial work has been done on the visualization of trajectory-based simulation data, including projects such as OVITO @cite_34 , Trillion Particles @cite_23 , Multiscale HIV @cite_13 . These are primarily concerned with the sheer volume of the data and its direct visualization, rather than the computation of derived properties and their temporal analysis. | {
"cite_N": [
"@cite_34",
"@cite_13",
"@cite_23"
],
"mid": [
"2047968138",
"2890964348",
"2125184461"
],
"abstract": [
"The Open Visualization Tool (OVITO) is a new 3D visualization software designed for post-processing atomistic data obtained from molecular dynamics or Monte Carlo simulations. Unique analysis, editing and animations functions are integrated into its easy-to-use graphical user interface. The software is written in object-oriented C++, controllable via Python scripts and easily extendable through a plug-in interface. It is distributed as open-source software and can be downloaded from the website http: ovito.sourceforge.net .",
"Abstract We provide a high-level survey of multiscale molecular visualization techniques, with a focus on application-domain questions, challenges, and tasks. We provide a general introduction to molecular visualization basics and describe a number of domain-specific tasks that drive this work. These tasks, in turn, serve as the general structure of the following survey. First, we discuss methods that support the visual analysis of molecular dynamics simulations. We discuss, in particular, visual abstraction and temporal aggregation. In the second part, we survey multiscale approaches that support the design, analysis, and manipulation of DNA nanostructures and related concepts for abstraction, scale transition, scale-dependent modeling, and navigation of the resulting abstraction spaces. In the third part of the survey, we showcase approaches that support interactive exploration within large structural biology assemblies up to the size of bacterial cells. We describe fundamental rendering techniques as well as approaches for element instantiation, visibility management, visual guidance, camera control, and support of depth perception. We close the survey with a brief listing of important tools that implement many of the discussed approaches and a conclusion that provides some research challenges in the field.",
"Petascale plasma physics simulations have recently entered the regime of simulating trillions of particles. These unprecedented simulations generate massive amounts of data, posing significant challenges in storage, analysis, and visualization. In this paper, we present parallel I O, analysis, and visualization results from a VPIC trillion particle simulation running on 120,000 cores, which produces 30TB of data for a single timestep. We demonstrate the successful application of H5Part, a particle data extension of parallel HDF5, for writing the dataset at a significant fraction of system peak I O rates. To enable efficient analysis, we develop hybrid parallel FastQuery to index and query data using multi-core CPUs on distributed memory hardware. We show good scalability results for the FastQuery implementation using up to 10,000 cores. Finally, we apply this indexing query-driven approach to facilitate the first-ever analysis and visualization of the trillion particle dataset."
]
} |
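The trillion-particle work above relies on index- and query-driven selection; as a loose, hedged analogue at toy scale (nothing like the H5Part/FastQuery internals), attribute-range queries over a particle table reduce to compound boolean masks:

```python
# Query-driven selection over a toy particle table (positions plus energy).
import numpy as np

rng = np.random.default_rng(4)
particles = {"pos": rng.random((100_000, 3)), "energy": rng.random(100_000)}

# "energy > 0.99 AND x < 0.1" expressed as a compound boolean index
mask = (particles["energy"] > 0.99) & (particles["pos"][:, 0] < 0.1)
selected = particles["pos"][mask]
print(mask.sum(), selected.shape)  # number of hits and their coordinates
```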
1907.09939 | 2963333474 | Understanding large amounts of spatiotemporal data from particle-based simulations, such as molecular dynamics, often relies on the computation and analysis of aggregate measures. These, however, by virtue of aggregation, hide structural information about the space time localization of the studied phenomena. This leads to degenerate cases where the measures fail to capture distinct behaviour. In order to drill into these aggregate values, we propose a multi-scale visual exploration technique. Our novel representation, based on partial domain aggregation, enables the construction of a continuous scale-space for discrete datasets and the simultaneous exploration of scales in both space and time. We link these two scale-spaces in a scale-space space-time cube and model linked views as orthogonal slices through this cube, thus enabling the rapid identification of spatio-temporal patterns at multiple scales. To demonstrate the effectiveness of our approach, we showcase an advanced exploration of a protein-ligand simulation. | Focusing on trajectories, hierarchical particle grouping for large datasets has been done by @cite_33 . @cite_43 extract prominent trajectories from large particle data. Further work presents a specialized tool for exploring Monte Carlo simulations of photo-voltaic cells. | {
"cite_N": [
"@cite_43",
"@cite_33"
],
"mid": [
"2086600232",
"1967509282"
],
"abstract": [
"An effective means for flow visualization is the depiction of particle trajectories. When rendering large amounts of these pathlines, standard visualization techniques suffer from several weaknesses, ranging from ambiguous depth perception to high geometrical complexity and decreased interactivity. This paper addresses these problems by choosing a novel approach to pathline visualization in 3D space, which we call Virtual Tubelets. It employs billboarding techniques in combination with suitable textures to create the illusion of three-dimensional tubes, which efficiently depict the particles' trajectories, while still maintaining interactive frame rates. Certain issues concerning virtual environments and immersive displays with multiple projection screens are resolved by choosing an appropriate orientation for the billboards. The use of modern, programmable graphics hardware allows for an additional speed-up of the rendering process and a further improvement of the image quality. This results in a nearly perfect illusion of tubular geometry, including plausible intersections and consistent illumination with the rest of the scene. To prove the efficiency of our approach, rendering speed and visual quality of Virtual Tubelets and conventional, polygonal tube renderings are compared.",
"Interactive visualization of large particle sets is required to analyze the complicated structures and formation processes in astrophysical particle simulations. While some research has been done on the development of visualization techniques for steady particle fields, only very few approaches have been proposed to interactively visualize large time-varying fields and their dynamics. Particle trajectories are known to visualize dynamic processes over time, but due to occlusion and visual cluttering such techniques have only been reported for very small particle sets so far. In this paper we present a novel technique to solve these problems, and we demonstrate the potential of our approach for the visual exploration of large astrophysical particle sequences. We present a new hierarchical space-time data structure for particle sets which allows for a scale-space analysis of trajectories in the simulated fields. In combination with visualization techniques that adapt to the respective scales, clusters of particles with homogeneous motion as well as separation and merging regions can be identified effectively. The additional use of mapping functions to modulate the color and size of trajectories allows emphasizing various particle properties like direction, speed, or particle-specific attributes like temperature. Furthermore, tracking of interactively selected particle subsets permits the user to focus on structures of interest."
]
} |
1907.09939 | 2963333474 | Understanding large amounts of spatiotemporal data from particle-based simulations, such as molecular dynamics, often relies on the computation and analysis of aggregate measures. These, however, by virtue of aggregation, hide structural information about the space time localization of the studied phenomena. This leads to degenerate cases where the measures fail to capture distinct behaviour. In order to drill into these aggregate values, we propose a multi-scale visual exploration technique. Our novel representation, based on partial domain aggregation, enables the construction of a continuous scale-space for discrete datasets and the simultaneous exploration of scales in both space and time. We link these two scale-spaces in a scale-space space-time cube and model linked views as orthogonal slices through this cube, thus enabling the rapid identification of spatio-temporal patterns at multiple scales. To demonstrate the effectiveness of our approach, we showcase an advanced exploration of a protein-ligand simulation. | @cite_47 present a visual analytics tool for exploring molecular structures based on the Solvent Accessible Surface (SAS). This is a geometric approach to the extraction of the spatial configuration of a protein-ligand interaction. MoleCollar and Tunnel Heat Map @cite_24 are works by Byška et al. where they reform the properties of a protein tunnel onto the tunnel's centre line and its cross section. In AnimoAminoMiner @cite_31 , the authors focus on amino acids lining the tunnel and their temporal development. | {
"cite_N": [
"@cite_24",
"@cite_47",
"@cite_31"
],
"mid": [
"1879416943",
"2845084436",
"2160715766"
],
"abstract": [
"Studying the characteristics of proteins and their inner void space, including their geometry, physico-chemical properties and dynamics are instrumental for evaluating the reactivity of the protein with other small molecules. The analysis of long simulations of molecular dynamics produces a large number of voids which have to be further explored and evaluated. In this paper we propose three new methods: two of them convey important properties along the long axis of a selected void during molecular dynamics and one provides a comprehensive picture across the void. The first two proposed methods use a specific heat map to present two types of information: an overview of all detected tunnels in the dynamics and their bottleneck width and stability over time, and an overview of a specific tunnel in the dynamics showing the bottleneck position and changes of the tunnel length over time. These methods help to select a small subset of tunnels, which are explored individually and in detail. For this stage we propose the third method, which shows in one static image the temporal evolvement of the shape of the most critical tunnel part, i.e., its bottleneck. This view is enriched with abstract depictions of different physico-chemical properties of the amino acids surrounding the bottleneck. The usefulness of our newly proposed methods is demonstrated on a case study and the feedback from the domain experts is included. The biochemists confirmed that our novel methods help to convey the information about the appearance and properties of tunnels in a very intuitive and comprehensible manner.",
"",
"In this paper we propose a novel method for the interactive exploration of protein tunnels. The basic principle of our approach is that we entirely abstract from the 3D 4D space the simulated phenomenon is embedded in. A complex 3D structure and its curvature information is represented only by a straightened tunnel centerline and its width profile. This representation focuses on a key aspect of the studied geometry and frees up graphical estate to key chemical and physical properties represented by surrounding amino acids. The method shows the detailed tunnel profile and its temporal aggregation. The profile is interactively linked with a visual overview of all amino acids which are lining the tunnel over time. In this overview, each amino acid is represented by a set of colored lines depicting the spatial and temporal impact of the amino acid on the corresponding tunnel. This representation clearly shows the importance of amino acids with respect to selected criteria. It helps the biochemists to select the candidate amino acids for mutation which changes the protein function in a desired way. The AnimoAminoMiner was designed in close cooperation with domain experts. Its usefulness is documented by their feedback and a case study, which are included."
]
} |
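The tunnel-reformation works above map properties of a 3D tunnel onto its centre line. A hedged, minimal Python sketch of that mapping (toy data and a simple nearest-vertex projection; the cited papers use more careful geometry) could look as follows:

```python
# Project 3D samples (e.g., amino acids lining a tunnel) onto the arc length
# of a tunnel centre line by nearest-vertex projection. Purely illustrative.
import numpy as np

centerline = np.stack([np.linspace(0, 10, 50),
                       np.sin(np.linspace(0, 3, 50)),
                       np.zeros(50)], axis=1)            # polyline, (50, 3)
seglen = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
arclen = np.concatenate([[0.0], np.cumsum(seglen)])      # arc length, (50,)

points = np.random.default_rng(3).random((20, 3)) * [10.0, 1.0, 1.0]

def project_to_centerline(points, centerline, arclen):
    """Return, per point, the arc-length position of its nearest vertex."""
    d = np.linalg.norm(points[:, None, :] - centerline[None, :, :], axis=2)
    return arclen[np.argmin(d, axis=1)]

print(project_to_centerline(points, centerline, arclen)[:5])
```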
1907.09834 | 2963026274 | We extend the Mobile Server Problem, introduced in SPAA'17, to a model where k identical mobile resources, here named servers, answer requests appearing at points in the Euclidean space. In order to reduce communication costs, the positions of the servers can be adapted by a limited distance m_s per round for each server. The costs are measured similar to the classical Page Migration Problem, i.e., answering a request induces costs proportional to the distance to the nearest server, and moving a server induces costs proportional to the distance multiplied with a weight D. We show that, in our model, no online algorithm can have a constant competitive ratio, i.e., one which is independent of the input length n, even if an augmented moving distance of (1+ )m_s is allowed for the online algorithm. Therefore we investigate a restriction of the power of the adversary dictating the sequence of requests: We demand locality of requests, i.e., that consecutive requests come from points in the Euclidean space with distance bounded by some constant m_c. We show constant lower bounds on the competitiveness in this setting (independent of n, but dependent on k, m_s and m_c). On the positive side, we present a deterministic online algorithm with bounded competitiveness when augmented moving distance and locality of requests is assumed. Our algorithm simulates any given algorithm for the classical k-Page Migration problem as guidance for its servers and extends it by a greedy move of one server in every round. The resulting competitive ratio is polynomial in the number of servers k, the ratio between m_c and m_s, the inverse of the augmentation factor 1 and the competitive ratio of the simulated k-Page Migration algorithm. | In the classical @math -Server Problem as introduced by @cite_8 , @math identical servers are located in a metric space and requests are answered by moving at least one of the servers to the point of the request. The associated costs are equal to the total distance moved. The authors showed that no online algorithm could be better than @math -competitive on every metric with at least @math points. They stated, as the @math -Server Conjecture, that there is a @math -competitive online algorithm for every metric space. Further, the conjecture was shown to hold for @math and for @math , where @math is the number of points in the metric space. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2082099352"
],
"abstract": [
"Abstract The k-server problem is that of planning the motion of k mobile servers on the vertices of a graph under a sequence of requests for service. Each request consists of the name of a vertex, and is satisfied by placing a server at the requested vertex. The requests must be satisfied in their order of occurrence. The cost of satisfying a sequence of requests is the distance moved by the servers. In this paper we study on-line algorithms for this problem from the competitive point of view. That is, we seek to develop on-line algorithms whose performance on any sequence of requests is as close as possible to the performance of the optimum off-line algorithm. We obtain optimally competitive algorithms for several important cases. Because of the flexibility in choosing the distances in the graph and the number of servers, the k -server problem can be used to model a number of important paging and caching problems. It can also be used as a building block for solving more general problems. We show how server algorithms can be used to solve a seemingly more general class of problems known as task systems ."
]
} |
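A small executable illustration of the model described above, using a folklore instance on the line (not an example from @cite_8 ): the naive greedy rule of always moving the nearest server is not competitive, which is one reason the problem is non-trivial and more careful algorithms, like those in the following records, are needed.

```python
# Greedy k-server on the real line: always move the nearest server.
def greedy_cost(servers, requests):
    servers = list(servers)
    cost = 0
    for r in requests:
        i = min(range(len(servers)), key=lambda j: abs(servers[j] - r))
        cost += abs(servers[i] - r)
        servers[i] = r
    return cost

# Servers start at 0 and 100; requests alternate between 1 and 2.
print(greedy_cost([0, 100], [1, 2] * 500))  # 1000: grows with input length
# An optimal schedule instead moves the far server once (cost 98 + 1 = 99)
# and then answers every request at distance 0, so greedy's ratio diverges.
```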
1907.09834 | 2963026274 | We extend the Mobile Server Problem, introduced in SPAA'17, to a model where k identical mobile resources, here named servers, answer requests appearing at points in the Euclidean space. In order to reduce communication costs, the positions of the servers can be adapted by a limited distance m_s per round for each server. The costs are measured similar to the classical Page Migration Problem, i.e., answering a request induces costs proportional to the distance to the nearest server, and moving a server induces costs proportional to the distance multiplied with a weight D. We show that, in our model, no online algorithm can have a constant competitive ratio, i.e., one which is independent of the input length n, even if an augmented moving distance of (1+ )m_s is allowed for the online algorithm. Therefore we investigate a restriction of the power of the adversary dictating the sequence of requests: We demand locality of requests, i.e., that consecutive requests come from points in the Euclidean space with distance bounded by some constant m_c. We show constant lower bounds on the competitiveness in this setting (independent of n, but dependent on k, m_s and m_c). On the positive side, we present a deterministic online algorithm with bounded competitiveness when augmented moving distance and locality of requests is assumed. Our algorithm simulates any given algorithm for the classical k-Page Migration problem as guidance for its servers and extends it by a greedy move of one server in every round. The resulting competitive ratio is polynomial in the number of servers k, the ratio between m_c and m_s, the inverse of the augmentation factor 1 and the competitive ratio of the simulated k-Page Migration algorithm. | Since its introduction, many algorithms have been designed for special cases of the problem. Most notable is the Double-Coverage Algorithm @cite_10 , which is @math -competitive on trees. For general metrics, the best known result is the Work-Function Algorithm, which is shown to be @math -competitive @cite_3 . Although this algorithm seems generally inefficient in terms of runtime and memory, there have been studies showing that an efficient implementation of this algorithm is indeed possible @cite_9 @cite_15 . It was also shown that the algorithm has an optimal competitive ratio of @math on line and star metrics, as well as on metrics with @math points @cite_11 . | {
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_15",
"@cite_10",
"@cite_11"
],
"mid": [
"2113692056",
"2069255833",
"",
"2089998595",
"2002247760"
],
"abstract": [
"This paper deals with the work function algorithm (WFA) for solving the on-line k-server problem. The paper addresses some practical aspects of the WFA, such as its efficient implementation and its true quality of serving. First, an implementation of the WFA is proposed, which is based on network flows, and which reduces each step of the WFA to only one minimal-cost maximal flow problem instance. Next, it is explained how the proposed implementation can further be simplified if the involved metric space is finite. Also, it is described how actual computing of optimal flows can be speeded up by taking into account special properties of the involved networks. Some experiments based on the proposed implementation and improvements are presented, where actual serving costs of the WFA have been measured on very large problem instances and compared with costs of other algorithms. Finally, suitability of the WFA for solving real-life problems is discussed.",
"We prove that the work function algorithm for the k -server problem has a competitive ratio at most 2 k −1. [1988] conjectured that the competitive ratio for the k -server problem is exactly k (it is trivially at least k ); previously the best-known upper bound was exponential in k . Our proof involves three crucial ingredients: A quasiconvexity property of work functions, a duality lemma that uses quasiconvexity to characterize the configuration that achieve maximum increase of the work function, and a potential function that exploits the duality lemma.",
"",
"In the k-server problem, one must choose how k mobile servers will serve each of a sequence of requests, making decisions in an online manner. An optimal deterministic online strategy is exhibited when the requests fall on the real line. For the weighted-cache problem, in which the cost of moving to x from any other point is @math , the weight of x, an optimal deterministic algorithm is also provided. The nonexistence of competitive algorithms for the asymmetric two-server problem and of memoryless algorithms for the weighted-cache problem is proved. A fast algorithm for oflline computing of an optimal schedule is given, and it is shown that finding an optimal offline schedule is at least as hard as the assignment problem.",
"The k-server problem is one of the most fundamental online problems. The problem is to schedule k mobile servers to visit a sequence of points in a metric space with minimum total mileage. The k-server conjecture of Manasse, McGeogh, and Sleator states that there exists a k-competitive online algorithm. The conjecture has been open for over 15 years. The top candidate online algorithm for settling this conjecture is the work function algorithm (WFA) which was shown to have competitive ratio at most 2k - 1. In this paper, we lend support to the conjecture that WFA is in fact k-competitive by proving that it achieves this ratio in several special metric spaces: the line, the star, and all metric spaces with k + 2 points."
]
} |
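For intuition, here is a hedged Python sketch of the Double-Coverage rule on the real line, following the textbook description (if the request lies between two adjacent servers, both move toward it at equal speed until one arrives; otherwise only the nearest server moves); it is illustrative only, not the cited implementation. On the folklore instance from the previous sketch it pays a bounded total cost where greedy paid proportionally to the input length.

```python
# Double Coverage for k servers on the line.
def double_coverage(servers, requests):
    servers = sorted(servers)
    cost = 0.0
    for r in requests:
        if r <= servers[0]:                    # request left of all servers
            cost += servers[0] - r
            servers[0] = r
        elif r >= servers[-1]:                 # request right of all servers
            cost += r - servers[-1]
            servers[-1] = r
        else:                                  # request between two servers
            i = max(j for j in range(len(servers) - 1) if servers[j] <= r)
            d = min(r - servers[i], servers[i + 1] - r)
            servers[i] += d                    # equal-speed move of both;
            servers[i + 1] -= d                # the nearer one reaches r
            cost += 2 * d
        servers.sort()
    return cost

print(double_coverage([0, 100], [1, 2] * 500))  # ~300, independent of length
```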
1907.09834 | 2963026274 | We extend the Mobile Server Problem, introduced in SPAA'17, to a model where k identical mobile resources, here named servers, answer requests appearing at points in the Euclidean space. In order to reduce communication costs, the positions of the servers can be adapted by a limited distance m_s per round for each server. The costs are measured similar to the classical Page Migration Problem, i.e., answering a request induces costs proportional to the distance to the nearest server, and moving a server induces costs proportional to the distance multiplied with a weight D. We show that, in our model, no online algorithm can have a constant competitive ratio, i.e., one which is independent of the input length n, even if an augmented moving distance of (1+ )m_s is allowed for the online algorithm. Therefore we investigate a restriction of the power of the adversary dictating the sequence of requests: We demand locality of requests, i.e., that consecutive requests come from points in the Euclidean space with distance bounded by some constant m_c. We show constant lower bounds on the competitiveness in this setting (independent of n, but dependent on k, m_s and m_c). On the positive side, we present a deterministic online algorithm with bounded competitiveness when augmented moving distance and locality of requests is assumed. Our algorithm simulates any given algorithm for the classical k-Page Migration problem as guidance for its servers and extends it by a greedy move of one server in every round. The resulting competitive ratio is polynomial in the number of servers k, the ratio between m_c and m_s, the inverse of the augmentation factor 1 and the competitive ratio of the simulated k-Page Migration algorithm. | The study of randomized online algorithms was initiated by @cite_4 , who gave a @math -competitive algorithm for the complete graph. It is speculated that this factor can be obtained for all metrics; however, the question is still open. For general metrics, the first algorithm with a polylogarithmic competitive ratio was an @math -competitive algorithm by @cite_14 . This was recently improved by @cite_7 , who gave an @math -competitive algorithm for HSTs which can be turned into an @math -competitive one for general metrics by a dynamic embedding of general metrics into HSTs @cite_13 . | {
"cite_N": [
"@cite_13",
"@cite_14",
"@cite_4",
"@cite_7"
],
"mid": [
"2963064374",
"2113184918",
"2054033098",
"2963331653"
],
"abstract": [
"We exhibit a poly(log k)-competitive randomized algorithm for the k-server problem on any metric space. The best previous result independent of the geometry of the underlying metric space is the 2k–1 competitive ratio established for the deterministic work function algorithm by Koutsoupias and Papadimitriou (1995). Even for the special case when the underlying metric space is the real line, the best known competitive ratio was k. Since deterministic algorithms can do no better than k on any metric space with at least k+1 points, this establishes that for every metric space on which the problem is non-trivial, randomized algorithms give an exponential improvement over deterministic algorithms. Our algorithm maintains an approximation of the underlying metric space by a distribution over HSTs. The granularity and accuracy of the approximation is adjusted dynamically according to the aggregate behavior of the HST algorithms. In short: We try to obtain more accurate approximations at the locations and scales where the gactionh is happening. Thus a crucial component of our approach is the O((log k)^2)-competitive randomized algorithm for HSTs obtained in our previous work with Bubeck, Cohen, Lee, and Ma.dry, and its \"multiscale information theory\" perspective.",
"We give the first polylogarithmic-competitive randomized online algorithm for the k-server problem on an arbitrary finite metric space. In particular, our algorithm achieves a competitive ratio of O(log3 n log2 k) for any metric space on n points. Our algorithm improves upon the deterministic (2k-1)-competitive algorithm of Koutsoupias and Papadimitriou [Koutsoupias and Papadimitriou 1995] for a wide range of n.",
"The paging problem is that of deciding which pages to keep in a memory of k pages in order to minimize the number of page faults. We develop the marking algorithm, a randomized on-line algorithm for the paging problem. We prove that its expected cost on any sequence of requests is within a factor of 2Hk of optimum. (Where Hk is the kth harmonic number, which is roughly In k.) The best such factor that can be achieved is Hk. This is in contrast to deterministic algorithms, which cannot be guaranteed to be within a factor smaller than k of optimum. An alternative to comparing an on-line algorithm with the optimum off-line algorithm is the idea of comparing it to several other on-line algorithms. We have obtained results along these lines for the paging problem. Given a set of on-line algorithms ‘Support was provided by a Weizmann fellowship. ‘Partial support was provided by the International Computer Science Institute, Berkeley, CA, and by NSF Grant CCR-8411954. 3Support was provided by the International Computer Science Institute and operating grant A8092 of the Natural Sciences and Engineering Research Council of Canada. Current",
"We present an O((logk)2)-competitive randomized algorithm for the k-server problem on hierarchically separated trees (HSTs). This is the first o(k)-competitive randomized algorithm for which the competitive ratio is independent of the size of the underlying HST. Our algorithm is designed in the framework of online mirror descent where the mirror map is a multiscale entropy. When combined with Bartal’s static HST embedding reduction, this leads to an O((logk)2 logn)-competitive algorithm on any n-point metric space. We give a new dynamic HST embedding that yields an O((logk)3 logΔ)-competitive algorithm on any metric space where the ratio of the largest to smallest non-zero distance is at most Δ."
]
} |
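The marking algorithm from @cite_4 is simple enough to sketch directly; the following Python version follows the abstract's description (on a fault, evict a uniformly random unmarked page; once every cached page is marked, a new phase begins and all marks are cleared), with the request sequence being an arbitrary toy example.

```python
# Randomized marking algorithm for paging (k-server on a uniform metric).
import random

def marking(k, requests, rng=random.Random(0)):
    cache, marked, faults = set(), set(), 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) >= k:
                if not (cache - marked):   # all pages marked: new phase
                    marked.clear()
                victim = rng.choice(sorted(cache - marked))
                cache.remove(victim)       # evict a random unmarked page
            cache.add(p)
        marked.add(p)                      # requested pages become marked
    return faults

reqs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5] * 3
print(marking(3, reqs))  # expected cost within 2*H_k of optimal (H_k ~ ln k)
```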
1907.09834 | 2963026274 | We extend the Mobile Server Problem, introduced in SPAA'17, to a model where k identical mobile resources, here named servers, answer requests appearing at points in the Euclidean space. In order to reduce communication costs, the positions of the servers can be adapted by a limited distance m_s per round for each server. The costs are measured similar to the classical Page Migration Problem, i.e., answering a request induces costs proportional to the distance to the nearest server, and moving a server induces costs proportional to the distance multiplied with a weight D. We show that, in our model, no online algorithm can have a constant competitive ratio, i.e., one which is independent of the input length n, even if an augmented moving distance of (1+ )m_s is allowed for the online algorithm. Therefore we investigate a restriction of the power of the adversary dictating the sequence of requests: We demand locality of requests, i.e., that consecutive requests come from points in the Euclidean space with distance bounded by some constant m_c. We show constant lower bounds on the competitiveness in this setting (independent of n, but dependent on k, m_s and m_c). On the positive side, we present a deterministic online algorithm with bounded competitiveness when augmented moving distance and locality of requests is assumed. Our algorithm simulates any given algorithm for the classical k-Page Migration problem as guidance for its servers and extends it by a greedy move of one server in every round. The resulting competitive ratio is polynomial in the number of servers k, the ratio between m_c and m_s, the inverse of the augmentation factor 1 and the competitive ratio of the simulated k-Page Migration algorithm. | Regarding the Page Migration Problem @cite_2 (also known as the File Migration Problem), most results focus on online algorithms which handle only a single page. Contrary to the @math -Server Problem, the design of such algorithms is not trivial for the Page Migration Problem. To the best of our knowledge, the current best results are a @math -competitive deterministic algorithm by @cite_0 and a collection of randomized algorithms with a competitive ratio of at most 3 by Jeffrey Westbrook @cite_5 . The most relevant results for our problem are two constructions by @cite_1 , who give both a deterministic and a randomized algorithm which transform a given algorithm for the @math -Server problem into a deterministic or randomized algorithm for the @math -Page Migration Problem. If the given @math -Server algorithm is @math -competitive, the deterministic algorithm is @math -competitive and the randomized algorithm is @math -competitive. Consequently, we use the resulting algorithms as a black box in our constructions. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_1",
"@cite_2"
],
"mid": [
"2508179470",
"2049943172",
"2092340542",
"1486124781"
],
"abstract": [
"In this paper, we construct a deterministic 4-competitive algorithm for the online file migration problem, beating the currently best 20-year old, 4.086-competitive MTLM algorithm by (SODA 1997). Like MTLM, our algorithm also operates in phases, but it adapts their lengths dynamically depending on the geometry of requests seen so far. The improvement was obtained by carefully analyzing a linear model (factor-revealing LP) of a single phase of the algorithm. We also show that if an online algorithm operates in phases of fixed length and the adversary is able to modify the graph between phases, no algorithm can beat the competitive ratio of 4.086.",
"The page migration problem is to manage a globally addressed shared memory in a multiprocessor system. Each physical page of memory is located at a given processor, and memory references to that page by other processors incur a cost proportional to the network distance. At times the page may migrate between processors at cost proportional to the distance times @math , a page size factor. The problem is to schedule movements on-line so that the total cost of memory references is within a constant factor @math of the best off-line schedule. An algorithm that does so is called c-competitive. Black and Sleator gave 3-competitive deterministic on-line algorithms for uniform networks (complete graphs with unit edge lengths) and for trees with arbitrary edge lengths. No good deterministic algorithm is known for general networks with arbitrary edge lengths. Randomized algorithms are presented for the migration problem that are both simple and better than 3-competitive against an oblivious adversary. An algorithm for uniform graphs is given. It is approximately 2.28-competitive as @math grows large. A second, more powerful algorithm that works on graphs with arbitrary edge distances is also given. This algorithm is approximately 2.62-competitive (or, 1 plus the golden ratio) for large @math . Both these algorithms use random bits only during an initialization phase, and from then on run deterministically. The competitiveness of a very simple coin-flipping algorithm is also examined.",
"This paper is concerned with the page migration (or file migration) problem (Black and Sleator, Technical Report CMU-CS-89-201, Department of Computer Science, Carnegie-Mellon University, 1989) as part of a large class of on-line problems. The page migration problem deals with the management of pages residing in a network of processors. In the classical problem there is only one copy of each page which is accessed by different processors over time. The page is allowed to be migrated between processors. However a migration incurs higher communication cost than an access (proportionally to the page size). The problem is that of deciding when and where to migrate the page in order to lower access costs. A more general setting is the k-page migration problem where we wish to maintain k copies of the page. The page migration problems are concerned with a dilemma common to many on-line problems: determining when it is beneficial to make configuration changes. We deal with the relaxed task systems model which captures a large class of problems of this type, that can be described as the generalization of some original task system problem (, J. ACM 39(4) (1992) 745-763). Given a c-competitive algorithm for a task system we show how to obtain a deterministic O(c2) and randomized O(c) competitive algorithms for the corresponding relaxed task system. The result implies deterministic algorithms for k-page migration by using k-server (, J. Algorithms 11(2) (1990) 208-230) algorithms, and for network leasing by using generalized Steiner tree algorithms (, Proc 7th Ann. ACM-SIAM Symp. on Discrete Algorithms, January 1996, pp. 68-74), as well as providing solutions for natural generalizations of other problems (e.g. storage rearrangement (, Proc. 36th Ann. IEEE Symp. on Foundations of Computer Science, October 1995, pp. 392-403)). We further study some special cases of the k-page migration problem and get optimal deterministic algorithms. For the classical page migration problem we present a deterministic algorithm that achieves a competitive ratio of 4:086, improving upon the previously best competitive ratio of 7 (, Proc. 25th ACM Symp. on Theory of Computing, May 1993, pp. 164-173). (The current lower bound on the problem is 3:148 (, J. Algorithms 24(1) (1997) 124-157). Copyright 2001 Elsevier Science B.V.",
"In this paper we consider problems that arise in a shared memory multiprocessor in which memory is physically distributed among a number of memories local to each processor or cluster of processors. The issue we address is that of deciding which local memories should contain copies of pages of data. In the migration problem we operate under the constraint that a page must be kept in exactly one local memory. In the replication problem we allow a page to be kept in any subset of the local memories, but do not allow a local memory to drop a page once it has it. For interconnection topologies that are complete graphs, or trees we have obtained efficient on-line algorithms for these problems. Our migration algorithms also extend to interconnections that are products of these topologies (e.g. a hypercube is a product of simple trees). An on-line algorithm decides how to process each request (which is a read or write request from a processor to a page) without knowing future requests. Our algorithms are also said to be competitive because their performance is within a small constant factor of that of any other algorithm, including algorithms that make use of knowledge of future requests. This research was supported in part by the National Science Foundation under grant CCR8658139. This research was sponsored by the Defense Advanced Research Projects Agency (DOD), monitored by the Space and Naval Warfare Systems Command under Contract N00039-87-C-0251. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government."
]
} |
1901.09513 | 2913397231 | Underwater robots are subject to position drift due to the effect of ocean currents and the lack of accurate localisation while submerged. We are interested in exploiting such position drift to estimate the ocean current in the surrounding area, thereby assisting navigation and planning. We present a Gaussian process (GP)-based expectation-maximisation (EM) algorithm that estimates the underlying ocean current using sparse GPS data obtained on the surface and dead-reckoned position estimates. We first develop a specialised GP regression scheme that exploits the incompressibility of ocean currents to counteract the underdetermined nature of the problem. We then use the proposed regression scheme in an EM algorithm that estimates the best-fitting ocean current in between each GPS fix. The proposed algorithm is validated in simulation and on a real dataset, and is shown to be capable of reconstructing the underlying ocean current field. We expect to use this algorithm to close the loop between planning and estimation for underwater navigation in unknown ocean currents. | To estimate ocean currents online, one approach is to consider current as a low-frequency disturbance and then apply an extended Kalman filter (EKF) @cite_7 or nonlinear observer @cite_30 in conjunction with acoustic sensors. However, modelling current as a temporal phenomenon clearly overlooks its spatial structure, and acoustic sensors typically require a stationary reference (e.g., the seabed) @cite_13 . | {
"cite_N": [
"@cite_30",
"@cite_13",
"@cite_7"
],
"mid": [
"2438306175",
"1969142081",
"2267281579"
],
"abstract": [
"As applications for autonomous ocean vehicles expand into more dynamic and constrained environments, such as shallow, coastal areas, the benefits of using more precise dynamic model for control and estimation become more compelling. This paper presents a nonlinear observer for current estimation based on AUV dynamic model. Here, AUV dynamic model in currents is taken into consideration. Motivated by the design method of high-gain observer, we take the current disturbances as the uncertainties of the vehicle dynamic system and design the observer gain matrix with the goal of making the observer robust to the effect of current disturbances. The nonlinear observer estimates vehicle's relative velocity firstly; current velocity is further calculated in an indirect way. The proposed current estimation method is validated by numerical simulation.",
"Autonomous underwater vehicle (AUV) navigation and localization in underwater environments is particularly challenging due to the rapid attenuation of Global Positioning System (GPS) and radio-frequency signals. Underwater communications are low bandwidth and unreliable, and there is no access to a global positioning system. Past approaches to solve the AUV localization problem have employed expensive inertial sensors, used installed beacons in the region of interest, or required periodic surfacing of the AUV. While these methods are useful, their performance is fundamentally limited. Advances in underwater communications and the application of simultaneous localization and mapping (SLAM) technology to the underwater realm have yielded new possibilities in the field. This paper presents a review of the state of the art of AUV navigation and localization, as well as a description of some of the more commonly used methods. In addition, we highlight areas of future research potential.",
"Survey-class autonomous underwater vehicles (AUVs) typically rely on Doppler Velocity Logs (DVL) for precision localization near the seafloor. In cases where the seafloor depth is greater than the DVL bottom-lock range, localizing between the surface and the seafloor presents a localization problem since both GPS and DVL observations are unavailable in the mid-water column. This work proposes a solution to this problem that exploits the fact that current profile layers of the water column are near constant over short time scales (in the scale of minutes). Using observations of these currents obtained with the Acoustic Doppler Current Profiler mode of the DVL during descent, along with data from other sensors, the method discussed herein constrains position error. The method is validated using field data from the Sirius AUV coupled with view-based Simultaneous Localization and Mapping (SLAM) and on descents up to 3km deep with the Sentry AUV."
]
} |
1901.09513 | 2913397231 | Underwater robots are subject to position drift due to the effect of ocean currents and the lack of accurate localisation while submerged. We are interested in exploiting such position drift to estimate the ocean current in the surrounding area, thereby assisting navigation and planning. We present a Gaussian process (GP)-based expectation-maximisation (EM) algorithm that estimates the underlying ocean current using sparse GPS data obtained on the surface and dead-reckoned position estimates. We first develop a specialised GP regression scheme that exploits the incompressibility of ocean currents to counteract the underdetermined nature of the problem. We then use the proposed regression scheme in an EM algorithm that estimates the best-fitting ocean current in between each GPS fix. The proposed algorithm is validated in simulation and on a real dataset, and is shown to be capable of reconstructing the underlying ocean current field. We expect to use this algorithm to close the loop between planning and estimation for underwater navigation in unknown ocean currents. | An approach that does consider the spatial nature of the problem is presented in @cite_0 . The authors examine the feasibility of ocean current estimation through simply calculating the average current velocity by dividing the position drift by time. Unsurprisingly, the estimate is increasingly unreliable as distance between diving and surfacing locations grows, and no predictive capability is provided. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2151464548"
],
"abstract": [
"We consider the potential for making current measurements from gliders, and present data from a deployment in early 2007 of 1000 m Slocum electric gliders in the North West Mediterranean Sea. Three types of current measurement are considered. First, by comparing the difference between successive GPS positions, obtained when the glider surfaces, and dead-reckoned displacements when the glider is submerged, it is possible to estimate depth averaged horizontal currents and also surface drift. Second, our gliders were equipped with Conductivity Temperature Depth sensors, which provided data used to calculate geostrophic horizontal velocity. Third, from the measured rate of change of pressure it is possible to quantify the vertical water velocity as the difference between the measurement and the expected vertical motion. The latter two both require a model of the glider motion, which we outline. Horizontal currents of the order of 30 cm s were measured in the westward flowing Northern Current off the south coast of France, with a width and transport comparable with previous observations using different technologies. The accuracy of the depth-averaged currents in magnitude and direction was limited by the accuracy of the measured heading of the glider. Measurements of vertical velocity were made during a time of active convection when the magnitude of the vertical motion was up to 10 cm s. We estimate that the accuracy of the calculated velocity was of the order of 1 cm s."
]
} |
1901.09482 | 2913508739 | What is the current state-of-the-art for image restoration and enhancement applied to degraded images acquired under less than ideal circumstances? Can the application of such algorithms as a pre-processing step improve image interpretability for manual analysis or automatic visual recognition to classify scene content? While there have been important advances in the area of computational photography to restore or enhance the visual quality of an image, the capabilities of such techniques have not always translated in a useful way to visual recognition tasks. Consequently, there is a pressing need for the development of algorithms that are designed for the joint problem of improving visual appearance and recognition, which will be an enabling factor for the deployment of visual recognition tools in many real-world scenarios. To address this, we introduce the UG^2 dataset as a large-scale benchmark composed of video imagery captured under challenging conditions, and two enhancement tasks designed to test algorithmic impact on visual quality and automatic object recognition. Furthermore, we propose a set of metrics to evaluate the joint improvement of such tasks as well as individual algorithmic advances, including a novel psychophysics-based evaluation regime for human assessment and a realistic set of quantitative measures for object recognition performance. We introduce six new algorithms for image restoration or enhancement, which were created as part of the IARPA-sponsored UG^2 Challenge workshop held at CVPR 2018. Under the proposed evaluation regime, we present an in-depth analysis of these algorithms and a host of deep learning-based and classic baseline approaches. From the observed results, it is evident that we are in the early days of building a bridge between computational photography and visual recognition, leaving many opportunities for innovation in this area. | The areas of image restoration and enhancement have a long history in computational photography, with associated benchmark datasets that are mainly used for the qualitative evaluation of image appearance. These include very small test image sets such as Set5 @cite_130 and Set14 @cite_44 , the set of blurred images introduced by Levin et al. @cite_31 , and the DIVerse 2K resolution image dataset (DIV2K) @cite_55 designed for super-resolution benchmarking. Datasets containing more diverse scene content have been proposed, including Urban100 @cite_105 for enhancement comparisons and LIVE1 @cite_125 for image quality assessment. While not originally designed for computational photography, the Berkeley Segmentation Dataset has been used by itself @cite_105 and in combination with LIVE1 @cite_123 for enhancement work. The popularity of deep learning methods has increased demand for training and testing data, which Su et al. provide as video content for deblurring work @cite_99 . Importantly, none of these datasets were designed to combine image restoration and enhancement with recognition for a unified benchmark. | {
"cite_N": [
"@cite_99",
"@cite_55",
"@cite_125",
"@cite_130",
"@cite_44",
"@cite_31",
"@cite_123",
"@cite_105"
],
"mid": [
"2558246008",
"2741137940",
"2161907179",
"2047920195",
"",
"2138204001",
"2146782367",
"1930824406"
],
"abstract": [
"Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.",
"This paper introduces a novel large dataset for example-based single image super-resolution and studies the state-of-the-art as emerged from the NTIRE 2017 challenge. The challenge is the first challenge of its kind, with 6 competitions, hundreds of participants and tens of proposed solutions. Our newly collected DIVerse 2K resolution image dataset (DIV2K) was employed by the challenge. In our study we compare the solutions from the challenge to a set of representative methods from the literature and evaluate them using diverse measures on our proposed DIV2K dataset. Moreover, we conduct a number of experiments and draw conclusions on several topics of interest. We conclude that the NTIRE 2017 challenge pushes the state-of-the-art in single-image super-resolution, reaching the best results to date on the popular Set5, Set14, B100, Urban100 datasets and on our newly proposed DIV2K.",
"Measurement of visual quality is of fundamental importance for numerous image and video processing applications, where the goal of quality assessment (QA) algorithms is to automatically assess the quality of images or videos in agreement with human quality judgments. Over the years, many researchers have taken different approaches to the problem and have contributed significant research in this area and claim to have made progress in their respective domains. It is important to evaluate the performance of these algorithms in a comparative setting and analyze the strengths and weaknesses of these methods. In this paper, we present results of an extensive subjective quality assessment study in which a total of 779 distorted images were evaluated by about two dozen human subjects. The \"ground truth\" image quality data obtained from about 25 000 individual human quality judgments is used to evaluate the performance of several prominent full-reference image quality assessment algorithms. To the best of our knowledge, apart from video quality studies conducted by the Video Quality Experts Group, the study presented in this paper is the largest subjective image quality study in the literature in terms of number of images, distortion types, and number of human judgments per image. Moreover, we have made the data from the study freely available to the research community . This would allow other researchers to easily report comparative results in the future",
"This paper describes a single-image super-resolution (SR) algorithm based on nonnegative neighbor embedding. It belongs to the family of single-image example-based SR algorithms, since it uses a dictionary of low resolution (LR) and high resolution (HR) trained patch pairs to infer the unknown HR details. Each LR feature vector in the input image is expressed as the weighted combination of its K nearest neighbors in the dictionary; the corresponding HR feature vector is reconstructed under the assumption that the local LR embedding is preserved. Three key aspects are introduced in order to build a low-complexity competitive algorithm: (i) a compact but efficient representation of the patches (feature representation) (ii) an accurate estimation of the patches by their nearest neighbors (weight computation) (iii) a compact and already built (therefore external) dictionary, which allows a one-step upscaling. The neighbor embedding SR algorithm so designed is shown to give good visual results, comparable to other state-of-the-art methods, while presenting an appreciable reduction of the computational time.",
"",
"Blind deconvolution is the recovery of a sharp version of a blurred image when the blur kernel is unknown. Recent algorithms have afforded dramatic progress, yet many aspects of the problem remain challenging and hard to understand. The goal of this paper is to analyze and evaluate recent blind deconvolution algorithms both theoretically and experimentally. We explain the previously reported failure of the naive MAP approach by demonstrating that it mostly favors no-blur explanations. On the other hand we show that since the kernel size is often smaller than the image size a MAP estimation of the kernel alone can be well constrained and accurately recover the true blur. The plethora of recent deconvolution techniques makes an experimental evaluation on ground-truth data important. We have collected blur data with ground truth and compared recent algorithms under equal settings. Additionally, our data demonstrates that the shift-invariant blur assumption made by most algorithms is often violated.",
"Over the past decade, single image Super-Resolution (SR) research has focused on developing sophisticated image priors, leading to significant advances. Estimating and incorporating the blur model, that relates the high-res and low-res images, has received much less attention, however. In particular, the reconstruction constraint, namely that the blurred and down sampled high-res output should approximately equal the low-res input image, has been either ignored or applied with default fixed blur models. In this work, we examine the relative importance of the image prior and the reconstruction constraint. First, we show that an accurate reconstruction constraint combined with a simple gradient regularization achieves SR results almost as good as those of state-of-the-art algorithms with sophisticated image priors. Second, we study both empirically and theoretically the sensitivity of SR algorithms to the blur model assumed in the reconstruction constraint. We find that an accurate blur model is more important than a sophisticated image prior. Finally, using real camera data, we demonstrate that the default blur models of various SR algorithms may differ from the camera blur, typically leading to over-smoothed results. Our findings highlight the importance of accurately estimating camera blur in reconstructing raw lowers images acquired by an actual camera.",
"Self-similarity based super-resolution (SR) algorithms are able to produce visually pleasing results without extensive training on external databases. Such algorithms exploit the statistical prior that patches in a natural image tend to recur within and across scales of the same image. However, the internal dictionary obtained from the given image may not always be sufficiently expressive to cover the textural appearance variations in the scene. In this paper, we extend self-similarity based SR to overcome this drawback. We expand the internal patch search space by allowing geometric variations. We do so by explicitly localizing planes in the scene and using the detected perspective geometry to guide the patch search process. We also incorporate additional affine transformations to accommodate local shape variations. We propose a compositional model to simultaneously handle both types of transformations. We extensively evaluate the performance in both urban and natural scenes. Even without using any external training databases, we achieve significantly superior results on urban scenes, while maintaining comparable performance on natural scenes as other state-of-the-art SR algorithms."
]
} |
1901.09482 | 2913508739 | What is the current state-of-the-art for image restoration and enhancement applied to degraded images acquired under less than ideal circumstances? Can the application of such algorithms as a pre-processing step improve image interpretability for manual analysis or automatic visual recognition to classify scene content? While there have been important advances in the area of computational photography to restore or enhance the visual quality of an image, the capabilities of such techniques have not always translated in a useful way to visual recognition tasks. Consequently, there is a pressing need for the development of algorithms that are designed for the joint problem of improving visual appearance and recognition, which will be an enabling factor for the deployment of visual recognition tools in many real-world scenarios. To address this, we introduce the UG^2 dataset as a large-scale benchmark composed of video imagery captured under challenging conditions, and two enhancement tasks designed to test algorithmic impact on visual quality and automatic object recognition. Furthermore, we propose a set of metrics to evaluate the joint improvement of such tasks as well as individual algorithmic advances, including a novel psychophysics-based evaluation regime for human assessment and a realistic set of quantitative measures for object recognition performance. We introduce six new algorithms for image restoration or enhancement, which were created as part of the IARPA-sponsored UG^2 Challenge workshop held at CVPR 2018. Under the proposed evaluation regime, we present an in-depth analysis of these algorithms and a host of deep learning-based and classic baseline approaches. From the observed results, it is evident that we are in the early days of building a bridge between computational photography and visual recognition, leaving many opportunities for innovation in this area. | With respect to data collected by aerial vehicles, the VIRAT Video Dataset @cite_80 contains "realistic, natural and challenging (in terms of its resolution, background clutter, diversity in scenes)" imagery for event recognition, while the VisDrone2018 Dataset @cite_4 is designed for object detection and tracking. Other datasets including aerial imagery are the UCF Aerial Action Data Set @cite_6 , UCF-ARG @cite_114 , UAV123 @cite_0 , and the multi-purpose dataset introduced by Yao et al. @cite_36 . As with the computational photography datasets, none of these sets have protocols for image restoration and enhancement coupled with object recognition. | {
"cite_N": [
"@cite_4",
"@cite_36",
"@cite_6",
"@cite_0",
"@cite_80",
"@cite_114"
],
"mid": [
"2798799804",
"128019999",
"",
"2518876086",
"2142996775",
""
],
"abstract": [
"In this paper we present a large-scale visual object detection and tracking benchmark, named VisDrone2018, aiming at advancing visual understanding tasks on the drone platform. The images and video sequences in the benchmark were captured over various urban suburban areas of 14 different cities across China from north to south. Specifically, VisDrone2018 consists of 263 video clips and 10,209 images (no overlap with video clips) with rich annotations, including object bounding boxes, object categories, occlusion, truncation ratios, etc. With intensive amount of effort, our benchmark has more than 2.5 million annotated instances in 179,264 images video frames. Being the largest such dataset ever published, the benchmark enables extensive evaluation and investigation of visual analysis algorithms on the drone platform. In particular, we design four popular tasks with the benchmark, including object detection in images, object detection in videos, single object tracking, and multi-object tracking. All these tasks are extremely challenging in the proposed dataset due to factors such as occlusion, large scale and pose variation, and fast motion. We hope the benchmark largely boost the research and development in visual analysis on drone platforms.",
"This paper presents a large scale general purpose image database with human annotated ground truth. Firstly, an all-in-all labeling framework is proposed to group visual knowledge of three levels: scene level (global geometric description), object level (segmentation, sketch representation, hierarchical decomposition), and low-mid level (2.1D layered representation, object boundary attributes, curve completion, etc.). Much of this data has not appeared in previous databases. In addition, And-Or Graph is used to organize visual elements to facilitate top-down labeling. An annotation tool is developed to realize and integrate all tasks. With this tool, we've been able to create a database consisting of more than 636,748 annotated images and video frames. Lastly, the data is organized into 13 common subsets to serve as benchmarks for diverse evaluation endeavors.",
"",
"In this paper, we propose a new aerial video dataset and benchmark for low altitude UAV target tracking, as well as, a photo-realistic UAV simulator that can be coupled with tracking methods. Our benchmark provides the first evaluation of many state-of-the-art and popular trackers on 123 new and fully annotated HD video sequences captured from a low-altitude aerial perspective. Among the compared trackers, we determine which ones are the most suitable for UAV tracking both in terms of tracking accuracy and run-time. The simulator can be used to evaluate tracking algorithms in real-time scenarios before they are deployed on a UAV “in the field”, as well as, generate synthetic but photo-realistic tracking datasets with automatic ground truth annotations to easily extend existing real-world datasets. Both the benchmark and simulator are made publicly available to the vision community on our website to further research in the area of object tracking from UAVs. (https: ivul.kaust.edu.sa Pages pub-benchmark-simulator-uav.aspx.).",
"We introduce a new large-scale video dataset designed to assess the performance of diverse visual event recognition algorithms with a focus on continuous visual event recognition (CVER) in outdoor areas with wide coverage. Previous datasets for action recognition are unrealistic for real-world surveillance because they consist of short clips showing one action by one individual [15, 8]. Datasets have been developed for movies [11] and sports [12], but, these actions and scene conditions do not apply effectively to surveillance videos. Our dataset consists of many outdoor scenes with actions occurring naturally by non-actors in continuously captured videos of the real world. The dataset includes large numbers of instances for 23 event types distributed throughout 29 hours of video. This data is accompanied by detailed annotations which include both moving object tracks and event examples, which will provide solid basis for large-scale evaluation. Additionally, we propose different types of evaluation modes for visual recognition tasks and evaluation metrics along with our preliminary experimental results. We believe that this dataset will stimulate diverse aspects of computer vision research and help us to advance the CVER tasks in the years ahead.",
""
]
} |
1901.09482 | 2913508739 | What is the current state-of-the-art for image restoration and enhancement applied to degraded images acquired under less than ideal circumstances? Can the application of such algorithms as a pre-processing step improve image interpretability for manual analysis or automatic visual recognition to classify scene content? While there have been important advances in the area of computational photography to restore or enhance the visual quality of an image, the capabilities of such techniques have not always translated in a useful way to visual recognition tasks. Consequently, there is a pressing need for the development of algorithms that are designed for the joint problem of improving visual appearance and recognition, which will be an enabling factor for the deployment of visual recognition tools in many real-world scenarios. To address this, we introduce the UG^2 dataset as a large-scale benchmark composed of video imagery captured under challenging conditions, and two enhancement tasks designed to test algorithmic impact on visual quality and automatic object recognition. Furthermore, we propose a set of metrics to evaluate the joint improvement of such tasks as well as individual algorithmic advances, including a novel psychophysics-based evaluation regime for human assessment and a realistic set of quantitative measures for object recognition performance. We introduce six new algorithms for image restoration or enhancement, which were created as part of the IARPA-sponsored UG^2 Challenge workshop held at CVPR 2018. Under the proposed evaluation regime, we present an in-depth analysis of these algorithms and a host of deep learning-based and classic baseline approaches. From the observed results, it is evident that we are in the early days of building a bridge between computational photography and visual recognition, leaving many opportunities for innovation in this area. | Intuitively, if an image has been corrupted, then employing restoration techniques should improve the performance of recognizing objects in the image. An early attempt at unifying a high-level task like object recognition with a low-level task like deblurring was performed by Zeiler et al. through deconvolutional networks @cite_23 @cite_14 . Similarly, Haris et al. @cite_32 proposed an end-to-end super resolution training procedure that incorporated detection loss as a training objective, obtaining superior object detection results compared to traditional super-resolution methods for a variety of conditions (including additional perturbations on the low resolution images such as the addition of Gaussian noise). | {
"cite_N": [
"@cite_14",
"@cite_32",
"@cite_23"
],
"mid": [
"",
"2794849006",
"2141200610"
],
"abstract": [
"",
"We consider how image super resolution (SR) can contribute to an object detection task in low-resolution images. Intuitively, SR gives a positive impact on the object detection task. While several previous works demonstrated that this intuition is correct, SR and detector are optimized independently in these works. This paper proposes a novel framework to train a deep neural network where the SR sub-network explicitly incorporates a detection loss in its training objective, via a tradeoff with a traditional detection loss. This end-to-end training procedure allows us to train SR preprocessing for any differentiable detector. We demonstrate that our task-driven SR consistently and significantly improves accuracy of an object detector on low-resolution images for a variety of conditions and scaling factors.",
"We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, high-level object parts and complete objects. To build our model we rely on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. This makes it possible to learn multiple layers of representation and we show models with 4 layers, trained on images from the Caltech-101 and 256 datasets. When combined with a standard classifier, features extracted from these models outperform SIFT, as well as representations from other feature learning methods."
]
} |
1901.09482 | 2913508739 | What is the current state-of-the-art for image restoration and enhancement applied to degraded images acquired under less than ideal circumstances? Can the application of such algorithms as a pre-processing step improve image interpretability for manual analysis or automatic visual recognition to classify scene content? While there have been important advances in the area of computational photography to restore or enhance the visual quality of an image, the capabilities of such techniques have not always translated in a useful way to visual recognition tasks. Consequently, there is a pressing need for the development of algorithms that are designed for the joint problem of improving visual appearance and recognition, which will be an enabling factor for the deployment of visual recognition tools in many real-world scenarios. To address this, we introduce the UG^2 dataset as a large-scale benchmark composed of video imagery captured under challenging conditions, and two enhancement tasks designed to test algorithmic impact on visual quality and automatic object recognition. Furthermore, we propose a set of metrics to evaluate the joint improvement of such tasks as well as individual algorithmic advances, including a novel psychophysics-based evaluation regime for human assessment and a realistic set of quantitative measures for object recognition performance. We introduce six new algorithms for image restoration or enhancement, which were created as part of the IARPA-sponsored UG^2 Challenge workshop held at CVPR 2018. Under the proposed evaluation regime, we present an in-depth analysis of these algorithms and a host of deep learning-based and classic baseline approaches. From the observed results, it is evident that we are in the early days of building a bridge between computational photography and visual recognition, leaving many opportunities for innovation in this area. | Sajjadi et al. @cite_87 argue that the use of traditional metrics such as Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM), or the Information Fidelity Criterion (IFC) might not reflect the performance of some models, and propose the use of object recognition performance as an evaluation metric. They observed that methods that produced images of higher perceptual quality obtained higher classification performance despite obtaining low PSNR scores. In agreement with this, Gondal et al. @cite_106 observed the correlation of the perceptual quality of an image with its performance when processed by object recognition models. Similarly, Tahboub et al. @cite_57 evaluate the impact of degradation caused by video compression on pedestrian detection. Other approaches have used visual recognition as a way to evaluate the performance of visual enhancement algorithms for tasks such as text deblurring @cite_88 @cite_8 , image colorization @cite_33 , and single image super resolution @cite_100 . | {
"cite_N": [
"@cite_33",
"@cite_8",
"@cite_87",
"@cite_57",
"@cite_106",
"@cite_88",
"@cite_100"
],
"mid": [
"2326925005",
"",
"2580080206",
"",
"2114380981",
"2319561215",
"1984043993"
],
"abstract": [
"Given a grayscale photograph as input, this paper attacks the problem of hallucinating a plausible color version of the photograph. This problem is clearly underconstrained, so previous approaches have either relied on significant user interaction or resulted in desaturated colorizations. We propose a fully automatic approach that produces vibrant and realistic colorizations. We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result. The system is implemented as a feed-forward pass in a CNN at test time and is trained on over a million color images. We evaluate our algorithm using a “colorization Turing test,” asking human participants to choose between a generated and ground truth color image. Our method successfully fools humans on 32 of the trials, significantly higher than previous methods. Moreover, we show that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder. This approach results in state-of-the-art performance on several feature learning benchmarks.",
"",
"Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding high PSNR values. We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.",
"",
"Face recognition degrades when faces are of very low resolution since many details about the difference between one person and another can only be captured in images of sufficient resolution. In this work, we propose a new procedure for recognition of low-resolution faces, when there is a high-resolution training set available. Most previous super-resolution approaches are aimed at reconstruction, with recognition only as an after-thought. In contrast, in the proposed method, face features, as they would be extracted for a face recognition algorithm (e.g., eigenfaces, Fisher-faces, etc.), are included in a super-resolution method as prior information. This approach simultaneously provides measures of fit of the super-resolution result, from both reconstruction and recognition perspectives. This is different from the conventional paradigms of matching in a low-resolution domain, or, alternatively, applying a super-resolution algorithm to a low-resolution face and then classifying the super-resolution result. We show, for example, that recognition of faces of as low as 6 times 6 pixel size is considerably improved compared to matching using a super-resolution reconstruction followed by classification, and to matching with a low-resolution training set.",
"In this work we address the problem of blind deconvolution and denoising. We focus on restoration of text documents and we show that this type of highly structured data can be successfully restored by a convolutional neural network. The networks are trained to reconstruct high-quality images directly from blurry inputs without assuming any specific blur and noise models. We demonstrate the performance of the convolutional networks on a large set of text documents and on a combination of realistic de-focus and camera shake blur kernels. On this artificial data, the convolutional networks significantly outperform existing blind deconvolution methods, including those optimized for text, in terms of image quality and OCR accuracy. In fact, the networks outperform even state-of-the-art non-blind methods for anything but the lowest noise levels. The approach is validated on real photos taken by various devices.",
"Currently two evaluation methods of super-resolution (SR) techniques prevail: The objective Peak Signal to Noise Ratio (PSNR) and a qualitative measure based on manual visual inspection. Both of these methods are sub-optimal: The latter does not scale well to large numbers of images, while the former does not necessarily reflect the perceived visual quality. We address these issues in this paper and propose an evaluation method based on image classification. We show that perceptual image quality measures like structural similarity are not suitable for evaluation of SR methods. On the other hand a systematic evaluation using large datasets of thousands of real-world images provides a consistent comparison of SR algorithms that corresponds to perceived visual quality. We verify the success of our approach by presenting an evaluation of three recent super-resolution algorithms on standard image classification datasets."
]
} |
1901.09482 | 2913508739 | What is the current state-of-the-art for image restoration and enhancement applied to degraded images acquired under less than ideal circumstances? Can the application of such algorithms as a pre-processing step improve image interpretability for manual analysis or automatic visual recognition to classify scene content? While there have been important advances in the area of computational photography to restore or enhance the visual quality of an image, the capabilities of such techniques have not always translated in a useful way to visual recognition tasks. Consequently, there is a pressing need for the development of algorithms that are designed for the joint problem of improving visual appearance and recognition, which will be an enabling factor for the deployment of visual recognition tools in many real-world scenarios. To address this, we introduce the UG^2 dataset as a large-scale benchmark composed of video imagery captured under challenging conditions, and two enhancement tasks designed to test algorithmic impact on visual quality and automatic object recognition. Furthermore, we propose a set of metrics to evaluate the joint improvement of such tasks as well as individual algorithmic advances, including a novel psychophysics-based evaluation regime for human assessment and a realistic set of quantitative measures for object recognition performance. We introduce six new algorithms for image restoration or enhancement, which were created as part of the IARPA-sponsored UG^2 Challenge workshop held at CVPR 2018. Under the proposed evaluation regime, we present an in-depth analysis of these algorithms and a host of deep learning-based and classic baseline approaches. From the observed results, it is evident that we are in the early days of building a bridge between computational photography and visual recognition, leaving many opportunities for innovation in this area. | While the above approaches employ object recognition in addition to visual enhancement, there are approaches designed to overlook the visual appearance of the image and instead make use of enhancement techniques to exclusively improve the object recognition performance. Sharma et al. @cite_79 make use of dynamic enhancement filters in an end-to-end processing and classification pipeline that incorporates two loss functions (enhancement and classification). The approach focuses on improving recognition performance on challenging high-quality images. In contrast to this, Yim et al. @cite_13 propose a classification architecture (comprised of a pre-processing module and a neural network model) to handle images degraded by noise. Li et al. @cite_40 introduced a dehazing method that is concatenated with Faster R-CNN and jointly optimized as a unified pipeline. It outperforms traditional Faster R-CNN and other non-joint approaches. | {
"cite_N": [
"@cite_79",
"@cite_40",
"@cite_13"
],
"mid": [
"2963068450",
"2779176852",
"2766947087"
],
"abstract": [
"Convolutional neural networks rely on image texture and structure to serve as discriminative features to classify the image content. Image enhancement techniques can be used as preprocessing steps to help improve the overall image quality and in turn improve the overall effectiveness of a CNN. Existing image enhancement methods, however, are designed to improve the perceptual quality of an image for a human observer. In this paper, we are interested in learning CNNs that can emulate image enhancement and restoration, but with the overall goal to improve image classification and not necessarily human perception. To this end, we present a unified CNN architecture that uses a range of enhancement filters that can enhance image-specific details via end-to-end dynamic filter learning. We demonstrate the effectiveness of this strategy on four challenging benchmark datasets for fine-grained, object, scene, and texture classification: CUB-200-2011, PASCAL-VOC2007, MIT-Indoor, and DTD. Experiments using our proposed enhancement show promising results on all the datasets. In addition, our approach is capable of improving the performance of all generic CNN architectures.",
"This paper proposes an image dehazing model built with a convolutional neural network (CNN), called All-in-One Dehazing Network (AOD-Net). It is designed based on a re-formulated atmospheric scattering model. Instead of estimating the transmission matrix and the atmospheric light separately as most previous models did, AOD-Net directly generates the clean image through a light-weight CNN. Such a novel end-to-end design makes it easy to embed AOD-Net into other deep models, e.g., Faster R-CNN, for improving high-level tasks on hazy images. Experimental results on both synthesized and natural hazy image datasets demonstrate our superior performance than the state-of-the-art in terms of PSNR, SSIM and the subjective visual quality. Furthermore, when concatenating AOD-Net with Faster R-CNN, we witness a large improvement of the object detection performance on hazy images.",
"Despite the appeal of deep neural networks that largely replace the traditional handmade filters, they still suffer from isolated cases that cannot be properly handled only by the training of convolutional filters. Abnormal factors, including real-world noise, blur, or other quality degradations, ruin the output of a neural network. These unexpected problems can produce critical complications, and it is surprising that there has only been minimal research into the effects of noise in the deep neural network model. Therefore, we present an exhaustive investigation into the effect of noise in image classification and suggest a generalized architecture of a dual-channel model to treat quality degraded input images. We compare the proposed dual-channel model with a simple single model and show it improves the overall performance of neural networks on various types of quality degraded input datasets."
]
} |
1901.09237 | 2950847180 | Digitally retouching images has become a popular trend, with people posting altered images on social media and even magazines posting flawless facial images of celebrities. Further, with advancements in Generative Adversarial Networks (GANs), changing attributes and retouching have now become very easy. Such synthetic alterations have an adverse effect on face recognition algorithms. While researchers have proposed to detect image tampering, detecting GAN-generated images has still not been explored. This paper proposes a supervised deep learning algorithm using Convolutional Neural Networks (CNNs) to detect synthetically altered images. The algorithm yields an accuracy of 99.65% on detecting retouching on the ND-IIITD dataset. It outperforms the previous state of the art, which reported an accuracy of 87% on the database. For distinguishing between real images and images generated using GANs, the proposed algorithm yields an accuracy of 99.83%. | Retouching, makeup detection, face spoofing and morphing are widely studied areas that can be considered similar to retouching detection. Recent work by @cite_2 makes use of a supervised deep Boltzmann machine algorithm for detecting retouching on the ND-IIITD database. It also introduces the ND-IIITD dataset, which consists of 2600 original and 2275 retouched images. It uses different facial parts to learn features for classification. In 2017, @cite_14 proposed an algorithm which uses semi-supervised autoencoders. The paper reports results on the Multi-Demographic Retouched Faces (MDRF) dataset. Earlier research by Kee and Farid @cite_12 learned a support vector regression (SVR) between the retouched and original images. They used both geometric and photometric features for training the SVR on various celebrity images. | {
"cite_N": [
"@cite_14",
"@cite_12",
"@cite_2"
],
"mid": [
"2757024555",
"2112069974",
"2432677034"
],
"abstract": [
"Digital retouching of face images is becoming more widespread due to the introduction of software packages that automate the task. Several researchers have introduced algorithms to detect whether a face image is original or retouched. However, previous work on this topic has not considered whether or how accuracy of retouching detection varies with the demography of face images. In this paper, we introduce a new Multi-Demographic Retouched Faces (MDRF) dataset, which contains images belonging to two genders, male and female, and three ethnicities, Indian, Chinese, and Caucasian. Further, retouched images are created using two different retouching software packages. The second major contribution of this research is a novel semi-supervised autoencoder incorporating \"subclass\" information to improve classification. The proposed approach outperforms existing state-of-the-art detection algorithms for the task of generalized retouching detection. Experiments conducted with multiple combinations of ethnicities show that accuracy of retouching detection can vary greatly based on the demographics of the training and testing images.",
"In recent years, advertisers and magazine editors have been widely criticized for taking digital photo retouching to an extreme. Impossibly thin, tall, and wrinkle- and blemish-free models are routinely splashed onto billboards, advertisements, and magazine covers. The ubiquity of these unrealistic and highly idealized images has been linked to eating disorders and body image dissatisfaction in men, women, and children. In response, several countries have considered legislating the labeling of retouched photos. We describe a quantitative and perceptually meaningful metric of photo retouching. Photographs are rated on the degree to which they have been digitally altered by explicitly modeling and estimating geometric and photometric changes. This metric correlates well with perceptual judgments of photo retouching and can be used to objectively judge by how much a retouched photo has strayed from reality.",
"Digitally altering, or retouching, face images is a common practice for images on social media, photo sharing websites, and even identification cards when the standards are not strictly enforced. This research demonstrates the effect of digital alterations on the performance of automatic face recognition, and also introduces an algorithm to classify face images as original or retouched with high accuracy. We first introduce two face image databases with unaltered and retouched images. Face recognition experiments performed on these databases show that when a retouched image is matched with its original image or an unaltered gallery image, the identification performance is considerably degraded, with a drop in matching accuracy of up to 25 . However, when images are retouched with the same style, the matching accuracy can be misleadingly high in comparison with matching original images. To detect retouching in face images, a novel supervised deep Boltzmann machine algorithm is proposed. It uses facial parts to learn discriminative features to classify face images as original or retouched. The proposed approach for classifying images as original or retouched yields an accuracy of over 87 on the data sets introduced in this paper and over 99 on three other makeup data sets used by previous researchers. This is a substantial increase in accuracy over the previous state-of-the-art algorithm, which has shown <50 accuracy in classifying original and retouched images from the ND-IIITD retouched faces database."
]
} |
1901.09244 | 2912003633 | Video recognition models have progressed significantly over the past few years, evolving from shallow classifiers trained on hand-crafted features to deep spatiotemporal networks. However, labeled video data required to train such models have not been able to keep up with the ever-increasing depth and sophistication of these networks. In this work, we propose an alternative approach to learning video representations that require no semantically labeled videos and instead leverages the years of effort in collecting and labeling large and clean still-image datasets. We do so by using state-of-the-art models pre-trained on image datasets as "teachers" to train video models in a distillation framework. We demonstrate that our method learns truly spatiotemporal features, despite being trained only using supervision from still-image networks. Moreover, it learns good representations across different input modalities, using completely uncurated raw video data sources and with different 2D teacher models. Our method obtains strong transfer performance, outperforming standard techniques for bootstrapping video architectures with image-based models by 16%. We believe that our approach opens up new approaches for learning spatiotemporal representations from unlabeled video data. | Video understanding, specifically for the task of human action recognition, is a well-studied problem in computer vision. Analogously to the progress of image-based recognition methods, which have advanced from hand-crafted features @cite_14 @cite_34 to modern deep networks @cite_2 @cite_11 @cite_32 , video understanding methods have also evolved from hand-designed models @cite_19 @cite_13 @cite_37 to deep spatiotemporal networks @cite_26 @cite_33 . However, while image-based recognition has seen dramatic gains in accuracy, improvements in video analysis have been more modest. In the still-image domain, deep models have greatly benefited from the availability of well-labeled datasets, such as ImageNet @cite_12 or Places @cite_31 . | {
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_26",
"@cite_33",
"@cite_32",
"@cite_19",
"@cite_2",
"@cite_31",
"@cite_34",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"2151103935",
"1522734439",
"2156303437",
"2962835968",
"2105101328",
"2274287116",
"2732026016",
"2161969291",
"2126574503",
"2952020226",
"2949650786"
],
"abstract": [
"",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.",
"Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge",
"The rise of multi-million-item dataset initiatives has enabled data-hungry machine learning algorithms to reach near-human semantic classification performance at tasks such as visual object and scene recognition. Here we describe the Places Database, a repository of 10 million scene photographs, labeled with scene semantic categories, comprising a large and diverse list of the types of environments encountered in the world. Using the state-of-the-art Convolutional Neural Networks (CNNs), we provide scene classification CNNs (Places-CNNs) as baselines, that significantly outperform the previous approaches. Visualization of the CNNs trained on Places shows that object detectors emerge as an intermediate representation of scene classification. With its high-coverage and high-diversity of exemplars, the Places Database along with the Places-CNNs offer a novel resource to guide future progress on scene recognition problems.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"Feature trajectories have shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We, also, investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e. KTH, YouTube, Hollywood2 and UCF sports.",
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."
]
} |
1901.09244 | 2912003633 | Video recognition models have progressed significantly over the past few years, evolving from shallow classifiers trained on hand-crafted features to deep spatiotemporal networks. However, labeled video data required to train such models have not been able to keep up with the ever-increasing depth and sophistication of these networks. In this work, we propose an alternative approach to learning video representations that require no semantically labeled videos and instead leverages the years of effort in collecting and labeling large and clean still-image datasets. We do so by using state-of-the-art models pre-trained on image datasets as "teachers" to train video models in a distillation framework. We demonstrate that our method learns truly spatiotemporal features, despite being trained only using supervision from still-image networks. Moreover, it learns good representations across different input modalities, using completely uncurated raw video data sources and with different 2D teacher models. Our method obtains strong transfer performance, outperforming standard techniques for bootstrapping video architectures with image-based models by 16%. We believe that our approach opens up new approaches for learning spatiotemporal representations from unlabeled video data. | Until recently, video datasets have either been well-labeled but small @cite_47 @cite_24 @cite_7 , or large but weakly-labeled @cite_44 @cite_22 . A recently introduced dataset, Kinetics @cite_35 , is currently the largest well-annotated dataset, with around 300K videos labeled into 400 categories (we note a larger version with 600K videos in 600 categories was recently released). It is nearly two orders of magnitude larger than previously established benchmarks in video classification @cite_47 @cite_24 . As expected, pre-training networks on this dataset has yielded significant gains in accuracy @cite_39 on many standard benchmarks @cite_47 @cite_24 @cite_7 , and has won CVPR 2017 ActivityNet and Charades challenges. However, it is worth noting that this dataset was collected with significant curation and annotation effort @cite_35 . | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_7",
"@cite_39",
"@cite_24",
"@cite_44",
"@cite_47"
],
"mid": [
"2619947201",
"2524365899",
"2337252826",
"2619082050",
"24089286",
"2016053056",
"2126579184"
],
"abstract": [
"We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.",
"Many recent advancements in Computer Vision are attributed to large datasets. Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of 8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.",
"Computer vision has a great potential to help our daily lives by searching for lost keys, watering flowers or reminding us to take a pill. To succeed with such tasks, computer vision methods need to be trained from real and diverse examples of our daily dynamic scenes. While most of such scenes are not particularly exciting, they typically do not appear on YouTube, in movies or TV broadcasts. So how do we collect sufficiently many diverse but boring samples representing our lives? We propose a novel Hollywood in Homes approach to collect such data. Instead of shooting videos in the lab, we ensure diversity by distributing and crowdsourcing the whole process of video creation from script writing to video recording and annotation. Following this procedure we collect a new dataset, Charades, with hundreds of people recording videos in their own homes, acting out casual everyday activities. The dataset is composed of 9,848 annotated videos with an average length of 30 s, showing activities of 267 people from three continents. Each video is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacted objects. In total, Charades provides 27,847 video descriptions, 66,500 temporally localized intervals for 157 action classes and 41,104 labels for 46 object classes. Using this rich data, we evaluate and provide baseline results for several tasks including action recognition and automatic description generation. We believe that the realism, diversity, and casual nature of this dataset will present unique challenges and new opportunities for computer vision community.",
"The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9 on HMDB-51 and 98.0 on UCF-101.",
"We introduce UCF101 which is currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips and 27 hours of video data. The database consists of realistic user uploaded videos containing camera motion and cluttered background. Additionally, we provide baseline action recognition results on this new dataset using standard bag of words approach with overall performance of 44.5 . To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips and also unconstrained nature of such clips.",
"Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3 to 63.9 ), but only a surprisingly modest improvement compared to single-frame models (59.3 to 60.9 ). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3 up from 43.9 ).",
"With nearly one billion online videos viewed everyday, an emerging new frontier in computer vision research is recognition and search in video. While much effort has been devoted to the collection and annotation of large scalable static image datasets containing thousands of image categories, human action datasets lag far behind. Current action recognition databases contain on the order of ten different action categories collected under fairly controlled conditions. State-of-the-art performance on these datasets is now near ceiling and thus there is a need for the design and creation of new benchmarks. To address this issue we collected the largest action video database to-date with 51 action categories, which in total contain around 7,000 manually annotated clips extracted from a variety of sources ranging from digitized movies to YouTube. We use this database to evaluate the performance of two representative computer vision systems for action recognition and explore the robustness of these methods under various conditions such as camera motion, viewpoint, video quality and occlusion."
]
} |
1901.09244 | 2912003633 | Video recognition models have progressed significantly over the past few years, evolving from shallow classifiers trained on hand-crafted features to deep spatiotemporal networks. However, labeled video data required to train such models have not been able to keep up with the ever-increasing depth and sophistication of these networks. In this work, we propose an alternative approach to learning video representations that require no semantically labeled videos and instead leverages the years of effort in collecting and labeling large and clean still-image datasets. We do so by using state-of-the-art models pre-trained on image datasets as "teachers" to train video models in a distillation framework. We demonstrate that our method learns truly spatiotemporal features, despite being trained only using supervision from still-image networks. Moreover, it learns good representations across different input modalities, using completely uncurated raw video data sources and with different 2D teacher models. Our method obtains strong transfer performance, outperforming standard techniques for bootstrapping video architectures with image-based models by 16%. We believe that our approach opens up new approaches for learning spatiotemporal representations from unlabeled video data. | The challenge in generating large-scale well-labeled video datasets stems from the fact that a human annotator has to spend much longer to label a video than a single image. Previous work has attempted to reduce this labeling effort through heuristics @cite_8 , but these methods still require a human annotator to clean up the final labels. There has also been some work in learning unsupervised video representations @cite_15 @cite_21 , but it has typically led to inferior results compared to supervised features. | {
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_8"
],
"mid": [
"2949536664",
"2487442924",
"2780470340"
],
"abstract": [
"We propose a self-supervised approach for learning representations and robotic behaviors entirely from unlabeled videos recorded from multiple viewpoints, and study how this representation can be used in two robotic imitation settings: imitating object interactions from videos of humans, and imitating human poses. Imitation of human behavior requires a viewpoint-invariant representation that captures the relationships between end-effectors (hands or robot grippers) and the environment, object attributes, and body pose. We train our representations using a metric learning loss, where multiple simultaneous viewpoints of the same observation are attracted in the embedding space, while being repelled from temporal neighbors which are often visually similar but functionally different. In other words, the model simultaneously learns to recognize what is common between different-looking images, and what is different between similar-looking images. This signal causes our model to discover attributes that do not change across viewpoint, but do change across time, while ignoring nuisance variables such as occlusions, motion blur, lighting and background. We demonstrate that this representation can be used by a robot to directly mimic human poses without an explicit correspondence, and that it can be used as a reward function within a reinforcement learning algorithm. While representations are learned from an unlabeled collection of task-related videos, robot behaviors such as pouring are learned by watching a single 3rd-person demonstration by a human. Reward functions obtained by following the human demonstrations under the learned representation enable efficient reinforcement learning that is practical for real-world robotic systems. Video results, open-source code and dataset are available at this https URL",
"In this paper, we present an approach for learning a visual representation from the raw spatiotemporal signals in videos. Our representation is learned without supervision from semantic labels. We formulate our method as an unsupervised sequential verification task, i.e., we determine whether a sequence of frames from a video is in the correct temporal order. With this simple task and no semantic labels, we learn a powerful visual representation using a Convolutional Neural Network (CNN). The representation contains complementary information to that learned from supervised image datasets like ImageNet. Qualitative results show that our method captures information that is temporally varying, such as human pose. When used as pre-training for action recognition, our method gives significant gains over learning without external data on benchmark datasets like UCF101 and HMDB51. To demonstrate its sensitivity to human pose, we show results for pose estimation on the FLIC and MPII datasets that are competitive, or better than approaches using significantly more supervision. Our method can be combined with supervised representations to provide an additional boost in accuracy.",
"This paper describes a procedure for the creation of large-scale video datasets for action classification and localization from unconstrained, realistic web data. The scalability of the proposed procedure is demonstrated by building a novel video benchmark, named SLAC (Sparsely Labeled ACtions), consisting of over 520K untrimmed videos and 1.75M clip annotations spanning 200 action categories. Using our proposed framework, annotating a clip takes merely 8.8 seconds on average. This represents a saving in labeling time of over 95 compared to the traditional procedure of manual trimming and localization of actions. Our approach dramatically reduces the amount of human labeling by automatically identifying hard clips, i.e., clips that contain coherent actions but lead to prediction disagreement between action classifiers. A human annotator can disambiguate whether such a clip truly contains the hypothesized action in a handful of seconds, thus generating labels for highly informative samples at little cost. We show that our large-scale dataset can be used to effectively pre-train action recognition models, significantly improving final metrics on smaller-scale benchmarks after fine-tuning. On Kinetics, UCF-101 and HMDB-51, models pre-trained on SLAC outperform baselines trained from scratch, by 2.0 , 20.1 and 35.4 in top-1 accuracy, respectively when RGB input is used. Furthermore, we introduce a simple procedure that leverages the sparse labels in SLAC to pre-train action localization models. On THUMOS14 and ActivityNet-v1.3, our localization model improves the mAP of baseline model by 8.6 and 2.5 , respectively."
]
} |
1901.09244 | 2912003633 | Video recognition models have progressed significantly over the past few years, evolving from shallow classifiers trained on hand-crafted features to deep spatiotemporal networks. However, labeled video data required to train such models have not been able to keep up with the ever-increasing depth and sophistication of these networks. In this work, we propose an alternative approach to learning video representations that require no semantically labeled videos and instead leverages the years of effort in collecting and labeling large and clean still-image datasets. We do so by using state-of-the-art models pre-trained on image datasets as "teachers" to train video models in a distillation framework. We demonstrate that our method learns truly spatiotemporal features, despite being trained only using supervision from still-image networks. Moreover, it learns good representations across different input modalities, using completely uncurated raw video data sources and with different 2D teacher models. Our method obtains strong transfer performance, outperforming standard techniques for bootstrapping video architectures with image-based models by 16%. We believe that our approach opens up new approaches for learning spatiotemporal representations from unlabeled video data. | The question we pose is: since labeling images is faster, and since we already have large, well-labeled image datasets such as ImageNet, can we instead use these to bootstrap the learning of spatiotemporal video architectures? Unsurprisingly, various previous approaches have attempted this. The popular two-stream architecture @cite_33 uses individual frames from the video as input. Hence it initializes the RGB stream of the network with weights pre-trained on ImageNet and then fine-tunes them for action classification on the action dataset. More recent variants of two-stream architectures have also initialized the flow stream @cite_28 from weights pretrained on ImageNet by viewing optical flow as a grayscale image. | {
"cite_N": [
"@cite_28",
"@cite_33"
],
"mid": [
"2507009361",
"2156303437"
],
"abstract": [
"Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition. which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-the-of-art performance on the datasets of HMDB51 ( ( 69.4 , )) and UCF101 ( ( 94.2 , )). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices (Models and code at https: github.com yjxiong temporal-segment-networks).",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification."
]
} |
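For readers of this entry, the cross-modality initialization described in the related_work field (bootstrapping a flow stream from ImageNet RGB weights by treating optical flow as a grayscale image) can be sketched in a few lines of PyTorch. This is a minimal illustration under the mean-then-replicate scheme popularized by TSN-style cross-modality pre-training; the function name and shapes are our assumptions, not code from the cited papers.

```python
import torch

def bootstrap_flow_conv1(rgb_conv1: torch.Tensor, num_flow_frames: int) -> torch.Tensor:
    """Initialize the first conv layer of a flow stream from pretrained RGB weights.

    Averages the ImageNet-pretrained kernels over the 3 color channels and
    replicates the result across the 2 * num_flow_frames flow input channels
    (one x- and one y-displacement map per stacked frame).
    """
    # rgb_conv1: (out_channels, 3, kH, kW), e.g. conv1 weights of an ImageNet model
    gray = rgb_conv1.mean(dim=1, keepdim=True)          # (out, 1, kH, kW)
    return gray.repeat(1, 2 * num_flow_frames, 1, 1)    # (out, 2L, kH, kW)
```

With the usual 10-frame flow stack (num_flow_frames=10), this yields a kernel of shape (out, 20, kH, kW) that can be copied into the flow network before fine-tuning.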
1901.09244 | 2912003633 | Video recognition models have progressed significantly over the past few years, evolving from shallow classifiers trained on hand-crafted features to deep spatiotemporal networks. However, labeled video data required to train such models have not been able to keep up with the ever-increasing depth and sophistication of these networks. In this work, we propose an alternative approach to learning video representations that require no semantically labeled videos and instead leverages the years of effort in collecting and labeling large and clean still-image datasets. We do so by using state-of-the-art models pre-trained on image datasets as "teachers" to train video models in a distillation framework. We demonstrate that our method learns truly spatiotemporal features, despite being trained only using supervision from still-image networks. Moreover, it learns good representations across different input modalities, using completely uncurated raw video data sources and with different 2D teacher models. Our method obtains strong transfer performance, outperforming standard techniques for bootstrapping video architectures with image-based models by 16%. We believe that our approach opens up new approaches for learning spatiotemporal representations from unlabeled video data. | However, such initializations are only applicable to video models that use 2D convolutions, analogous to those applied in CNNs for still images. What about more complex, truly spatiotemporal models, such as 3D convolutional architectures @cite_26 ? Until recently, such models have largely been limited to pre-training on large but weakly-labeled video datasets, such as Sports1M @cite_44 . Recent work @cite_39 @cite_29 proposed a nice alternative, consisting of inflating standard 2D CNN kernels to 3D, by simply replicating the 2D kernels in time. While effective in getting strong performance on large benchmarks, on small datasets this approach tends to bias video models to be close to static replicas of the image models. Moreover, such initialization constrains the 3D architecture to be identical to the 2D CNN, except for the additional third dimension in kernels. This effectively restricts the design of video models to extensions of what works best in the still-image domain, which may not be the best architectures for video analysis. | {
"cite_N": [
"@cite_44",
"@cite_29",
"@cite_26",
"@cite_39"
],
"mid": [
"2016053056",
"2553594924",
"1522734439",
"2619082050"
],
"abstract": [
"Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3 to 63.9 ), but only a surprisingly modest improvement compared to single-frame models (59.3 to 60.9 ). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3 up from 43.9 ).",
"Two-stream Convolutional Networks (ConvNets) have shown strong performance for human action recognition in videos. Recently, Residual Networks (ResNets) have arisen as a new technique to train extremely deep architectures. In this paper, we introduce spatiotemporal ResNets as a combination of these two approaches. Our novel architecture generalizes ResNets for the spatiotemporal domain by introducing residual connections in two ways. First, we inject residual connections between the appearance and motion pathways of a two-stream architecture to allow spatiotemporal interaction between the two streams. Second, we transform pretrained image ConvNets into spatiotemporal networks by equipping these with learnable convolutional filters that are initialized as temporal residual connections and operate on adjacent feature maps in time. This approach slowly increases the spatiotemporal receptive field as the depth of the model increases and naturally integrates image ConvNet design principles. The whole model is trained end-to-end to allow hierarchical learning of complex spatiotemporal features. We evaluate our novel spatiotemporal ResNet using two widely used action recognition benchmarks where it exceeds the previous state-of-the-art.",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9 on HMDB-51 and 98.0 on UCF-101."
]
} |
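The 2D-to-3D kernel inflation discussed in the entry above ("replicating the 2D kernels in time") admits a very small PyTorch sketch. Dividing by the temporal extent, which makes a "boring video" of repeated frames produce the same activations as the source image model, follows the I3D recipe; the function name and any framework details beyond that are our assumptions.

```python
import torch

def inflate_2d_kernel(w2d: torch.Tensor, t: int) -> torch.Tensor:
    """Inflate a 2D conv kernel (out, in, kH, kW) into a 3D kernel
    (out, in, t, kH, kW) by replicating it t times along the temporal axis.

    The 1/t rescaling keeps responses on a video of t identical frames
    equal to the original 2D network's responses on that single frame.
    """
    w3d = w2d.unsqueeze(2).repeat(1, 1, t, 1, 1)  # copy the kernel t times in time
    return w3d / t
```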
1907.09597 | 2963065757 | In this paper we explore how actor-critic methods in deep reinforcement learning, in particular Asynchronous Advantage Actor-Critic (A3C), can be extended with agent modeling. Inspired by recent works on representation learning and multiagent deep reinforcement learning, we propose two architectures to perform agent modeling: the first one based on parameter sharing, and the second one based on agent policy features. Both architectures aim to learn other agents' policies as auxiliary tasks, besides the standard actor (policy) and critic (values). We performed experiments in both cooperative and competitive domains. The former is a problem of coordinated multiagent object transportation and the latter is a two-player mini version of the Pommerman game. Our results show that the proposed architectures stabilize learning and outperform the standard A3C architecture when learning a best response in terms of expected rewards. | Deep Reinforcement Opponent Network (DRON) @cite_5 was the first DRL work that performed opponent modeling. DRON's idea is to have two networks: one learns @math values (similar to DQN @cite_2 ) and a second learns a representation of the opponent's policy. DRON used hand-crafted features to define the opponent network. In contrast, Deep Policy Inference Q-Network (DPIQN) and Deep Policy Inference Recurrent Q-Network (DPIRQN) @cite_0 learned opponent policy features directly from raw observations of the other agents. The way to learn these policy features is by means of auxiliary tasks @cite_19 that provide additional learning goals; in this case, the auxiliary task is to learn the opponent's policy. Then, the @math value function of the learning agent is conditioned on the policy features, which aim to reduce the non-stationarity of the multiagent environment. In contrast, our proposals do not need an experience replay buffer, learn completely on-policy, and make use of full parameter sharing @cite_23 . | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_5"
],
"mid": [
"2963259091",
"2950872548",
"2949201811",
"1757796397",
"2475089067"
],
"abstract": [
"We present DPIQN, a deep policy inference Q-network that targets multi-agent systems composed of controllable agents, collaborators, and opponents that interact with each other. We focus on one challenging issue in such systems---modeling agents with varying strategies---and propose to employ \"policy features'' learned from raw observations (e.g., raw images) of collaborators and opponents by inferring their policies. DPIQN incorporates the learned policy features as a hidden vector into its own deep Q-network (DQN), such that it is able to predict better Q values for the controllable agents than the state-of-the-art deep reinforcement learning models. We further propose an enhanced version of DPIQN, called deep recurrent policy inference Q-network (DRPIQN), for handling partial observability. Both DPIQN and DRPIQN are trained by an adaptive training procedure, which adjusts the network's attention to learn the policy features and its own Q-values at different phases of the training process. We present a comprehensive analysis of DPIQN and DRPIQN, and highlight their effectiveness and generalizability in various multi-agent settings. Our models are evaluated in a classic soccer game involving both competitive and collaborative scenarios. Experimental results performed on 1 vs. 1 and 2 vs. 2 games show that DPIQN and DRPIQN demonstrate superior performance to the baseline DQN and deep recurrent Q-network (DRQN) models. We also explore scenarios in which collaborators or opponents dynamically change their policies, and show that DPIQN and DRPIQN do lead to better overall performance in terms of stability and mean scores.",
"Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880 expert human performance, and a challenging suite of first-person, three-dimensional tasks leading to a mean speedup in learning of 10 @math and averaging 87 expert human performance on Labyrinth.",
"Many real-world problems, such as network packet routing and urban traffic control, are naturally modeled as multi-agent reinforcement learning (RL) problems. However, existing multi-agent RL methods typically scale poorly in the problem size. Therefore, a key challenge is to translate the success of deep learning on single-agent RL to the multi-agent setting. A major stumbling block is that independent Q-learning, the most popular multi-agent RL method, introduces nonstationarity that makes it incompatible with the experience replay memory on which deep Q-learning relies. This paper proposes two methods that address this problem: 1) using a multi-agent variant of importance sampling to naturally decay obsolete data and 2) conditioning each agent's value function on a fingerprint that disambiguates the age of the data sampled from the replay memory. Results on a challenging decentralised variant of StarCraft unit micromanagement confirm that these methods enable the successful combination of experience replay with multi-agent RL.",
"We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.",
"Opponent modeling is necessary in multi-agent settings where secondary agents with competing goals also adapt their strategies, yet it remains challenging because strategies interact with each other and change. Most previous work focuses on developing probabilistic models or parameterized strategies for specific applications. Inspired by the recent success of deep reinforcement learning, we present neural-based models that jointly learn a policy and the behavior of opponents. Instead of explicitly predicting the opponent's action, we encode observation of the opponents into a deep Q-Network (DQN); however, we retain explicit modeling (if desired) using multitasking. By using a Mixture-of-Experts architecture, our model automatically discovers different strategy patterns of opponents without extra supervision. We evaluate our models on a simulated soccer game and a popular trivia game, showing superior performance over DQN and its variants."
]
} |
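To make the auxiliary-task idea in this entry concrete, here is a minimal PyTorch sketch of an actor-critic network with an extra head trained to predict the modeled agent's actions. The layer sizes, names, and the plain cross-entropy auxiliary loss are illustrative assumptions, not the exact architectures proposed in the cited papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCriticWithAgentModel(nn.Module):
    """A3C-style network with an auxiliary head for the other agent's policy."""

    def __init__(self, obs_dim: int, n_actions: int, n_other_actions: int, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)          # own policy logits
        self.critic = nn.Linear(hidden, 1)                 # state-value estimate
        self.other = nn.Linear(hidden, n_other_actions)    # other agent's policy logits

    def forward(self, obs: torch.Tensor):
        h = self.body(obs)  # shared features condition all three heads
        return self.actor(h), self.critic(h), self.other(h)

def total_loss(a3c_loss: torch.Tensor,
               other_logits: torch.Tensor,
               observed_other_actions: torch.Tensor,
               aux_weight: float = 0.5) -> torch.Tensor:
    # Auxiliary supervised term: cross-entropy between the predicted policy of
    # the other agent and the actions it was actually observed to take,
    # added to the usual A3C policy/value loss.
    aux = F.cross_entropy(other_logits, observed_other_actions)
    return a3c_loss + aux_weight * aux
```

Because all three heads share the same body, gradients from the auxiliary term shape the representation that the actor and critic also consume, which is the intuition behind conditioning the learner on policy features.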
1907.09597 | 2963065757 | In this paper we explore how actor-critic methods in deep reinforcement learning, in particular Asynchronous Advantage Actor-Critic (A3C), can be extended with agent modeling. Inspired by recent works on representation learning and multiagent deep reinforcement learning, we propose two architectures to perform agent modeling: the first one based on parameter sharing, and the second one based on agent policy features. Both architectures aim to learn other agents' policies as auxiliary tasks, besides the standard actor (policy) and critic (values). We performed experiments in both cooperative and competitive domains. The former is a problem of coordinated multiagent object transportation and the latter is a two-player mini version of the Pommerman game. Our results show that the proposed architectures stabilize learning and outperform the standard A3C architecture when learning a best response in terms of expected rewards. | Deep Cognitive Hierarchies @cite_11 is an algorithm that aims to avoid overfitting in two-player games. It uses deep reinforcement learning to compute best responses to a distribution over policies and empirical game-theoretic analysis to compute new meta-strategy distributions. Theory of Mind Network @cite_16 tackles the problem of meta-learning, i.e., the proposed network should acquire a strong prior model for agents’ behaviour to bootstrap to richer predictions. DeepBPR+ studies the problem of efficient policy detection and reuse when playing against non-stationary agents in Markov games @cite_20 . In contrast, our goal is to estimate the opponent/teammate's policy at the same time that the agent is learning its respective (best response) policy; since these two elements are linked to each other, our proposals improve the stability of the learning process as well as increase the obtained rewards. | {
"cite_N": [
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"2962940757",
"2891287243",
"2567015638"
],
"abstract": [
"",
"In multiagent domains, coping with non-stationary agents that change behaviors constantly is a challenging problem, where an agent is usually required to be able to quickly detect the other agent's policy during online interaction, and then adapt its own policy accordingly. This paper studies efficient policy detecting and reusing techniques when playing against non-stationary agents in Markov games. We propose a new deep BPR+ algorithm by extending the recent BPR+ algorithm with a neural network as the value-function approximator. To detect policy accurately, we propose the taking advantage of the to infer the other agent's policy from reward signals and its behaviors. Instead of directly storing individual policies as BPR+, we introduce that serves as the policy library in BPR+, using policy distillation to achieve efficient online policy learning and reuse. Deep BPR+ inherits all the advantages of BPR+ and empirically shows better performance in terms of detection accuracy, cumulative rewards and speed of convergence compared to existing algorithms in complex Markov games with raw visual inputs.",
"Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks to bootstrap learning. In particular we consider jointly learning the goal-driven reinforcement learning problem with an unsupervised depth prediction task and a self-supervised loop closure classification task. Using this approach we can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, that show that the agent implicitly learns key navigation abilities, with only sparse rewards and without direct supervision."
]
} |
1907.09624 | 2964198301 | Object classes that surround us have a natural tendency to emerge at varying levels of abstraction. We propose a Bayesian approach to zero-shot learning (ZSL) that introduces the notion of meta-classes and implements a Bayesian hierarchy around these classes to effectively blend data likelihood with local and global priors. Local priors driven by data from seen classes, i.e. classes that are available at training time, become instrumental in recovering unseen classes, i.e. classes that are missing at training time, in a generalized ZSL setting. Hyperparameters of the Bayesian model offer a convenient way to optimize the trade-off between seen and unseen class accuracy in addition to guiding other aspects of model fitting. We conduct experiments on seven benchmark datasets including the large scale ImageNet and show that our model improves the current state of the art in the challenging generalized ZSL setting. | Generative models for zero-shot learning. Although most of the early work focused on discriminative models, there are a few studies that use generative models to tackle ZSL @cite_22 @cite_0 . The study in @cite_22 uses Normal distributions to model both image features and semantic vectors and learns a multimodal mapping between the two spaces. This mapping is optimized by minimizing a similarity-based cross-domain loss function. In a similar fashion, the study in @cite_0 utilizes a regression model to optimize a mapping between class attributes and parameters of class-conditional distributions. A comprehensive review of these techniques and their performance on several benchmark data sets can be found in @cite_6 . | {
"cite_N": [
"@cite_0",
"@cite_22",
"@cite_6"
],
"mid": [
"2964307109",
"2561940122",
"2963499153"
],
"abstract": [
"We present a simple generative framework for learning to predict previously unseen classes, based on estimating class-attribute-gated class-conditional distributions. We model each class-conditional distribution as an exponential family distribution and the parameters of the distribution of each seen unseen class are defined as functions of the respective observed class attributes. These functions can be learned using only the seen class data and can be used to predict the parameters of the class-conditional distribution of each unseen class. Unlike most existing methods for zero-shot learning that represent classes as fixed embeddings in some vector space, our generative model naturally represents each class as a probability distribution. It is simple to implement and also allows leveraging additional unlabeled data from unseen classes to improve the estimates of their class-conditional distributions using transductive semi-supervised learning. Moreover, it extends seamlessly to few-shot learning by easily updating these distributions when provided with a small number of additional labelled examples from unseen classes. Through a comprehensive set of experiments on several benchmark data sets, we demonstrate the efficacy of our framework.",
"",
"Due to the importance of zero-shot learning, i.e., classifying images where there is a lack of labeled training data, the number of proposed approaches has recently increased steadily. We argue that it is time to take a step back and to analyze the status quo of the area. The purpose of this paper is three-fold. First, given the fact that there is no agreed upon zero-shot learning benchmark, we first define a new benchmark by unifying both the evaluation protocols and data splits of publicly available datasets used for this task. This is an important contribution as published results are often not comparable and sometimes even flawed due to, e.g., pre-training on zero-shot test classes. Moreover, we propose a new zero-shot learning dataset, the Animals with Attributes 2 (AWA2) dataset which we make publicly available both in terms of image features and the images themselves. Second, we compare and analyze a significant number of the state-of-the-art methods in depth, both in the classic zero-shot setting but also in the more realistic generalized zero-shot setting. Finally, we discuss in detail the limitations of the current status of the area which can be taken as a basis for advancing it."
]
} |
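The generative recipe sketched in the related-work paragraph of the record above reduces to a few lines of code: fit a Gaussian per seen class, learn a map from class attributes to Gaussian parameters, and use that map to synthesize class-conditional distributions for unseen classes. The following is a minimal toy illustration of that idea, not the cited papers' exact models; the dimensions, the closed-form ridge regressor, and the equal-variance isotropic-Gaussian classifier are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_feat, d_attr, n_seen, n_unseen = 64, 16, 10, 5

# Class attributes (semantic vectors) for seen and unseen classes.
A_seen = rng.normal(size=(n_seen, d_attr))
A_unseen = rng.normal(size=(n_unseen, d_attr))

# Toy seen-class data: true class means depend linearly on the attributes.
W_true = rng.normal(size=(d_attr, d_feat))
X_seen = np.concatenate(
    [A_seen[c] @ W_true + rng.normal(scale=0.3, size=(50, d_feat))
     for c in range(n_seen)])
y_seen = np.repeat(np.arange(n_seen), 50)

# Empirical per-class Gaussian means of the seen classes.
mu_seen = np.stack([X_seen[y_seen == c].mean(axis=0) for c in range(n_seen)])

# Closed-form ridge regression from attributes to class means.
lam = 1e-2
W = np.linalg.solve(A_seen.T @ A_seen + lam * np.eye(d_attr),
                    A_seen.T @ mu_seen)

# Predicted means (hence class-conditional Gaussians) for unseen classes.
mu_unseen = A_unseen @ W

# Classify a test sample among unseen classes by nearest predicted mean,
# i.e. maximum likelihood under isotropic Gaussians of equal variance.
x_test = A_unseen[2] @ W_true + rng.normal(scale=0.3, size=d_feat)
pred = int(np.argmin(((mu_unseen - x_test) ** 2).sum(axis=1)))
print("predicted unseen class:", pred)  # expected: 2
```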
1907.09624 | 2964198301 | Object classes that surround us have a natural tendency to emerge at varying levels of abstraction. We propose a Bayesian approach to zero-shot learning (ZSL) that introduces the notion of meta-classes and implements a Bayesian hierarchy around these classes to effectively blend data likelihood with local and global priors. Local priors driven by data from seen classes, i.e. classes that are available at training time, become instrumental in recovering unseen classes, i.e. classes that are missing at training time, in a generalized ZSL setting. Hyperparameters of the Bayesian model offer a convenient way to optimize the trade-off between seen and unseen class accuracy in addition to guiding other aspects of model fitting. We conduct experiments on seven benchmark datasets including the large-scale ImageNet and show that our model improves the current state of the art in the challenging generalized ZSL setting. | Another closely related line of work leverages a hierarchical Bayesian model @cite_3 . Their method is similar to ours in the way priors are used to achieve knowledge transfer across classes. However, unlike ours, no semantic information is used when establishing the Bayesian hierarchy in @cite_3 , and class discovery is performed in a fully unsupervised fashion. Also, @math and @math play critical roles in modeling the expected global and local class dispersions (not modeled in [A1]); a small numerical illustration of this kind of hierarchical shrinkage follows this record. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2150385401"
],
"abstract": [
"We develop a hierarchical Bayesian model that learns categories from single training examples. The model transfers acquired knowledge from previously learned categories to a novel category, in the form of a prior over category means and variances. The model discovers how to group categories into meaningful super-categories that express different priors for new classes. Given a single example of a novel category, we can efficiently infer which super-category the novel category belongs to, and thereby estimate not only the new category's mean but also an appropriate similarity metric based on parameters inherited from the super-category. On MNIST and MSR Cambridge image datasets the model learns useful representations of novel categories based on just a single training example, and performs significantly better than simpler hierarchical Bayesian approaches. It can also discover new categories in a completely unsupervised fashion, given just one or a few examples."
]
} |
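To make the hierarchical-prior idea in the record above concrete, the toy computation below shrinks a class mean toward its meta-class mean through a conjugate Normal-Normal posterior. The within-class variance sigma2 and the prior variance tau2 stand in for the dispersion hyperparameters mentioned in the text; every number here is an illustrative assumption rather than the cited model's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, tau2 = 1.0, 0.25     # within-class and prior (between-class) variances

meta_mean = np.array([2.0, -1.0])              # meta-class mean (local prior)
true_mean = meta_mean + np.array([0.8, -0.5])  # the class's actual mean
x = true_mean + sigma2 ** 0.5 * rng.normal(size=(3, 2))  # only 3 samples

# Conjugate Normal-Normal posterior mean for the class mean:
# a precision-weighted average of the sample mean and the prior mean.
n = len(x)
w_data = (n / sigma2) / (n / sigma2 + 1.0 / tau2)
post_mean = w_data * x.mean(axis=0) + (1.0 - w_data) * meta_mean

print("sample mean   :", x.mean(axis=0))
print("posterior mean:", post_mean)   # shrunk toward meta_mean
```

With only three samples the posterior sits noticeably closer to the meta-class mean; as n grows, w_data approaches 1 and the data dominate, which is exactly the borrowing-of-strength behavior the text describes for unseen or data-poor classes.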
1907.09658 | 2962765560 | Although skeleton-based action recognition has achieved great success in recent years, most of the existing methods may suffer from a large model size and slow execution speed. To alleviate this issue, we analyze skeleton sequence properties to propose a Double-feature Double-motion Network (DD-Net) for skeleton-based action recognition. By using a lightweight network structure (i.e., 0.15 million parameters), DD-Net can reach a super fast speed of 3,500 FPS on one GPU, or 2,000 FPS on one CPU. By employing robust features, DD-Net achieves state-of-the-art performance on our experimental datasets: SHREC (i.e., hand actions) and JHMDB (i.e., body actions). Our code will be released together with this paper. | Nowadays, with the fast advancement of deep learning, skeleton acquisition is no longer limited to motion capture systems @cite_25 and depth cameras @cite_37 . RGB data, for instance, can be used to infer 2D skeletons @cite_27 @cite_30 or 3D skeletons @cite_38 @cite_43 in real time. Moreover, even WiFi signals can be used to estimate skeleton data @cite_40 @cite_17 . These achievements have made skeleton-based action recognition applicable to a huge amount of multimedia resources and have therefore stimulated the development of recognition models. | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_37",
"@cite_43",
"@cite_27",
"@cite_40",
"@cite_25",
"@cite_17"
],
"mid": [
"2963402313",
"2611932403",
"2056898157",
"2802545914",
"2559085405",
"2799062425",
"2090110089",
"2933379262"
],
"abstract": [
"There has been significant progress on pose estimation and increasing interests on pose tracking in recent years. At the same time, the overall algorithm and system complexity increases as well, making the algorithm analysis and comparison more difficult. This work provides simple and effective baseline methods. They are helpful for inspiring and evaluating new ideas for the field. State-of-the-art results are achieved on challenging benchmarks. The code will be available at https: github.com leoxiaobin pose.pytorch.",
"We present the first real-time method to capture the full global 3D skelet al pose of a human in a stable, temporally consistent manner using a single RGB camera. Our method combines a new convolutional neural network (CNN) based pose regressor with kinematic skeleton fitting. Our novel fully-convolutional pose formulation regresses 2D and 3D joint positions jointly in real time and does not require tightly cropped input frames. A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton. This makes our approach the first monocular RGB method usable in real-time applications such as 3D character control---thus far, the only monocular methods for such applications employed specialized RGB-D cameras. Our method's accuracy is quantitatively on par with the best offline 3D monocular RGB pose estimation methods. Our results are qualitatively comparable to, and sometimes better than, results from monocular RGB-D approaches, such as the Kinect. However, we show that our approach is more broadly applicable than RGB-D solutions, i.e., it works for outdoor scenes, community videos, and low quality commodity RGB cameras.",
"Recent advances in 3D depth cameras such as Microsoft Kinect sensors (www.xbox.com en-US kinect) have created many opportunities for multimedia computing. The Kinect sensor lets the computer directly sense the third dimension (depth) of the players and the environment. It also understands when users talk, knows who they are when they walk up to it, and can interpret their movements and translate them into a format that developers can use to build new experiences. While the Kinect sensor incorporates several advanced sensing hardware, this article focuses on the vision aspect of the Kinect sensor and its impact beyond the gaming industry.",
"In this paper, we present a novel method for real-time 3D hand pose estimation from single depth images using 3D Convolutional Neural Networks (CNNs). Image-based features extracted by 2D CNNs are not directly suitable for 3D hand pose estimation due to the lack of 3D spatial information. Our proposed 3D CNN-based method, taking a 3D volumetric representation of the hand depth image as input and extracting 3D features from the volumetric input, can capture the 3D spatial structure of the hand and accurately regress full 3D hand pose in a single pass. In order to make the 3D CNN robust to variations in hand sizes and global orientations, we perform 3D data augmentation on the training data. To further improve the estimation accuracy, we propose applying the 3D deep network architectures and leveraging the complete hand surface as intermediate supervision for learning 3D hand pose from depth images. Extensive experiments on three challenging datasets demonstrate that our proposed approach outperforms baselines and state-of-the-art methods. A cross-dataset experiment also shows that our method has good generalization ability. Furthermore, our method is fast as our implementation runs at over 91 frames per second on a standard computer with a single GPU.",
"We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency.",
"This paper demonstrates accurate human pose estimation through walls and occlusions. We leverage the fact that wireless signals in the WiFi frequencies traverse walls and reflect off the human body. We introduce a deep neural network approach that parses such radio signals to estimate 2D poses. Since humans cannot annotate radio signals, we use state-of-the-art vision model to provide cross-modal supervision. Specifically, during training the system uses synchronized wireless and visual inputs, extracts pose information from the visual stream, and uses it to guide the training process. Once trained, the network uses only the wireless signal for pose estimation. We show that, when tested on visible scenes, the radio-based system is almost as accurate as the vision-based system used to train it. Yet, unlike vision-based pose estimation, the radio-based system can estimate 2D poses through walls despite never trained on such scenarios. Demo videos are available at our website.",
"A comprehensive survey of computer vision-based human motion capture literature from the past two decades is presented. The focus is on a general overview based on a taxonomy of system functionalities, broken down into four processes: initialization, tracking, pose estimation, and recognition. Each process is discussed and divided into subprocesses and or categories of methods to provide a reference to describe and compare the more than 130 publications covered by the survey. References are included throughout the paper to exemplify important issues and their relations to the various methods. A number of general assumptions used in this research field are identified and the character of these assumptions indicates that the research field is still in an early stage of development. To evaluate the state of the art, the major application areas are identified and performances are analyzed in light of the methods presented in the survey. Finally, suggestions for future research directions are offered.",
"WiFi human sensing has achieved great progress in indoor localization, activity classification, etc. Retracing the development of these work, we have a natural question: can WiFi devices work like cameras for vision applications? In this paper We try to answer this question by exploring the ability of WiFi on estimating single person pose. We use a 3-antenna WiFi sender and a 3-antenna receiver to generate WiFi data. Meanwhile, we use a synchronized camera to capture person videos for corresponding keypoint annotations. We further propose a fully convolutional network (FCN), termed WiSPPN, to estimate single person pose from the collected data and annotations. Evaluation on over 80k images (16 sites and 8 persons) replies aforesaid question with a positive answer. Codes have been made publicly available at this https URL."
]
} |
1907.09658 | 2962765560 | Although skeleton-based action recognition has achieved great success in recent years, most of the existing methods may suffer from a large model size and slow execution speed. To alleviate this issue, we analyze skeleton sequence properties to propose a Double-feature Double-motion Network (DD-Net) for skeleton-based action recognition. By using a lightweight network structure (i.e., 0.15 million parameters), DD-Net can reach a super fast speed of 3,500 FPS on one GPU, or 2,000 FPS on one CPU. By employing robust features, DD-Net achieves state-of-the-art performance on our experimental datasets: SHREC (i.e., hand actions) and JHMDB (i.e., body actions). Our code will be released together with this paper. | In general, to achieve better performance in skeleton-based action recognition, previous studies work on two aspects: introducing new features for skeleton sequences @cite_28 @cite_7 @cite_33 @cite_18 @cite_36 @cite_14 @cite_35 , and proposing novel neural network architectures @cite_2 @cite_34 @cite_44 @cite_29 @cite_46 @cite_39 @cite_1 . | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_14",
"@cite_33",
"@cite_7",
"@cite_28",
"@cite_36",
"@cite_29",
"@cite_1",
"@cite_39",
"@cite_44",
"@cite_2",
"@cite_46",
"@cite_34"
],
"mid": [
"2910217813",
"2755813981",
"2796633859",
"2593146028",
"2467634805",
"2142203883",
"2792345332",
"",
"",
"2914992758",
"",
"2761860076",
"",
""
],
"abstract": [
"Dynamic hand gesture recognition has attracted increasing attention because of its importance for human–computer interaction. In this paper, we propose a novel motion feature augmented network (MFA-Net) for dynamic hand gesture recognition from skelet al data. MFA-Net exploits motion features of finger and global movements to augment features of deep network for gesture recognition. To describe finger articulated movements, finger motion features are extracted from the hand skeleton sequence via a variational autoencoder. Global motion features are utilized to represent the global movements of hand skeleton. These motion features along with the skeleton sequence are then fed into three branches of a recurrent neural network (RNN), which augment the motion features for RNN and improve the classification performance. The proposed MFA-Net is evaluated on two challenging skeleton-based dynamic hand gesture datasets, including DHG-14 28 dataset and SHREC’17 dataset. Experimental results demonstrate that our proposed method achieves comparable performance on DHG-14 28 dataset and better performance on SHREC’17 dataset when compared with start-of-the-art methods.",
"In this paper we present a simple 3D gesture recognizer based on trajectory matching, showing its good performances in classification and retrieval of command gestures based on single hand trajectories. We demonstrate that further simplifications in porting the classic \"1 dollar\" algorithm approach from the 2D to the 3D gesture recognition and retrieval problems can result in very high classification accuracy and retrieval scores even on datasets with a large number of different gestures executed by different users. Furthermore, recognition can be good even with heavily subsampled path traces and with incomplete gestures.",
"Most state-of-the-art methods for action recognition rely on a two-stream architecture that processes appearance and motion independently. In this paper, we claim that considering them jointly offers rich information for action recognition. We introduce a novel representation that gracefully encodes the movement of some semantic keypoints. We use the human joints as these keypoints and term our Pose moTion representation PoTion. Specifically, we first run a state-of-the-art human pose estimator [4] and extract heatmaps for the human joints in each frame. We obtain our PoTion representation by temporally aggregating these probability maps. This is achieved by 'colorizing' each of them depending on the relative time of the frames in the video clip and summing them. This fixed-size representation for an entire video clip is suitable to classify actions using a shallow convolutional neural network. Our experimental evaluation shows that PoTion outperforms other state-of-the-art pose representations [6, 48]. Furthermore, it is complementary to standard appearance and motion streams. When combining PoTion with the recent two-stream I3D approach [5], we obtain state-of-the-art performance on the JHMDB, HMDB and UCF101 datasets.",
"Sequence-based view invariant transform can effectively cope with view variations.Enhanced skeleton visualization method encodes spatio-temporal skeletons as visual and motion enhanced color images in a compact yet distinctive manner.Multi-stream convolutional neural networks fusion model is able to explore complementary properties among different types of enhanced color images.Our method consistently achieves the highest accuracies on four datasets, including the largest and most challenging NTU RGB+D dataset for skeleton-based action recognition. Human action recognition based on skeletons has wide applications in humancomputer interaction and intelligent surveillance. However, view variations and noisy data bring challenges to this task. Whats more, it remains a problem to effectively represent spatio-temporal skeleton sequences. To solve these problems in one goal, this work presents an enhanced skeleton visualization method for view invariant human action recognition. Our method consists of three stages. First, a sequence-based view invariant transform is developed to eliminate the effect of view variations on spatio-temporal locations of skeleton joints. Second, the transformed skeletons are visualized as a series of color images, which implicitly encode the spatio-temporal information of skeleton joints. Furthermore, visual and motion enhancement methods are applied on color images to enhance their local patterns. Third, a convolutional neural networks-based model is adopted to extract robust and discriminative features from color images. The final action class scores are generated by decision level fusion of deep features. Extensive experiments on four challenging datasets consistently demonstrate the superiority of our method.",
"In this paper, a new skeleton-based approach is proposed for 3D hand gesture recognition. Specifically, we exploit the geometric shape of the hand to extract an effective descriptor from hand skeleton connected joints returned by the Intel RealSense depth camera. Each descriptor is then encoded by a Fisher Vector representation obtained using a Gaussian Mixture Model. A multi-level representation of Fisher Vectors and other skeleton-based geometric features is guaranteed by a temporal pyramid to obtain the final feature vector, used later to achieve the classification by a linear SVM classifier. The proposed approach is evaluated on a challenging hand gesture dataset containing 14 gestures, performed by 20 participants performing the same gesture with two different numbers of fingers. Experimental results show that our skeleton-based approach consistently achieves superior performance over a depth-based approach.",
"Estimating 3D pose similarity is a fundamental problem on 3D motion data. Most previous work calculates L2-like distance of joint orientations or coordinates, which does not sufficiently reflect the pose similarity of human perception. In this paper, we present a new pose distance metric. First, we propose a new rich pose feature set called Geometric Pose Descriptor (GPD). GPD is more effective in encoding pose similarity by utilizing features on geometric relations among body parts, as well as temporal information such as velocities and accelerations. Based on GPD, we propose a semisupervised distance metric learning algorithm called Regularized Distance Metric Learning with Sparse Representation (RDSR), which integrates information from both unsupervised data relationship and labels. We apply the proposed pose distance metric to applications of motion transition decision and content-based pose retrieval. Quantitative evaluations demonstrate that our method achieves better results with only a small amount of human labels, showing that the proposed pose distance metric is a promising building block for various 3D-motion related applications.",
"Recent skeleton-based action recognition approaches achieve great improvement by using recurrent neural network (RNN) models. Currently, these approaches build an end-to-end network from coordinates of joints to class categories and improve accuracy by extending RNN to spatial domains. First, while such well-designed models and optimization strategies explore relations between different parts directly from joint coordinates, we provide a simple universal spatial modeling method perpendicular to the RNN model enhancement. Specifically, according to the evolution of previous work, we select a set of simple geometric features, and then seperately feed each type of features to a three-layer LSTM framework. Second, we propose a multistream LSTM architecture with a new smoothed score fusion technique to learn classification from different geometric feature streams. Furthermore, we observe that the geometric relational features based on distances between joints and selected lines outperform other features and the fusion results achieve the state-of-the-art performance on four datasets. We also show the sparsity of input gate weights in the first LSTM layer trained by geometric features and demonstrate that utilizing joint-line distances as input require less data for training.",
"",
"",
"Dynamic hand gesture recognition is a crucial yet challenging task in computer vision. The key of this task lies in an effective extraction of discriminative spatial and temporal features to model the evolutions of different gestures. In this paper, we propose an end-to-end Spatial-Temporal Attention Residual Temporal Convolutional Network (STA-Res-TCN) for skeleton-based dynamic hand gesture recognition, which learns different levels of attention and assigns them to each spatial-temporal feature extracted by the convolution filters at each time step. The proposed attention branch assists the networks to adaptively focus on the informative time frames and features while exclude the irrelevant ones that often bring in unnecessary noise. Moreover, our proposed STA-Res-TCN is a lightweight model that can be trained and tested in an extremely short time. Experiments on DHG-14 28 Dataset and SHREC’17 Track Dataset show that STA-Res-TCN outperforms state-of-the-art methods on both the 14 gestures setting and the more complicated 28 gestures setting.",
"",
"Motivated by the promising performance achieved by deep learning, an effective yet simple method is proposed to encode the spatio-temporal information of skeleton sequences into color texture images, referred to as joint distance maps (JDMs), and convolutional neural networks are employed to exploit the discriminative features from the JDMs for human action and interaction recognition. The pair-wise distances between joints over a sequence of single or multiple person skeletons are encoded into color variations to capture temporal information. The efficacy of the proposed method has been verified by the state-of-the-art results on the large RGB+D Dataset and small UTD-MHAD Dataset in both single-view and cross-view settings.",
"",
""
]
} |
1907.09658 | 2962765560 | Although skeleton-based action recognition has achieved great success in recent years, most of the existing methods may suffer from a large model size and slow execution speed. To alleviate this issue, we analyze skeleton sequence properties to propose a Double-feature Double-motion Network (DD-Net) for skeleton-based action recognition. By using a lightweight network structure (i.e., 0.15 million parameters), DD-Net can reach a super fast speed of 3,500 FPS on one GPU, or 2,000 FPS on one CPU. By employing robust features, DD-Net achieves state-of-the-art performance on our experimental datasets: SHREC (i.e., hand actions) and JHMDB (i.e., body actions). Our code will be released together with this paper. | A good skeleton-sequence representation should contain global motion information and be location-viewpoint invariant. However, it is challenging to satisfy both requirements with one feature. The studies @cite_7 @cite_18 @cite_42 @cite_35 focused on global motions without considering the location-viewpoint variation in their features. Other studies @cite_28 @cite_33 @cite_36 , on the contrary, introduced location-viewpoint invariant features without considering global motions. Our work bridges this gap by seamlessly integrating a location-viewpoint invariant feature with a two-scale global motion feature (a toy sketch of these two feature types follows this record). | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_33",
"@cite_7",
"@cite_28",
"@cite_36",
"@cite_42"
],
"mid": [
"2910217813",
"2755813981",
"2593146028",
"2467634805",
"2142203883",
"2792345332",
""
],
"abstract": [
"Dynamic hand gesture recognition has attracted increasing attention because of its importance for human–computer interaction. In this paper, we propose a novel motion feature augmented network (MFA-Net) for dynamic hand gesture recognition from skelet al data. MFA-Net exploits motion features of finger and global movements to augment features of deep network for gesture recognition. To describe finger articulated movements, finger motion features are extracted from the hand skeleton sequence via a variational autoencoder. Global motion features are utilized to represent the global movements of hand skeleton. These motion features along with the skeleton sequence are then fed into three branches of a recurrent neural network (RNN), which augment the motion features for RNN and improve the classification performance. The proposed MFA-Net is evaluated on two challenging skeleton-based dynamic hand gesture datasets, including DHG-14 28 dataset and SHREC’17 dataset. Experimental results demonstrate that our proposed method achieves comparable performance on DHG-14 28 dataset and better performance on SHREC’17 dataset when compared with start-of-the-art methods.",
"In this paper we present a simple 3D gesture recognizer based on trajectory matching, showing its good performances in classification and retrieval of command gestures based on single hand trajectories. We demonstrate that further simplifications in porting the classic \"1 dollar\" algorithm approach from the 2D to the 3D gesture recognition and retrieval problems can result in very high classification accuracy and retrieval scores even on datasets with a large number of different gestures executed by different users. Furthermore, recognition can be good even with heavily subsampled path traces and with incomplete gestures.",
"Sequence-based view invariant transform can effectively cope with view variations.Enhanced skeleton visualization method encodes spatio-temporal skeletons as visual and motion enhanced color images in a compact yet distinctive manner.Multi-stream convolutional neural networks fusion model is able to explore complementary properties among different types of enhanced color images.Our method consistently achieves the highest accuracies on four datasets, including the largest and most challenging NTU RGB+D dataset for skeleton-based action recognition. Human action recognition based on skeletons has wide applications in humancomputer interaction and intelligent surveillance. However, view variations and noisy data bring challenges to this task. Whats more, it remains a problem to effectively represent spatio-temporal skeleton sequences. To solve these problems in one goal, this work presents an enhanced skeleton visualization method for view invariant human action recognition. Our method consists of three stages. First, a sequence-based view invariant transform is developed to eliminate the effect of view variations on spatio-temporal locations of skeleton joints. Second, the transformed skeletons are visualized as a series of color images, which implicitly encode the spatio-temporal information of skeleton joints. Furthermore, visual and motion enhancement methods are applied on color images to enhance their local patterns. Third, a convolutional neural networks-based model is adopted to extract robust and discriminative features from color images. The final action class scores are generated by decision level fusion of deep features. Extensive experiments on four challenging datasets consistently demonstrate the superiority of our method.",
"In this paper, a new skeleton-based approach is proposed for 3D hand gesture recognition. Specifically, we exploit the geometric shape of the hand to extract an effective descriptor from hand skeleton connected joints returned by the Intel RealSense depth camera. Each descriptor is then encoded by a Fisher Vector representation obtained using a Gaussian Mixture Model. A multi-level representation of Fisher Vectors and other skeleton-based geometric features is guaranteed by a temporal pyramid to obtain the final feature vector, used later to achieve the classification by a linear SVM classifier. The proposed approach is evaluated on a challenging hand gesture dataset containing 14 gestures, performed by 20 participants performing the same gesture with two different numbers of fingers. Experimental results show that our skeleton-based approach consistently achieves superior performance over a depth-based approach.",
"Estimating 3D pose similarity is a fundamental problem on 3D motion data. Most previous work calculates L2-like distance of joint orientations or coordinates, which does not sufficiently reflect the pose similarity of human perception. In this paper, we present a new pose distance metric. First, we propose a new rich pose feature set called Geometric Pose Descriptor (GPD). GPD is more effective in encoding pose similarity by utilizing features on geometric relations among body parts, as well as temporal information such as velocities and accelerations. Based on GPD, we propose a semisupervised distance metric learning algorithm called Regularized Distance Metric Learning with Sparse Representation (RDSR), which integrates information from both unsupervised data relationship and labels. We apply the proposed pose distance metric to applications of motion transition decision and content-based pose retrieval. Quantitative evaluations demonstrate that our method achieves better results with only a small amount of human labels, showing that the proposed pose distance metric is a promising building block for various 3D-motion related applications.",
"Recent skeleton-based action recognition approaches achieve great improvement by using recurrent neural network (RNN) models. Currently, these approaches build an end-to-end network from coordinates of joints to class categories and improve accuracy by extending RNN to spatial domains. First, while such well-designed models and optimization strategies explore relations between different parts directly from joint coordinates, we provide a simple universal spatial modeling method perpendicular to the RNN model enhancement. Specifically, according to the evolution of previous work, we select a set of simple geometric features, and then seperately feed each type of features to a three-layer LSTM framework. Second, we propose a multistream LSTM architecture with a new smoothed score fusion technique to learn classification from different geometric feature streams. Furthermore, we observe that the geometric relational features based on distances between joints and selected lines outperform other features and the fusion results achieve the state-of-the-art performance on four datasets. We also show the sparsity of input gate weights in the first LSTM layer trained by geometric features and demonstrate that utilizing joint-line distances as input require less data for training.",
""
]
} |
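The two feature types discussed in the record above are easy to state precisely: pairwise joint distances are invariant to the skeleton's location and viewpoint (any rigid motion), while temporal differences of the raw joint coordinates capture global motion, and taking them at two strides gives two motion scales. The sketch below illustrates this under assumed array shapes (frames T, joints J, 3D coordinates); it mirrors the text's description rather than DD-Net's exact implementation.

```python
import numpy as np

def jcd(skel):
    """Per-frame pairwise joint distances (upper triangle): shape (T, J*(J-1)/2).

    Distances are unchanged by translating or rotating the whole skeleton,
    which is what makes this branch location-viewpoint invariant.
    """
    diff = skel[:, :, None, :] - skel[:, None, :, :]    # (T, J, J, 3)
    dist = np.linalg.norm(diff, axis=-1)                # (T, J, J)
    iu = np.triu_indices(skel.shape[1], k=1)
    return dist[:, iu[0], iu[1]]

def motion(skel, stride):
    """Global motion as temporal differences of joint coordinates."""
    return (skel[stride:] - skel[:-stride]).reshape(skel.shape[0] - stride, -1)

T, J = 32, 15
skel = np.random.default_rng(2).normal(size=(T, J, 3))

f_inv = jcd(skel)                                  # invariant branch
f_slow, f_fast = motion(skel, 1), motion(skel, 2)  # two motion scales
print(f_inv.shape, f_slow.shape, f_fast.shape)     # (32, 105) (31, 45) (30, 45)
```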
1907.09658 | 2962765560 | Although skeleton-based action recognition has achieved great success in recent years, most of the existing methods may suffer from a large model size and slow execution speed. To alleviate this issue, we analyze skeleton sequence properties to propose a Double-feature Double-motion Network (DD-Net) for skeleton-based action recognition. By using a lightweight network structure (i.e., 0.15 million parameters), DD-Net can reach a super fast speed of 3,500 FPS on one GPU, or 2,000 FPS on one CPU. By employing robust features, DD-Net achieves state-of-the-art performance on our experimental datasets: SHREC (i.e., hand actions) and JHMDB (i.e., body actions). Our code will be released together with this paper. | Although Recurrent Neural Networks (RNNs) are commonly used in skeleton-based action recognition @cite_26 @cite_24 @cite_20 @cite_23 @cite_21 @cite_45 , we argue that they are relatively slow and difficult to parallelize compared with methods @cite_2 @cite_41 @cite_35 that use Convolutional Neural Networks (CNNs). Since we take model speed as one of our priorities, we utilize 1D CNNs to construct the backbone network of DD-Net (a minimal 1D-CNN sketch follows this record). | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_41",
"@cite_21",
"@cite_24",
"@cite_45",
"@cite_23",
"@cite_2",
"@cite_20"
],
"mid": [
"2910217813",
"1950788856",
"2777985171",
"",
"2510185399",
"",
"",
"2761860076",
"2606294640"
],
"abstract": [
"Dynamic hand gesture recognition has attracted increasing attention because of its importance for human–computer interaction. In this paper, we propose a novel motion feature augmented network (MFA-Net) for dynamic hand gesture recognition from skelet al data. MFA-Net exploits motion features of finger and global movements to augment features of deep network for gesture recognition. To describe finger articulated movements, finger motion features are extracted from the hand skeleton sequence via a variational autoencoder. Global motion features are utilized to represent the global movements of hand skeleton. These motion features along with the skeleton sequence are then fed into three branches of a recurrent neural network (RNN), which augment the motion features for RNN and improve the classification performance. The proposed MFA-Net is evaluated on two challenging skeleton-based dynamic hand gesture datasets, including DHG-14 28 dataset and SHREC’17 dataset. Experimental results demonstrate that our proposed method achieves comparable performance on DHG-14 28 dataset and better performance on SHREC’17 dataset when compared with start-of-the-art methods.",
"Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.",
"Abstract In this work, we address human activity and hand gesture recognition problems using 3D data sequences obtained from full-body and hand skeletons, respectively. To this aim, we propose a deep learning-based approach for temporal 3D pose recognition problems based on a combination of a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) recurrent network. We also present a two-stage training strategy which firstly focuses on CNN training and, secondly, adjusts the full method (CNN+LSTM). Experimental testing demonstrated that our training method obtains better results than a single-stage training strategy. Additionally, we propose a data augmentation method that has also been validated experimentally. Finally, we perform an extensive experimental study on publicly available data benchmarks. The results obtained show how the proposed approach reaches state-of-the-art performance when compared to the methods identified in the literature. The best results were obtained for small datasets, where the proposed data augmentation strategy has greater impact.",
"",
"3D action recognition – analysis of human actions based on 3D skeleton data – becomes popular recently due to its succinctness, robustness, and view-invariant representation. Recent attempts on this problem suggested to develop RNN-based learning methods to model the contextual dependency in the temporal domain. In this paper, we extend this idea to spatio-temporal domains to analyze the hidden sources of action-related information within the input data over both domains concurrently. Inspired by the graphical structure of the human skeleton, we further propose a more powerful tree-structure based traversal method. To handle the noise and occlusion in 3D skeleton data, we introduce new gating mechanism within LSTM to learn the reliability of the sequential input data and accordingly adjust its effect on updating the long-term context information stored in the memory cell. Our method achieves state-of-the-art performance on 4 challenging benchmark datasets for 3D human action analysis.",
"",
"",
"Motivated by the promising performance achieved by deep learning, an effective yet simple method is proposed to encode the spatio-temporal information of skeleton sequences into color texture images, referred to as joint distance maps (JDMs), and convolutional neural networks are employed to exploit the discriminative features from the JDMs for human action and interaction recognition. The pair-wise distances between joints over a sequence of single or multiple person skeletons are encoded into color variations to capture temporal information. The efficacy of the proposed method has been verified by the state-of-the-art results on the large RGB+D Dataset and small UTD-MHAD Dataset in both single-view and cross-view settings.",
"Recently, skeleton based action recognition gains more popularity due to cost-effective depth sensors coupled with real-time skeleton estimation algorithms. Traditional approaches based on handcrafted features are limited to represent the complexity of motion patterns. Recent methods that use Recurrent Neural Networks (RNN) to handle raw skeletons only focus on the contextual dependency in the temporal domain and neglect the spatial configurations of articulated skeletons. In this paper, we propose a novel two-stream RNN architecture to model both temporal dynamics and spatial configurations for skeleton based action recognition. We explore two different structures for the temporal stream: stacked RNN and hierarchical RNN. Hierarchical RNN is designed according to human body kinematics. We also propose two effective methods to model the spatial structure by converting the spatial graph into a sequence of joints. To improve generalization of our model, we further exploit 3D transformation based data augmentation techniques including rotation and scaling transformation to transform the 3D coordinates of skeletons during training. Experiments on 3D action recognition benchmark datasets show that our method brings a considerable improvement for a variety of actions, i.e., generic actions, interaction activities and gestures."
]
} |
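A minimal 1D-CNN backbone over per-frame feature vectors shows why the record above prefers CNNs for speed: every layer convolves over the whole sequence at once, with no step-by-step recurrence to serialize computation. This sketch requires PyTorch, and the layer widths, kernel sizes, and class count are arbitrary assumptions, not DD-Net's actual configuration.

```python
import torch
import torch.nn as nn

class TinyTemporalCNN(nn.Module):
    """Toy 1D-CNN classifier over a sequence of per-frame feature vectors."""

    def __init__(self, in_dim=105, n_classes=14):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_dim, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global temporal pooling
        )
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, T, in_dim)
        # Conv1d expects (batch, channels, time), so swap the last two axes.
        h = self.net(x.transpose(1, 2)).squeeze(-1)   # (batch, 64)
        return self.fc(h)

logits = TinyTemporalCNN()(torch.randn(8, 32, 105))
print(logits.shape)                    # torch.Size([8, 14])
```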
1907.09682 | 2964161024 | Knowledge distillation is a widely applicable technique for training a student neural network under the guidance of a trained teacher network. For example, in neural network compression, a high-capacity teacher is distilled to train a compact student; in privileged learning, a teacher trained with privileged data is distilled to train a student without access to that data. The distillation loss determines how a teacher's knowledge is captured and transferred to the student. In this paper, we propose a new form of knowledge distillation loss that is inspired by the observation that semantically similar inputs tend to elicit similar activation patterns in a trained network. Similarity-preserving knowledge distillation guides the training of a student network such that input pairs that produce similar (dissimilar) activations in the teacher network produce similar (dissimilar) activations in the student network. In contrast to previous distillation methods, the student is not required to mimic the representation space of the teacher, but rather to preserve the pairwise similarities in its own representation space. Experiments on three public datasets demonstrate the potential of our approach. | We presented in this paper a novel distillation loss for capturing and transferring knowledge from a teacher network to a student network (a minimal sketch of this similarity-preserving loss follows this record). Several prior alternatives @cite_44 @cite_36 @cite_45 @cite_14 are described in the introduction and some key differences are highlighted in Section . In addition to the knowledge capture (or loss definition) aspect of distillation studied in this paper, another important open question is the architectural design of students and teachers. In most studies of knowledge distillation, including ours, the student network is a thinner and/or shallower version of the teacher network. Inspired by efficient architectures such as MobileNet and ShuffleNet, @cite_16 proposed to replace regular convolutions in the teacher network with cheaper grouped and pointwise convolutions in the student. @cite_29 developed a reinforcement learning approach to learn the student architecture. @cite_18 demonstrated how a quantized student network can be trained using a full-precision teacher network. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_36",
"@cite_29",
"@cite_44",
"@cite_45",
"@cite_16"
],
"mid": [
"2964203871",
"2561238782",
"2964118293",
"2963387524",
"1821462560",
"2739879705",
"2962935923"
],
"abstract": [
"Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classification to translation or reinforcement learning. One aspect of the field receiving considerable attention is efficiently executing deep models in resource-constrained environments, such as mobile or embedded devices. This paper focuses on this problem, and proposes two new compression methods, which jointly leverage weight quantization and distillation of larger teacher networks into smaller student networks. The first method we propose is called quantized distillation and leverages distillation during the training process, by incorporating distillation loss, expressed with respect to the teacher, into the training of a student network whose weights are quantized to a limited set of levels. The second method, differentiable quantization, optimizes the location of quantization points through stochastic gradient descent, to better fit the behavior of the teacher model. We validate both methods through experiments on convolutional and recurrent architectures. We show that quantized shallow students can reach similar accuracy levels to full-precision teacher models, while providing order of magnitude compression, and inference speedup that is linear in the depth reduction. In sum, our results enable DNNs for resource-constrained environments to leverage architecture and accuracy advances developed on more powerful devices.",
"Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcing it to mimic the attention maps of a powerful teacher network. To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures.",
"Abstract: While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.",
"While bigger and deeper neural network architectures continue to advance the state-of-the-art for many computer vision tasks, real-world adoption of these networks is impeded by hardware and speed constraints. Conventional model compression methods attempt to address this problem by modifying the architecture manually or using pre-defined heuristics. Since the space of all reduced architectures is very large, modifying the architecture of a deep neural network in this way is a difficult task. In this paper, we tackle this issue by introducing a principled method for learning reduced network architectures in a data-driven way using reinforcement learning. Our approach takes a larger 'teacher' network as input and outputs a compressed 'student' network derived from the 'teacher' network. In the first stage of our method, a recurrent policy network aggressively removes layers from the large 'teacher' model. In the second stage, another recurrent policy network carefully reduces the size of each remaining layer. The resulting network is then evaluated to obtain a reward -- a score based on the accuracy and compression of the network. Our approach uses this reward signal with policy gradients to train the policies to find a locally optimal student network. Our experiments show that we can achieve compression rates of more than 10x for models such as ResNet-34 while maintaining similar performance to the input 'teacher' network. We also present a valuable transfer learning result which shows that policies which are pre-trained on smaller 'teacher' networks can be used to rapidly speed up training on larger 'teacher' networks.",
"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.",
"We introduce a novel technique for knowledge transfer, where knowledge from a pretrained deep neural network (DNN) is distilled and transferred to another DNN. As the DNN performs a mapping from the input space to the output space through many layers sequentially, we define the distilled knowledge to be transferred in terms of flow between layers, which is calculated by computing the inner product between features from two layers. When we compare the student DNN and the original network with the same size as the student DNN but trained without a teacher network, the proposed method of transferring the distilled knowledge as the flow between two layers exhibits three important phenomena: (1) the student DNN that learns the distilled knowledge is optimized much faster than the original model, (2) the student DNN outperforms the original DNN, and (3) the student DNN can learn the distilled knowledge from a teacher DNN that is trained at a different task, and the student DNN outperforms the original DNN that is trained from scratch.",
"Many engineers wish to deploy modern neural networks in memory-limited settings; but the development of flexible methods for reducing memory use is in its infancy, and there is little knowledge of the resulting cost-benefit. We propose structural model distillation for memory reduction using a strategy that produces a student architecture that is a simple transformation of the teacher architecture: no redesign is needed, and the same hyperparameters can be used. Using attention transfer, we provide Pareto curves tables for distillation of residual networks with four benchmark datasets, indicating the memory versus accuracy payoff. We show that substantial memory savings are possible with very little loss of accuracy, and confirm that distillation provides student network performance that is better than training that student architecture directly on data."
]
} |
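The similarity-preserving loss described in the abstract of the record above can be sketched directly from that description: build batch-wise activation similarity (Gram) matrices for teacher and student, row-normalize them, and penalize their squared difference. Because the matrices are batch-by-batch, the student need not match the teacher's feature dimensionality. This is a sketch written from the abstract's wording; the normalization details in the actual paper may differ.

```python
import torch
import torch.nn.functional as F

def sp_loss(feat_t, feat_s):
    """Similarity-preserving loss between teacher and student activations.

    feat_t, feat_s: (b, ...) activation maps; channel/spatial sizes may differ
    between the two networks since only batch-wise similarities are compared.
    """
    b = feat_t.size(0)
    g_t = feat_t.reshape(b, -1) @ feat_t.reshape(b, -1).t()   # (b, b) Gram
    g_s = feat_s.reshape(b, -1) @ feat_s.reshape(b, -1).t()
    g_t = F.normalize(g_t, p=2, dim=1)    # row-wise L2 normalization
    g_s = F.normalize(g_s, p=2, dim=1)
    return ((g_t - g_s) ** 2).sum() / b ** 2

# Teacher has 256 channels, student only 64; the loss is still well defined.
loss = sp_loss(torch.randn(16, 256, 8, 8), torch.randn(16, 64, 8, 8))
print(loss.item())
```

In training, this term would typically be added to the usual task loss (e.g., cross-entropy) with a weighting coefficient, with gradients flowing only into the student.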
1907.09682 | 2964161024 | Knowledge distillation is a widely applicable technique for training a student neural network under the guidance of a trained teacher network. For example, in neural network compression, a high-capacity teacher is distilled to train a compact student; in privileged learning, a teacher trained with privileged data is distilled to train a student without access to that data. The distillation loss determines how a teacher's knowledge is captured and transferred to the student. In this paper, we propose a new form of knowledge distillation loss that is inspired by the observation that semantically similar inputs tend to elicit similar activation patterns in a trained network. Similarity-preserving knowledge distillation guides the training of a student network such that input pairs that produce similar (dissimilar) activations in the teacher network produce similar (dissimilar) activations in the student network. In contrast to previous distillation methods, the student is not required to mimic the representation space of the teacher, but rather to preserve the pairwise similarities in its own representation space. Experiments on three public datasets demonstrate the potential of our approach. | State-of-the-art network compression methods can achieve significant reductions in network size, in some cases by an order of magnitude, but often require specialized software or hardware support. For example, unstructured pruning requires optimized sparse matrix multiplication routines to realize practical acceleration @cite_32 (a toy illustration of unstructured magnitude pruning follows this record), platform constraint-aware compression @cite_35 @cite_6 @cite_15 requires hardware simulators or empirical measurements, and arbitrary-bit quantization @cite_42 @cite_17 requires specialized hardware. One of the advantages of knowledge distillation is that it is easily implemented in any off-the-shelf deep learning framework without the need for extra software or hardware. Moreover, distillation can be integrated with other network compression techniques for further gains in performance @cite_18 . | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_42",
"@cite_32",
"@cite_6",
"@cite_15",
"@cite_17"
],
"mid": [
"2895260923",
"2964203871",
"2962768518",
"2963705792",
"",
"2962861284",
"2786098160"
],
"abstract": [
"Deep neural network compression has the potential to bring modern resource-hungry deep networks to resource-limited devices. However, in many of the most compelling deployment scenarios of compressed deep networks, the operational constraints matter: for example, a pedestrian detection network on a self-driving car may have to satisfy a latency constraint for safe operation. We propose the first principled treatment of deep network compression under operational constraints. We formulate the compression learning problem from the perspective of constrained Bayesian optimization, and introduce a cooling (annealing) strategy to guide the network compression towards the target constraints. Experiments on ImageNet demonstrate the value of modelling constraints directly in network compression.",
"Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classification to translation or reinforcement learning. One aspect of the field receiving considerable attention is efficiently executing deep models in resource-constrained environments, such as mobile or embedded devices. This paper focuses on this problem, and proposes two new compression methods, which jointly leverage weight quantization and distillation of larger teacher networks into smaller student networks. The first method we propose is called quantized distillation and leverages distillation during the training process, by incorporating distillation loss, expressed with respect to the teacher, into the training of a student network whose weights are quantized to a limited set of levels. The second method, differentiable quantization, optimizes the location of quantization points through stochastic gradient descent, to better fit the behavior of the teacher model. We validate both methods through experiments on convolutional and recurrent architectures. We show that quantized shallow students can reach similar accuracy levels to full-precision teacher models, while providing order of magnitude compression, and inference speedup that is linear in the depth reduction. In sum, our results enable DNNs for resource-constrained environments to leverage architecture and accuracy advances developed on more powerful devices.",
"Recent work has shown that performing inference with fast, very-low-bitwidth (e.g., 1 to 2 bits) representations of values in models can yield surprisingly accurate results. However, although 2-bit approximated networks have been shown to be quite accurate, 1 bit approximations, which are twice as fast, have restrictively low accuracy. We propose a method to train models whose weights are a mixture of bitwidths, that allows us to more finely tune the accuracy speed trade-off. We present the “middle-out” criterion for determining the bitwidth for each value, and show how to integrate it into training models with a desired mixture of bitwidths. We evaluate several architectures and binarization techniques on the ImageNet dataset. We show that our heterogeneous bitwidth approximation achieves superlinear scaling of accuracy with bitwidth. Using an average of only 1.4 bits, we are able to outperform state-of-the-art 2-bit architectures.",
"Phenomenally successful in practical inference problems, convolutional neural networks (CNN) are widely deployed in mobile devices, data centers, and even supercomputers. The number of parameters needed in CNNs, however, are often large and undesirable. Consequently, various methods have been developed to prune a CNN once it is trained. Nevertheless, the resulting CNNs offer limited benefits. While pruning the fully connected layers reduces a CNN's size considerably, it does not improve inference speed noticeably as the compute heavy parts lie in convolutions. Pruning CNNs in a way that increase inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels. @PARASPLIT We present a method to realize simultaneously size economy and speed improvement while pruning CNNs. Paramount to our success is an efficient general sparse-with-dense matrix multiplication implementation that is applicable to convolution of feature maps with kernels of arbitrary sparsity patterns. Complementing this, we developed a performance model that predicts sweet spots of sparsity levels for different layers and on different computer architectures. Together, these two allow us to demonstrate 3.1-7.3x convolution speedups over dense convolution in AlexNet, on Intel Atom, Xeon, and Xeon Phi processors, spanning the spectrum from mobile devices to supercomputers.",
"",
"This work proposes an algorithm, called NetAdapt, that automatically adapts a pre-trained deep neural network to a mobile platform given a resource budget. While many existing algorithms simplify networks based on the number of MACs or weights, optimizing those indirect metrics may not necessarily reduce the direct metrics, such as latency and energy consumption. To solve this problem, NetAdapt incorporates direct metrics into its adaptation algorithm. These direct metrics are evaluated using empirical measurements, so that detailed knowledge of the platform and toolchain is not required. NetAdapt automatically and progressively simplifies a pre-trained network until the resource budget is met while maximizing the accuracy. Experiment results show that NetAdapt achieves better accuracy versus latency trade-offs on both mobile CPU and mobile GPU, compared with the state-of-the-art automated network simplification algorithms. For image classification on the ImageNet dataset, NetAdapt achieves up to a 1.7 ( ) speedup in measured inference latency with equal or higher accuracy on MobileNets (V1&V2).",
"Despite the state-of-the-art accuracy of Deep Neural Networks (DNN) in various classification problems, their deployment onto resource constrained edge computing devices remains challenging, due to their large size and complexity. Several recent studies have reported remarkable results in reducing this complexity through quantization of DNN models. However, these studies usually do not consider the change in loss when performing quantization, nor do they take the disparate importance of DNN connections to the accuracy into account. We address these issues in this paper by proposing a new method, called adaptive quantization, which simplifies a trained DNN model by finding a unique, optimal precision for each connection weight such that the increase in loss is minimized. The optimization problem at the core of this method iteratively uses the loss function gradient to determine an error margin for each weight and assign it a precision accordingly. Since this problem uses linear functions, it is computationally cheap and, as we will show, has a closed-form approximate solution. Experiments on MNIST, CIFAR, and SVHN datasets showed that the proposed method can achieve near or better than state-of-the-art reduction in model size with similar error rate. Furthermore, it can achieve compressions close to floating-point model compression methods without loss of accuracy."
]
} |
1907.09702 | 2963448607 | Temporal action proposal generation is a challenging and promising task which aims to locate temporal regions in real-world videos where an action or event may occur. Current bottom-up proposal generation methods can generate proposals with precise boundaries, but cannot efficiently generate adequately reliable confidence scores for retrieving proposals. To address these difficulties, we introduce the Boundary-Matching (BM) mechanism to evaluate confidence scores of densely distributed proposals, which denotes a proposal as a matching pair of starting and ending boundaries and combines all densely distributed BM pairs into the BM confidence map. Based on the BM mechanism, we propose an effective, efficient and end-to-end proposal generation method, named Boundary-Matching Network (BMN), which generates proposals with precise temporal boundaries as well as reliable confidence scores simultaneously. The two branches of BMN are jointly trained in a unified framework. We conduct experiments on two challenging datasets: THUMOS-14 and ActivityNet-1.3, where BMN shows significant performance improvement with remarkable efficiency and generalizability. Further, combined with an existing action classifier, BMN can achieve state-of-the-art temporal action detection performance. | Action recognition is a fundamental and important task in the video understanding area. Hand-crafted features such as HOG, HOF and MBH were widely used in earlier works, such as improved Dense Trajectories (iDT) @cite_21 @cite_30 . Recently, deep learning models have achieved significant performance improvements in the action recognition task. The mainstream networks fall into two categories: two-stream networks @cite_19 @cite_34 @cite_22 exploit appearance and motion cues from RGB images and stacked optical flow separately; 3D networks @cite_14 @cite_6 exploit appearance and motion cues directly from the raw video volume. In our work, by convention, we adopt action recognition models to extract the visual feature sequence of an untrimmed video. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_22",
"@cite_21",
"@cite_6",
"@cite_19",
"@cite_34"
],
"mid": [
"2105101328",
"1522734439",
"787785461",
"2126574503",
"2963820951",
"2342662179",
"2156303437"
],
"abstract": [
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"Deep convolutional networks have achieved great success for object recognition in still images. However, for action recognition in videos, the improvement of deep convolutional networks is not so evident. We argue that there are two reasons that could probably explain this result. First the current network architectures (e.g. Two-stream ConvNets) are relatively shallow compared with those very deep models in image domain (e.g. VGGNet, GoogLeNet), and therefore their modeling capacity is constrained by their depth. Second, probably more importantly, the training dataset of action recognition is extremely small compared with the ImageNet dataset, and thus it will be easy to over-fit on the training dataset. To address these issues, this report presents very deep two-stream ConvNets for action recognition, by adapting recent very deep architectures into video domain. However, this extension is not easy as the size of action recognition is quite small. We design several good practices for the training of very deep two-stream ConvNets, namely (i) pre-training for both spatial and temporal nets, (ii) smaller learning rates, (iii) more data augmentation techniques, (iv) high drop out ratio. Meanwhile, we extend the Caffe toolbox into Multi-GPU implementation with high computational efficiency and low memory consumption. We verify the performance of very deep two-stream ConvNets on the dataset of UCF101 and it achieves the recognition accuracy of @math .",
"Feature trajectories have shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We, also, investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e. KTH, YouTube, Hollywood2 and UCF sports.",
"Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating 3 x 3 x 3 convolutions with 1 × 3 × 3 convolutional filters on spatial domain (equivalent to 2D CNN) plus 3 × 1 × 1 convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3 and 1.8 , respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques.",
"Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification."
]
} |
1907.09702 | 2963448607 | Temporal action proposal generation is a challenging and promising task which aims to locate temporal regions in real-world videos where an action or event may occur. Current bottom-up proposal generation methods can generate proposals with precise boundaries, but cannot efficiently generate adequately reliable confidence scores for retrieving proposals. To address these difficulties, we introduce the Boundary-Matching (BM) mechanism to evaluate confidence scores of densely distributed proposals, which denotes a proposal as a matching pair of starting and ending boundaries and combines all densely distributed BM pairs into the BM confidence map. Based on the BM mechanism, we propose an effective, efficient and end-to-end proposal generation method, named Boundary-Matching Network (BMN), which generates proposals with precise temporal boundaries as well as reliable confidence scores simultaneously. The two branches of BMN are jointly trained in a unified framework. We conduct experiments on two challenging datasets: THUMOS-14 and ActivityNet-1.3, where BMN shows significant performance improvement with remarkable efficiency and generalizability. Further, combined with an existing action classifier, BMN can achieve state-of-the-art temporal action detection performance. | Correlation matching algorithms are widely used in many computer vision tasks, such as image registration, action recognition and stereo matching. Specifically, stereo matching aims to find corresponding pixels between stereo images. For each pixel in the left image of a rectified image pair, the stereo matching method needs to find the corresponding pixel in the right image along the horizontal direction, i.e., to find the right-image pixel with minimum matching cost. Thus, the cost minimization over all left pixels can be denoted as a cost volume, which represents each left-right pixel pair as a point in the volume. Based on the cost volume, many recent works @cite_9 @cite_36 @cite_31 achieve end-to-end networks by generating the cost volume directly from the combination of two feature maps, using a correlation layer @cite_36 or feature concatenation @cite_3 . Inspired by the cost volume, our proposed BM confidence map contains pairs of temporal starting and ending boundaries as proposals, and thus can directly generate confidence scores for all proposals using convolutional layers. We propose the BM layer to efficiently generate the BM feature map via sampling features between the starting and ending boundaries of each proposal simultaneously. | {
"cite_N": [
"@cite_36",
"@cite_9",
"@cite_31",
"@cite_3"
],
"mid": [
"2259424905",
"2793302268",
"2773913336",
"2963619659"
],
"abstract": [
"Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.",
"Recently convolutional neural network (CNN) promotes the development of stereo matching greatly. Especially those end-to-end stereo methods achieve best performance. However less attention is paid on encoding context information, simplifying two-stage disparity learning pipeline and improving details in disparity maps. Differently we focus on these problems. Firstly, we propose an one-stage context pyramid based residual pyramid network (CP-RPN) for disparity estimation, in which a context pyramid is embedded to encode multi-scale context clues explicitly. Next, we design a CNN based multi-task learning network called EdgeStereo to recover missing details in disparity maps, utilizing mid-level features from edge detection task. In EdgeStereo, CP-RPN is integrated with a proposed edge detector HED @math based on two-fold multi-task interactions. The end-to-end EdgeStereo outputs the edge map and disparity map directly from a stereo pair without any post-processing or regularization. We discover that edge detection task and stereo matching task can help each other in our EdgeStereo framework. Comprehensive experiments on stereo benchmarks such as Scene Flow and KITTI 2015 show that our method achieves state-of-the-art performance.",
"Stereo matching algorithms usually consist of four steps, including matching cost calculation, matching cost aggregation, disparity calculation, and disparity refinement. Existing CNN-based methods only adopt CNN to solve parts of the four steps, or use different networks to deal with different steps, making them difficult to obtain the overall optimal solution. In this paper, we propose a network architecture to incorporate all steps of stereo matching. The network consists of three parts. The first part calculates the multi-scale shared features. The second part performs matching cost calculation, matching cost aggregation and disparity calculation to estimate the initial disparity using shared features. The initial disparity and the shared features are used to calculate the prior and posterior feature constancy. The initial disparity, the prior and posterior feature constancy are then fed to a sub-network to refine the initial disparity through a Bayesian inference process. The proposed method has been evaluated on the Scene Flow and KITTI datasets. It achieves the state-of-the-art performance on the KITTI 2012 and KITTI 2015 benchmarks while maintaining a very fast running time.",
"Recent work has shown that depth estimation from a stereo pair of images can be formulated as a supervised learning task to be resolved with convolutional neural networks (CNNs). However, current architectures rely on patch-based Siamese networks, lacking the means to exploit context information for finding correspondence in ill-posed regions. To tackle this problem, we propose PSMNet, a pyramid stereo matching network consisting of two main modules: spatial pyramid pooling and 3D CNN. The spatial pyramid pooling module takes advantage of the capacity of global context information by aggregating context in different scales and locations to form a cost volume. The 3D CNN learns to regularize cost volume using stacked multiple hourglass networks in conjunction with intermediate supervision. The proposed approach was evaluated on several benchmark datasets. Our method ranked first in the KITTI 2012 and 2015 leaderboards before March 18, 2018. The codes of PSMNet are available at: https: github.com JiaRenChang PSMNet."
]
} |
1907.09702 | 2963448607 | Temporal action proposal generation is a challenging and promising task which aims to locate temporal regions in real-world videos where an action or event may occur. Current bottom-up proposal generation methods can generate proposals with precise boundaries, but cannot efficiently generate adequately reliable confidence scores for retrieving proposals. To address these difficulties, we introduce the Boundary-Matching (BM) mechanism to evaluate confidence scores of densely distributed proposals, which denotes a proposal as a matching pair of starting and ending boundaries and combines all densely distributed BM pairs into the BM confidence map. Based on the BM mechanism, we propose an effective, efficient and end-to-end proposal generation method, named Boundary-Matching Network (BMN), which generates proposals with precise temporal boundaries as well as reliable confidence scores simultaneously. The two branches of BMN are jointly trained in a unified framework. We conduct experiments on two challenging datasets: THUMOS-14 and ActivityNet-1.3, where BMN shows significant performance improvement with remarkable efficiency and generalizability. Further, combined with an existing action classifier, BMN can achieve state-of-the-art temporal action detection performance. | As aforementioned, the goal of the temporal action detection task is to detect action instances in untrimmed videos with temporal boundaries and action categories, and it can be divided into temporal proposal generation and action classification stages. These two stages are handled separately in most detection methods @cite_2 @cite_24 @cite_28 , and are combined into a single model in some methods @cite_35 @cite_27 . For the proposal generation task, most previous works @cite_10 @cite_23 @cite_4 @cite_16 @cite_2 adopt a top-down fashion to generate proposals with pre-defined durations and intervals, where the main drawback is the lack of boundary precision and duration flexibility. There are also some methods @cite_28 @cite_26 that adopt a bottom-up fashion. TAG @cite_28 generates proposals using a temporal watershed algorithm, but lacks confidence scores for retrieving them. Recently, BSN @cite_26 generates proposals via locally locating temporal boundaries and globally evaluating confidence scores, and achieves significant performance improvement over previous proposal generation methods. In this work, we propose the Boundary-Matching mechanism for proposal confidence evaluation, which largely simplifies the pipeline of BSN and brings significant improvement in both efficiency and effectiveness. | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_4",
"@cite_28",
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_16",
"@cite_10"
],
"mid": [
"2766402183",
"2962677524",
"",
"2964216549",
"2470774766",
"2893390896",
"2471143248",
"2964214371",
"",
"2755876276"
],
"abstract": [
"Temporal action detection is a very important yet challenging problem, since videos in real applications are usually long, untrimmed and contain multiple action instances. This problem requires not only recognizing action categories but also detecting start time and end time of each action instance. Many state-of-the-art methods adopt the \"detection by classification\" framework: first do proposal, and then classify proposals. The main drawback of this framework is that the boundaries of action instance proposals have been fixed during the classification step. To address this issue, we propose a novel Single Shot Action Detector (SSAD) network based on 1D temporal convolutional layers to skip the proposal generation step via directly detecting action instances in untrimmed video. On pursuit of designing a particular SSAD network that can work effectively for temporal action detection, we empirically search for the best network architecture of SSAD due to lacking existing models that can be directly adopted. Moreover, we investigate into input feature types and fusion strategies to further improve detection accuracy. We conduct extensive experiments on two challenging datasets: THUMOS 2014 and MEXaction2. When setting Intersection-over-Union threshold to 0.5 during evaluation, SSAD significantly outperforms other state-of-the-art systems by increasing mAP from @math to @math on THUMOS 2014 and from 7.4 to @math on MEXaction2.",
"Temporal action proposal generation is an important yet challenging problem, since temporal proposals with rich action content are indispensable for analysing real-world videos with long duration and high proportion irrelevant content. This problem requires methods not only generating proposals with precise temporal boundaries, but also retrieving proposals to cover truth action instances with high recall and high overlap using relatively fewer proposals. To address these difficulties, we introduce an effective proposal generation method, named Boundary-Sensitive Network (BSN), which adopts “local to global” fashion. Locally, BSN first locates temporal boundaries with high probabilities, then directly combines these boundaries as proposals. Globally, with Boundary-Sensitive Proposal feature, BSN retrieves proposals by evaluating the confidence of whether a proposal contains an action within its region. We conduct experiments on two challenging datasets: ActivityNet-1.3 and THUMOS14, where BSN outperforms other state-of-the-art temporal action proposal generation methods with high recall and high temporal precision. Finally, further experiments demonstrate that by combining existing action classifiers, our method significantly improves the state-of-the-art temporal action detection performance.",
"",
"Detecting actions in untrimmed videos is an important yet challenging task. In this paper, we present the structured segment network (SSN), a novel framework which models the temporal structure of each action instance via a structured temporal pyramid. On top of the pyramid, we further introduce a decomposed discriminative model comprising two classifiers, respectively for classifying actions and determining completeness. This allows the framework to effectively distinguish positive proposals from background or incomplete ones, thus leading to both accurate recognition and localization. These components are integrated into a unified network that can be efficiently trained in an end-to-end fashion. Additionally, a simple yet effective temporal action proposal scheme, dubbed temporal actionness grouping (TAG) is devised to generate high quality action proposals. On two challenging benchmarks, THUMOS14 and ActivityNet, our method remarkably outperforms previous state-of-the-art methods, demonstrating superior accuracy and strong adaptivity in handling actions with various temporal structures.",
"Current state-of-the-art human activity recognition is focused on the classification of temporally trimmed videos in which only one action occurs per frame. We propose a simple, yet effective, method for the temporal detection of activities in temporally untrimmed videos with the help of untrimmed classification. Firstly, our model predicts the top k labels for each untrimmed video by analysing global video-level features. Secondly, frame-level binary classification is combined with dynamic programming to generate the temporally trimmed activity proposals. Finally, each proposal is assigned a label based on the global label, and scored with the score of the temporal activity proposal and the global score. Ultimately, we show that untrimmed video classification models can be used as stepping stone for temporal detection.",
"",
"In many large-scale video analysis scenarios, one is interested in localizing and recognizing human activities that occur in short temporal intervals within long untrimmed videos. Current approaches for activity detection still struggle to handle large-scale video collections and the task remains relatively unexplored. This is in part due to the computational complexity of current action recognition approaches and the lack of a method that proposes fewer intervals in the video, where activity processing can be focused. In this paper, we introduce a proposal method that aims to recover temporal segments containing actions in untrimmed videos. Building on techniques for learning sparse dictionaries, we introduce a learning framework to represent and retrieve activity proposals. We demonstrate the capabilities of our method in not only producing high quality proposals but also in its efficiency. Finally, we show the positive impact our method has on recognition performance when it is used for action detection, while running at 10FPS.",
"We address temporal action localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in temporal action localization via three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments in a long video that may contain actions, (2) a classification network learns one-vs-all action classification model to serve as initialization for the localization network, and (3) a localization network fine-tunes the learned classification network to localize each action instance. We propose a novel loss function for the localization network to explicitly consider temporal overlap and achieve high temporal localization accuracy. In the end, only the proposal network and the localization network are used during prediction. On two largescale benchmarks, our approach achieves significantly superior performances compared with other state-of-the-art systems: mAP increases from 1.7 to 7.4 on MEXaction2 and increases from 15.0 to 19.0 on THUMOS 2014.",
"",
"Our paper presents a new approach for temporal detection of human actions in long, untrimmed video sequences. We introduce Single-Stream Temporal Action Proposals (SST), a new effective and efficient deep architecture for the generation of temporal action proposals. Our network can run continuously in a single stream over very long input video sequences, without the need to divide input into short overlapping clips or temporal windows for batch processing. We demonstrate empirically that our model outperforms the state-of-the-art on the task of temporal action proposal generation, while achieving some of the fastest processing speeds in the literature. Finally, we demonstrate that using SST proposals in conjunction with existing action classifiers results in improved state-of-the-art temporal action detection performance."
]
} |
1907.09706 | 2970623248 | Currently, the visually impaired rely on either a sighted human, guide dog, or white cane to safely navigate. However, the training of guide dogs is extremely expensive, and canes cannot provide essential information regarding the color of traffic lights and the direction of crosswalks. In this paper, we propose a deep learning based solution that provides information regarding the traffic light mode and the position of the zebra crossing. Previous solutions that utilize machine learning only provide one piece of information and are mostly binary: only detecting red or green lights. The proposed convolutional neural network, LYTNet, is designed for comprehensiveness, accuracy, and computational efficiency. LYTNet delivers both of the two most important pieces of information for the visually impaired to cross the road. We provide five classes of pedestrian traffic lights rather than the commonly seen three or four, and a direction vector representing the midline of the zebra crossing that is converted from the 2D image plane to real-world positions. We created our own dataset of pedestrian traffic lights containing over 5000 photos taken at hundreds of intersections in Shanghai. The experiments carried out achieve a classification accuracy of 94%, an average angle error of 6.35°, and a frame rate of 20 frames per second when testing the network on an iPhone 7 with additional post-processing steps. | Some industrialized countries have developed acoustic pedestrian traffic lights that produce sound when the light is green, which serves as a signal for the visually impaired to know when to cross the street @cite_14 @cite_17 @cite_3 . However, in less economically developed countries, crossing streets is still a problem for the blind, and acoustic pedestrian traffic lights are not ubiquitous even in developed nations @cite_14 . | {
"cite_N": [
"@cite_14",
"@cite_3",
"@cite_17"
],
"mid": [
"2184993296",
"2771992117",
""
],
"abstract": [
"TL-recognizer detects traffic lights from a mobile device camera.Robust method for unsupervised image acquisition and segmentation.Robust solution: traffic lights are clearly visible in different light conditions.Solution is reliable: precision 1 and recall 0.8 in different light conditions.Solution is efficient: computation time 100źms on a Nexus 5. Independent mobility involves a number of challenges for people with visual impairment or blindness. In particular, in many countries the majority of traffic lights are still not equipped with acoustic signals. Recognizing traffic lights through the analysis of images acquired by a mobile device camera is a viable solution already experimented in scientific literature. However, there is a major issue: the recognition techniques should be robust under different illumination conditions.This contribution addresses the above problem with an effective solution: besides image processing and recognition, it proposes a robust setup for image capture that makes it possible to acquire clearly visible traffic light images regardless of daylight variability due to time and weather. The proposed recognition technique that adopts this approach is reliable (full precision and high recall), robust (works in different illumination conditions) and efficient (it can run several times a second on commercial smartphones). The experimental evaluation conducted with visual impaired subjects shows that the technique is also practical in supporting road crossing.",
"In defect of intelligent assistant approaches, the visually impaired feel hard to cross the roads in urban environments. Aiming to tackle the problem, a real-time Pedestrian Crossing Lights (PCL) detection algorithm for the visually impaired is proposed in this paper. Different from previous works which utilize analytic image processing to detect the PCL in ideal scenarios, the proposed algorithm detects PCL using machine learning scheme in the challenging scenarios, where PCL have arbitrary sizes and locations in acquired image and suffer from the shake and movement of camera. In order to achieve the robustness and efficiency in those scenarios, the detection algorithm is designed to include three procedures: candidate extraction, candidate recognition and temporal-spatial analysis. A public dataset of PCL, which includes manually labeled ground truth data, is established for tuning parameters, training samples and evaluating the performance. The algorithm is implemented on a portable PC with color camera. The experiments carried out in various practical scenarios prove that the precision and recall of detection are both close to 100 , meanwhile the frame rate is up to 21 frames per second (FPS).",
""
]
} |
1907.09706 | 2970623248 | Currently, the visually impaired rely on either a sighted human, guide dog, or white cane to safely navigate. However, the training of guide dogs is extremely expensive, and canes cannot provide essential information regarding the color of traffic lights and the direction of crosswalks. In this paper, we propose a deep learning based solution that provides information regarding the traffic light mode and the position of the zebra crossing. Previous solutions that utilize machine learning only provide one piece of information and are mostly binary: only detecting red or green lights. The proposed convolutional neural network, LYTNet, is designed for comprehensiveness, accuracy, and computational efficiency. LYTNet delivers both of the two most important pieces of information for the visually impaired to cross the road. We provide five classes of pedestrian traffic lights rather than the commonly seen three or four, and a direction vector representing the midline of the zebra crossing that is converted from the 2D image plane to real-world positions. We created our own dataset of pedestrian traffic lights containing over 5000 photos taken at hundreds of intersections in Shanghai. The experiments carried out achieve a classification accuracy of 94%, an average angle error of 6.35°, and a frame rate of 20 frames per second when testing the network on an iPhone 7 with additional post-processing steps. | The task of detecting traffic lights for autonomous driving has been explored by many and has developed over the years @cite_2 @cite_15 @cite_1 @cite_7 . @cite_8 created a model that is able to detect traffic lights as small as @math pixels with relatively high accuracy. Though most models for traffic lights have a high precision and recall rate of nearly 100%, @cite_13 were among the first to develop an algorithm to detect pedestrian traffic lights and the length of the zebra crossing. Others, such as @cite_14 and @cite_16 , developed analytic image processing algorithms, which undergo candidate extraction, candidate recognition, and candidate classification. @cite_3 proposed a more robust real-time pedestrian traffic light detection algorithm, which gets rid of the analytic image processing method and uses candidate extraction and a concise machine learning scheme. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_13"
],
"mid": [
"2184993296",
"2039510769",
"2737202447",
"1968974896",
"2771992117",
"2797363357",
"1981562962",
"2059050813",
"2128208213"
],
"abstract": [
"TL-recognizer detects traffic lights from a mobile device camera.Robust method for unsupervised image acquisition and segmentation.Robust solution: traffic lights are clearly visible in different light conditions.Solution is reliable: precision 1 and recall 0.8 in different light conditions.Solution is efficient: computation time 100źms on a Nexus 5. Independent mobility involves a number of challenges for people with visual impairment or blindness. In particular, in many countries the majority of traffic lights are still not equipped with acoustic signals. Recognizing traffic lights through the analysis of images acquired by a mobile device camera is a viable solution already experimented in scientific literature. However, there is a major issue: the recognition techniques should be robust under different illumination conditions.This contribution addresses the above problem with an effective solution: besides image processing and recognition, it proposes a robust setup for image capture that makes it possible to acquire clearly visible traffic light images regardless of daylight variability due to time and weather. The proposed recognition technique that adopts this approach is reliable (full precision and high recall), robust (works in different illumination conditions) and efficient (it can run several times a second on commercial smartphones). The experimental evaluation conducted with visual impaired subjects shows that the technique is also practical in supporting road crossing.",
"The traffic sign detection and recognition system is an essential module of the driver warning and assistance system. Smart vehicles are the order of the day. Such vehicles have the ability to warn drivers of pending situations, remind them of speed limits and even automatically take evasive action. Due to the visual nature of existing infrastructure, signs and line markings, image processing will play a large part in these systems. In this paper we present an automated Traffic sign recognition system allowing an invariance localization to changes in position, scale, rotation, weather conditions, partial occlusion, and the presence of other objects of the same color. The reliability demonstrated by the proposed method suggests that this system could be a part of an integrated driver warning and assistance system based on computer vision technology.",
"Reliable traffic light detection and classification is crucial for automated driving in urban environments. Currently, there are no systems that can reliably perceive traffic lights in real-time, without map-based information, and in sufficient distances needed for smooth urban driving. We propose a complete system consisting of a traffic light detector, tracker, and classifier based on deep learning, stereo vision, and vehicle odometry which perceives traffic lights in real-time. Within the scope of this work, we present three major contributions. The first is an accurately labeled traffic light dataset of 5000 images for training and a video sequence of 8334 frames for evaluation. The dataset is published as the Bosch Small Traffic Lights Dataset and uses our results as baseline. It is currently the largest publicly available labeled traffic light dataset and includes labels down to the size of only 1 pixel in width. The second contribution is a traffic light detector which runs at 10 frames per second on 1280×720 images. When selecting the confidence threshold that yields equal error rate, we are able to detect traffic lights as small as 4 pixels in width. The third contribution is a traffic light tracker which uses stereo vision and vehicle odometry to compute the motion estimate of traffic lights and a neural network to correct the aforementioned motion estimate.",
"The recognition and tracking of traffic lights for intelligent vehicles based on a vehicle-mounted camera are studied in this paper. The candidate region of the traffic light is extracted using the threshold segmentation method and the morphological operation. Then, the recognition algorithm of the traffic light based on machine learning is employed. To avoid false negatives and tracking loss, the target tracking algorithm CAMSHIFT (Continuously Adaptive Mean Shift), which uses the color histogram as the target model, is adopted. In addition to traffic signal pre-processing and the recognition method of learning, the initialization problem of the search window of CAMSHIFT algorithm is resolved. Moreover, the window setting method is used to shorten the processing time of the global HSV color space conversion. The real vehicle experiments validate the performance of the presented approach.",
"In defect of intelligent assistant approaches, the visually impaired feel hard to cross the roads in urban environments. Aiming to tackle the problem, a real-time Pedestrian Crossing Lights (PCL) detection algorithm for the visually impaired is proposed in this paper. Different from previous works which utilize analytic image processing to detect the PCL in ideal scenarios, the proposed algorithm detects PCL using machine learning scheme in the challenging scenarios, where PCL have arbitrary sizes and locations in acquired image and suffer from the shake and movement of camera. In order to achieve the robustness and efficiency in those scenarios, the detection algorithm is designed to include three procedures: candidate extraction, candidate recognition and temporal-spatial analysis. A public dataset of PCL, which includes manually labeled ground truth data, is established for tuning parameters, training samples and evaluating the performance. The algorithm is implemented on a portable PC with color camera. The experiments carried out in various practical scenarios prove that the precision and recall of detection are both close to 100 , meanwhile the frame rate is up to 21 frames per second (FPS).",
"Traffic lights detection and recognition research has grown every year. Time is coming when autonomous vehicle can navigate in urban roads and streets and intelligent systems aboard those cars would have to recognize traffic lights in real time. This article proposes a traffic light recognition (TLR) device prototype using a smartphone as camera and processing unit that can be used as a driver assistance. A TLR device has to be able to visualize the traffic scene from inside of a vehicle, generate stable images, and be protected from adverse conditions. To validate this layout prototype, a dataset was built and used to test an algorithm that uses an adaptive background suppression filter (AdaBSF) and Support Vector Machines (SVMs) to detect traffic lights. The application of AdaBSF and subsequent classification with SVM to the dataset achieved 100 precision rate and recall of 65 . Road testing shows that the TLR device prototype meets the requirements to be used as a driver assistance device.",
"The concern of the intelligent transportation system rises and many driver support systems have been developed. In this paper, a fast method of detecting a traffic light in a scene image is proposed. By converting the color space from RGB to normalized RGB, some regions are selected as candidates of a traffic light. Then a method based on the Hough transform is applied to detect an exact region. Experimental results using images including a traffic light verifies the effectiveness of the proposed method.",
"In this paper we introduce a real-time traffic light recognition system for intelligent vehicles. The method proposed is fully based on image processing. Detection step is achieved in grayscale with spot light detection, and recognition is done using our generic “adaptive templates”. The whole process was kept modular which make our TLR capable of recognizing different traffic lights from various countries.",
"This paper proposes a method for measurement of the length of a pedestrian crossing and for the detection of traffic lights from image data observed with a single camera. The length of a crossing is measured from image data of white lines painted on the road at a crossing by using projective geometry. Furthermore, the state of the traffic lights, green (go signal) or red (stop signal), is detected by extracting candidates for the traffic light region with colour similarity and selecting a true traffic light from them using affine moment invariants. From the experimental results, the length of a crossing is measured with an accuracy such that the maximum relative error of measured length is less than 5 and the rms error is 0.38 m. A traffic light is efficiently detected by selecting a true traffic light region with an affine moment invariant."
]
} |
1907.09706 | 2970623248 | Currently, the visually impaired rely on either a sighted human, guide dog, or white cane to safely navigate. However, the training of guide dogs is extremely expensive, and canes cannot provide essential information regarding the color of traffic lights and the direction of crosswalks. In this paper, we propose a deep learning based solution that provides information regarding the traffic light mode and the position of the zebra crossing. Previous solutions that utilize machine learning only provide one piece of information and are mostly binary: only detecting red or green lights. The proposed convolutional neural network, LYTNet, is designed for comprehensiveness, accuracy, and computational efficiency. LYTNet delivers both of the two most important pieces of information for the visually impaired to cross the road. We provide five classes of pedestrian traffic lights rather than the commonly seen three or four, and a direction vector representing the midline of the zebra crossing that is converted from the 2D image plane to real-world positions. We created our own dataset of pedestrian traffic lights containing over 5000 photos taken at hundreds of intersections in Shanghai. The experiments carried out achieve a classification accuracy of 94%, an average angle error of 6.35°, and a frame rate of 20 frames per second when testing the network on an iPhone 7 with additional post-processing steps. | A limitation that many attempts faced was the speed of hardware. Thus, @cite_5 created an algorithm specifically for mobile devices with an accelerator to detect pedestrian traffic lights in real time. @cite_4 incorporated external servers to remove the limitation of hardware and provide more accurate information. Though the external servers are able to run deeper models than phones, this requires a fast and stable internet connection at all times. Moreover, the advancement of efficient neural networks such as MobileNet v2 enables a deep-learning approach to be implemented on a mobile device @cite_11 . | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_11"
],
"mid": [
"1605938783",
"2141963603",
"2963163009"
],
"abstract": [
"The Transportation Equity Act for the 21st Century (TEA-21), the successor to ISTEA, directs that pedestrian safety considerations such as the installation of audible traffic signals be included in new transportation plans and projects. This article discusses the topic of accessible pedestrian signals (APSs), with sections devoted to: information requirements at intersections; accessible traffic signal technologies; determining when to install APSs; pedestrian detection technology; the matrix of APS functional characteristics; and APS product sources, among others.",
"Context-awareness is a critical aspect of safe navigation especially for the blind and visually impaired in unfamiliar environments. Existing mobile devices for context-aware navigation fall short in many cases due to their dependence on specific infrastructure requirements as well as having limited access to resources that could provide a wealth of contextual clues. In this paper, we propose a mobile-cloud collaborative approach for context-aware navigation by exploiting the computational power of resources made available by Cloud Computing providers as well as the wealth of location-specific resources available on the Internet. We propose an extensible system architecture that minimizes reliance on infrastructure, thus allowing for wide usability. We present a traffic light detector that we developed as an initial application component of the proposed system. We present preliminary results of experiments performed to test the appropriateness for the real-time nature of the application.",
"In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], VOC image segmentation [3]. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as actual latency, and the number of parameters."
]
} |
1907.09706 | 2970623248 | Currently, the visually impaired rely on either a sighted human, guide dog, or white cane to safely navigate. However, the training of guide dogs is extremely expensive, and canes cannot provide essential information regarding the color of traffic lights and the direction of crosswalks. In this paper, we propose a deep learning based solution that provides information regarding the traffic light mode and the position of the zebra crossing. Previous solutions that utilize machine learning only provide one piece of information and are mostly binary: only detecting red or green lights. The proposed convolutional neural network, LYTNet, is designed for comprehensiveness, accuracy, and computational efficiency. LYTNet delivers both of the two most important pieces of information for the visually impaired to cross the road. We provide five classes of pedestrian traffic lights rather than the commonly seen three or four, and a direction vector representing the midline of the zebra crossing that is converted from the 2D image plane to real-world positions. We created our own dataset of pedestrian traffic lights containing over 5000 photos taken at hundreds of intersections in Shanghai. The experiments carried out achieve a classification accuracy of 94%, an average angle error of 6.35°, and a frame rate of 20 frames per second when testing the network on an iPhone 7 with additional post-processing steps. | Direction is another factor to consider when helping the visually impaired cross the street. Though the visually impaired can have a good sense of the general direction to cross the road in familiar environments, relying on one's memory has its limitations @cite_6 . Therefore, solutions that provide specific directional information have also been devised. Other than detecting the color of pedestrian traffic lights, @cite_6 also created an algorithm for detecting zebra crossings. The system obtains information about how much of the zebra crossing is visible to help the visually impaired know whether or not they are generally facing in the correct direction, but it does not provide the specific location of the zebra crossing. Other works, including Banich @cite_9 @cite_10 @cite_0 , also use deep neural networks within computer vision to detect zebra crossings to help the visually impaired cross streets. However, no deep learning method is able to output both traffic light and zebra crossing information simultaneously. | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_10",
"@cite_6"
],
"mid": [
"2526513285",
"2100894161",
"37640100",
"201681701"
],
"abstract": [
"",
"In smart-cities, computer vision has the potential to dramatically improve the quality of life of people suffering of visual impairments. In this field, we have been working on a wearable mobility aid aimed at detecting in real-time obstacles in front of a visually impaired. Our approach relies on a custom RGBD camera, with FPGA on-board processing, worn as traditional eyeglasses and effective point-cloud processing implemented on a compact and lightweight embedded computer. This latter device also provides feedback to the user by means of an haptic interface as well as audio messages. In this paper we address crosswalk recognition that, as pointed out by several visually impaired users involved in the evaluation of our system, is a crucial requirement in the design of an effective mobility aid. Specifically, we propose a reliable methodology to detect and categorize crosswalks by leveraging on point-cloud processing and deep-learning techniques. The experimental results reported, on 10000+ frames, confirm that the proposed approach is invariant to head camera pose and extremely effective even when dealing with large occlusions typically found in urban environments.",
"1- University Hospital Ulm - Department of Internal Medicine IRobert Koch Str. 8, 89069 Ulm - Germany2- University of Ulm - Dept of Neural Information Processing89069 Ulm - GermanyAbstract. This paper introduces a visual zebra crossing detector basedon the Viola-Jones approach. The basic properties of this cascaded clas-sifier and the use of integral images are explained. Additional pre- andpostprocessing for this task are introduced and evaluated.",
"Urban intersections are the most dangerous parts of a blind or visually impaired person's travel. To address this problem, this paper describes the novel \"Crosswatch\" system, which uses computer vision to provide information about the location and orientation of crosswalks to a blind or visually impaired pedestrian holding a camera cell phone. A prototype of the system runs on an off-the-shelf Nokia camera phone in real time, which automatically takes a few images per second, uses the cell phone's built-in computer to analyze each image in a fraction of a second and sounds an audio tone when it detects a crosswalk. Tests with blind subjects demonstrate the feasibility of the system and its ability to provide useful crosswalk alignment information under real-world conditions."
]
} |
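The angle-error metric quoted in the record above can be made concrete with a short sketch. This is an illustrative assumption, not LYTNet's released code: it only assumes the network predicts a 2D direction vector for the crossing midline, which is compared against the ground-truth vector.

```python
import numpy as np

def angle_error_deg(pred, gt):
    """Angle in degrees between predicted and ground-truth 2D direction vectors."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    cos = pred @ gt / (np.linalg.norm(pred) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# A prediction tilted 5 degrees off a vertical midline scores ~5.0
print(angle_error_deg([np.sin(np.radians(5)), np.cos(np.radians(5))], [0.0, 1.0]))
```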
1907.09642 | 2963590785 | Image smoothing is a fundamental procedure in applications of both computer vision and graphics. The required smoothing properties can be different or even contradictory among different tasks. Nevertheless, the inherent smoothing nature of one smoothing operator is usually fixed and thus cannot meet the various requirements of different applications. In this paper, a non-convex non-smooth optimization framework is proposed to achieve diverse smoothing natures, where even contradictory smoothing behaviors can be achieved. To this end, we first introduce the truncated Huber penalty function, which has seldom been used in image smoothing. A robust framework is then proposed. When combined with the strong flexibility of the truncated Huber penalty function, our framework is capable of a range of applications and can outperform the state-of-the-art approaches in several tasks. In addition, an efficient numerical solution is provided and its convergence is theoretically guaranteed even though the optimization framework is non-convex and non-smooth. The effectiveness and superior performance of our approach are validated through comprehensive experimental results in a range of applications. | In terms of structure-preserving smoothing, Zhang et al. @cite_28 proposed to smooth structures of different scales with a rolling guidance filter (RGF). Cho et al. @cite_27 modified the original BLF with local patch-based analysis of texture features and obtained a bilateral texture filter (BTF) for image texture removal. Karacan et al. @cite_7 proposed to smooth image textures by making use of region covariances that capture local structure and textural information. Xu et al. @cite_8 adopted the relative total variation (RTV) as a prior to regularize the texture smoothing procedure. Chen et al. @cite_10 proved that the TV- @math model @cite_10 @cite_43 can smooth images in a scale-aware manner, and it is thus ideal for structure-preserving smoothing such as image texture removal @cite_1 @cite_33 . | {
"cite_N": [
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_1",
"@cite_43",
"@cite_27",
"@cite_10"
],
"mid": [
"",
"2057203222",
"2078718577",
"2231495490",
"2165154777",
"",
"2109075629",
"1998999283"
],
"abstract": [
"",
"Recent years have witnessed the emergence of new image smoothing techniques which have provided new insights and raised new questions about the nature of this well-studied problem. Specifically, these models separate a given image into its structure and texture layers by utilizing non-gradient based definitions for edges or special measures that distinguish edges from oscillations. In this study, we propose an alternative yet simple image smoothing approach which depends on covariance matrices of simple image features, aka the region covariances. The use of second order statistics as a patch descriptor allows us to implicitly capture local structure and texture information and makes our approach particularly effective for structure extraction from texture. Our experimental results have shown that the proposed approach leads to better image decompositions as compared to the state-of-the-art methods and preserves prominent edges and shading well. Moreover, we also demonstrate the applicability of our approach on some image editing and manipulation tasks such as image abstraction, texture and detail enhancement, image composition, inverse halftoning and seam carving.",
"It is ubiquitous that meaningful structures are formed by or appear over textured surfaces. Extracting them under the complication of texture patterns, which could be regular, near-regular, or irregular, is very challenging, but of great practical importance. We propose new inherent variation and relative total variation measures, which capture the essential difference of these two types of visual forms, and develop an efficient optimization system to extract main structures. The new variation measures are validated on millions of sample patches. Our approach finds a number of new applications to manipulate, render, and reuse the immense number of \"structure with texture\" images and drawings that were traditionally difficult to be edited properly.",
"Images contain many levels of important structures and edges. Compared to masses of research to make filters edge preserving, finding scale-aware local operations was seldom addressed in a practical way, albeit similarly vital in image processing and computer vision. We propose a new framework to filter images with the complete control of detail smoothing under a scale measure. It is based on a rolling guidance implemented in an iterative manner that converges quickly. Our method is simple in implementation, easy to understand, fully extensible to accommodate various data operations, and fast to produce results. Our implementation achieves realtime performance and produces artifact-free results in separating different scale structures. This filter also introduces several inspiring properties different from previous edge-preserving ones.",
"Surrogate maximization (or minimization) (SM) algorithms are a family of algorithm that can be regarded as a generalization of expectation-maximization (EM) algorithms. There are three major approaches to the construction of surrogate function, all relying on the convexity of some function. In this paper, we solve the boosting problem by proposing SM algorithms for the corresponding optimization problem. Specifically, for AdaBoost, we derive an SM algorithm that can be shown to be identical to the algorithm proposed by (2002) based on Bregman distance. More importantly, for LogitBoost (or logistic boosting), we use several methods to construct different surrogate functions which result in different SM algorithms. By combining multiple methods, we are able to derive an SM algorithm that is also the same as an algorithm derived by (2002). Our approach based on SM algorithms is much simpler and convergence results follow naturally.",
"",
"This paper presents a novel structure-preserving image decomposition operator called bilateral texture filter. As a simple modification of the original bilateral filter [Tomasi and Manduchi 1998], it performs local patch-based analysis of texture features and incorporates its results into the range filter kernel. The central idea to ensure proper texture structure separation is based on patch shift that captures the texture information from the most representative texture patch clear of prominent structure edges. Our method outperforms the original bilateral filter in removing texture while preserving main image structures, at the cost of some added computation. It inherits well-known advantages of the bilateral filter, such as simplicity, local nature, ease of implementation, scalability, and adaptability to other application scenarios.",
"The total variation--based image denoising model of Rudin, Osher, and Fatemi [Phys. D, 60, (1992), pp. 259--268] has been generalized and modified in many ways in the literature; one of these modifications is to use the L1 -norm as the fidelity term. We study the interesting consequences of this modification, especially from the point of view of geometric properties of its solutions. It turns out to have interesting new implications for data-driven scale selection and multiscale image decomposition."
]
} |
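Since the record above centers on the truncated Huber penalty, a small sketch may help. The exact parameterization in the paper may differ; this assumes one common truncated-Huber form with a knee a (quadratic-to-linear switch) and a truncation point b beyond which the cost is constant.

```python
import numpy as np

def truncated_huber(x, a=0.1, b=1.0):
    """One common truncated-Huber form: quadratic for |x| <= a,
    linear for a < |x| <= b, constant for |x| > b."""
    x = np.abs(np.asarray(x, float))
    quad = x ** 2 / (2.0 * a)   # smooths small variations
    lin = x - a / 2.0           # preserves edges, like an L1 penalty
    cap = b - a / 2.0           # truncation: large jumps cost no more
    return np.where(x <= a, quad, np.where(x <= b, lin, cap))

print(truncated_huber([0.05, 0.5, 5.0]))  # quadratic, linear, and capped regimes
```

The capped regime is what gives the penalty its flexibility: by tuning b, the same regularizer can either smooth across or stop at large discontinuities, matching the contradictory smoothing behaviors the abstract describes.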
1907.09478 | 2964056701 | Digital histology images are amenable to the application of convolutional neural networks (CNNs) for analysis due to the sheer size of pixel data present in them. CNNs are generally used for representation learning from small image patches (e.g. 224x224) extracted from digital histology images due to computational and memory constraints. However, this approach does not incorporate high-resolution contextual information in histology images. We propose a novel way to incorporate larger context by a context-aware neural network based on images with a dimension of 1,792x1,792 pixels. The proposed framework first encodes the local representation of a histology image into high dimensional features and then aggregates the features by considering their spatial organization to make a final prediction. The proposed method is evaluated for colorectal cancer grading and breast cancer classification. A comprehensive analysis of some variants of the proposed method is presented. Our method outperformed the traditional patch-based approaches, problem-specific methods, and existing context-based methods quantitatively by a margin of 3.61%. Code and dataset related information is available at this link: this https URL | In the literature, various approaches have been presented to incorporate the contextual information for the classification of histology images. Some researchers @cite_28 @cite_29 @cite_7 used image down-sampling, a common practice followed in natural image classification, to capture the context from larger histology images. However, this approach is not suitable for problems where cell information is as important as the context. Adaptive patch sampling @cite_18 and discriminative patch selection @cite_21 from histology images are other ways to integrate the sparse context. These methods are not capable of capturing small regions of interest at high resolution, e.g. tumor cells and their local contextual arrangement. Some methods @cite_6 @cite_23 @cite_9 @cite_17 leverage the multi-resolution nature of histology images and use multi-resolution classifiers to capture context. These multi-resolution approaches only consider a small part of an image at high resolution and the remaining part at low resolutions to make a prediction. Therefore, these approaches lack the contextual information of cellular architecture at high resolution in a histology image. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_28",
"@cite_29",
"@cite_21",
"@cite_9",
"@cite_6",
"@cite_23",
"@cite_17"
],
"mid": [
"2920507514",
"",
"2806140099",
"",
"2803416021",
"2593345132",
"2888716711",
"2612377680",
"2890705060"
],
"abstract": [
"",
"",
"Breast cancer is one of the most commonly occurring types of cancer and the treatment administered to a subject is dependent on the grade or type of the lesion. In this manuscript, we make use of an ensemble of convolutional neural networks (CNN) to classify histology images as Normal, In-situ, Benign or Invasive. The performance of CNN is dependent on the network architecture, number of training instances and also on the data normalization scheme. However, there exists neither a single architecture nor a pre-processing regime that promises best performance. For the reason stated above, we use 3 CNNs trained on different pre-processing regimes to form an ensemble. On the held out test data (n = 40), the proposed scheme achieved an accuracy of 97.5 . On the challenge data (n = 100) provided by the organizers, the proposed technique achieved an accuracy of 87 and was jointly adjudged as the top performing algorithm for the task of classification of breast cancer from histology images.",
"",
"",
"Each year, the treatment decisions for more than 230,000 breast cancer patients in the U.S. hinge on whether the cancer has metastasized away from the breast. Metastasis detection is currently performed by pathologists reviewing large expanses of biological tissues. This process is labor intensive and error-prone. We present a framework to automatically detect and localize tumors as small as 100 x 100 pixels in gigapixel microscopy images sized 100,000 x 100,000 pixels. Our method leverages a convolutional neural network (CNN) architecture and obtains state-of-the-art results on the Camelyon16 dataset in the challenging lesion-level tumor detection task. At 8 false positives per image, we detect 92.4 of the tumors, relative to 82.7 by the previous best automated approach. For comparison, a human pathologist attempting exhaustive search achieved 73.2 sensitivity. We achieve image-level AUC scores above 97 on both the Camelyon16 test set and an independent set of 110 slides. In addition, we discover that two slides in the Camelyon16 training set were erroneously labeled normal. Our approach could considerably reduce false negative rates in metastasis detection.",
"Identifying lung adenocarcinoma growth patterns is critical for diagnosis and treatment of lung cancer patients. Growth patterns have variable texture, shape, size and location. They could appear individually or fused together in a way that makes it difficult to avoid inter intra variability in pathologists reports. Thus, employing a machine learning method to learn these patterns and automatically locate them within the tumour is indeed necessary. This will reduce the effort, assessment variability and provide a second opinion to support pathologies decision. To the best of our knowledge, no work has been done to classify growth patterns in lung adenocarcinoma. In this paper, we propose applying deep learning framework to perform lung adenocarcinoma pattern classification. We investigate what contextual information is adequate for training using patches extracted at several resolutions. We find that both cellular and architectural morphology features are required to achieve the best performance. Therefore, we propose using multi-resolution deep CNN for growth pattern classification in lung adenocarcinoma. Our preliminarily results show an increase in the overall classification accuracy.",
"Abstract Accurate subtyping of ovarian carcinomas is an increasingly critical and often challenging diagnostic process. This work focuses on the development of an automatic classification model for ovarian carcinoma subtyping. Specifically, we present a novel clinically inspired contextual model for histopathology image subtyping of ovarian carcinomas. A whole slide image is modelled using a collection of tissue patches extracted at multiple magnifications. An efficient and effective feature learning strategy is used for feature representation of a tissue patch. The locations of salient, discriminative tissue regions are treated as latent variables allowing the model to explicitly ignore portions of the large tissue section that are unimportant for classification. These latent variables are considered in a structured formulation to model the contextual information represented from the multi-magnification analysis of tissues. A novel, structured latent support vector machine formulation is defined and used to combine information from multiple magnifications while simultaneously operating within the latent variable framework. The structural and contextual nature of our method addresses the challenges of intra-class variation and pathologists’ workload, which are prevalent in histopathology image classification. Extensive experiments on a dataset of 133 patients demonstrate the efficacy and accuracy of the proposed method against state-of-the-art approaches for histopathology image classification. We achieve an average multi-class classification accuracy of 90 , outperforming existing works while obtaining substantial agreement with six clinicians tested on the same dataset.",
"Breast cancer (BC) is the second most leading cause of cancer deaths in women and BC metastasis accounts for the majority of deaths. Early detection of breast cancer metastasis in sentinel lymph nodes is of high importance for prediction and management of breast cancer progression. In this paper, we propose a novel deep learning framework for automatic detection of micro- and macro- metastasis in multi-gigapixel whole-slide images (WSIs) of sentinel lymph nodes. One of our main contributions is to incorporate a Bayesian solution for the optimization of network’s hyperparameters on one of the largest histology dataset, which leads to 5 gain in overall patch-based accuracy. Furthermore, we present an ensemble of two multi-resolution deep learning networks, one captures the cell level information and the other incorporates the contextual information to make the final prediction. Finally, we propose a two-step thresholding method to post-process the output of ensemble network. We evaluate our proposed method on the CAMELYON16 dataset, where we outperformed “human experts” and achieved the second best performance compared to 32 other competing methods."
]
} |
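A minimal sketch of the encode-then-aggregate idea from the record above, assuming the stated geometry (a 1792x1792 input tiles exactly into an 8x8 grid of 224x224 patches). The encoder, feature width, and aggregator layers here are illustrative placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ContextAware(nn.Module):
    """Sketch: shared patch encoder followed by a spatial feature aggregator."""
    def __init__(self, encoder, feat_dim=64, n_classes=3, grid=8):
        super().__init__()
        self.encoder, self.grid = encoder, grid  # encoder: (N,3,224,224) -> (N,feat_dim)
        self.agg = nn.Sequential(
            nn.Conv2d(feat_dim, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_classes))

    def forward(self, x):                                      # x: (B, 3, 1792, 1792)
        b, g = x.size(0), self.grid
        p = x.unfold(2, 224, 224).unfold(3, 224, 224)          # (B, 3, g, g, 224, 224)
        p = p.permute(0, 2, 3, 1, 4, 5).reshape(b * g * g, 3, 224, 224)
        f = self.encoder(p).reshape(b, g, g, -1).permute(0, 3, 1, 2)  # (B, feat, g, g)
        return self.agg(f)                                     # context-aware prediction

# Toy encoder just to show the shapes end to end
enc = nn.Sequential(nn.Conv2d(3, 64, 7, stride=4), nn.AdaptiveAvgPool2d(1), nn.Flatten())
print(ContextAware(enc)(torch.randn(1, 3, 1792, 1792)).shape)  # torch.Size([1, 3])
```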
1907.09478 | 2964056701 | Digital histology images are amenable to the application of convolutional neural networks (CNNs) for analysis due to the sheer size of pixel data present in them. CNNs are generally used for representation learning from small image patches (e.g. 224x224) extracted from digital histology images due to computational and memory constraints. However, this approach does not incorporate high-resolution contextual information in histology images. We propose a novel way to incorporate larger context by a context-aware neural network based on images with a dimension of 1,792x1,792 pixels. The proposed framework first encodes the local representation of a histology image into high dimensional features and then aggregates the features by considering their spatial organization to make a final prediction. The proposed method is evaluated for colorectal cancer grading and breast cancer classification. A comprehensive analysis of some variants of the proposed method is presented. Our method outperformed the traditional patch-based approaches, problem-specific methods, and existing context-based methods quantitatively by a margin of 3.61%. Code and dataset related information is available at this link: this https URL | Awan @cite_32 presented a method for two-tier CRC grading based on the extent of deviation of the gland from its normal shape (circular/elliptical). They proposed a novel Best Alignment Metric (BAM) for this purpose. As a pre-processing step, CNN-based gland segmentation was performed, followed by the calculation of BAM for each gland. For every image, the average BAM was used as a feature, along with two more features inspired by BAM values. In the end, an SVM classifier was trained using this feature set for CRC grading. Our proposed method differs from these existing methods in two ways. First, it does not depend on the intermediate step of gland segmentation, making it independent of segmentation inaccuracies. Second, the proposed method is entirely based on a deep neural network, which makes this framework independent of cancer type. Therefore, the proposed framework could be used for other context-based histology image analysis problems. In this regard, besides CRC grading, we have demonstrated the application of the proposed method for breast cancer classification. | {
"cite_N": [
"@cite_32"
],
"mid": [
"2769999077"
],
"abstract": [
"Determining the grade of colon cancer from tissue slides is a routine part of the pathological analysis. In the case of colorectal adenocarcinoma (CRA), grading is partly determined by morphology and degree of formation of glandular structures. Achieving consistency between pathologists is difficult due to the subjective nature of grading assessment. An objective grading using computer algorithms will be more consistent, and will be able to analyse images in more detail. In this paper, we measure the shape of glands with a novel metric that we call the Best Alignment Metric (BAM). We show a strong correlation between a novel measure of glandular shape and grade of the tumour. We used shape specific parameters to perform a two-class classification of images into normal or cancerous tissue and a three-class classification into normal, low grade cancer, and high grade cancer. The task of detecting gland boundaries, which is a prerequisite of shape-based analysis, was carried out using a deep convolutional neural network designed for segmentation of glandular structures. A support vector machine (SVM) classifier was trained using shape features derived from BAM. Through cross-validation, we achieved an accuracy of 97 for the two-class and 91 for three-class classification."
]
} |
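To make the final grading step above concrete: the BAM computation itself is the cited paper's contribution and is not reproduced here, but once per-gland BAM values exist, the classifier stage reduces to an SVM over a tiny feature vector. The two extra BAM-inspired statistics below are hypothetical stand-ins for the paper's features, and the data is synthetic.

```python
import numpy as np
from sklearn.svm import SVC

def grade_features(bam_scores):
    """Per-image features from per-gland BAM values: the mean (as described),
    plus two illustrative BAM-derived statistics (assumptions, not the originals)."""
    s = np.asarray(bam_scores, float)
    return [s.mean(), s.max(), float((s > s.mean()).mean())]

rng = np.random.default_rng(0)
per_image_bam = [rng.random(rng.integers(5, 30)) for _ in range(60)]  # toy gland scores
y = rng.integers(0, 3, size=60)                                       # toy grade labels
X = [grade_features(b) for b in per_image_bam]
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```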
1907.09695 | 2962736944 | The problem of a deep learning model losing performance on a previously learned task when fine-tuned to a new one is a phenomenon known as catastrophic forgetting. There are two major ways to mitigate this problem: either preserving activations of the initial network during training with a new task, or restricting the new network activations to remain close to the initial ones. The latter approach falls under the denomination of lifelong learning, where the model is updated in a way that it performs well on both old and new tasks, without having access to the old task's training samples anymore. Recently, approaches like pruning networks for freeing network capacity during sequential learning of tasks have been gaining in popularity. Such approaches allow learning small networks while making redundant parameters available for the next tasks. The common problem encountered with these approaches is that the pruning percentage is hard-coded, irrespective of the number of samples, of the complexity of the learning task and of the number of classes in the dataset. We propose a method based on Bayesian optimization to perform adaptive compression/pruning of the network and show its effectiveness in lifelong learning. Our method learns to perform heavy pruning for small and/or simple datasets while using milder compression rates for large and/or complex data. Experiments on classification and semantic segmentation demonstrate the applicability of learning network compression, where we are able to effectively preserve performances along sequences of tasks of varying complexity. | The most common way to learn a new task from a model trained on another is to fine-tune it @cite_15 @cite_11 . Fine-tuning generally works very well for the new task, but at the price of a drop in accuracy for the former, since the weights are modified and tuned for the new task. A first possible solution is to keep a copy of the original model trained on the original task, but this leads to heavy memory requirements as the number of tasks increases. Another solution would be to perform multi-task learning @cite_16 , but this strategy relies on labeled data for all tasks being available during training, which is typically not possible in sequential learning. | {
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_11"
],
"mid": [
"2102605133",
"",
"2155541015"
],
"abstract": [
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"",
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be repurposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms."
]
} |
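A minimal sketch of the naive fine-tuning the paragraph above warns about, assuming a torchvision >= 0.13 API. Every parameter stays trainable, which is exactly why old-task accuracy can collapse.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # network trained on the old task
model.fc = nn.Linear(model.fc.in_features, 10)     # fresh head for the new task

# Naive fine-tuning: all weights are free to move, so shared features drift
# toward the new task and performance on the old task degrades.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
```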
1907.09695 | 2962736944 | The problem of a deep learning model losing performance on a previously learned task when fine-tuned to a new one is a phenomenon known as catastrophic forgetting. There are two major ways to mitigate this problem: either preserving activations of the initial network during training with a new task, or restricting the new network activations to remain close to the initial ones. The latter approach falls under the denomination of lifelong learning, where the model is updated in a way that it performs well on both old and new tasks, without having access to the old task's training samples anymore. Recently, approaches like pruning networks for freeing network capacity during sequential learning of tasks have been gaining in popularity. Such approaches allow learning small networks while making redundant parameters available for the next tasks. The common problem encountered with these approaches is that the pruning percentage is hard-coded, irrespective of the number of samples, of the complexity of the learning task and of the number of classes in the dataset. We propose a method based on Bayesian optimization to perform adaptive compression/pruning of the network and show its effectiveness in lifelong learning. Our method learns to perform heavy pruning for small and/or simple datasets while using milder compression rates for large and/or complex data. Experiments on classification and semantic segmentation demonstrate the applicability of learning network compression, where we are able to effectively preserve performances along sequences of tasks of varying complexity. | The issue of accessing the data of previous tasks is mitigated to a large extent in the 'Learning without Forgetting' (LwF) framework @cite_5 @cite_24 . LwF combines fine-tuning and knowledge distillation @cite_2 , where a distillation loss @cite_2 tries to preserve the output of the former classifier on data from the new task. However, LwF uses several losses, whose number (and the balancing weights involved) scales linearly with the number of tasks. The authors in @cite_0 @cite_25 propose approaches where the distance between the parameters of the models trained on the old and new tasks is regulated via @math losses. As with LwF, the number of parameters increases with the number of tasks. In @cite_23 , the authors use autoencoders in addition to LwF. This approach has the overhead of a linearly increasing number of autoencoders and task-specific classifiers, several hyperparameters, and a distillation loss between the single-task and the multi-task model, making its training complex. | {
"cite_N": [
"@cite_24",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_25"
],
"mid": [
"",
"2560647685",
"2964244278",
"1821462560",
"2473930607",
"2605043629"
],
"abstract": [
"",
"Abstract The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.",
"This paper introduces a new lifelong learning solution where a single model is trained for a sequence of tasks. The main challenge that vision systems face in this context is catastrophic forgetting: as they tend to adapt to the most recently seen task, they lose performance on the tasks that were learned previously. Our method aims at preserving the knowledge of the previous tasks while learning a new one by using autoencoders. For each task, an under-complete autoencoder is learned, capturing the features that are crucial for its achievement. When a new task is presented to the system, we prevent the reconstructions of the features with these autoencoders from changing, which has the effect of preserving the information on which the previous tasks are mainly relying. At the same time, the features are given space to adjust to the most recent environment as only their projection into a low dimension submanifold is controlled. The proposed system is evaluated on image classification tasks and shows a reduction of forgetting over the state-ofthe-art.",
"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.",
"When building a unified vision system or gradually adding new apabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning with similar old and new task datasets for improved new task performance.",
"Catastrophic forgetting is a problem of neural networks that loses the information of the first task after training the second task. Here, we propose a method, i.e. incremental moment matching (IMM), to resolve this problem. IMM incrementally matches the moment of the posterior distribution of the neural network which is trained on the first and the second task, respectively. To make the search space of posterior parameter smooth, the IMM procedure is complemented by various transfer learning techniques including weight transfer, L2-norm of the old and the new parameter, and a variant of dropout with the old parameter. We analyze our approach on a variety of datasets including the MNIST, CIFAR-10, Caltech-UCSD-Birds, and Lifelog datasets. The experimental results show that IMM achieves state-of-the-art performance by balancing the information between an old and a new network."
]
} |
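The distillation term at the heart of LwF can be sketched in a few lines. This is a generic knowledge-distillation formulation under assumed conventions (temperature T, weight lam), not the exact loss of any one cited paper: it penalizes the current model's old-task outputs for drifting from the frozen old model's outputs on new-task data.

```python
import torch
import torch.nn.functional as F

def lwf_loss(new_task_logits, labels, old_task_logits, frozen_old_logits, T=2.0, lam=1.0):
    """Cross-entropy on the new task plus a distillation term that keeps the
    current model's old-task outputs close to the frozen old model's outputs."""
    ce = F.cross_entropy(new_task_logits, labels)
    p_old = F.softmax(frozen_old_logits / T, dim=1)        # frozen teacher targets
    log_p = F.log_softmax(old_task_logits / T, dim=1)      # current model, old head
    distill = F.kl_div(log_p, p_old, reduction="batchmean") * (T * T)
    return ce + lam * distill

loss = lwf_loss(torch.randn(8, 10), torch.randint(0, 10, (8,)),
                torch.randn(8, 5), torch.randn(8, 5))
```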
1907.09695 | 2962736944 | The problem of a deep learning model losing performance on a previously learned task when fine-tuned to a new one is a phenomenon known as catastrophic forgetting. There are two major ways to mitigate this problem: either preserving activations of the initial network during training with a new task, or restricting the new network activations to remain close to the initial ones. The latter approach falls under the denomination of lifelong learning, where the model is updated in a way that it performs well on both old and new tasks, without having access to the old task's training samples anymore. Recently, approaches like pruning networks for freeing network capacity during sequential learning of tasks have been gaining in popularity. Such approaches allow learning small networks while making redundant parameters available for the next tasks. The common problem encountered with these approaches is that the pruning percentage is hard-coded, irrespective of the number of samples, of the complexity of the learning task and of the number of classes in the dataset. We propose a method based on Bayesian optimization to perform adaptive compression/pruning of the network and show its effectiveness in lifelong learning. Our method learns to perform heavy pruning for small and/or simple datasets while using milder compression rates for large and/or complex data. Experiments on classification and semantic segmentation demonstrate the applicability of learning network compression, where we are able to effectively preserve performances along sequences of tasks of varying complexity. | An alternative direction to the above is the idea of removing redundant parameters by neural network compression @cite_18 . The authors report good results but use a fixed pruning percentage for all tasks, irrespective of the complexity of the data involved. Other works have applied masks to the networks' weights, either using attention @cite_27 or by learning binary weight masks end-to-end, in order to use only the weights useful for the new task @cite_6 . | {
"cite_N": [
"@cite_27",
"@cite_18",
"@cite_6"
],
"mid": [
"2963813679",
"2963072899",
"2791091755"
],
"abstract": [
"Comunicacio presentada a: 35th International Conference on Machine Learning, celebrat a Stockholmsmassan, Suecia, del 10 al 15 de juliol del 2018.",
"This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting. Inspired by network pruning techniques, we exploit redundancies in large deep networks to free up parameters that can then be employed to learn new tasks. By performing iterative pruning and network re-training, we are able to sequentially \"pack\" multiple tasks into a single network while ensuring minimal drop in performance and minimal storage overhead. Unlike prior work that uses proxy losses to maintain accuracy on older tasks, we always optimize for the task at hand. We perform extensive experiments on a variety of network architectures and large-scale datasets, and observe much better robustness against catastrophic forgetting than prior work. In particular, we are able to add three fine-grained classification tasks to a single ImageNet-trained VGG-16 network and achieve accuracies close to those of separately trained networks for each task.",
"This work presents a method for adapting a single, fixed deep neural network to multiple tasks without affecting performance on already learned tasks. By building upon ideas from network quantization and pruning, we learn binary masks that “piggyback” on an existing network, or are applied to unmodified weights of that network to provide good performance on a new task. These masks are learned in an end-to-end differentiable fashion, and incur a low overhead of 1 bit per network parameter, per task. Even though the underlying network is fixed, the ability to mask individual weights allows for the learning of a large number of filters. We show performance comparable to dedicated fine-tuned networks for a variety of classification tasks, including those with large domain shifts from the initial task (ImageNet), and a variety of network architectures. Our performance is agnostic to task ordering and we do not suffer from catastrophic forgetting or competition between tasks."
]
} |
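The hard-coded pruning the paragraph above criticizes is easy to state precisely. A minimal magnitude-pruning sketch follows, with a fixed prune_pct chosen by hand rather than adapted to the task, which is the limitation the record's paper targets.

```python
import torch

def magnitude_mask(weight, prune_pct=0.75):
    """Fixed-percentage magnitude pruning: zero out the smallest-|w| fraction."""
    k = int(weight.numel() * prune_pct)
    if k == 0:
        return torch.ones_like(weight)
    thresh = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > thresh).float()

w = torch.randn(256, 256)
w_pruned = w * magnitude_mask(w, 0.75)   # freed weights can host the next task
```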
1907.09595 | 2963037463 | Depthwise convolution is becoming increasingly popular in modern efficient ConvNets, but its kernel size is often overlooked. In this paper, we systematically study the impact of different kernel sizes, and observe that combining the benefits of multiple kernel sizes can lead to better accuracy and efficiency. Based on this observation, we propose a new mixed depthwise convolution (MixConv), which naturally mixes up multiple kernel sizes in a single convolution. As a simple drop-in replacement of vanilla depthwise convolution, our MixConv improves the accuracy and efficiency for existing MobileNets on both ImageNet classification and COCO object detection. To demonstrate the effectiveness of MixConv, we integrate it into the AutoML search space and develop a new family of models, named MixNets, which outperform previous mobile models including MobileNetV2 [20] (ImageNet top-1 accuracy +4.2%), ShuffleNetV2 [16] (+3.5%), MnasNet [26] (+1.3%), ProxylessNAS [2] (+2.2%), and FBNet [27] (+2.0%). In particular, our MixNet-L achieves a new state-of-the-art 78.9% ImageNet top-1 accuracy under typical mobile settings (<600M FLOPS). Code is at this https URL tensorflow tpu tree master models official mnasnet mixnet | In recent years, significant efforts have been spent on improving ConvNet efficiency, from more efficient convolutional operations @cite_30 @cite_5 @cite_32 and bottleneck layers @cite_15 @cite_0 , to more efficient architectures @cite_17 @cite_26 @cite_13 . In particular, depthwise convolution has become increasingly popular in all mobile-size ConvNets, such as MobileNets @cite_5 @cite_15 , ShuffleNets @cite_20 @cite_28 , MnasNet @cite_17 , and beyond @cite_21 @cite_6 @cite_4 . Recently, EfficientNet @cite_27 even achieves both state-of-the-art ImageNet accuracy and ten-fold better efficiency by extensively using depthwise and pointwise convolutions. Unlike regular convolution, depthwise convolution applies a separate convolutional kernel to each channel, thus reducing parameter size and computational cost. Our proposed MixConv generalizes the concept of depthwise convolution, and can be considered a drop-in replacement for vanilla depthwise convolution. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_4",
"@cite_28",
"@cite_21",
"@cite_32",
"@cite_6",
"@cite_0",
"@cite_27",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"2279098554",
"",
"1977295328",
"2951975363",
"2951583185",
"",
"2964081807",
"2953328958",
"2955425717",
"2612445135",
"2963163009",
"2964259004",
"",
"2963918968"
],
"abstract": [
"Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).",
"",
"We investigate the fine grained object categorization problem of determining the breed of animal from an image. To this end we introduce a new annotated dataset of pets covering 37 different breeds of cats and dogs. The visual problem is very challenging as these animals, particularly cats, are very deformable and there can be quite subtle differences between the breeds. We make a number of contributions: first, we introduce a model to classify a pet breed automatically from an image. The model combines shape, captured by a deformable part model detecting the pet face, and appearance, captured by a bag-of-words model that describes the pet fur. Fitting the model involves automatically segmenting the animal in the image. Second, we compare two classification approaches: a hierarchical one, in which a pet is first assigned to the cat or dog family and then to a breed, and a flat one, in which the breed is obtained directly. We also investigate a number of animal and image orientated spatial layouts. These models are very good: they beat all previously published results on the challenging ASIRRA test (cat vs dog discrimination). When applied to the task of discriminating the 37 different breeds of pets, the models obtain an average accuracy of about 59 , a very encouraging result considering the difficulty of the problem.",
"Currently, the neural network architecture design is mostly guided by the metric of computation complexity, i.e., FLOPs. However, the metric, e.g., speed, also depends on the other factors such as memory access cost and platform characterics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical for efficient network design. Accordingly, a new architecture is presented, called . Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff.",
"We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.",
"",
"Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset.",
"We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call \"cardinality\" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.",
"",
"We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.",
"In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], VOC image segmentation [3]. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as actual latency, and the number of parameters.",
"",
"",
""
]
} |
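The MixConv operation described in the record above is compact enough to sketch directly. This follows the paper's description (channel groups, one depthwise kernel size per group) but is an illustrative re-implementation, not the released code; the kernel sizes and channel split are example choices.

```python
import torch
import torch.nn as nn

class MixConv(nn.Module):
    """Sketch of mixed depthwise convolution: split channels into groups and
    run each group through a depthwise conv with its own kernel size."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        splits = [channels // len(kernel_sizes)] * len(kernel_sizes)
        splits[0] += channels - sum(splits)                # absorb the remainder
        self.splits = splits
        self.convs = nn.ModuleList(
            nn.Conv2d(c, c, k, padding=k // 2, groups=c)   # depthwise per group
            for c, k in zip(splits, kernel_sizes))

    def forward(self, x):
        chunks = torch.split(x, self.splits, dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.convs, chunks)], dim=1)

print(MixConv(96)(torch.randn(1, 96, 32, 32)).shape)  # same shape, mixed receptive fields
```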
1907.09595 | 2963037463 | Depthwise convolution is becoming increasingly popular in modern efficient ConvNets, but its kernel size is often overlooked. In this paper, we systematically study the impact of different kernel sizes, and observe that combining the benefits of multiple kernel sizes can lead to better accuracy and efficiency. Based on this observation, we propose a new mixed depthwise convolution (MixConv), which naturally mixes up multiple kernel sizes in a single convolution. As a simple drop-in replacement of vanilla depthwise convolution, our MixConv improves the accuracy and efficiency for existing MobileNets on both ImageNet classification and COCO object detection. To demonstrate the effectiveness of MixConv, we integrate it into the AutoML search space and develop a new family of models, named MixNets, which outperform previous mobile models including MobileNetV2 [20] (ImageNet top-1 accuracy +4.2%), ShuffleNetV2 [16] (+3.5%), MnasNet [26] (+1.3%), ProxylessNAS [2] (+2.2%), and FBNet [27] (+2.0%). In particular, our MixNet-L achieves a new state-of-the-art 78.9% ImageNet top-1 accuracy under typical mobile settings (<600M FLOPS). Code is at this https URL tensorflow tpu tree master models official mnasnet mixnet | Our idea shares many similarities with prior multi-branch ConvNets, such as Inceptions @cite_1 @cite_7 , Inception-ResNet @cite_31 , ResNeXt @cite_0 , and NASNet @cite_6 . By using multiple branches in each layer, these ConvNets are able to utilize different operations (such as convolution and pooling) in a single layer. Similarly, there is also much prior work on combining multi-scale feature maps from different layers, such as DenseNet @cite_12 @cite_3 and feature pyramid networks @cite_10 . However, unlike these prior works, which mostly focus on changing the macro-architecture of neural networks in order to utilize different convolutional ops, our work aims to design a drop-in replacement for a single depthwise convolution, with the goal of easily utilizing different kernel sizes without changing the network structure. | {
"cite_N": [
"@cite_7",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_31",
"@cite_10",
"@cite_12"
],
"mid": [
"2274287116",
"2949605076",
"2964081807",
"2950014519",
"2953328958",
"",
"2949533892",
"2963446712"
],
"abstract": [
"Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge",
"Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2 top-1 and 5.6 top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5 top-5 error on the validation set (3.6 error on the test set) and 17.3 top-1 error on the validation set.",
"Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4 error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7 top-1 and 96.2 top-5 on ImageNet. Our model is 1.2 better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28 in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74 top-1 accuracy, which is 3.1 better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0 achieving 43.1 mAP on the COCO dataset.",
"In this paper we investigate image classification with computational resource limits at test time. Two such settings are: 1. anytime classification, where the network's prediction for a test example is progressively updated, facilitating the output of a prediction at any time; and 2. budgeted batch classification, where a fixed amount of computation is available to classify a set of examples that can be spent unevenly across \"easier\" and \"harder\" inputs. In contrast to most prior work, such as the popular Viola and Jones algorithm, our approach is based on convolutional neural networks. We train multiple classifiers with varying resource demands, which we adaptively apply during test time. To maximally re-use computation between the classifiers, we incorporate them as early-exits into a single deep convolutional neural network and inter-connect them with dense connectivity. To facilitate high quality classification early on, we use a two-dimensional multi-scale network architecture that maintains coarse and fine level features all-throughout the network. Experiments on three image-classification tasks demonstrate that our framework substantially improves the existing state-of-the-art in both settings.",
"We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call \"cardinality\" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.",
"",
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https: github.com liuzhuang13 DenseNet."
]
} |
1907.09705 | 2962957458 | Scene text recognition has been an important, active research topic in computer vision for years. Previous approaches mainly consider text as 1D signals and cast scene text recognition as a sequence prediction problem, by means of CTC or the attention-based encoder-decoder framework, which were originally designed for speech recognition. However, different from speech voices, which are 1D signals, text instances are essentially distributed in 2D image spaces. To adhere to and make use of the 2D nature of text for higher recognition accuracy, we extend the vanilla CTC model to a second dimension, thus creating 2D-CTC. 2D-CTC can adaptively concentrate on the most relevant features while excluding the impact of clutter and noise in the background; it can also naturally handle text instances of various forms (horizontal, oriented and curved) while giving more interpretable intermediate predictions. The experiments on standard benchmarks for scene text recognition, such as IIIT-5K, ICDAR 2015, SVT-Perspective, and CUTE80, demonstrate that the proposed 2D-CTC model outperforms state-of-the-art methods on text of both regular and irregular shapes. Moreover, 2D-CTC exhibits its superiority over prior art in training and testing speed. Our implementation and models of 2D-CTC will be made publicly available soon. | Another promising direction for frame-wise prediction alignment is the attention-based sequence encoder-decoder framework @cite_24 @cite_5 @cite_22 @cite_36 @cite_0 @cite_32 @cite_37 . These models focus on one position and predict the corresponding character at each time step, but suffer from the problems of misalignment and attention drift @cite_24 . Misclassification at previous steps may lead to drifted attention locations and wrong predictions in successive time steps because of the recursive mechanism. Recent works take attention decoders forward to even better accuracy by proposing new loss functions @cite_24 @cite_38 and introducing image rectification modules @cite_36 @cite_40 . Both bring appreciable improvements to attention decoders. However, despite the high accuracy attention decoders have achieved, their considerably lower inference speed is the fundamental factor that has limited their application in real-world text recognition systems. Detailed experiments are presented in Sec .
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_22",
"@cite_36",
"@cite_32",
"@cite_24",
"@cite_0",
"@cite_40",
"@cite_5"
],
"mid": [
"",
"2963327605",
"2294053032",
"2962986948",
"2963360699",
"2963054155",
"2806488338",
"",
"2795619303"
],
"abstract": [
"",
"We consider the scene text recognition problem under the attention-based encoder-decoder framework, which is the state of the art. The existing methods usually employ a frame-wise maximal likelihood loss to optimize the models. When we train the model, the misalignment between the ground truth strings and the attention's output sequences of probability distribution, which is caused by missing or superfluous characters, will confuse and mislead the training process, and consequently make the training costly and degrade the recognition accuracy. To handle this problem, we propose a novel method called edit probability (EP) for scene text recognition. EP tries to effectively estimate the probability of generating a string from the output sequence of probability distribution conditioned on the input image, while considering the possible occurrences of missing superfluous characters. The advantage lies in that the training process can focus on the missing, superfluous and unrecognized characters, and thus the impact of the misalignment problem can be alleviated or even overcome. We conduct extensive experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets. Experimental results show that the EP can substantially boost scene text recognition performance.",
"We present recursive recurrent neural networks with attention modeling (R2AM) for lexicon-free optical character recognition in natural scene images. The primary advantages of the proposed method are: (1) use of recursive convolutional neural networks (CNNs), which allow for parametrically efficient and effective image feature extraction, (2) an implicitly learned character-level language model, embodied in a recurrent neural network which avoids the need to use N-grams, and (3) the use of a soft-attention mechanism, allowing the model to selectively exploit image features in a coordinated way, and allowing for end-to-end training within a standard backpropagation framework. We validate our method with state-of-the-art performance on challenging benchmark datasets: Street View Text, IIIT5k, ICDAR and Synth90k.",
"Text detection and recognition in natural images have long been considered as two separate tasks that are processed sequentially. Jointly training two tasks is non-trivial due to significant differences in learning difficulties and convergence rates. In this work, we present a conceptually simple yet efficient framework that simultaneously processes the two tasks in a united framework. Our main contributions are three-fold: (1) we propose a novel text-alignment layer that allows it to precisely compute convolutional features of a text instance in arbitrary orientation, which is the key to boost the performance; (2) a character attention mechanism is introduced by using character spatial information as explicit supervision, leading to large improvements in recognition; (3) two technologies, together with a new RNN branch for word recognition, are integrated seamlessly into a single model which is end-to-end trainable. This allows the two tasks to work collaboratively by sharing convolutional features, which is critical to identify challenging text instances. Our model obtains impressive results in end-to-end recognition on the ICDAR 2015 [19], significantly advancing the most recent results [2], with improvements of F-measure from (0.54, 0.51, 0.47) to (0.82, 0.77, 0.63), by using a strong, weak and generic lexicon respectively. Thanks to joint training, our method can also serve as a good detector by achieving a new state-of-the-art detection performance on related benchmarks. Code is available at https: github.com tonghe90 textspotter.",
"In this paper we propose an approach to lexicon-free recognition of text in scene images. Our approach relies on a LSTM-based soft visual attention model learned from convolutional features. A set of feature vectors are derived from an intermediate convolutional layer corresponding to different areas of the image. This permits encoding of spatial information into the image representation. In this way, the framework is able to learn how to selectively focus on different parts of the image. At every time step the recognizer emits one character using a weighted combination of the convolutional feature vectors according to the learned attention model. Training can be done end-to-end using only word level annotations. In addition, we show that modifying the beam search algorithm by integrating an explicit language model leads to significantly better recognition results. We validate the performance of our approach on standard SVT and ICDAR'03 scene text datasets, showing state-of-the-art performance in unconstrained text recognition.",
"Scene text recognition has been a hot research topic in computer vision due to its various applications. The state of the art is the attention-based encoder-decoder framework that learns the mapping between input images and output sequences in a purely data-driven way. However, we observe that existing attention-based methods perform poorly on complicated and or low-quality images. One major reason is that existing methods cannot get accurate alignments between feature areas and targets for such images. We call this phenomenon “attention drift”. To tackle this problem, in this paper we propose the FAN (the abbreviation of Focusing Attention Network) method that employs a focusing attention mechanism to automatically draw back the drifted attention. FAN consists of two major components: an attention network (AN) that is responsible for recognizing character targets as in the existing methods, and a focusing network (FN) that is responsible for adjusting attention by evaluating whether AN pays attention properly on the target areas in the images. Furthermore, different from the existing methods, we adopt a ResNet-based network to enrich deep representations of scene text images. Extensive experiments on various benchmarks, including the IIIT5k, SVT and ICDAR datasets, show that the FAN method substantially outperforms the existing methods.",
"Scene text recognition has drawn great attentions in the community of computer vision and artificial intelligence due to its challenges and wide applications. State-of-the-art recurrent neural networks (RNN) based models map an input sequence to a variable length output sequence, but are usually applied in a black box manner and lack of transparency for further improvement, and the maintaining of the entire past hidden states prevents parallel computation in a sequence. In this paper, we investigate the intrinsic characteristics of text recognition, and inspired by human cognition mechanisms in reading texts, we propose a scene text recognition method with sliding convolutional attention network (SCAN). Similar to the eye movement during reading, the process of SCAN can be viewed as an alternation between saccades and visual fixations. Compared to the previous recurrent models, computations over all elements of SCAN can be fully parallelized during training. Experimental results on several challenging benchmarks, including the IIIT5k, SVT and ICDAR 2003 2013 datasets, demonstrate the superiority of SCAN over state-of-the-art methods in terms of both the model interpretability and performance.",
"",
"Recognizing text from natural images is a hot research topic in computer vision due to its various applications. Despite the enduring research of several decades on optical character recognition (OCR), recognizing texts from natural images is still a challenging task. This is because scene texts are often in irregular (e.g. curved, arbitrarily-oriented or seriously distorted) arrangements, which have not yet been well addressed in the literature. Existing methods on text recognition mainly work with regular (horizontal and frontal) texts and cannot be trivially generalized to handle irregular texts. In this paper, we develop the arbitrary orientation network (AON) to directly capture the deep features of irregular texts, which are combined into an attention-based decoder to generate character sequence. The whole network can be trained end-to-end by using only images and word-level annotations. Extensive experiments on various benchmarks, including the CUTE80, SVT-Perspective, IIIT5k, SVT and ICDAR datasets, show that the proposed AON-based method achieves the-state-of-the-art performance in irregular datasets, and is comparable to major existing methods in regular datasets."
]
} |
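Since the record above contrasts CTC with attention decoders, a minimal sketch of the 1D CTC baseline that 2D-CTC extends may help; it uses PyTorch's nn.CTCLoss plus greedy collapse decoding. The tensor shapes, the 37-class alphabet, and the random inputs are illustrative assumptions, not details from the paper.

```python
# Sketch of 1D CTC training and greedy decoding (assumed shapes/alphabet).
import torch
import torch.nn as nn

T, N, C = 26, 4, 37                                # frames, batch, classes (0 = blank)
logits = torch.randn(T, N, C, requires_grad=True)  # stand-in per-frame scores
log_probs = logits.log_softmax(dim=2)              # nn.CTCLoss expects log-probs

targets = torch.randint(1, C, (N, 8))              # label indices, no blanks
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 8, dtype=torch.long)

loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                    # would update a real recognizer

# Greedy decoding: best class per frame, collapse repeats, drop blanks --
# the standard CTC collapsing map B that 2D-CTC also relies on.
def collapse(path, blank=0):
    out, prev = [], blank
    for s in path.tolist():
        if s != blank and s != prev:
            out.append(s)
        prev = s
    return out

decoded = [collapse(p) for p in log_probs.argmax(dim=2).T]
```

2D-CTC keeps this training/decoding recipe but computes the per-frame distributions over a 2D grid instead of a collapsed 1D sequence.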
1907.09705 | 2962957458 | Scene text recognition has been an important, active research topic in computer vision for years. Previous approaches mainly consider text as 1D signals and cast scene text recognition as a sequence prediction problem, by means of CTC or the attention-based encoder-decoder framework, which were originally designed for speech recognition. However, different from speech voices, which are 1D signals, text instances are essentially distributed in 2D image spaces. To adhere to and make use of the 2D nature of text for higher recognition accuracy, we extend the vanilla CTC model to a second dimension, thus creating 2D-CTC. 2D-CTC can adaptively concentrate on the most relevant features while excluding the impact of clutter and noise in the background; it can also naturally handle text instances of various forms (horizontal, oriented and curved) while giving more interpretable intermediate predictions. The experiments on standard benchmarks for scene text recognition, such as IIIT-5K, ICDAR 2015, SVT-Perspective, and CUTE80, demonstrate that the proposed 2D-CTC model outperforms state-of-the-art methods on text of both regular and irregular shapes. Moreover, 2D-CTC exhibits its superiority over prior art in training and testing speed. Our implementation and models of 2D-CTC will be made publicly available soon. | In contrast to the aforementioned methods, Liao et al. @cite_19 recently propose to utilize instance segmentation to simultaneously predict character locations and recognition results, avoiding the problem of attention misalignment. They also notice the conflict between 2D image features and the collapsed sequence representation, and propose a reasonable solution. However, this method requires character-level annotations and is thus limited in real-world applications, especially in areas where detailed annotations are hardly available (e.g., handwritten text recognition).
"cite_N": [
"@cite_19"
],
"mid": [
"2963233387"
],
"abstract": [
"Inspired by speech recognition, recent state-of-the-art algorithms mostly consider scene text recognition as a sequence prediction problem. Though achieving excellent performance, these methods usually neglect an important fact that text in images are actually distributed in two-dimensional space. It is a nature quite different from that of speech, which is essentially a one-dimensional signal. In principle, directly compressing features of text into a one-dimensional form may lose useful information and introduce extra noise. In this paper, we approach scene text recognition from a two-dimensional perspective. A simple yet effective model, called Character Attention Fully Convolutional Network (CA-FCN), is devised for recognizing the text of arbitrary shapes. Scene text recognition is realized with a semantic segmentation network, where an attention mechanism for characters is adopted. Combined with a word formation module, CA-FCN can simultaneously recognize the script and predict the position of each character. Experiments demonstrate that the proposed algorithm outperforms previous methods on both regular and irregular text datasets. Moreover, it is proven to be more robust to imprecise localizations in the text detection phase, which are very common in practice."
]
} |
1907.09705 | 2962957458 | Scene text recognition has been an important, active research topic in computer vision for years. Previous approaches mainly consider text as 1D signals and cast scene text recognition as a sequence prediction problem, by means of CTC or the attention-based encoder-decoder framework, which were originally designed for speech recognition. However, different from speech voices, which are 1D signals, text instances are essentially distributed in 2D image spaces. To adhere to and make use of the 2D nature of text for higher recognition accuracy, we extend the vanilla CTC model to a second dimension, thus creating 2D-CTC. 2D-CTC can adaptively concentrate on the most relevant features while excluding the impact of clutter and noise in the background; it can also naturally handle text instances of various forms (horizontal, oriented and curved) while giving more interpretable intermediate predictions. The experiments on standard benchmarks for scene text recognition, such as IIIT-5K, ICDAR 2015, SVT-Perspective, and CUTE80, demonstrate that the proposed 2D-CTC model outperforms state-of-the-art methods on text of both regular and irregular shapes. Moreover, 2D-CTC exhibits its superiority over prior art in training and testing speed. Our implementation and models of 2D-CTC will be made publicly available soon. | In consideration of both accuracy and efficiency, 2D-CTC recognizes text from a 2D perspective similar to @cite_19 , but is trained without any character-level annotations. By extending vanilla CTC, 2D-CTC achieves state-of-the-art performance while retaining the high efficiency of CTC models.
"cite_N": [
"@cite_19"
],
"mid": [
"2963233387"
],
"abstract": [
"Inspired by speech recognition, recent state-of-the-art algorithms mostly consider scene text recognition as a sequence prediction problem. Though achieving excellent performance, these methods usually neglect an important fact that text in images are actually distributed in two-dimensional space. It is a nature quite different from that of speech, which is essentially a one-dimensional signal. In principle, directly compressing features of text into a one-dimensional form may lose useful information and introduce extra noise. In this paper, we approach scene text recognition from a two-dimensional perspective. A simple yet effective model, called Character Attention Fully Convolutional Network (CA-FCN), is devised for recognizing the text of arbitrary shapes. Scene text recognition is realized with a semantic segmentation network, where an attention mechanism for characters is adopted. Combined with a word formation module, CA-FCN can simultaneously recognize the script and predict the position of each character. Experiments demonstrate that the proposed algorithm outperforms previous methods on both regular and irregular text datasets. Moreover, it is proven to be more robust to imprecise localizations in the text detection phase, which are very common in practice."
]
} |
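For reference, the "vanilla CTC" mentioned in the records above marginalizes the per-frame predictions over all alignments that collapse to the target string; a 2D variant additionally marginalizes over a spatial (height) path. The 1D objective below is standard; the exact 2D factorization shown is an assumption about its general shape, not the paper's formulation.

```latex
% Vanilla (1D) CTC: sum over label paths \pi collapsing to y under B.
P(y \mid x) = \sum_{\pi \in \mathcal{B}^{-1}(y)} \prod_{t=1}^{T} p(\pi_t \mid x)

% Assumed shape of a 2D extension: each step also picks a height h_t,
% so the marginalization covers both the symbol path and the spatial path.
P(y \mid x) = \sum_{\pi \in \mathcal{B}^{-1}(y)} \sum_{h_1,\dots,h_T}
  \prod_{t=1}^{T} p(h_t \mid h_{t-1}, x)\; p(\pi_t \mid h_t, x)
```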
1907.09656 | 2962692517 | Tactile perception is an essential ability of intelligent robots in interaction with their surrounding environments. This perception acts as an intermediate level between sensation and action and has to be defined properly to generate suitable actions in response to sensed data. In this paper, we propose a feedback approach to address the robot grasping task using force-torque tactile sensing. While visual perception is an essential part of gross reaching, constant utilization of this sensing modality can negatively affect the grasping process with overwhelming computation. In such cases, human beings utilize tactile sensing to interact with objects. Inspired by this, the proposed approach is presented and evaluated on a real robot to demonstrate the effectiveness of the suggested framework. Moreover, we utilize a deep learning framework called Deep Calibration in order to eliminate the effect of bias in the data collected from the robot sensors. | In this paper, we aim to propose an interactive tactile perception with the use of force-torque sensing for a grasping task. A grasping task can be initiated by either visual or tactile perception. Although the former is powerful in terms of object recognition, continuous processing of visual data during interaction is not a simple task for a lightweight robot. In the real world, a human also barely utilizes vision in close proximity to the object to be grasped; indeed, tactile perception is more useful in the vicinity of the object @cite_20 , @cite_21 .
"cite_N": [
"@cite_21",
"@cite_20"
],
"mid": [
"2802840089",
"2897170587"
],
"abstract": [
"Can a robot grasp an unknown object without seeing it? In this paper, we present a tactile-sensing based approach to this challenging problem of grasping novel objects without prior knowledge of their location or physical properties. Our key idea is to combine touch based object localization with tactile based re-grasping. To train our learning models, we created a large-scale grasping dataset, including more than 30 RGB frames and over 2.8 million tactile samples from 7800 grasp interactions of 52 objects. To learn a representation of tactile signals, we propose an unsupervised auto-encoding scheme, which shows a significant improvement of 4-9 over prior methods on a variety of tactile perception tasks. Our system consists of two steps. First, our touch localization model sequentially 'touch-scans' the workspace and uses a particle filter to aggregate beliefs from multiple hits of the target. It outputs an estimate of the object's location, from which an initial grasp is established. Next, our re-grasping model learns to progressively improve grasps with tactile feedback based on the learned features. This network learns to estimate grasp stability and predict adjustment for the next grasp. Re-grasping thus is performed iteratively until our model identifies a stable grasp. Finally, we demonstrate extensive experimental results on grasping a large set of novel objects using tactile sensing alone. Furthermore, when applied on top of a vision-based policy, our re-grasping model significantly boosts the overall accuracy by 10.6 . We believe this is the first attempt at learning to grasp with only tactile sensing and without any prior object knowledge.",
"Contact-rich manipulation tasks in unstructured environments often require both haptic and visual feedback. However, it is non-trivial to manually design a robot controller that combines modalities with very different characteristics. While deep reinforcement learning has shown success in learning control policies for high-dimensional inputs, these algorithms are generally intractable to deploy on real robots due to sample complexity. We use self-supervision to learn a compact and multimodal representation of our sensory inputs, which can then be used to improve the sample efficiency of our policy learning. We evaluate our method on a peg insertion task, generalizing over different geometry, configurations, and clearances, while being robust to external perturbations. Results for simulated and real robot experiments are presented."
]
} |
1907.09656 | 2962692517 | Tactile perception is an essential ability of intelligent robots in interaction with their surrounding environments. This perception acts as an intermediate level between sensation and action and has to be defined properly to generate suitable actions in response to sensed data. In this paper, we propose a feedback approach to address the robot grasping task using force-torque tactile sensing. While visual perception is an essential part of gross reaching, constant utilization of this sensing modality can negatively affect the grasping process with overwhelming computation. In such cases, human beings utilize tactile sensing to interact with objects. Inspired by this, the proposed approach is presented and evaluated on a real robot to demonstrate the effectiveness of the suggested framework. Moreover, we utilize a deep learning framework called Deep Calibration in order to eliminate the effect of bias in the data collected from the robot sensors. | Grasping is an essential and complex daily activity. Through this task, humans show an intention to affect the surrounding environment in a controllable manner. Humans primarily utilize a combination of control strategies and learning from repeated trials to anticipate grasping in different situations @cite_12 . In this regard, the properties of an object such as size, shape, and contact surface are important parameters during a grasping task @cite_4 .
"cite_N": [
"@cite_4",
"@cite_12"
],
"mid": [
"2021570254",
"2057299504"
],
"abstract": [
"This paper presents a novel robot grasping planning approach that extracts grasp strategies (grasp type, and thumb placement and direction) from human demonstration and integrates them into the grasp planning procedure to generate a feasible grasp concerning the target object geometry and manipulation task. Our study results show that the grasp strategies of grasp type and thumb placement not only represent important human grasp intentions, but also provide meaningful constraints on hand posture and wrist position, which highly reduce both the feasible workspace of a robotic hand and the search space of the grasp planning. This approach has been thoroughly evaluated both in simulation and with a real robotic system for multiple daily living representative objects. We have also demonstrated the robustness of our approach in the experiment with different levels of perception uncertainties.",
"We present a novel robotic grasp controller that allows a sensorized parallel jaw gripper to gently pick up and set down unknown objects once a grasp location has been selected. Our approach is inspired by the control scheme that humans employ for such actions, which is known to centrally depend on tactile sensation rather than vision or proprioception. Our controller processes measurements from the gripper's fingertip pressure arrays and hand-mounted accelerometer in real time to generate robotic tactile signals that are designed to mimic human SA-I, FA-I, and FA-II channels. These signals are combined into tactile event cues that drive the transitions between six discrete states in the grasp controller: Close, Load, Lift and Hold, Replace, Unload, and Open. The controller selects an appropriate initial grasping force, detects when an object is slipping from the grasp, increases the grasp force as needed, and judges when to release an object to set it down. We demonstrate the promise of our approach through implementation on the PR2 robotic platform, including grasp testing on a large number of real-world objects."
]
} |
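The six-state, tactile-event-driven controller summarized in the last abstract above (@cite_12) can be made concrete with a short sketch; the thresholds, the force-ramp increments, and the task_done flag are hypothetical placeholders rather than values from the cited work.

```python
# Hedged sketch of a tactile-event-driven grasp state machine in the
# spirit of the Close/Load/Lift-Hold/Replace/Unload/Open scheme above.
# All thresholds and increments are made-up placeholders.
from enum import Enum, auto

class State(Enum):
    CLOSE = auto()
    LOAD = auto()
    LIFT_HOLD = auto()
    REPLACE = auto()
    UNLOAD = auto()
    OPEN = auto()

CONTACT_F = 0.5   # N: fingertip force that signals object contact
SLIP_DF = 0.2     # N: sudden force drop interpreted as slip
TARGET_F = 2.0    # N: initial grip force after contact

def step(state, f, f_prev, grip_cmd, task_done):
    """One control tick: map tactile cues to (next_state, grip command)."""
    if state is State.CLOSE:
        grip_cmd += 0.05                      # keep closing gently
        if f > CONTACT_F:                     # contact event
            return State.LOAD, grip_cmd
    elif state is State.LOAD:
        grip_cmd = min(grip_cmd + 0.1, TARGET_F)
        if grip_cmd >= TARGET_F:              # grip force reached
            return State.LIFT_HOLD, grip_cmd
    elif state is State.LIFT_HOLD:
        if f_prev - f > SLIP_DF:              # slip cue: squeeze harder
            grip_cmd *= 1.2
        if task_done:
            return State.REPLACE, grip_cmd
    elif state is State.REPLACE:
        if f < CONTACT_F:                     # surface now supports object
            return State.UNLOAD, grip_cmd
    elif state is State.UNLOAD:
        grip_cmd = max(grip_cmd - 0.1, 0.0)
        if grip_cmd == 0.0:
            return State.OPEN, 0.0
    return state, grip_cmd

# One tick of usage: contact force detected while closing -> LOAD.
state, cmd = step(State.CLOSE, f=0.6, f_prev=0.0, grip_cmd=0.0, task_done=False)
```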
1907.09656 | 2962692517 | Tactile perception is an essential ability of intelligent robots in interaction with their surrounding environments. This perception acts as an intermediate level between sensation and action and has to be defined properly to generate suitable actions in response to sensed data. In this paper, we propose a feedback approach to address the robot grasping task using force-torque tactile sensing. While visual perception is an essential part of gross reaching, constant utilization of this sensing modality can negatively affect the grasping process with overwhelming computation. In such cases, human beings utilize tactile sensing to interact with objects. Inspired by this, the proposed approach is presented and evaluated on a real robot to demonstrate the effectiveness of the suggested framework. Moreover, we utilize a deep learning framework called Deep Calibration in order to eliminate the effect of bias in the data collected from the robot sensors. | Tactile sensation is very informative for recognizing object properties. In @cite_19 , the authors proposed a tactile perception strategy to measure tactile features for mobile robots. Tactile sensing is also used to build robust controllers for reliable grasping @cite_17 and slip avoidance @cite_0 . Visual sensing and tactile sensing are complementary in robot grasping, and combining both through a deep architecture is a promising solution @cite_2 , @cite_6 . However, processing such high-dimensional data is not an easy task, and a meaningful compact representation is needed @cite_9 , @cite_23 . A robot can learn manipulation using tactile sensation through demonstrations @cite_1 , @cite_9 , @cite_14 .
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_17"
],
"mid": [
"2340935928",
"2567455162",
"1979709633",
"2963654998",
"2737781003",
"2149606722",
"2963634205",
"2736762515",
"2598908072"
],
"abstract": [
"One of the central tasks for a household robot is searching for specific objects. It does not only require localizing the target object but also identifying promising search locations in the scene if the target is not immediately visible. As computation time and hardware resources are usually limited in robotics, it is desirable to avoid expensive visual processing steps that are exhaustively applied over the entire image. The human visual system can quickly select those image locations that have to be processed in detail for a given task. This allows us to cope with huge amounts of information and to efficiently deploy the limited capacities of our visual system. In this paper, we therefore propose to use human fixation data to train a top-down saliency model that predicts relevant image locations when searching for specific objects. We show that the learned model can successfully prune bounding box proposals without rejecting the ground truth object locations. In this aspect, the proposed model outperforms a model that is trained only on the ground truth segmentations of the target object instead of fixation data.",
"For many tasks, tactile or visual feedback is helpful or even crucial. However, designing controllers that take such high-dimensional feedback into account is non-trivial. Therefore, robots should be able to learn tactile skills through trial and error by using reinforcement learning algorithms. The input domain for such tasks, however, might include strongly correlated or non-relevant dimensions, making it hard to specify a suitable metric on such domains. Auto-encoders specialize in finding compact representations, where defining such a metric is likely to be easier. Therefore, we propose a reinforcement learning algorithm that can learn non-linear policies in continuous state spaces, which leverages representations learned using auto-encoders. We first evaluate this method on a simulated toy-task with visual input. Then, we validate our approach on a real-robot tactile stabilization task.",
"Tactile sensing is a fundamental component of object manipulation and tool handling skills. With robots entering unstructured environments, tactile feedback also becomes an important ability for robot manipulation. In this work, we explore how a robot can learn to use tactile sensing in object manipulation tasks. We first address the problem of in-hand object localization and adapt three pose estimation algorithms from computer vision. Second, we employ dynamic motor primitives to learn robot movements from human demonstrations and record desired tactile signal trajectories. Then, we add tactile feedback to the control loop and apply relative entropy policy search to learn the parameters of the tactile coupling. Additionally, we show how the learning of tactile feedback can be performed more efficiently by reducing the dimensionality of the tactile information through spectral clustering and principal component analysis. Our approach is implemented on a real robot, which learns to perform a scraping task with a spatula in an altered environment.",
"Robots which interact with the physical world will benefit from a fine-grained tactile understanding of objects and surfaces. Additionally, for certain tasks, robots may need to know the haptic properties of an object before touching it. To enable better tactile understanding for robots, we propose a method of classifying surfaces with haptic adjectives (e.g., compressible or smooth) from both visual and physical interaction data. Humans typically combine visual predictions and feedback from physical interactions to accurately predict haptic properties and interact with the world. Inspired by this cognitive pattern, we propose and explore a purely visual haptic prediction model. Purely visual models enable a robot to “feel” without physical interaction. Furthermore, we demonstrate that using both visual and physical interaction signals together yields more accurate haptic classification. Our models take advantage of recent advances in deep neural networks by employing a unified approach to learning features for physical interaction and visual observations. Even though we employ little domain specific knowledge, our model still achieves better results than methods based on hand-designed features.",
"In-hand manipulation is certainly one of the most challenging problems in robotic manipulation. Solutions to this problem depend on the specific device used to grab the object, but nowadays, the trend is to exploit not only the gripper but also external constraints, such as other objects in the environment or external forces, like gravity. This allows a robot to manipulate an object even with very simple grippers, like a parallel gripper. Nevertheless, even for a simple grasping task, which aims at grabbing the object with a given fixed orientation or for executing a controlled slip, information on the contact between the fingers of the gripper and the object is relevant. In these cases, both linear and rotational slipping should be controlled during the grasping phase and during the motion phase. The present paper proposes a control strategy for the first objective, namely slipping avoidance. The strategy is based on contact information provided by a six-axis force tactile sensor, able to measure contact force and torque as well as able to provide information on the contact geometry, that means orientation of the object with respect to the gripper. Experiments on a parallel gripper sensorized with a new force tactile sensor and mounted on a Kuka iiwa show how the strategy successfully allows the robot to safely manipulate a rigid object in various friction conditions of its surface.",
"Tactile information is valuable in determining properties of objects that are inaccessible from visual perception. In this paper, we present a tactile perception strategy that allows a mobile robot with tactile sensors in its gripper to measure a generic set of tactile features while manipulating an object. We propose a switching velocity-force controller that grasps an object safely and reveals, at the same time, its deformation properties. By gently rolling the object, the robot can extract additional information about the contents of the object. As an application, we show that a robot can use these features to distinguish the internal state of bottles and cans-purely from tactile sensing-from a small training set. The robot can distinguish open from closed bottles and cans and full ones from empty ones. We also show how the high-frequency component in tactile information can be used to detect movement inside a container, e.g., in order to detect the presence of liquid. To prove that this is a hard recognition problem, we also conducted a comparative study with 17 human test subjects. The recognition rates of the human subjects were comparable with that of the robot.",
"Reinforcement learning provides a powerful and flexible framework for automated acquisition of robotic motion skills. However, applying reinforcement learning requires a sufficiently detailed representation of the state, including the configuration of task-relevant objects. We present an approach that automates state-space construction by learning a state representation directly from camera images. Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects, and then learns a motion skill with these feature points using an efficient reinforcement learning method based on local linear models. The resulting controller reacts continuously to the learned feature points, allowing the robot to dynamically manipulate objects in the world with closed-loop control. We demonstrate our method with a PR2 robot on tasks that include pushing a free-standing toy block, picking up a bag of rice using a spatula, and hanging a loop of rope on a hook at various positions. In each task, our method automatically learns to track task-relevant objects and manipulate their configuration with the robot's arm.",
"The robotic grasp detection is a great challenge in the area of robotics. Previous work mainly employs the visual approaches to solve this problem. In this paper, a hybrid deep architecture combining the visual and tactile sensing for robotic grasp detection is proposed. We have demonstrated that the visual sensing and tactile sensing are complementary to each other and important for the robotic grasping. A new THU grasp dataset has also been collected which contains the visual, tactile and grasp configuration information. The experiments conducted on a public grasp dataset and our collected dataset show that the performance of the proposed model is superior to state of the art methods. The results also indicate that the tactile data could help to enable the network to learn better visual features for the robotic grasp detection task.",
"A fundamental problem in cooperative HumanRobot Interaction is object handover. Existing works in this area assume the human can reliably grasp the object from the robot hand. However, in some situations the human can produce perturbing forces in the object that are not meant to end in a handover. These perturbations can result in the object being dropped or the robot hand being damaged. This paper addresses this problem and presents a mechanism for reliable robot to human object handover implemented in a Shadow Robot hand endowed with tactile sensing. Given a stable grasping configuration, using BioTAC sensors we are able to estimate the contact forces applied to the object, and provide a feedback signal to a joint effort controller to maintain grasp forces despite perturbations. Our system is able to identify between object pulling forces which should result in an object handover, and other disturbances. Experimental results show that the hand releases the object only when the object is pulled, validating the proposed algorithm."
]
} |
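As the abstracts above note, slip and in-container movement often show up as high-frequency content in the force signal; a minimal moving-variance detector illustrates the idea. The window length, threshold, and synthetic signal are illustrative assumptions, not values from any cited paper.

```python
# Hedged sketch: flag slip-like events from the high-frequency content
# of a force trace via a moving-variance test (made-up parameters).
import numpy as np

def slip_events(force, window=16, threshold=0.01):
    """Boolean mask: True where local variance exceeds the threshold."""
    f = np.asarray(force, dtype=float)
    events = np.zeros(f.shape, dtype=bool)
    for i in range(window, len(f)):
        events[i] = f[i - window:i].var() > threshold
    return events

# Example: a steady 2 N hold followed by a jittery (slipping) phase.
t = np.linspace(0.0, 2.0, 400)
force = np.where(t < 1.0, 2.0, 2.0 + 0.3 * np.sin(120.0 * t))
mask = slip_events(force)
print(mask[:200].any(), mask[200:].any())  # False True: only jitter flagged
```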
1907.09656 | 2962692517 | Tactile perception is an essential ability of intelligent robots in interaction with their surrounding environments. This perception acts as an intermediate level between sensation and action and has to be defined properly to generate suitable actions in response to sensed data. In this paper, we propose a feedback approach to address the robot grasping task using force-torque tactile sensing. While visual perception is an essential part of gross reaching, constant utilization of this sensing modality can negatively affect the grasping process with overwhelming computation. In such cases, human beings utilize tactile sensing to interact with objects. Inspired by this, the proposed approach is presented and evaluated on a real robot to demonstrate the effectiveness of the suggested framework. Moreover, we utilize a deep learning framework called Deep Calibration in order to eliminate the effect of bias in the data collected from the robot sensors. | A lower-dimensional representation of tactile data is also more useful for object material classification @cite_16 . With further processing approaches such as bag-of-words, object identification becomes possible as well @cite_22 . Extraction of object pose via touch-based perception can be used for manipulation @cite_15 . Moreover, localization can be improved by contact information gathered by tactile sensors @cite_7 , @cite_5 . A robot can control and adjust its hand pose with stability considerations after evaluating tactile experiences @cite_18 , @cite_11 .
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_7",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_11"
],
"mid": [
"2021473074",
"2138648596",
"2035557456",
"2421020186",
"",
"",
"2026499961"
],
"abstract": [
"An important ability of a robot that interacts with the environment and manipulates objects is to deal with the uncertainty in sensory data. Sensory information is necessary to, for example, perform online assessment of grasp stability. We present methods to assess grasp stability based on haptic data and machine-learning methods, including AdaBoost, support vector machines (SVMs), and hidden Markov models (HMMs). In particular, we study the effect of different sensory streams to grasp stability. This includes object information such as shape; grasp information such as approach vector; tactile measurements from fingertips; and joint configuration of the hand. Sensory knowledge affects the success of the grasping process both in the planning stage (before a grasp is executed) and during the execution of the grasp (closed-loop online control). In this paper, we study both of these aspects. We propose a probabilistic learning framework to assess grasp stability and demonstrate that knowledge about grasp stability can be inferred using information from tactile sensors. Experiments on both simulated and real data are shown. The results indicate that the idea to exploit the learning approach is applicable in realistic scenarios, which opens a number of interesting venues for the future research.",
"In this paper, we present a novel approach for identifying objects using touch sensors installed in the finger tips of a manipulation robot. Our approach operates on low-resolution intensity images that are obtained when the robot grasps an object. We apply a bag-of-words approach for object identification. By means of unsupervised clustering on training data, our approach learns a vocabulary from tactile observations which is used to generate a histogram codebook. The histogram codebook models distributions over the vocabulary and is the core identification mechanism. As the objects are larger than the sensor, the robot typically needs multiple grasp actions at different positions to uniquely identify an object. To reduce the number of required grasp actions, we apply a decision-theoretic framework that minimizes the entropy of the probabilistic belief about the type of the object. In our experiments carried out with various industrial and household objects, we demonstrate that our approach is able to discriminate between a large set of objects. We furthermore show that using our approach, a robot is able to distinguish visually similar objects that have different elasticity properties by using only the information from the touch sensor.",
"It is frequently accepted in the manipulation literature that tactile sensing is needed to improve the precision of robot manipulation. However, there is no consensus on how this may be achieved. This paper applies particle filtering to the problem of localizing the pose and shape of an object that the robot touches. We are motivated by the situation where the robot has enclosed its fingers around an object but has not yet grasped it. This might be the case just prior to grasping or when the robot is holding on to something fixtured elsewhere in the environment. In order to solve this problem, we propose a new model for position measurements of points on the robot manipulator that tactile sensing indicates are touching the object. We also propose a model for points on the manipulator that tactile measurements indicate are not touching the object. Finally, we characterize the approach in simulation and use it to localize an object that Robonaut 2 holds in its hand.",
"Robust manipulation and insertion of small parts can be challenging because of the small tolerances typically involved. The key to robust control of these kinds of manipulation interactions is accurate tracking and control of the parts involved. Typically, this is accomplished using visual servoing or force-based control. However, these approaches have drawbacks. Instead, we propose a new approach that uses tactile sensing to accurately localize the pose of a part grasped in the robot hand. Using a feature-based matching technique in conjunction with a newly developed tactile sensing technology known as GelSight that has much higher resolution than competing methods, we synthesize high-resolution height maps of object surfaces. As a result of these high-resolution tactile maps, we are able to localize small parts held in a robot hand very accurately. We quantify localization accuracy in benchtop experiments and experimentally demonstrate the practicality of the approach in the context of a small parts insertion problem.",
"",
"",
"This paper deals with the problem of stable grasping under pose uncertainty. Our method utilizes tactile sensing data to estimate grasp stability and make necessary hand adjustments after an initial grasp is established. We first discuss a learning approach to estimating grasp stability based on tactile sensing data. This estimator can be used as an indicator to the stability of the current grasp during a grasping procedure. We then present a tactile experience based hand adjustment algorithm to synthesize a hand adjustment and optimize the hand pose to achieve a stable grasp. Experiments show that our method improves the grasping performance under pose uncertainty."
]
} |
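A quick sketch of the lower-dimensional tactile representation idea above: compress flattened taxel frames with PCA before a simple classifier. The 16x16 grid, sample counts, and random stand-in data are assumptions for illustration only.

```python
# Hedged sketch: PCA-compressed tactile frames feeding a classifier,
# echoing the dimensionality-reduction discussion above (toy data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16 * 16))   # 200 flattened 16x16 tactile frames
y = rng.integers(0, 3, size=200)      # 3 hypothetical material classes

clf = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.predict(X[:5]))             # classification on compact features
```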
1907.09511 | 2963009244 | Most state-of-the-art person re-identification (re-id) methods depend on supervised model learning with a large set of cross-view identity labelled training data. Even worse, such trained models are limited to same-domain deployment only, with significantly degraded cross-domain generalization capability, i.e. they are "domain specific". To solve this limitation, there are a number of recent unsupervised domain adaptation and unsupervised learning methods that leverage unlabelled target domain training data. However, these methods need to train a separate model for each target domain, as supervised learning methods do. This conventional "train once, run once" pattern is unscalable to the large number of target domains typically encountered in real-world deployments. We address this problem by presenting a "train once, run everywhere" pattern that industry-scale systems are desperate for. We formulate a "universal model learning" approach enabling domain-generic person re-id using only limited training data of a "single" seed domain. Specifically, we train a universal re-id deep model to discriminate between a set of transformed person identity classes. Each of such classes is formed by applying a variety of random appearance transformations to the images of that class, where the transformations simulate the camera viewing conditions of arbitrary domains, making the model training domain-generic. Extensive evaluations show the superiority of our method for universal person re-id over a wide variety of state-of-the-art unsupervised domain adaptation and unsupervised learning re-id methods on five standard benchmarks: Market-1501, DukeMTMC, CUHK03, MSMT17, and VIPeR. | Unsupervised domain adaptation person re-id. The limitation of supervised learning re-id methods in cross-domain scalability can be addressed by using unsupervised domain adaptation (UDA) techniques. Existing UDA re-id methods generally fall into two categories: (1) image synthesis @cite_25 @cite_50 @cite_43 @cite_11 , and (2) feature alignment @cite_19 @cite_21 @cite_40 @cite_2 @cite_23 . The former aims to transfer the labelled source identity classes from the source domain to the target domain through cross-domain conditional image generation, altering appearance style and background context at the pixel level. The synthetic images are then used to fine-tune the model towards the target domain. In contrast, the latter transfers the discriminative feature information learned from the labelled source training data to the target feature space by distribution alignment. These methods often use discrete attribute labels to facilitate information transfer across domains, owing to their better domain-invariance property compared with low-level feature representations.
"cite_N": [
"@cite_21",
"@cite_19",
"@cite_43",
"@cite_40",
"@cite_50",
"@cite_2",
"@cite_23",
"@cite_25",
"@cite_11"
],
"mid": [
"2904973725",
"2441160157",
"2799107345",
"2963721283",
"2896016251",
"2794651663",
"2963983343",
"",
"2964186374"
],
"abstract": [
"Person re-identification (Re-ID) aims to match identities across non-overlapping camera views. Researchers have proposed many Re-ID models which require quantities of cross-view pairwise labelled data. This limits their scalabilities to many applications where a large amount of cross-view data is available but unlabelled. Although some unsupervised Re-ID models have been proposed to address the scalability problem, they often suffer from the view-specific bias which is caused by dramatic variances across camera views, e.g., different illumination and viewpoints. The dramatic variances induce view-specific feature distortions, which is disturbing in finding cross-view discriminative information. We propose to explicitly address this problem by learning an asymmetric distance metric based on cross-view clustering. The asymmetric metric allows specific feature transformations for each view to tackle the specific feature distortions. We then design a novel unsupervised loss function to embed the asymmetric metric into a deep neural network to develop a novel unsupervised framework. In such a way, it jointly learns the feature representation and the unsupervised asymmetric metric. Our framework learns a compact cross-view cluster structure of Re-ID data, and thus help alleviate the view-specific bias and facilitate mining the potential cross-view discriminative information for unsupervised Re-ID. Extensive experiments show our framework's effectiveness.",
"Most existing person re-identification (Re-ID) approaches follow a supervised learning framework, in which a large number of labelled matching pairs are required for training. This severely limits their scalability in realworld applications. To overcome this limitation, we develop a novel cross-dataset transfer learning approach to learn a discriminative representation. It is unsupervised in the sense that the target dataset is completely unlabelled. Specifically, we present an multi-task dictionary learning method which is able to learn a dataset-shared but targetdata-biased representation. Experimental results on five benchmark datasets demonstrate that the method significantly outperforms the state-of-the-art.",
"Drastic variations in illumination across surveillance cameras make the person re-identification problem extremely challenging. Current large scale re-identification datasets have a significant number of training subjects, but lack diversity in lighting conditions. As a result, a trained model requires fine-tuning to become effective under an unseen illumination condition. To alleviate this problem, we introduce a new synthetic dataset that contains hundreds of illumination conditions. Specifically, we use 100 virtual humans illuminated with multiple HDR environment maps which accurately model realistic indoor and outdoor lighting. To achieve better accuracy in unseen illumination conditions we propose a novel domain adaptation technique that takes advantage of our synthetic data and performs fine-tuning in a completely unsupervised way. Our approach yields significantly higher accuracy than semi-supervised and unsupervised state-of-the-art methods, and is very competitive with supervised techniques.",
"While metric learning is important for Person reidentification (RE-ID), a significant problem in visual surveillance for cross-view pedestrian matching, existing metric models for RE-ID are mostly based on supervised learning that requires quantities of labeled samples in all pairs of camera views for training. However, this limits their scalabilities to realistic applications, in which a large amount of data over multiple disjoint camera views is available but not labelled. To overcome the problem, we propose unsupervised asymmetric metric learning for unsupervised RE-ID. Our model aims to learn an asymmetric metric, i.e., specific projection for each view, based on asymmetric clustering on cross-view person images. Our model finds a shared space where view-specific bias is alleviated and thus better matching performance can be achieved. Extensive experiments have been conducted on a baseline and five large-scale RE-ID datasets to demonstrate the effectiveness of the proposed model. Through the comparison, we show that our model works much more suitable for unsupervised RE-ID compared to classical unsupervised metric learning models. We also compare with existing unsupervised REID methods, and our model outperforms them with notable margins. Specifically, we report the results on large-scale unlabelled RE-ID dataset, which is important but unfortunately less concerned in literatures.",
"Person re-identification (re-ID) poses unique challenges for unsupervised domain adaptation (UDA) in that classes in the source and target sets (domains) are entirely different and that image variations are largely caused by cameras. Given a labeled source training set and an unlabeled target training set, we aim to improve the generalization ability of re-ID models on the target testing set. To this end, we introduce a Hetero-Homogeneous Learning (HHL) method. Our method enforces two properties simultaneously: (1) camera invariance, learned via positive pairs formed by unlabeled target images and their camera style transferred counterparts; (2) domain connectedness, by regarding source target images as negative matching pairs to the target source images. The first property is implemented by homogeneous learning because training pairs are collected from the same domain. The second property is achieved by heterogeneous learning because we sample training pairs from both the source and target domains. On Market-1501, DukeMTMC-reID and CUHK03, we show that the two properties contribute indispensably and that very competitive re-ID UDA accuracy is achieved. Code is available at: https: github.com zhunzhong07 HHL.",
"Most existing person re-identification (re-id) methods require supervised model learning from a separate large set of pairwise labelled training data for every single camera pair. This significantly limits their scalability and usability in real-world large scale deployments with the need for performing re-id across many camera views. To address this scalability problem, we develop a novel deep learning method for transferring the labelled information of an existing dataset to a new unseen (unlabelled) target domain for person re-id without any supervised learning in the target domain. Specifically, we introduce an Transferable Joint Attribute-Identity Deep Learning (TJ-AIDL) for simultaneously learning an attribute-semantic and identitydiscriminative feature representation space transferrable to any new (unseen) target domain for re-id tasks without the need for collecting new labelled training data from the target domain (i.e. unsupervised learning in the target domain). Extensive comparative evaluations validate the superiority of this new TJ-AIDL model for unsupervised person re-id over a wide range of state-of-the-art methods on four challenging benchmarks including VIPeR, PRID, Market-1501, and DukeMTMC-ReID.",
"",
"",
"Person Re-identification (re-id) faces two major challenges: the lack of cross-view paired training data and learning discriminative identity-sensitive and view-invariant features in the presence of large pose variations. In this work, we address both problems by proposing a novel deep person image generation model for synthesizing realistic person images conditional on the pose. The model is based on a generative adversarial network (GAN) designed specifically for pose normalization in re-id, thus termed pose-normalization GAN (PN-GAN). With the synthesized images, we can learn a new type of deep re-id features free of the influence of pose variations. We show that these features are complementary to features learned with the original images. Importantly, a more realistic unsupervised learning setting is considered in this work, and our model is shown to have the potential to be generalizable to a new re-id dataset without any fine-tuning. The codes will be released at https: github.com naiq PN_GAN."
]
} |
1907.09607 | 2964204277 | The success of deep learning in medical imaging is mostly achieved at the cost of a large labeled data set. Semi-supervised learning (SSL) provides a promising solution by leveraging the structure of unlabeled data to improve learning from a small set of labeled data. Self-ensembling is a simple approach used in SSL to encourage consensus among ensemble predictions of unknown labels, improving generalization of the model by making it less sensitive to the latent space. Currently, such an ensemble is obtained by randomization such as dropout regularization and random data augmentation. In this work, we hypothesize -- from the generalization perspective -- that self-ensembling can be improved by exploiting the stochasticity of a disentangled latent space. To this end, we present a stacked SSL model that utilizes unsupervised disentangled representation learning as the stochastic embedding for self-ensembling. We evaluate the presented model for multi-label classification using chest X-ray images, demonstrating its improved performance over related SSL models as well as the interpretability of its disentangled representations. | This work is mostly related to two lines of research: 1) SSL based on regularization with random transformations, and 2) disentangled representation learning and its use in SSL. In the former, consistency-based regularization is applied to ensemble predictions obtained by randomization techniques such as random data augmentation, dropout, and random max-pooling @cite_8 . This randomization was empirically shown to improve the generalization and stability of the SSL model, while its theoretical basis was recently shown to be related to the reduction of model sensitivity to the latent space @cite_9 @cite_10 . Motivated by this theory, in this work, we attempt to utilize the knowledge about the stochastic latent space -- obtained in unsupervised learning -- in this randomization process. | {
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_8"
],
"mid": [
"2789122432",
"2963229033",
"2530816535"
],
"abstract": [
"This paper introduces a novel measure-theoretic learning theory to analyze generalization behaviors of practical interest. The proposed learning theory has the following abilities: 1) to utilize the qualities of each learned representation on the path from raw inputs to outputs in representation learning, 2) to guarantee good generalization errors possibly with arbitrarily rich hypothesis spaces (e.g., arbitrarily large capacity and Rademacher complexity) and non-stable non-robust learning algorithms, and 3) to clearly distinguish each individual problem instance from each other. Our generalization bounds are relative to a representation of the data, and hold true even if the representation is learned. We discuss several consequences of our results on deep learning, one-shot learning and curriculum learning. Unlike statistical learning theory, the proposed learning theory analyzes each problem instance individually via measure theory, rather than a set of problem instances via statistics. Because of the differences in the assumptions and the objectives, the proposed learning theory is meant to be complementary to previous learning theory and is not designed to compete with it.",
"Deep learning networks have shown state-of-the-art performance in many image reconstruction problems. However, it is not well understood what properties of representation and learning may improve the generalization ability of the network. In this paper, we propose that the generalization ability of an encoder-decoder network for inverse reconstruction can be improved in two means. First, drawing from analytical learning theory, we theoretically show that a stochastic latent space will improve the ability of a network to generalize to test data outside the training distribution. Second, following the information bottleneck principle, we show that a latent representation minimally informative of the input data will help a network generalize to unseen input variations that are irrelevant to the output reconstruction. Therefore, we present a sequence image reconstruction network optimized by a variational approximation of the information bottleneck principle with stochastic latent space. In the application setting of reconstructing the sequence of cardiac transmembrane potential from body-surface potential, we assess the two types of generalization abilities of the presented network against its deterministic counterpart. The results demonstrate that the generalization ability of an inverse reconstruction network can be improved by stochasticity as well as the information bottleneck.",
"In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44 to 7.05 in SVHN with 500 labels and from 18.63 to 16.55 in CIFAR-10 with 4000 labels, and further to 5.12 and 12.16 by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels."
]
} |
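The consistency-based regularization described in the record above admits a very compact sketch: the same unlabeled batch is passed through the network twice under independent stochastic perturbations (dropout, augmentation), and the discrepancy between the two predictions is penalized alongside the supervised loss. The following is a minimal PyTorch illustration, not the authors' implementation; the network `net`, the weight `w`, and the batches are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def self_ensembling_loss(net, x_labeled, y, x_unlabeled, w=1.0):
    """Pi-model style SSL loss: supervised CE plus a consistency term.

    The two forward passes over the same unlabeled batch differ only
    through stochastic regularization (dropout stays active in train
    mode), so the MSE term penalizes sensitivity to that randomness.
    """
    net.train()  # keep dropout/augmentation randomness active
    sup_loss = F.cross_entropy(net(x_labeled), y)

    z1 = net(x_unlabeled)           # first stochastic prediction
    z2 = net(x_unlabeled).detach()  # second pass serves as the target
    cons_loss = F.mse_loss(torch.softmax(z1, dim=1),
                           torch.softmax(z2, dim=1))
    return sup_loss + w * cons_loss
```

Detaching one of the two passes (as in mean-teacher variants) is one common design choice; the original Pi-model backpropagates through both.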
1907.09607 | 2964204277 | The success of deep learning in medical imaging is mostly achieved at the cost of a large labeled data set. Semi-supervised learning (SSL) provides a promising solution by leveraging the structure of unlabeled data to improve learning from a small set of labeled data. Self-ensembling is a simple approach used in SSL to encourage consensus among ensemble predictions of unknown labels, improving generalization of the model by making it less sensitive to the latent space. Currently, such an ensemble is obtained by randomization such as dropout regularization and random data augmentation. In this work, we hypothesize -- from the generalization perspective -- that self-ensembling can be improved by exploiting the stochasticity of a disentangled latent space. To this end, we present a stacked SSL model that utilizes unsupervised disentangled representation learning as the stochastic embedding for self-ensembling. We evaluate the presented model for multi-label classification using chest X-ray images, demonstrating its improved performance over related SSL models as well as the interpretability of its disentangled representations. | Besides the approaches discussed above, there is also an active line of research in GAN-based SSL methods @cite_7 @cite_5 . The general idea is to add a classification objective to the original mini-max game and increase the capacity of the discriminator to associate the inputs with the corresponding labels. The presented work differs from this line of research by its emphasis on obtaining, regularizing, and interpreting the latent representations in SSL. | {
"cite_N": [
"@cite_5",
"@cite_7"
],
"mid": [
"2806321514",
"2548275288"
],
"abstract": [
"Deep learning algorithms require large amounts of labeled data which is difficult to attain for medical imaging. Even if a particular dataset is accessible, a learned classifier struggles to maintain the same level of performance on a different medical imaging dataset from a new or never-seen data source domain. Utilizing generative adversarial networks in a semi-supervised learning architecture, we address both problems of labeled data scarcity and data domain overfitting. For cardiac abnormality classification in chest X-rays, we demonstrate that an order of magnitude less data is required with semi-supervised learning generative adversarial networks than with conventional supervised learning convolutional neural networks. In addition, we demonstrate its robustness across different datasets for similar classification tasks.",
"Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7 of the classes have samples exhibiting diversity comparable to real ImageNet data."
]
} |
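The "classification objective added to the mini-max game" mentioned in the related-work text is typically realized by giving the discriminator K+1 outputs: K real classes plus one reserved "fake" class. A hedged sketch of the resulting discriminator loss is below; the function and variable names are assumptions for illustration and do not come from either cited paper.

```python
import torch
import torch.nn.functional as F

def ssl_gan_discriminator_loss(disc, x_labeled, y, x_unlabeled, x_fake, k):
    """Semi-supervised GAN discriminator loss with K+1 classes.

    Logit index k is reserved for 'fake'; real samples are pushed
    toward one of the K class logits, generated samples toward k.
    """
    # Supervised term: labeled real data classified into the K classes.
    loss_sup = F.cross_entropy(disc(x_labeled), y)

    # Unlabeled real data: probability of the 'fake' class should be low.
    p_fake = torch.softmax(disc(x_unlabeled), dim=1)[:, k]
    loss_real = -torch.log(1.0 - p_fake + 1e-8).mean()

    # Generated data: should be assigned to the 'fake' class k.
    fake_targets = torch.full((x_fake.size(0),), k, dtype=torch.long)
    loss_fake = F.cross_entropy(disc(x_fake), fake_targets)
    return loss_sup + loss_real + loss_fake
```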
1907.09607 | 2964204277 | The success of deep learning in medical imaging is mostly achieved at the cost of a large labeled data set. Semi-supervised learning (SSL) provides a promising solution by leveraging the structure of unlabeled data to improve learning from a small set of labeled data. Self-ensembling is a simple approach used in SSL to encourage consensus among ensemble predictions of unknown labels, improving generalization of the model by making it less sensitive to the latent space. Currently, such an ensemble is obtained by randomization such as dropout regularization and random data augmentation. In this work, we hypothesize -- from the generalization perspective -- that self-ensembling can be improved by exploiting the stochasticity of a disentangled latent space. To this end, we present a stacked SSL model that utilizes unsupervised disentangled representation learning as the stochastic embedding for self-ensembling. We evaluate the presented model for multi-label classification using chest X-ray images, demonstrating its improved performance over related SSL models as well as the interpretability of its disentangled representations. | An increased interest in SSL has also been seen in medical image analysis. The use of unsupervised representation learning for better generalization has been investigated for the task of myocardial segmentation @cite_1 . In @cite_5 , SSL was used on a similar X-ray data set, although the scope was limited to binary classification between normal and abnormal categories. To our knowledge, we present the first multi-label SSL model that investigates disentangled learning and self-ensembling of a stochastic latent space in medical image classification. | {
"cite_N": [
"@cite_5",
"@cite_1"
],
"mid": [
"2806321514",
"2963721223"
],
"abstract": [
"Deep learning algorithms require large amounts of labeled data which is difficult to attain for medical imaging. Even if a particular dataset is accessible, a learned classifier struggles to maintain the same level of performance on a different medical imaging dataset from a new or never-seen data source domain. Utilizing generative adversarial networks in a semi-supervised learning architecture, we address both problems of labeled data scarcity and data domain overfitting. For cardiac abnormality classification in chest X-rays, we demonstrate that an order of magnitude less data is required with semi-supervised learning generative adversarial networks than with conventional supervised learning convolutional neural networks. In addition, we demonstrate its robustness across different datasets for similar classification tasks.",
"The success and generalisation of deep learning algorithms heavily depend on learning good feature representations. In medical imaging this entails representing anatomical information, as well as properties related to the specific imaging setting. Anatomical information is required to perform further analysis, whereas imaging information is key to disentangle scanner variability and potential artefacts. The ability to factorise these would allow for training algorithms only on the relevant information according to the task. To date, such factorisation has not been attempted. In this paper, we propose a methodology of latent space factorisation relying on the cycle-consistency principle. As an example application, we consider cardiac MR segmentation, where we separate information related to the myocardium from other features related to imaging and surrounding substructures. We demonstrate the proposed method’s utility in a semi-supervised setting: we use very few labelled images together with many unlabelled images to train a myocardium segmentation neural network. Specifically, we achieve comparable performance to fully supervised networks using a fraction of labelled images in experiments on ACDC and a dataset from Edinburgh Imaging Facility QMRI. Code will be made available at https: github.com agis85 spatial_factorisation."
]
} |
1907.09591 | 2964224524 | Modern trajectory optimization based approaches to motion planning are fast, easy to implement, and effective on a wide range of robotics tasks. However, trajectory optimization algorithms have parameters that are typically set in advance (and rarely discussed in detail). Setting these parameters properly can have a significant impact on the practical performance of the algorithm, sometimes making the difference between finding a feasible plan and failing at the task entirely. We propose a method for leveraging past experience to learn how to automatically adapt the parameters of Gaussian Process Motion Planning (GPMP) algorithms. Specifically, we propose a differentiable extension to the GPMP2 algorithm, so that it can be trained end-to-end from data. We perform several experiments that validate our algorithm and illustrate the benefits of our proposed learning-based approach to motion planning. | Recent work in structured learning techniques offers avenues towards contending with these challenges. Several methods have focused on incorporating optimization within neural network architectures. For example, @cite_8 implicitly learns to perform nonlinear least squares optimization by learning an RNN that predicts its update steps, @cite_16 learns to perform gradient descent, and @cite_18 utilizes an ODE solver within its network. Other methods, like @cite_6 , learn a sequential quadratic program as a layer in the network, an approach which was later extended to solve model predictive control @cite_10 . @cite_9 learns structured dynamics models for reactive visuomotor control. Taking inspiration from this body of work, in this paper we present a differentiable inference-based motion planning technique that, through its structure, allows us to combine the strengths of both traditional model-based methods and modern learning methods, while mitigating their respective weaknesses. | {
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_16",
"@cite_10"
],
"mid": [
"2963755523",
"2895289727",
"2890290306",
"2963970238",
"2963775850",
"2963414638"
],
"abstract": [
"We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a blackbox differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.",
"Sum-of-squares objective functions are very popular in computer vision algorithms. However, these objective functions are not always easy to optimize. The underlying assumptions made by solvers are often not satisfied and many problems are inherently ill-posed. In this paper, we propose a neural nonlinear least squares optimization algorithm which learns to effectively optimize these cost functions even in the presence of adversities. Unlike traditional approaches, the proposed solver requires no hand-crafted regularizers or priors as these are implicitly learned from the data. We apply our method to the problem of motion stereo ie. jointly estimating the motion and scene geometry from pairs of images of a monocular sequence. We show that our learned optimizer is able to efficiently and effectively solve this challenging optimization problem.",
"",
"This paper presents OptNet, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end train-able deep networks. These layers encode constraints and complex dependencies between the hidden states that traditional convolutional and fully-connected layers often cannot capture. In this paper, we explore the foundations for such an architecture: we show how techniques from sensitivity analysis, bilevel optimization, and implicit differentiation can be used to exactly differentiate through these layers and with respect to layer parameters; we develop a highly efficient solver for these layers that exploits fast GPU-based batch solves within a primal-dual interior point method, and which provides backpropagation gradients with virtually no additional cost on top of the solve; and we highlight the application of these approaches in several problems. In one notable example, we show that the method is capable of learning to play mini-Sudoku (4x4) given just input and output games, with no a priori information about the rules of the game; this highlights the ability of our architecture to learn hard constraints better than other neural architectures.",
"The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.",
"In this paper we present foundations for using model predictive control (MPC) as a differentiable policy class in reinforcement learning. Specifically, we differentiate through MPC by using the KKT conditions of the convex approximation at a fixed point of the solver. Using this strategy, we are able to learn the cost and dynamics of a controller via end-to-end learning in a larger system. We empirically show results in an imitation learning setting, demonstrating that we can recover the underlying dynamics and cost more efficiently and reliably than with a generic neural network policy class."
]
} |
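A common thread in the cited works is making an optimizer differentiable so that it can be embedded in a larger learned model, either by unrolling its iterations or by implicit differentiation of its fixed point. The fragment below unrolls a few gradient steps of an inner objective with `create_graph=True` so that gradients flow back to the learned parameters `theta`; it is an illustrative sketch of the unrolling idea, not the differentiable GPMP2 extension proposed in the paper.

```python
import torch

def unrolled_inner_opt(inner_cost, theta, x0, steps=10, lr=0.1):
    """Differentiate through an inner gradient-descent solver.

    inner_cost(x, theta) -> scalar; theta holds outer (learned) params.
    create_graph=True keeps each unrolled step in the autograd graph,
    so d(solution)/d(theta) is available for end-to-end training.
    """
    x = x0.clone().requires_grad_(True)
    for _ in range(steps):
        (g,) = torch.autograd.grad(inner_cost(x, theta), x,
                                   create_graph=True)
        x = x - lr * g
    return x

# Usage sketch: an outer loss on the inner solution updates theta.
theta = torch.randn(3, requires_grad=True)
inner = lambda x, th: ((x - th) ** 2).sum()   # toy inner objective
x_star = unrolled_inner_opt(inner, theta, torch.zeros(3))
(x_star ** 2).sum().backward()                # gradients reach theta
```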
1907.09133 | 2963230693 | Sensors producing 3D point clouds such as 3D laser scanners and RGB-D cameras are widely used in robotics, be it for autonomous driving or manipulation. Aligning point clouds produced by these sensors is a vital component in such applications to perform tasks such as model registration, pose estimation, and SLAM. Iterative closest point (ICP) is the most widely used method for this task, due to its simplicity and efficiency. In this paper we propose a novel method which solves the optimisation problem posed by ICP using stochastic gradient descent (SGD). Using SGD allows us to improve the convergence speed of ICP without sacrificing solution quality. Experiments using Kinect as well as Velodyne data show that our proposed method is faster than existing methods, while obtaining solutions comparable to standard ICP. An additional benefit is robustness to parameters when processing data from different sensors. | Selecting corresponding points in the source and reference point clouds is an important step in ICP, as evidenced by the various ICP variants that have been proposed in the literature to improve this part of ICP. @cite_7 use simulated annealing to obtain a good initial starting point. Masuda and Yokoya @cite_16 combine a least median of squares estimator and ICP using random samples of points in each iteration. While this method uses random sub-sampling, the goal is to remove outliers from the point cloud processed by ICP, and it does not perform incremental updates of the transformation estimate as our proposed method does. | {
"cite_N": [
"@cite_16",
"@cite_7"
],
"mid": [
"2176551120",
"2136391352"
],
"abstract": [
"Registration and segmentation of multiple range images are important problems in range image analysis. We propose a new algorithm of range data registration and segmentation that is robust in the presence of outlying points (outliers) like noise and occlusion. The registration algorithm determines rigid motion parameters from a pair of range images. Our method is an integration of the iterative closest point (ICP) algorithm with random sampling and least median of squares (LMS or LMedS) estimator. The segmentation method classifies the input data points into four categories comprising inliers and 3 types of outliers. Finally, we integrate the inliers obtained from multiple range images to construct a data set representing an entire object. We have experimented with our method both on synthetic range images and on real range images taken by two kinds of rangefinders. The proposed method does not need preliminary processes such as smoothing or trimming of isolated points because of its robustness. It also offers the advantage of reducing the computational cost.",
"The need to register data is abundant in applications such as: world modeling, part inspection and manufacturing, object recognition, pose estimation, robotic navigation, and reverse engineering. Registration occurs by aligning the regions that are common to multiple images. The largest difficulty in performing this registration is dealing with outliers and local minima while remaining efficient. A commonly used technique, iterative closest point, is efficient but is unable to deal with outliers or avoid local minima. Another commonly used optimization algorithm, simulated annealing, is effective at dealing with local minima but is very slow. Therefore, the algorithm developed in this paper is a hybrid algorithm that combines the speed of iterative closest point with the robustness of simulated annealing. Additionally, a robust error function is incorporated to deal with outliers. This algorithm is incorporated into a complete modeling system that inputs two sets of range data, registers the sets, and outputs a composite model."
]
} |
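To make the SGD-on-ICP idea concrete, the sketch below performs repeated stochastic updates of the translation component only: sample a mini-batch of source points, match them to nearest neighbours in the reference cloud, and step along the gradient of the mean squared point-to-point error. Rotation updates, step-size schedules, and convergence checks are omitted; this is an illustration under stated assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def sgd_icp_translation_step(src, ref, ref_tree, t, batch=256, lr=0.05):
    """One stochastic update of the translation t in an ICP objective.

    Per-point residual: r_i = (p_i + t) - q_i, with q_i the nearest
    reference point; the gradient of mean ||r_i||^2 w.r.t. t is
    2 * mean(r_i), estimated here from a random mini-batch.
    """
    idx = np.random.choice(len(src), size=min(batch, len(src)),
                           replace=False)
    moved = src[idx] + t
    _, nn = ref_tree.query(moved)        # nearest-neighbour matches
    r = moved - ref[nn]                  # point-to-point residuals
    return t - lr * 2.0 * r.mean(axis=0)

# Usage sketch on two synthetic clouds offset by a known translation.
ref = np.random.rand(1000, 3)
src = ref + np.array([0.1, -0.05, 0.02])
tree = cKDTree(ref)
t = np.zeros(3)
for _ in range(200):
    t = sgd_icp_translation_step(src, ref, tree, t)
```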
1907.09133 | 2963230693 | Sensors producing 3D point clouds such as 3D laser scanners and RGB-D cameras are widely used in robotics, be it for autonomous driving or manipulation. Aligning point clouds produced by these sensors is a vital component in such applications to perform tasks such as model registration, pose estimation, and SLAM. Iterative closest point (ICP) is the most widely used method for this task, due to its simplicity and efficiency. In this paper we propose a novel method which solves the optimisation problem posed by ICP using stochastic gradient descent (SGD). Using SGD allows us to improve the convergence speed of ICP without sacrificing solution quality. Experiments using Kinect as well as Velodyne data show that our proposed method is faster than existing methods, while obtaining solutions comparable to standard ICP. An additional benefit is robustness to parameters when processing data from different sensors. | Chen and Medioni @cite_2 propose a robust version of ICP which minimises the distance between points and planes (point-to-plane ICP) instead of the traditional point-to-point distance. In point-to-plane ICP, matching points in the two point clouds are determined by intersecting a ray from the source point, in the direction of the source point's normal, with the reference point cloud's surface. This method is more robust to local minima when compared to standard ICP @cite_20 . @cite_1 combine point-to-point and point-to-plane ICP into a single probabilistic framework called Generalised-ICP (GICP). GICP is a plane-to-plane algorithm that models the local planar structure in both source and reference point clouds instead of just the reference's, as is typically done with point-to-plane ICP. Comprehensive summaries of different ICP variants and their performance and properties can be found in @cite_18 and @cite_8 . | {
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_1",
"@cite_2",
"@cite_20"
],
"mid": [
"2119851068",
"2271206385",
"",
"1883517952",
"2118631462"
],
"abstract": [
"The ICP (Iterative Closest Point) algorithm is widely used for geometric alignment of three-dimensional models when an initial estimate of the relative pose is known. Many variants of ICP have been proposed, affecting all phases of the algorithm from the selection and matching of points to the minimization strategy. We enumerate and classify many of these variants, and evaluate their effect on the speed with which the correct alignment is reached. In order to improve convergence for nearly-flat meshes with small features, such as inscribed surfaces, we introduce a new variant based on uniform sampling of the space of normals. We conclude by proposing a combination of ICP variants optimized for high speed. We demonstrate an implementation that is able to align two range images in a few tens of milliseconds, assuming a good initial guess. This capability has potential application to real-time 3D model acquisition and model-based tracking.",
"The topic of this review is geometric registration in robotics. Registrationalgorithms associate sets of data into a common coordinate system.They have been used extensively in object reconstruction, inspection,medical application, and localization of mobile robotics. We focus onmobile robotics applications in which point clouds are to be registered.While the underlying principle of those algorithms is simple, manyvariations have been proposed for many different applications. In thisreview, we give a historical perspective of the registration problem andshow that the plethora of solutions can be organized and differentiatedaccording to a few elements. Accordingly, we present a formalizationof geometric registration and cast algorithms proposed in the literatureinto this framework. Finally, we review a few applications of thisframework in mobile robotics that cover different kinds of platforms,environments, and tasks. These examples allow us to study the specificrequirements of each use case and the necessary configuration choicesleading to the registration implementation. Ultimately, the objective ofthis review is to provide guidelines for the choice of geometric registrationconfiguration.",
"",
"The problem of creating a complete model of a physical object is studied. Although this may be possible using intensity images, the authors use range images which directly provide access to three-dimensional information. The first problem that needs to be solved is to find the transformation between the different views. Previous approaches have either assumed this transformation to be known (which is extremely difficult for a complete model) or computed it with feature matching (which is not accurate enough for integration. The authors propose an approach that works on range data directly and registers successive views with enough overlapping area to get an accurate transformation between views. This is performed by minimizing a functional that does not require point-to-point matches. Details are given of the registration method and modeling procedure, and they are illustrated on range images of complex objects. >",
"The three-dimensional reconstruction of real objects is an important topic in computer vision. Most of the acquisition systems are limited to reconstruct a partial view of the object obtaining in blind areas and occlusions, while in most applications a full reconstruction is required. Many authors have proposed techniques to fuse 3D surfaces by determining the motion between the different views. The first problem is related to obtaining a rough registration when such motion is not available. The second one is focused on obtaining a fine registration from an initial approximation. In this paper, a survey of the most common techniques is presented. Furthermore, a sample of the techniques has been programmed and experimental results are reported to determine the best method in the presence of noise and outliers, providing a useful guide for an interested reader including a Matlab toolbox available at the webpage of the authors."
]
} |
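For reference, the point-to-point and point-to-plane objectives contrasted in this record differ only in how the residual is measured. With p_i a source point, q_i its matched reference point, and n_i the surface normal at q_i, the two cost functions are (standard formulations, written here for clarity):

```latex
% Point-to-point ICP objective
E_{pp}(R, t) = \sum_i \left\| R p_i + t - q_i \right\|^2
% Point-to-plane ICP objective (Chen and Medioni)
E_{pl}(R, t) = \sum_i \left( (R p_i + t - q_i) \cdot n_i \right)^2
```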
1907.09133 | 2963230693 | Sensors producing 3D point clouds such as 3D laser scanners and RGB-D cameras are widely used in robotics, be it for autonomous driving or manipulation. Aligning point clouds produced by these sensors is a vital component in such applications to perform tasks such as model registration, pose estimation, and SLAM. Iterative closest point (ICP) is the most widely used method for this task, due to its simplicity and efficiency. In this paper we propose a novel method which solves the optimisation problem posed by ICP using stochastic gradient descent (SGD). Using SGD allows us to improve the convergence speed of ICP without sacrificing solution quality. Experiments using Kinect as well as Velodyne data show that our proposed method is faster than existing methods, while obtaining solutions comparable to standard ICP. An additional benefit is robustness to parameters when processing data from different sensors. | Once the corresponding point pairs are selected, the next challenge is to use this information to find the optimal transformation. Several optimisation techniques are used to minimise ICP's cost function. Fitzgibbon @cite_3 presents a method that uses the non-linear Levenberg-Marquardt algorithm @cite_6 , which combines batch gradient descent and Gauss-Newton, to optimise the cost function. Levine @cite_5 instead uses simulated annealing for the minimisation. | {
"cite_N": [
"@cite_5",
"@cite_6",
"@cite_3"
],
"mid": [
"2071992612",
"2432517183",
"2004312117"
],
"abstract": [
"Concerns the problem of range image registration for the purpose of building surface models of 3D objects. The registration task involves finding the translation and rotation parameters which properly align overlapping views of the object so as to reconstruct from these partial surfaces, an integrated surface representation of the object. The registration task is expressed as an optimization problem. We define a function which measures the quality of the alignment between the partial surfaces contained in two range images as produced by a set of motion parameters. This function computes a sum of Euclidean distances from control points on one surfaces to corresponding points on the other. The strength of this approach is in the method used to determine point correspondences. It reverses the rangefinder calibration process, resulting in equations which can be used to directly compute the location of a point in a range image corresponding to an arbitrary point in 3D space. A stochastic optimization technique, very fast simulated reannealing (VFSR), is used to minimize the cost function. Dual-view registration experiments yielded excellent results in very reasonable time. A multiview registration experiment took a long time. A complete surface model was then constructed from the integration of multiple partial views. The effectiveness with which registration of range images can be accomplished makes this method attractive for many practical applications where surface models of 3D objects must be constructed. >",
"From the Publisher: This is the revised and greatly expanded Second Edition of the hugely popular Numerical Recipes: The Art of Scientific Computing. The product of a unique collaboration among four leading scientists in academic research and industry, Numerical Recipes is a complete text and reference book on scientific computing. In a self-contained manner it proceeds from mathematical and theoretical considerations to actual practical computer routines. With over 100 new routines (now well over 300 in all), plus upgraded versions of many of the original routines, this book is more than ever the most practical, comprehensive handbook of scientific computing available today. The book retains the informal, easy-to-read style that made the first edition so popular, with many new topics presented at the same accessible level. In addition, some sections of more advanced material have been introduced, set off in small type from the main body of the text. Numerical Recipes is an ideal textbook for scientists and engineers and an indispensable reference for anyone who works in scientific computing. Highlights of the new material include a new chapter on integral equations and inverse methods; multigrid methods for solving partial differential equations; improved random number routines; wavelet transforms; the statistical bootstrap method; a new chapter on \"less-numerical\" algorithms including compression coding and arbitrary precision arithmetic; band diagonal linear systems; linear algebra on sparse matrices; Cholesky and QR decomposition; calculation of numerical derivatives; Pade approximants, and rational Chebyshev approximation; new special functions; Monte Carlo integration in high-dimensional spaces; globally convergent methods for sets of nonlinear equations; an expanded chapter on fast Fourier methods; spectral analysis on unevenly sampled data; Savitzky-Golay smoothing filters; and two-dimensional Kolmogorov-Smirnoff tests. All this is in addition to material on such basic top",
"Abstract This paper introduces a new method of registering point sets. The registration error is directly minimized using general-purpose non-linear optimization (the Levenberg–Marquardt algorithm). The surprising conclusion of the paper is that this technique is comparable in speed to the special-purpose Iterated Closest Point algorithm, which is most commonly used for this task. Because the routine directly minimizes an energy function, it is easy to extend it to incorporate robust estimation via a Huber kernel, yielding a basin of convergence that is many times wider than existing techniques. Finally, we introduce a data structure for the minimization based on the chamfer distance transform, which yields an algorithm that is both faster and more robust than previously described methods."
]
} |
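The Levenberg-Marquardt step used in LM-ICP interpolates between the two behaviours named in the related-work text: with stacked residual vector r and its Jacobian J, each update solves a damped normal equation in which a large damping factor lambda yields a (scaled) gradient-descent step and a small lambda yields a Gauss-Newton step:

```latex
\Delta x = -\left( J^\top J + \lambda I \right)^{-1} J^\top r
```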
1907.09133 | 2963230693 | Sensors producing 3D point clouds such as 3D laser scanners and RGB-D cameras are widely used in robotics, be it for autonomous driving or manipulation. Aligning point clouds produced by these sensors is a vital component in such applications to perform tasks such as model registration, pose estimation, and SLAM. Iterative closest point (ICP) is the most widely used method for this task, due to its simplicity and efficiency. In this paper we propose a novel method which solves the optimisation problem posed by ICP using stochastic gradient descent (SGD). Using SGD allows us to improve the convergence speed of ICP without sacrificing solution quality. Experiments using Kinect as well as Velodyne data show that our proposed method is faster than existing methods, while obtaining solutions comparable to standard ICP. An additional benefit is robustness to parameters when processing data from different sensors. | Another avenue of research is methods that improve the ability of ICP to handle an unknown degree of overlap between the point clouds. This includes the use of a trimmed squares cost function to estimate the optimal transformation @cite_21 , the rejection of corresponding point pairs based on a threshold defined by the standard deviation of point pair distances @cite_15 , or the use of a semi-differential invariant for correspondence estimation @cite_9 . | {
"cite_N": [
"@cite_9",
"@cite_15",
"@cite_21"
],
"mid": [
"2130109198",
"1602060479",
"2140711847"
],
"abstract": [
"A method for matching 3-D curves under Euclidean motions is presented. Our approach uses a semi-differential invariant description requiring only first derivatives and one reference point, thus avoiding the computation of high order derivatives. A novel curve similarity measure building on the notion of spl epsiv -reciprocal correspondence is proposed. It is shown that by combining spl epsiv -reciprocal correspondence with the robust least median of squares motion estimation, the registration of partially occluded curves can be accomplished. An experiment with real curves extracted from 3-D surfaces demonstrates that curve matching can be successfully performed even on data from a simple and cheap 3-D sensor. >",
"Robust registration of two 3-D point sets is a common problem in computer vision. The iterative closest point (ICP) algorithm is undoubtedly the most popular algorithm for solving this kind of problem. In this paper, we present the Picky ICP algorithm, which has been created by merging several extensions of the standard ICP algorithm, thus improving its robustness and computation time. Using pure 3-D point sets as input data, we do not consider additional information like point color or neighborhood relations. In addition to the standard ICP algorithm and the Picky ICP algorithm proposed in this paper, a robust algorithm due to Masuda and Yokoya and the RICP algorithm by are evaluated. We have experimentally determined the basin of convergence, robustness to noise and outliers, and computation time of these four ICP based algorithms.",
"The problem of geometric alignment of two roughly preregistered, partially overlapping, rigid, noisy 3D point sets is considered. A new natural and simple, robustified extension of the popular Iterative Closest Point (ICP) algorithm (Besl and McKay, 1992) is presented, called the Trimmed ICP (TrICP). The new algorithm is based on the consistent use of the least trimmed squares (LTS) approach in all phases of the operation. Convergence is proved and an efficient implementation is discussed. TrICP is fast, applicable to overlaps under 50 , robust to erroneous measurements and shape defects, and has easy-to-set parameters. ICP is a special case of TrICP when the overlap parameter is 100 . Results of testing the new algorithm are shown."
]
} |
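The trimming idea from @cite_21 is simple to state in code: after matching, keep only the fraction of point pairs with the smallest squared residuals and minimise over those. A minimal sketch, with assumed helper names, follows.

```python
import numpy as np

def trimmed_inliers(src, matched_ref, overlap=0.6):
    """Select the best `overlap` fraction of matched pairs (Trimmed ICP).

    src, matched_ref: (N, 3) arrays of already-matched point pairs.
    Returns indices of the pairs kept for the next alignment step.
    """
    sq_res = np.sum((src - matched_ref) ** 2, axis=1)
    n_keep = max(1, int(overlap * len(sq_res)))
    return np.argsort(sq_res)[:n_keep]   # smallest residuals first
```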
1907.09189 | 2963689651 | This paper is concerned with evaluating different multiagent learning (MAL) algorithms in problems where individual agents may be heterogeneous, in the sense of utilizing different learning strategies, without the opportunity for prior agreements or information regarding coordination. Such a situation arises in ad hoc team problems, a model of many practical multiagent systems applications. Prior work in multiagent learning has often been focussed on homogeneous groups of agents, meaning that all agents were identical and a priori aware of this fact. Also, those algorithms that are specifically designed for ad hoc team problems are typically evaluated in teams of agents with fixed behaviours, as opposed to agents which are adapting their behaviours. In this work, we empirically evaluate five MAL algorithms, representing major approaches to multiagent learning but originally developed with the homogeneous setting in mind, to understand their behaviour in a set of ad hoc team problems. All teams consist of agents which are continuously adapting their behaviours. The algorithms are evaluated with respect to a comprehensive characterisation of repeated matrix games, using performance criteria that include considerations such as attainment of equilibrium, social welfare and fairness. Our main conclusion is that there is no clear winner. However, the comparative evaluation also highlights the relative strengths of different algorithms with respect to the type of performance criteria, e.g., social welfare vs. attainment of equilibrium. | Harsanyi pioneered the study of incomplete information games. In his 1967 paper @cite_31 , he describes the Bayesian game, a game in which players have beliefs about missing information. He develops the concept of the Bayesian Nash equilibrium @cite_34 , in which each player plays a best response against the other players, based on the personal beliefs of the player. Jordan @cite_17 showed that, for any repeated game, if the players play a Bayesian Nash equilibrium in each repetition, and if the personal beliefs of the players satisfy certain conditions, then this will converge to a true Nash equilibrium. | {
"cite_N": [
"@cite_31",
"@cite_34",
"@cite_17"
],
"mid": [
"",
"2102794165",
"2129760941"
],
"abstract": [
"",
"Part I of this paper has described a new theory for the analysis of games with incomplete information. It has been shown that, if the various players' subjective probability distributions satisfy a certain mutual-consistency requirement, then any given game with incomplete information will be equivalent to a certain game with complete information, called the “Bayes-equivalent” of the original game, or briefly a “Bayesian game.” Part II of the paper will now show that any Nash equilibrium point of this Bayesian game yields a “Bayesian equilibrium point” for the original game and conversely. This result will then be illustrated by numerical examples, representing two-person zero-sum games with incomplete information. We shall also show how our theory enables us to analyze the problem of exploiting the opponent's erroneous beliefs. However, apart from its indubitable usefulness in locating Bayesian equilibrium points, we shall show it on a numerical example the Bayes-equivalent of a two-person cooperative game that the normal form of a Bayesian game is in many cases a highly unsatisfactory representation of the game situation and has to be replaced by other representations e.g., by the semi-normal form. We shall argue that this rather unexpected result is due to the fact that Bayesian games must be interpreted as games with “delayed commitment” whereas the normal-form representation always envisages a game with “immediate commitment.”",
"This paper studies myopic Bayesian learning processes for finite-player, finite-strategy normal form games. Initially, each player is presumed to know his own payoff function but not the payoff functions of the other players. Assuming that the common prior distribution of payoff functions satisfies independence across players, it is proved that the conditional distributions on strategies converge to a set of Nash equilibria with probability one. Under a further assumption that the prior distributions are sufficiently uniform, convergence to a set of Nash equilibria is proved for every profile of payoff functions, that is, every normal form game."
]
} |
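For completeness, the Bayesian Nash equilibrium referenced in this record can be stated as follows: a strategy profile sigma* is a Bayesian Nash equilibrium if, for every player i and every type theta_i, the prescribed action maximises i's expected utility under i's beliefs about the other players' types (standard definition, written here in LaTeX):

```latex
\sigma_i^*(\theta_i) \in \arg\max_{a_i}
  \sum_{\theta_{-i}} p(\theta_{-i} \mid \theta_i)\,
  u_i\!\left(a_i, \sigma_{-i}^*(\theta_{-i}); \theta_i, \theta_{-i}\right)
```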