input (string, 286–19k chars) | output (string, 1–15.8k chars) | metadata (dict) | _instance_id (string, 15–62 chars)
---|---|---|---|
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
There has been a large amount of interest, both in the past and particularly recently, into the relative advantage of different families of universal function approximators, for instance neural networks, polynomials, rational functions, etc.
However, current research has focused almost exclusively on understanding this problem in a worst-case setting: e.g. characterizing the best L1 or L∞ approximation in a box (or sometimes even under an adversarially constructed data distribution).
In this setting many classical tools from approximation theory can be effectively used.
However, in typical applications we expect data to be high dimensional, but structured -- so, it would only be important to approximate the desired function well on the relevant part of its domain, e.g. a small manifold on which real input data actually lies.
Moreover, even within this domain the desired quality of approximation may not be uniform; for instance in classification problems, the approximation needs to be more accurate near the decision boundary.
These issues, to the best of our knowledge, have remained unexplored until now.
With this in mind, we analyze the performance of neural networks and polynomial kernels in a natural regression setting where the data enjoys sparse latent structure, and the labels depend in a simple way on the latent variables.
We give an almost-tight theoretical analysis of the performance of both neural networks and polynomials for this problem, as well as verify our theory with simulations.
Our results both involve new (complex-analytic) techniques, which may be of independent interest, and show substantial qualitative differences with what is known in the worst-case setting.
The concept of representational power has always been of great interest in machine learning.
In part the reason for this is that classes of "universal approximators" abound -- e.g. polynomials, radial bases, rational functions, etc.
Some of these were known to mathematicians as early as Bernstein and Lebesgue -- yet it is apparent that not all such classes perform well empirically.
In recent years, the class of choice has been neural networks, in tasks as simple as supervised classification and as complicated as reinforcement learning -- inspiring an immense amount of theoretical study.
Research has focused on several angles of this question, e.g. comparative power to other classes of functions (Yarotsky, 2017; Safran and Shamir, 2017; BID0), the role of depth and the importance of architecture (Telgarsky, 2016; Safran and Shamir, 2017; BID6), and many other topics such as their generalization properties and choice of optimization procedure (BID7; Zhang et al., 2017; BID0).
Our results fall in the first category: comparing the relative power of polynomial kernels and ReLU networks -- with a significant twist that makes our results more relevant to real-life settings.
The flavor of existing results in this subject is roughly the following: every function in a class C1 can be approximately represented as a function in a different class C2, with some blowup in the size/complexity of the function (e.g. degree, number of nodes, depth).
The unsatisfying aspect of such results is the "worst-case" way in which the approximation is measured: typically, one picks a domain coarsely relevant for the approximation (e.g. an interval or a box), and considers the L∞, L2, L1, ... norm of the difference between the two functions on this domain.
In some of the constructions (e.g. BID6; Safran and Shamir, 2017), the evaluation is even more adversarial: it's the mean-square error over a specially-designed measure.
Instead, in practically relevant settings, it's reasonable to expect that approximating a predictor function well only on some "relevant domain" would suffice, e.g. near the prediction boundary or near a lower-dimensional manifold on which the data lives, as would be the case in settings like images, videos, financial data, etc.
A good image classifier need not care about "typical" data points from the ℓ∞-ball, which mostly look like white noise.
The difficulty with the above question is that it's not immediate how to formalize what the "relevant domain" is or how to model the data distribution.
We tackle here a particularly simple (but natural) incarnation of this question: namely, when the data distribution has sparse latent structure, and all we ask is to predict a linear function of the latent variables based upon (noisy) observations.
The assumption of sparsity is very natural in the context of realistic, high-dimensional data: sparsity under the correct choice of basis is essentially the reason that methods such as lossy image compression work well, and it is also the engine behind the entire field of compressed sensing (BID5).
In this paper, we considered the problem of providing representation lower and upper bounds for different classes of universal approximators in a natural statistical setup that exhibits sparse latent structure.
We hope this will inspire researchers to move beyond the worst-case setup when considering the representational power of different function classes.
Figure 1: Degree vs log L2 error on the test set for different values of n, the dimensionality of the problem. This plot was generated using a training set of 8000 examples from the generative model and a test set of 1000 additional examples; error is unnormalized.
The techniques we develop are interesting in their own right: unlike standard approximation theory setups, we need to design polynomials which may only need to be accurate in certain regions.
Conceivably, in classification setups, similar wisdom may be helpful: the approximator needs to be accurate only near the decision boundary.
Finally, we conclude with a tantalizing open problem: in general it is possible to obtain non-trivial sparse recovery guarantees for LASSO even when the sparsity k is nearly of the same order as n, under assumptions such as RIP.
Since LASSO can be computed quickly using iterated soft thresholding (ISTA and FISTA, see Beck and Teboulle (2009)), we see that sufficiently deep neural networks can compute a near-optimal solution in this setting as well.
It would be interesting to determine whether shallower networks and polynomials of degree polylog(n) can achieve similar guarantees. | Beyond-worst-case analysis of the representational power of ReLU nets & polynomial kernels -- in particular in the presence of sparse latent structure. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:709 |
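The open problem above leans on the fact that LASSO can be solved by iterated soft thresholding, so that unrolling the iterations yields a deep network computing a near-optimal solution. As a point of reference, here is a minimal NumPy sketch of ISTA; the step size and regularization weight are illustrative defaults, not values taken from the paper.

```python
import numpy as np

def soft_threshold(v, thresh):
    """Entry-wise soft-thresholding: shrink each coordinate toward zero by `thresh`."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def ista(A, b, lam, eta=None, n_iters=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by iterative soft thresholding.

    A: (m, n) design matrix, b: (m,) observations, lam: l1 weight,
    eta: step size (defaults to 1 / ||A||_2^2 so each iteration is non-expansive).
    """
    if eta is None:
        eta = 1.0 / (np.linalg.norm(A, 2) ** 2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                        # gradient of the smooth part
        x = soft_threshold(x - eta * grad, eta * lam)   # proximal (shrinkage) step
    return x
```

Each iteration is an affine map followed by an entry-wise shrinkage nonlinearity, which is exactly the shape of computation a ReLU-style network layer can express; T unrolled iterations correspond to a depth-T network.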
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
End-to-end acoustic-to-word speech recognition models have recently gained popularity because they are easy to train, scale well to large amounts of training data, and do not require a lexicon.
In addition, word models may also be easier to integrate with downstream tasks such as spoken language understanding, because inference (search) is much simplified compared to phoneme, character or any other sort of sub-word units.
In this paper, we describe methods to construct contextual acoustic word embeddings directly from a supervised sequence-to-sequence acoustic-to-word speech recognition model using the learned attention distribution.
On a suite of 16 standard sentence evaluation tasks, our embeddings show competitive performance against a word2vec model trained on the speech transcriptions.
In addition, we evaluate these embeddings on a spoken language understanding task and observe that our embeddings match the performance of text-based embeddings in a pipeline of first performing speech recognition and then constructing word embeddings from transcriptions.
The task of learning fixed-size representations for variable length data like words or sentences, either text or speech-based, is an interesting problem and a focus of much current research.
In the natural language processing community, methods like word2vec BID0 , GLoVE BID1 , CoVe BID2 and ELMo BID3 have become increasingly popular, due to their utility in several natural language processing tasks.
Similar research has progressed in the speech recognition community, where however the input is a sequence of short-term audio features, rather than words or characters.
Therefore, the variability in speakers, acoustics or microphones for different occurrences of the same word or sentence adds to the challenge. Prior work towards the problem of learning word representations from variable-length acoustic frames involved either providing word boundaries to align speech and text (BID4), or chunking ("chopping" or "padding") input speech into fixed-length segments that usually span only one word (BID5; BID6; BID7; BID8).
Since these techniques learn acoustic word embeddings from audio fragment and word pairs obtained via a given segmentation of the audio data, they ignore the specific audio context associated with a particular word.
So the resulting word embeddings do not capture the contextual dependencies in speech.
In contrast, our work constructs individual acoustic word embeddings grounded in utterance-level acoustics. In this paper, we present different methods of obtaining acoustic word embeddings from an attention-based sequence-to-sequence model (BID9; BID10; BID11) trained for direct Acoustic-to-Word (A2W) speech recognition (BID12).
Using this model, we jointly learn to automatically segment and classify input speech into individual words, hence getting rid of the problem of chunking or requiring pre-defined word boundaries.
As our A2W model is trained at the utterance level, we show that we can not only learn acoustic word embeddings, but also learn them in the proper context of their containing sentence.
We also evaluate our contextual acoustic word embeddings on a spoken language understanding task, demonstrating that they can be useful in non-transcription downstream tasks. Our main contributions in this paper are the following:
1. We demonstrate the usability of attention not only for aligning words to acoustic frames without any forced alignment but also for constructing Contextual Acoustic Word Embeddings (CAWE).
2. We demonstrate that our methods to construct word representations (CAWE) directly from a speech recognition model are highly competitive with the text-based word2vec embeddings BID0 , as evaluated on 16 standard sentence evaluation benchmarks.
3. We demonstrate the utility of CAWE on a speech-based downstream task of Spoken Language Understanding showing that pretrained speech models could be used for transfer learning similar to VGG in vision BID13 or CoVe in natural language understanding BID2 .
We present a method to learn contextual acoustic word embeddings from a sequence-to-sequence acoustic-to-word speech recognition model that learns to jointly segment and classify speech.
We analyze the role of attention in constructing contextual acoustic word embeddings, and find our acoustic embeddings to be highly competitive with word2vec (CBOW) text embeddings.
We discuss two variants of such contextual acoustic word embeddings which outperform the simple unweighted average method by up to 34% on semantic textual similarity tasks.
The embeddings also matched the performance of text-based embeddings in spoken language understanding, showing the use of this model as a pre-trained model for other speech-based downstream tasks.
We surmise that contextual audio embeddings will generalize and improve downstream tasks in a way that is similar to their text counterparts, despite the additional complexity presented by noisy audio input.
In the future, we will explore ways to scale our model to larger corpora, larger vocabularies and compare with non-contextual acoustic word embedding methods. | Methods to learn contextual acoustic word embeddings from an end-to-end speech recognition model that perform competitively with text-based word embeddings. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:71 |
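To make the attention-based construction described above concrete, the following sketch shows one natural instantiation: each decoded word's embedding is the attention-weighted average of the encoder states of the utterance. It is a hedged illustration of the general recipe rather than the paper's exact CAWE variants, and all array names are assumptions.

```python
import numpy as np

def contextual_acoustic_word_embeddings(encoder_states, attention, word_ids):
    """Build one embedding per decoded word from a seq2seq A2W model.

    encoder_states: (T, d) encoder hidden states over T acoustic frames.
    attention:      (U, T) attention weights, one distribution per decoded word.
    word_ids:       length-U sequence of decoded word ids.
    Returns a dict mapping word id -> list of (d,) embeddings, one per occurrence.
    """
    embeddings = {}
    for u, word in enumerate(word_ids):
        alpha = attention[u] / (attention[u].sum() + 1e-8)  # renormalize this word's attention
        emb = alpha @ encoder_states                        # attention-weighted average of frames
        embeddings.setdefault(word, []).append(emb)
    return embeddings
```

Averaging the per-occurrence vectors over a corpus gives a word2vec-style lookup table, while keeping them per utterance preserves the contextual character the paper emphasizes.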
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent deep generative models can provide photo-realistic images as well as visual or textual content embeddings useful to address various tasks of computer vision and natural language processing.
Their usefulness is nevertheless often limited by the lack of control over the generative process or the poor understanding of the learned representation.
To overcome these major issues, very recent works have shown the interest of studying the semantics of the latent space of generative models.
In this paper, we propose to advance on the interpretability of the latent space of generative models by introducing a new method to find meaningful directions in the latent space of any generative model along which we can move to control precisely specific properties of the generated image like position or scale of the object in the image.
Our method is weakly supervised and particularly well suited for the search of directions encoding simple transformations of the generated image, such as translation, zoom or color variations.
We demonstrate the effectiveness of our method qualitatively and quantitatively, both for GANs and variational auto-encoders.
With the success of recent generative models to produce high-resolution photo-realistic images (Karras et al., 2018; Brock et al., 2018; Razavi et al., 2019) , an increasing number of applications are emerging, such as image in-painting, dataset-synthesis, and deep-fakes.
However, the use of generative models is often limited by the lack of control over the generated images.
Indeed, more control could, for instance, be used to improve existing approaches which aim at generating new training examples (Bowles et al., 2018) by allowing the user to choose more specific properties of the generated images.
First attempts in this direction showed that one can modify an attribute of a generated image by adding a learned vector to its latent code (Radford et al., 2015) or by combining the latent codes of two images (Karras et al., 2018) .
Moreover, the study of the latent space of generative models provides insights about its structure which is of particular interest as generative models are also powerful tools to learn unsupervised data representations.
For example, Radford et al. (2015) observed on auto-encoders trained on datasets with labels for some factors of variations, that their latent spaces exhibit a vector space structure where some directions encode the said factors of variations.
We suppose that images result from underlying factors of variation such as the presence of objects, their relative positions or the lighting of the scene.
We distinguish two categories of factors of variations.
Modal factors of variation are discrete values that correspond to isolated clusters in the data distribution, such as the category of the generated object.
On the other hand, the size of an object or its position are described by Continuous factors of variations, expressed in a range of possible values.
As humans, we naturally describe images by using factors of variations suggesting that they are an efficient representation of natural images.
For example, to describe a scene, one likely enumerates the objects seen, their relative positions and relations and their characteristics (Berg et al., 2012) .
This way of characterizing images is also described in Krishna et al. (2016) .
Thus, explaining the latent space of generative models through the lens of factors of variation is promising.
However, the control over the image generation is often limited to discrete factors and requires both labels and an encoder model.
Moreover, for continuous factors of variations described by a real parameter t, previous works do not provide a way to get precise control over t.
In this paper, we propose a method to find meaningful directions in the latent space of generative models that can be used to control precisely specific continuous factors of variations while the literature has mainly tackled semantic labeled attributes like gender, emotion or object category (Radford et al., 2015; Odena et al., 2016) .
We test our method on image generative models for three factors of variation of an object in an image: vertical position, horizontal position and scale.
Our method has the advantage of not requiring a labeled dataset nor a model with an encoder.
It could be adapted to other factors of variations such as rotations, change of brightness, contrast, color or more sophisticated transformations like local deformations.
However, we focused on the position and scale as these are quantities that can be evaluated, allowing us to measure quantitatively the effectiveness of our method.
We demonstrate both qualitatively and quantitatively that such directions can be used to control precisely the generative process and show that our method can reveal interesting insights about the structure of the latent space.
Our main contributions are:
• We propose a method to find interpretable directions in the latent space of generative models, corresponding to parametrizable continuous factors of variations of the generated image.
• We show that properties of generated images can be controlled precisely by sampling latent representations along linear directions.
• We propose a novel reconstruction loss for inverting generative models with gradient descent.
• We give insights of why inverting generative models with optimization can be difficult by reasoning about the geometry of the natural image manifold.
• We study the impacts of disentanglement on the ability to control the generative models.
Generative models are increasingly more powerful but suffer from little control over the generative process and the lack of interpretability in their latent representations.
In this context, we propose a method to extract meaningful directions in the latent space of such models and use them to control precisely some properties of the generated images.
We show that a linear subspace of the latent space of BigGAN can be interpreted in term of intuitive factors of variation (namely translation and scale).
It is an important step toward the understanding of the representations learned by generative models. | A model to control the generation of images with GAN and beta-VAE with regard to scale and position of the objects | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:710 |
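At generation time, the control described above amounts to moving a latent code along a learned direction. The sketch below assumes a pretrained `generator` and an already-found `direction` for one factor of variation (e.g. horizontal position); both names are placeholders rather than artifacts of the paper.

```python
import numpy as np

def traverse_latent(generator, z, direction, magnitudes):
    """Generate a sequence of images by moving z along one latent direction.

    generator:  callable mapping a (d,) latent vector to an image.
    z:          (d,) base latent code.
    direction:  (d,) learned direction encoding one factor of variation.
    magnitudes: iterable of scalars t controlling the strength of the edit.
    """
    direction = direction / np.linalg.norm(direction)   # keep the step scale comparable
    return [generator(z + t * direction) for t in magnitudes]

# Example: sweep the factor from "far left" to "far right"
# images = traverse_latent(sample_image, z, horizontal_dir, np.linspace(-3, 3, 7))
```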
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Modern neural network architectures use structured linear transformations, such as low-rank matrices, sparse matrices, permutations, and the Fourier transform, to improve inference speed and reduce memory usage compared to general linear maps.
However, choosing which of the myriad structured transformations to use (and its associated parameterization) is a laborious task that requires trading off speed, space, and accuracy.
We consider a different approach: we introduce a family of matrices called kaleidoscope matrices (K-matrices) that provably capture any structured matrix with near-optimal space (parameter) and time (arithmetic operation) complexity.
We empirically validate that K-matrices can be automatically learned within end-to-end pipelines to replace hand-crafted procedures, in order to improve model quality.
For example, replacing channel shuffles in ShuffleNet improves classification accuracy on ImageNet by up to 5%.
Learnable K-matrices can also simplify hand-engineered pipelines---we replace filter bank feature computation in speech data preprocessing with a kaleidoscope layer, resulting in only 0.4% loss in accuracy on the TIMIT speech recognition task.
K-matrices can also capture latent structure in models: for a challenging permuted image classification task, adding a K-matrix to a standard convolutional architecture can enable learning the latent permutation and improve accuracy by over 8 points.
We provide a practically efficient implementation of our approach, and use K-matrices in a Transformer network to attain 36% faster end-to-end inference speed on a language translation task.
Structured linear maps are fundamental and ubiquitous in modern machine learning.
Their efficiency in speed (fast algorithms) and space (few parameters) can reduce computation and memory usage.
They include fixed specialized transforms such as the discrete Fourier transform (DFT) and Hadamard transform used in signal processing (Cooley et al., 1969) , convolutions for image, language, and speech modeling (Gu et al., 2018) , and low-rank and sparse matrices for efficient storage and inference on edge devices (Yu et al., 2017) .
Forms of structure such as sparsity have been at the forefront of recent advances in ML (Frankle & Carbin, 2019) , and are critical for on-device and energy-efficient models, two application areas of tremendous recent interest (Tsidulko, 2019; Schwartz et al., 2019) .
There are a plethora of classes of structured linear maps, each with a significantly different representation, algorithm, and implementation.
They have different tradeoffs in terms of inference speed, training speed, and accuracy, and the conventional wisdom is that no one class works uniformly well across all applications.
As a result, ML practitioners currently hand-pick specific classes of structured linear maps for each of their applications.
This is a difficult and labor-intensive task.
Ideally, these problems should be addressed with a universal representation for structured linear maps: (i) Such a parameterization should be expressive enough to capture important classes of structure, with a nearly tight parameter count and runtime: the space required to represent the linear map should be close to optimal, and the resulting algorithm for matrix vector multiplication should be close to the fastest possible algorithm.
(ii) The parameterization should be differentiable in order to be learned as a component of end-to-end ML pipelines, enabling it to easily be used as a drop-in replacement for manually engineered structured components.
(iii) The parameterization should admit practically efficient algorithms for training and inference, in terms of both speed and memory.
Currently, no class of structured linear maps satisfies all of these criteria.
Most existing classes of structured matrices-such as the class of low-rank matrices-fail to tightly capture other important types of structure.
For example, the DFT has an efficient structured representation of size O(n log n), yet cannot be well-approximated by a low-rank transform of size o(n^2).
Sparsity is another important type of structure; lots of exciting recent work has focused on the design of sparse neural networks.
For instance, sparse networks of comparable quality to their dense counterparts-yet an order of magnitude fewer parameters-may be created via pruning (Han et al., 2016) or by identifying "winning lottery tickets" (Frankle & Carbin, 2019) .
In parallel, recent theoretical results by De Sa et al. (2018) show that sparsity and the notion of structure in linear maps are fundamentally linked: any given matrix can be factored into a product of sparse matrices with total parameter count equal to the efficiency (i.e. minimum arithmetic circuit complexity) of the matrix.
In other words, the representation of linear maps as products of sparse matrices tightly captures all forms of structure.
Unfortunately, actually learning sparse representations is difficult, because it requires finding the matrices' sparsity patterns-a discrete, nondifferentiable search problem.
So, current methods for training sparse neural networks are either expensive (Frankle & Carbin, 2019) , or rely on highly handtuned heuristics for evolving the sparsity patterns throughout training (Dettmers & Zettlemoyer, 2019) .
By contrast, we propose a representation of linear maps as products of sparse matrices with specific predefined sparsity patterns (Section 2), and show that it does satisfy our desiderata: it retains the expressiveness of unstructured sparsity, while being differentiably learnable and efficient like other structured representations.
Concretely, our representation is based on products of a particular building block known as a butterfly matrix (Parker, 1995; Dao et al., 2019) ; we term such products kaleidoscope matrices (K-matrices for short).
(i) Our main theoretical contribution (Section 2.3) concerns the expressiveness of this representation: we show that any structured linear map (i.e. one that can be applied using s ≪ n^2 arithmetic operations) can be represented as a K-matrix, with a nearly tight number of parameters and algorithmic complexity (both on the order of s up to logarithmic factors).
(ii) The kaleidoscope representation is fully differentiable; thus, all the parameters of a K-matrix can be learned using standard optimization algorithms such as SGD.
(iii) Because of their simple, regular structure, K-matrices are practical and easy to use.
We provide memory-and runtime-efficient implementations of K-matrix multiplication on CPU and GPU for training and inference, with a simple PyTorch interface.
We empirically validate that, due to their expressiveness, learnability, and efficiency, we can use K-matrices as a drop-in replacement for linear components in deep learning models.
In Section 3.1, we use K-matrices to replace hand-crafted structure in two different settings.
We simplify the six steps of filter bank computation in speech preprocessing into a single learnable K-matrix step, with only an 0.4% accuracy drop on the TIMIT speech recognition task.
We use K-matrices to replace channel shuffles in ShuffleNet, improving ImageNet classification accuracy by up to 5%.
In Section 3.2, we show that K-matrices can successfully recover latent structure; a K-matrix is used to learn latent permutations in a permuted image dataset (Permuted CIFAR), resulting in 9 points higher accuracy in a downstream CNN model.
In Section 3.3, we show that our efficient K-matrix multiplication implementation can be applied to speed up real-world tasks: we replace linear layers with K-matrices in a DynamicConv-Transformer network to attain 36% faster end-to-end inference speed with a 1.0 drop in BLEU score on the IWSLT14 German→English translation task.
We address the problem of having to manually choose among the numerous classes of structured linear maps by proposing the universal (expressive, efficient, and learnable) family of kaleidoscope matrices.
We prove that K-matrices can represent any structured linear maps with near-optimal space and time complexity.
Empirical validations suggest that K-matrices are a promising way to employ structure in modern ML; they can be used to reduce the need for hand-engineering, capture challenging latent structure, and improve efficiency in models.
We are excited about future work on further hardware-optimized implementations of K-matrices, to fully realize the size and speed benefits of structured matrices on a broad array of real-world applications.
Structured linear maps such as the DFT, the Hadamard transform and convolution are a workhorse of machine learning, with diverse applications ranging from data preprocessing, random projection, featurization, to model compression.
For example, the DFT is a crucial step in the standard filter bank speech preprocessing pipeline (Jurafsky & Martin, 2014) .
Fast random projection and kernel approximation methods rely on the fast Hadamard transform (Le et al., 2013; Yu et al., 2016) and convolution (Yu et al., 2015) .
Large learnable classes of structured matrices such as Toeplitz-like matrices (Sindhwani et al., 2015) and low-displacement rank (LDR) matrices (Thomas et al., 2018) have been used for model compression.
However, despite their theoretical speedup, they lack efficient implementations, especially on GPUs.
Therefore their use has been confined to small models (e.g. single hidden layer neural nets) and small datasets (e.g. CIFAR-10). | We propose a differentiable family of "kaleidoscope matrices," prove that all structured matrices can be represented in this form, and use them to replace hand-crafted linear maps in deep learning models. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:711 |
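The butterfly building block referred to above can be pictured as follows: at a given stride, each coordinate pair (i, i + stride) is mixed by its own learnable 2×2 block, as in the FFT dataflow, so one factor costs O(n) operations and O(log n) factors compose into an expressive map. The sketch below is a didactic NumPy rendering of that structure, not the paper's optimized implementation.

```python
import numpy as np

def butterfly_factor_apply(x, twiddles, stride):
    """Apply one butterfly factor to x (length n, with n divisible by 2*stride).

    twiddles: (n // 2, 2, 2) array holding one 2x2 block per coordinate pair.
    Pairs are (i, i + stride) within each contiguous block of size 2*stride.
    """
    n = x.shape[0]
    y = np.empty_like(x)
    pair = 0
    for block_start in range(0, n, 2 * stride):
        for i in range(block_start, block_start + stride):
            a, b = x[i], x[i + stride]
            t = twiddles[pair]
            y[i] = t[0, 0] * a + t[0, 1] * b
            y[i + stride] = t[1, 0] * a + t[1, 1] * b
            pair += 1
    return y
```

Stacking such factors with strides n/2, n/4, ..., 1 (and transposed orderings), with all 2×2 blocks trainable, gives differentiable products of sparse matrices with O(n log n) multiply cost, which is the structure the kaleidoscope construction builds on.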
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose a method, called Label Embedding Network, which can learn label representation (label embedding) during the training process of deep networks.
With the proposed method, the label embedding is adaptively and automatically learned through back propagation.
The original one-hot represented loss function is converted into a new loss function with soft distributions, such that the originally unrelated labels have continuous interactions with each other during the training process.
As a result, the trained model can achieve substantially higher accuracy and with faster convergence speed.
Experimental results based on competitive tasks demonstrate the effectiveness of the proposed method, and the learned label embedding is reasonable and interpretable.
The proposed method achieves comparable or even better results than the state-of-the-art systems.
Most of the existing methods of neural networks use one-hot vector representations for labels.
The one-hot vector has two main restrictions.
The first restriction is the "discrete distribution", where each label is distributed at a completely different dimension from the others.
The second restriction is the "extreme value" based representation, where the value at each dimension is either 1 or 0, and there is no "soft value" allowed.
Those deficiencies may cause the following two potential problems. First, it is not easy to measure the correlation among the labels due to the "discrete distribution".
Not being able to measure the label correlation is potentially harmful to the learned models, e.g., causing the data sparseness problem.
Given an image recognition task, the image of the shark is often similar to the image of the dolphin.
Naturally, we expect the two labels to be "similar".
Suppose that we have a lot of training examples for shark, and very few training examples for dolphin.
If the label shark and the label dolphin have similar representations, the prediction for the label dolphin will suffer less from the data sparsity problem. Second, the 0/1 value encoding can easily cause the overfitting problem.
Suppose A and B are labels of two similar types of fishes.
One-hot label representation prefers the ultimate separation of those two labels.
For example, if currently the system output probability for A is 0.8 and the probability for B is 0.2, it is good enough to make a correct prediction of A. However, with the one-hot label representation, it suggests that further modification to the parameters is still required, until the probability of A becomes 1 and the probability of B becomes 0.
Because the fish A and the fish B are very similar in appearance, it is probably more reasonable to have the probability 0.8 for A and 0.2 for B, rather than completely 1 for A and 0 for B, which could lead to the overfitting problem. We aim to address those problems.
We propose a method that can automatically learn label representation for deep neural networks.
As the training proceeds, the label embedding is iteratively learned and optimized based on the proposed label embedding network through back propagation.
The original one-hot represented loss function is softly converted to a new loss function with soft distributions, such that those originally unrelated labels have continuous interactions with each other during the training process.
As a result, the trained model can achieve substantially higher accuracy, faster convergence speed, and more stable performance.
The related prior studies include the traditional label representation methods (BID7; BID10; BID1), the "soft label" methods (BID22), and the model distillation methods (BID9).
Our method is substantially different from that existing work, and the detailed comparisons are summarized in Appendix E. The contributions of this work are as follows:
• Learning label embedding and compressed embedding: We propose the Label Embedding Network that can learn label representation for soft training of deep networks. Furthermore, some large-scale tasks have a massive number of labels, and a naive version of the label embedding network will suffer from an intractable memory cost problem. We propose a solution to automatically learn compressed label embeddings, such that the memory cost is substantially reduced.
• Interpretable and reusable: The learned label embeddings are reasonable and interpretable, such that we can find meaningful similarities among the labels. The proposed method can learn interpretable label embeddings on both image processing tasks and natural language processing tasks. In addition, the learned label embeddings can be directly adapted for training a new model with improved accuracy and convergence speed.
• General-purpose solution and competitive results: The proposed method can be widely applied to various models, including CNN, ResNet, and Seq-to-Seq models. We conducted experiments on computer vision tasks including CIFAR-100, CIFAR-10, and MNIST, and on natural language processing tasks including the LCSTS text summarization task and the IWSLT2015 machine translation task. Results suggest that the proposed method achieves significantly better accuracy than the existing methods (CNN, ResNet, and Seq-to-Seq). We achieve results comparable or even better than the state-of-the-art systems on those tasks.
We propose a method that can learn label representation during the training process of deep neural networks.
Furthermore, we propose a solution to automatically learn compressed label embedding, such that the memory cost is substantially reduced.
The proposed method can be widely applied to different models.
We conducted experiments on CV tasks including CIFAR-100, CIFAR-10, and MNIST, and also on natural language processing tasks including LCSTS and IWSLT2015.
Results suggest that the proposed method achieves significantly better accuracy than the existing methods (CNN, ResNet, and Seq-to-Seq).
Moreover, the learned label embeddings are reasonable and interpretable, which provides meaningful semantics of the labels.
We achieve comparable or even better results with the state-of-the-art systems on those tasks. | Learning Label Representation for Deep Networks | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:712 |
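One way to picture the "soft distribution" loss described above is to keep the usual cross entropy against the one-hot target and add a term pulling the prediction toward soft targets derived from similarities between learned label embeddings. The sketch below is a generic PyTorch illustration of that idea; the exact loss used by the Label Embedding Network may differ, and the temperature and mixing weight are assumptions.

```python
import torch
import torch.nn.functional as F

def soft_label_loss(logits, targets, label_emb, temperature=2.0, alpha=0.5):
    """Cross entropy on hard labels plus KL toward embedding-based soft targets.

    logits:    (B, L) classifier outputs.
    targets:   (B,) integer class labels.
    label_emb: (L, d) label embedding matrix.
    """
    hard_loss = F.cross_entropy(logits, targets)

    # Soft targets: similarity of every label embedding to the true label's embedding,
    # treated as constants for this term in this sketch.
    sim = label_emb @ label_emb.t()                                        # (L, L)
    soft_targets = F.softmax(sim[targets] / temperature, dim=-1).detach()  # (B, L)

    log_probs = F.log_softmax(logits / temperature, dim=-1)
    soft_loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")

    return (1 - alpha) * hard_loss + alpha * soft_loss
```

With such a loss, similar classes (the shark/dolphin example above) receive non-zero target mass, so the network is not pushed all the way to a 1/0 separation between them.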
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters.
As current approaches are limited by network bandwidth, we propose the use of communication compression in the decentralized training context.
We show that Choco-SGD achieves linear speedup in the number of workers for arbitrary high compression ratios on general non-convex functions, and non-IID training data.
We demonstrate the practical performance of the algorithm in two key scenarios: the training of deep learning models
(i) over decentralized user devices, connected by a peer-to-peer network and
(ii) in a datacenter.
Distributed machine learning-i.e. the training of machine learning models using distributed optimization algorithms-has enabled many recent successful applications in research and industry.
Such methods offer two of the key success factors:
1) computational scalability by leveraging the simultaneous computational power of many devices, and
2) data-locality, the ability to perform joint training while keeping each part of the training data local to each participating device.
Recent theoretical results indicate that decentralized schemes can be as efficient as the centralized approaches, at least when considering convergence of training loss vs. iterations (Scaman et al., 2017; Lian et al., 2017; Tang et al., 2018; Koloskova et al., 2019; Assran et al., 2019) .
Gradient compression techniques have been proposed for the standard distributed training case (Alistarh et al., 2017; Wen et al., 2017; Lin et al., 2018b; Wangni et al., 2018; Stich et al., 2018) , to reduce the amount of data that has to be sent over each communication link in the network.
For decentralized training of deep neural networks, Tang et al. (2018) introduce two algorithms (DCD, ECD) which allow for communication compression.
However, both these algorithms are restrictive with respect to the used compression operators, only allowing for unbiased compressors and-more significantlyso far not supporting arbitrarily high compression ratios.
We here study CHOCO-SGD-recently introduced for convex problems only (Koloskova et al., 2019 )-which overcomes these constraints.
For the evaluation of our algorithm we in particular focus on the generalization performance (on the test-set) on standard machine learning benchmarks, hereby departing from previous work such as e.g. (Tang et al., 2018; Wang et al., 2019; Tang et al., 2019b; Reisizadeh et al., 2019 ) that mostly considered training performance (on the train-set).
We study two different scenarios: firstly,
(i) training on a challenging peer-to-peer setting, where the training data is distributed over the training devices (and not allowed to move), similar to the federated learning setting (McMahan et al., 2017) .
We are again able to show speed-ups for CHOCO-SGD over the decentralized baseline (Lian et al., 2017) with much less communication overhead.
Secondly,
(ii) training in a datacenter setting, where decentralized communication patterns allow better scalability than centralized approaches.
For this setting we show that communication efficient CHOCO-SGD can improve time-to-accuracy on large tasks, such as e.g. ImageNet training.
However, when investigating the scaling of decentralized algorithms to larger number of nodes we observe that (all) decentralized schemes encounter difficulties and often do not reach the same (test and train) performance as centralized schemes.
As these findings do point out some deficiencies of current decentralized training schemes (and are not particular to our scheme) we think that reporting these results is a helpful contribution to the community to spur further research on decentralized training schemes that scale to large number of peers.
We propose the use of CHOCO-SGD (and its momentum version) for enabling decentralized deep learning training in bandwidth-constrained environments.
We provide theoretical convergence guarantees for the non-convex setting and show that the algorithm enjoys a linear speedup in the number of nodes.
We empirically study the performance of the algorithm in a variety of settings on image classification (ImageNet-1k, Cifar10) and on a language modeling task (WikiText-2).
Whilst previous work successfully demonstrated that decentralized methods can be a competitive alternative to centralized training schemes when no communication constraints are present (Lian et al., 2017; Assran et al., 2019) , our main contribution is to enable training in strongly communication-restricted environments, and while respecting the challenging constraint of locality of the training data.
We theoretically and practically demonstrate the performance of decentralized schemes for arbitrary high communication compression, and under data-locality, and thus significantly expand the reach of potential applications of fully decentralized deep learning. | We propose Choco-SGD---decentralized SGD with compressed communication---for non-convex objectives and show its strong performance in various deep learning applications (on-device learning, datacenter case). | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:713 |
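The mechanism that lets CHOCO-SGD tolerate arbitrary compression is that workers never exchange raw parameters: each node keeps public estimates of the models, transmits only a compressed correction to its own public copy, and then gossips on the public copies. The single-process simulation below sketches that pattern from the general description in the literature; the compressor, step sizes, and exact update ordering are illustrative and should be checked against the paper before use.

```python
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude entries of v and zero the rest (a biased compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def compressed_gossip_sgd_step(x, x_hat, grads, W, lr, gamma, k):
    """One illustrative round of decentralized SGD with compressed gossip.

    x:     (n, d) private parameters of the n workers.
    x_hat: (n, d) publicly shared estimates of those parameters.
    grads: (n, d) stochastic gradients from each worker's local data.
    W:     (n, n) symmetric doubly-stochastic mixing matrix of the topology.
    """
    x = x - lr * grads                                    # local SGD step
    for i in range(x.shape[0]):                           # send only compressed corrections
        x_hat[i] = x_hat[i] + topk_compress(x[i] - x_hat[i], k)
    x = x + gamma * (W @ x_hat - x_hat)                   # gossip on the public copies
    return x, x_hat
```

Because only the corrections are compressed and accumulated into the public copies, errors from aggressive compression are fed back rather than lost, which is what allows arbitrarily high compression ratios in this style of algorithm.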
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We show that Entropy-SGD (Chaudhari et al., 2017), when viewed as a learning algorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs (posterior) classifier, i.e., a randomized classifier obtained by a risk-sensitive perturbation of the weights of a learned classifier.
Entropy-SGD works by optimizing the bound’s prior, violating the hypothesis of the PAC-Bayes theorem that the prior is chosen independently of the data.
Indeed, available implementations of Entropy-SGD rapidly obtain zero training error on random labels and the same holds of the Gibbs posterior.
In order to obtain a valid generalization bound, we show that an ε-differentially private prior yields a valid PAC-Bayes bound, a straightforward consequence of results connecting generalization with differential privacy.
Using stochastic gradient Langevin dynamics (SGLD) to approximate the well-known exponential release mechanism, we observe that generalization error on MNIST (measured on held out data) falls within the (empirically nonvacuous) bounds computed under the assumption that SGLD produces perfect samples.
In particular, Entropy-SGLD can be configured to yield relatively tight generalization bounds and still fit real labels, although these same settings do not obtain state-of-the-art performance.
Optimization is central to much of machine learning, but generalization is the ultimate goal.
Despite this, the generalization properties of many optimization-based learning algorithms are poorly understood.
The standard example is stochastic gradient descent (SGD), one of the workhorses of deep learning, which has good generalization performance in many settings, even under overparametrization BID36 , but rapidly overfits in others BID44 .
Can we develop high performance learning algorithms with provably strong generalization guarantees?
Or is there a limit?
In this work, we study an optimization algorithm called Entropy-SGD (BID10), which was designed to outperform SGD in terms of generalization error when optimizing an empirical risk.
Entropy-SGD minimizes an objective f : R^p → R indirectly by performing (approximate) stochastic gradient ascent on the so-called local entropy F(w) = log E_{ξ∼N}[exp(−f(w + ξ))] + C, where C is a constant and N denotes a zero-mean isotropic multivariate normal distribution on R^p.
Our first contribution is connecting Entropy-SGD to results in statistical learning theory, showing that maximizing the local entropy corresponds to minimizing a PAC-Bayes bound (BID31) on the risk of the so-called Gibbs posterior.
The distribution of w + ξ is the PAC-Bayesian "prior", and so optimizing the local entropy optimizes the bound's prior.
This connection between local entropy and PAC-Bayes follows from a result due to Catoni (2007, Lem. 1.1.3) in the case of bounded risk. (See Theorem 4.1.)
In the special case where f is the empirical cross entropy, the local entropy is literally a Bayesian log marginal density.
The connection between minimizing PAC-Bayes bounds under log loss and maximizing log marginal densities is the subject of recent work by BID19.
Similar connections have been made by BID45; Zhang (2006b); BID20; BID21.
Despite the connection to PAC-Bayes, as well as theoretical results by Chaudhari et al. suggesting that Entropy-SGD may be more stable than SGD, we demonstrate that Entropy-SGD (and its corresponding Gibbs posterior) can rapidly overfit, just like SGD.
We identify two changes, motivated by theoretical analysis, that suffice to control generalization error, and thus prevent overfitting.
The first change relates to the stability of optimizing the prior mean.
The PAC-Bayes theorem requires that the prior be independent of the data, and so by optimizing the prior mean, Entropy-SGD invalidates the bound. Indeed, the bound does not hold empirically.
While a PAC-Bayes prior may not be chosen based on the data, it can depend on the data distribution.
This suggests that if the prior depends only weakly on the data, it may be possible to derive a valid bound.
We formalize this intuition using differential privacy (BID13; BID17).
By modifying the cross entropy loss to be bounded and replacing SGD with stochastic gradient Langevin dynamics (SGLD; BID43), the data-dependent prior mean can be shown to be (ε, δ)-differentially private (BID42; BID34). We refer to the SGLD variant as Entropy-SGLD.
Using results connecting statistical validity and differential privacy (Dwork et al., 2015b, Thm. 11), we show that an ε-differentially private prior mean yields a valid, though looser, generalization bound using the PAC-Bayes theorem. (See Theorem 5.4.)
A gap remains between pure and approximate differential privacy.
Under some technical conditions, in the limit as the number of iterations diverges, the distribution of SGLD's output is known to converge weakly to the corresponding stationary distribution, which is the well-known exponential mechanism in differential privacy (Teh, Thiery, and Vollmer, 2016, Thm. 7). Weak convergence, however, falls short of implying that SGLD achieves pure ε-differential privacy.
We proceed under the approximation that SGLD enjoys the same privacy as the exponential mechanism, and apply our ε-differentially private PAC-Bayes bound.
We find that the corresponding 95% confidence intervals are reasonably tight but still conservative in our experiments.
While the validity of our bounds is subject to our approximation, the bounds give us a view as to the limitations of combining differential privacy with PAC-Bayes bounds: when the privacy of Entropy-SGLD is tuned to contribute no more than 2ε² × 100 ≈ 0.2% to the generalization error, the test error of the learned network is 3-8%, which is approximately 5-10 times higher than the state of the art, which for MNIST is between 0.2-1%, although the community has almost certainly overfit its networks/learning rates/loss functions/optimizers to MNIST. We return to these points in the discussion.
The second change pertains to the stability of the stochastic gradient estimate made on each iteration of Entropy-SGD.
This estimate is made using SGLD. (Hence Entropy-SGD is SGLD within SGD.)
Chaudhari et al. make a subtle but critical modification to the noise term in the SGLD update: the noise is divided by a factor that ranges from 10^3 to 10^4. (This factor was ostensibly tuned to produce good empirical results.)
Our analysis shows that, as a result of this modification, the Lipschitz constant of the objective function is approximately 10^6-10^8 times larger, and the conclusion that the Entropy-SGD objective is smoother than the original risk surface no longer stands.
This change to the noise also negatively impacts the differential privacy of the prior mean.
Working backwards from the desire to obtain tight generalization bounds, we are led to divide the SGLD noise by a factor of only m^{1/4} (the fourth root of m), where m is the number of data points. (For MNIST, m^{1/4} ≈ 16.)
The resulting bounds are nonvacuous and tighter than those recently published by BID18, although it must be emphasized that the bounds presented here hold subject to the approximation concerning the privacy of the prior mean, which is certainly violated, but to an unknown degree.
We begin with a review of some related work, before introducing sufficient background so that we can make a formal connection between local entropy and PAC-Bayes bounds. We then introduce a differentially private PAC-Bayes bound.
In Section 6, we present experiments on MNIST which provide evidence for our theoretical analysis. (Empirical validation is required in order to address the aforementioned gap between pure and approximate differential privacy.)
We close with a short discussion.
Our work reveals that Entropy-SGD can be understood as optimizing a PAC-Bayes generalization bound in terms of the bound's prior.
Because the prior must be independent of the data, the bound is invalid, and, indeed, we observe overfitting in our experiments with Entropy-SGD when the thermal noise 2/τ is set to 0.0001 as suggested by Chaudhari et al. for MNIST.
PAC-Bayes priors can, however, depend on the data distribution.
This flexibility seems wasted, since the data sample is typically viewed as one's only view onto the data distribution.
However, using differential privacy, we can span this gap.
By performing a private computation on the data, we can extract information about the underlying distribution, without undermining the statistical validity of a subsequent PAC-Bayes bound.
Our PAC-Bayes bound based on a differentially private prior is made looser by the use of a private data-dependent prior, but the gains from choosing a data-distribution-dependent prior more than make up for the expansion of the bound due to the privacy.
(The gains come from the KL term being much smaller on the account of the prior being better matched to the posterior.)
Understanding how our approach compares to local PAC-Bayes priors (BID9) is an important open problem. The most elegant way to make Entropy-SGD private is to replace SGD with a sample from the Gibbs distribution (known as the exponential mechanism in the differential privacy literature).
However, generating an exact sample is intractable, and so practicioners use SGLD to generate an approximate sample, relying on the fact that SGLD converges weakly to the exponential mechanism under certain technical conditions.
Our privacy approximation allows us to proceed with a theoretical analysis by assuming that SGLD achieves the same privacy as the exponential mechanism.
On the one hand, we do not find overt evidence that our approximation is grossly violated.
On the other, we likely do not require such strong privacy in order to control generalization error.We might view our privacy-based bounds as being optimistic and representing the bounds we might be able to achieve rigorously should there be a major advance in private optimization.
(No analysis of the privacy of SGLD takes advantage of the fact that it mixes weakly.)
On account of using private data-dependent priors, our bounds are significantly tighter than those reported by BID18.
However, despite our bounds potentially being optimistic, the test set error we are able to achieve is still 5-10 times that of SGD.
Differential privacy may be too conservative for our purposes, leading us to underfit.
Indeed, we think it is unlikely that Entropy-SGD has strong differential privacy, yet we are able to achieve good generalization on both true and random labels under 0.01 thermal noise.
Identifying the appropriate notion of privacy/stability to combine with PAC-Bayes bounds is an important problem. Despite our progress on building learning algorithms with strong generalization performance, and identifying a path to much tighter PAC-Bayes bounds, Entropy-SGLD learns much more slowly than Entropy-SGD, the risk of Entropy-SGLD is far from state of the art, and our PAC-Bayes bounds are loose.
It seems likely that there is a fundamental tradeoff between the speed of learning, the excess risk, and the ability to produce a certificate of one's generalization error via a rigorous bound.
Characterizing the relationship between these quantities is an important open problem.
A BACKGROUND: DIFFERENTIAL PRIVACY
Here we formally define some of the differential privacy related terms used in the main text. (See BID13; BID15 for more details.)
Let U, U_1, U_2, ... be independent uniform(0, 1) random variables, independent also of any random variables introduced by P and E, and let π : DISPLAYFORM0
Definition A.1. A randomized algorithm A from R to T, denoted A : R ⇝ T, is a measurable map A : [0, 1] × R → T. Associated to A is a (measurable) collection of random variables {A_r : r ∈ R} that satisfy A_r = A(U, r). When there is no risk of confusion, we will write A(r) for A_r.
Definition A.2. A randomized algorithm A : Z^m ⇝ T is (ε, δ)-differentially private if, for all pairs S, S′ ∈ Z^m that differ at only one coordinate, and all measurable subsets B ⊆ T, we have P(A(S) ∈ B) ≤ e^ε P(A(S′) ∈ B) + δ. We will write ε-differentially private to mean (ε, 0)-differentially private.
Definition A.3. Let A : R ⇝ T and A′ : DISPLAYFORM1
Lemma A.4 (post-processing). Let A : Z^m ⇝ T be (ε, δ)-differentially private and let F : T ⇝ T be arbitrary. Then F ◦ A is (ε, δ)-differentially private. | We show that Entropy-SGD optimizes the prior of a PAC-Bayes bound, violating the requirement that the prior be independent of data; we use differential privacy to resolve this and improve generalization. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:714 |
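Since the argument above leans on SGLD approximating the exponential mechanism, it helps to see how small the algorithmic ingredient is. Below is a textbook SGLD step targeting a Gibbs density proportional to exp(−τ·f); this is a generic form with an inverse-temperature knob, not the exact parameterization or noise scaling used for Entropy-SGLD in the paper.

```python
import numpy as np

def sgld_step(w, grad_fn, step_size, tau, rng):
    """One stochastic gradient Langevin dynamics update.

    Targets the Gibbs density p(w) ∝ exp(-tau * f(w)), where grad_fn returns a
    stochastic estimate of the gradient of f at w. With small (and decaying)
    step sizes, the iterates approximately sample from p; the injected Gaussian
    noise is what distinguishes SGLD from plain SGD.
    """
    noise = rng.normal(size=w.shape) * np.sqrt(2.0 * step_size / tau)
    return w - step_size * grad_fn(w) + noise

# The "thermal noise" discussed above corresponds to the scale of this noise
# term; dividing it by a large constant collapses SGLD back toward SGD and,
# with it, the sampling interpretation that the privacy argument relies on.
```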
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we investigate learning the deep neural networks for automated optical inspection in industrial manufacturing.
Our preliminary result has shown the stunning performance improvement by transfer learning from the completely dissimilar source domain: ImageNet.
Further study for demystifying this improvement shows that the transfer learning produces a highly compressible network, which was not the case for the network learned from scratch.
The experimental result shows that there is a negligible accuracy drop in the network learned by transfer learning until it is compressed to 1/128 of the original number of convolution filters.
This result is contrary to compression without transfer learning, which loses more than 5% accuracy at the same compression rate.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:715 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The key challenge in semi-supervised learning is how to effectively leverage unlabeled data to improve learning performance.
The classical label propagation method, despite its popularity, has limited modeling capability in that it only exploits graph information for making predictions.
In this paper, we consider label propagation from a graph signal processing perspective and decompose it into three components: signal, filter, and classifier.
By extending the three components, we propose a simple generalized label propagation (GLP) framework for semi-supervised learning.
GLP naturally integrates graph and data feature information, and offers the flexibility of selecting appropriate filters and domain-specific classifiers for different applications.
Interestingly, GLP also provides new insight into the popular graph convolutional network and elucidates its working mechanisms.
Extensive experiments on three citation networks, one knowledge graph, and one image dataset demonstrate the efficiency and effectiveness of GLP.
The success of deep learning and neural networks comes at the cost of large amount of training data and long training time.
Semi-supervised learning BID37 BID8 ) is interesting and important as it can leverage ample available unlabeled data to aid supervised learning, thus greatly saving the cost, trouble, and time for human labeling.
Many researches have shown that when used properly, unlabeled data can significantly improve learning performance BID38 BID16 .
The key challenge for semi-supervised learning is how to effectively leverage the information of unlabeled data, such as graph structures and data features. Label propagation BID39 BID36 BID2 is arguably the most popular method for graph-based semi-supervised learning.
As a simple and effective tool, it has been widely used in many scientific research fields and has found numerous industrial applications.
Given a non-oriented graph G = (V, W, X) with n = |V| vertices, a nonnegative symmetric affinity matrix W ∈ R n×n + encoding edge weights, and a feature matrix X ∈ R n×m which contains an m-dimensional feature vector for each vertex.
For semi-supervised classification, only a small subset of vertices are labeled, and the goal is to predict the labels of other vertices.
Denote by Y ∈ {0, 1} n×l the labeling matrix 1 with l being the number of classes.
The objective of label propagation (LP) is to find a prediction (embedding) matrix Z ∈ R n×l which agrees with Y while being smooth on the graph such that nearby vertices have similar embeddings: min_Z ||Z − Y||_F^2 + α · tr(Z^T L Z), where α is a balancing parameter, L = D − W is the graph Laplacian 2 and D is the degree matrix.
The term enforcing smoothness is called graph Laplacian regularization or Tikhonov regularization.
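As a small worked example (our own illustration, not text from the paper), the quadratic objective above has the closed-form minimizer Z = (I + αL)^{-1} Y, which can be computed directly for modest graph sizes:

import numpy as np

def label_propagation(W, Y, alpha=1.0):
    # Closed-form minimizer of ||Z - Y||_F^2 + alpha * tr(Z^T L Z):
    # the gradient 2(Z - Y) + 2*alpha*L@Z vanishes when (I + alpha*L) Z = Y.
    D = np.diag(W.sum(axis=1))
    L = D - W                                   # unnormalized graph Laplacian
    Z = np.linalg.solve(np.eye(W.shape[0]) + alpha * L, Y)
    return Z.argmax(axis=1)                     # predicted class for each vertex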
Solving the quadratic regularization framework gives the prediction of LP. As LP makes predictions only based on graph information (W), its performance depends on whether the underlying graph structure can well represent the class information of data: vertices in the same cluster tend to have the same labels.
(Footnote 1: If the label of vertex v_i is known, then Y(i, :) is a one-hot embedding of v_i with y_ij = 1 if v_i belongs to the j-th class and y_ij = 0 otherwise. If the label of vertex v_i is not given, then Y(i, :) is a vector of all zeros. Footnote 2: Other variants such as the normalized Laplacian matrices are also applicable.)
For some applications such as social network analysis, data exhibits a natural graph structure.
For some other applications such as image or text classification, data may come in a vector form, and a graph is usually constructed using data features.
Nevertheless, in many cases, graphs only partially encode data information.
Take document classification in a citation network as an example, the citation links between documents form a graph which represents their citation relation, and each document is represented as a bag-of-words feature vector which describes its content.
To correctly classify a document, both the citation relations (W ) and the content information (X) need to be taken into account, as they contain different aspects of document information.
However, in this case, LP can only exploit the graph information to make predictions without using any of the feature information, thus resulting in poor performance. To go beyond the limit of LP and jointly model graph and feature information, a common approach is to train a supervised learner to classify data features while regularizing the classifier using graph information.
Manifold regularization BID1 trains a support vector machine with a graph Laplacian regularizer.
Deep semi-supervised embedding BID32 and Planetoid BID34 ) train a neural network with an embedding-based regularizer.
The recently proposed graph convolutional neural network BID16 adopts a different approach by integrating graph and feature information in each of its convolutional layers, which are coupled with a projection layer for classification. In this paper, we extend the modeling capability of LP in the context of graph signal processing.
Casted in the spectral domain, LP can be interpreted as low-pass graph filtering BID10 BID11 .
In light of this, we decompose LP into three components: graph signal, graph filter, and classifier.
By naturally extending the three components, we propose a generalized label propagation (GLP) framework for semi-supervised learning.
In GLP, a low-pass graph filter is applied on vertex features to produce smooth features, which are then fed to a supervised learner for classification.
After filtering, the data features within each class are more similar and representative, making it possible to train a good classifier with few labeled examples. GLP not only extends LP to incorporate vertex features in a simple way, but also offers the flexibility of designing appropriate graph filters and adopting domain-specific classifiers for different semi-supervised applications.
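A rough sketch of this pipeline is given below. It is only meant to illustrate the two-stage structure (filter, then classify); the particular filter (repeated multiplication by the symmetrically normalized adjacency matrix) and the logistic-regression classifier are our own illustrative assumptions rather than the paper's exact design choices.

import numpy as np
from sklearn.linear_model import LogisticRegression

def glp_predict(W, X, y_labeled, labeled_idx, k=2):
    # Low-pass filtering: repeatedly mix each vertex's features with its neighbors'.
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt             # normalized adjacency; I - S is the normalized Laplacian
    X_smooth = X.copy()
    for _ in range(k):                          # k applications act as a low-pass graph filter
        X_smooth = S @ X_smooth
    # Any domain-specific supervised learner can be trained on the smoothed features.
    clf = LogisticRegression(max_iter=1000).fit(X_smooth[labeled_idx], y_labeled)
    return clf.predict(X_smooth)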
The popular graph convolutional networks (GCN) BID16 is closely related to GLP.
In fact, GCN without internal ReLUs is a special case of GLP with a certain graph filter and a multilayer perceptron classifier.
When revisited under the GLP framework, it makes clear the working mechanisms of GCN including its design of convolutional filter and model parameter setting.
Extensive experiments on citation networks, knowledge graphs, and image datasets show substantial improvement of GLP over GCN and other baselines for semi-supervised classification, confirming the effectiveness of this simple and flexible framework. The rest of the paper is organized as follows.
Section 2 interprets LP in the context of graph signal processing.
Section 3 presents the proposed GLP framework.
Section 4 revisits GCN under GLP.
Section 5 discusses the design of graph filters for GLP.
Section 6 presents experimental results.
Section 7 discusses related works.
Finally, section 8 concludes the paper.
In this paper, we have proposed a simple, flexible, and efficient framework GLP for semi-supervised learning, and demonstrated its effectiveness theoretically and empirically.
GLP offers new insights into existing methods and opens up possible avenues for new methods.
An important direction for future research is the design and selection of graph filters for GLP in different application scenarios.Other directions include making GLP readily applicable to inductive problems, developing faster algorithms for GLP, and applying GLP to solve large-scale real-world problems. | We extend the classical label propation methods to jointly model graph and feature information from a graph filtering perspective, and show connections to the graph convlutional networks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:716 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Because the choice and tuning of the optimizer affects the speed, and ultimately the performance of deep learning, there is significant past and recent research in this area.
Yet, perhaps surprisingly, there is no generally agreed-upon protocol for the quantitative and reproducible evaluation of optimization strategies for deep learning.
We suggest routines and benchmarks for stochastic optimization, with special focus on the unique aspects of deep learning, such as stochasticity, tunability and generalization.
As the primary contribution, we present DeepOBS, a Python package of deep learning optimization benchmarks.
The package addresses key challenges in the quantitative assessment of stochastic optimizers, and automates most steps of benchmarking.
The library includes a wide and extensible set of ready-to-use realistic optimization problems, such as training Residual Networks for image classification on ImageNet or character-level language prediction models, as well as popular classics like MNIST and CIFAR-10.
The package also provides realistic baseline results for the most popular optimizers on these test problems, ensuring a fair comparison to the competition when benchmarking new optimizers, and without having to run costly experiments.
It comes with output back-ends that directly produce LaTeX code for inclusion in academic publications.
It supports TensorFlow and is available open source.
As deep learning has become mainstream, research on aspects like architectures BID15 BID16 BID48 BID50 BID41 and hardware BID33 BID9 Jouppi, 2016) has exploded, and helped professionalize the field.
In comparison, the optimization routines used to train deep nets have arguably changed rather little.
Comparably simple first-order methods like SGD BID38 , its momentum variants (MOMENTUM) BID34 BID31 and ADAM BID20 remain standards BID14 BID19 .
The low practical relevance of more advanced optimization methods is not for lack of research, though.
There is a host of papers proposing new ideas for acceleration of first-order methods BID13 BID49 BID54 BID12 BID3 BID24 BID37 , incorporation of second-order information BID27 BID28 BID5 BID8 , and automating optimization BID43 BID25 BID39 , to name just a few.
One problem is that these methods are algorithmically involved and difficult to reproduce by practitioners.
If they are not provided in packages for popular frameworks like TENSORFLOW, PYTORCH etc., they get little traction.
Another problem, which we hope to address here, is that new optimization routines are often not convincingly compared to simpler alternatives in research papers, so practitioners are left wondering which of the many new choices is the best (and which ones even really work in the first place).
Designing an empirical protocol for deep learning optimizers is not straightforward, and the corresponding experiments can be time-consuming. This is partly due to the idiosyncrasies of the domain:
• Generalization: While the optimization algorithm (should) only ever see the training set, the practitioner cares about performance of the trained model on the test set. Worse, in some important application domains, the optimizer's loss function is not the objective we ultimately care about. For instance in image classification, the real interest may be in the percentage of correctly labeled images, the accuracy. Since this 0-1 loss is infeasible in practice BID26 , a surrogate loss function is used instead. So which score should actually be presented in a comparison of optimizers? Train loss, because that is what the optimizer actually works on; test loss, because an over-fitting optimizer is useless; or test accuracy, because that is what the human user cares about?
• Stochasticity: Sub-sampling (batching) the data set to compute estimates of the loss function and its gradient introduces stochasticity. Thus, when an optimizer is run only once on a given problem, its performance may be misleading due to random fluctuations. The same stochasticity also causes many optimization algorithms to have one or several tuning parameters (learning rates, etc.). How should an optimizer with two free parameters be compared in a fair way with one that has only one, or even no free parameters?
• Realistic Settings, Fair Competition: There is a widely-held belief that popular standards like MNIST and CIFAR-10 are too simplistic to serve as a realistic place-holder for a contemporary combination of large-scale data set and architecture. While this worry is not unfounded, researchers, ourselves included, have sometimes found it hard to satisfy the demands of reviewers for ever new data sets and architectures. Finding and preparing such data sets and building a reasonable architecture for them is time-consuming for researchers who want to focus on their novel algorithm. Even when this is done, one then has to not just run one's own algorithm, but also various competing baselines, like SGD, MOMENTUM, ADAM, etc. This step does not just cost time, it also poses a risk of bias, as the competition invariably receives less care than one's own method. Reviewers and readers can never be quite sure that an author has not tried a bit too much to make their own method look good, either by choosing a convenient training problem, or by neglecting to tune the competition.
To address these problems, we propose an extensible, open-source benchmark specifically for optimization methods on deep learning architectures. We make the following three contributions:
• A protocol for benchmarking stochastic optimizers. Section 2 discusses and recommends best practices for the evaluation of deep learning optimizers. We define three key performance indicators: final performance, speed, and tunability, and suggest means of measuring all three in practice. We provide evidence that it is necessary to show the results of multiple runs in order to get a realistic assessment. Finally, we strongly recommend reporting both loss and accuracy, for both training and test set, when demonstrating a new optimizer, as there is no obvious way those four learning curves are connected in general.
• DEEPOBS 1 , a deep learning optimizer benchmark suite. We have distilled the above ideas into an open-source Python package, written in TENSORFLOW BID0 , which automates most of the steps presented in Section 2. The package currently provides over twenty off-the-shelf test problems across four application domains, including image classification and natural language processing, and this collection can be extended and adapted as the field makes progress. The test problems range in complexity from stochastic two-dimensional functions to contemporary deep neural networks capable of delivering near state-of-the-art results on data sets such as IMAGENET. The package is easy to install in Python, using the pip toolchain. It automatically downloads data sets, sets up models, and provides a back-end to automatically produce LaTeX code that can directly be included in academic publications. This automation does not just save time, it also helps researchers to create reproducible, comparable, and interpretable results.
• Benchmark of popular optimizers. From the collection of test problems, two sets, of four simple ("small") and four more demanding ("large") problems, respectively, are selected as a core set of benchmarks. Researchers can design their algorithm in rapid iterations on the simpler set, then test on the more demanding set. We argue that this protocol saves time, while also reducing the risk of over-fitting in the algorithm design loop. The package also provides realistic baseline results for the most popular optimizers on those test problems.
In Section 4 we report on the performance of SGD, SGD with momentum (MOMENTUM) and ADAM on the small and large benchmarks (this also demonstrates the output of the benchmark). For each optimizer we perform an exhaustive but realistic hyperparameter search. The best performing results are provided with DEEPOBS and can be used as a fair performance metric for new optimizers without the need to compute these baselines again.
We invite the authors of other algorithms to add their own method to the benchmark (via a git pull-request). We hope that the benchmark will offer a common platform, allowing researchers to publicise their algorithms, giving practitioners a clear view on the state of the art, and helping the field to more rapidly make progress.
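The following fragment is a generic illustration of the multiple-run reporting recommended above; it is not the DEEPOBS API (whose exact interface we do not reproduce here), and the statistic names are our own.

import numpy as np

def summarize_runs(test_accuracy_curves):
    # test_accuracy_curves: (num_seeds, num_epochs) array from repeated runs of one optimizer.
    curves = np.asarray(test_accuracy_curves)
    final = curves[:, -1]
    return {
        "final_mean": float(final.mean()),
        "final_std": float(final.std()),
        "median_curve": np.median(curves, axis=0),                 # robust learning curve
        "quartile_curves": np.percentile(curves, [25, 75], axis=0),
    }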
Deep learning continues to pose a challenging domain for optimization algorithms.
Aspects like stochasticity and generalization make it challenging to benchmark optimization algorithms against each other.
We have discussed best practices for experimental protocols, and presented the DEEPOBS package, which provide an open-source implementation of these standards.
We hope that DEEPOBS can help researchers working on optimization for deep learning to build better algorithms, by simultaneously making the empirical evaluation simpler, yet also more reproducible and fair.
By providing a common ground for methods to be compared on, we aim to speed up the development of deep-learning optimizers, and aid practitioners in their decision for an algorithm. | We provide a software package that drastically simplifies, automates, and improves the evaluation of deep learning optimizers. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:717 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Pre-trained word embeddings are the primary method for transfer learning in several Natural Language Processing (NLP) tasks.
Recent works have focused on using unsupervised techniques such as language modeling to obtain these embeddings.
In contrast, this work focuses on extracting representations from multiple pre-trained supervised models, which enriches word embeddings with task- and domain-specific knowledge.
Experiments performed in cross-task, cross-domain and cross-lingual settings indicate that such supervised embeddings are helpful, especially in the low-resource setting, but the extent of gains is dependent on the nature of the task and domain.
Named entity recognition, semantic role labelling, relation extraction etc. can be thought of as primary tasks necessary for solving high level tasks like question answering, summarization etc.
However, labelling large amounts of data at this granularity is not only prohibitively expensive, but also unscalable.
Given that high performance models for these tasks already exist, it is desirable to leverage them for other language understanding tasks. Next, consider the domain adaptation setting where some domains have a lot of data, while others do not.
A model for a low-resource domain would benefit from information in expert models trained on other data rich domains.
Finally, consider the setting of cross-lingual adaptation, a common problem for personal assistants expanding to more languages.
As the number of languages increases, it becomes unfeasible to obtain human annotated data.
Again, the need to adapt to low-resource languages can be met by leveraging models that already exist for high-resource languages. Motivated by the above scenarios, we propose a simple method to transfer (1) supervised knowledge, from (2) multiple sources, (3) in an easy-to-implement manner.
In our approach, this knowledge is extracted from source models in the form of contextual word embeddings.
We treat preexisting models as embedding extractors, which are used to extract token level representations for an input sentence.
These representations are then combined via a task-specific convex combination. Unsupervised transfer learning methods such as ELMo have shown great success for a variety of tasks BID15 .
While they have the advantage of being trained on very large corpora, the training objectives are unsupervised.
We show that in low-resource settings especially, leveraging representations from multiple pre-trained supervised models in related tasks, domains or languages can prove to be beneficial. The common way of supervised transfer learning via fine-tuning can transfer information only from a single source task BID11 .
One way to incorporate information from multiple external sources is via multi-task learning BID5 BID17 .
The limitations of multitask learning are the need for labelled data for the source models, longer training times and complex design decisions (weighing the losses for each task, sampling strategies, and choice of architecture).
In contrast, our plug-and-play approach is simple and does not assume availability of source model data at training time.
Finally, our approach also provides some interpretability (through the parameters of the convex combination) into which source tasks or domains are important for which other tasks and domains.
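A minimal sketch of the task-specific convex combination described above is shown below; the array shapes, the softmax parameterization of the mixing weights, and the function name are our illustrative assumptions (in practice the mixing logits would be learned jointly with the downstream task).

import numpy as np

def combine_source_embeddings(token_embeddings, mixing_logits):
    # token_embeddings: list of (seq_len, dim) arrays, one per frozen pre-trained source model.
    # mixing_logits: one scalar per source model, learned with the downstream task.
    weights = np.exp(mixing_logits - np.max(mixing_logits))
    weights = weights / weights.sum()               # convex combination weights
    stacked = np.stack(token_embeddings)            # (num_sources, seq_len, dim)
    return np.tensordot(weights, stacked, axes=1)   # (seq_len, dim) supervised contextual embedding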
Cross-task SRL results (with GloVe and ELMo in 1k, 5k and full data settings) have been tabulated; TAB1 has the results for cross-domain NER and TAB2 shows the results for cross-lingual transfer on NER.
All the reported numbers are F1 scores.Cross-task SRL With GloVe embeddings, adding the supervised embeddings gives us significant improvements in F1 scores ∼ 5% for 1k and ∼ 7% for 5k examples.
When we use the entire dataset, adding supervised embeddings provides no performance gains.
Examining the learned source task weights in the 1k setting, we find that weights for CP, DP and NER have values 0.41, 0.41 and 0.18 respectively which shows that SRL benefits greatly from syntactic tasks like CP and DP.
This is in agreement with SRL state-of-the-art models BID19 and BID8 which rely on syntactic features.When we replace GloVe with ELMo representations, we see that the baseline model improves by over ∼ 13%, showing that ELMo representations are indeed very strong.
But adding supervised embeddings in the 1k setting further improves upon the ELMo baseline by over ∼ 5%.
A similar improvement of ∼ 5% can be seen in the 5k setting as well.
Our model shows comparable performance as the baseline when we use the entire dataset.
These results suggest that the proposed supervised contextual embeddings further bring about improvements over already strong language model features in a low-resource setting.
This reinforces the learning that when sufficient data is available, supervised signals do not provide information that the model cannot learn by itself from the data alone. Cross-domain NER Supervised embeddings provide an impressive 4% improvement over the GloVe baseline with both 1,000 and 5,000 samples.
Even when we replace GloVe with ELMo, we see an improvement of 3% , indicating that the benefits of using knowledge from other domains is orthogonal to what ELMo can offer.
However, the gains vanish when the full dataset is used, suggesting that knowledge from other domains is particularly useful in the very low-resource setting.
However, if sufficient data is available, the model has enough resources to build upon generic word embeddings.
It is also interesting to note that for this dataset, GloVe based models outperform their ELMo counterparts.
This is probably due to the mismatch in the data used to train ELMo (formal language from the 1 billion word corpus) as opposed to the NER dataset which consists of informal language used in web blogs. Cross-lingual NER We observe substantial gains by exploiting information present in other languages.
For both German and Spanish, the performance gains are highest when the number of samples is 1,000, thus validating the suitability of the proposed method for transfer to very low-resource settings.
Even when full dataset is used, we see gains over 1% for both languages.
We propose supervised contextual embeddings, an easy way to incorporate supervised knowledge from multiple pre-existing models.
We perform experiments in the cross-task, cross-domain and cross-lingual setups and find that the proposed embeddings are particularly useful in the lowresource setting.
Our work points to the potential of such embeddings in various downstream tasks in different transfer learning settings.
Future work includes incorporating more tasks, domains and languages, and understanding the relationships among them.
These explorations would build towards our larger vision of building a more complete taxonomy of transfer learning dependencies among NLP tasks, domains and languages. | extract contextual embeddings from off-the-shelf supervised model. Helps downstream NLP models in low-resource settings | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:718 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We build a theoretical framework for understanding practical meta-learning methods that enables the integration of sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms in order to provide within-task performance guarantees.
Our approach improves upon recent analyses of parameter-transfer by enabling the task-similarity to be learned adaptively and by improving transfer-risk bounds in the setting of statistical learning-to-learn.
It also leads to straightforward derivations of average-case regret bounds for efficient algorithms in settings where the task-environment changes dynamically or the tasks share a certain geometric structure.
Meta-learning, or learning-to-learn (LTL) BID26 , has recently re-emerged as an important direction for developing algorithms capable of performing well in multitask learning, changing environments, and federated settings.
By using the data of numerous training tasks, meta-learning algorithms seek to perform well on new, potentially related test tasks without using many samples from them.
Successful modern approaches have also focused on exploiting the capacity of deep neural networks, whether by learning multi-task data representations passed to simple classifiers BID25 or by neural control of the optimization algorithms themselves BID23 .
Because of its simplicity and flexibility, a common approach is that of parameter-transfer, in which all tasks use the same class of Θ-parameterized functions f θ : X → Y; usually a shared global model φ ∈ Θ is learned that can then be used to train task-specific parameters.
In gradient-based meta-learning (GBML) BID11 , φ is a meta-initialization such that a few stochastic gradient steps on a few samples from a new task suffice to learn a good task-specific model.
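One simple instantiation of learning such a shared initialization is a Reptile-style meta-update; the sketch below is our illustration of the general idea, not the algorithm analyzed in this paper, and sample_task_grad is an assumed callable returning a stochastic gradient of the current task's loss.

import numpy as np

def reptile_style_meta_update(phi, sample_task_grad, num_inner_steps=5, inner_lr=0.01, outer_lr=0.1):
    # Adapt a copy of the meta-initialization phi on one task with a few gradient steps,
    # then move phi toward the adapted parameters.
    theta = phi.copy()
    for _ in range(num_inner_steps):
        theta = theta - inner_lr * sample_task_grad(theta)
    return phi + outer_lr * (theta - phi)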
GBML is now used in a variety of LTL domains such as vision BID18 BID21 BID17 , federated learning BID7 , and robotics BID0 . However, its simplicity also raises many practical and theoretical questions concerning what task-relationships it is able to exploit and in which settings it may be expected to succeed.
While theoretical LTL has a long history BID4 BID19 BID22 , there has recently been an effort to understand GBML in particular. This has naturally led to online convex optimization (OCO) (Zinkevich, 2003) , either directly BID12 BID16 or via online-to-batch conversion to statistical LTL BID16 BID9 . These efforts all consider learning a shared initialization of a descent method; BID12 then prove learnability of a meta-learning algorithm, while BID16 and BID9 give meta-test-time performance guarantees.
However, this line of work has so far considered at most a very restricted, if natural, notion of task-similarity: closeness to a single fixed point in the parameter space. We introduce a new theoretical framework, Averaged-Regret Upper-Bound Analysis (ARUBA), that enables the derivation of meta-learning algorithms that can provably take advantage of much more sophisticated task-structure. Expanding significantly upon the work of BID16 , ARUBA treats meta-learning as the online learning of a sequence of losses that each upper bound the regret on a single task. These bounds frequently have convenient functional forms that are (a) nice enough for us to easily draw on the existing OCO literature and (b) strongly dependent on both the task-data and the meta-initialization, thus encoding task-similarity in a mathematically accessible way.
Using ARUBA we provide new or dramatically improved meta-learning algorithms in the following settings:
• Adaptive Meta-Learning: A major drawback of previous work is the reliance on knowing the task-similarity beforehand to set the learning rate BID12 or regularization BID9 , or the use of a suboptimal guess-and-tune approach based on the doubling trick BID16 . ARUBA yields a simple and efficient gradient-based algorithm that eliminates the need to guess the task-similarity by learning it on-the-fly.
• Statistical LTL: ARUBA allows us to leverage powerful results in online-to-batch conversion BID27 BID15 to derive new upper-bounds on the transfer risk when using GBML for statistical LTL BID4 , including fast rates in the number of tasks when the task-similarity is known and fully high-probability guarantees for a class of losses that includes linear regression. These results improve directly upon the guarantees of BID16 and BID9 for similar or identical GBML algorithms.
• LTL in Dynamic Environments: Many practical applications of GBML include settings where the optimal initialization may change over time due to a changing task-environment BID0 . However, current theoretical work on GBML has only considered learning a fixed initialization BID12 BID9 . ARUBA reduces the problem of meta-learning in changing environments to a dynamic regret-minimization problem, for which there exists a vast array of online algorithms with provable guarantees.
• Meta-Learning the Task Geometry: A recurring theme in parameter-transfer LTL is the idea that certain model weights, such as those encoding a shared representation, are common to all tasks, whereas others, such as those performing a task-specific classification, need to be updated on each one. However, by simply using a fixed initialization we are forced to re-learn this structure on every task. Using ARUBA we provide an algorithm that can learn and take advantage of such structure by adaptively determining which directions in parameter-space need to be updated. We further provide a fully adaptive, per-coordinate variant that may be viewed as an analog for Reptile BID21 of the Meta-SGD modification of MAML BID11 BID18 , which learns a per-coordinate learning rate; in addition to its provable guarantees, our version is more efficient and can be applied to a variety of GBML methods.
In the current paper we provide in Section 2 an introduction to ARUBA and use it to show guarantees for adaptive and statistical LTL. We defer our theory for meta-learning in dynamic environments and of different
task-geometries, as well as our empirical results, to the full version of the paper. | Practical adaptive algorithms for gradient-based meta-learning with provable guarantees. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:719 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Unsupervised monocular depth estimation has made great progress since deep learning became involved.
Training with binocular stereo images is considered a good option as the data can be easily obtained.
However, the depth or disparity prediction results show poor performance at object boundaries.
The main reason is related to the handling of occlusion areas during training.
In this paper, we propose a novel method to overcome this issue.
Exploiting a property of disparity maps, we generate an occlusion mask to block the back-propagation of the occlusion areas during image warping.
We also design new networks with flipped stereo images to induce the networks to learn occluded boundaries.
We show that our method achieves clearer boundaries and better evaluation results on the KITTI driving dataset and the Virtual KITTI dataset.
Monocular depth estimation becomes an active research topic as deep learning is applied in various computer vision tasks.
It has many applications, from navigation through to scene understanding.
A single traditional camera can be a cheaper alternative to the expensive LIDAR sensor for automotive cars if accurate estimation can be achieved.
Meanwhile, a single camera simplifies the design of the depth estimation solution, which can therefore be adopted quite widely at a low cost.
One straight-forward way to train deep depth estimation models is to use ground truth depth images as the supervision signals BID1 .
However, supervised deep learning methods are hungry for massive amounts of data with ground truth.
Collecting large datasets with ground-truth depth in varied real scenarios is challenging and expensive.
Instead, training using stereo images without depth label is an alternative option.
BID7 proposed a method to exploit the left-right consistency of stereo images to tackle the monocular depth estimation, which achieved quite promising results.
However, the depth predicted by their method has blurred boundaries.
The issue is mainly due to the occlusions during the image warping.
Though it can be alleviated to some extent with proper post-processing, the fundamental problem is not well addressed. In this paper, we propose a new method to overcome the blurred boundaries when using stereo pairs to train the monocular depth model.
An example is illustrated in FIG0 .
During the image warping, we generate an occlusion mask using the disparity map to block the inappropriate back-propagation gradients for occlusion areas.
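One common way to derive such a mask from the disparity maps is a left-right consistency check; the sketch below is our illustration of that idea and is not necessarily the exact construction used in this paper.

import numpy as np

def occlusion_mask_from_disparity(disp_left, disp_right, threshold=1.0):
    # A left-image pixel is marked occluded when the disparity it lands on in the
    # right image disagrees with its own disparity by more than `threshold`.
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    target_x = np.clip(np.round(xs - disp_left).astype(int), 0, w - 1)
    disp_reprojected = np.take_along_axis(disp_right, target_x, axis=1)
    mask = (np.abs(disp_left - disp_reprojected) < threshold).astype(np.float32)
    return mask   # 1 = keep the gradient, 0 = block back-propagation during warping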
However, the mask alone cannot guarantee clear boundaries, as there is no constraint on the masked areas.
Then we design new networks to fully exploit the information of stereo images.
With flipped stereo pairs, the network is induced to learn clear boundaries for occlusion areas.
Our method provides a solution to the fundamental learning difficulty of occluded areas introduced by image warping in depth estimation.
Empirical evaluation on KITTI driving dataset BID6 ) and Virtual KITTI dataset BID4 ) demonstrates the effectiveness of our approach.
Moreover, we find the depth labels of KITTI 2015 are usually very sparse near object boundaries, and are therefore not very sensitive for evaluating the clearness of boundaries.
In this work, we present an occlusion mask and flip-over training scheme to enable effective learning of object boundaries when using image warping.
With our new network, our model achieves state-of-the-art results using only stereo images.
Moreover, as warping-based image reconstruction is commonly used in depth estimation problems, our method provides a solution to the fundamental difficulty of occluded areas introduced by image warping. In the future, our method can be incorporated with more accurate networks trained on trinocular data (temporal stereo sequences) such as BID25 , BID17 and BID8 , which would further boost the accuracy.
6 SUPPLEMENTARY MATERIALS | This paper proposes a mask method which solves the previous blurred results of unsupervised monocular depth estimation caused by occlusion | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:72 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this work, we propose a self-supervised method to learn sentence representations with an injection of linguistic knowledge.
Multiple linguistic frameworks propose diverse sentence structures from which semantic meaning might be expressed out of compositional words operations.
We aim to take advantage of this linguist diversity and learn to represent sentences by contrasting these diverse views.
Formally, multiple views of the same sentence are mapped to close representations.
On the contrary, views from other sentences are mapped further.
By contrasting different linguistic views, we aim at building embeddings which better capture semantic and which are less sensitive to the sentence outward form.
We propose to learn sentence embeddings by contrasting multiple linguistic representations.
The motivation is to benefit from the diversity of linguistic structures to discard noise inherent to each individual representation.
We aim at encoding high-level representations by aligning the underlying shared information from multiple views.
As illustrated in Figure 1 , we train our model with a contrastive framework which aims at mapping close input sentences to close representations while separating unrelated sentences.
In Natural Language Processing (NLP), this framework has been widely used to learn word representations (Mikolov et al., 2013a; b) for example.
This model relies on the distributional hypothesis which conjectures that words within similar context share similar meaning.
Such framework has also been extended to sentences with the similar hypothesis that the meaning can be inferred from the context sentences (Logeswaran & Lee, 2018; .
We propose to extend this framework by assuming that different views of the same sentence should lead to close representations.
We consider dependency trees, a linguistic framework that describes the compositional structure of a sentence.
As illustrated in Figure 1 , in this framework, the sentence is mathematically described as an oriented acyclic graph where the nodes are words and edges describe the relations between words.
Such structure has benefited from an important attention in the NLP community and efficient parser tools for various languages are available, which makes it possible to obtain such information almost freely in the sense it does not require additional hand annotated data.
Tree representations are then mapped in a shared embedding space using appropriate Tree LSTM networks introduced in Tai et al. (2015) .
Model parameters are learned using a discriminating objective as proposed in Logeswaran & Lee (2018) .
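As a rough sketch of a contrastive objective of this kind (an InfoNCE-style loss; the temperature, the use of in-batch negatives, and the function name are our assumptions rather than the paper's exact discriminating objective):

import numpy as np

def multiview_contrastive_loss(view_a, view_b, temperature=0.1):
    # view_a, view_b: (batch, dim) embeddings of two linguistic views of the same sentences,
    # e.g. a sequential encoding and a dependency-tree encoding.
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                          # similarity of every pair in the batch
    logits = logits - logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                     # matching views are the positives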
We exploit the diversity of linguistic structures to build sentence representations.
Our method shows promising results and does not require hand-annotated data.
More scalable implementations might be considered to explore more experimentation setups.
Although our results are below state-of-the-art performance, our model is trained on only a small proportion of the BookCorpus sentences, as stated in Figure 2.
Greater exposure to the training data and an extended training time might benefit the downstream and probing scores.
Other linguistic structures might also be tested, such as constituency trees associated with N-ary Tree LSTMs, or Tree LSTMs improved with an attention mechanism.
A COMPUTING METHOD FOR TREE LSTM
We implemented a batching procedure to speed up Tree LSTM computations.
Groups of nodes are computed sequentially to ensure all node children have already been computed.
Nodes are grouped by their distance to the root node.
First, leaf nodes with the highest depth are computed, and inner nodes are then computed progressively.
The Tree LSTM cell implementation is specifically designed to process all nodes of each step simultaneously.
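A minimal sketch of this grouping step is given below (the parent-array tree encoding and the function name are our own assumptions):

from collections import defaultdict

def group_nodes_by_depth(parents):
    # parents[i] is the parent index of node i, with -1 marking the root.
    depth = {}
    def get_depth(i):
        if i not in depth:
            depth[i] = 0 if parents[i] == -1 else 1 + get_depth(parents[i])
        return depth[i]
    levels = defaultdict(list)
    for i in range(len(parents)):
        levels[get_depth(i)].append(i)
    # Deepest nodes first, so children are always computed before their parents.
    return [levels[d] for d in sorted(levels, reverse=True)]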
Figure 4: The batching procedure to optimize the graph computation.
For each batch, the computation is decomposed into steps which ensure that every node's dependents have already been computed.
At each step, nodes with the same depth to the root are computed in a single operation and the output is fed to the next computational step. | We aim to exploit the diversity of linguistic structures to build sentence representations. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:720 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The peripheral nervous system represents the input/output system for the brain.
Cuff electrodes implanted on the peripheral nervous system allow observation and control over this system, however, the data produced by these electrodes have a low signal-to-noise ratio and a complex signal content.
In this paper, we consider the analysis of neural data recorded from the vagus nerve in animal models, and develop an unsupervised learner based on convolutional neural networks that is able to simultaneously de-noise and cluster regions of the data by signal content.
Recent advances have made chronic observation [1] of, and limited control [2] over the peripheral nervous system possible.
To characterise the dynamics of the signals passing to and from the brain, we wish to categorise patterns of activity within the peripheral nervous system.
However, consistent detection of single neuron activity remains a challenge with current cuff electrode technology suitable for in vivo neural data acquisition.
The relative position of an extracellular recording electrode and neuronal axons close enough to sit above the noise floor affects the polarity of presented signal components [3] , and their summation at the electrode occludes the presence of individual action potentials during periods of neuronal activity.
Instead, local field potentials (LFPs), the combination of many neuronal responses arriving concurrently at the electrode are observed.
These population level responses are potentially informationally richer [4] , but preclude the use of conventional spike-sorting [5] methodologies on such data.
Instead, we develop a method based on convolutional neural networks (CNN) that simultaneously de-noises the data and categorises the observed signals.
We train this model on approximately one hour of data taken from a single subject approximately twelve hours post surgical implantation.
We further show that it is applicable without further training to data from a second subject thirty days post surgical implantation, demonstrating cross-time, cross-subject applicability of the trained models.
The recent development of chronic neural interfacing implant systems that are able to record neural signals over period of months or years will create large sets of primarily unlabelled data, with numerous signals occurring over a range of time-scales.
These data are currently un-characterisable with standard methods (e.g. spike-sorting).
Previous work in this field has relied on mixing categorical and real-valued latent vectors.
Westhuizen et al [9] used an adversarial auto-encoder to project neural data to labels, incorporating an approximately one-hot encoding in the latent space but also including an approximately Gaussian vector to allow reconstruction.
Since both vectors are trained simultaneously, the Gaussian component of the latent space may contain the relevant labelling information for one or more true classes.
InfoGAN [10] , a GAN implementation in which the discriminator identifies components of the latent space, is similarly capable of a one-hot latent representation of the data, but without constraints on the information carried within the one-hot encoding.
The Coordinate-VAE approach, by restricting the information available to the encoder creating the non-categorical portion of the latent space, allows unsupervised characterisation of the signals in time-series data, while simultaneously de-noising the signal.
Models are transferable between individuals, suggesting that we may gain the ability to pre-train large models for the reduction to latent space representations.
As shown in Figure 3 , there is some evidence to suggest that these latent space representations are also informative for physiological features.
We might then rapidly train a final classifier or agent for monitoring or control of individual patients, as in Pandarianth et al [11] , in which an auto-encoder is used as a dimension reduction technique on collections of neural spiking data acquired from macaque motor and pre-motor cortices, following which a GLM is used to map the complex latent space to spiking activity. | Unsupervised analysis of data recorded from the peripheral nervous system denoises and categorises signals. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:721 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Adversarial attacks on convolutional neural networks (CNN) have gained significant attention and there have been active research efforts on defense mechanisms.
Stochastic input transformation methods have been proposed, where the idea is to recover the image from adversarial attack by random transformation, and to take the majority vote as consensus among the random samples.
However, the transformation improves the accuracy on adversarial images at the expense of the accuracy on clean images.
While it is intuitive that the accuracy on clean images would deteriorate, the exact mechanism in which how this occurs is unclear.
In this paper, we study the distribution of softmax induced by stochastic transformations.
We observe that with random transformations on the clean images, although the mass of the softmax distribution could shift to the wrong class, the resulting distribution of softmax could be used to correct the prediction.
Furthermore, on the adversarial counterparts, with the image transformation, the resulting shapes of the distribution of softmax are similar to the distributions from the clean images.
With these observations, we propose a method to improve existing transformation-based defenses.
We train a separate lightweight distribution classifier to recognize distinct features in the distributions of softmax outputs of transformed images.
Our empirical studies show that our distribution classifier, by training on distributions obtained from clean images only, outperforms majority voting for both clean and adversarial images.
Our method is generic and can be integrated with existing transformation-based defenses.
There has been widespread use of convolutional neural networks (CNN) in many critical real-life applications such as facial recognition (Parkhi et al., 2015) and self-driving cars (Jung et al., 2016) .
However, it has been found that CNNs could misclassify the input image when the image has been corrupted by an imperceptible change (Szegedy et al., 2013) .
In other words, CNNs are not robust to small, carefully-crafted image perturbations.
Such images are called adversarial examples and there have been active research efforts in designing attacks that show the susceptibility of CNNs.
Correspondingly, many defense methods that aim to increase robustness to attacks have been proposed.
Stochastic transformation-based defenses have shown considerable success in recovering from adversarial attacks.
Under these defenses, the input image is transformed in a certain way before feeding into the CNN, such that the transformed adversarial image would no longer be adversarial.
As the transformation is random, by feeding in samples of the transformed image through the CNN, we accumulate a set of CNN softmax outputs and predictions.
As such, existing transformation-based defenses take a majority vote of the CNN predictions from the randomly transformed image (Prakash et al., 2018; Guo et al., 2017) .
Transformation-based defenses are desirable as there is no need to retrain the CNN model.
However, they suffer from deterioration of performance on clean images.
With increasing number of pixel deflections (Prakash et al., 2018) , there is improvement on the performance on adversarial images, but this comes with a rapid deterioration of performance on clean images.
In transformation-based defenses, the image is transformed stochastically where each sample t x is drawn from the distribution T (x) and then fed to the CNN (blue box).
In our defense method, for each input image x, we build the marginal distribution of softmax probabilities from the transformed samples t_x^(1) , · · · .
The distributions are fed to a separate distribution classifier which performs the final classification.
Note that our distribution classifier is trained only on distributions obtained from clean images while tested on both clean and adversarial images.
The exact mechanism of the deterioration in performance on clean images is unclear.
We believe that the softmax distribution induced by the random transformation contains rich information which is not captured by majority vote that simply counts the final class predictions from the transformed samples.
Now, an interesting question is whether the features in the distribution of softmax could be better utilized.
In this paper, to elucidate how the deterioration in accuracy on clean images occurs, we study the effects of the random image transformations on the distribution of the softmax outputs and make some key observations.
After the image transform, some clean images show distributions of softmax with modes at an incorrect class, reflecting the deterioration in voting accuracy as observed before.
While the shifting of the distribution mode to the incorrect class is detrimental to the voting prediction, the resulting distribution of softmax contains features that is useful for correcting the prediction.
In addition, we observe that the adversarial counterparts show similar shifts in the distributions of softmax as the clean images.
We also look into the distribution shapes for the transformed clean and adversarial images and find that they are similar.
With these observations, we propose a simple method to improve existing transformation-based defenses, as illustrated in Figure 1 .
We train a separate lightweight distribution classifier to recognize distinct features in the distributions of softmax outputs of transformed clean images and predict the class label.
Without retraining the original CNN, our distribution classifier improves the performance of transformation-based defenses on both clean and adversarial images.
On the MNIST dataset, the improvements in accuracy over majority voting are 1.7% and 5.9% on the clean and adversarial images respectively.
On CIFAR10, the improvements are 6.4% and 3.6% respectively.
Note that the distributions obtained from the adversarial images are not included in the training of the distribution classifier.
In real-world settings, the type of attack is not known beforehand.
Training the distribution classifier on a specific attack may cause the classifier to overfit to that attack.
Hence, it is an advantage that our defense method is attack-agnostic.
Our experimental findings show that the features of the distribution in the softmax are useful and can be used to improve existing transformation-based defenses.
Our contributions are as follows:
1. We analyze the effects of image transformation in existing defenses on the softmax outputs for clean and adversarial images, with a key finding that the distributions of softmax obtained from clean and adversarial images share similar features.
2. We propose a method that trains a distribution classifier on the distributions of the softmax outputs of transformed clean images only, but show improvements in both clean and adversarial images.
This method is agnostic to the attack method, does not require retraining of the CNN and can be integrated with existing transformation-based methods.
In the following section, we describe our experimental setup to evaluate the performance on clean and adversarial images with our distribution classifier method.
Adversarial attacks on convolutional neural networks have gained significant research attention and stochastic input transformation defenses have been proposed.
However, with transformation-based defenses, the performance on clean images deteriorates and the exact mechanism in which how this happens is unclear.
In this paper, we conduct in-depth analysis on the effects of stochastic transformation-based defenses on the softmax outputs of clean and adversarial images.
We observe that after image transformation, the distributions of softmax obtained from clean and adversarial images share similar distinct features.
Exploiting this property, we propose a method that trains a distribution classifier on the distributions of the softmax outputs of transformed clean images only, but show improvements in both clean and adversarial images over majority voting.
In our current work, we have considered untargeted attacks on the CNN and it is interesting to test our distribution classifier method with targeted attacks. | We enhance existing transformation-based defenses by using a distribution classifier on the distribution of softmax obtained from transformed images. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:722 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Reinforcement learning in multi-agent scenarios is important for real-world applications but presents challenges beyond those seen in single-agent settings.
We present an actor-critic algorithm that trains decentralized policies in multi-agent settings, using centrally computed critics that share an attention mechanism which selects relevant information for each agent at every timestep.
This attention mechanism enables more effective and scalable learning in complex multi-agent environments, when compared to recent approaches.
Our approach is applicable not only to cooperative settings with shared rewards, but also individualized reward settings, including adversarial settings, and it makes no assumptions about the action spaces of the agents.
As such, it is flexible enough to be applied to most multi-agent learning problems
Reinforcement learning has recently made exciting progress in many domains, including Atari games , the ancient Chinese board game, Go , and complex continuous control tasks involving locomotion BID17 BID26 BID12 .
While most reinforcement learning paradigms focus on single agents acting in a static environment (or against themselves in the case of Go), real-world agents often compete or cooperate with other agents in a dynamically shifting environment.
In order to learn effectively in multi-agent environments, agents must not only learn the dynamics of their environment, but also those of the other learning agents present. To this end, several approaches for multi-agent reinforcement learning have been developed.
The simplest approach is to train each agent independently to maximize their individual reward, while treating other agents as part of the environment.
However, this approach violates the basic assumption underlying reinforcement learning, that the environment should be stationary and Markovian.
Any single agent's environment is dynamic and nonstationary due to other agents' changing policies.
As such, standard algorithms developed for stationary Markov decision processes fail. At the other end of the spectrum, all agents can be collectively modeled as a single agent whose action space is the joint action space of all agents BID2 .
While allowing coordinated behaviors across agents, this approach is not scalable due to the action space size increasing exponentially with the number of agents.
It also demands a high degree of communication during execution, as the central policy must collect observations from and distribute actions to the individual agents.
In real-world settings, this demand can be problematic. Recent work BID20 attempts to combine the strengths of these two approaches.
In particular, a critic (or a number of critics) is centrally learned with information from all agents.
The actors, however, receive information only from their corresponding agents.
Thus, during testing, executing the policies does not require the knowledge of other agents' actions.
This paradigm circumvents the challenge of non-Markovian and non-stationary environments during learning.
Despite this progress, however, algorithms for multi-agent reinforcement learning are still far from being scalable (to a larger number of agents) and being generically applicable to environments and tasks that are co-operative (sharing a global reward), competitive, or mixed. Our approach extends these prior works in several directions.
The main idea is to centrally learn a critic with an attention mechanism.
The intuition behind our idea is that in many real-world environments, it is beneficial for an agent to know which other agents it should pay attention to.
For example, a soccer defender needs to pay attention to attackers in their vicinity as well as the player with the ball, while she/he rarely needs to pay attention to the opposing team's goalie.
The specific attackers that the defender is paying attention to can change at different parts of the game, depending on the formation and strategy of the opponent.
A typical centralized approach to multi-agent reinforcement learning does not take these dynamics into account, instead simply considering all agents at all timepoints.
Our attention mechanism is able to dynamically select which agents to attend to at each time point, improving performance in multi-agent domains with complex interactions. The proposed approach has an input space linearly increasing with respect to the number of agents, as opposed to the quadratic increase in a previous approach BID20.
It also works well in co-operative, competitive, and mixed environments, exceeding the capability of some prior work that focuses only on co-operative environments.
We have validated our approach on two simulated environments and tasks.
We plan to release the code for both the model and the environments after the reviewing period ends.
The rest of the paper is organized as follows.
In section 2, we discuss related work, followed by a detailed description of our approach in section 3.
We report experimental studies in section 4 and conclude in section 5.
We propose an algorithm for training decentralized policies in multi-agent settings.
The key idea is to utilize attention in order to select relevant information for estimating critics.
We analyze the performance of the proposed approach with respect to the number of agents, different configurations of rewards, and the span of relevant observational information.
Empirical results are promising and we intend to extend to highly complicated and dynamic environments.
[Algorithm excerpt; update equations not preserved in extraction: for j = 1 . . . num critic updates, update the critic, update the policies, and update the target critic and policy parameters.] | We propose an approach to learn decentralized policies in multi-agent settings using attention-based critics and demonstrate promising results in environments with complex interactions. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:723 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The recent expansion of machine learning applications to molecular biology proved to have a significant contribution to our understanding of biological systems, and genome functioning in particular.
Technological advances enabled the collection of large epigenetic datasets, including information about various DNA binding factors (ChIP-Seq) and DNA spatial structure (Hi-C).
Several studies have confirmed the correlation between DNA binding factors and Topologically Associating Domains (TADs) in DNA structure.
However, the information about physical proximity represented by genomic coordinate was not yet used for the improvement of the prediction models.
In this research, we focus on Machine Learning methods for prediction of folding patterns of DNA in a classical model organism Drosophila melanogaster.
The paper considers linear models with four types of regularization, Gradient Boosting and Recurrent Neural Networks for the prediction of chromatin folding patterns from epigenetic marks.
The bidirectional LSTM RNN model outperformed all the models and gained the best prediction scores.
This demonstrates the utilization of complex models and the importance of memory of sequential DNA states for the chromatin folding.
We identify informative epigenetic features that lead to the further conclusion of their biological significance.
Machine Learning algorithms are used nowadays in multiple disciplines.
In particular, the utilization of these methods in molecular biology has a significant impact on our understanding of cell processes (Eraslan et al., 2019) .
Investigating the large-scale DNA structure, i.e. the spatial organization of the genome, or chromatin, is one of the challenging tasks in the field.
The relevance of this research is supported by multiple observations of interconnections between gene regulation, inheritance, disease and chromatin structure (Lupiáñez et al., 2016) .
Although the chromatin structure is folded 10^4 − 10^5 times, it maintains fundamental and vital processes of the cell.
Various regulation mechanisms were shown to act through the three-dimensional structure formation.
High-throughput experiments capturing contacting fragments of the genome, such as Hi-C, have unravelled many principles of chromosomal folding (Lieberman-Aiden et al., 2009) .
Although Hi-C-like techniques were developed ten years ago, the experiments of high quality started to be published mainly during the last several years, and the protocol is still elaborate and expensive.
Hi-C has also revealed that chromosomes are subdivided into a set of self-interacting domains called Topologically Associating Domains (TADs) (Ulianov et al., 2016 ) that can be seen in Figure 1 .
TADs were shown to correlate with units of replication timing regulation in mammals (Pope et al., 2014) , as well as with either active or repressed epigenetic domains in Drosophila (Sexton et al., 2012) .
Various factors were shown to contribute to structure formation.
ChIP-Seq is one of the highthroughput experiments dedicated to the detection of factors binding on the DNA in vivo.
The rapid growth of its data enables exploring the chromatin structure with more sophisticated and complex methods such as Machine Learning.
The datasets for various factors such as ChIP-Seq experiments for histone modifications become increasingly available in public databases (Ho et al., 2014) .
The relationship between TADs and epigenetics marks has been investigated recently (Ulianov et al., 2016) .
However, the mechanisms that underlie partitioning of the genome into TADs remain poorly understood.
Moreover, there is no comprehensive work investigating all the factors that are publicly available yet.
Figure 1: Typical representation of Hi-C interaction map as a genome-wide contact matrix, or a heatmap.
Bright triangles can be visible across the diagonal.
These structures are called TADs (topologically associating domains) and interpreted as compact globules of interacting chromatin (Drosophila melanogaster S2-DRSC cells).
This study focuses on bringing insights into the 3D chromatin structure using Machine Learning.
The goal is to explore the principles of TAD folding and the role of epigenetics in this process.
To that end, the analysis of Drosophila melanogaster chromatin was performed using Linear Regression models and Recurrent Neural Networks.
Quality metrics were calculated, and informative features were investigated to identify which chromatin marks are most significant in predicting information about TADs.
In addition, the same techniques might be used to explore the 3D chromatin structure of mammals and humans in particular.
Such reconstruction of the information about Hi-C map might be useful not only for understanding the chromatin structure formation but can also have various practical medical applications.
For example, gliomagenesis and limb malformations in humans were demonstrated to be caused by chromosomal topology disruption (Krijger & De Laat, 2016) .
The ChIP-Seq data usage for chromatin folding patterns prediction was confirmed by training ML models with dignified evaluation scores.
Moreover, the results were interpretable and biologically relevant.
Linear Regression models, Gradient Boosting Trees and Recurrent Neural Networks were for the first time applied to our new dataset of chromatin characteristics.
All models have performed better than constant prediction with the mean value of the training dataset.
The utilization of memory of previous states linearly ordered by DNA molecule improves the prediction significantly as the best results were obtained by bidirectional LSTM RNN model.
The optimal input window size was also equal to six which has a biological meaning as it strongly aligns with the average TAD length.
Feature importance analysis of the input ChIP-Seq data was conducted.
The Linear models weights provided a biologically meaningful prioritization of the ChIP-Seq.
Moreover, Linear Regression trained with L1 regularization detected one ChIP-Seq feature, Chriz, as the most influential on both of the datasets.
The results of applying Neural Network models allowed the evaluation of the biological impact of the features.
Exploration of the transferability of the models between different cell types and species might be an interesting development of this work.
More input features of different biological nature, such as DNA sequence itself, is another direction of research.
The code is open sourced and the implemented pipeline can be easily adapted to any similar biological dataset of chromatin features.
A APPENDIX. Figure 10 (caption): MSE, MAE, R^2, and weighted MSE metrics for the various ML model experiments.
Here "LR" stands for Linear Regression models, "GB-X" for Gradient Boosting models with X estimators, and "* best" means that the presented scores are for the best of the models of type *. | We apply RNN to solve the biological problem of chromatin folding patterns prediction from epigenetic marks and demonstrate for the first time that utilization of memory of sequential states on DNA molecule is significant for the best performance. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:724 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Previous work showed empirically that large neural networks can be significantly reduced in size while preserving their accuracy.
Model compression became a central research topic, as it is crucial for deployment of neural networks on devices with limited computational and memory resources.
The majority of the compression methods are based on heuristics and offer no worst-case guarantees on the trade-off between the compression rate and the approximation error for an arbitrarily new sample.
We propose the first efficient, data-independent neural pruning algorithm with a provable trade-off between its compression rate and the approximation error for any future test sample.
Our method is based on the coreset framework, which finds a small weighted subset of points that provably approximates the original inputs.
Specifically, we approximate the output of a layer of neurons by a coreset of neurons in the previous layer and discard the rest.
We apply this framework in a layer-by-layer fashion from the top to the bottom.
Unlike previous works, our coreset is data independent, meaning that it provably guarantees the accuracy of the function for any input $x\in \mathbb{R}^d$, including an adversarial one.
We demonstrate the effectiveness of our method on popular network architectures.
In particular, our coresets yield 90% compression of the LeNet-300-100 architecture on MNIST while improving the accuracy.
Neural networks today are the most popular and effective instrument of machine learning with numerous applications in different domains.
Since Krizhevsky et al. (2012) used a model with 60M parameters to win the ImageNet competition in 2012, network architectures have been growing wider and deeper.
The vast overparametrization of neural networks offers better convergence (AllenZhu et al., 2019) and better generalization (Neyshabur et al., 2018) .
The downside of the overparametrization is its high memory and computational costs, which prevent the use of these networks in small devices, e.g., smartphones.
Fortunately, it was observed that a trained network could be reduced to smaller sizes without much accuracy loss.
Following this observation, many approaches to compress existing models have been proposed (see Gale et al. (2019) for a recent review on network sparsification, and Mozer & Smolensky (1989) ; Srivastava et al. (2014) ; Yu et al. (2018) ; He et al. (2017) for neural pruning).
Although a variety of model compression heuristics have been successfully applied to different neural network models, such as Jacob et al. (2018) ; Han et al. (2015) ; Alvarez & Salzmann (2017) , these approaches generally lack strong provable guarantees on the trade-off between the compression rate and the approximation error.
The absence of worst-case performance analysis can potentially be a glaring problem depending on the application.
Moreover, data-dependent methods for model compression (e.g., Mozer & Smolensky (1989) ; Srivastava et al. (2014) ; Hu et al. (2016) ; Yu et al. (2018) ; Baykal et al. (2018) ) rely on the statistics presented in a data set.
Hence, these methods are vulnerable to adversarial attacks (Szegedy et al., 2014) , which design inputs that do not follow these statistics.
Ideally, a network compression framework should
1) provide provable guarantees on the tradeoff between the compression rate and the approximation error,
2) be data independent,
3) provide high compression rate, and
4) be computationally efficient.
To address these goals, we propose an efficient framework with provable guarantees for neural pruning, which is based on the existing theory of coresets such as (Braverman et al., 2016) .
Coresets decrease massive inputs to smaller instances while maintaining a good provable approximation of the original set with respect to a given function.
Our main idea is to treat neurons of a neural network as inputs in a coreset framework.
Specifically, we reduce the number of neurons in layer i by constructing a coreset of neurons in this layer that provably approximates the output of neurons in layer i + 1 and discarding the rest.
The coreset algorithm provides us with the choice of neurons in layer i and with the new weights connecting these neurons to layer i + 1.
The coreset algorithm is applied layer-wise from the bottom to the top of the network.
The size of the coreset, and consequently the number of remaining neurons in layer i, is provably related to the approximation error of the output for every neuron in layer i + 1.
Thus, we can theoretically derive the trade-off between the compression rate and the approximation error of any layer in the neural network.
The coreset approximation of neurons provably holds for any input; thus our compression is data-independent.
Similar to our approach, Baykal et al. (2018) used coresets for model compression.
However, their coresets are data-dependent; therefore, they cannot guarantee robustness over inputs.
Moreover, they construct coresets of weights, while our approach constructs coresets of neurons.
Neural pruning reduces the size of the weight tensors, while keeping the network dense.
Hence the implementation of the pruned network requires no additional effort.
Implementing networks with sparse weights (which is the result of weight pruning) is harder and in many cases does not result in actual computational savings.
Our empirical results on LeNet-300-100 for MNIST (LeCun et al., 1998) and VGG-16 (Simonyan & Zisserman, 2014) for CIFAR-10 (Krizhevsky, 2009 ) demonstrate that our framework based on coresets of neurons outperforms sampling-based coresets by improving compression without sacrificing the accuracy.
Finally, our construction is very fast; it took about 56 sec. to compress each dense layer in the VGG-16 network using the platform specified in the experimental section.
Our Contributions: We propose an efficient, data-independent neural pruning algorithm with a provable trade-off between the compression rate and the output approximation error.
This is the first framework to perform neural pruning via coresets.
We provide theoretical compression rates for some of the most popular neural activation functions summarized in Table 1.
2 RELATED WORK 2.1 CORESETS
Our compression algorithm is based on a data summarization approach known as coresets.
Over the past decade, coreset constructions have been recognized for high achievements in data reduction in a variety of applications, including k-means, SVD, regression, low-rank approximation, PageRank, convex hull, and SVM; see details in Phillips (2016) .
Many of the non-deterministic coreset based methods rely on the sensitivity framework, in which elements of the input are sampled according to their sensitivity (Langberg & Schulman, 2010; Braverman et al., 2016; Tolochinsky & Feldman, 2018) , which is used as a measure of their importance.
The sampled elements are usually reweighted afterwards.
We proposed the first neural pruning algorithm with provable trade-offs between the compression rate and the approximation error for any future test sample.
We base our compression algorithm on the coreset framework and construct coresets for most common activation functions.
Our tests on ReLU networks show high compression rates with no accuracy loss, and our theory guarantees the worst case accuracy vs. compression trade-off for any future test sample, even an adversarial one.
In this paper we focused on pruning neurons.
In future work, we plan to extend the proposed framework to pruning filters in CNNs, to composition of layers, and to other architectures.
Putting it all together: by applying Theorem 1 with X = B_β(0), we obtain that, with probability at least 1 − δ, the stated bound holds for every x ∈ B_β(0).
Assume that the last equality indeed holds; the claim then follows for every x ∈ B_β(0).
A.3 PROOF OF COROLLARY 8
We assume that φ is a non-decreasing function.
Otherwise, we apply the proof below for the non-decreasing function φ* = −φ and corresponding weight w*(p) = −w(p) for every p ∈ P.
The correctness follows since w(p)φ(p^T x) = w*(p)φ*(p^T x) for every p ∈ P.
Indeed, put x ∈ B_β(0), and φ non-decreasing.
Equation 6 is obtained by separating each sum into points with positive and negative weights and applying the Cauchy-Schwarz inequality.
Next, we bound points with positive and negative weights separately using Theorem 7. | We propose an efficient, provable and data independent method for network compression via neural pruning using coresets of neurons -- a novel construction proposed in this paper. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:725 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Planning problems in partially observable environments cannot be solved directly with convolutional networks and require some form of memory.
But, even memory networks with sophisticated addressing schemes are unable to learn intelligent reasoning satisfactorily due to the complexity of simultaneously learning to access memory and plan.
To mitigate these challenges we propose the Memory Augmented Control Network (MACN).
The network splits planning into a hierarchical process.
At a lower level, it learns to plan in a locally observed space.
At a higher level, it uses a collection of policies computed on locally observed spaces to learn an optimal plan in the global environment it is operating in.
The performance of the network is evaluated on path planning tasks in environments in the presence of simple and complex obstacles and in addition, is tested for its ability to generalize to new environments not seen in the training set.
A planning task in a partially observable environment involves two steps: inferring the environment structure from local observation and acting based on the current environment estimate.
In the past, such perception-action loops have been learned using supervised learning with deep networks as well as deep reinforcement learning BID3, BID1.
Popular approaches in this spirit are often end-to-end (i.e. mapping sensor readings directly to motion commands) and manage to solve problems in which the underlying dynamics of the environment or the agent are too complex to model.
Approaches to learn end-to-end perception-action loops have been extended to complex reinforcement learning tasks such as learning how to play Atari games (Mnih et al., 2013a), as well as to imitation learning tasks like controlling a robot arm BID12.
Purely convolutional architectures (CNNs) perform poorly when applied to planning problems due to the reactive nature of the policies learned by them BID21, BID4.
The complexity of this problem is compounded when the environment is only partially observable, as is the case with most real world tasks.
In planning problems, when using a function approximator such as a convolutional neural network, the optimal actions are dependent on an internal state.
If one wishes to use a state-less network (such as a CNN) to obtain the optimal action, the input for the network should be the whole history of observations and actions.
Since this does not scale well, we need a network that has an internal state such as a recurrent neural network or a memory network.
BID20 showed that when learning how to plan in partially observable environments, it becomes necessary to use memory to retain information about states visited in the past.
Using recurrent networks to store past information and learn optimal control has been explored before in BID11.
While BID14 have shown that recurrent networks are Turing complete and are hence capable of generating any arbitrary sequence in theory, this does not always translate into practice.
Recent advances in memory augmented networks have shown that it is beneficial to use external memory with read and write operators that can be learned by a neural network over recurrent neural networks BID5, BID6.
Specifically, we are interested in the Differentiable Neural Computer (DNC) BID6, which uses an external memory and a network controller to learn how to read, write and access locations in the external memory.
The DNC is structured such that computation and memory operations are separated from each other.
Such a memory network can in principle be plugged into the convolutional architectures described above, and be trained end to end since the read and write operations are differentiable.
However, as we show in our work, directly using such a memory scheme with CNNs performs poorly for partially observable planning problems and also does not generalize well to new environments.
To address the aforementioned challenges we propose the Memory Augmented Control Network (MACN), a novel architecture specifically designed to learn how to plan in partially observable environments under sparse rewards.
Environments with sparse rewards are harder to navigate since there is no immediate feedback.
The intuition behind this architecture is that the planning problem can be split into two levels of hierarchy.
At a lower level, a planning module computes optimal policies using a feature rich representation of the locally observed environment.
This local policy along with a sparse feature representation of the partially observed environment is part of the optimal solution in the global environment.
Thus, the key to our approach is using a planning module to output a local policy which is used to augment the neural memory to produce an optimal policy for the global environment.
Our work builds on the idea of introducing options for planning and knowledge representation while learning control policies in MDPs BID16.
The ability of the proposed model is evaluated by its ability to learn policies (continuous and discrete) when trained in environments with the presence of simple and complex obstacles.
Further, the model is evaluated on its ability to generalize to environments and situations not seen in the training set.
The key contributions of this paper are:
1. A new network architecture that uses a differentiable memory scheme to maintain an estimate of the environment geometry and a hierarchical planning scheme to learn how to plan paths to the goal.
2. Experimentation to analyze the ability of the architecture to learn how to plan and generalize in environments with high dimensional state and action spaces.
2 METHODOLOGY
Section 2.1 outlines notation and formally states the problem considered in this paper.
Sections 2.2 and 2.3 briefly cover the theory behind value iteration networks and memory augmented networks.
Finally, in section 2.4 the intuition and the computation graph is explained for the practical implementation of the model.
Planning in environments that are partially observable and have sparse rewards with deep learning has not received a lot of attention.
Also, the ability of policies learned with deep RL to generalize to new environments is often not investigated.
In this work we take a step toward designing architectures that compute optimal policies even when the rewards are sparse, and thoroughly investigate the generalization power of the learned policy.
In addition we show our network is able to scale well to large dimensional spaces. The grid world experiments offer conclusive evidence about the ability of our network to learn how to plan in such environments.
We address the concern of oversimplifying our environment to a 2D grid world by experimenting with planning in a graph with no constraint on the state space or the action space.
We also show our model is capable of learning how to plan under continuous control.
In the future, we intend to extend our policies trained in simulation to a real world platform such as a robot learning to plan in partially observable environments.
Additionally, in our work we use simple perfect sensors and do not take into account sensor effects such as occlusion and noise, which could adversely affect the performance of the agent.
This need for perfect labeling is currently a limitation of our work and as such cannot be applied directly to a scenario where a sensor cannot provide direct information about nearby states such as a RGB camera.
We intend to explore this problem space in the future, where one might have to learn sensor models in addition to learning how to plan. | Memory Augmented Network to plan in partially observable environments. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:726 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Contextualized word representations such as ELMo and BERT have become the de facto starting point for incorporating pretrained representations for downstream NLP tasks.
In these settings, contextual representations have largely made obsolete their static embedding predecessors such as Word2Vec and GloVe.
However, static embeddings do have their advantages in that they are straightforward to understand and faster to use.
Additionally, embedding analysis methods for static embeddings are far more diverse and mature than those available for their dynamic counterparts.
In this work, we introduce simple methods for generating static lookup table embeddings from existing pretrained contextual representations and demonstrate they outperform Word2Vec and GloVe embeddings on a variety of word similarity and word relatedness tasks.
In doing so, our results also reveal insights that may be useful for subsequent downstream tasks using our embeddings or the original contextual models.
Further, we demonstrate the increased potential for analysis by applying existing approaches for estimating social bias in word embeddings.
Our analysis constitutes the most comprehensive study of social bias in contextual word representations (via the proxy of our distilled embeddings) and reveals a number of inconsistencies in current techniques for quantifying social bias in word embeddings.
We publicly release our code and distilled word embeddings to support reproducible research and the broader NLP community.
Word embeddings (Bengio et al., 2003; Collobert & Weston, 2008; Collobert et al., 2011) have been a hallmark of modern natural language processing (NLP) for several years.
Pretrained embeddings in particular have seen widespread use and have experienced parallel and complementary innovations alongside neural networks for NLP.
Advances in embedding quality in part have come from integrating additional information such as syntax (Levy & Goldberg, 2014b; Li et al., 2017) , morphology (Cotterell & Schütze, 2015) , subwords (Bojanowski et al., 2017) , subcharacters (Stratos, 2017; Yu et al., 2017) and, most recently, context (Peters et al., 2018; Devlin et al., 2019) .
As a consequence of their representational potential, pretrained word representations have seen widespread adoption across almost every task in NLP and reflect one of the greatest successes of both representation learning and transfer learning for NLP (Ruder, 2019b) .
The space of pretrained word representations can be partitioned into static vs. dynamic embeddings methods.
Static methods such as Word2Vec (Mikolov et al., 2013) , GloVe (Pennington et al., 2014), and FastText (Bojanowski et al., 2017) yield representations that are fixed after training and generally associate a single vector with a given word in the style of a lookup table.
While subsequent work addressed the fact that words may have multiple senses and should have different representations for different senses (Pilehvar & Collier, 2016; Lee & Chen, 2017; Pilehvar et al., 2017; Athiwaratkun & Wilson, 2017; Camacho-Collados & Pilehvar, 2018) , fundamentally these methods cannot easily adapt to the inference time context in which they are applied.
This contrasts with contextual, or dynamic, methods such as CoVe (McCann et al., 2017) , ELMo (Peters et al., 2018) , and BERT (Devlin et al., 2019) , which produce vector representations for a word conditional on the inference time context in which it appears.
Given that dynamic representations are arguably more linguistically valid, more expressive (static embeddings are a special-case of dynamic embeddings that are optimally ineffective at being dynamic), and have yielded significant empirical improvements (Wang et al., 2019b; a; Ruder, 2019a) , it would seem that static embeddings are outdated.
Static embeddings, however, have significant advantages over dynamic embeddings with regard to speed, computational resources, and ease of use.
These benefits have important implications for time-sensitive systems, resource-constrained settings or environmental concerns (Strubell et al., 2019), and broader accessibility of NLP technologies.
As a consequence of this dichotomy between static and dynamic representations and their disparate benefits, we propose in this work a simple yet effective mechanism for converting from dynamic representations to static representations.
We begin by demonstrating that our method when applied to pretrained contextual models (BERT, GPT-2, RoBERTa, XLNet, DistilBERT) yields higher quality static embeddings than Word2Vec and GloVe when evaluated intrinsically on four word similarity and word relatedness datasets.
Further, since our procedure does not rely on specific properties of the pretrained contextual model, it can be applied as needed to generate ever-improving static embeddings that will track advances in pretrained contextual word representations.
Our approach offers the hope that high-quality embeddings can be maintained in both settings given their unique advantages and appropriateness in different settings.
At the same time, we show that by distilling static embeddings from their dynamic counterparts, we can then employ the more comprehensive arsenal of embedding analysis tools that have been developed in the static embedding setting to better understand the original contextual embeddings.
As an example, we employ methods for identifying gender, racial, and religious bias (Bolukbasi et al., 2016; Garg et al., 2018; Manzini et al., 2019) to our distilled representations and find that these experiments not only shed light on the properties of our distilled embeddings for downstream use but can also serve as a proxy for understanding existing biases in the original pretrained contextual representations.
Our large-scale and exhaustive evaluation of bias further reveals dramatic inconsistencies in existing measures of social bias and highlights sizeable discrepancies in the bias estimates obtained for distilled embeddings drawn from different pretrained models and individual model layers.
In this work, we propose simple but effective procedures for converting contextual word representations into static word embeddings.
When applied to pretrained models like BERT, we find the resulting embeddings outperform Word2Vec and GloVe substantially under intrinsic evaluation and provide insights into the pretrained model.
We further demonstrate the resulting embeddings are more amenable to (existing) embedding analysis methods and report the extent of various social biases (gender, race, religion) across a number of measures.
Our large-scale analysis furnishes several findings with respect to social bias encoded in popular pretrained contextual representations via the proxy of our embeddings and has implications towards the reliability of existing protocols for quantifying bias in word embeddings. | A procedure for distilling contextual models into static embeddings; we apply our method to 9 popular models and demonstrate clear gains in representation quality wrt Word2Vec/GloVe and improved analysis potential by thoroughly studying social bias. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:727 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The brain performs unsupervised learning and (perhaps) simultaneous supervised learning.
This raises the question as to whether a hybrid of supervised and unsupervised methods will produce better learning.
Inspired by the rich space of Hebbian learning rules, we set out to directly learn the unsupervised learning rule on local information that best augments a supervised signal.
We present the Hebbian-augmented training algorithm (HAT) for combining gradient-based learning with an unsupervised rule on pre-synaptic activity, post-synaptic activities, and current weights.
We test HAT's effect on a simple problem (Fashion-MNIST) and find consistently higher performance than supervised learning alone.
This finding provides empirical evidence that unsupervised learning on synaptic activities provides a strong signal that can be used to augment gradient-based methods.
We further find that the meta-learned update rule is a time-varying function; thus, it is difficult to pinpoint an interpretable Hebbian update rule that aids in training.
We do find that the meta-learner eventually degenerates into a non-Hebbian rule that preserves important weights so as not to disturb the learner's convergence.
The HAT algorithm demonstrates that local, unsupervised signals can provide performance-improving weight updates.
Neural nets under HAT converge to better asymptotic losses as long as there is sufficient time (> 0.5 epochs) and a sufficient number of labels (> 20% of the data is labeled).
The latter finding is surprising since the addition of an unsupervised learning algorithm depends on the presence of labels in order to deliver marginal benefits over gradient descent.
The underlying form of the learned rule that makes HAT successful is still a mystery; we find that while the meta-learner may learn a useful update rule during training, the meta-learner does not converge to this useful rule in the long run and instead devolves into a linear function ConvergedRule.
This converged function preserves fully-converged weights by reinforcing incoming weights for neurons with high activations. | Metalearning unsupervised update rules for neural networks improves performance and potentially demonstrates how neurons in the brain learn without access to global labels. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:728 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep convolutional networks often append additive constant ("bias") terms to their convolution operations, enabling a richer repertoire of functional mappings.
Biases are also used to facilitate training, by subtracting mean response over batches of training images (a component of "batch normalization").
Recent state-of-the-art blind denoising methods seem to require these terms for their success.
Here, however, we show that bias terms used in most CNNs (additive constants, including those used for batch normalization) interfere with the interpretability of these networks, do not help performance, and in fact prevent generalization of performance to noise levels not included in the training data.
In particular, bias-free CNNs (BF-CNNs) are locally linear, and hence amenable to direct analysis with linear-algebraic tools.
These analyses provide interpretations of network functionality in terms of projection onto a union of low-dimensional subspaces, connecting the learning-based method to more traditional denoising methodology.
Additionally, BF-CNNs generalize robustly, achieving near-state-of-the-art performance at noise levels well beyond the range over which they have been trained.
Denoising -recovering a signal from measurements corrupted by noise -is a canonical application of statistical estimation that has been studied since the 1950's.
Achieving high-quality denoising results requires (at least implicitly) quantifying and exploiting the differences between signals and noise.
In the case of natural photographic images, the denoising problem is both an important application, as well as a useful test-bed for our understanding of natural images.
The classical solution to the denoising problem is the Wiener filter (13), which assumes a translation-invariant Gaussian signal model.
Under this prior, the Wiener filter is the optimal estimator (in terms of mean squared error).
It operates by mapping the noisy image to the frequency domain, shrinking the amplitude of all components, and mapping back to the signal domain.
In the case of natural images, the high-frequency components are shrunk more aggressively than the lower-frequency components because they tend to contain less energy in natural images.
This is equivalent to convolution with a lowpass filter, implying that each pixel is replaced with a weighted average over a local neighborhood.
In the 1990's, more powerful solutions were developed based on multi-scale ("wavelet") transforms.
These transforms map natural images to a domain where they have sparser representations.
This makes it possible to perform denoising by applying nonlinear thresholding operations in order to reduce or discard components that are small relative to the noise level (4; 12; 1).
From a linear-algebraic perspective, these algorithms operate by projecting the noisy input onto a lower-dimensional subspace that contains plausible signal content.
The projection eliminates the orthogonal complement of the subspace, which mostly contains noise.
This general methodology laid the foundations for the state-of-the-art models in the 2000's (e.g. (3)), some of which added a data-driven perspective, learning sparsifying transforms (5), or more general nonlinear shrinkage functions directly from natural images (6; 10).
In the past decade, purely data-driven models based on convolutional neural networks (8) have come to dominate all previous methods in terms of performance.
These models consist of cascades of convolutional filters, and rectifying nonlinearities, which are capable of representing a diverse and powerful set of functions.
Training such architectures to minimize mean square error over large databases of noisy natural-image patches achieves current state-of-the-art results (14) (see also (2) for a related approach).
Neural networks have achieved particularly impressive results on the blind denoising problem, in which the noise amplitude is unknown (14; 15; 9) .
Despite their success, we lack intuition about the denoising mechanisms these solutions implement.
Network architecture and functional units are often borrowed from the image-recognition literature, and it is unclear which of these aspects contribute positively, or limit, the denoising performance.
Many authors claim critical importance of specific aspects of architecture (e.g., skip connections, batch normalization, recurrence), but the benefits of these attributes are difficult to isolate and evaluate in the context of the many other elements of the system.
In this work, we show that bias terms used in most CNNs (additive constants, including those used for batch normalization) interfere with the interpretability of these networks, do not help performance, and in fact prevent generalization of performance to noise levels not included in the training data.
In particular, bias-free CNNs (BF-CNNs) are locally linear, and hence amenable to direct analysis with linear-algebraic tools.
And BF-CNNs generalize robustly, achieving near-state-of-the-art performance at noise levels well beyond the range over which they have been trained. | We show that removing constant terms from CNN architectures provides interpretability of the denoising method via linear-algebra techniques and also boosts generalization performance across noise levels. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:729 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Graph classification is currently dominated by graph kernels, which, while powerful, suffer some significant limitations.
Convolutional Neural Networks (CNNs) offer a very appealing alternative.
However, processing graphs with CNNs is not trivial.
To address this challenge, many sophisticated extensions of CNNs have recently been proposed.
In this paper, we reverse the problem: rather than proposing yet another graph CNN model, we introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs.
Despite its simplicity, our method proves very competitive to state-of-the-art graph kernels and graph CNNs, and outperforms them by a wide margin on some datasets.
It is also preferable to graph kernels in terms of time complexity.
Code and data are publicly available.
Replacing the raw counts by the empirical joint probability density function, either by normalizing the histograms, or with a Kernel Density Estimate, significantly deteriorated performance.
This suggests that keeping the absolute values of the counts is important, which makes sense, because some categories might be associated with larger or smaller graphs, on average.
Therefore, preventing the model from using size information is likely to decrease accuracy.
We also observed that increasing the number of channels to more than 5 does not yield better results (which makes sense, as channels contain less and less information), but that reducing this number improves performance in some cases, probably because it plays a regularization role. The main contribution of our study is a novel method for representing graphs as multi-channel image-like structures from their node embeddings, that allows them to be processed by 2D CNNs.
How the embeddings are computed, and which 2D CNN architecture is used, does not matter.
We hold this flexibility to be a major strength.
First, the embedding-agnostic nature of our method means that it can be seamlessly extended to directed, weighted, or labeled graphs with continuous or categorical node/edge attributes, simply by using an embedding algorithm that accepts such graphs, e.g., BID21 .
The independence of our approach with respect to the image classification model used is another advantage.
Here, we employed a vanilla 2D CNN architecture as it was offering an excellent trade-off between accuracy and simplicity, but more recent models, such as the one of BID15 , may yield even better results.
Above all, performance should improve as graph node embedding algorithms and CNN architectures for images improve in the future. Even though results are very good out-of-the-box in most cases, finding an embedding algorithm that works well, or the right combination of parameters for a given dataset, can require some effort.
For instance, on COLLAB, we hypothesize that our results are inferior to those observed on the other datasets because optimizing p and q for COLLAB may require more than a coarse grid search, or because node2vec may not be well-suited to very dense graphs such as the ones found in COLLAB.
The main contribution of this paper is to show that CNN architectures designed for images can be used for graph processing in a completely off-the-shelf manner, simply by representing graphs as stacks of two-dimensional histograms of their node embeddings.
Despite the simplicity of our approach, results indicate that it is very competitive to state-of-the-art graph kernels and graph CNN models, sometimes outperforming them by a wide margin.
Furthermore, these good results were obtained with limited parameter tuning and by using a basic 2D CNN model.
From a time complexity perspective, our approach is preferable to graph kernels too, allowing to process larger datasets featuring bigger graphs. | We introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:73 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The selection of initial parameter values for gradient-based optimization of deep neural networks is one of the most impactful hyperparameter choices in deep learning systems, affecting both convergence times and model performance.
Yet despite significant empirical and theoretical analysis, relatively little has been proved about the concrete effects of different initialization schemes.
In this work, we analyze the effect of initialization in deep linear networks, and provide for the first time a rigorous proof that drawing the initial weights from the orthogonal group speeds up convergence relative to the standard Gaussian initialization with iid weights.
We show that for deep networks, the width needed for efficient convergence to a global minimum with orthogonal initializations is independent of the depth, whereas the width needed for efficient convergence with Gaussian initializations scales linearly in the depth.
Our results demonstrate how the benefits of a good initialization can persist throughout learning, suggesting an explanation for the recent empirical successes found by initializing very deep non-linear networks according to the principle of dynamical isometry.
Through their myriad successful applications across a wide range of disciplines, it is now well established that deep neural networks possess an unprecedented ability to model complex real-world datasets, and in many cases they can do so with minimal overfitting.
Indeed, the list of practical achievements of deep learning has grown at an astonishing rate, and includes models capable of human-level performance in tasks such as image recognition (Krizhevsky et al., 2012) , speech recognition , and machine translation (Wu et al., 2016 ).
Yet to each of these deep learning triumphs corresponds a large engineering effort to produce such a high-performing model.
Part of the practical difficulty in designing good models stems from a proliferation of hyperparameters and a poor understanding of the general guidelines for their selection.
Given a candidate network architecture, some of the most impactful hyperparameters are those governing the choice of the model's initial weights.
Although considerable study has been devoted to the selection of initial weights, relatively little has been proved about how these choices affect important quantities such as rate of convergence of gradient descent.
In this work, we examine the effect of initialization on the rate of convergence of gradient descent in deep linear networks.
We provide for the first time a rigorous proof that drawing the initial weights from the orthogonal group speeds up convergence relative to the standard Gaussian initialization with iid weights.
In particular, we show that for deep networks, the width needed for efficient convergence for orthogonal initializations is independent of the depth, whereas the width needed for efficient convergence of Gaussian networks scales linearly in the depth.
Orthogonal weight initializations have been the subject of a significant amount of prior theoretical and empirical investigation.
For example, in a line of work focusing on dynamical isometry, it was found that orthogonal weights can speed up convergence for deep linear networks (Saxe et al., 2014; Advani & Saxe, 2017) and for deep non-linear networks (Xiao et al., 2018; Gilboa et al., 2019; Chen et al., 2018; Pennington et al., 2017; Tarnowski et al., 2019; Ling & Qiu, 2019) when they operate in the linear regime.
In the context of recurrent neural networks, orthogonality can help improve the system's stability.
A main limitation of prior work is that it has focused almost exclusively on models' properties at initialization.
In contrast, our analysis focuses on the benefit of orthogonal initialization on the entire training process, thereby establishing a provable benefit for optimization.
The paper is organized as follows.
After reviewing related work in Section 2 and establishing some preliminaries in Section 3, we present our main positive result on efficient convergence from orthogonal initialization in Section 4.
In Section 5, we show that Gaussian initialization leads to exponentially long convergence time if the width is too small compared with the depth.
In Section 6, we perform experiments to support our theoretical results.
In this work, we studied the effect of the initialization parameter values of deep linear neural networks on the convergence time of gradient descent.
We found that when the initial weights are iid Gaussian, the convergence time grows exponentially in the depth unless the width is at least as large as the depth.
(Footnote 4: We choose X ∈ R^{1024×16} and W* ∈ R^{10×1024}, and set Y = W*X. Entries in X and W* are drawn i.i.d. from N(0, 1).)
In contrast, when the initial weight matrices are drawn from the orthogonal group, the width needed to guarantee efficient convergence is in fact independent of the depth.
These results establish for the first time a concrete proof that orthogonal initialization is superior to Gaussian initialization in terms of convergence time. | We provide for the first time a rigorous proof that orthogonal initialization speeds up convergence relative to Gaussian initialization, for deep linear networks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:730 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Survival function estimation is used in many disciplines, but it is most common in medical analytics in the form of the Kaplan-Meier estimator.
Sensitive data (patient records) is used in the estimation without any explicit control on the information leakage, which is a significant privacy concern.
We propose a first differentially private estimator of the survival function and show that it can be easily extended to provide differentially private confidence intervals and test statistics without spending any extra privacy budget.
We further provide extensions for differentially private estimation of the competing risk cumulative incidence function.
Using nine real-life clinical datasets, we provide empirical evidence that our proposed method provides good utility while simultaneously providing strong privacy guarantees.
A patient progresses from HIV infection to AIDS after 4.5 years.
A study using the patient's data publishes the survival function estimates (a standard practice in clinical research).
An adversary, with only access to the published estimates (even in the form of survival function plots), can reconstruct user-level data (Wei & Royston, 2018; Fredrikson et al., 2014) .
Effectively leading to the disclosure of sensitive information.
This is just one scenario.
The survival function is used for modeling any time to an event, taking into account that some subjects will not experience the event at the time of data collection.
The survival function is used in many domains, some examples are the duration of unemployment (in economics); time until the failure of a machine part (in engineering); time to disease recurrence, time to infection, time to death (in healthcare); etc.
Our personal healthcare information is the most sensitive private attribute, protected by law, violations of which carry severe penalties.
And as the initial example suggests, of all application areas, information leakage in the healthcare domain is the most serious issue and is our focus in this study.
For estimation of the survival function, we focus on the Kaplan-Meier's (KM) (Kaplan & Meier, 1958) non-parametric method.
KM's method is ubiquitous in clinical research.
A quick search of the term on PubMed 1 yields 109,421 results.
It is not an overstatement to say that almost every clinical study uses KM's method to report summary statistics on their cohort's survival.
Statistical agencies around the world use this method to report on the survival of the general population or specific disease-related survival estimates.
To the best of our knowledge, there does not exist any model that can provide formal privacy guarantees for estimation of the survival function using the KM method.
The only related work is by Nguyên & Hui (2017) , which uses the output and objective perturbation for regression modeling of discrete time to event data.
The approach is limited to "multivariate" regression models and cannot be directly used to estimate survival function in a differentially private fashion.
One can argue that generative models such as the differentially private generative adversarial networks (Xie et al., 2018; Zhang et al., 2018; Triastcyn & Faltings, 2018; Beaulieu-Jones et al., 2017; Yoon et al., 2019) can be trained to generate differentially private synthetic data.
These synthetic data can then be used to estimate the survival function. But GANs do not generalize well to the datasets typically encountered for our use-case: very small sample sizes (sometimes under a hundred), highly constrained dimensionality (d ∈ [2, 3]), a mixture of categorical and continuous variables, no data pre-processing allowed, etc.
We propose the first differentially private method for estimating the survival function based on the KM method.
Grounded by the core principles of differential privacy, our method guarantees the differentially private estimation of the survival function.
Also, we show that our method easily extends to provide differentially private confidence intervals and differentially private test statistics (for comparison of survival function between multiple groups) without any extra privacy cost.
We further extend our method for differentially private estimation of the competing risk cumulative incidence function (another popular estimate in clinical research).
Using nine real-life clinical datasets, we provide empirical evidence that our proposed method provides good utility while simultaneously providing strong privacy guarantees.
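To make the object of study concrete, the sketch below computes a Kaplan-Meier curve with Laplace noise added to the per-time death counts; the noise placement, the naive budget split, and the function name dp_kaplan_meier are illustrative assumptions and do not describe the mechanism proposed in the paper.

import numpy as np

def dp_kaplan_meier(times, events, epsilon=1.0, seed=0):
    # times: follow-up times; events: 1 = event observed, 0 = censored.
    rng = np.random.default_rng(seed)
    times, events = np.asarray(times, float), np.asarray(events, int)
    event_times = np.unique(times[events == 1])
    scale = len(event_times) / epsilon          # naive composition across event times
    s, curve = 1.0, []
    for t in event_times:
        n_at_risk = np.sum(times >= t)
        deaths = np.sum((times == t) & (events == 1))
        noisy = np.clip(deaths + rng.laplace(0.0, scale), 0.0, n_at_risk)
        s *= 1.0 - noisy / n_at_risk            # Kaplan-Meier product-limit update
        curve.append((t, s))
    return curve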
Lastly, we release our method as an R (R Core Team, 2018) package for rapid accessibility and adoption.
We have presented the first method for differentially private estimation of the survival function and we have shown that our proposed method can be easily extended to differentially private estimation of "other" often used statistics such as the associated confidence intervals, test statistics, and the competing risk cumulative incidence.
With extensive empirical evaluation on nine real-life datasets, we have shown that our proposed method provides good privacy-utility trade-off. | A first differentially private estimate of the survival function | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:731 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Neural networks can learn to extract statistical properties from data, but they seldom make use of structured information from the label space to help representation learning.
Although some label structure can implicitly be obtained when training on huge amounts of data, in a few-shot learning context where little data is available, making explicit use of the label structure can inform the model to reshape the representation space to reflect a global sense of class dependencies.
We propose a meta-learning framework, Conditional class-Aware Meta-Learning (CAML), that conditionally transforms feature representations based on a metric space that is trained to capture inter-class dependencies.
This enables a conditional modulation of the feature representations of the base-learner to impose regularities informed by the label space.
Experiments show that the conditional transformation in CAML leads to more disentangled representations and achieves competitive results on the miniImageNet benchmark.
In machine learning, the objective of classification is to train a model to categorize inputs into various classes.
We usually assume a categorical distribution over the label space, and thus effectively ignore dependencies among them.
However, class structure does exist in real world and is also present in most datasets.
Although class structure can be implicitly obtained as a by-product during learning, it is not commonly exploited in an explicit manner to develop better learning systems.
The use of label structure might not be of prime importance when having access to huge amounts of data, such as the full ImageNet dataset. However, in the case of few-shot learning, where little data is available, meta-information such as dependencies in the label space can be crucial. In recent years, few-shot learning (learning from few examples across many tasks) has received considerable attention BID23 BID28 BID6 BID30 .
In particular, the concept of meta-learning has been shown to provide effective tools for few-shot learning tasks.
In contrast to common transfer learning methods that aim to fine-tune a pre-trained model, meta-learning systems are trained by being exposed to a large number of tasks and evaluated in their ability to learn new tasks effectively.
In meta-training, learning happens at two levels: a meta-learner that learns across many tasks, and a base-learner that optimizes for each task.
Model-Agnostic Meta-Learning (MAML) is a gradient-based meta-learning algorithm that provides a mechanism for rapid adaptation by optimizing only for the initial parameters of the base-learner BID6 . Our
motivation stems from a core challenge in gradient-based meta-learning, wherein the quality of gradient information is key to fast generalization: it is known that gradient-based optimization fails to converge adequately when trained from only a few examples BID23 , hampering the effectiveness of gradient-based meta-learning techniques. We
hypothesize that under such circumstances, introducing a metric space trained to encode regularities of the label structure can impose global class dependencies on the model. This
class structure can then provide a high-level view of the input examples, in turn leading to learning more disentangled representations. We propose a meta-learning framework taking advantage of this class structure information, which is available in a number of applications. The
Conditional class-Aware Meta-Learning (CAML) model is tasked with producing activations in a manner similar to a standard neural network, but with the additional flexibility to shift and scale those activations conditioned on some auxiliary meta-information. While
there are no restrictions on the nature of the conditioning factor, in this work we model class dependencies by means of a metric space. We aim
to learn a function mapping inputs to a metric space where semantic distances between instances follow a Euclidean geometry: classes that are semantically close lie in close proximity in an ℓp sense. The goal of the conditional class-aware transformation is to make explicit use of the label structure to inform the model to reshape the representation landscape in a manner that incorporates a global sense of class structure. The contributions of this work are threefold: (i) We provide
a meta-learning framework that makes use of structured class information in the form of a metric space to modulate representations in few-shot learning tasks; (ii) We introduce
class-aware grouping to improve the statistical strength of few-shot learning tasks; (iii) We show experimentally
that our proposed algorithm learns more disentangled representation and achieves competitive results on the miniImageNet benchmark.
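The shift-and-scale modulation described above can be illustrated with a small numpy sketch; the parameter names (W_gamma, W_beta, ...), the residual 1 + gamma form, and the placement of the modulation are assumptions for illustration and differ from the exact CAML parameterization.

import numpy as np

def conditional_modulation(feature_maps, class_embedding, W_gamma, b_gamma, W_beta, b_beta):
    # feature_maps: (channels, height, width); class_embedding: (embed_dim,)
    # W_gamma, W_beta: (embed_dim, channels) projections from the metric-space embedding.
    gamma = class_embedding @ W_gamma + b_gamma          # per-channel scales
    beta = class_embedding @ W_beta + b_beta             # per-channel shifts
    return (1.0 + gamma)[:, None, None] * feature_maps + beta[:, None, None]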
In this work, we propose Conditional class-Aware Meta-Learning (CAML) that incorporates class information by means of an embedding space to conditionally modulate representations of the base-learner.
By conditionally transforming the intermediate representations of the base-learner, our goal is to reshape the representation with a global sense of class structure.
Experiments reveal that the proposed conditional transformation can modulate the convolutional feature maps towards a more disentangled representation.
We also introduce class-aware grouping to address a lack of statistical strength in few-shot learning.
The proposed approach obtains competitive results with the current state-of-the-art performance on 5-way 1-shot and 5-shot miniImageNet benchmark.
The results in TAB1 suggest that, while 1-shot learning is sensitive to multitask learning and class-aware grouping, 5-shot learning is less sensitive to those techniques.
This is owing to a lack of sufficient training examples in 1-shot learning tasks, which requires more explicit guidance in the training procedure.
We further note that, in 1-shot learning, using class-aware grouping alone can improve CBN's performance by 3%.
This means exploiting metric-based channel mean and variance can provide valuable information for gradient-based meta-learning. | CAML is an instance of MAML with conditional class dependencies. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:732 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We study the problem of multiset prediction.
The goal of multiset prediction is to train a predictor that maps an input to a multiset consisting of multiple items.
Unlike existing problems in supervised learning, such as classification, ranking and sequence generation, there is no known order among items in a target multiset, and each item in the multiset may appear more than once, making this problem extremely challenging.
In this paper, we propose a novel multiset loss function by viewing this problem from the perspective of sequential decision making.
The proposed multiset loss function is empirically evaluated on two families of datasets, one synthetic and the other real, with varying levels of difficulty, against various baseline loss functions including reinforcement learning, sequence, and aggregated distribution matching loss functions.
The experiments reveal the effectiveness of the proposed loss function over the others.
A relatively less studied problem in machine learning, particularly supervised learning, is the problem of multiset prediction.
The goal of this problem is to learn a mapping from an arbitrary input to a multiset 1 of items.
This problem appears in a variety of contexts.
For instance, in the context of high-energy physics, one of the important problems in a particle physics data analysis is to count how many physics objects, such as electrons, muons, photons, taus, and jets, are in a collision event BID4 .
In computer vision, automatic alt-text, such as the one available on Facebook, 2 is a representative example of multiset prediction BID16 BID9 .
3 In multiset prediction, a learner is presented with an arbitrary input and the associated multiset of items.
It is assumed that there is no predefined order among the items, and that there are no further annotations containing information about the relationship between the input and each of the items in the multiset.
These properties make the problem of multiset prediction unique from other wellstudied problems.
It is different from sequence prediction, because there is no known order among the items.
It is not a ranking problem, since each item may appear more than once.
It cannot be transformed into classification, because the number of possible multisets grows exponentially with respect to the maximum multiset size. In this paper, we view multiset prediction as a sequential decision-making process.
Under this view, the problem reduces to finding a policy that sequentially predicts one item at a time, while the outcome is still evaluated based on the aggregate multiset of the predicted items.
We first propose an oracle policy that assigns non-zero probabilities only to prediction sequences that result exactly in the target, ground-truth multiset given an input.
This oracle is optimal in the sense that its prediction never decreases the precision and recall regardless of previous predictions.
That is, its decision is optimal in any state (i.e., prediction prefix).
We then propose a novel multiset loss which minimizes the KL divergence between the oracle policy and a parametrized policy at every point in a decision trajectory of the parametrized policy.
[Footnote 1] A set that allows multiple instances, e.g. {x, y, x}. See Appendix A for a detailed definition.
[Footnote 2] https://newsroom.fb.com/news/2016/04/using-artificial-intelligenceto-help-blind-people-see-facebook/
[Footnote 3] We note, however, that such a multiset prediction problem in computer vision can also be solved as segmentation, if fine-grained annotation is available. See, e.g., BID6.
We
compare the proposed multiset loss against an extensive set of baselines. They
include a sequential loss with an arbitrary rank function, sequential loss with an input-dependent rank function, and an aggregated distribution matching loss and its one-step variant. We also
test policy gradient, as was done by BID16 recently for multiset prediction. Our evaluation
is conducted on two sets of datasets with varying difficulties and properties. According to the
experiments, we find that the proposed multiset loss outperforms all the other loss functions. The paper is structured as follows. We first define
multiset prediction at the beginning of Section 2, and compare it to existing problems in supervised learning in 2.1. Then we propose
the multiset loss in Section 2.2, followed by alternative baseline losses in Section 3. The multiset loss
and baselines are then empirically evaluated in Section 4.
We have extensively investigated the problem of multiset prediction in this paper.
We rigorously defined the problem, and proposed to approach it from the perspective of sequential decision making.
In doing so, an oracle policy was defined and shown to be optimal, and a new loss function, called multiset loss, was introduced as a means to train a parametrized policy for multiset prediction.
The experiments on two families of datasets, MNIST Multi variants and MS COCO variants, have revealed the effectiveness of the proposed loss function over other loss functions including reinforcement learning, sequence, and aggregated distribution matching loss functions.
The success of the proposed multiset loss brings in new opportunities for applying machine learning to various new domains, including high-energy physics.

Precision. Precision gives the ratio of correctly predicted elements to the number of predicted elements. Specifically, let Ŷ = (C, µ_Ŷ) and Y = (C, µ_Y) be multisets over a ground set C. Then Prec(Ŷ, Y) = ( Σ_{y ∈ Ŷ} I[y ∈ Y] ) / |Ŷ|, where the summation and membership are evaluated by enumerating the multisets. For example, the multisets Ŷ = {a, a, b} and Y = {a, b} are enumerated as Ŷ = {a^(1), a^(2), b^(1)} and Y = {a^(1), b^(1)}. Formally, precision can equivalently be written with the summation over the ground set C, as Prec(Ŷ, Y) = ( Σ_{y ∈ C} min(µ_Ŷ(y), µ_Y(y)) ) / |Ŷ|. Intuitively, precision decreases by 1/|Ŷ| each time an extra class label is predicted.

Recall. Recall gives the ratio of correctly predicted elements to the number of ground-truth elements, and is defined analogously to precision: Rec(Ŷ, Y) = ( Σ_{y ∈ Y} I[y ∈ Ŷ] ) / |Y|. For a prediction prefix ŷ_<t, the per-step recall is Rec(ŷ_<t, Y) = ( Σ_{y ∈ ŷ_<t} I[y ∈ Y] ) / |Y|.
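For concreteness, the two quantities above can be computed with Python's Counter, which implements exactly the enumeration-based multiset intersection; this is an illustration, not code from the paper.

from collections import Counter

def multiset_precision(pred, target):
    # Correctly predicted items (with multiplicity) over the number of predictions.
    overlap = sum((Counter(pred) & Counter(target)).values())
    return overlap / max(len(pred), 1)

def multiset_recall(pred, target):
    # Correctly predicted items (with multiplicity) over the ground-truth size.
    overlap = sum((Counter(pred) & Counter(target)).values())
    return overlap / max(len(target), 1)

# Example from the text: pred = ['a', 'a', 'b'], target = ['a', 'b']
# gives precision 2/3 and recall 1.0.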
The output of the convolutional LSTM layers is then turned into a conditional distribution over the next item by an affine transformation followed by a softmax function. When the one-step variant of aggregated distribution matching is used, the convolutional LSTM layers are skipped and the context c is computed directly from the convolutional features. See Fig. 2 for a graphical illustration of the entire network and TAB4 for the details of the network for each dataset. [TAB4 (architecture details): per-dataset stacks of 3×3 and 5×5 convolutions with 2×2 max-pooling and 10-32 feature maps.] | We study the problem of multiset prediction and propose a novel multiset loss function, providing analysis and empirical evidence that demonstrates its effectiveness. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:733 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Understanding theoretical properties of deep and locally connected nonlinear network, such as deep convolutional neural network (DCNN), is still a hard problem despite its empirical success.
In this paper, we propose a novel theoretical framework for such networks with ReLU nonlinearity.
The framework bridges data distribution with gradient descent rules, favors disentangled representations and is compatible with common regularization techniques such as Batch Norm, after a novel discovery of its projection nature.
The framework is built upon teacher-student setting, by projecting the student's forward/backward pass onto the teacher's computational graph.
We do not impose unrealistic assumptions (e.g., Gaussian inputs, independence of activation, etc).
Our framework could help facilitate theoretical analysis of many practical issues, e.g. disentangled representations in deep networks.
Deep Convolutional Neural Network (DCNN) has achieved a huge empirical success in multiple disciplines (e.g., computer vision BID0 BID10 He et al., 2016) , Computer Go BID8 BID12 BID13 , and so on).
On the other hand, its theoretical properties remain an open problem and an active research topic. Learning deep models is often treated as non-convex optimization in a high-dimensional space. From this perspective, many properties of deep models have been analyzed: landscapes of loss functions (Choromanska et al., 2015b; BID1; BID3), saddle points (Du et al., 2017; Dauphin et al., 2014), relationships between local minima and the global minimum (Kawaguchi, 2016; Hardt & Ma, 2017; BID5), trajectories of gradient descent (Goodfellow et al., 2014), paths between local minima BID15, etc. However, such modeling misses two components: neither specific network structures nor the input data distribution is considered.
Both are critical in practice.
Empirically, deep models work particularly well for certain forms of data (e.g., images); theoretically, for certain data distributions, popular methods like gradient descent are shown to fail to recover network parameters (Brutzkus & Globerson, 2017). Along
this direction, previous theoretical works assume specific data distributions like spherical Gaussian and focus on shallow nonlinear networks BID12 Brutzkus & Globerson, 2017; Du et al., 2018) . These
assumptions yield nice gradient forms and enable analysis of many properties such as global convergence. However
, it is also nontrivial to extend such approaches to deep nonlinear neural networks that yield strong empirical performance. In this paper, we propose a novel theoretical framework for deep and locally connected ReLU networks that is applicable to general data distributions. Specifically
, we embrace a teacher-student setting. The teacher
computes classification labels via a computational graph that has local structures (e.g., CNN): intermediate variables in the graph, (called summarization variables), are computed from a subset of the input dimensions. The student
network, with similar local structures, updates the weights to fit teacher's labels with gradient descent, without knowing the summarization variables.One ultimate goal is to show that after training, each node in the student network is highly selective with respect to the summarization variable in the teacher. Achieving this
goal will shed light to how the training of practically effective methods like CNN works, which remains a grand challenge. As a first step
, we reformulate the forward/backward pass in gradient descent by marginalizing out the input data conditioned on the graph variables of the teacher at each layer.

[Figure 1: (a) Receptive fields form a hierarchy. The entire input is denoted as x (or x_ω); a local region of an input x is denoted as x_α. (b) For each region α, we have a latent multinomial discrete variable z_α which is computed from its immediate children {z_β}_{β ∈ ch(α)}. Given the input x, z_α = z_α(x_α) is a function of the image content x_α at α. Finally, z_ω at the top level is the class label. (c) A locally connected neural network is trained with pairs (x, z_ω(x)), where z_ω(x) is the class label generated from the teacher. (d) For each node j, f_j(x) is the activation while g_j(x) is the back-propagated gradient, both as functions of the input x (and of the weights at different layers).]

The reformulation has nice properties: (1) it relates data distribution with gradient update rules, (2) it is compatible with existing state-of-the-art regularization techniques such as Batch Normalization (Ioffe & Szegedy, 2015), and (3) it favors disentangled representations when data distributions have factorizable structures. To our best knowledge, our work
is the first theoretical framework to achieve these properties for deep and locally connected nonlinear networks. Previous works have also proposed frameworks to explain deep networks, e.g., renormalization group for restricted Boltzmann machines BID2, spin-glass models (Amit et al., 1985; Choromanska et al., 2015a), transient chaos models BID4, differential equations BID11 BID6, information bottleneck (Achille & Soatto, 2017; BID14; BID7), etc. In comparison, our framework (1) imposes mild assumptions rather than unrealistic ones (e.g., independence of activations), (2) explicitly deals with back-propagation, which is the dominant approach used for training in practice, and relates it with the data distribution, and (3) considers spatial locality of neurons, an important component in practical deep models.
In this paper, we propose a novel theoretical framework for deep (multi-layered) nonlinear network with ReLU activation and local receptive fields.
The framework utilizes the specific structure of neural networks, and formulates input data distributions explicitly.
Compared to modeling deep models as non-convex problems, our framework reveals more structures of the network; compared to recent works that also take data distribution into considerations, our theoretical framework can model deep networks without imposing idealistic analytic distribution of data like Gaussian inputs or independent activations.
Besides, we also analyze regularization techniques like Batch Norm, depict the underlying geometrical intuition, and show that BN is compatible with our framework. Using this novel framework, we have made an initial attempt to analyze many important and practical issues in deep models, and provide a novel perspective on overfitting, generalization, disentangled representation, etc.
We emphasize that in this work, we barely touch the surface of these core issues in deep learning.
As a future work, we aim to explore them in a deeper and more thorough manner, by using the powerful theoretical framework proposed in this paper. | This paper presents a theoretical framework that models data distribution explicitly for deep and locally connected ReLU network | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:734 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Multi-agent cooperation is an important feature of the natural world.
Many tasks involve individual incentives that are misaligned with the common good, yet a wide range of organisms from bacteria to insects and humans are able to overcome their differences and collaborate.
Therefore, the emergence of cooperative behavior amongst self-interested individuals is an important question for the fields of multi-agent reinforcement learning (MARL) and evolutionary theory.
Here, we study a particular class of multi-agent problems called intertemporal social dilemmas (ISDs), where the conflict between the individual and the group is particularly sharp.
By combining MARL with appropriately structured natural selection, we demonstrate that individual inductive biases for cooperation can be learned in a model-free way.
To achieve this, we introduce an innovative modular architecture for deep reinforcement learning agents which supports multi-level selection.
We present results in two challenging environments, and interpret these in the context of cultural and ecological evolution.
Nature shows a substantial amount of cooperation at all scales, from microscopic interactions of genomes and bacteria to species-wide societies of insects and humans BID36 .
This is in spite of natural selection pushing for short-term individual selfish interests (Darwin, 1859) .
In its purest form, altruism can be favored by selection when cooperating individuals preferentially interact with other cooperators, thus realising the rewards of cooperation without being exploited by defectors BID19 BID31 BID9 BID48 BID12 ).
However, many other possibilities exist, including kin selection, reciprocity and group selection BID40 Úbeda & Duéñez-Guzmán, 2011; BID52 BID41 BID56 BID50 .Lately
the emergence of cooperation among self-interested agents has become an important topic in multi-agent deep reinforcement learning (MARL). and BID25
formalize the problem domain as an intertemporal social dilemma (ISD), which generalizes matrix game social dilemmas to Markov settings. Social dilemmas
are characterized by a trade-off between collective welfare and individual utility. As predicted by
evolutionary theory, self-interested reinforcement-learning agents are typically unable to achieve the collectively optimal outcome, converging instead to defecting strategies BID45 . The goal is to
find multi-agent training regimes in which individuals resolve social dilemmas, i.e., cooperation emerges.Previous work has found several solutions, belonging to three broad categories: 1) opponent modelling
BID13 BID31 , 2) long-term planning using perfect knowledge of the game's rules BID33 BID46 ) and 3) a specific intrinsic
motivation function drawn from behavioral economics BID25 . These hand-crafted approaches
run at odds with more recent end-to-end model-free learning algorithms, which have been shown to have a greater ability to generalize (e.g. BID10 ). We propose that evolution can
be applied to remove the hand-crafting of intrinsic motivation, similar to other applications of evolution in deep learning.Evolution has been used to optimize single-agent hyperparameters BID26 , implement black-box optimization BID55 , and to evolve neuroarchitectures BID38 BID51 , regularization BID3 , loss functions BID27 BID24 , behavioral diversity BID6 , and entire reward functions BID49 . These principles tend to be driven
by single-agent search and optimization or competitive multi-agent tasks. Therefore there is no guarantee of
success when applying them in the ISD setting. More closely related to our domain
are evolutionary simulations of predator-prey dynamics BID57 , which used enforced subpopulations to evolve populations of neurons which are sampled to form the hidden layer of a neural network.
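As a reminder of the payoff structure underlying a matrix-game social dilemma, the sketch below checks the defining tension on an illustrative prisoner's-dilemma payoff table (the numbers are arbitrary); ISDs extend this tension over time and partial observability.

payoff = {  # (row player, column player); C = cooperate, D = defect
    ("C", "C"): (3, 3), ("C", "D"): (0, 4),
    ("D", "C"): (4, 0), ("D", "D"): (1, 1),
}

def best_response(opponent):
    return max(("C", "D"), key=lambda a: payoff[(a, opponent)][0])

assert best_response("C") == "D" and best_response("D") == "D"  # defection dominates...
assert sum(payoff[("C", "C")]) > sum(payoff[("D", "D")])        # ...yet mutual cooperation pays more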
Real environments don't provide scalar reward signals to learn from.
Instead, organisms have developed various internal drives based on either primary or secondary goals BID1 .
Here we examined intrinsic rewards based on features derived from other agents in the environment.
In accord with evolutionary theory BID0 BID40 , we found that naïvely implementing natural selection via genetic algorithms did not lead to the emergence of cooperation.
Furthermore, assortative matchmaking was sufficient to generate cooperative behavior in cases where honest signals were available.
Finally, we proposed a new multi-level evolutionary paradigm based on shared reward networks that achieves cooperation in more general situations.Why does evolving intrinsic social preferences promote cooperation?
Firstly, evolution ameliorates the intertemporal choice problem by distilling the long timescale of collective fitness into the short timescale of individual reinforcement learning, thereby improving credit assignment between selfish acts and their temporally displaced negative group outcomes BID25 .
Secondly, it mitigates the social dilemma itself by allowing evolution to expose social signals that correlate with, for example, an agent's current level of selfishness.
Such information powers a range of mechanisms for achieving mutual cooperation like competitive altruism BID21 , other-regarding preferences BID7 , and inequity aversion BID11 .
In accord, laboratory experiments show that humans cooperate more readily when they can communicate BID43 BID29 .The
shared reward network evolution model was inspired by multi-level selection; yet it does not correspond to the prototypical case of that theory since its lower level units of evolution (the policy networks) are constantly swapping which higher level unit (reward network) they are paired with. Nevertheless
, there are a variety of ways in which we see this form of modularity arise in nature. For example
, free-living microorganisms occasionally form multi-cellular structures to solve a higher order adaptive problem, like slime mold forming a spore-producing stalk for dispersal BID54 , and many prokaryotes can incorporate plasmids (modules) found in their environment or received from other individuals as functional parts of their genome, thereby achieving cooperation in social dilemmas BID17 BID37 . Alternatively
, in humans a reward network may represent a shared "cultural norm", with its fitness based on cultural information accumulated from the groups in which it holds sway. In this way,
the spread of norms can occur independently of the success of individual agents BID2 ).For future work
, we suggest investigating alternative evolutionary mechanisms for the emergence of cooperation, such as kin selection BID16 and reciprocity BID52 . It would be interesting
to see whether these lead to different weights in a reward network, potentially hinting at the evolutionary origins of different social biases. Along these lines, one
might consider studying an emergent version of the assortative matchmaking model along the lines suggested by BID22 , adding further generality and power to our setup. Finally, it would be fascinating
to determine how an evolutionary approach can be combined with multi-agent communication to produce that most paradoxical of cooperative behaviors: cheap talk. | We introduce a biologically-inspired modular evolutionary algorithm in which deep RL agents learn to cooperate in a difficult multi-agent social game, which could help to explain the evolution of altruism. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:735 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In adversarial attacks to machine-learning classifiers, small perturbations are added to input that is correctly classified.
The perturbations yield adversarial examples, which are virtually indistinguishable from the unperturbed input, and yet are misclassified.
In standard neural networks used for deep learning, attackers can craft adversarial examples from most input to cause a misclassification of their choice.
We introduce a new type of network units, called RBFI units, whose non-linear structure makes them inherently resistant to adversarial attacks.
On permutation-invariant MNIST, in absence of adversarial attacks, networks using RBFI units match the performance of networks using sigmoid units, and are slightly below the accuracy of networks with ReLU units.
When subjected to adversarial attacks based on projected gradient descent or fast gradient-sign methods, networks with RBFI units retain accuracies above 75%, while ReLU or Sigmoid see their accuracies reduced to below 1%.
Further, RBFI networks trained on regular input either exceed or closely match the accuracy of sigmoid and ReLU network trained with the help of adversarial examples.
The non-linear structure of RBFI units makes them difficult to train using standard gradient descent.
We show that networks of RBFI units can be efficiently trained to high accuracies using pseudogradients, computed using functions especially crafted to facilitate learning instead of their true derivatives.
Machine learning via deep neural networks has been remarkably successful in a wide range of applications, from speech recognition to image classification and language processing.
While very successful, deep neural networks are affected by adversarial examples: small, specially crafted modifications of correctly classified input that are misclassified BID20 ).
The trouble with adversarial examples is twofold.
The modifications to regular input are so small as to be difficult or impossible to detect for a human: this has been shown both in the case of images BID20 ; BID14 ) and sounds BID9 ; BID5 ).
Further, the adversarial examples are in some measure transferable from one neural network to another BID7 ; BID14 ; BID16 ; BID22 ), so they can be crafted even without precise knowledge of the weights of the target neural network.
At a fundamental level, it is hard to provide guarantees about the behavior of a deep neural network when every correctly classified input is tightly encircled by very similar, yet misclassified, inputs. Thus far, the approach for obtaining neural networks that are more resistant to adversarial attacks has been to feed to the networks, as training data, an appropriate mix of the original training data and adversarial examples BID7 ; BID12 ).
In training neural networks using adversarial examples, if the examples are generated via efficient heuristics such as the fast gradient sign method, the networks learn to associate the specific adversarial examples to the original input from which they were derived, in a phenomenon known as label leaking BID10 ; BID12 ; BID21 ).
This does not result in increased resistance to general adversarial attacks BID12 ; BID4 ).
If the adversarial examples used in training are generated via more general optimization techniques, as in BID12 ), networks with markedly increased resistance to adversarial attacks can be obtained, at the price of a more complex and computationally expensive training regime, and an increase in required network capacity.We pursue here a different approach, proposing the use of neural network types that are, due to their structure, inherently impervious to adversarial attacks, even when trained on standard input only.
In BID7 ), the authors connect the presence of adversarial examples to the (local) linearity of neural networks.
In a purely linear form Σ_{i=1}^{n} x_i w_i, we can perturb each x_i by ε, taking x_i + ε if w_i > 0, and x_i − ε if w_i < 0. This causes an output perturbation of magnitude ε Σ_{i=1}^{n} |w_i|, or ε n w̄ for w̄ the average modulus of the w_i.
When the number of inputs n is large, as is typical of deep neural networks, a small input perturbation can cause a large output change.
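The linear argument can be checked numerically; this snippet only illustrates the calculation above.

import numpy as np

rng = np.random.default_rng(0)
n, eps = 1000, 0.01
w, x = rng.standard_normal(n), rng.standard_normal(n)

delta = eps * np.sign(w)                      # +eps where w_i > 0, -eps where w_i < 0
change = w @ (x + delta) - w @ x
print(change, eps * np.abs(w).sum())          # equal: eps * sum_i |w_i|, roughly eps * n * mean|w|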
Of course, deep neural networks are not globally linear, but the insight of BID7 ) is that they may be sufficiently locally linear to allow adversarial attacks.
Following this insight, we develop networks composed of units that are highly non-linear. The networks on which we settled after much experimentation are a variant of the well-known radial basis functions (RBFs) BID0 ; BID6 BID15 ); we call our variant RBFI units.
RBFI units are similar to classical Gaussian RBFs, except for two differences that are crucial in obtaining both high network accuracy, and high resistance to attacks.
First, rather than being radially symmetrical, RBFIs can scale each input component individually; in particular, they can be highly sensitive to some inputs while ignoring others.
This gives an individual RBFI unit the ability to cover more of the input space than its symmetrical variants.
Further, the distance of an input from the center of the Gaussian is measured not in the Euclidean (ℓ2) norm, but in the infinity norm ℓ∞, which is equal to the maximum of the differences of the individual components. This eliminates all multi-input linearity from the local behavior of an RBFI: at any point, the output depends on one input only; the n in the above discussion is, so to speak, always 1 for RBFIs. The "I" in RBFI stands for the infinity norm. Using deeply nonlinear models is hardly a new idea, but the challenge has been that such models are typically difficult to train.
Indeed, we show that networks with RBFI units cannot be satisfactorily trained using gradient descent.
To get around this, we show that the networks can be trained efficiently, and to high accuracy, using pseudogradients.
A pseudogradient is computed just as an ordinary gradient, except that we artificially pretend that some functions have a derivative that is different from the true derivative, and especially crafted to facilitate training.
In particular, we use pseudoderivatives for the exponential function, and for the maximum operator, that enter the definition of Gaussian RBFI units.
Gaussians have very low derivative away from their center, which makes training difficult; our pseudoderivative artificially widens the region of detectable gradient around the Gaussian center.
The maximum operator appearing in the infinity norm has non-zero derivative only for one of its inputs at a time; we adopt a pseudogradient that propagates back the gradient to all of its inputs, according to their proximity in value to the maximum input.
Tampering with the gradient may seem unorthodox, but methods such as AdaDelta BID23 ), and even gradient descent with momentum, cause training to take a trajectory that does not follow pure gradient descent.
We simply go one step further, devising a scheme that operates at the granularity of the individual unit. We show that with these two changes, RBFIs can be easily trained with standard random (pseudo)gradient descent methods, yielding networks that are both accurate, and resistant to attacks.
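As an illustration only (the exact functional form of the unit and the precise pseudoderivative rules in the paper may differ), the two ideas can be sketched in numpy as follows.

import numpy as np

def rbfi_forward(x, c, u):
    # One RBFI-style unit: Gaussian of the per-component scaled infinity-norm distance.
    # x: input; c: center; u: per-input sensitivities (all vectors of the same length).
    z = np.abs(u * (x - c))
    return np.exp(-np.max(z) ** 2)     # locally, only the largest component matters

def pseudograd_max(z, tau=1.0):
    # Pseudo-derivative for max(z): spread the gradient over all inputs according to
    # their proximity to the maximum (a softmax weighting is one way to do this).
    w = np.exp((z - np.max(z)) / tau)
    return w / w.sum()

def pseudograd_exp(s, floor=0.05):
    # Pseudo-derivative magnitude for exp(-s): keep a minimum slope so the gradient
    # does not vanish far from the Gaussian center.
    return np.maximum(np.exp(-s), floor)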
To conduct our experiments, we have implemented RBFI networks on top of the PyTorch framework BID18 ).
The code will be made available in a final version of the paper.
We consider permutation invariant MNIST, which is a version of MNIST in which the 28 × 28 pixel images are flattened into a one-dimensional vector of 784 values and fed as a feature vector to neural networks BID7 ).
On this test set, we show that for nets of 512,512,512,10 units, RBFI networks match the classification accuracy of networks of sigmoid units ((96.96 ± 0.14)% for RBFI vs. (96.88 ± 0.15)% for sigmoid), and are close to the performance of network with ReLU units ((98.62 ± 0.08)%).
When trained over standard training sets, RBFI networks retain accuracies over 75% for adversarial attacks that reduce the accuracy of ReLU and sigmoid networks to below 2% (worse than random).
We show that RBFI networks trained on normal input are superior to ReLU and sigmoid networks trained even with adversarial examples.
Our experimental results can be summarized as follows:
• In absence of adversarial attacks, RBFI networks match the accuracy of sigmoid networks, and are slightly lower in accuracy than ReLU networks.
• When networks are trained with regular input only, RBFI networks are markedly more resistant to adversarial attacks than sigmoid or ReLU networks.
• In presence of adversarial attacks, RBFI networks trained on regular input provide higher accuracy than sigmoid or ReLU networks, even when the latter are also trained on adversarial examples, and even when the adversarial examples are obtained via general projected gradient descent BID12 ).
• RBFI networks can be successfully trained with pseudogradients; training via standard gradient descent yields markedly inferior results.
• Appropriate regularization helps RBFI networks gain increased resistance to adversarial attacks.
Much work remains to be done, including experimenting with convolutional networks using RBFI units for images. However
, the results seem promising, in that RBFI seem to offer a viable alternative to current adversarial training regimes in achieving robustness to adversarial attacks.
In this paper, we have shown that non-linear structures such as RBFI can be efficiently trained using artificial, "pseudo" gradients, and can attain both high accuracy and high resistance to adversarial attacks. | We introduce a type of neural network that is structurally resistant to adversarial attacks, even when trained on unaugmented training sets. The resistance is due to the stability of network units wrt input perturbations. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:736 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We show that there exists an inherent tension between the goal of adversarial robustness and that of standard generalization.
Specifically, training robust models may not only be more resource-consuming, but also lead to a reduction of standard accuracy.
We demonstrate that this trade-off between the standard accuracy of a model and its robustness to adversarial perturbations provably exists even in a fairly simple and natural setting.
These findings also corroborate a similar phenomenon observed in practice.
Further, we argue that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers.
These differences, in particular, seem to result in unexpected benefits: the features learned by robust models tend to align better with salient data characteristics and human perception.
Deep learning models have achieved impressive performance on a number of challenging benchmarks in computer vision, speech recognition and competitive game playing (Krizhevsky et al., 2012; BID24; Mnih et al., 2015; Silver et al., 2016; BID25). However, it turns out that these models are actually quite brittle. In particular, one can often synthesize small, imperceptible perturbations of the input data and cause the model to make highly-confident but erroneous predictions (BID9; BID5; Szegedy et al., 2013). This
problem of so-called adversarial examples has garnered significant attention recently and resulted in a number of approaches both to finding these perturbations, and to training models that are robust to them BID23 Nguyen et al., 2015; BID16 BID7 Sharif et al., 2016; Kurakin et al., 2016a; BID14 BID1 . However
, building such adversarially robust models has proved to be quite challenging. In particular
, many of the proposed robust training methods were subsequently shown to be ineffective BID8 BID2 Uesato et al., 2018) . Only recently
, has there been progress towards models that achieve robustness that can be demonstrated empirically and, in some cases, even formally verified BID13 Kolter & Wong, 2017; Sinha et al., 2017; Tjeng & Tedrake, 2017; Raghunathan et al., 2018; BID11 Xiao et al., 2018b) .The vulnerability
of models trained using standard methods to adversarial perturbations makes it clear that the paradigm of adversarially robust learning is different from the classic learning setting. In particular, we
already know that robustness comes at a cost. This cost takes the
form of computationally expensive training methods (more training time), but also, as shown recently in Schmidt et al. (2018) , the potential need for more training data. It is natural then
to wonder: Are these the only costs of adversarial robustness? And, if so, once we
choose to pay these costs, would it always be preferable to have a robust model instead of a standard one? The goal of this work
is to explore these questions and thus, in turn, to bring us closer to understanding the phenomenon of adversarial robustness.

Our contributions. It might be natural to expect that training models to be adversarially robust, albeit more resource-consuming, can only improve performance in the standard classification setting. In this work, we show
, however, that the picture here is much more nuanced: these two goals might be fundamentally at odds. Specifically, even though
applying adversarial training, the leading method for training robust models, can be beneficial in some regimes of training data size, in general, there is a trade-off between the standard accuracy and adversarially robust accuracy of a model. In fact, we show that this
trade-off provably exists even in a fairly simple and natural setting. At the root of this trade-off is the fact that features learned by the optimal standard and optimal robust classifiers are fundamentally different and, interestingly, this phenomenon persists even in the limit of infinite data. This thus also goes against the natural expectation that, given sufficient data, classic machine learning tools would be sufficient to learn robust models, and it emphasizes the need for techniques specifically tailored to training robust models. Our exploration also uncovers certain unexpected benefits of adversarially robust models. In particular, adversarially
robust learning tends to equip the resulting models with invariances that we would expect to be also present in human vision. This, in turn, leads to features
that align better with human perception, and could also pave the way towards building models that are easier to understand. Consequently, the feature embeddings
learnt by robust models yield also clean inter-class interpolations, similar to those found by generative adversarial networks (GANs) BID23 and other generative models. This hints at the existence of a stronger
connection between GANs and adversarial robustness.
In this work, we show that the goal of adversarially robust generalization might fundamentally be at odds with that of standard generalization.
Specifically, we identify an inherent trade-off between the standard accuracy and adversarial robustness of a model, that provably manifests in a concrete, simple setting.
This trade-off stems from intrinsic differences between the features learned by standard and robust models.
Our analysis also explains the drop in standard accuracy observed when employing adversarial training in practice.
Moreover, it emphasizes the need to develop robust training methods, since robustness is unlikely to arise as a consequence of standard training. We discover that even though adversarial robustness comes at a price, it has some unexpected benefits.
Robust models learn features that align well with salient data characteristics.
The root of this phenomenon is that the set of adversarial perturbations encodes some prior for human perception.
Thus, classifiers that are robust to these perturbations are also necessarily invariant to input modifications that we expect humans to be invariant to.
We demonstrate a striking consequence of this phenomenon: robust models yield clean feature interpolations similar to those obtained from generative models such as GANs BID23 .
This emphasizes the possibility of a stronger connection between GANs and adversarial robustness. Finally, our findings show that the interplay between adversarial robustness and standard classification might be more nuanced than one might expect. This motivates further work to fully understand the relative costs and benefits of each of these notions.
We filter out all the images from the MNIST dataset other than the "5" and "7" labelled examples.
For the ImageNet dataset, adversarial training is significantly harder since the classification problem is challenging by itself and standard classifiers are already computationally expensive to train.
We thus restrict our focus to a smaller subset of the dataset.
We group together a subset of existing, semantically similar ImageNet classes into 8 different super-classes, as shown in TAB1 .
We train and evaluate only on examples corresponding to these classes. | We show that adversarial robustness might come at the cost of standard classification performance, but also yields unexpected benefits. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:737 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples.
In this work, we propose mixup, a simple learning principle to alleviate these issues.
In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels.
By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples.
Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures.
We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
Large deep neural networks have enabled breakthroughs in fields such as computer vision BID22 , speech recognition , and reinforcement learning BID28 .
In most successful applications, these neural networks share two commonalities.
First, they are trained as to minimize their average error over the training data, a learning rule also known as the Empirical Risk Minimization (ERM) principle BID35 .
Second, the size of these state-of-theart neural networks scales linearly with the number of training examples.
For instance, the network of BID31 used 10^6 parameters to model the 5·10^4 images in the CIFAR-10 dataset, the network of BID30 [...].

Strikingly, a classical result in learning theory BID36 tells us that the convergence of ERM is guaranteed as long as the size of the learning machine (e.g., the neural network) does not increase with the number of training data. Here, the size of a learning machine is measured in terms of its number of parameters or, relatedly, its VC-complexity BID16 . This
contradiction challenges the suitability of ERM to train our current neural network models, as highlighted in recent research. On the
one hand, ERM allows large neural networks to memorize (instead of generalize from) the training data even in the presence of strong regularization, or in classification problems where the labels are assigned at random . On the
other hand, neural networks trained with ERM change their predictions drastically when evaluated on examples just outside the training distribution BID33 , also known as adversarial examples. This evidence
suggests that ERM is unable to explain or provide generalization on testing distributions that differ only slightly from the training data. However, what
is the alternative to ERM? The method of
choice to train on similar but different examples to the training data is known as data augmentation BID29 , formalized by the Vicinal Risk Minimization (VRM) principle BID3 . In VRM, human
knowledge is required to describe a vicinity or neighborhood around each example in the training data. Then, additional
virtual examples can be drawn from the vicinity distribution of the training examples to enlarge the support of the training distribution. For instance, when
performing image classification, it is common to define the vicinity of one image as the set of its horizontal reflections, slight rotations, and mild scalings. While data augmentation
consistently leads to improved generalization BID29 , the procedure is dataset-dependent, and thus requires the use of expert knowledge. Furthermore, data augmentation
assumes that the examples in the vicinity share the same class, and does not model the vicinity relation across examples of different classes.

Contribution. Motivated by these issues, we introduce a simple and data-agnostic data augmentation routine, termed mixup (Section 2). In a nutshell, mixup constructs virtual training examples

x̃ = λ x_i + (1 − λ) x_j,  where x_i, x_j are raw input vectors,
ỹ = λ y_i + (1 − λ) y_j,  where y_i, y_j are one-hot label encodings,

where (x_i, y_i) and (x_j, y_j) are two examples drawn at random from our training data, and λ ∈ [0, 1]. Therefore, mixup extends the training distribution by incorporating the prior knowledge that linear interpolations of feature vectors should lead to linear interpolations of the associated targets. mixup can be implemented in a few lines of code, and introduces minimal computation overhead. Despite its simplicity, mixup allows a new state-of-the-art performance in the CIFAR-10, CIFAR-100, and ImageNet-2012 image classification datasets (Sections 3.1 and 3.2). Furthermore, mixup increases the robustness
of neural networks when learning from corrupt labels (Section 3.4), or facing adversarial examples (Section 3.5). Finally, mixup improves generalization on speech
(Sections 3.3) and tabular (Section 3.6) data, and can be used to stabilize the training of GANs (Section 3.7). The source-code necessary to replicate our CIFAR-10
experiments is available at: https://github.com/facebookresearch/mixup-cifar10. To understand the effects of various design choices in mixup, we conduct a thorough set of ablation study experiments (Section 3.8).
better than related methods in previous work, and each of the design choices contributes to the final performance. We conclude by exploring the connections to prior work
(Section 4), as well as offering some points for discussion (Section 5).
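The "few lines of code" remark above is easy to make concrete; this is a minimal numpy sketch of the interpolation, drawing one λ per batch (a common choice, though implementations differ on such details), and it is not the released implementation.

import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    # x: (batch, features) raw inputs; y: (batch, classes) one-hot labels.
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # lambda ~ Beta(alpha, alpha)
    perm = rng.permutation(len(x))          # pair each example with a random partner
    return lam * x + (1 - lam) * x[perm], lam * y + (1 - lam) * y[perm]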
We have proposed mixup, a data-agnostic and straightforward data augmentation principle.
We have shown that mixup is a form of vicinal risk minimization, which trains on virtual examples constructed as the linear interpolation of two random examples from the training set and their labels.
Incorporating mixup into existing training pipelines reduces to a few lines of code, and introduces little or no computational overhead.
Throughout an extensive evaluation, we have shown that mixup improves the generalization error of state-of-the-art models on ImageNet, CIFAR, speech, and tabular datasets.
Furthermore, mixup helps to combat memorization of corrupt labels, sensitivity to adversarial examples, and instability in adversarial training. In our experiments, the following trend is consistent: with increasingly large α, the training error on real data increases, while the generalization gap decreases.
This sustains our hypothesis that mixup implicitly controls model complexity.
However, we do not yet have a good theory for understanding the 'sweet spot' of this bias-variance trade-off.
For example, in CIFAR-10 classification we can get very low training error on real data even when α → ∞ (i.e., training only on averages of pairs of real examples), whereas in ImageNet classification, the training error on real data increases significantly with α → ∞.
Based on our ImageNet and Google commands experiments with different model architectures, we conjecture that increasing the model capacity would make training error less sensitive to large α, hence giving mixup a more significant advantage. mixup also opens up several possibilities for further exploration.
First, is it possible to make similar ideas work on other types of supervised learning problems, such as regression and structured prediction?
While generalizing mixup to regression problems is straightforward, its application to structured prediction problems such as image segmentation remains less obvious.
Second, can similar methods prove helpful beyond supervised learning?
The interpolation principle seems like a reasonable inductive bias which might also help in unsupervised, semi-supervised, and reinforcement learning.
Can we extend mixup to feature-label extrapolation to guarantee a robust model behavior far away from the training data?
Although our discussion of these directions is still speculative, we are excited about the possibilities mixup opens up, and hope that our observations will prove useful for future development. | Training on convex combinations between random training examples and their labels improves generalization in deep neural networks | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:738 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present a novel approach to spike sorting for high-density multielectrode probes using the Neural Clustering Process (NCP), a recently introduced neural architecture that performs scalable amortized approximate Bayesian inference for efficient probabilistic clustering.
To optimally encode spike waveforms for clustering, we extended NCP by adding a convolutional spike encoder, which is learned end-to-end with the NCP network.
Trained purely on labeled synthetic spikes from a simple generative model, the NCP spike sorting model shows promising performance for clustering multi-channel spike waveforms.
The model provides higher clustering quality than an alternative Bayesian algorithm, finds more spike templates with clear receptive fields on real data and recovers more ground truth neurons on hybrid test data compared to a recent spike sorting algorithm.
Furthermore, NCP is able to handle the clustering uncertainty of ambiguous small spikes by GPU-parallelized posterior sampling.
The source code is publicly available.
Large-scale neuronal population recordings using high-density multi-electrode arrays (MEA) are at the forefront of current progress in understanding neural circuit dynamics.
In MEA recordings, each electrode channel reads extracellular signals from many neurons, and each neuron is recorded by multiple nearby electrodes.
A key step in the analysis of MEA data is spike sorting, which converts the raw electrical signal into a set of neural spike trains belonging to individual neurons.
As MEAs grow in scale and popularity, there is a new urgency in improving spike sorting performance [2] [3] [4] [5] [6] [7] .
A typical spike sorting pipeline consists of three steps.
The spike detection step extracts putative spike events from noisy recordings.
The clustering step groups similar spike waveforms into clusters, each representing a putative neuron.
To resolve colliding waveforms, a deconvolution step is often performed.
Spike clustering is at the core of the pipeline, as the clustering performance determines both the accuracy of spike assignment and the quality of spike templates used for deconvolution.
Spike clustering, however, poses significant challenges: (1) Spike waveforms form highly non-Gaussian clusters in spatial and temporal dimensions, and it is unclear what the optimal features for clustering are.
(2) It is unknown a priori how many clusters there are.
(3) Although existing methods perform well on spikes with high signal-to-noise ratios (SNR), there remain significant challenges in the lower-SNR regime with increased clustering uncertainty.
Fully-Bayesian approaches proposed to handle this uncertainty [8, 9] do not scale to large datasets due to expensive Gibbs sampling.
To address these challenges, we propose a novel approach to spike clustering using the recently introduced Neural Clustering Process (NCP) [10, 11] (Figure 1).
NCP is based on a neural architecture that performs scalable amortized approximate Bayesian clustering.
(1) Rather than selecting arbitrary features for clustering, the spike waveforms are encoded with a convolutional neural network (ConvNet), which is learned end-to-end jointly with the NCP network to ensure optimal feature encoding.
(2) Using a variable-input softmax function, NCP is able to compute full posterior distributions on cluster labels and the number of clusters, without assuming a fixed or maximum number of clusters.
(3) NCP allows for efficient probabilistic clustering by GPU-parallelized posterior sampling, which is particularly useful for handling the clustering uncertainty of ambiguous small spikes.
(4) The computational cost of NCP training can be highly amortized, since neuroscientists often sort spikes from many statistically similar datasets.
We trained NCP for spike clustering using synthetic spikes from a simple yet effective generative model that mimics the distribution of real spikes, and evaluated the performance on labeled synthetic data, unlabeled real data, and hybrid test data with partial ground truth.
We show that using NCP for spike sorting provides high clustering quality, matches or outperforms a recent spike sorting algorithm [2] , and handles clustering uncertainty by efficiently producing multiple plausible clustering configurations.
These results show substantial promise for incorporating NCP into a production-scale spike sorting pipeline.
Figure 1 [11]: The model is composed of the deep networks h, g, q, f. Bottom left: after assigning the cluster labels c 1:n−1 , each possible discrete value k for c n gives a different symmetry-invariant encoding of x 1:n into the vector G k , using the functions h and g. The remaining, yet-unassigned points x n+1:N are encoded by q and summed into the vector Q. Bottom right: each pair (G k , Q) is mapped by f into a real number (logit), which in turn is mapped into the multinomial distribution q θ (c n |c 1:n−1 , x) via a variable-input softmax.
2 Spike Sorting using the Neural Clustering Process
Data preprocessing.
Training and test data come from the retinal recordings in [12] using a 512-channel 2D hexagonal MEA with 20 kHz sampling rate.
After spike detection [5] , each multi-channel spike waveform was assigned to the channel where the waveform has the maximum peak-to-peak (PTP) amplitude (i.e. the center channel, ch0).
This partitioned the recording data by channel such that each center-channel-based partition only contains multi-channel spike waveforms centered at that channel.
Each spike waveform is represented as a 7 × 32 array containing the 32 time steps surrounding the peak from the center channel and the same time window from the 6 immediate neighbor channels (Figure 1 top).
These 7 × 32 arrays are the spikes on which clustering was performed.
Neural architecture for NCP spike sorting.
The NCP architecture contains four neural networks, h, q, g, f, as shown in Figure 1 (bottom).
We refer to [11] for the detailed formulation and notations of NCP.
To extract useful features from the spatial-temporal patterns of spike waveforms, we use a 1D ConvNet as the h and q encoder functions.
The convolution is applied along the time axis, with each electrode channel treated as a feature dimension.
The ConvNet uses a ResNet architecture with 4 residual blocks, each having 32, 64, 128, 256 feature maps (kernel size = 3, stride = [1, 2, 2, 2]).
The last block is followed by an average pooling layer and a final linear layer.
The outputs of the ResNet encoder are the h i and q i vectors of NCP, i.e. h i = h(x i ) and q i = q(x i ).
The other two functions, g and f , are multilayer perceptrons identical to those in the 2D Gaussian example in [11] .
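For concreteness, the 1D ResNet encoder described above can be sketched as follows. This is our own illustrative PyTorch reconstruction from the stated hyper-parameters (4 residual blocks with 32, 64, 128, 256 feature maps, kernel size 3, strides [1, 2, 2, 2], average pooling, final linear layer); details such as normalization layers and the exact residual wiring are assumptions, not taken from the paper.

```python
import torch.nn as nn

class ResBlock1d(nn.Module):
    """One residual block of the spike encoder (sketch)."""
    def __init__(self, c_in, c_out, stride):
        super().__init__()
        self.conv1 = nn.Conv1d(c_in, c_out, kernel_size=3, stride=stride, padding=1)
        self.conv2 = nn.Conv1d(c_out, c_out, kernel_size=3, stride=1, padding=1)
        self.skip = nn.Conv1d(c_in, c_out, kernel_size=1, stride=stride)
        self.relu = nn.ReLU()

    def forward(self, x):
        h = self.relu(self.conv1(x))
        return self.relu(self.conv2(h) + self.skip(x))

class SpikeEncoder(nn.Module):
    """Maps a (batch, 7, 32) spike waveform array to the h_i / q_i embedding."""
    def __init__(self, emb_dim=256):
        super().__init__()
        channels, strides = [7, 32, 64, 128, 256], [1, 2, 2, 2]
        self.blocks = nn.Sequential(
            *[ResBlock1d(channels[i], channels[i + 1], strides[i]) for i in range(4)])
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(256, emb_dim)

    def forward(self, x):                      # x: (batch, 7, 32)
        h = self.pool(self.blocks(x)).squeeze(-1)
        return self.fc(h)
```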
Training NCP using synthetic data.
To train NCP for spike clustering, we created synthetic labeled training data ( Figure 2 ) using a mixture of finite mixtures (MFM) generative model [13] of noisy spike waveforms that mimics the distribution of real spikes: K ∼ 1 + Poisson(λ), π 1:K ∼ Dirichlet(α 1:K ), c 1:N ∼ Categorical(π 1:K ), x i ∼ N (μ ci , Σ), i = 1, . . . , N .
Here, N is the number of spikes between [200, 500] .
The number of clusters K is sampled from a shifted Poisson distribution with λ = 2 so that each channel has on average 3 clusters.
π 1:K represents the proportion of each cluster and is sampled from a Dirichlet distribution with α 1:K = 1.
The training spike templates µ k ∈ R 7×32 are sampled from a reservoir of 957 ground-truth templates not present in any test data, with the temporal axis slightly jittered by random resampling.
Finally, each waveform x i is obtained by adding to µ ci Gaussian noise with covariance given by the Kronecker product of spatial and temporal correlation matrices estimated from the training data.
This method creates spatially and temporally correlated noise patterns similar to real data ( Figure 2 ).
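A minimal sampler for this generative model might look as follows. It is a sketch under simplifying assumptions: in particular, the noise here is i.i.d. Gaussian rather than the Kronecker-structured spatio-temporal covariance described above, and the template jittering step is omitted.

```python
import numpy as np

def sample_synthetic_spikes(templates, noise_scale=0.1, rng=None):
    """Draw one synthetic training set of labeled spike waveforms.

    templates: array of ground-truth templates with shape (reservoir_size, 7, 32).
    Returns waveforms x of shape (N, 7, 32) and cluster labels c of shape (N,).
    """
    if rng is None:
        rng = np.random.default_rng()
    N = rng.integers(200, 501)                 # number of spikes in [200, 500]
    K = 1 + rng.poisson(2)                     # shifted Poisson: ~3 clusters on average
    pi = rng.dirichlet(np.ones(K))             # cluster proportions, alpha = 1
    mu = templates[rng.choice(len(templates), size=K, replace=False)]
    c = rng.choice(K, size=N, p=pi)            # cluster assignment of each spike
    x = mu[c] + noise_scale * rng.standard_normal((N, 7, 32))
    return x, c
```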
We trained NCP for 20000 iterations on a GPU with a batch size of 32 to optimize the NLL loss by the Adam optimizer [14] .
A learning rate of 0.0001 was used (reduced by half at 10k and 17k iterations).
Probabilistic spike clustering using NCP.
At inference time, we fed the 7 x 32 arrays of spike waveforms to NCP, and performed GPU-parallelized posterior sampling of cluster labels ( Figure 1 ).
Using beam search [15, 16] with a beam size of 150, we were able to efficiently sample 150 high-likelihood clustering configurations for 2000 spikes in less than 10 seconds on a single GPU.
After clustering, we obtained a spike template for each cluster as the average shape of the spike waveforms.
The clustering configuration with the highest probability was used in most experiments. | We present a novel approach to spike sorting using the Neural Clustering Process (NCP), a recently introduced neural architecture that performs scalable amortized approximate Bayesian inference for efficient probabilistic clustering. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:739 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The key attribute that drives the unprecedented success of modern Recurrent Neural Networks (RNNs) on learning tasks which involve sequential data, is their ever-improving ability to model intricate long-term temporal dependencies.
However, a well established measure of RNNs' long-term memory capacity is lacking, and thus formal understanding of their ability to correlate data throughout time is limited.
Though depth efficiency in convolutional networks is well established by now, it does not suffice in order to account for the success of deep RNNs on inputs of varying lengths, and the need to address their 'time-series expressive power' arises.
In this paper, we analyze the effect of depth on the ability of recurrent networks to express correlations ranging over long time-scales.
To meet the above need, we introduce a measure of the information flow across time that can be supported by the network, referred to as the Start-End separation rank.
Essentially, this measure reflects the distance of the function realized by the recurrent network from a function that models no interaction whatsoever between the beginning and end of the input sequence.
We prove that deep recurrent networks support Start-End separation ranks which are exponentially higher than those supported by their shallow counterparts.
Moreover, we show that the ability of deep recurrent networks to correlate different parts of the input sequence increases exponentially as the input sequence extends, while that of vanilla shallow recurrent networks does not adapt to the sequence length at all.
Thus, we establish that depth brings forth an overwhelming advantage in the ability of recurrent networks to model long-term dependencies, and provide an exemplar of quantifying this key attribute which may be readily extended to other RNN architectures of interest, e.g. variants of LSTM networks.
We obtain our results by considering a class of recurrent networks referred to as Recurrent Arithmetic Circuits (RACs), which merge the hidden state with the input via the Multiplicative Integration operation.
Over the past few years, Recurrent Neural Networks (RNNs) have become the prominent machine learning architecture for modeling sequential data, having been successfully employed for language modeling (Sutskever et al., 2011; Graves, 2013) , neural machine translation (Bahdanau et al., 2014) , speech recognition (Graves et al., 2013; BID1 , and more.
The success of recurrent networks in learning complex functional dependencies for sequences of varying lengths, readily implies that long-term and elaborate correlations in the given inputs are somehow supported by these networks.
However, formal understanding of the influence of a recurrent network's structure on its expressiveness, and specifically on its ever-improving ability to integrate data throughout time (e.g. translating long sentences, answering elaborate questions), is lacking.An ongoing empirical effort to successfully apply recurrent networks to tasks of increasing complexity and temporal extent, includes augmentations of the recurrent unit such as Long Short Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) and their variants (e.g. Cho et al. (2014) ).
A parallel avenue, which we focus on in this paper, includes the stacking of layers to form deep recurrent networks (Schmidhuber, 1992) .
Deep recurrent networks, which exhibit empirical superiority over shallow ones (see e.g. Graves et al. (2013) ), implement hierarchical processing of information at every time-step that accompanies their inherent time-advancing computation.
Evidence for a time-scale related effect arises from experiments (Hermans and Schrauwen, 2013) -deep recurrent networks appear to model correlations which correspond to longer time-scales than shallow ones.
These findings, which imply that depth brings forth a considerable advantage in complexity and in temporal capacity of recurrent networks, have no adequate theoretical explanation.In this paper, we address the above presented issues.
Based on the relative maturity of depth efficiency results in neural networks, namely results that show that deep networks efficiently express functions that would require shallow ones to have a super-polynomial size (e.g. Cohen et al. (2016) ; Eldan and Shamir (2016) ), it is natural to assume that depth has a similar effect on the expressiveness of recurrent networks.
Indeed, we show that depth efficiency holds for recurrent networks. However, the distinguishing attribute of recurrent networks is their inherent ability to cope with varying input sequence length.
Thus, once establishing the above depth efficiency in recurrent networks, a basic question arises, which relates to the apparent depth enhanced long-term memory in recurrent networks: Do the functions which are efficiently expressed by deep recurrent networks correspond to dependencies over longer time-scales?
We answer this question by showing that depth provides an exponential boost to the ability of recurrent networks to model long-term dependencies. In order to take on the above question, we introduce in section 2 a recurrent network referred to as a recurrent arithmetic circuit (RAC) that shares the architectural features of RNNs, and differs from them in the type of non-linearity used in the calculation.
This type of connection between state-of-the-art machine learning algorithms and arithmetic circuits (also known as Sum-Product Networks (Poon and Domingos, 2011)) has well-established precedent in the context of neural networks.
Delalleau and Bengio (2011) prove a depth efficiency result on such networks, and Cohen et al. (2016) theoretically analyze the class of Convolutional Arithmetic Circuits which differ from common ConvNets in the exact same fashion in which RACs differ from more standard RNNs.
Conclusions drawn from such analyses were empirically shown to extend to common ConvNets (e.g. Sharir and Shashua (2017) ; Levine et al. (2017) ).
Beyond their connection to theoretical models, the modification which defines RACs resembles that of Multiplicative RNNs (Sutskever et al., 2011) and of Multiplicative Integration networks (Wu et al., 2016) , which provide a substantial performance boost over many of the existing RNN models.
In order to obtain our results, we make a connection between RACs and the Tensor Train (TT) decomposition (Oseledets, 2011) , which suggests that Multiplicative RNNs may be related to a generalized TT-decomposition, similar to the way Cohen and Shashua (2016) connected ReLU ConvNets to generalized tensor decompositions.We move on to introduce in section 3 the notion of Start-End separation rank as a measure of the recurrent network's ability to model elaborate long-term dependencies.
In order to analyze the long-term correlations of a function over a sequential input which extends over T time-steps, we partition the inputs into those which arrive at the first T /2 time-steps ("Start") and the last T /2 time-steps ("End"), and ask how far the function realized by the recurrent network is from being separable w.r.t. this partition.
Distance from separability is measured through the notion of separation rank (Beylkin and Mohlenkamp, 2002) , which can be viewed as a surrogate of the L 2 distance from the closest separable function.
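For reference, the separation rank of a function f with respect to the (Start, End) partition can be written as below. This is our paraphrase of the standard definition (Beylkin and Mohlenkamp, 2002), not a quote of the paper's own equations:

```latex
\mathrm{sep}(f;\,S,E) \;=\; \min\Big\{ R \in \mathbb{N} \;:\;
   f(\mathbf{x}_S,\mathbf{x}_E) \,=\, \sum_{r=1}^{R} g_r(\mathbf{x}_S)\, h_r(\mathbf{x}_E) \Big\},
```

so a separable function has Start-End separation rank 1, while functions that strongly entangle the two halves of the sequence require many summands.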
For a given function, high Start-End separation rank implies that the function induces strong correlation between the beginning and end of the input sequence, and vice versa.In section 4 we directly address the depth enhanced long-term memory question above, by examining depth L = 2 RACs and proving that functions realized by these deep networks enjoy Start-End separation ranks that are exponentially higher than those of shallow networks, implying that indeed these functions can model more elaborate input dependencies over longer periods of time.
An additional reinforcing result is that the Start-End separation rank of the deep recurrent network grows exponentially with the sequence length, while that of the shallow recurrent network is independent of the sequence length.
Informally, this implies that vanilla shallow recurrent networks are inadequate in modeling correlations of long input sequences, since in contrast to the case of deep recurrent networks, the modeled dependencies achievable by shallow ones do not adapt to the actual length of the input.
Finally, we present and motivate a quantitative conjecture by which the Start-End separation rank of recurrent networks grows exponentially with the network depth.
A proof of this conjecture, which will provide an even deeper insight regarding the advantages of depth in recurrent networks, is left as an open problem.
The notion of depth efficiency, by which deep networks efficiently express functions that would require shallow networks to have a super-polynomial size, is well established in the context of convolutional networks.
However, recurrent networks differ from convolutional networks, as they are suited by design to tackle inputs of varying lengths.
Accordingly, depth efficiency alone does not account for the remarkable performance of recurrent networks on long input sequences.
In this paper, we identified a fundamental need for a quantifier of 'time-series expressivity', quantifying the memory capacity of recurrent networks.
In order to meet this need, we proposed a measure of the ability of recurrent networks to model long-term temporal dependencies, in the form of the Start-End separation rank.
The separation rank was used to quantify correlations in convolutional networks, and has roots in the field of quantum physics.
The proposed measure adjusts itself to the temporal extent of the input series, and quantifies the ability of the recurrent network to correlate the incoming sequential data as time progresses.We analyzed the class of Recurrent Arithmetic Circuits, which are closely related to successful RNN architectures, and proved that the Start-End separation rank of deep RACs increases exponentially as the input sequence extends, while that of shallow RACs is independent of the input length.
These results, which demonstrate that depth brings forth an overwhelming advantage in the ability of recurrent networks to model long-term dependencies, were achieved by combining tools from the fields of measure theory, tensorial analysis, combinatorics, graph theory and quantum physics.Such analyses may be readily extended to other architectural features employed in modern recurrent networks.
Indeed, the same time-series expressivity question may now be applied to the different variants of LSTM networks, and the proposed notion of Start-End separation rank may be employed for quantifying their memory capacity.
We have demonstrated that such a treatment can go beyond unveiling the origins of the success of a certain architectural choice, and leads to new insights.
The above established observation, that correlations achievable by vanilla shallow recurrent networks do not adapt at all to the sequence length, is an exemplar of this potential. Moreover, practical recipes may emerge from such theoretical analyses.
The experiments performed in Hermans and Schrauwen (2013) suggest that shallow layers of recurrent networks are related to short time-scales, e.g. in speech: phonemes, syllables, words, while deeper layers appear to support correlations of longer time-scales, e.g. full sentences, elaborate questions.
These findings open the door to further depth related investigations in recurrent networks, and specifically the role of each layer in modeling temporal correlations may be better understood.
Levine et al. (2017) establish theoretical observations which translate into practical conclusions regarding the number of hidden channels to be chosen for each layer in a deep convolutional network.
The conjecture presented in this paper, by which the Start-End separation rank of recurrent networks grows exponentially with depth, can similarly entail practical recipes for enhancing their memory capacity.
Such analyses can be reinforced by experiments, and lead to a profound understanding of the contribution of deep layers to the recurrent network's memory.
Indeed, we view this work as an important step towards novel methods of matching the recurrent network architecture to the temporal correlations in a given sequential data set.
We begin in section A.1 by providing a brief introduction to TNs.
Next, we present in section A.2 the TN which corresponds to the calculation of a shallow RAC, and tie it to a common TN architecture referred to as a Matrix Product State (MPS) (see overview in e.g. Orús (2014)), and equivalently to the tensor train (TT) decomposition (Oseledets, 2011) .
Subsequently, we present in section A.3 a TN construction of a deep RAC, and emphasize the characteristics of this construction that are the origin of the enhanced ability of deep RACs to model elaborate temporal dependencies.
Finally, in section A.4, we make use of the above TNs construction in order to formally motivate conjecture 1, according to which the Start-End separation rank of RACs grows exponentially with depth.
A TN is a weighted graph, where each node corresponds to a tensor whose order is equal to the degree of the node in the graph.
Accordingly, the edges emanating out of a node, also referred to as its legs, represent the different modes of the corresponding tensor.
The weight of each edge in the graph, also referred to as its bond dimension, is equal to the dimension of the appropriate tensor mode.
In accordance with the relation between mode, dimension and index of a tensor presented in section 3.2, each edge in a TN is represented by an index that runs between 1 and its bond dimension.
FIG4 shows three examples: (1) A vector, which is a tensor of order 1, is represented by a node with one leg.
(2) A matrix, which is a tensor of order 2, is represented by a node with two legs.
(3) Accordingly, a tensor of order N is represented in the TN as a node with N legs.We move on to present the connectivity properties of a TN.
Edges which connect two nodes in the TN represent an operation between the two corresponding tensors.
An index which represents such an edge is called a contracted index, and the operation of contracting that index is in fact a summation over all of the values it can take.
An index representing an edge with one loose end is called an open index.
The tensor represented by the entire TN, whose order is equal to the number of open indices, can be calculated by summing over all of the contracted indices in the network.
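As a small code illustration of this contraction operation (corresponding to the matrix-vector example discussed next), the summation over a single shared index can be written with one einsum call; this is our sketch, with arbitrary dimensions:

```python
import numpy as np

r1, r2 = 3, 4
v = np.random.randn(r1)          # order-1 node; one leg carrying index k
M = np.random.randn(r2, r1)      # order-2 node; open leg d and contracted leg k

u = np.einsum('dk,k->d', M, v)   # sum over the contracted index k, leaving open index d
assert np.allclose(u, M @ v)     # the contracted TN is just the matrix-vector product
```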
An example for a contraction of a simple TN is depicted in FIG4 .
There, a TN corresponding to the operation of multiplying a vector v ∈ R r 1 by a matrix M ∈ R r 2 ×r 1 is performed by summing over the only contracted index, k.
As there is only one open index, d, the result of contracting the network is an order 1 tensor (a vector): u ∈ R r 2 which upholds u = M v. Though we use below the contraction of indices in more elaborate TNs, this operation can be essentially viewed as a generalization of matrix multiplication. | We propose a measure of long-term memory and prove that deep recurrent networks are much better fit to model long-term temporal dependencies than shallow ones. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:74 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The goal of the paper is to propose an algorithm for learning the most generalizable solution from given training data.
It is shown that the Bayesian approach leads to a solution that depends on statistics of the training data and not on particular
samples.
The solution is stable under perturbations of training data because it is defined by an integral contribution of multiple maxima of the likelihood and not by a single global maximum.
Specifically, the Bayesian probability distribution
of parameters (weights) of a probabilistic model given by a neural network is estimated via recurrent variational approximations.
Derived recurrent update rules correspond to SGD-type rules for finding a minimum of an effective loss that is an average of an original negative log-likelihood over the Gaussian distributions of weights, which makes it a function of means and variances.
The effective loss is convex for large variances and non-convex in the limit of small variances.
Among stationary solutions of the update rules there are trivial solutions with zero variances at local minima of the original loss and a single non-trivial solution with finite variances that is a critical point at the end of convexity of the effective loss
in the mean-variance space.
At the critical point both first- and second-order gradients of the effective loss w.r.t. means are zero.
The empirical study confirms that the critical point represents the most generalizable solution.
While the location of
the critical point in the weight space depends on specifics of the used probabilistic model, some properties at the critical point are universal and model independent.
Finding a generalizable solution is a critical problem for any machine learning task.
The ultimate goal of learning from the available ground truths is to make a good prediction for new data.
The Bayesian method is a very powerful approach that gives a probabilistic measure of the ability of a proposed model to predict by estimating how well the model predicts known data.The accuracy of the predictions depends on how the found solution is able to overcome a sampling bias to avoid overfitting for given particular samples of training data.Specifically, in Bayesian method predictions of labels y for an input x are made by using a probabilistic model, for certainty a neural network, which defines a function parametrized by weights w that allows computing probabilities P (y|x, w) for each weight point.
Each weight point contributes to predicted probabilities of labels P rob(y|x) in accordance with probability distribution of weights.
The distribution of weights is learned from a known training data set {x n , y n ; n = 1..
N } and its prior probability distribution P 0 (w) in the following way: Prob(y|x) = ∫ dw P (y|x, w) P 0 (w) ∏ n=1..N P (y n |x n , w) / ∫ dw P 0 (w) ∏ n=1..N P (y n |x n , w). Here the predicted probability Prob(y|x) is an average of the model probability P (y|x, w) at a weight w over the learned weight distribution.
To make predictions we are only interested in a method that allows us to find the averages in the equation above, and not the absolute values of the integrals.
According to mean value theorem (Cauchy (1813) , also in Encyclopedia of Mathematics Hazewinkel (1994) ) values of the averages can be represented by a single point, which in our case means that there is a single point in the weight space w 0 that represents a result of computing the integrals, so P rob(y|x) = P (y|x, w 0 ).
That point w 0 is a solution of the training of the neural network. A standard approach to get the solution is the maximum likelihood method, which finds a maximum of the integrand.
However, there are some cases when the maximum likelihood fails to represent main contribution to the integral by weights.
Consider this example: if log-likelihood for N data samples has a maximum at some weight point w 1 , then in general its first derivative by weights is zero, second derivative is negative and proportional to N , so corresponding Gaussian integral by the weights is proportional to N −d/2 , where d is number of weights.
This will change if there is a flat maximum, which has not only first but also second and third derivatives equal to zero.
In this case the integral is proportional to N −d/4 .
For a large number of samples the flat maximum makes the most significant contribution to the integral over the weights: I 1 ∝ (P 1 ) N N −d/2 for the narrow maximum and I 2 ∝ (P 2 ) N N −d/4 for the flat maximum, where P 1 and P 2 are the average sample probabilities at the respective maxima.
For a typical case when the number of weights d ∼ N and average sample probabilities at maxima are comparable O(P 1 ) ∼ O(P 2 ) the integral around flat maximum I 2 is always bigger than the integral around narrow maximum I 1 , unless P 2 is zero.While in general a likelihood has a number of regular local maxima and no flat maximum the effect of integration over multiple frequent local maxima can result in an effective flat maximum that defines a solution.We argue that any local or global maximum of likelihood gives a wrong solution that is not generalizable and so makes inaccurate predictions, because the locations for the global maximum and local maxima depend on specific samples in the training data and any modification of it by adding or removing samples will change the solution BID6 .
Instead we will show that there is another solution that more associated with properties of the distribution of training data and less with particular samples.The purpose of this paper is to show that the effective flat maximum always exists for specific parameters of prior weight distribution P 0 (w) (regularization parameters) and corresponding solution is the most generalizable solution that can be found in training.
We show that the solution is a critical point in an effective loss that represents the result of integration over the weights.In the next sections we derive the algorithm for the optimizer for finding the critical point solution and analyze properties of the solutions.
The empirical study is outside of the scope of the paper and will be presented separately.For simplicity we use same notations for a vector of weights and its components, as well as corresponding parameters of distributions of weights because all weight components are independent in our consideration and it is clear from context when it is a vector or its component.
In the paper we consider a learning of a predictive model from training data by approximately computing Bayesian integral over weights -the parameters of the model.By using recurrent variational approximations with Gaussian weight distributions we are able to find a solution -a single point in weight space that represents an effect of averaging over distribution of weights in the Bayesian integrals.We show that this approach leads to SGD-type optimization problem for an effective loss in meanvariance space.
For each mean-variance point the effective loss is defined as the average of the log-likelihood over the Gaussian distribution at the same mean-variance point.
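The following sketch shows one generic way to evaluate such a Gaussian-smoothed loss by Monte-Carlo sampling with the reparameterization trick. This is our illustration of the quantity being described, not the paper's algorithm, which derives the average and its update rules through recurrent variational approximations rather than sampling.

```python
import numpy as np

def effective_loss(loss_fn, mean, std, n_samples=64, rng=None):
    """Estimate E_{w ~ N(mean, std^2)}[loss_fn(w)] for a mean-variance point."""
    if rng is None:
        rng = np.random.default_rng()
    eps = rng.standard_normal((n_samples,) + np.shape(mean))
    weights = mean + std * eps                 # reparameterised weight samples
    return float(np.mean([loss_fn(w) for w in weights]))
```

Because the loss is averaged over a Gaussian, it is a smooth function of the mean and standard deviation even when loss_fn itself is only piecewise linear, which is the property used in the next sentence.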
Due to averaging the effective loss and its gradients of any order are continuous function of means even for ReLU based neural networks.The recurrent update rules define trajectories in mean-variance space.
Starting points of the trajectories are defined by regularization parameters, which are parameters of the Gaussian weight prior in Bayesian integrals.It is shown that there are two types of stationary solutions of the update rules.
First solution type corresponds to local minima of the original loss or maxima of the log-likelihood.
Second solution type is a critical point in mean-variance space that is a result of the integration over multiple maxima of the log-likelihood.At the critical point both first and second gradient of the effective loss are zero.
That leads to stability of the solution against perturbations of the training data set due to addition or removal data samples or via creation of adversarial examples. | Proposed method for finding the most generalizable solution that is stable w.r.t. perturbations of trainig data. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:740 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Humans rely on episodic memory constantly, in remembering the name of someone they met 10 minutes ago, the plot of a movie as it unfolds, or where they parked the car.
Endowing reinforcement learning agents with episodic memory is a key step on the path toward replicating human-like general intelligence.
We analyze why standard RL agents lack episodic memory today, and why existing RL tasks don't require it.
We design a new form of external memory called Masked Experience Memory, or MEM, modeled after key features of human episodic memory.
To evaluate episodic memory we define an RL task based on the common children's game of Concentration.
We find that a MEM RL agent leverages episodic memory effectively to master Concentration, unlike the baseline agents we tested.
From a neurobiological perspective, episodic memory is a key component of human life -remembering the name of a new acquaintance, recalling the plot of a movie as it unfolds, or realizing where the car is parked, are all examples of how we use episodic memory 1 to store and recall novel information.
If a person's ability to form and retrieve new episodic memories is lost, as in advanced Alzheimer's disease, the person is severely incapacitated as a result.
Although today's standard Reinforcement Learning (RL) agents possess forms of procedural and semantic memory BID10 , they lack any functional equivalent of episodic memory.
Our motivation is to expand the general intelligence of RL agents by imbuing them with a useful form of episodic memory.Human episodic memories appear to be records of experience that are re-experienced when associatively recalled BID8 .
In RL, fundamental experiences are termed observations.
Accordingly, we propose the following working definition: Episodic memory for an RL agent is the ability to leverage details of a past observation that is similar to the current observation.
This definition implies that an agent would exercise episodic memory by doing certain things at specific points in time, including
1. At the time of the old observation, the details of that observation must be stored somewhere in the agent.
This stored record is the episodic memory.
2. Later, when another observation arrives, it must somehow be compared with the stored observations.
If one of those is sufficiently similar, then the details of the old observation must be retrieved from memory.
There are different implementations of similarity and retrieval.
We will propose a concrete one later.
3. After retrieving the details of the old observation that is similar to the new one, the agent must be able to utilize that information to benefit its pursuit of reward.
Designing an RL agent with episodic memory is one challenge, and designing an RL task to evaluate episodic memory in an agent is another.
The main difficulty is that unless the task is very carefully designed, the RL agent may find a way to solve the task using other learning abilities besides episodic memory.
To illustrate, we briefly introduce the RL task that we will present later in detail.To evaluate an agent's episodic memory ability, we introduce the Concentration task based on the card game of the same name.
Concentration is a memory game with the goal of identifying matching pairs of cards among a large set of face-down cards.
During play, one card at a time is temporarily revealed to the player who must correctly memorize and recall the locations of each pair.
Concentration tests episodic memory by requiring an agent to leverage past observations of cards and their locations in order to succeed.
In our variant of Concentration, cards are not limited to the standard deck and are instead randomly generated for each game, so each card pair is unique and never before seen in the agent's lifetime.
Unique cards test the agent's ability to use episodic memory to reason about the identities and locations of the cards that are seen within the current episode, rather than learning to recognize specific cards.Recently, the capabilities of intelligent agents have greatly expanded through the combination of deep learning and reinforcement learning.
Deep RL agents have achieved notable success outperforming humans on Atari games BID15 .
However, many of the hardest tasks in which RL agents still fail to surpass humans are fraught with the difficulties of sparse rewards, partial observability, and a limited amount of samples.
Equipping an RL agent with memory is a promising approach to tackling some of these challenges, and has attracted a growing amount of interest in the research community.Recurrent neural networks such as LSTMs are commonly used as controllers BID13 .
LSTMs can be trained to maintain and use information on timescales of tens of steps, but have trouble learning over longer sequences.
Additionally, LSTMs do not store observations as discrete entities, so it is unclear how an LSTM could compare a never-before-seen observation (such as a unique card) with detailed instances of past observations, which also may have occurred only once.Memory augmented neural networks provide storage capabilities beyond those of an LSTM.
One such architecture, the differentiable neural computer (DNC) has been shown to be capable of handling several different memory-based tasks.
We evaluate the DNC on Concentration, but discover that it has difficulty reusing elements of its memory matrix. The key contributions of this paper are:
• We propose a working definition of episodic memory for RL agents.
• We introduce the Concentration task for evaluating episodic memory.
• We present the Masked Experience Memory (MEM) architecture, a new type of external memory designed to provide an RL agent with human-inspired episodic memory, and incorporating a novel improvement over cosine similarity for content-based addressing.
• We empirically demonstrate that MEM successfully enables an RL agent to solve the Concentration task by remembering the identities and locations of cards it has seen only once.
• We show that baseline RL agents (LSTM-based and DNC-based) fail to solve the task.
The optimal mean performance attainable by an agent with perfect episodic memory is shown at the top of FIG2 BID27 .
Only the MEM agent learned a near-optimal policy.
The baseline LSTM-A3C agent's results were overlapped with those of its colorblind version 3b, demonstrating that the LSTM-A3C agent never learned to remember the locations of the cards it saw.
The Sonnet LSTM agent performed consistently better than the TensorFlow LSTM agent 3b, though not by a large amount.
Both implementations claim to be based on BID30 , so the difference in behavior is unexpected.Despite being unable to see the card faces, the colorblind MEM agent 3b still performed a bit better than any of the LSTM agents, indicating that it found some other strategy (not based on card faces) to derive a small amount of gain from its external memory.Even after dozens of trial settings over a wide range of hyper-parameters, the DNC agent performed only very slightly better than the LSTM-A3C agent, and noticeably worse than its own recurrent controller alone, the Sonnet LSTM agent.
We did not attempt curriculum learning.
Appendix A presents a detailed investigation into the causes of DNC's poor performance on this type of task.Performing ablation studies on the MEM architecture, we found that using the mask (instead of cosine similarity) and Euclidean distance squared were both essential to scoring above the LSTM-A3C baseline.
Adaptation of the sharpness term turned out to be essential for stable results.
On the other hand, the similarity strength feature provided no measurable benefit. As intended, MEM's most positive learned mask weights were the ones for the six card face dimensions.
At convergence of the best MEM model, 83% of the mask's mass was concentrated on those six elements, even though they constitute only 11% of the observation vector's 54 elements.
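To make the masked addressing scheme concrete, a simplified read operation in the spirit of the description above might look as follows. This is our own sketch: the exact parameterization of the mask, the sharpness adaptation, and how the retrieved memory enters the agent are not reproduced from the paper.

```python
import numpy as np

def masked_read(memory, key, mask, sharpness=1.0):
    """Content-based read over stored observations using a learned element-wise mask.

    memory: (num_slots, dim) stored observations; key: (dim,) read key;
    mask: (dim,) non-negative weights selecting which observation elements matter.
    Scores use mask-weighted squared Euclidean distance instead of cosine similarity.
    """
    d2 = np.sum(mask * (memory - key) ** 2, axis=1)   # (num_slots,)
    logits = -sharpness * d2
    w = np.exp(logits - logits.max())
    w = w / w.sum()                                   # attention over memory slots
    return w @ memory                                 # retrieved (blended) observation
```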
We have defined episodic memory for RL agents, provided an unambiguous test for evaluating it, and presented an implementation of episodic memory that corrects a problem with current content-based addressing methods.
Our results show that this MEM architecture, designed to emulate specific aspects of human episodic memory, is able to use that memory effectively in the Concentration task by remembering the locations of cards it has seen only once before.
This is in sharp contrast to the other agents tested, which never learned to remember card locations.
The code to replicate this work will be made public prior to the conference.MEM represents the initial step on a path towards more robust and powerful episodic memory for RL agents.
We plan to extend MEM in several significant ways:
1. Making the mask weights context-sensitive so that read key vectors can quickly shift to cover different aspects of experience depending on the situation.
2. Expanding the memory dimensions beyond the current observation to also include recurrent network activations, so that an agent's internal thought vectors can themselves be stored as experiences for later recall, and can be used as read keys.
3. Rendering memory deletion a function of memory importance, so that certain experiences can be remembered longer than others.
4. Introducing an additional mask over dimensions for write operations, so that memories need not cover all available dimensions.The human mind offers a remote, shining existence proof of general intelligence still beyond our reach.
Despite the distance, it lights our path, and grows brighter with each step we take toward it. | Implementing and evaluating episodic memory for RL. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:741 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Parameters are one of the most critical components of machine learning models.
As datasets and learning domains change, it is often necessary and time-consuming to re-learn entire models.
Rather than re-learning the parameters from scratch, replacing learning with optimization, we propose a framework building upon the theory of optimal transport to adapt model parameters by discovering correspondences between models and data, significantly amortizing the training cost.
We demonstrate our idea on the challenging problem of creating probabilistic spatial representations for autonomous robots.
Although recent mapping techniques have facilitated robust occupancy mapping, learning all spatially-diverse parameters in such approximate Bayesian models demands considerable computational time, discouraging their use in real-world robotic mapping.
Considering the fact that the geometric features a robot would observe with its sensors are similar across various environments, in this paper, we demonstrate how to re-use parameters and hyperparameters learned in different domains.
This adaptation is computationally more efficient than variational inference and Monte Carlo techniques.
A series of experiments conducted on realistic settings verified the possibility of transferring thousands of such parameters with a negligible time and memory cost, enabling large-scale mapping in urban environments.
The quintessential paradigm in the machine learning pipeline consists of the stages of data acquisition and inference of the given data.
As data become plentiful, or as one's problem set becomes more diverse over time, it is common to learn new models tailored to the new data or problem.
Contrasting this conventional modeling archetype, we argue that it is often redundant to perform inference and re-learn parameters from scratch.
Such model adaptation procedures are indispensable in application domains such as robotics in which the operating environments change continuously.
For instance, if the model is represented as a Bayesian model, its distribution should be redetermined regularly to adjust for changes in new data.
In this paper, we focus on significantly improving the training time of building Bayesian occupancy maps such as automorphing Bayesian Hilbert maps (ABHMs) Senanayake et al. (2018) by transferring model parameters associated with a set of source datasets to a target dataset in a zero-shot fashion Isele et al. (2016) .
Despite having attractive theoretical properties and being robust, the main reason that hinders models such as ABHM being used in real-world settings is the run-time cost of learning thousands of parameters (main parameters and hyperparameters).
Moreover, these parameters not only vary across different places in the same environment, but also change over time.
We demonstrate domain adaptation of "geometry-dependent spatial features" of the ABHM model from a pool of source domains to the current target domain.
This is efficiently done using the theory of Optimal Transport Arjovsky et al. (2017) .
Since the proposed approach completely bypasses explicitly learning parameters of the Bayesian model using domain adaptation, this process can be thought of as "replacing parameter learning with domain adaptation."
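As a generic illustration of the optimal-transport machinery involved (not the paper's specific algorithm, whose cost function, marginals, and feature definitions are not detailed here), an entropy-regularized coupling between source and target feature sets can be computed with a few Sinkhorn iterations, and parameters can then be carried across by barycentric projection:

```python
import numpy as np

def sinkhorn_plan(cost, reg=0.05, n_iter=200):
    """Entropy-regularised OT plan between two point sets with uniform marginals."""
    n, m = cost.shape
    a, b = np.ones(n) / n, np.ones(m) / m
    K = np.exp(-cost / reg)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]        # (n, m) transport plan

def transfer_parameters(plan, source_params):
    """Barycentric mapping of per-location source parameters onto target locations."""
    weights = plan / plan.sum(axis=0, keepdims=True)   # normalise columns
    return weights.T @ source_params                   # (m, param_dim)
```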
The notation given in Table 1 will be used throughout the rest of the paper. | We present a method of adapting hyperparameters of probabilistic models using optimal transport with applications in robotics | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:742 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
An algorithm is introduced for learning a predictive state representation with off-policy temporal difference (TD) learning that is then used to learn to steer a vehicle with reinforcement learning.
There are three components being learned simultaneously: (1) the off-policy predictions as a compact representation of state, (2) the behavior policy distribution for estimating the off-policy predictions, and (3) the deterministic policy gradient for learning to act.
A behavior policy discriminator is learned and used for estimating the important sampling ratios needed to learn the predictive representation off-policy with general value functions (GVFs).
A linear deterministic policy gradient method is used to train the agent with only the predictive representations while the predictions are being learned.
All three components are combined, demonstrated and evaluated on the problem of steering the vehicle from images in the TORCS racing simulator environment.
Steering from only images is a challenging problem where evaluation is completed on a held-out set of tracks that were never seen during training in order to measure the generalization of the predictions and controller.
Experiments show the proposed method is able to steer smoothly and navigate many but not all of the tracks available in TORCS with performance that exceeds DDPG using only images as input and approaches the performance of an ideal non-vision based kinematics model.
Predicting the future is an important topic in machine learning and is believed to be an important part of how humans process and interact with the world, cf Clark (2013) .
Study of the brain shows that it is highly predictive of future events and outcomes.
Despite these advances, there is still much work needed to bridge the worlds of predictive learning and control.
Most predictive control approaches learn either a forward model or a backward model Lesort et al. (2018) ; however, these next-step models suffer from compounding errors Sutton (1988) .
This paper introduces a predictive control architecture using one kind of off-policy predictive learning, called general value functions (GVFs) White (2015) ; Modayil et al. (2012) ; Schaul & Ring (2013) , that learns to predict the relevant aspects of the environment, decided by an expert, from raw sensor data such as pixel data captured from a camera.
GVFs answer the predictive question, "if I follow policy τ , how much total cumulant will I see in the future?"
The value of the GVF framework is not yet fully understood and realized despite the connections to neuroscience; but some early work has investigated its advantages for predictive representations and found that the representations are compact and general Schaul & Ring (2013) .
An objective of this research is to better understand the value that GVFs have to offer in real-world applications.
Our work is based on the hypothesis that predictive representations are good for generalization Rafols et al. (2005) Schaul & Ring (2013) .
We are motivated by the belief that GVFs, like RL, could allow for behavior that is anticipative of future consequences rather than reactive to the current state.
General value functions (GVFs) are an understudied topic of interest in AI research fields and applications.
There is a considerable focus on understanding how to learn these predictions but limited efforts on understanding how to use them in real applications.
This is unfortunate, as to date, research into applications of GVFs suggests they have potential in real-world robotics and its applications Günther et al. (2016) ; Pilarski et al. (2013) ; White (2015) ; Modayil et al. (2012) .
However, several elements have been missing to apply these predictions to a larger scale problem such as autonomous driving: (1) how to characterize the behavior policy to achieve off-policy learning when it is unknown, (2) what predictions are useful, and (3) how to use those predictions to control the vehicle.
Our objective is two-fold: (1) introduce a novel architecture combining elements of predictive learning, adversarial learning and reinforcement learning, and (2) demonstrate how this architecture can be used to steer a vehicle in a racing simulator.
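To fix ideas before the architecture is described, a single off-policy TD(0) update of one linear GVF can be sketched as follows. This is a generic illustration of the predictive question ("if I follow policy τ, how much total cumulant will I see?") combined with importance sampling; the paper's deep, jointly learned version and its discriminator-based estimate of the behaviour policy are not reproduced here.

```python
import numpy as np

def gvf_td_update(w, phi, phi_next, cumulant, gamma, rho, alpha=0.01):
    """One off-policy TD(0) step for a linear general value function.

    phi, phi_next: feature vectors of the current and next observation.
    cumulant, gamma: define the predictive question asked of policy tau.
    rho: importance sampling ratio tau(a|s) / b(a|s), where b(a|s) would come
    from the learned behaviour-policy estimate (density-ratio trick).
    """
    delta = cumulant + gamma * (phi_next @ w) - (phi @ w)   # TD error
    return w + alpha * rho * delta * phi                    # corrected gradient step
```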
A method of learning a predictive representation off-policy is presented where the behavior policy distribution is estimated via an adversarial method employing the density ratio trick.
It is demonstrated that deep off-policy predictions can be learned with a deep behavior policy estimation to predict future lane centeredness and road angles from images.
The predictive representation is learned with linear deterministic policy gradient.
All of these components are combined together in a framework called GVF-DPG and learned simultaneously on the challenging problem of steering a vehicle in TORCS from only images.
The results show that the GVF-DPG is able to steer smoothly with less change in action and achieve better performance than DDPG from only images and similar performance to the kinematics model in several but not all of the test tracks.
This work is also a demonstration that we can learn off-policy predictions, characterize the behavior policy and learn the controller all at the same time despite the challenges of the behavior policy evolving with the agent and the predictive state representation changing over time.
Our work demonstrates that a learned prediction-based vision-only steering controller could potentially be viable with more work on improving the generalizability of the off-policy predictions.
This work supports the predictive state representation hypothesis in Rafols et al. (2005) that deep predictions can improve the generalization of RL to new road environments when using only images as input.
For future work, we hope to study how to learn the question for the predictive state representation: τ , γ, and c.
Moreover, because the behavior policy is unknown and estimated, our results suggest that collecting real-world human driving to train predictions off-policy without the need for a simulator could be a viable approach to steering a vehicle from images.
This is potentially advantageous since the human driver can explore the road safely. | An algorithm to learn a predictive state representation with general value functions and off-policy learning is applied to the problem of vision-based steering in autonomous driving. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:743 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
For numerous domains, including for instance earth observation, medical imaging and astrophysics, available image and signal datasets often involve irregular space-time sampling patterns and large missing data rates.
These sampling properties are a critical issue for the application of state-of-the-art learning-based schemes (e.g., auto-encoders, CNNs, ...) if one is to fully benefit from the available large-scale observations and reach breakthroughs in the reconstruction and identification of processes of interest.
In this paper, we address the end-to-end learning of representations of signals, images and image sequences from irregularly-sampled data, i.e. when the training data involve missing data.
From an analogy to Bayesian formulation, we consider energy-based representations.
Two energy forms are investigated: one derived from auto-encoders and one relating to Gibbs energies.
The learning stage of these energy-based representations (or priors) involve a joint interpolation issue, which resorts to solving an energy minimization problem under observation constraints.
Using a neural-network-based implementation of the considered energy forms, we can state an end-to-end learning scheme from irregularly-sampled data.
We demonstrate the relevance of the proposed representations for different case-studies: namely, multivariate time series, 2D images and image sequences.
In numerous application domains, the available observation datasets do not involve gap-free and regularly-gridded signals or images.
The irregular-sampling may result both from the characteristics of the sensors and sampling strategy, e.g. considered orbits and swaths in spaceborne earth observation and astrophysics, sampling schemes in medical imaging, as well as environmental conditions which may affect the sensor, e.g. atmospheric conditions and clouds for earth observation.
A rich literature exists on interpolation for irregularly-sampled signals and images (also referred to as inpainting in image processing (4)).
A classic framework states the interpolation issue as the minimisation of an energy, which may be interpreted in a Bayesian framework.
A variety of energy forms have been considered, including Markovian priors (12), patch-based priors (20), gradient norms in variational and/or PDE-based formulations (4), Gaussian priors, as well as dynamical priors in fluid dynamics (3).
The latter relates to optimal interpolation and kriging (8), which is among the state-of-the-art and operational schemes in geoscience (10).
Optimal schemes classically involve the inference of the considered covariance-based priors from irregularly-sampled data.
This may however be at the expense of Gaussianity and linearity assumptions, which do not often apply for real signals and images.
For the other types of energy forms, their parameterizations are generally set a priori and not learnt from the data.
Regarding more particularly data-driven and learning-based approaches, most previous works (2; 11; 20) have addressed the learning of interpolation schemes under the assumption that a representative gap-free dataset is available.
This gap-free dataset may be the image itself (9; 20; 18) .
For numerous application domains, as mentioned above, this assumption cannot be fulfilled.
Regarding recent advances in learning-based schemes, a variety of deep learning models, e.g. (7; 16; 24; 23) , have been proposed.
Most of these works focus on learning an interpolator.
One may however expect to learn not only an interpolator but also some representation of considered data, which may be of interest for other applications.
In this respect, RBM models (Restricted Boltzmann Machines) are a relevant example.
In this paper, we have addressed the learning of energy-based representations of signals and images from observation datasets involving missing data (with possibly very large missing data rates).
Using the proposed architectures, we can jointly learn relevant representations of signals and images while jointly providing the associated interpolation schemes.
Our experiments stress that learning representations from gap-free data may lead to representations poorly adapted to the analysis of data with large missing data areas.
We have also introduced Gibbs priors embedded in a neural network architecture.
Relying on local characteristics rather than global ones as in AE schemes, these priors involve a much lower complexity.
Our experiments support their relevance for addressing inverse problems in signal and image analysis.
Future work may further explore multi-scale extensions of the proposed schemes along with couplings between global and local energy representations and hybrid minimization schemes combining both gradient-based and fixed-point strategies in the considered end-to-end formulation. | We address the end-to-end learning of energy-based representations for signal and image observation dataset with irregular sampling patterns. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:744 |
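The joint interpolation step described in the entry above amounts to minimising an energy under observation constraints. The sketch below illustrates this with a hand-written quadratic smoothness energy and plain gradient descent on a 1-D signal with missing samples; the energy form, mask, and step size are illustrative stand-ins for the learned, neural-network-parameterised energies the paper actually considers.

    import numpy as np

    def interpolate(obs, mask, n_steps=500, lr=0.1, lam=1.0):
        """Fill missing entries of a 1-D signal by minimising
        E(x) = sum_t (x[t+1] - x[t])^2 + lam * sum_observed (x - obs)^2.
        `mask` is 1 where `obs` is available and 0 where data is missing."""
        x = np.where(mask > 0, obs, 0.0).astype(float)
        for _ in range(n_steps):
            diff = np.diff(x)
            grad_smooth = np.zeros_like(x)
            grad_smooth[:-1] -= 2 * diff          # d/dx[t] of (x[t+1]-x[t])^2
            grad_smooth[1:] += 2 * diff           # d/dx[t] of (x[t]-x[t-1])^2
            grad_obs = 2 * lam * mask * (x - obs) # observation constraint term
            x -= lr * (grad_smooth + grad_obs)
        return x

    t = np.linspace(0, 2 * np.pi, 100)
    truth = np.sin(t)
    mask = (np.random.rand(100) < 0.3).astype(float)   # roughly 70% missing
    obs = truth * mask
    x_hat = interpolate(obs, mask)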
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Credit assignment in Meta-reinforcement learning (Meta-RL) is still poorly understood.
Existing methods either neglect credit assignment to pre-adaptation behavior or implement it naively.
This leads to poor sample-efficiency during meta-training as well as ineffective task identification strategies.
This paper provides a theoretical analysis of credit assignment in gradient-based Meta-RL.
Building on the gained insights we develop a novel meta-learning algorithm that overcomes both the issue of poor credit assignment and previous difficulties in estimating meta-policy gradients.
By controlling the statistical distance of both pre-adaptation and adapted policies during meta-policy search, the proposed algorithm endows efficient and stable meta-learning.
Our approach leads to superior pre-adaptation policy behavior and consistently outperforms previous Meta-RL algorithms in sample-efficiency, wall-clock time, and asymptotic performance.
A remarkable trait of human intelligence is the ability to adapt to new situations in the face of limited experience.
In contrast, our most successful artificial agents struggle in such scenarios.
While achieving impressive results, they suffer from high sample complexity in learning even a single task, fail to generalize to new situations, and require large amounts of additional data to successfully adapt to new environments.
Meta-learning addresses these shortcomings by learning how to learn.
Its objective is to learn an algorithm that allows the artificial agent to succeed in an unseen task when only limited experience is available, aiming to achieve the same fast adaptation that humans possess (Schmidhuber, 1987; Thrun & Pratt, 1998).
Despite recent progress, deep reinforcement learning (RL) still relies heavily on hand-crafted features and reward functions as well as engineered problem-specific inductive bias.
Meta-RL aims to forego such reliance by acquiring inductive bias in a data-driven manner.
Recent work proves this approach to be promising, demonstrating that Meta-RL allows agents to obtain a diverse set of skills, attain better exploration strategies, and learn faster through meta-learned dynamics models or synthetic returns (BID8; Xu et al., 2018; BID14; Saemundsson et al., 2018).
Meta-RL is a multi-stage process in which the agent, after a few sampled environment interactions, adapts its behavior to the given task.
Despite its wide utilization, little work has been done to promote theoretical understanding of this process, leaving Meta-RL grounded on unstable foundations.
Although the behavior prior to the adaptation step is instrumental for task identification, the interplay between pre-adaptation sampling and posterior performance of the policy remains poorly understood.
In fact, prior work in gradient-based Meta-RL has either entirely neglected credit assignment to the pre-update distribution (BID9) or implemented such credit assignment in a naive way (BID10; Stadie et al., 2018).
To our knowledge, we provide the first formal in-depth analysis of credit assignment w.r.t. pre-adaptation sampling distribution in Meta-RL.
Based on our findings, we develop a novel Meta-RL algorithm.
First, we analyze two distinct methods for assigning credit to pre-adaptation behavior.
The meta-policy gradient formulation of MAML was first introduced by BID9; we refer to it as formulation I, which can be expressed as maximizing the objective
J_I(θ) = E_{T∼ρ(T)} [ E_{τ′∼P_T(τ′|θ′)} [R(τ′)] ]  with  θ′ = U(θ, T),
where U denotes the update function, which depends on the task T and performs one VPG step towards maximizing the performance of the policy in T.
For notational brevity and conciseness we assume a single policy gradient adaptation step.
Nonetheless, all presented concepts can easily be extended to multiple adaptation steps.
Later work proposes a slightly different notion of gradient-based Meta-RL, also known as E-MAML, that attempts to circumvent issues with the meta-gradient estimation in MAML (BID10; Stadie et al., 2018):
J_II(θ) = E_{T∼ρ(T)} [ E_{τ_1:N∼P_T(τ|θ)} [ E_{τ′∼P_T(τ′|θ′)} [R(τ′)] ] ]  with  θ′ := U(θ, τ_1:N).
Formulation II views U as a deterministic function that depends on N sampled trajectories from a specific task.
In contrast to formulation I, the expectation over pre-update trajectories τ is applied outside of the update function.
Throughout this paper we refer to π_θ as the pre-update policy, and π_θ′ as the post-update policy.
In this paper we propose a novel Meta-RL algorithm, proximal meta-policy search (ProMP), which fully optimizes for the pre-update sampling distribution leading to effective task identification.
Our method is the result of a theoretical analysis of gradient-based Meta-RL formulations, based on which we develop the low variance curvature (LVC) surrogate objective that produces low variance meta-policy gradient estimates.
Experimental results demonstrate that our approach surpasses previous meta-reinforcement learning approaches in a diverse set of continuous control tasks.
Finally, we underpin our theoretical contributions with illustrative examples which further justify the soundness and effectiveness of our method. | A novel and theoretically grounded meta-reinforcement learning algorithm | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:745 |
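In both formulations quoted in the entry above, the update function U is a single vanilla policy gradient (VPG) step computed from trajectories sampled with the pre-update policy. The sketch below spells out such a step for a toy linear-Gaussian policy; the REINFORCE-style estimator, return-to-go weighting, and step size are illustrative assumptions rather than the paper's implementation.

    import numpy as np

    def vpg_update(theta, trajectories, alpha=0.01):
        """U(theta, tau_1:N): one vanilla policy-gradient step.

        Toy policy: a ~ N(theta . s, 1), so grad log pi(a|s) = (a - theta . s) * s.
        Each trajectory is a list of (state, action, return_to_go) triples.
        """
        grad = np.zeros_like(theta)
        n = 0
        for tau in trajectories:
            for s, a, ret in tau:
                grad += (a - theta @ s) * s * ret
                n += 1
        return theta + alpha * grad / max(n, 1)

    theta = np.zeros(3)
    trajectories = [[(np.array([1.0, 0.0, 0.0]), 0.5, 1.0)]]
    theta_adapted = vpg_update(theta, trajectories)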
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
When training a neural network for a desired task, one may prefer to adapt a pretrained network rather than start with a randomly initialized one -- due to lacking enough training data, performing lifelong learning where the system has to learn a new task while being previously trained for other tasks, or wishing to encode priors in the network via preset weights.
The most commonly employed approaches for network adaptation are fine-tuning and using the pre-trained network as a fixed feature extractor, among others.
In this paper we propose a straightforward alternative: Side-Tuning.
Side-tuning adapts a pretrained network by training a lightweight "side" network that is fused with the (unchanged) pre-trained network using a simple additive process.
This simple method works as well as or better than existing solutions while it resolves some of the basic issues with fine-tuning, fixed features, and several other common baselines.
In particular, side-tuning is less prone to overfitting when little training data is available, yields better results than using a fixed feature extractor, and doesn't suffer from catastrophic forgetting in lifelong learning.
We demonstrate the performance of side-tuning under a diverse set of scenarios, including lifelong learning (iCIFAR, Taskonomy), reinforcement learning, imitation learning (visual navigation in Habitat), NLP question-answering (SQuAD v2), and single-task transfer learning (Taskonomy), with consistently promising results.
The goal of side-tuning is to capitalize on a pretrained model to better learn one or more novel tasks.
By design, side-tuning does so without degrading performance of the base model.
The framework is straightforward: it assumes access to the frozen base model B : X → Y that maps inputs into some representation space that is shared between the base task and the current (target) task.
This representation space is flexible and could either be a latent space (e.g. in R N ) or actual model predictions.
Side-tuning then learns a side model S : X → Y, so that the representations for the target task are R(x) = B(x) ⊕ S(x) for some combining operation ⊕.
We use a learned alpha-blending, a ⊕ b = αa + (1 − α)b, for this purpose (other options are discussed in Section 3.0.3).
Certain pre-set curricula of α reduce the side-tuning framework to: fine-tuning, feature extraction, and stage-wise training (see Fig. 3, right).
Hence those can be viewed as special cases of the general side-tuning framework.
Also, other curricula suggest (e.g.) a maximum a posteriori estimator that integrates the B(x) prior with the evidence from S(x).
Side-tuning is an example of an additive learning approach as it adds (strategically placed) parameters for each new task.
Fixed feature extraction would be a simple example of an additive approach with zero new parameters.
As a result, fixed features do not adapt the base network over the lifetime of the agent.
A number of existing approaches address this by learning new parameters (the number of which scales with the size of the base network) for each new task.
Unlike these approaches, side-tuning places no constraints on the structure of the side network, allowing the parameters to be strategically allocated.
In particular, side-tuning can use tiny networks when the base requires only minor updates.
By adding fewer parameters per task, side-tuning can learn more tasks before the model grows large enough to require parameter consolidation.
These approaches stand in contrast to most existing methods for incremental learning, which do not increase the number of parameters over time and instead gradually fill up the capacity of a large base model.
For example, fine-tuning updates all the parameters.
A large body of constraint-based methods focus on how to regularize these updates in order to prevent inter-task interference (Cheung et al., 2019) .
Side-tuning does not require such regularization since the additive structure means inter-task interference is not possible.
We compare side-tuning to alternative approaches on both the iCIFAR and Taskonomy datasets.
iCIFAR consists of ten distinct 10-class image classification problems.
Taskonomy covers multiple tasks of varied complexity from across computer vision (surface normal and depth estimation, edge detection, image 1000-way classification, etc.).
On these datasets, side-tuning uses side networks that are much smaller than the base.
Consequently, even without consolidation, side-tuning uses fewer learnable parameters than the alternative methods.
This remarkably simple approach deals with the key challenges of incremental learning.
Namely, it does not suffer from either:
• Catastrophic forgetting: which is the tendency of a network to abruptly lose previously learned knowledge upon learning new information.
We show this in Section 4.2.1.
• Rigidity: where networks become increasingly unable to adapt to new problems as they accrue constraints from previous problems.
We explore this in Section 4.2.2.
Side-tuning avoids these problems while remaining highly performant, which we demonstrate in Section 4.2.3.
We have introduced the side-tuning framework, a simple yet effective approach for additive learning.
Since it does not suffer from catastrophic forgetting or rigidity, it is naturally suited to incremental learning.
The theoretical advantages are reflected in empirical results, and side-tuning outperforms existing approaches in challenging contexts and with state-of-the-art neural networks.
We further demonstrated that the approach is effective in multiple domains and with various network types. | Side-tuning adapts a pre-trained network by training a lightweight "side" network that is fused with the (unchanged) pre-trained network using a simple additive process. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:746 |
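The additive fusion in the entry above, R(x) = αB(x) + (1 − α)S(x) with the base B frozen, translates almost directly into code. The sketch below assumes PyTorch and arbitrary placeholder base and side networks; the shapes, the sigmoid parameterisation of α, and its initial value are illustrative choices rather than the paper's exact configuration.

    import torch
    import torch.nn as nn

    class SideTune(nn.Module):
        """Alpha-blended fusion of a frozen base network and a small side network."""

        def __init__(self, base: nn.Module, side: nn.Module):
            super().__init__()
            self.base = base
            for p in self.base.parameters():          # keep the base unchanged
                p.requires_grad = False
            self.side = side
            self.alpha_logit = nn.Parameter(torch.zeros(1))  # learned blending weight

        def forward(self, x):
            alpha = torch.sigmoid(self.alpha_logit)
            with torch.no_grad():
                b = self.base(x)
            return alpha * b + (1.0 - alpha) * self.side(x)

    base = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
    side = nn.Linear(32, 8)                           # lightweight side model
    model = SideTune(base, side)
    out = model(torch.randn(4, 32))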
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we present a technique for generating artificial datasets that retain statistical properties of the real data while providing differential privacy guarantees with respect to this data.
We include a Gaussian noise layer in the discriminator of a generative adversarial network to make the output and the gradients differentially private with respect to the training data, and then use the generator component to synthesise privacy-preserving artificial dataset.
Our experiments show that under a reasonably small privacy budget we are able to generate data of high quality and successfully train machine learning models on this artificial data.
Following recent advancements in deep learning BID28 BID12 BID30 , more and more people and companies are interested in putting their data in use as they see that machine learning is able to generate a wide range of benefits, including financial, social, medical, security, and so on.
At the same time, however, such models are often able to capture a fine level of detail in training data, potentially compromising the privacy of individuals whose features sharply differ from others.
This problem is partially mitigated by the use of regularisation techniques that "smooth out" outstanding details and avoid overfitting, but it does not give any theoretical privacy guarantees.
Recent research by BID8 suggests that even without access to internal model parameters, by using hill climbing on output probabilities of a neural network, it is possible to recover (up to a certain degree) individual faces from a training set.
The latter result is especially disturbing knowing that deep learning models are becoming an integral part of our lives, making their way to phones, smart watches, cars, and appliances.
And since these models are often trained on customers' data, such training-set recovery techniques will endanger privacy even without access to the manufacturer's servers where these models are being trained.
In order to protect privacy while still benefiting from the use of statistics and machine learning, a number of techniques for data anonymisation have been developed over the years, including k-anonymity BID29, l-diversity BID18, t-closeness BID17, and differential privacy BID2; BID3; BID7.
The latter has been recognised as a strong standard and is widely accepted by the research community.
We study the task of publishing datasets in a differentially private manner.
In particular, we are interested in solving two problems.
First, we want to be able to benefit from the use of machine learning by third parties while protecting sensitive information of individuals in our dataset.
Second, we want to be sure that even if adversaries get access to the third-party model trained on our data, they would not be able to recover private information.
An additional challenge is to be able to publish an entire dataset, as opposed to being required to use a query interface like in a typical differentially private framework.
In this paper, we propose a simple solution to this problem.
The main idea of our approach is to use generative adversarial networks (GANs) introduced in BID9 , trained with addition of Gaussian noise in the embedding space, to create artificial datasets that follow the same distribution as the real data while providing differential privacy guarantees.
This method has a number of advantages over the methods proposed earlier.
First of all, this solution is simple to implement, e.g. it does not require training ensembles of models on disjoint data.
Second, it can be done on a user side, and not on the side of the machine learning service provider, which eliminates the necessity of trusting this service provider or implementing privacy-preserving models locally.
Third, similarly to prior work, privacy cannot be compromised even if the entire trained model is accessible to an adversary.
Our contributions in this paper are the following:
• we propose a novel mechanism for non-interactive differentially private data release, and to the best of our knowledge this is the first practical solution for complex real-world data;
• we introduce a new technique of preserving privacy in neural networks via adding noise in the forward pass during training;
• we show that this technique guarantees differential privacy for both the outputs and the learned weights of the network;
• we demonstrate that we are able to achieve high accuracy in learning tasks while maintaining a reasonable (single-digit) privacy budget.
The remainder of the paper is structured as follows.
In Section 2, we give an overview of related work.
Section 3 contains necessary background on differential privacy and generative adversarial networks.
In Section 4, we describe our approach and provide its theoretical analysis and some practical aspects.
Experimental results and implementation details are presented in Section 5, and Section 6 concludes the paper.
The theorem proofs and additional details can be found in the Appendix.
Using the experimental setup and implementation described above, we were able to get results close to BID23 although not quite matching their accuracy for the same privacy bounds on SVHN.
A performance gap is expected due to more generic nature of our method and a simpler privacy-preserving procedure.
Overall, we managed to achieve 98.19% accuracy on MNIST and 83.49% accuracy on SVHN while maintaining approximately (3.45, 10⁻⁵)- and (8, 10⁻⁶)-differential privacy.
These numbers, along with the corresponding results of BID23 , can be found in Table 1 .
It is also worth noting that we did not perform rigorous hyper-parameter tuning due to limited computational resources; even better accuracy could have been achieved had we done that.
Additionally, we trained a simple logistic regression model on MNIST, and obtained 88.96% accuracy on privately generated data compared to 92.58% on the original data, which confirms that any model can be used as a student.
Examples of real and generated privacy-preserving images for MNIST and SVHN data are depicted in FIG2.
It can be seen that generated images don't have the same contrast and dynamic range as real examples, which is not a problem in non-private GANs.
We attribute it to the lack of batch normalisation in the discriminator.
In addition to quantitative analysis of test errors and privacy bounds, we perform visual inspection of generated examples and corresponding nearest neighbours in real data.
FIG3 depicts a set of generated private examples and their nearest real counterparts.
We observe that while some generated images are very close to real examples they don't match exactly, differing either in shape, colour or surrounding digits.
Moreover, a lot of pairs come from entirely different classes.
We investigate the problem of non-interactive private data release with differential privacy guarantees.
We employ generative adversarial networks to produce artificial privacy-preserving datasets.
Contrary to existing privacy protection work in deep learning, this method allows to publish sanitised data and train any non-private models on it.
The choice of GANs as a generative model ensures scalability and makes the technique suitable for real-world data with complex structure.
Moreover, this method does not require running privacy tests on generated data before releasing it.
Additionally, we introduce a novel method for preserving privacy of training data specific to deep neural networks, based on adding noise in the embedding space during the forward pass.
It provides differential privacy guarantees and allows to construct privacy-preserving models in a simple and straightforward fashion, without modifying optimisation algorithms.In our experiments, we show that student models trained on artificial data can achieve high utility on MNIST dataset, while maintaining performance costs of added privacy and flexibility at acceptable levels on a more complicated SVHN data.
Adding privacy directly to the trained model still provides better accuracy, and therefore, one of the possible directions for future work is to improve the quality of generated data for given privacy bounds.
Extending presented technique and analysis to other types of deep neural networks provides another exciting opportunity for further research. | Train GANs with differential privacy to generate artificial privacy-preserving datasets. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:747 |
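A minimal version of the mechanism described in the entry above, a Gaussian noise layer placed in the discriminator's forward pass, is sketched below in PyTorch. The layer sizes and the noise scale σ are arbitrary stand-ins; a real implementation would still have to calibrate σ (and track the privacy budget) to obtain the stated (ε, δ) guarantees.

    import torch
    import torch.nn as nn

    class GaussianNoise(nn.Module):
        """Adds N(0, sigma^2) noise to its input during training."""
        def __init__(self, sigma=1.0):
            super().__init__()
            self.sigma = sigma

        def forward(self, x):
            if self.training:
                return x + self.sigma * torch.randn_like(x)
            return x

    discriminator = nn.Sequential(
        nn.Linear(784, 256), nn.LeakyReLU(0.2),
        GaussianNoise(sigma=1.0),          # noise injected in the embedding space
        nn.Linear(256, 1),
    )
    score = discriminator(torch.randn(8, 784))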
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper presents two methods to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network.
Unlike conventional studies that visualize image appearances corresponding to the network output or a neural activation from a global perspective, our research aims to clarify how a certain input unit (dimension) collaborates with other units (dimensions) to constitute inference patterns of the neural network and thus contribute to the network output.
The analysis of local contextual effects w.r.t. certain input units is of special values in real applications.
In particular, we used our methods to explain the gaming strategy of the alphaGo Zero model in experiments, and our method successfully disentangled the rationale of each move during the game.
Interpreting the decision-making logic hidden inside neural networks is an emerging research direction in recent years.
The visualization of neural networks and the extraction of pixel-level input-output correlations are two typical methodologies.
However, previous studies usually interpret the knowledge inside a pre-trained neural network from a global perspective.
For example, BID17; BID14; BID10 mined input units (dimensions or pixels) that the network output is sensitive to; BID2 visualized receptive fields of filters in intermediate layers; BID33; BID15; BID24; BID5; BID6; BID20 illustrated image appearances that maximized the score of the network output, a filter's response, or a certain activation unit in a feature map.
However, instead of visualizing the entire appearance that is responsible for a network output or an activation unit, we are more interested in the following questions.
• How does a local input unit contribute to the network output?
Here, we can vectorize the input of the network into a high-dimensional vector, and we treat each dimension as a specific "unit" without ambiguity.
As we know, a single input unit is usually not informative enough to make independent contributions to the network output.
Thus, we need to clarify which other input units the target input unit collaborates with to constitute inference patterns of the neural network, so as to pass information to high layers.
• Can we quantitatively measure the significance of the above contextual collaborations between the target input unit and its neighboring units?
Method: Therefore, given a pre-trained convolutional neural network (CNN), we propose to disentangle contextual effects w.r.t. certain input units.
As shown in Fig. 1, we design two methods to interpret contextual collaborations at different scales, which are agnostic to the structure of CNNs.
The first method estimates a rough region of contextual collaborations, i.e. clarifying whether the target input unit mainly collaborates with a few neighboring units or most units of the input.
This method distills knowledge from the pre-trained network into a mixture of local models (see Fig. 2), where each model encodes contextual collaborations within a specific input region to make predictions.
We hope that the knowledge-distillation strategy can help people determine quantitative contributions from different regions.
Figure 1: Explaining the alphaGo model (panels: extracting fine-grained contextual effects from a student net; a lattice within the Go board). Given the state of the Go board and the next move, we use the alphaGo model to explain the rationale of the move. We first estimate a rough region of contextual collaborations w.r.t. the current move by distilling knowledge from the value net to student nets that receive different regions of the Go board as inputs. Then, given a student net, we analyze fine-grained contextual collaborations within its region of the Go board. In this figure, we use a board state from a real Go game between humans for clarity.
Then, given a model for local collaborations, the second method further analyzes the significance of detailed collaborations between each pair of input units, when we use the local model to make predictions on an image.
In this paper, we have proposed two typical methods for quantitative analysis of contextual collaborations w.r.t. a certain input unit in the decision-making of a neural network.
Extracting fine-grained contextual collaborations to clarify the reason why and how an input unit passes its information to the network output is of significant values in specific applications, but it has not been well explored before, to the best of our knowledge.
In particular, we have applied our methods to the alphaGo Zero model, in order to explain the potential logic hidden inside the model that is automatically learned via self-play without human annotations.
Experiments have demonstrated the effectiveness of the proposed methods.
Note that there is no exact ground-truth for contextual collaborations of the Go game, and how to evaluate the quality of the extracted contextual collaborations is still an open problem.
As a pioneering study, we do not require the explanation to exactly fit human logic, because human logic is usually not the only correct explanation.
Instead, we just aim to visualize contextual collaborations without manually pushing visualization results towards human-interpretable concepts.
This is different from some previous studies of network visualization BID15 BID32 that added losses as the natural image prior, in order to obtain beautiful but biased visualization results.
In the future, we will continue to cooperate with professional Go players to further refine the algorithm to visualize more accurate knowledge inside the alphaGo Zero model. | This paper presents methods to disentangle and interpret contextual effects that are encoded in a deep neural network. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:748 |
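The first method in the entry above distils the pre-trained network into local models that each see only one region of the input. The sketch below shows a generic distillation step for one such student; the cropping scheme, network shapes, and MSE objective are assumptions made for illustration and not the paper's exact training setup.

    import torch
    import torch.nn as nn

    def distill_step(teacher, student, x, region, optimizer):
        """One distillation step: the student sees only `region` of the input
        and regresses the (frozen) teacher's output on the full input."""
        teacher.eval()
        with torch.no_grad():
            target = teacher(x)
        r0, r1 = region                        # a 1-D slice of the flattened input
        pred = student(x[:, r0:r1])
        loss = nn.functional.mse_loss(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    teacher = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
    student = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 1))
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    loss = distill_step(teacher, student, torch.randn(8, 64), (0, 16), opt)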
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The main goal of network pruning is imposing sparsity on the neural network by increasing the number of parameters with zero value, in order to reduce the architecture size and achieve computational speedup.
Recent advances in deep neural networks came with ideas to train deep architectures that have led to near-human accuracy for image recognition, object categorization and a wide variety of other applications LeCun et al. (2015) ; Maturana & Scherer (2015) ; Schmidhuber (2015) ; Mnih et al. (2013) ; .
One possible issue is that an over-parameterized network may make the architecture overcomplicated for the task at hand and it might be prone to over-fitting as well.
In addition to the model complexity, a huge amount of computational power is required to train such deep models due to having billions of weights.
Moreover, even if a huge model is trained, it cannot be effectively employed for model evaluation on low-power devices mainly due to having exhaustive matrix multiplications Courbariaux et al. (2015) .
So far, a wide variety of approaches have been proposed for creating more compact models.
Traditional methods include model compression Ba & Caruana (2014) ; , network pruning Han et al. (2015b) , sparsity-inducing regularizer Collins & Kohli (2014) , and low-rank approximation Jaderberg et al. (2014) ; Denton et al. (2014) ; Ioannou et al. (2015) ; Tai et al. (2015) .
The aforementioned methods usually induce random connection pruning, which yields few or no improvements in the computational cost.
On the other hand, structured pruning methods proposed to compress the architecture with significant computational efficiency Wen et al. (2016) ; Neklyudov et al. (2017) .
One of the critical subjects of interest in sparsity learning is to maintain the accuracy level.
In this paper, we discuss the intuitive reasons behind the accuracy drop and propose a method to prevent it.
The important step is to determine how the sparsity and accuracy are connected together in order to be able to propose a mechanism for controlling the sparsity to prevent severe accuracy drop.
In order to connect the sparsity to accuracy, intuitively, the accuracy drop is caused by imposing too much sparsity on the network in a way that the remaining elements cannot transfer enough information for optimal feature extraction for the desired task.
Another intuitive reasoning is to argue that the sparsity is not supervised with any attention towards the model performance during optimization.
For effective network pruning and feature selection, different approaches such as employing the group lasso for sparse structure learning Yuan & Lin (2006) , structure scale constraining Liu et al. (2015) , and structured regularizing deep architectures known as Structured Sparsity Learning (SSL) Wen et al. (2016) have previously been proposed.
Most of the previous research efforts do not address the direct effect of the proposed method on the combination of sparsity and accuracy drop.
One may claim that successful sparsity imposition with negligible accuracy drop might be due to initially over-parameterizing the network.
Moreover, there is no control mechanism to supervise the sparsity operation in connection with the model performance, which limits the available methods to intensive hyper-parameter tuning and multiple stages of training.
Our contribution.
We designed and employed a supervised attention mechanism for sparsity learning which: (1) performs model compression to obtain a smaller number of parameters, (2) prevents the accuracy drop through sparsity supervision, by paying attention to the network using variance regularization, and (3) is a generic mechanism that is not restricted by the sparsity penalty or any other limiting assumption regarding the network architecture.
To the best of our knowledge, this is the first research effort which proposes a supervised attention mechanism for sparsity learning.
Paper Organization.
At first, we provide a review of the related research efforts (Section 2).
Then, we introduce the attention mechanism which is aimed at forcing some sections of the network to be active (Section 3).
Later in Section 4, we propose an algorithm only for the attention supervision.
We complement our proposed method in Section 5, by providing experimental results for which we target the sparsity level, accuracy drop and robustness of the model to hyper-parameter tuning.
As will be observed, the proposed mechanism prevents the severe accuracy drop in higher levels of sparsity.
We will empirically show the robustness to exhaustive hyper-parameter tuning in addition to performance superiority of the proposed method in higher sparsity levels. | Proposing a novel method based on the guided attention to enforce the sparisty in deep neural networks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:749 |
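The excerpt above does not spell out the proposed objective, so the sketch below should not be read as the paper's method; it only illustrates the general recipe the text gestures at, namely a task loss augmented with a group-sparsity penalty plus a variance term over group activity, with both regularisation weights chosen arbitrarily.

    import torch

    def sparsity_with_attention_penalty(weight, lam_sparse=1e-3, lam_var=1e-3):
        """Group-lasso sparsity on the rows of a weight matrix plus a variance
        term over the group norms, to be added to the task loss during training."""
        group_norms = weight.norm(dim=1)          # one norm per output unit
        sparsity = group_norms.sum()              # group lasso
        variance = group_norms.var()              # spread of group activity
        return lam_sparse * sparsity + lam_var * variance

    w = torch.randn(64, 128, requires_grad=True)
    penalty = sparsity_with_attention_penalty(w)
    penalty.backward()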
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Holistically exploring the perceptual and neural representations underlying animal communication has traditionally been very difficult because of the complexity of the underlying signal.
We present here a novel set of techniques to project entire communicative repertoires into low dimensional spaces that can be systematically sampled from, exploring the relationship between perceptual representations, neural representations, and the latent representational spaces learned by machine learning algorithms.
We showcase this method in one ongoing experiment studying sequential and temporal maintenance of context in songbird neural and perceptual representations of syllables.
We further discuss how studying the neural mechanisms underlying the maintenance of the long-range information content present in birdsong can inform and be informed by machine sequence modeling. | We compare perceptual, neural, and modeled representations of animal communication using machine learning, behavior, and physiology. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:75 |
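The entry above describes projecting an entire communicative repertoire into a low-dimensional space that can be systematically sampled from. As a generic illustration (the paper's own features and projection method are not given in this excerpt), the sketch below uses scikit-learn PCA on stand-in spectrogram features and samples a grid in the resulting latent space.

    import numpy as np
    from sklearn.decomposition import PCA

    # stand-in for per-syllable spectrogram features: (n_syllables, n_features)
    features = np.random.rand(500, 1024)

    pca = PCA(n_components=2).fit(features)
    latent = pca.transform(features)            # 2-D map of the repertoire

    # systematically sample a grid of points in the latent space and map them
    # back to feature space, e.g. to synthesise stimuli for playback experiments
    grid = np.stack(np.meshgrid(np.linspace(-2, 2, 5),
                                np.linspace(-2, 2, 5)), -1).reshape(-1, 2)
    synthetic = pca.inverse_transform(grid)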
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Giving provable guarantees for learning neural networks is a core challenge of machine learning theory.
Most prior work gives parameter recovery guarantees for one hidden layer networks, however, the networks used in practice have multiple non-linear layers.
In this work, we show how we can strengthen such results to deeper networks -- we address the problem of uncovering the lowest layer in a deep neural network under the assumption that the lowest layer uses a high threshold before applying the activation, the upper network can be modeled as a well-behaved polynomial and the input distribution is gaussian.
Understanding the landscape of learning neural networks has been a major challege in machine learning.
Various works give parameter recovery guarantees for simple one-hidden-layer networks where the hidden layer applies a non-linear activation u after transforming the input x by a matrix W, and the upper layer is the weighted sum operator: thus f(x) = Σ_i a_i u(w_i^T x).
However, the networks used in practice have multiple non-linear layers and it is not clear how to extend these known techniques to deeper networks.We consider a multilayer neural network with the first layer activation u and the layers above represented by an unknown polynomial P such that it has non-zero non-linear components.
More precisely, the function f computed by the neural network is as follows: f_W(x) = P(u(w_1^T x), . . . , u(w_d^T x)).
We assume that the input x is generated from the standard Gaussian distribution and there is an underlying true network (parameterized by some unknown W*) from which the labels are generated.
Intuitively, a high threshold is looking for a high correlation of the input a with a direction w * i .
Thus even if the function f is applying a complex transform after the first layer, the identity of these high threshold directions may be preserved in the training data generated using f.
Learning with linear terms in P. Suppose P has a linear component; then we show that increasing the threshold t in the lowest layer is equivalent to amplifying the coefficients of the linear part.
Instead of dealing with the polynomial P, it turns out that we can roughly think of it as P(µX_1, ..., µX_d) where µ decreases exponentially in t (µ ≈ e^{−t²}).
As µ decreases, it has the effect of diminishing the non-linear terms more strongly, so that relatively the linear terms stand out.
Taking advantage of this effect we manage to show that if t exceeds a certain threshold, the non-linear terms drop in value enough so that the directions w_i can be learned by relatively simple methods.
We show that we can get close to the w_i by applying a simple variant of PCA.
While an application of PCA can be thought of as finding principal directions as the local maxima of max_{||z||=1} E[f(x)(z^T x)²], we instead consider a modified objective: DISPLAYFORM0
If W *
has a constant condition number then the local maxima can be used to recover directions that are transforms of w i . Theorem 1 (informal version
of Claim 2, Theorem 11). If t > c √ log d for large
enough constant c > 0 and P has linear terms with absolute value of coefficients at least 1/poly(d) and all coefficients at most O(1), we can recover the weight vector w i within error 1/poly(d) in time poly(d).These approximations of w i
obtained collectively can be further refined by looking at directions along which there is a high gradient in f ; for monotone functions we show how in this way we can recover w i exactly (or within any desired precision. Theorem 2. (informal version
of Theorem
5) Under the conditions of the
previous theorem, for monotone P, there exists a procedure to refine the angle to precision ε in time poly(1/ε, d) starting from an estimate that is 1/poly(d) close.
The above mentioned theorems hold for u being sign and ReLU.
When P is monotone and u
is the sign function, learning W is equivalent to learning a union of half spaces. We learn W * by learning sign
of P which is exactly the union of halfspaces w T i x = t. Thus our algorithm can also be
viewed as a polynomial time algorithm for learning a union of large number of half spaces that are far from the origin -to our knowledge this is the first polynomial time algorithm for this problem but with this extra requirement (see earlier work BID12 for an exponential time algorithm). Refer to Appendix B.6 for more
details.
Such linear components in P may easily be present: consider for example the case where P(X) = u(v^T X − b) where u is, say, the sigmoid or the logloss function.
The Taylor series of such functions has a linear component - note that since the linear term in the Taylor expansion of u(x) has coefficient u′(0), for the expansion of u(x − b) it will be u′(−b), which is Θ(e^{−b}) in the case of the sigmoid.
In fact one may even have a tower (deep network) of such sigmoid/logloss layers and the linear components will still be present - unless they are made to cancel out precisely; however, the coefficients will drop exponentially in the depth of the network and the threshold b.
Sample complexity with low thresholds and no explicit linear terms.
Even if the threshold is not large or P
is not monotone, we show that W * can be learned with a polynomial sample complexity (although possibly exponential time complexity) by finding directions that maximize the gradient of f . Theorem 3 (informal version of Corollary
1). If u is the sign function and w i 's are
orthogonal then in poly(1/ , d) samples one can determine W * within precision if the coefficient of the linear terms in P (µ(X 1 + 1), µ(X 2 + 1), µ(X 3 + 1), . . .) is least 1/poly(d)Learning without
explicit linear terms. We further provide evidence that P may not even need to have the linear terms -under some restricted cases (section 4), we show how such linear terms may implicitly arise even though they may be entirely apparently absent. For instance consider the case when P = X i X j that does not have any linear terms. Under certain additional assumptions we show that one can recover w i as long as the polynomial P (µ(X 1 + 1), µ(X 2 + 1), µ(X 3 + 1), ..) (where µ is e −t has linear terms components larger than the coefficients of the other terms). Note that this transform when applied to P automatically introduces linear terms. Note that as the threshold increases applying this transform on P has the effect of gathering linear components from all the different monomials in P and penalizing the higher degree monomials. We show that if W * is a sparse binary matrix then we can recover W * when activation u(a) = e ρa under certain assumptions about the structure of P . When we assume the coefficients are positive then these results extend for binary low l 1 -norm vectors without any threshold. Lastly, we show that for even activations (∀a, u(a) = u(−a)) under orthogonal weights, we can recover the weights with no threshold.Learning with high thresholds at deeper layers. We also point out how such high threshold layers could potentially facilitate learning at any depth, not just at the lowest layer. If there is any cut in the network that takes inputs X 1 , . . . , X d and if the upper layers operations can be modelled by a polynomial P , then assuming the inputs X i have some degree of independence we could use this to modularly learn the lower and upper parts of the network separately (Appendix E) Related Work. Various works have attempted to understand the learnability of simple neural networks. Despite known hardness results BID8 ; BID2 , there has been an array of positive results under various distributional assumptions on the input and the underlying noise in the label. Most of these works have focused on analyzing one hidden layer neural networks. A line of research has focused on understanding the dynamics of gradient descent on these networks for recovering the underlying parameters under gaussian input distribution Du et al. FIG1 ; BID10 ; BID16 ; BID14 ; BID17 . Another line of research borrows ideas from kernel methods and polynomial approximations to approximate the neural network by a linear function in a high dimensional space and subsequently learning the same BID15 ; BID8 ; BID7 a) . Tensor decomposition methods BID0 BID9 have also been applied to learning these simple architectures.The complexity of recovering arises from the highly non-convex nature of the loss function to be optimized. The main result we extend in this work is by BID5 . They learn the neural network by designing a loss function that allows a "well-behaved" landscape for optimization avoiding the complexity. However, much like most other results, it is unclear how to extend to deeper networks. The only known result for networks with more than one hidden layer is by BID7 . Combining kernel methods with isotonic regression, they show that they can provably learn networks with sigmoids in the first hidden layer and a single unit in the second hidden layer in polynomial time. We however model the above layer as a multivariate polynomial allowing for larger representation. 
Another work BID1 deals with learning a deep generative network when several random examples are generated in an unsupervised setting.
By looking at correlations between input coordinates they are able to recover the network layer by layer.
We use some of their ideas in section 4 when W is a sparse binary matrix.
Notation. We denote vectors and matrices in bold face.
|| · ||_p denotes the l_p-norm of a vector; || · || without subscript implies the l_2-norm.
For matrices, || · || denotes the spectral norm and || · ||_F denotes the Frobenius norm.
N(0, Σ) denotes the multivariate Gaussian distribution with mean 0 and covariance Σ.
For a scalar x we will use φ(x) to denote the p.d.f. of the univariate standard normal distribution with mean zero and variance 1.
For a vector x we will use φ(x) to denote the p.d.f. of the multivariate standard normal distribution with mean zero and variance 1 in each direction.
Φ denotes the c.d.f. of the standard Gaussian distribution; also define Φ_c = 1 − Φ.
Let h_i denote the ith normalized Hermite polynomial (Wikipedia contributors, 2018).
For a function f, let f̂_i denote the ith coefficient in the Hermite expansion of f, that is, f̂_i = E_{g∼N(0,1)}[f(g) h_i(g)].
For a given function f computed by the neural network, we assume that the training samples (x, y) are such that x ∈ R^n is distributed according to N(0, 1) and the label has no noise, that is, y = f(x).
Note: Most proofs are deferred to the Appendix due to lack of space.
In this work we show how activations in a deep network that have a high threshold make it easier to learn the lowest layer of the network.
We show that for a large class of functions that represent the upper layers, the lowest layer can be learned with high precision.
Even if the threshold is low we show that the sample complexity is polynomially bounded.
An interesting open direction is to apply these methods to learn all layers recursively.
It would also be interesting to obtain stronger results if the high thresholds are only present at a higher layer based on the intuition we discussed. | We provably recover the lowest layer in a deep neural network assuming that the lowest layer uses a "high threshold" activation and the above network is a "well-behaved" polynomial. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:750 |
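The quantity max_{||z||=1} E[f(x)(z^T x)²] that appears in the entry above can be estimated directly from samples, since it is the top eigenvalue of E[f(x) x x^T]. The sketch below does exactly that for a toy high-threshold network; the choice of f, the threshold, and the sample size are illustrative, and this is the plain objective rather than the modified one the paper optimises.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n, t = 10, 200_000, 2.0
    W = np.linalg.qr(rng.normal(size=(d, d)))[0]      # orthonormal "true" weights

    def f(x):
        # toy network: high-threshold ReLUs followed by a simple polynomial
        h = np.maximum(x @ W.T - t, 0.0)
        s = h.sum(axis=1)
        return s + 0.1 * s ** 2

    x = rng.normal(size=(n, d))
    y = f(x)
    M = (x * y[:, None]).T @ x / n                    # estimate of E[f(x) x x^T]
    eigvals, eigvecs = np.linalg.eigh(M)
    z = eigvecs[:, -1]                                # direction maximising the objective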
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Federated learning involves training and effectively combining machine learning models from distributed partitions of data (i.e., tasks) on edge devices, and can be naturally viewed as a multi-task learning problem.
While Federated Averaging (FedAvg) is the leading optimization method for training non-convex models in this setting, its behavior is not well understood in realistic federated settings when the devices/tasks are statistically heterogeneous, i.e., where each device collects data in a non-identical fashion.
In this work, we introduce a framework, called FedProx, to tackle statistical heterogeneity.
FedProx encompasses FedAvg as a special case.
We provide convergence guarantees for FedProx through a device dissimilarity assumption.
Our empirical evaluation validates our theoretical analysis and demonstrates the improved robustness and stability of FedProx for learning in heterogeneous networks.
Large networks of remote devices, such as phones, vehicles, and wearable sensors, generate a wealth of data each day.
Federated learning has emerged as an attractive paradigm to push the training of models in such networks to the edge (McMahan et al., 2017) .
In such settings, the goal is to jointly learn over distributed partitions of data/tasks, where statistical heterogeneity and systems constraints present significant challenges.
Optimization methods that allow for local updating and low participation have become the de facto solvers for federated learning (McMahan et al., 2017; Smith et al., 2017) .
These methods perform a variable number of local updates on a subset of devices to enable flexible and efficient communication.
Of current federated optimization methods, FedAvg (McMahan et al., 2017) has become state-of-the-art for non-convex federated learning.
However, FedAvg was not designed to tackle the statistical heterogeneity which is inherent in federated settings; namely, that data may be non-identically distributed across devices.
In realistic statistically heterogeneous settings, FedAvg has been shown to diverge empirically (McMahan et al., 2017, Sec 3) , and it also lacks theoretical convergence guarantees.
Indeed, recent works exploring convergence guarantees are limited to unrealistic scenarios, where (1) the data is either shared across devices or distributed in an IID (identically and independently distributed) manner, or (2) all devices are active at each communication round (Zhou & Cong, 2017; Stich, 2018; Wang & Joshi, 2018; Woodworth et al., 2018; Yu et al., 2018; Wang et al., 2018).
Due to the statistical heterogeneity of the data in federated networks, one can think of federated learning as a prime example of distributed multi-task learning, where each device corresponds to a task.
However, the more common goal of federated learning (and the focus of this work) involves training a single global model on distributed data collected for these various tasks.
We introduce and study a novel optimization framework in the federated setting.
Our focus on its convergence behavior in the face of statistically heterogeneous data is closely related to the classical multi-task setting, which involves jointly learning task-specific models from statistically heterogeneous data.
Contributions. We propose a federated optimization framework for heterogeneous networks, FedProx, which encompasses FedAvg.
In order to characterize the convergence behavior of FedProx, we invoke a device dissimilarity assumption in the network.
Under this assumption, we provide the first convergence guarantees for FedProx.
Finally, we demonstrate that our theoretical assumptions reflect empirical performance, and that FedProx can improve the robustness and stability of convergence over FedAvg when data is heterogeneous across devices.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:751 |
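A common way to write down the FedProx idea summarised above is to give each device k the local objective h_k(w; w^t) = F_k(w) + (μ/2)||w − w^t||², which reduces to FedAvg's local step when μ = 0. The sketch below renders that local solver in numpy on a toy quadratic federation; the local losses, learning rate, and number of rounds are placeholders, and the inexact-solution and partial-participation aspects of the full algorithm are omitted.

    import numpy as np

    def local_prox_step(w, w_global, grad_fk, mu=0.1, lr=0.01, n_steps=10):
        """Approximately minimise h_k(w) = F_k(w) + (mu/2) * ||w - w_global||^2
        with a few gradient steps; mu = 0 recovers FedAvg's local update."""
        for _ in range(n_steps):
            w = w - lr * (grad_fk(w) + mu * (w - w_global))
        return w

    # toy federation: each device k holds a quadratic loss F_k(w) = ||A_k w - b_k||^2
    rng = np.random.default_rng(0)
    devices = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(4)]
    w_global = np.zeros(5)
    for _ in range(50):                        # communication rounds
        updates = []
        for A, b in devices:
            grad = lambda w, A=A, b=b: 2 * A.T @ (A @ w - b)
            updates.append(local_prox_step(w_global, w_global, grad, mu=0.1))
        w_global = np.mean(updates, axis=0)    # server averages the local solutions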
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Referential games offer a grounded learning environment for neural agents which accounts for the fact that language is functionally used to communicate.
However, they do not take into account a second constraint considered to be fundamental for the shape of human language: that it must be learnable by new language learners and thus has to overcome a transmission bottleneck.
In this work, we insert such a bottleneck in a referential game, by introducing a changing population of agents in which new agents learn by playing with more experienced agents.
We show that mere cultural transmission results in a substantial improvement in language efficiency and communicative success, measured in convergence speed, degree of structure in the emerged languages and within-population consistency of the language.
However, as our core contribution, we show that the optimal situation is to co-evolve language and agents.
When we allow the agent population to evolve through genotypical evolution, we achieve across the board improvements on all considered metrics.
These results stress that for language emergence studies cultural evolution is important, but also the suitability of the architecture itself should be considered.
Human languages show a remarkable degree of structure and complexity, and how such a complex system can have emerged is still an open question.
One concept frequently named in the context of language evolution is cultural evolution.
Unlike animal languages, which are taken to be mostly innate, human languages must be re-acquired by each individual BID29 BID10 .
This pressures them to fit two constraints that govern their cross-generational transmission: They must be learnable by new language users, and they must allow effective communication between proficient language users (see, e.g. BID31 .In
the recent past, computational studies of language emergence using referential games (see Section 2.1 for a review) has received a new wave of attention. These
studies are motivated by the second constraint, that language is used to communicate. The first
constraint, on the other hand, is in this framework not considered: language is not transmitted from agent to agent and there is thus no need for agents to develop languages that would survive a transmission bottleneck. 1 In this
work, we introduce a transmission bottleneck in a population of agents playing referential games, implicitly modelling cultural evolution. However,
merely adding a transmission bottleneck is not enough. Since the
types of language that may emerge through passing this bottleneck are not just dependent on the existence of a bottleneck, but also on the shape of the bottleneck, which is determined by the biases of the architecture of the agents playing the game (their genotypical design). If the genotypical
design of those agents is not suitable to solve this task through communication, they will -at best -converge to a language that doesn't allow for effective communication or is difficult to learn for every new agent or -at worst -not converge to an appropriate culturally transmittable language at all. In this work, we therefore
study the co-evolution of language and architecture in a referential games.To this end, we introduce the Language Transmission Engine that allows to model both cultural and genetic evolution in a population of agents. We demonstrate that the emerging
languages ben-efit from including cultural transmission as well as genetic evolution, but the best results are achieved when both types of evolution are included and languages and agents can co-evolve.2 Related Work
In this paper, we introduced a language transmission bottleneck in a referential game, where new agents have to learn the language by playing with more experienced agents.
To overcome such bottleneck, we enabled both the cultural evolution of language and the genetic evolution of agents, using a new Language Transmission Engine.
Using a battery of metrics, we monitored their respective impact on communication efficiency, degree of linguistic structure and intra-population language homogeneity.
While we could find important differences in between cultural evolution strategies, it is when we included genetic evolution that agents scored best.
In a second experiment, we paired new agents with evolved languages and agents and again confirmed that, while cultural evolution makes a language easier to learn, co-evolution leads to the best communication.
In future research, we would like to apply the Language Transmission Engine to new, more complex tasks and further increase our understanding of the properties of the emerged languages and architectures.
Additionally, we would like to investigate other neuro-evolution techniques and apply them on different search spaces. | We enable both the cultural evolution of language and the genetic evolution of agents in a referential game, using a new Language Transmission Engine. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:752 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The need for large amounts of training image data with clearly defined features is a major obstacle to applying generative adversarial networks (GAN) to image generation where training data is limited but diverse, since insufficient latent feature representation in the already scarce data often leads to instability and mode collapse during GAN training.
To overcome the hurdle of limited data when applying GAN to limited datasets, we propose in this paper the strategy of parallel recurrent data augmentation, where the GAN model progressively enriches its training set with sample images constructed from GANs trained in parallel at consecutive training epochs.
Experiments on a variety of small yet diverse datasets demonstrate that our method, with few model-specific considerations, produces images of better quality as compared to the images generated without such a strategy.
The source code and generated images of this paper will be made public after review.
Generative Adversarial Networks (GAN) (BID5) are powerful unsupervised learning models that have recently achieved great success in learning high-dimensional distributions in various types of problems and on different datasets.
In the context of image generation, the basic framework of a GAN model consists of two parts: a generator G that generates images by translating random input z into an image, and a discriminator D which determines the authenticity of a generated image x as compared to the real data.
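As a rough illustration of this two-part setup (not the specific architecture or objective used in this paper), a single training step might look as follows in PyTorch; the generator G, discriminator D, optimizers and latent dimension are placeholders, and D is assumed to output raw logits. The two components are optimized alternately, as described next.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # assumes D outputs raw logits

def gan_training_step(G, D, opt_G, opt_D, real_images, latent_dim=100):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: real images should be classified as real, generated ones as fake.
    z = torch.randn(batch, latent_dim)
    fake_images = G(z).detach()  # block gradients into G during the D update
    d_loss = bce(D(real_images), real_labels) + bce(D(fake_images), fake_labels)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: try to make D classify newly generated images as real.
    g_loss = bce(D(G(torch.randn(batch, latent_dim))), real_labels)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```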
These two components are alternately optimized against each other during the training process, with the goal of minimizing the difference between the distribution of generated image data and the target distribution of real image data.
A notable challenge in GAN training, however, lies in the need for large amounts of clearly labeled data to capture the diversity of features across various types of images in the model.
Such a requirement makes it difficult or even impossible to utilize GAN in applications where the available training data is small in amount but diverse.
Moreover, recent deep learning models (BID6) have demonstrated tendencies of misrepresentation in classification tasks when influenced by adversarial noise.
Such vulnerability may also translate to unsatisfactory image generation, as most generative models are implemented with deep networks.
Thus, given these considerations, we propose in this paper the strategy of parallel recurrent sample augmentation, which is agnostic to specific model details.
Our contributions can be summarized as follows:
• We proposed a general black-box method using recurrent image addition to diversify training data and enhance its quality over a large class of GANs without model specifications.
• We also include in our model a novel K-fold parallel framework, which better augments training data by stabilizing model output and preventing overfitting.
• Experiments across various datasets and GAN objectives demonstrate the effectiveness of our method using authenticity measures such as Inception Score and Frechet Inception Distance.
In sum, our paper shows that parallel recurrent sample augmentation can significantly improve the quality of synthetic images for a large class of GAN models.
Our strategy is not only simple to implement, but also agnostic to the specific type of GAN to be improved on.
As a further step, we are investigating the relationship between our proposed approach and other established methods.
One possible pathway, for instance, lies in reinforcement learning as described in BID3 that gives more control to image generation via reward designation.
We also hope to apply our idea to other generative models such as the VAE (BID11) and further optimize our strategy using recent theoretical advances.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:753 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We develop a stochastic whole-brain and body simulator of the nematode roundworm Caenorhabditis elegans (C. elegans) and show that it is sufficiently regularizing to allow imputation of latent membrane potentials from partial calcium fluorescence imaging observations.
This is the first attempt we know of to ``complete the circle,'' where an anatomically grounded whole-connectome simulator is used to impute a time-varying ``brain'' state at single-cell fidelity from covariates that are measurable in practice.
Using state of the art Bayesian machine learning methods to condition on readily obtainable data, our method paves the way for neuroscientists to recover interpretable connectome-wide state representations, automatically estimate physiologically relevant parameter values from data, and perform simulations investigating intelligent lifeforms in silico.
One of the goals of artificial intelligence, neuroscience and connectomics is to understand how sentience emerges from the interactions of the atomic units of the brain, to be able to probe these mechanisms on the deepest level in living organisms, and to be able to simulate this interaction ad infinitum [1] .
In this work, we assemble an anatomically grounded, interpretable probabilistic brainbody simulator for the widely studied nematode roundworm Caenorhabditis elegans (C. elegans) [2, 3] .
We then present methods for performing posterior inference in the time evolution of the state of the worm and estimate the global simulator parameter values from readily obtainable non-invasive calcium fluorescence data [4] .
We refer to using an anatomically grounded model to infer latent states and parameters, conditioned on partial data, as a "virtual patch clamp" (VPC).
The VPC also facilitates in silico experimentation on "digital" C. elegans specimens, by programmatically modifying the simulator and observing the resulting simulations, enabling rapid, wide-reaching, fully observable and perfectly repeatable exploration of hypotheses about how the fundamental units of the neural circuit of C. elegans combine to create intelligent behaviour.
In this work we have explored performing Bayesian inference in whole-connectome neural and whole-body C. elegans simulations.
We describe the model-based Bayesian inference aspect of this as a "virtual patch clamp," whereby unobserved latent membrane potentials can be inferred from partial observations gathered non-invasively.
Our choice of inference method facilitates estimation of the model evidence, a measure of how well the model explains the observed data.
We presented a method for maximizing this evidence without requiring differentiable simulation components.
In the past year several articles discussing open research issues pertaining to C. elegans simulation have been produced by the C. elegans community [1, 11] .
Figure 1 (a) outlines the community-planned development pipeline for C. elegans simulation.
Our work addresses the implementation of the box simply labelled "optimization."
We show on representative synthetic data that our method is capable of performing such an optimization.
This approach promises to allow neuroscientists to peer deeper into the neural function of a living organism, testing hypotheses about neural function that were previously unreachable.
It is widely touted that convolutional neural networks were developed by wide-scale study of the V1 cortex.
We believe connectome-level optimization and simulation, as demonstrated here, is the next step in neuroscience towards not only understanding the very root of intelligence, but also discovering and developing techniques building towards artificial general intelligence.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:754 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The high-quality node embeddings learned from the Graph Neural Networks (GNNs) have been applied to a wide range of node-based applications and some of them have achieved state-of-the-art (SOTA) performance.
However, when applying node embeddings learned from GNNs to generate graph embeddings, the scalar node representation may not suffice to preserve the node/graph properties efficiently, resulting in sub-optimal graph embeddings.
Inspired by the Capsule Neural Network (CapsNet), we propose the Capsule Graph Neural Network (CapsGNN), which adopts the concept of capsules to address the weakness in existing GNN-based graph embeddings algorithms.
By extracting node features in the form of capsules, routing mechanism can be utilized to capture important information at the graph level.
As a result, our model generates multiple embeddings for each graph to capture graph properties from different aspects.
The attention module incorporated in CapsGNN is used to tackle graphs with various sizes which also enables the model to focus on critical parts of the graphs.
Our extensive evaluations with 10 graph-structured datasets demonstrate that CapsGNN has a powerful mechanism that operates to capture macroscopic properties of the whole graph in a data-driven manner.
It outperforms other SOTA techniques on several graph classification tasks, by virtue of the new instrument.
GNN is a general type of deep-learning architectures that can be directly applied to structured data.
These architectures are mainly generalized from other well-established deep-learning models like CNN BID9 and RNN BID12 .
In this paper, we mainly focus on Convolution-based Graph Neural Networks, which have attracted increasing interest recently.
Convolution operation can be embedded into Graph Neural Networks from spectral or spatial perspective.
BID1 defines the convolution operation in the Fourier domain which needs to calculate the eigendecomposition of the graph Laplacian.
This method is computationally expensive and the filters they defined are non-spatially localized.
Later, BID4 introduces Chebyshev expansion of the graph Laplacian to avoid computing eigenvectors and BID8 proposes to do convolution within 1-step neighbor nodes to reduce the complexity.
From the spatial perspective, BID3 and propose to define a node receptive-field and do convolution within this field during which the information of each node as well as their neighbor nodes is gathered and new representation of each node is generated through an activation function.
Both of these perspectives perform well in node representation learning, and a number of variants BID20 have been developed based on the convolution idea, some of which have proven to achieve SOTA in various tasks.
The success of GNN in node representation learning has inspired many deep-learning-based approaches to leverage node embeddings extracted from GNN to generate graph embeddings for graph-based applications.
However, during this procedure, the learned representation of each node will be considered as multiple individual scalar features instead of one vector.
For example, one approach applies element-wise max-pooling to node embeddings when generating graph embeddings, while BID22 generates graph embeddings by computing the element-wise covariance of all nodes.
These operations indicate that the authors capture node features in the form of scalars when they generate graph embeddings, which may not suffice to preserve the node/graph properties efficiently.
To build high-quality graph embeddings, it is important to not only detect the presence of different structures around each node but also preserve their detailed properties such as position, direction, connection, etc.
However, encoding this property information in the form of scalars means activating elements in a vector one by one, which is exponentially less efficient than encoding it with distributed representations.
This has been identified and discussed in BID16.
Inspired by CapsNet, we propose to extend scalar to vector during the procedure of applying GNN to graph representation learning.
Compared with scalar-based neural network, vector-based neural network preserves the information of node/graph properties more efficiently.
The technique for extracting features in the form of vectors is proposed in BID5 and improved in BID16 and BID6 .
This technique is mainly devised for image processing.
In their work, the extracted vector is referred to as a capsule (a group of neurons in a neural network), so we follow the same notation in our work.
Introducing capsules allows us to use routing mechanism to generate high-level features which we believe is a more efficient way for features encoding.
Compared with max-pooling in CNN in which all information will be dropped except for the most active one, routing preserves all the information from low-level capsules and routes them to the closest high-level capsules.
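For reference, the routing-by-agreement procedure commonly used with capsules can be sketched as below; this is the generic dynamic routing update, not necessarily the exact variant used in CapsGNN, and the tensor shapes are assumptions made for the illustration.

```python
import numpy as np

def squash(s, eps=1e-9):
    norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, num_iterations=3):
    """Route predictions u_hat of shape (n_low, n_high, dim) from low-level
    capsules to high-level capsules by iteratively updating coupling weights."""
    n_low, n_high, _ = u_hat.shape
    b = np.zeros((n_low, n_high))                              # routing logits
    for _ in range(num_iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coefficients
        v = squash(np.einsum('ij,ijd->jd', c, u_hat))          # candidate high-level capsules
        b = b + np.einsum('ijd,jd->ij', u_hat, v)              # reward agreement
    return v
```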
Besides, this allows us to model each graph with multiple embeddings, where each embedding reflects different properties of the graph.
This is more representative than the single embedding used in other scalar-based approaches.
In this paper, we propose the Capsule Graph Neural Network (CapsGNN), a novel deep learning architecture, which is inspired by CapsNet and uses node features extracted from GNN to generate high-quality graph embeddings.
In this architecture, each graph is represented as multiple embeddings and each embedding reflects the graph properties from different aspects.
More specifically, basic node features are extracted in the form of capsules through GNN and routing mechanism is applied to generate high-level graph capsules as well as class capsules.
In the procedure of generating graph capsules, an Attention Module can be applied to tackle graphs in various sizes.
It also assigns different weights to each capsule of each node so that this model focuses on critical parts of the graph.
We validate the performance of generated graph embeddings on classification task over 5 biological datasets and 5 social datasets.
CapsGNN achieves SOTA performance on 6 out of 10 benchmark datasets and comparable results on the rest.
t-SNE (BID11) is used to visualize the learned graph embeddings, and the results show that different graph capsules indeed capture different information about the graphs.
We have proposed CapsGNN, a novel framework that fuses capsules theory into GNN for more efficient graph representation learning.
Inspired by CapsNet, the concepts of capsules are introduced in this architecture to extract features in the form of vectors on the basis of nodes features extracted from GNN.
As a result, one graph is represented as multiple embeddings and each embedding captures different aspects of the graph properties.
The generated graph and class capsules can preserve not only the classification-related information but also other information with respect to graph properties which might be useful in the follow-up work and we leave this to be explored in the future.
We believe this is a novel, efficient and powerful data-driven method to represent high-dimensional data such as graphs.
Our model has successfully achieved better or comparable performance when compared with other SOTA algorithms on 6 out of 10 graph classification tasks especially on social datasets.
Compared with similar scalar-based architectures, CapsGNN is more efficient in encoding features and this would be very beneficial for processing large datasets. | Inspired by CapsNet, we propose a novel architecture for graph embeddings on the basis of node features extracted from GNN. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:755 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We introduce a novel framework for generative models based on Restricted Kernel Machines (RKMs) with multi-view generation and uncorrelated feature learning capabilities, called Gen-RKM.
To incorporate multi-view generation, this mechanism uses a shared representation of data from various views.
The mechanism is flexible to incorporate both kernel-based, (deep) neural network and convolutional based models within the same setting.
To update the parameters of the network, we propose a novel training procedure which jointly learns the features and shared representation.
Experiments demonstrate the potential of the framework through qualitative evaluation of generated samples.
In the past decade, interest in generative models has grown tremendously, finding applications in multiple fields such as generated art, on-demand video, image denoising (Vincent et al., 2010), exploration in reinforcement learning (Florensa et al., 2018), collaborative filtering (Salakhutdinov et al., 2007), inpainting (Yeh et al., 2017) and many more.
Some examples of graphical models based on a probabilistic framework with latent variables are Variational Auto-Encoders (Kingma & Welling, 2014) and Restricted Boltzmann Machines (RBMs) (Smolensky, 1986; Salakhutdinov & Hinton, 2009 ).
More recently proposed models are based on adversarial training such as Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) and its many variants.
Furthermore, auto-regressive models such as Pixel Recurrent Neural Networks (PixelRNNs) (Van Den Oord et al., 2016) model the conditional distribution of every individual pixel given previous pixels.
All these approaches have their own advantages and disadvantages.
For example, RBMs perform both learning and Bayesian inference in graphical models with latent variables.
However, such probabilistic models must be properly normalized, which requires evaluating intractable integrals over the space of all possible variable configurations (Salakhutdinov & Hinton, 2009) .
Currently GANs are considered as the state-of-the-art for generative modeling tasks, producing high-quality images but are more difficult to train due to unstable training dynamics, unless more sophisticated variants are applied.
Many datasets are comprised of different representations of the data, or views.
Views can correspond to different modalities such as sounds, images, videos, sequences of previous frames, etc.
Although each view could individually be used for learning tasks, exploiting information from all views together could improve the learning quality (Pu et al., 2016; Liu & Tuzel, 2016; Chen & Denoyer, 2017) .
Also, it is among the goals of latent variable modelling to describe data in terms of uncorrelated or independent components.
Some classical examples are Independent Component Analysis; Hidden Markov models (Rabiner & Juang, 1986) ; Probabilistic Principal Component Analysis (PCA) (Tipping & Bishop, 1999) ; Gaussian-Process Latent variable model (Lawrence, 2005) and factor analysis.
Hence, when learning a latent space in generative models, it becomes interesting to find a disentangled representation.
Disentangled variables are generally considered to contain interpretable information and reflect separate factors of variation in the data for e.g. lighting conditions, style, colors, etc.
The definition of disentanglement in the literature is not precise, however many believe that a representation with statistically independent variables is a good starting point (Schmidhuber, 1992; Ridgeway, 2016) .
Such representations extract information into a compact form which makes it possible to generate samples with specific characteristics (Chen et al., 2018; Bouchacourt et al., 2018; Tran et al., 2017; Chen et al., 2016) .
Additionally, these representations have been found to generalize better and be more robust against adversarial attacks (Alemi et al., 2017) .
In this work, we propose an alternative generative mechanism based on the framework of Restricted Kernel Machines (RKMs) (Suykens, 2017) , called Generative RKM (Gen-RKM).
RKMs yield a representation of kernel methods with visible and hidden units establishing links between Kernel PCA, Least-Squares Support Vector Machines (LS-SVM) (Suykens et al., 2002) and RBMs.
This framework has a similar energy form as RBMs, though there is a non-probabilistic training procedure where the eigenvalue decomposition plays the role of normalization.
Recently, Houthuys & Suykens (2018) used this framework to develop tensor-based multi-view classification models and Schreurs & Suykens (2018) showed how kernel PCA fits into this framework.
Contributions.
1) A novel multi-view generative model based on the RKM framework where multiple views of the data can be generated simultaneously.
2) Two methods are proposed for computing the pre-image of the feature vectors: with the feature map explicitly known or unknown.
We show that the mechanism is flexible to incorporate both kernel-based, (deep) convolutional neural network based models within the same setting.
3) When using explicit feature maps, we propose a training algorithm that jointly performs the feature-selection and learns the common-subspace representation in the same procedure.
4) Qualitative and quantitative experiments demonstrate that the model is capable of generating good quality images of natural objects.
Further experiments on multi-view datasets exhibit the potential of the model.
Thanks to the orthogonality of eigenvectors of the kernel matrix, the learned latent variables are uncorrelated.
This resembles a disentangled representation, which makes it possible to generate data with specific characteristics.
This paper is organized as follows.
In Section 2, we discuss the Gen-RKM training and generation mechanism when multiple data sources are available.
In Section 3, we explain how the model incorporates both kernel methods and neural networks through the use of implicit and explicit feature maps respectively.
When the feature maps are defined by neural networks, the Gen-RKM algorithm is explained in Section 4.
In Section 5, we show experimental results of our model applied on various public datasets.
Section 6 concludes the paper along with directions towards the future work.
Additional supplementary materials are given in the Appendix A.
The paper proposes a novel framework, called Gen-RKM, for generative models based on RKMs with extensions to multi-view generation and learning uncorrelated representations.
This allows for a mechanism where the feature map can be implicitly defined using kernel functions or explicitly by (deep) neural network based methods.
When using kernel functions, the training consists of only solving an eigenvalue problem.
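As a rough sketch of what such an eigenvalue problem looks like (written in the style of kernel PCA, to which the RKM framework is linked; the RBF kernel and its bandwidth are assumptions, and this is not the exact Gen-RKM objective):

```python
import numpy as np

def latent_from_kernel(X, n_components=2, gamma=0.1):
    """Eigendecompose a centered RBF kernel matrix and return the leading
    components as latent variables for the training points."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))  # RBF kernel matrix
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one        # center in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)             # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:n_components]    # keep the top components
    return eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0.0))
```

Here the orthogonality of the eigenvectors is what yields uncorrelated latent dimensions, echoing the point made above about the learned latent variables.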
In the case of a (convolutional) neural network based explicit feature map, we used (transposed) networks as the pre-image functions.
Consequently, a training procedure was proposed which involves joint feature-selection and subspace learning.
Thanks to training in mini-batches and capability of working with covariance matrices, the training is scalable to large datasets.
Experiments on benchmark datasets illustrate the merit of the proposed framework for generation quality as well as disentanglement.
Extensions of this work consist of adapting the model to more advanced multi-view datasets involving speech, images and texts, along with further analysis of other feature maps, pre-image methods, loss functions and uncorrelated feature learning.
Finally, this paper has demonstrated the applicability of the Gen-RKM framework, suggesting new research directions to be worth exploring. | Gen-RKM: a novel framework for generative models using Restricted Kernel Machines with multi-view generation and uncorrelated feature learning. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:756 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo (Peters et al., 2018).
This paper contributes the first large-scale systematic study comparing different pretraining tasks in this context, both as complements to language modeling and as potential alternatives.
The primary results of the study support the use of language modeling as a pretraining task and set a new state of the art among comparable models using multitask learning with language models.
However, a closer look at these results reveals worryingly strong baselines and strikingly varied results across target tasks, suggesting that the widely-used paradigm of pretraining and freezing sentence encoders may not be an ideal platform for further work.
Figure 1: Our common model design: During pretraining, we train the shared encoder and the task-specific model for each pretraining task.
We then freeze the shared encoder and train the task-specific model anew for each target evaluation task.
Tasks may involve more than one sentence.
State-of-the-art models for natural language processing (NLP) tasks like translation, question answering, and parsing include components intended to extract representations for the meaning and contents of each input sentence.
These sentence encoder components are typically trained directly for the target task at hand.
This approach can be effective on data rich tasks and yields human performance on some narrowly-defined benchmarks BID35 BID13 , but it is tenable only for the few NLP tasks with millions of examples of training data.
This has prompted interest in pretraining for sentence encoding: there is good reason to believe it should be possible to exploit outside data and training signals to effectively pretrain these encoders, both because they are intended to primarily capture sentence meaning rather than any task-specific skill, and because we have seen dramatic successes with pretraining in the related domains of word embeddings and image encoders BID46.
More concretely, four recent papers show that pretrained sentence encoders can yield very strong performance on NLP tasks.
First, McCann et al. (2017) show that a BiLSTM encoder from a neural machine translation (MT) system can be effectively reused elsewhere.
BID16 , , and BID33 show that various kinds of encoders pretrained in an unsupervised fashion through generative language modeling (LM) are effective as well.
Each paper uses its own evaluation methods, though, making it unclear which pretraining task is most effective or whether multiple pretraining tasks can be productively combined; in the related setting of sentence-to-vector encoding, multitask learning with multiple labeled datasets has yielded a robust state of the art BID39.
This paper attempts to systematically address these questions.
We train reusable sentence encoders on 17 different pretraining tasks, several simple baselines, and several combinations of these tasks, all using a single model architecture and procedure for pretraining and transfer, inspired by ELMo.
We then evaluate each of these encoders on the nine target language understanding tasks in the GLUE benchmark BID41, yielding a total of 40 sentence encoders and 360 total trained models.
We then measure correlation in performance across target tasks and plot learning curves evaluating the effect of training data volume on each pretraining and target task.
Looking at the results of this experiment, we find that language modeling is the most effective single pretraining task we study, and that multitask learning during pretraining can offer further gains and a new state of the art among fixed sentence encoders.
We also, however, find reasons to worry that ELMo-style pretraining, in which we pretrain a model and use it on target tasks with no further fine-tuning, is brittle and seriously limiting: (i) trivial baseline representations do nearly as well as the best pretrained encoders, and the margins between substantially different pretraining tasks can be extremely small; (ii) different target tasks differ dramatically in what kinds of pretraining they benefit most from, and multitask pretraining is not sufficient to circumvent this problem and offer general-purpose pretrained encoders.
This paper presents a systematic comparison of tasks and task-combinations for the pretraining of sentence-level BiLSTM encoders like those seen in ELMo and CoVe.
With 40 pretraining tasks and task combinations (not counting many more ruled out early) and nine target tasks, this represents a far more comprehensive study than any seen on this problem to date.
Our chief positive results are perhaps unsurprising: language modeling works well as a pretraining task, and no other single task is consistently better.
Multitask pretraining can produce results better than any single task can, and sets a new state-of-the-art among comparable models.
Target task performance continues to improve with the addition of more language model data, even at large scales, suggesting that further work scaling up language model pretraining is warranted.
However, a closer look at our results suggests that the pretrain-and-freeze paradigm that underlies ELMo and CoVe might not be a sound platform for future work: some trivial baselines do strikingly well, the margins between pretraining tasks are small, and some pretraining configurations (such as MNLI E) yield better performance with less data.
This suggests that we may be nearing an upper bound on the performance that can be reached with methods like these.
In addition, different tasks benefit from different forms of pretraining to a striking degree (with correlations between target tasks often low or negative), and multitask pretraining fails to reliably produce models better than their best individual components.
This suggests that if truly general-purpose sentence encoders are possible, our current methods cannot produce them.
While further work on language modeling seems straightforward and worthwhile, the author(s) of this paper believe that the future of this line of work will require a better understanding of the ways in which neural network target task models can benefit from outside knowledge and data, and new methods for pretraining and transfer learning to allow them to do so.
This suggests that if truly generalpurpose sentence encoders are possible, our current methods cannot produce them.While further work on language modeling seems straightforward and worthwhile, the author(s) of this paper believe that the future of this line of work will require a better understanding of the ways in which neural network target task models can benefit from outside knowledge and data, and new methods for pretraining and transfer learning to allow them to do so. | We compare many tasks and task combinations for pretraining sentence-level BiLSTMs for NLP tasks. Language modeling is the best single pretraining task, but simple baselines also do well. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:757 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we study the adversarial attack and defence problem in deep learning from the perspective of Fourier analysis.
We first explicitly compute the Fourier transform of deep ReLU neural networks and show that there exist decaying but non-zero high frequency components in the Fourier spectrum of neural networks.
We then demonstrate that the vulnerability of neural networks towards adversarial samples can be attributed to these insignificant but non-zero high frequency components.
Based on this analysis, we propose to use a simple post-averaging technique to smooth out these high frequency components to improve the robustness of neural networks against adversarial attacks.
Experimental results on the ImageNet and the CIFAR-10 datasets have shown that our proposed method is universally effective to defend many existing adversarial attacking methods proposed in the literature, including FGSM, PGD, DeepFool and C&W attacks.
Our post-averaging method is simple since it does not require any re-training, and meanwhile it can successfully defend over 80-96% of the adversarial samples generated by these methods without introducing significant performance degradation (less than 2%) on the original clean images.
Although deep neural networks (DNN) have shown to be powerful in many machine learning tasks, Szegedy et al. (2013) found that they are vulnerable to adversarial samples.
Adversarial samples are subtly altered inputs that can fool the trained model to produce erroneous outputs.
They are more commonly seen in image classification tasks, and typically the perturbations to the original images are so small that they are imperceptible to the human eye.
Research in adversarial attacks and defences is highly active in recent years.
In the attack side, many attacking methods have been proposed (Szegedy et al., 2013; Goodfellow et al., 2014; Papernot et al., 2016a; Moosavi-Dezfooli et al., 2016; Kurakin et al., 2016; Madry et al., 2017; Carlini and Wagner, 2017a; Chen et al., 2017; Alzantot et al., 2018; , with various ways to generate effective adversarial samples to circumvent new proposed defence methods.
However, since different attacks are usually effective against different defences or datasets, there is no consensus on which attack is the strongest.
Hence for the sake of simplicity, in this work, we will evaluate our proposed defence approach against four popular attacks for empirical analysis.
In the defence side, various defence mechanisms have also been proposed, including adversarial training (Rozsa et al., 2016; Kurakin et al., 2016; Tramèr et al., 2017; Madry et al., 2017) , network distillation (Papernot et al., 2016b) , gradient masking (Nguyen and Sinha, 2017) , adversarial detection (Feinman et al., 2017) and adding modifications to neural networks (Xie et al., 2017) .
Nonetheless, many of them were quickly defeated by new types of attacks (Carlini and Wagner, 2016; 2017b; c; a; Alzantot et al., 2018) .
Madry et al. (2017) tried to provide a theoretical security guarantee for adversarial training by a min-max loss formulation, but the difficulties in non-convex optimization and in finding the ultimate adversarial samples for training may loosen this robustness guarantee.
As a result, so far there is no defence that is universally robust to all adversarial attacks.
Along this line of research, there have also been investigations into the properties and existence of adversarial samples.
Szegedy et al. (2013) first observed the transferability of adversarial samples across models trained with different hyper-parameters and across different training sets.
They also attributed the adversarial samples to the low-probability blind spots in the manifold.
In (Goodfellow et al., 2014) , the authors explained adversarial samples as "a result of models being too linear, rather than too nonlinear."
In (Papernot et al., 2016) , the authors showed the transferability occurs across models with different structures and even different machine learning techniques in addition to neural networks.
In summary, the general existence and transferability of adversarial samples are well known but the reason of adversarial vulnerability still needs further investigation.
Generally speaking, when we view a neural network as a multivariate function f(x) of the input x, if a small imperceptible perturbation ∆x leads to a huge fluctuation ∆f(x), then the large quantity ∆f(x)/∆x essentially corresponds to high frequency components in the Fourier spectrum of f(x).
In this paper, we will start with the Fourier analysis of neural networks and elucidate why there always exist some decaying but nonzero high frequency response components in neural networks.
Based on this analysis, we show that neural networks are inherently vulnerable to adversarial samples due to the underlying model structure.
Next, we propose a simple post-averaging method to tackle this problem.
Our proposed method is fairly simple since it works as a post-processing stage of any given neural network models and it does not require re-training the networks at all.
Furthermore, we have evaluated the post-averaging method against four popular adversarial attacking methods and our method is shown to be universally effective in defending all examined attacks.
Experimental results on the ImageNet and the CIFAR-10 datasets have shown that our simple post-averaging method can successfully defend over 80-96% of the adversarial samples generated by these attacks with little performance degradation (less than 2%) on the original clean images. | An insight into the reason of adversarial vulnerability, an effective defense method against adversarial attacks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:758 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Reinforcement learning methods that continuously train neural networks by episode generation with game tree search have been successful in two-person complete information deterministic games such as chess, shogi, and Go.
However, there are only reports of practical cases, and there is little evidence to guarantee the stability and the final performance of the learning process.
This research focuses on the coordination of episode generation.
By regarding the entire system as a game tree search, the new method can handle the trade-off between exploitation and exploration during episode generation.
The experiments with a small problem showed that it had robust performance compared to the existing method, Alpha Zero.
The result that computer programs beat professional human players on chess, shogi and Go was a huge achievement in computer science.
In particular, the development of highly general methods totally changed our perspective about two-person complete information deterministic games.
So, is this field already finished?
My answer is no. To deal with many games, more robust methods are required to free humans from hyperparameter tuning.
Moreover, the challenge to the god of games is not over, and we want algorithms that can achieve better final performance.
This study attempts to offer suggestions, from the classical game tree search perspective, about recent achievements in two-player complete information deterministic games.
More specifically, this is a new approach in which the reinforcement learning system that uses game tree search is itself treated as a game tree search.
In this study, we examined a very simple task, Tic-tac-toe.
First of all, it was shown that obtaining the optimal strategy is sometimes difficult depending on the parameters.
The results suggest that reinforcement learning methods like Alpha Zero are often naive with respect to exploration.
In the proposed method, the master game tree makes it possible to vary the opening of the game during episode generation.
The results suggest that the proposed method has the ability to control the exploration of game openings by adding proper noise.
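For reference, the kind of noise referred to here can be read in the standard AlphaZero fashion, where Dirichlet noise is mixed into the prior before applying the PUCT selection rule; the sketch below is a generic illustration with assumed constants, not the exact implementation used in these experiments.

```python
import numpy as np

def puct_select(prior, visit_counts, q_values, c_puct=1.5,
                noise_eps=0.25, dirichlet_alpha=0.3):
    """Select an action by PUCT after mixing Dirichlet noise into the prior."""
    prior = np.asarray(prior, dtype=float)
    visit_counts = np.asarray(visit_counts, dtype=float)
    noise = np.random.dirichlet([dirichlet_alpha] * len(prior))
    p = (1.0 - noise_eps) * prior + noise_eps * noise
    total = visit_counts.sum()
    # +1 keeps the exploration term non-zero at an unvisited node (a common variant).
    ucb = np.asarray(q_values, dtype=float) + c_puct * p * np.sqrt(total + 1.0) / (1.0 + visit_counts)
    return int(np.argmax(ucb))
```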
On the other hand, when PUCT was applied to the master game tree using the policy as-is (MbMNoNoise), the performance was lower than the baseline.
The reason for this result is that the policy converged before the exploration in the master game tree could take effect.
For this reason, it was not effective.
In this report, PUCT is applied to the master game tree in the same way as to an ordinary game tree.
However, it is necessary to examine a mechanism that makes it more exploratory.
Lastly, in this study, we verified only one of the simplest games, Tic-tac-toe.
From the experimental results in this paper, it is expected that the proposed method can produce robust results with respect to temperature parameters even for larger games.
It will also be necessary to verify whether the speed of improvement in real time is better than that of previous methods.
I hope that the combination of tree search and reinforcement learning will be applied to a wider range of domains once there are methods with both better stability and better speed.
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:759 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The information bottleneck principle (Shwartz-Ziv & Tishby, 2017) suggests that SGD-based training of deep neural networks results in optimally compressed hidden layers, from an information theoretic perspective.
However, this claim was established on toy data.
The goal of the work we present here is to test these claims in a realistic setting using a larger and deeper convolutional architecture, a ResNet model.
We trained PixelCNN++ models as inverse representation decoders to measure the mutual information between hidden layers of a ResNet and input image data, when trained for (1) classification and (2) autoencoding.
We find that two stages of learning happen for both training regimes, and that compression does occur, even for an autoencoder.
Sampling images by conditioning on hidden layers' activations offers an intuitive visualisation to understand what a ResNet learns to forget.
The ResNet architecture enables very deep CNNs.
We show that learning representations using a ResNet results in information compression in hidden layers.
We set out in this research to test some of the claims by Shwartz-Ziv & Tishby (2017) regarding the information bottleneck principle applied to deep learning.
By defining a lower bound on the MI and 'decoder' models to compute the MI during classifier and autoencoder training regimes, we explored the notion of compression for generalisation in the context of realistic images and a modern architecture choice.For both classification and autoencoding we observed two stages of learning, characterised by: (1) an initial and relatively short-lived increase and (2) a longer decrease in MI between hidden layers and input training data.
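For context, a decoder-based lower bound of this kind is usually the standard variational bound on mutual information (the exact form used in the paper is not shown in this excerpt), where q(x | h) is the conditional decoder, here a PixelCNN++ model, trained to reconstruct the input x from hidden activations h:

```latex
I(X;H) \;=\; H(X) - H(X \mid H) \;\ge\; H(X) + \mathbb{E}_{p(x,h)}\big[\log q(x \mid h)\big]
```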
Although we cannot confirm the mechanism responsible for compression (stochastic relaxation, for example), we gave an intuitive glimpse into what quality/type of information is kept and discarded as ResNets learn.
PixelCNN++ models were used to estimate the MI between hidden layers (of the models under scrutiny) and input data; images were generated conditioned on hidden layers to illustrate the fitting and compression of data in a visual and intuitive fashion.
The experimental procedure we developed for this research enables visualising class invariances throughout training.
In particular, we see that when a ResNet is maximally (subject to model constraints) compressing information in its hidden layers, the class-irrelevant features of the input images are discarded: conditionally generated samples vary more while retaining information relevant to classification.
This result has been shown in theory and for toy examples, but never illustrated to the degree that we do here. | The Information Bottleneck Principle applied to ResNets, using PixelCNN++ models to decode mutual information and conditionally generate images for information illustration | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:76 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Message-passing neural networks (MPNNs) have been successfully applied in a wide variety of applications in the real world.
However, two fundamental weaknesses of MPNNs' aggregators limit their ability to represent graph-structured data: losing the structural information of nodes in neighborhoods and lacking the ability to capture long-range dependencies in disassortative graphs.
Few studies have noticed the weaknesses from different perspectives.
From observations on classical neural networks and network geometry, we propose a novel geometric aggregation scheme for graph neural networks to overcome the two weaknesses.
The basic idea behind it is that the aggregation on a graph can benefit from a continuous space underlying the graph.
The proposed aggregation scheme is permutation-invariant and consists of three modules, node embedding, structural neighborhood, and bi-level aggregation.
We also present an implementation of the scheme in graph convolutional networks, termed Geom-GCN, to perform transductive learning on graphs.
Experimental results show the proposed Geom-GCN achieved state-of-the-art performance on a wide range of open datasets of graphs.
Message-passing neural networks (MPNNs), such as GNN (Scarselli et al., 2008), ChebNet (Defferrard et al., 2016), GG-NN (Li et al., 2016), and GCN (Kipf & Welling, 2017), are powerful for learning on graphs, with applications ranging from brain networks to online social networks (Gilmer et al., 2017; Wang et al., 2019).
In a layer of MPNNs, each node sends its feature representation, a "message", to the nodes in its neighborhood; and then updates its feature representation by aggregating all "messages" received from the neighborhood.
The neighborhood is often defined as the set of adjacent nodes in graph.
By adopting permutation-invariant aggregation functions (e.g., summation, maximum, and mean), MPNNs are able to learn representations which are invariant to isomorphic graphs, i.e., graphs that are topologically identical.
Although existing MPNNs have been successfully applied in a wide variety of scenarios, two fundamental weaknesses of MPNNs' aggregators limit their ability to represent graph-structured data.
Firstly, the aggregators lose the structural information of nodes in neighborhoods.
Permutation invariance is an essential requirement for any graph learning method.
To meet it, existing MPNNs adopt permutation-invariant aggregation functions which treat all "messages" from the neighborhood as a set.
For instance, GCN simply sums the normalized "messages" from all one-hop neighbors (Kipf & Welling, 2017) .
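As a concrete illustration of such an aggregator, the sketch below implements a GCN-style layer in NumPy with the symmetric normalization of Kipf & Welling (2017); the weight matrix and ReLU activation are placeholders rather than a full model.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: sum the normalized messages from all one-hop neighbors.
    A: (N, N) adjacency, H: (N, d_in) node features, W: (d_in, d_out) weights."""
    A_hat = A + np.eye(A.shape[0])                                # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]    # D^-1/2 (A+I) D^-1/2
    return np.maximum(A_norm @ H @ W, 0.0)                        # ReLU of the aggregated features
```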
Such aggregation loses the structural information of nodes in the neighborhood because it does not distinguish the "messages" from different nodes.
Therefore, after such aggregation, we cannot know which node contributes what to the final aggregated output.
Without modeling such structural information, as shown in (Kondor et al., 2018) and , the existing MPNNs cannot discriminate between certain non-isomorphic graphs.
In those cases, MPNNs may map non-isomorphic graphs to the same feature representations, which is obviously not desirable for graph representation learning.
Unlike MPNNs, classical convolutional neural networks (CNNs) avoid this problem by using aggregators (i.e., convolutional filters) with a structural receiving filed defined on grids, i.e., a Euclidean space, and are hence able to distinguish each input unit.
As shown by our experiments, such structural information often contains clues regarding topology patterns in graph (e.g., hierarchy), and should be extracted and used to learn more discriminating representations for graph-structured data.
Secondly, the aggregators lack the ability to capture long-range dependencies in disassortative graphs.
In MPNNs, the neighborhood is defined as the set of all neighbors one hop away (e.g., GCN), or all neighbors up to r hops away (e.g., ChebNet).
In other words, only messages from nearby nodes are aggregated.
The MPNNs with such aggregation are inclined to learn similar representations for proximal nodes in a graph.
This implies that they are probably desirable methods for assortative graphs (e.g., citation networks (Kipf & Welling, 2017) and community networks) where node homophily holds (i.e., similar nodes are more likely to be proximal, and vice versa), but may be inappropriate for disassortative graphs (Newman, 2002) where node homophily does not hold.
For example, Ribeiro et al. (2017) shows disassortative graphs where nodes of the same class exhibit high structural similarity but are far apart from each other.
In such cases, the representation ability of MPNNs may be limited significantly, since they cannot capture the important features from distant but informative nodes.
A straightforward strategy to address this limitation is to use a multi-layered architecture so as to receive "messages" from distant nodes.
For instance, due to the localized nature of convolutional filters in classical CNNs, a single convolutional layer is similarly limited in its representational ability.
CNNs typically use multiple layers connected in a hierarchical manner to learn complex and global representations.
However, unlike CNNs, it is difficult for multi-layer MPNNs to learn good representations for disassortative graphs because of two reasons.
On one hand, relevant messages from distant nodes are mixed indistinguishably with a large number of irrelevant messages from proximal nodes in multi-layer MPNNs, which implies that the relevant information will be "washed out" and cannot be extracted effectively.
On the other hand, the representations of different nodes would become very similar in multi-layer MPNNs, and every node's representation actually carries the information about the entire graph .
In this paper, we overcome the aforementioned weaknesses of graph neural networks starting from two basic observations:
i) Classical neural networks effectively address the similar limitations thanks to the stationarity, locality, and compositionality in a continuous space ;
ii) The notion of network geometry bridges the gap between continuous space and graph (Hoff et al., 2002; Muscoloni et al., 2017) .
Network geometry aims to understand networks by revealing the latent continuous space underlying them, which assumes that nodes are sampled discretely from a latent continuous space and edges are established according to their distance.
In the latent space, complicated topology patterns in graphs can be preserved and presented as intuitive geometry, such as subgraphs (Narayanan et al., 2016), communities (Ni et al., 2019), and hierarchy (Nickel & Kiela, 2017).
Inspired by those two observations, we raise an enlightening question about the aggregation scheme in graph neural network.
• Can the aggregation on a graph benefit from a continuous latent space, such as using geometry in the space to build structural neighborhoods and capture long-range dependencies in the graph?
To answer the above question, we propose a novel aggregation scheme for graph neural networks, termed the geometric aggregation scheme.
In the scheme, we map a graph to a continuous latent space via node embedding, and then use the geometric relationships defined in the latent space to build structural neighborhoods for aggregation.
Also, we design a bi-level aggregator operating on the structural neighborhoods to update the feature representations of nodes in graph neural networks, which are able to guarantee permutation invariance for graph-structured data.
Compared with exist-ing MPNNs, the scheme extracts more structural information of the graph and can aggregate feature representations from distant nodes via mapping them to neighborhoods defined in the latent space.
We then present an implementation of the geometric aggregation scheme in graph convolutional networks, which we call Geom-GCN, to perform transductive learning, node classification, on graphs.
We design particular geometric relationships to build the structural neighborhood in Euclidean and hyperbolic embedding space respectively.
We choose different embedding methods to map the graph to a suitable latent space for different applications, where suitable topology patterns of graph are preserved.
Finally, we empirically validate and analyze Geom-GCN on a wide range of open graph datasets, and Geom-GCN achieves state-of-the-art results.
In summary, the contribution of this paper is three-fold:
i) We propose a novel geometric aggregation scheme for graph neural network, which operates in both graph and latent space, to overcome the aforementioned two weaknesses;
ii) We present an implementation of the scheme, Geom-GCN, for transductive learning in graph;
iii) We validate and analyze Geom-GCN via extensive comparisons with state-of-the-art methods on several challenging benchmarks.
We tackle the two major weaknesses of existing message-passing neural networks over graphs: losses of discriminative structures and long-range dependencies.
As our key insight, we bridge a discrete graph to a continuous geometric space via graph embedding.
That is, we exploit the principle of convolution: spatial aggregation over a meaningful space. Our approach thus extracts or "recovers" the lost information (discriminative structures and long-range dependencies) from a graph in an embedding space.
We proposed a general geometric aggregation scheme and instantiated it with several specific Geom-GCN implementations, and our experiments validated clear advantages over the state-of-the-art.
As future work, we will explore techniques for choosing the right embedding method, depending not only on input graphs but also on target applications, such as epidemic dynamic prediction on social contact networks (Yang et al., 2017; Pei et al., 2018). | For graph neural networks, the aggregation on a graph can benefit from a continuous space underlying the graph. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:760 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We consider the task of program synthesis in the presence of a reward function over the output of programs, where the goal is to find programs with maximal rewards.
We introduce a novel iterative optimization scheme, where we train an RNN on a dataset of K best programs from a priority queue of the generated programs so far.
Then, we synthesize new programs and add them to the priority queue by sampling from the RNN.
We benchmark our algorithm called priority queue training (PQT) against genetic algorithm and reinforcement learning baselines on a simple but expressive Turing complete programming language called BF.
Our experimental results show that our deceptively simple PQT algorithm significantly outperforms the baselines.
By adding a program length penalty to the reward function, we are able to synthesize short, human readable programs.
Automatic program synthesis is an important task with many potential applications.
Traditional approaches (e.g., BID29 ; BID1 ) typically do not make use of machine learning and therefore require domain specific knowledge about the programming languages and hand-crafted heuristics to speed up the underlying combinatorial search.
To create more generic programming tools without much domain specific knowledge, there has been a surge of recent interest in developing neural models that facilitate some form of memory access and symbolic reasoning (e.g., BID37 ; BID33 ; BID21 ; BID49 ; ).
Despite several appealing contributions, none of these approaches is able to synthesize source code in an expressive programming language. More recently, there have been several successful attempts at using neural networks to explicitly induce programs from input-output examples BID8 BID2 BID35 and even from unstructured text BID35 , but these often use restrictive programming syntax and require a supervisory signal in the form of ground-truth programs or correct outputs.
By contrast, we advocate the use of an expressive programming language called BF 1 , which has a simple syntax, but is Turing complete.
Moreover, we aim to synthesize programs under the reinforcement learning (RL) paradigm, where only a solution checker is required to compute a reward signal.
Furthermore, one can include a notion of code length penalty or execution speed into the reward signal to search for short and efficient programs.
Hence, the problem of program synthesis based on reward is more flexible than other formulations in which the desired programs or correct outputs are required during training. To address program synthesis based on a reward signal, we study two different approaches.
The first approach is a policy gradient (PG) algorithm BID44 , where we train a recurrent neural network (RNN) to generate programs one token at a time.
Then, the program is executed and scored, and a reward feedback is sent back to the RNN to update its parameters such that over time better programs are produced.
The second approach is a deceptively simple optimization algorithm called priority queue training (PQT).
We keep a priority queue of K best programs seen during training and train an RNN with a log-likelihood objective on the top K programs in the queue.
We then sample new programs from the RNN, update the queue, and iterate.
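The loop just described can be sketched as follows; the RNN sampler, the training routine, and the reward function are stand-ins (shown here with a toy string-based "program" space), not the paper's implementation.

```python
import heapq
import random

def priority_queue_training(sample_program, reward_fn, train_on,
                            K=10, iterations=100):
    """Skeleton of PQT: keep the K best programs seen so far and repeatedly
    (1) fit the generative model to them, (2) sample new programs,
    (3) push them into the priority queue."""
    queue = []  # min-heap of (reward, program); the worst kept program sits at the root
    for _ in range(iterations):
        program = sample_program()          # sample from the RNN
        r = reward_fn(program)              # execute / score the program
        if len(queue) < K:
            heapq.heappush(queue, (r, program))
        else:
            heapq.heappushpop(queue, (r, program))  # keep only the top K
        # Maximize log-likelihood of the current top-K programs.
        train_on([p for _, p in queue])
    return max(queue)  # best (reward, program) found

# Toy usage: "programs" are strings of BF tokens, reward = count of '+'.
tokens = list("+-<>[].,")
best = priority_queue_training(
    sample_program=lambda: "".join(random.choice(tokens) for _ in range(8)),
    reward_fn=lambda p: p.count("+"),
    train_on=lambda progs: None,  # stand-in for the RNN update
    K=5, iterations=200)
print(best)
```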
We also compare against a genetic algorithm (GA) baseline which has been shown to generate BF programs BID3 .
Surprisingly, we find that the PQT approach significantly outperforms the GA and PG methods. We assess the effectiveness of our method on the BF programming language.
The BF language is Turing complete, while comprising only 8 operations.
The minimalist syntax of the BF language makes it easier to generate a syntactically correct program than in higher-level languages.
We consider various string manipulation, numerical, and algorithmic tasks.
Our results demonstrate that all of the search algorithms we consider are capable of finding correct programs for most of the tasks, and that our method is the most reliable in that it finds solutions on most random seeds and most tasks.
The key contributions of the paper include:
• We propose a learning framework for program synthesis where only a reward function is required during training (the ground-truth programs or correct outputs are not needed). Further, we advocate the use of a simple and expressive programming language, BF, as a benchmark environment for program synthesis (see also BID3).
• We propose an effective search algorithm using a priority queue and an RNN.
• We propose an experimental methodology to compare program synthesis methods, including genetic algorithm and policy gradient. Our methodology measures the success rate of each synthesis method on average and provides a standard way to tune the hyper-parameters.
With this methodology, we find that a recurrent network trained with priority queue training outperforms the baselines.
In this paper, we considered the task of learning to synthesize programs for problems where a reward function is defined.
We use an RNN trained with our priority queue training method.
We experimented with BF, a simple Turing-complete programming language, and compared our method against a genetic algorithm baseline.
Our experimental results showed that our method is more stable than vanilla policy gradient or a genetic algorithm. That PQT works as a standalone search algorithm is surprising, and future work is needed in order to better explain it.
We can speculate that it is implementing a simple hill climbing algorithm where the buffer stores the best known samples, thereby saving progress, while the RNN functions as an exploration mechanism.
Even more surprising is that this algorithm is able to bootstrap itself to a solution starting from an empty buffer and a randomly initialized RNN.
We believe that our coding environment complements the PQT algorithm, since finding code with non-zero reward through purely random search is feasible. | We use a simple search algorithm involving an RNN and priority queue to find solutions to coding tasks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:761 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present graph wavelet neural network (GWNN), a novel graph convolutional neural network (CNN), leveraging graph wavelet transform to address the shortcomings of previous spectral graph CNN methods that depend on graph Fourier transform.
Different from graph Fourier transform, graph wavelet transform can be obtained via a fast algorithm without requiring matrix eigendecomposition with high computational cost.
Moreover, graph wavelets are sparse and localized in vertex domain, offering high efficiency and good interpretability for graph convolution.
The proposed GWNN significantly outperforms previous spectral graph CNNs in the task of graph-based semi-supervised classification on three benchmark datasets: Cora, Citeseer and Pubmed.
Convolutional neural networks (CNNs) BID15 have been successfully used in many machine learning problems, such as image classification BID10 and speech recognition BID11 , where there is an underlying Euclidean structure.
The success of CNNs lies in their ability to leverage the statistical properties of Euclidean data, e.g., translation invariance.
However, in many research areas, data are naturally located in a non-Euclidean space, with graph or network being one typical case.
The non-Euclidean nature of graph is the main obstacle or challenge when we attempt to generalize CNNs to graph.
For example, convolution is not well defined on graphs, because the size of the neighborhood varies dramatically from node to node.
Existing methods attempting to generalize CNNs to graph data fall into two categories, spatial methods and spectral methods, according to the way that convolution is defined.
Spatial methods define convolution directly on the vertex domain, following the practice of the conventional CNN. For each vertex, convolution is defined as a weighted average function over all vertices located in its neighborhood, with the weighting function characterizing the influence exerted on the target vertex by its neighbors. The main challenge is to define a convolution operator that can handle neighborhoods of different sizes and maintain the weight sharing property of CNNs. Although spatial methods gain some initial success and offer us a flexible framework to generalize CNNs to graphs, it is still elusive to determine an appropriate neighborhood.
Spectral methods define convolution via graph Fourier transform and the convolution theorem. Spectral methods leverage graph Fourier transform to convert signals defined in the vertex domain into the spectral domain, e.g., the space spanned by the eigenvectors of the graph Laplacian matrix, and then the filter is defined in the spectral domain, maintaining the weight sharing property of CNNs. As the pioneering work of spectral methods, spectral CNN BID3 exploited graph data with the graph Fourier transform to implement the convolution operator using the convolution theorem. Some subsequent works make spectral methods spectrum-free BID4 BID14 BID12 , achieving locality in the spatial domain and avoiding the high computational cost of the eigendecomposition of the Laplacian matrix.
In this paper, we present graph wavelet neural network to implement efficient convolution on graph data. We take graph wavelets instead of the eigenvectors of the graph Laplacian as a set of bases, and define the convolution operator via wavelet transform and the convolution theorem. Graph wavelet neural network distinguishes itself from spectral CNN by three desirable properties: (1) graph wavelets can be obtained via a fast algorithm without requiring the eigendecomposition of the Laplacian matrix, and thus are efficient; (2) graph wavelets are sparse, while eigenvectors of the Laplacian matrix are dense, and as a result graph wavelet transform is much more efficient than graph Fourier transform; (3) graph wavelets are localized in the vertex domain, reflecting the information diffusion centered at each node BID27 . This property eases the understanding of graph convolution defined by graph wavelets.
We develop an efficient implementation of the proposed graph wavelet neural network. Convolution in a conventional CNN learns an individual convolution kernel for each pair of input feature and output feature, causing a huge number of parameters, especially when the number of features is high. We detach the feature transformation from convolution and learn a sole convolution kernel among all features, substantially reducing the number of parameters. Finally, we validate the effectiveness of the proposed graph wavelet neural network by applying it to graph-based semi-supervised classification.
Experimental results demonstrate that our method consistently outperforms previous spectral CNNs on three benchmark datasets, i.e., Cora, Citeseer, and Pubmed.
2 OUR METHOD
2.1 PRELIMINARY
Let $G = \{V, E, A\}$ be an undirected graph, where $V$ is the set of nodes with $|V| = n$, $E$ is the set of edges, and $A$ is adjacency matrix with $A_{i,j} = A_{j,i}$ to define the connection between node $i$ and node $j$. The graph Laplacian matrix $L$ is defined as $L = D - A$, where $D$ is a diagonal degree matrix with $D_{i,i} = \sum_j A_{i,j}$, and the normalized Laplacian matrix is $L = I_n - D^{-1/2} A D^{-1/2}$, where $I_n$ is the identity matrix. Since $L$ is a real symmetric matrix, it has a complete set of orthonormal eigenvectors $U = (u_1, u_2, \ldots, u_n)$, known as Laplacian eigenvectors. These eigenvectors have associated real, non-negative eigenvalues $\{\lambda_l\}_{l=1}^{n}$, identified as the frequencies of graph. Eigenvectors associated with smaller eigenvalues carry slow varying signals, indicating that connected nodes share similar values. In contrast, eigenvectors associated with larger eigenvalues carry faster varying signals across connected nodes.
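These preliminary definitions can be illustrated with a few lines of NumPy. This is a generic illustration of the notation above (normalized Laplacian, its eigendecomposition, and the graph Fourier transform $U^\top x$), not code from the paper; the example graph and signal are arbitrary.

```python
import numpy as np

def normalized_laplacian(A):
    """Build L = I_n - D^{-1/2} A D^{-1/2} from an adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    D_inv_sqrt = np.diag(d_inv_sqrt)
    return np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt

# 4-node cycle graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = normalized_laplacian(A)

# Eigendecomposition L = U diag(lambda) U^T; the columns of U form the
# graph Fourier basis, and U.T @ x is the graph Fourier transform of x.
lam, U = np.linalg.eigh(L)
x = np.array([1.0, 2.0, 3.0, 4.0])   # a signal on the nodes
x_hat = U.T @ x                      # spectral representation
print(np.round(lam, 3))
print(np.allclose(U @ x_hat, x))     # inverse transform recovers x
```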
Replacing graph Fourier transform with graph wavelet transform, we proposed GWNN.
Graph wavelet transform has three desirable properties: (1) Graph wavelets are local and sparse; (2) Graph wavelet transform is computationally efficient; (3) Convolution is localized in vertex domain.
These advantages make the whole learning process interpretable and efficient.
Moreover, to reduce the number of parameters and the dependence on huge training data, we detached the feature transformation from convolution.
This practice makes GWNN applicable to large graphs, with remarkable performance improvement on graph-based semi-supervised learning. | We present graph wavelet neural network (GWNN), a novel graph convolutional neural network (CNN), leveraging graph wavelet transform to address the shortcoming of previous spectral graph CNN methods that depend on graph Fourier transform. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:762 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Learning rate decay (lrDecay) is a de facto technique for training modern neural networks.
It starts with a large learning rate and then decays it multiple times.
It is empirically observed to help both optimization and generalization.
Common beliefs in how lrDecay works come from the optimization analysis of (Stochastic) Gradient Descent:
1) an initially large learning rate accelerates training or helps the network escape spurious local minima;
2) decaying the learning rate helps the network converge to a local minimum and avoid oscillation.
Despite the popularity of these common beliefs, experiments suggest that they are insufficient in explaining the general effectiveness of lrDecay in training modern neural networks that are deep, wide, and nonconvex.
We provide another novel explanation: an initially large learning rate suppresses the network from memorizing noisy data while decaying the learning rate improves the learning of complex patterns.
The proposed explanation is validated on a carefully-constructed dataset with tractable pattern complexity.
And its implication, that additional patterns learned in later stages of lrDecay are more complex and thus less transferable, is justified in real-world datasets.
We believe that this alternative explanation will shed light into the design of better training strategies for modern neural networks.
Modern neural networks are deep, wide, and nonconvex.
They are powerful tools for representation learning and serve as core components of deep learning systems.
They are top-performing models in language translation (Sutskever et al., 2014), visual recognition (He et al., 2016), and decision making (Silver et al., 2018).
However, the understanding of modern neural networks is way behind their broad applications.
A series of pioneering works (Zhang et al., 2017; Belkin et al., 2019; Locatello et al., 2019) reveal the difficulty of applying conventional machine learning wisdom to deep learning.
A better understanding of deep learning is a major mission in the AI field.
One obstacle in the way of understanding deep learning is the existence of magic modules in modern neural networks and magic tricks to train them.
Take batch normalization module (Ioffe & Szegedy, 2015) for example, its pervasiveness in both academia and industry is undoubted.
The exact reason why it expedites training and helps generalization, however, remains mysterious and is actively studied in recent years (Bjorck et al., 2018; Santurkar et al., 2018; Kohler et al., 2019) .
Only when we clearly understand these magical practices can we promote the theoretical understanding of modern neural networks.
Learning rate is "the single most important hyper-parameter" (Bengio, 2012) in training neural networks.
Learning rate decay (lrDecay) is a de facto technique for training modern neural networks, where we adopt an initially large learning rate and then decay it by a certain factor after pre-defined epochs.
Popular deep networks such as ResNet (He et al., 2016) and DenseNet (Huang et al., 2017b) are all trained by Stochastic Gradient Descent (SGD) with lrDecay.
Figure 1(a) is an example of lrDecay, with the learning rate decayed by 10 every 30 epochs.
The training is divided into several stages by the moments of decay.
These stages can be easily identified in learning curves (such as Figure 1(b)), where the performance boosts sharply shortly after the learning rate is decayed.
The lrDecay enjoys great popularity due to its simplicity and general effectiveness.
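For concreteness, the schedule described above (divide the learning rate by 10 every 30 epochs) can be written as a small helper; the function name and constants below are illustrative only.

```python
def step_decay(initial_lr, epoch, drop_every=30, factor=0.1):
    """Step-wise lrDecay: start with a large learning rate and multiply it
    by `factor` every `drop_every` epochs (here: divide by 10 every 30)."""
    return initial_lr * (factor ** (epoch // drop_every))

for epoch in (0, 29, 30, 59, 60, 90):
    print(epoch, step_decay(0.1, epoch))
```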
Common beliefs in how lrDecay works are derived from the optimization analysis in (Stochastic) Gradient Descent (LeCun et al., 1991; Kleinberg et al., 2018) .
They attribute the effect of an initially large learning rate to escaping spurious local minima or accelerating training, and attribute the effect of decaying the learning rate to avoiding oscillation around local minima.
Table 1: Comparison of explanations on why lrDecay helps training neural networks. The column "supported" means whether the explanation is supported by the empirical experiments in this paper.
Kleinberg et al. (2018): optimization perspective; an initially large learning rate escapes bad local minima; decaying the learning rate converges to a local minimum.
Proposed: pattern complexity perspective; an initially large learning rate avoids fitting noisy data; decaying the learning rate learns more complex patterns.
However, these common beliefs are insufficient to explain our empirical observations from a series of carefully-designed experiments in Section 4.
In this paper, we provide an alternative view: the magnitude of the learning rate is closely related to the complexity of learnable patterns.
From this perspective, we propose a novel explanation for the efficacy of lrDecay: an initially large learning rate suppresses the memorization of noisy data while decaying the learning rate improves the learning of complex patterns.
This is validated on a carefully-constructed dataset with tractable pattern complexity.
The pattern complexity in real-world datasets is often intractable.
We thus validate the explanation by testing its implication on real-world datasets.
The implication that additional patterns learned in later stages of lrDecay are more complex and thus less transferable across different datasets, is also justified empirically.
A comparison between the proposed explanation and the common beliefs is summarized in Table 1 .
Our explanation is supported by carefully-designed experiments and provides a new perspective on analyzing learning rate decay.
The contribution of this paper is two-fold:
• We demonstrate by experiments that existing explanations of how lrDecay works are insufficient in explaining the training behaviors in modern neural networks.
• We propose a novel explanation based on pattern complexity, which is validated on a dataset with tractable pattern complexity, and its implication is validated on real-world datasets.
The explanation also suggests that complex patterns are only learnable after learning rate decay.
Thus, when the model has learned all simple patterns but the epoch to decay has not yet been reached, immediately decaying the learning rate will not hurt the performance.
This implication is validated in Section A.1.
In this paper, we dive into how learning rate decay (lrDecay) helps modern neural networks.
We uncover the insufficiency of common beliefs and propose a novel explanation: the effect of decaying learning rate is to improve the learning of complex patterns, and the effect of an initially large learning rate is to avoid memorization of noisy data.
It is supported by experiments on a dataset with tractable pattern complexity as well as on real-world datasets.
It would be interesting to further bridge the proposed explanation and the formal analysis of optimization procedure. | We provide another novel explanation of learning rate decay: an initially large learning rate suppresses the network from memorizing noisy data while decaying the learning rate improves the learning of complex patterns. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:763 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We show how an ensemble of $Q^*$-functions can be leveraged for more effective exploration in deep reinforcement learning.
We build on well established algorithms from the bandit setting, and adapt them to the $Q$-learning setting.
We propose an exploration strategy based on upper-confidence bounds (UCB).
Our experiments show significant gains on the Atari benchmark.
Deep reinforcement learning seeks to learn mappings from high-dimensional observations to actions.
Deep Q-learning (BID15) is a leading technique that has been used successfully, especially for video game benchmarks.
However, fundamental challenges remain, for example, improving sample efficiency and ensuring convergence to high quality solutions.
Provably optimal solutions exist in the bandit setting and for small MDPs, and at the core of these solutions are exploration schemes.
However, these provably optimal exploration techniques do not extend to deep RL in a straightforward way. Bootstrapped DQN (BID16) is a previous attempt at adapting a theoretically verified approach to deep RL.
In particular, it draws inspiration from posterior sampling for reinforcement learning (PSRL, BID17 ; BID16 ), which has near-optimal regret bounds.
PSRL samples an MDP from its posterior each episode and exactly solves Q * , its optimal Q-function.
However, in high-dimensional settings, both approximating the posterior over MDPs and solving the sampled MDP are intractable.
Bootstrapped DQN avoids having to establish and sample from the posterior over MDPs by instead approximating the posterior over Q * .
In addition, bootstrapped DQN uses a multi-headed neural network to represent the Q-ensemble.
While the authors proposed bootstrapping to estimate the posterior distribution, their empirical findings show that the best performance is attained by simply relying on different initializations for the different heads, not requiring the sampling-with-replacement process that is prescribed by bootstrapping. In this paper, we design new algorithms that build on the Q-ensemble approach from BID16 .
However, instead of using posterior sampling for exploration, we construct uncertainty estimates from the Q-ensemble.
Specifically, we first propose the Ensemble Voting algorithm where the agent takes action by a majority vote from the Q-ensemble.
Next, we propose the UCB exploration strategy.
This strategy is inspired by established UCB algorithms in the bandit setting and constructs uncertainty estimates of the Q-values.
In this strategy, agents are optimistic and take actions with the highest UCB.
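A common way to realize such an upper-confidence-bound rule from an ensemble, shown here only as an illustrative sketch, is to act greedily with respect to the ensemble mean plus a multiple of the ensemble spread; the coefficient lam and the array layout are assumptions, and the paper's exact construction is given in its method section.

```python
import numpy as np

def ucb_action(q_values, lam=1.0):
    """Optimistic action selection from an ensemble of Q-estimates.

    q_values: array of shape (ensemble_size, num_actions) holding each
              member's Q-value estimates for the current state.
    lam:      exploration coefficient scaling the ensemble spread
              (an illustrative hyper-parameter).
    """
    mean = q_values.mean(axis=0)
    std = q_values.std(axis=0)
    return int(np.argmax(mean + lam * std))

# Toy example: 5 ensemble members, 3 actions.
q = np.random.randn(5, 3)
print(ucb_action(q, lam=1.0))
```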
We demonstrate that our algorithms significantly improve performance on the Atari benchmark. | Adapting UCB exploration to ensemble Q-learning improves over prior methods such as Double DQN, A3C+ on Atari benchmark | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:764 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We focus on the problem of learning a single motor module that can flexibly express a range of behaviors for the control of high-dimensional physically simulated humanoids.
To do this, we propose a motor architecture that has the general structure of an inverse model with a latent-variable bottleneck.
We show that it is possible to train this model entirely offline to compress thousands of expert policies and learn a motor primitive embedding space.
The trained neural probabilistic motor primitive system can perform one-shot imitation of whole-body humanoid behaviors, robustly mimicking unseen trajectories.
Additionally, we demonstrate that it is also straightforward to train controllers to reuse the learned motor primitive space to solve tasks, and the resulting movements are relatively naturalistic.
To support the training of our model, we compare two approaches for offline policy cloning, including an experience efficient method which we call linear feedback policy cloning.
We encourage readers to view a supplementary video (https://youtu.be/CaDEf-QcKwA ) summarizing our results.
A broad challenge in machine learning for control and robotics is to produce policies capable of general, flexible, and adaptive behavior of complex, physical bodies.
To build policies that can effectively control simulated humanoid bodies, researchers must simultaneously overcome foundational challenges related to high-dimensional control, body balance, and locomotion.
Recent progress in deep reinforcement learning has raised hopes that such behaviors can be learned end-to-end with minimal manual intervention.
Yet, even though significant progress has been made thanks to better algorithms, training regimes, and computational infrastructure, the resulting behaviors still tend to exhibit significant idiosyncrasies (e.g. BID2).
One advantage of working with humanoids in this context is that motion capture data is widely available and can serve to help design controllers that produce apparently humanlike movement. Indeed, recent developments are now allowing for the production of highly specialized expert policies which robustly, albeit narrowly, reproduce single motion capture clips (e.g. BID18; BID30).
A remaining challenge on the way to truly flexible and general purpose control is to be able to sequence and generalize individual movements or "skills" in a task-directed manner. Achieving this goal requires not just the ability to acquire individual skills in the first place, but also an architecture and associated training procedure that supports representation, recruitment, and composition of a large number of skills.
This paper presents a step in this direction. Specifically, the setting we focus on will be one in which we have a large number of robust experts that perform single skills well and we wish to transfer these skills into a shared policy that can do what each expert does as well as the expert, while also generalizing to unseen behaviors within the distribution of skills. To this end we design a system that performs one-shot imitation as well as permits straightforward reuse (or transfer) of skills. We require our approach to scale to a very large number of individual skills while also keeping manual intervention and oversight to a minimum.
Our primary contribution is the development of a neural network architecture that can represent and generate many motor behaviors, which we refer to as neural probabilistic motor primitives. This architecture is designed to perform one-shot imitation, while learning a dense embedding space of a large number of individual motor skills. Once trained, this module does not just reproduce individual behaviors in the training data, but can sequence and compose these behaviors in a controlled fashion as well as synthesize novel movements consistent with the training data distribution. Empirically, we also find that training controllers to reuse this learned motor primitive module for new tasks generates surprisingly human-like movement and the behavior generated seems to interpolate the space of behaviors well.
In order to facilitate transfer and compression of expert skills at the scale of thousands of behaviors, we wish to avoid closed-loop RL training. We call the general, offline, functional transfer of policy content policy transfer or policy cloning and consider two approaches. The natural baseline approach involves the application of behavioral cloning to data gathered by executing experts many times, with noise, and logging intended expert actions, resembling the approach of BID16. This works well, as it ensures the student behaves like the expert not only along nominal expert rollouts but also at points arrived at by perturbing the expert. However, this approach may require many rollouts, which can be costly to obtain in many settings. As a more efficient alternative we therefore consider a second solution that operates by comprehensively transferring the functional properties of an expert to a student policy by matching the local noise-feedback properties along one or a small number of representative expert reference trajectories. We call this specific proposal linear feedback policy cloning (LFPC), and we demonstrate that it is competitive with behavioral cloning from many more rollouts in our setting.
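A minimal sketch of the noisy-rollout cloning baseline described above: execute the expert with injected action noise while logging its intended (noise-free) actions, then fit the student by regression on the logged pairs. The toy linear environment, the noise model, and all names are assumptions for illustration; the LFPC variant, which instead matches local noise-feedback behavior along reference trajectories, is not shown.

```python
import numpy as np

def collect_cloning_data(env_step, expert_action, s0, T=200, noise_std=0.1):
    """Roll out a noisily perturbed expert and log (state, intended action)
    pairs; a student policy is later fit by regression on these pairs."""
    states, actions = [], []
    s = s0
    for _ in range(T):
        a = expert_action(s)                 # the expert's intended action
        states.append(s)
        actions.append(a)
        a_noisy = a + np.random.randn(*a.shape) * noise_std
        s = env_step(s, a_noisy)             # environment transition
    return np.array(states), np.array(actions)

# Toy linear system and linear "expert" used purely for illustration.
env_step = lambda s, a: 0.9 * s + a
expert_action = lambda s: -0.5 * s
S, A = collect_cloning_data(env_step, expert_action, s0=np.ones(3))
print(S.shape, A.shape)   # the student would regress A on S (e.g. MSE loss)
```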
In this paper we have described approaches for transfer and compression of control policies.
We have exhibited a motor primitive module that learns to represent and execute motor behaviors for control of a simulated humanoid body.
Using either a variant of behavioral cloning or linear feedback policy cloning, we can train the neural probabilistic motor primitive system to perform robust one-shot imitation, and with the latter we can use relatively restricted data consisting of only single rollouts from each expert.
While LFPC did not work quite as well in the full-scale model as cloning from noisy rollouts, we consider it remarkable that it is possible in our setting to transfer expert behavior using a single rollout.
We believe LFPC holds promise insofar as it may be useful in settings where rollouts are costly to obtain (e.g. adapted to real-world robotic applications), and there is room for further improvement as we did not carefully tune certain parameters, most saliently the marginal noise distribution ∆.
The resulting neural probabilistic motor primitive module is interpretable and reusable. We are optimistic that this kind of architecture could serve as a basis for further continual learning of motor skills. This work has been restricted to motor behaviors which do not involve interactions with objects and where a full set of behaviors is available in advance. Meaningful extensions of this work may attempt to greatly enrich the space of behaviors or demonstrate how to perform continual learning and reuse of new skills. | Neural Probabilistic Motor Primitives compress motion capture tracking policies into one flexible model capable of one-shot imitation and reuse as a low-level controller. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:765 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Data augmentation is a useful technique to enlarge the size of the training set and prevent overfitting for different machine learning tasks when training data is scarce.
However, current data augmentation techniques rely heavily on human design and domain knowledge, and existing automated approaches are yet to fully exploit the latent features in the training dataset.
In this paper we propose Parallel Adaptive GAN Data Augmentation (PAGANDA), where the training set adaptively enriches itself with sample images automatically constructed from Generative Adversarial Networks (GANs) trained in parallel.
We demonstrate by experiments that our data augmentation strategy, with little model-specific considerations, can be easily adapted to cross-domain deep learning/machine learning tasks such as image classification and image inpainting, while significantly improving model performance in both tasks.
Our source code and experimental details are available at https://github.com/miaojiang1987/k-folder-data-augmentation-gan/.
Deep learning and machine learning models produce highly successful results when given sufficient training data.
However, when training data is scarce, overfitting will occur and the resulting model will generalize poorly.
Data augmentation(DA) ameliorates such issues by enlarging the original data set and making more effective use of the information in existing data.
Much prior work has centered on data augmentation strategies based on human design, including heuristic data augmentation strategies such as crop, mirror, rotation and distortion BID15 BID21 , interpolating through labeled data points in feature spaces BID5 , and adversarial data augmentation strategies based on BID22 BID8 .
These methods have greatly aided many deep learning tasks across several domains such as classification BID15 , image segmentation BID24 and image reconstruction/inpainting BID0 .
Despite their success, these DA methods generally require domain-specific expert knowledge, manual operations and extensive amounts of tuning depending on actual contexts BID3 BID6 . In particular, the need to directly operate on existing data with domain knowledge prevents many previous data augmentation strategies from being applicable to more general settings.
To circumvent the need for specific domain knowledge in data augmentation, more recent work BID1 utilizes generative adversarial networks (GANs) BID10 to produce images that better encode features in the latent space of training data. By alternately optimizing the generator G and the discriminator D in the GAN, the GAN is able to produce images similar to the original data and effectively complement the training set. It has been shown in experiments BID1 that GAN-based methods have indeed significantly boosted the performance of classifiers under limited data through automatic augmentation, but applications to other tasks are yet to be explored. Furthermore, given the computational complexity of GANs, a natural way to reduce runtime is to consider parallelism BID13 BID7 .
In view of these considerations, we propose in this paper Parallel Adaptive Generative Adversarial Network Data Augmentation (PAGANDA), where the training set adaptively enriches itself with sample images automatically constructed from Generative Adversarial Networks (GANs) trained in parallel; a high-level sketch of this augmentation loop is given after the contribution list below. Our contributions can be summarized as follows:
• We propose a general adaptive black-box data augmentation strategy to diversify and enhance training data, with no task-specific requirements.
• We also include in our model a novel K-fold parallel framework, which helps make the most use of the existing data.
• Experiments over various datasets and tasks demonstrate the effectiveness of our method in different contexts.
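The following is only a high-level, illustrative skeleton of the adaptive augmentation loop referred to above; the GAN training, sampling, and downstream model training interfaces are placeholders, and the K-fold parallel details are abstracted away.

```python
def adaptive_gan_augmentation(train_set, train_gans, sample, train_model,
                              rounds=5, samples_per_round=1000):
    """Skeleton of GAN-based data augmentation: in each round, GANs
    (possibly several trained in parallel on folds of the data) are fit to
    the current training set, synthetic samples are drawn, and the enlarged
    set is used to train the downstream model."""
    for _ in range(rounds):
        gans = train_gans(train_set)               # e.g. K GANs, one per fold
        synthetic = sample(gans, samples_per_round)
        train_set = train_set + synthetic          # adaptively enrich the set
        model = train_model(train_set)             # classifier / inpainter
    return model

# Toy usage with stub components, purely to show the control flow.
model = adaptive_gan_augmentation(
    train_set=list(range(10)),
    train_gans=lambda data: None,
    sample=lambda gans, n: ["fake"] * 3,
    train_model=lambda data: ("model trained on", len(data)),
    rounds=2)
print(model)
```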
In sum, our paper shows that PAGANDA effectively improves performance on different machine learning tasks with few task-specific considerations.
Our strategy is not only simple to implement, but also demonstrates the capability to generalize to different settings, since it does not require specific information about the task being analyzed.
We hope to apply our idea to other generative models such as VAE BID14 and further optimize our strategy using recent theoretical advances, and wish to investigate the scenarios where the tasks involved are interrelated.
Application wise, we are aiming to apply our parallel GAN model to multi-modal image synthesis/generation where training data is limited. | We present an automated adaptive data augmentation that works for multiple different tasks. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:766 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Deep neural networks with millions of parameters may suffer from poor generalizations due to overfitting.
To mitigate the issue, we propose a new regularization method that penalizes the predictive distribution between similar samples.
In particular, we distill the predictive distribution between different samples of the same label and augmented samples of the same source during training.
In other words, we regularize the dark knowledge (i.e., the knowledge on wrong predictions) of a single network, i.e., a self-knowledge distillation technique, to force it output more meaningful predictions.
We demonstrate the effectiveness of the proposed method via experiments on various image classification tasks: it improves not only the generalization ability, but also the calibration accuracy of modern neural networks.
Deep neural networks (DNNs) have achieved state-of-the-art performance on many machine learning applications, e.g., computer vision (He et al., 2016) , natural language processing (Devlin et al., 2019) , and reinforcement learning (Silver et al., 2016) .
As the scale of training dataset increases, the size of DNNs (i.e., the number of parameters) also scales up to handle such a large dataset efficiently.
However, networks with millions of parameters may incur overfitting and suffer from poor generalizations (Pereyra et al., 2017).
To address the issue, many regularization strategies have been investigated in the literature: early stopping, $L_1$/$L_2$-regularization (Nowlan & Hinton, 1992), dropout (Srivastava et al., 2014), batch normalization (Ioffe & Szegedy, 2015), and data augmentation (Cubuk et al., 2019).
Regularizing the predictive or output distribution of DNNs can be effective because it contains the most succinct knowledge of the model.
On this line, several strategies such as entropy maximization (Pereyra et al., 2017) and angular-margin based methods (Chen et al., 2018; Zhang et al., 2019) have been proposed in the literature.
They can be also influential to solve related problems, e.g., network calibration (Guo et al., 2017) , detection of out-of-distribution samples (Lee et al., 2018) and exploration of the agent in reinforcement learning (Haarnoja et al., 2018) .
In this paper, we focus on developing a new output regularizer for deep models utilizing the concept of dark knowledge (Hinton et al., 2015) , i.e., the knowledge on wrong predictions made by DNN.
Its importance has been first evidenced by the so-called knowledge distillation and investigated in many following works (Romero et al., 2015; Zagoruyko & Komodakis, 2017; Srinivas & Fleuret, 2018; Ahn et al., 2019) .
While the related works (Furlanello et al., 2018; Hessam Bagherinezhad & Farhadi, 2018) use the knowledge distillation (KD; Hinton et al. 2015) to transfer the dark knowledge learned by a teacher network to a student network, we regularize the dark knowledge itself during training a single network, i.e., self-knowledge distillation.
Specifically, we propose a new regularization technique, coined class-wise self-knowledge distillation (CS-KD) that matches or distills the predictive distribution of DNNs between different samples of the same label (class-wise regularization) and augmented samples of the same source (sample-wise regularization) as shown in Figure 1 .
One can expect that the proposed regularization method forces DNNs to produce similar wrong predictions if samples are of the same class, while the conventional cross-entropy loss does not consider such consistency on the wrong predictions.
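A PyTorch-style sketch of the class-wise regularization term, shown only as an illustration: it penalizes the KL divergence between softened predictions for two mini-batches that pair different samples of the same classes. The temperature, the detached target branch, and the pairing scheme are common knowledge-distillation conventions assumed here, not necessarily the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def class_wise_kd_loss(logits_a, logits_b, T=4.0):
    """KL divergence between softened predictions of two batches that
    contain different samples of the same classes (paired index-by-index).
    Gradients are not propagated through the target branch (logits_b);
    the temperature T is illustrative."""
    log_p_a = F.log_softmax(logits_a / T, dim=1)
    p_b = F.softmax(logits_b.detach() / T, dim=1)
    return F.kl_div(log_p_a, p_b, reduction="batchmean") * (T ** 2)

# Toy usage: logits for 8 paired samples and 10 classes; in training this
# term would be added to the usual cross-entropy loss.
za, zb = torch.randn(8, 10), torch.randn(8, 10)
print(class_wise_kd_loss(za, zb).item())
```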
We demonstrate the effectiveness of our regularization method using deep convolutional neural networks, such as ResNet (He et al., 2016) and DenseNet (Huang et al., 2017) trained for image classification tasks on various datasets including CIFAR-100 (Krizhevsky et al., 2009) , TinyImageNet 1 , CUB-200-2011 (Wah et al., 2011) , Stanford Dogs (Khosla et al., 2011) , and MIT67 (Quattoni & Torralba, 2009 ) datasets.
We compare or combine our method with prior regularizers.
In our experiments, the top-1 error rates of our method are consistently smaller than those of prior output regularization methods such as angular-margin based methods (Chen et al., 2018; Zhang et al., 2019) and entropy regularization (Dubey et al., 2018; Pereyra et al., 2017) .
In particular, the gain tends to be larger in overall for the top-5 error rates and the expected calibration errors (Guo et al., 2017) , which confirms that our method indeed makes predictive distributions more meaningful.
Moreover, we investigate a variant of our method by combining it with other types of regularization methods for boosting performance, such as the mixup regularization (Zhang et al., 2018) and the original KD method.
We improve the top-1 error rate of mixup from 37.09% to 31.95% and that of KD from 39.32% to 35.36% under ResNet (He et al., 2016) trained by the CUB-200-2011 dataset.
Our method is very simple to use, and would enjoy a broader usage in the future.
In this paper, we discover a simple regularization method to enhance generalization performance of deep neural networks.
We propose two regularization terms which penalizes the predictive distribution between different samples of the same label and augmented samples of the same source by minimizing the Kullback-Leibler divergence.
We remark that our ideas regularize the dark knowledge (i.e., the knowledge on wrong predictions) itself and encourage the model to produce more meaningful predictions.
Moreover, we demonstrate that our proposed method can be useful for the generalization and calibration of neural networks.
We think that the proposed regularization techniques would enjoy a broader range of applications, e.g., deep reinforcement learning (Haarnoja et al., 2018) and detection of out-of-distribution samples (Lee et al., 2018) . | We propose a new regularization technique based on the knowledge distillation. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:767 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models.
We propose the weight-dropped LSTM, which uses DropConnect on hidden-to-hidden weights, as a form of recurrent regularization.
Further, we introduce NT-ASGD, a non-monotonically triggered (NT) variant of the averaged stochastic gradient method (ASGD), wherein the averaging trigger is determined using a NT condition as opposed to being tuned by the user.
Using these and other regularization strategies, our ASGD Weight-Dropped LSTM (AWD-LSTM) achieves state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2.
In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.
We also explore the viability of the proposed regularization and optimization strategies in the context of the quasi-recurrent neural network (QRNN) and demonstrate comparable performance to the AWD-LSTM counterpart.
The code for reproducing the results is open sourced and is available at https://github.com/salesforce/awd-lstm-lm.
Effective regularization techniques for deep learning have been the subject of much research in recent years.
Given the over-parameterization of neural networks, generalization performance crucially relies on the ability to regularize the models sufficiently.
Strategies such as dropout BID33 and batch normalization BID13 have found great success and are now ubiquitous in feed-forward and convolutional neural networks.
Naïvely applying these approaches to the case of recurrent neural networks (RNNs) has not been highly successful however.
Many recent works have hence been focused on the extension of these regularization strategies to RNNs; we briefly discuss some of them below. A naïve application of dropout BID33 to an RNN's hidden state is ineffective as it disrupts the RNN's ability to retain long-term dependencies BID40 .
BID7 propose overcoming this problem by retaining the same dropout mask across multiple time steps as opposed to sampling a new binary mask at each timestep.
Another approach is to regularize the network through limiting updates to the RNN's hidden state.
One such approach is taken by BID31 wherein the authors drop updates to network units, specifically the input gates of the LSTM, in lieu of the units themselves.
This is reminiscent of zoneout BID20 where updates to the hidden state may fail to occur for randomly selected neurons.Instead of operating on the RNN's hidden states, one can regularize the network through restrictions on the recurrent matrices as well.
This can be done either through restricting the capacity of the matrix BID0 BID39 BID14 or through element-wise interactions (Balduzzi & Ghifary, 2016; BID32).
Other forms of regularization explicitly act upon activations, such as batch normalization BID13 , recurrent batch normalization BID4 , and layer normalization BID1 . These all introduce additional training parameters and can complicate the training process while increasing the sensitivity of the model.
In this work, we investigate a set of regularization strategies that are not only highly effective but which can also be used with no modification to existing LSTM implementations. The weight-dropped LSTM applies recurrent regularization through a DropConnect mask on the hidden-to-hidden recurrent weights. Other strategies include the use of randomized-length backpropagation through time (BPTT), embedding dropout, activation regularization (AR), and temporal activation regularization (TAR).
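To illustrate the idea of DropConnect on the recurrent weights, here is a didactic re-implementation of a single LSTM layer in PyTorch in which one dropout mask is applied to the hidden-to-hidden matrix per forward pass. The released AWD-LSTM code wraps the standard (cuDNN-compatible) LSTM module rather than re-implementing the cell; the class name, initialization, and gate ordering below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WeightDropLSTMCell(nn.Module):
    """Minimal LSTM layer with DropConnect on the hidden-to-hidden weights.

    A single dropout mask is sampled for w_hh per forward call (i.e. per
    sequence), so the same mask is reused across all time steps."""

    def __init__(self, input_size, hidden_size, wdrop=0.5):
        super().__init__()
        self.hidden_size = hidden_size
        self.wdrop = wdrop
        self.w_ih = nn.Parameter(torch.randn(4 * hidden_size, input_size) * 0.1)
        self.w_hh = nn.Parameter(torch.randn(4 * hidden_size, hidden_size) * 0.1)
        self.bias = nn.Parameter(torch.zeros(4 * hidden_size))

    def forward(self, x_seq):                       # x_seq: (T, B, input_size)
        T, B, _ = x_seq.shape
        h = x_seq.new_zeros(B, self.hidden_size)
        c = x_seq.new_zeros(B, self.hidden_size)
        # DropConnect: randomly zero entries of the recurrent weight matrix.
        w_hh = nn.functional.dropout(self.w_hh, p=self.wdrop,
                                     training=self.training)
        outputs = []
        for t in range(T):
            gates = x_seq[t] @ self.w_ih.t() + h @ w_hh.t() + self.bias
            i, f, g, o = gates.chunk(4, dim=1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            outputs.append(h)
        return torch.stack(outputs), (h, c)

# Toy usage: sequence length 5, batch 2, feature size 3, hidden size 8.
cell = WeightDropLSTMCell(3, 8)
out, _ = cell(torch.randn(5, 2, 3))
print(out.shape)   # torch.Size([5, 2, 8])
```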
As no modifications are required of the LSTM implementation, these regularization strategies are compatible with black box libraries, such as NVIDIA cuDNN, which can be many times faster than naïve LSTM implementations.
Effective methods for training deep recurrent networks have also been a topic of renewed interest.
Once a model has been defined, the training algorithm used is required to not only find a good minimizer of the loss function but also converge to such a minimizer rapidly. The choice of the optimizer is even more important in the context of regularized models since such strategies, especially the use of dropout, can impede the training process. Stochastic gradient descent (SGD) and its variants such as Adam BID18 and RMSprop BID36 are amongst the most popular training methods. These methods iteratively reduce the training loss through scaled (stochastic) gradient steps. In particular, Adam has been found to be widely applicable despite requiring less tuning of its hyperparameters. In the context of word-level language modeling, past work has empirically found that SGD outperforms other methods in not only the final loss but also in the rate of convergence. This is in agreement with recent evidence pointing to the insufficiency of adaptive gradient methods BID38 .
Given the success of SGD, especially within the language modeling domain, we investigate the use of averaged SGD (AvSGD) BID29 , which is known to have superior theoretical guarantees. AvSGD carries out iterations similar to SGD, but instead of returning the last iterate as the solution, returns an average of the iterates past a certain, tuned, threshold T. This threshold T is typically tuned and has a direct impact on the performance of the method. We propose a variant of AvSGD where T is determined on the fly through a non-monotonic criterion and show that it achieves better training outcomes compared to SGD.
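One simple way to express a non-monotonically triggered switch from plain SGD to averaging is sketched below; the exact criterion, the non-monotone interval n, and the synthetic validation curve are illustrative assumptions rather than the paper's precise algorithm.

```python
def should_start_averaging(val_losses, n=5):
    """One simple non-monotonic trigger: begin averaging the SGD iterates
    once the latest validation loss is no better than the best loss that
    was observed up to n evaluations ago (n is the non-monotone interval)."""
    t = len(val_losses)
    return t > n and val_losses[-1] > min(val_losses[: t - n])

# Toy usage: a synthetic, eventually plateauing validation curve.
losses = []
w = 10.0
for step in range(50):
    w -= 0.3 * w                                    # stand-in for an SGD update
    losses.append(max(w, 1.0) + 0.01 * (step % 3))  # synthetic validation loss
    if should_start_averaging(losses, n=5):
        print("switch to averaging at step", step)
        break
# From this point on, AvSGD would return the running average of the
# subsequent iterates instead of the final iterate.
```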
In this work, we discuss regularization and optimization strategies for neural language models.
We propose the weight-dropped LSTM, a strategy that uses a DropConnect mask on the hidden-to-hidden weight matrices, as a means to prevent overfitting across the recurrent connections.
Further, we investigate the use of averaged SGD with a non-monotonic trigger for training language models and show that it outperforms SGD by a significant margin.
We investigate other regularization strategies including the use of variable BPTT length and achieve a new state-of-the-art perplexity on the PTB and WikiText-2 data sets.
Our models outperform custom-built RNN cells and complex regularization strategies that preclude the possibility of using optimized libraries such as the NVIDIA cuDNN LSTM.
We explore the use of a neural cache in conjunction with our proposed model and show that this further improves the performance, thus attaining an even lower state-of-the-art perplexity.
We also explore the viability of using the proposed regularization and optimization strategies in the context of a quasi-recurrent neural network (QRNN) and demonstrate comparable performance to the LSTM counterpart.
While the regularization and optimization strategies proposed are demonstrated on the task of language modeling, we anticipate that they would be generally applicable across other sequence learning tasks. | Effective regularization and optimization strategies for LSTM-based language models achieves SOTA on PTB and WT2. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:768 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Various methods of measuring unit selectivity have been developed with the aim of better understanding how neural networks work.
But the different measures provide divergent estimates of selectivity, and this has led to different conclusions regarding the conditions in which selective object representations are learned and the functional relevance of these representations.
In an attempt to better characterize object selectivity, we undertake a comparison of various selectivity measures on a large set of units in AlexNet, including localist selectivity, precision, class-conditional mean activity selectivity (CCMAS), network dissection, the human interpretation of activation maximization (AM) images, and standard signal-detection measures.
We find that the different measures provide different estimates of object selectivity, with precision and CCMAS measures providing misleadingly high estimates.
Indeed, the most selective units had a poor hit-rate or a high false-alarm rate (or both) in object classification, making them poor object detectors.
We fail to find any units that are even remotely as selective as the 'grandmother cell' units reported in recurrent neural networks.
In order to generalize these results, we compared selectivity measures on a few units in VGG-16 and GoogLeNet trained on the ImageNet or Places-365 datasets that have been described as 'object detectors'.
Again, we find poor hit-rates and high false-alarm rates for object classification.
There have been recent attempts to understand how neural networks (NNs) work by analyzing hidden units one-at-a-time using various measures such as localist selectivity (Bowers et al., 2014) , class-conditional mean activity selectivity (CCMAS) (Morcos et al., 2018) , precision (Zhou et al., 2015) , network dissection (Zhou et al., 2018a) , and activation maximization (AM) (Erhan et al., 2009) .
These measures are all taken to provide evidence that some units respond highly selectively to categories of objects under some conditions.
Not only are these findings surprising given the widespread assumption that NNs only learn highly distributed and entangled representations, they raise a host of questions, including the functional importance of these selective representations (Zhou et al., 2018b) , the conditions in which they are learned (e.g., Morcos et al., 2018) , and the relation between these representations and the selective neurons observed in cortex (Bowers, 2009 ).
To answer these questions, it is necessary to have a better understanding of what these metrics actually measure, and how they relate to one another.
Accordingly, we directly compare these measures of selectivity on the same set of units as well as adopt standard signal-detection measures in an attempt to provide better measures of single-unit selectivity to object category.
In addition, to provide a more intuitive assessment of selectivity, we report jitterplots for a few of the most selective units that visually display how the unit responds to the different image categories.
We focus on AlexNet (Krizhevsky et al., 2012 ) trained on ImageNet (Deng et al., 2009 ) because many authors have studied the selectivity of single hidden units in this model using a range of quantitative (Zhou et al., 2018a; and qualitative (Nguyen et al., 2017; Yosinski et al., 2015; Simonyan et al., 2013) methods.
But we also compare different selectivity measures on specific units in VGG-16 (Simonyan and Zisserman, 2014) and GoogLeNet (Szegedy et al., 2015) trained on the the ImageNet and Places-365 datasets that were characterized by Zhou et al. (2018a) as "object detectors" based on their Network Dissection method (Zhou et al., 2018a) .
Our main findings are:
1. The precision and CCMAS measures are misleading with near-maximum selectivity scores associated with units that strongly respond to many different image categories.
By contrast, the signal-detection measures more closely capture the level of selectivity displayed in the jitterplots (Sec. 3.1).
2. Units with interpretable AM images do not correspond to highly selective representations (Sec. 3.2).
3. The Network Dissection method also provides a misleading measure for "object detectors" (Sec. 3.3).
In one line of research, Bowers et al. (2014; 2016) assessed the selectivity of single hidden units in recurrent neural networks (RNNs) designed to model human short-term memory.
They reported many 'localist' or 'grandmother cell' units that were 100% selective for specific letters or words, where all members of the selective category were more active than and disjoint from all non-members, as can be shown in jitterplots (Berkeley et al., 1995 ) (see Fig. 1 for a unit selective to the letter 'j').
The authors argued that the network learned these representations in order to co-activate multiple letters or words at the same time in short-term memory without producing ambiguous blends of overlapping distributed patterns (the so-called 'superposition catastrophe').
Consistent with this hypothesis, localist units did not emerge when the model was trained on letters or words one-at-a-time (Bowers et al., 2014 ) (see Fig. 1 for an example of a non-selective unit).
In parallel, researchers have reported selective units in the hidden layers of various CNNs trained to classify images into one of multiple categories (Zhou et al., 2015; Morcos et al., 2018; Zeiler and Fergus, 2014; Erhan et al., 2009) , for a review see Bowers (2017) .
For example, Zhou et al. (2015) assessed the selectivity of units in the pool5 layer of two CNNs trained to classify images into 1000 objects and 205 scene categories, respectively.
They reported many highly selective units that they characterized as 'object detectors' in both networks.
Similarly, Morcos et al. (2018) reported that CNNs trained on CIFAR-10 and ImageNet learned many highly selective hidden units, with CCMAS scores approaching the maximum of 1.0.
These later findings appear to be inconsistent with Bowers et al. (2016) who failed to observe selective representations in fully connected NNs trained on stimuli one-at-a-time (see Fig. 1 ), but the measures of selectivity that have been applied across studies are different, and accordingly, it is difficult to directly compare results.
A better understanding of the relation between selectivity measures is vital given that different measures are frequently used to address similar issues.
For example, both the human interpretability of generated images (Le, 2013) and localist selectivity (Bowers et al., 2014) have been used to make claims about 'grandmother cells', but it is not clear whether they provide similar insights into unit selectivity.
Similarly, based on their precision metric, Zhou et al. (2015) claim that the object detectors learned in CNNs play an important role in identifying specific objects, whereas Morcos et al. (2018) challenge this conclusion based on their finding that units with high CCMAS measures were not especially important in the performance of their CNNs and concluded: "...it implies that methods for understanding neural networks based on analyzing highly selective single units, or finding optimal inputs for single units, such as activation maximization (Erhan et al., 2009 ) may be misleading".
This makes a direct comparison between selectivity measures all the more important.
In order to directly compare and better understand the different selectivity measures, we assessed the (1) localist, (2) precision, and (3) CCMAS selectivity of units in the conv5, fc6, and fc7 layers of AlexNet trained on ImageNet, and in addition we employed a range of signal-detection measures on these units, namely (4) recall with 100% and 95% precision, (5) maximum informedness, (6) specificity at maximum informedness, (7) recall (also called sensitivity) at maximum informedness, and false-alarm rate at maximum informedness (described in Sec. 2).
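To make the comparison concrete, the following sketch (our own illustration, not taken from any released code; the array names and the top-100 cutoff for precision are assumptions based on the descriptions above) computes the two headline measures for a single unit from its activations over a labelled image set:

import numpy as np

def precision_score(acts, labels, top_n=100):
    # Precision in the sense of Zhou et al. (2015): the fraction of the top-N most
    # strongly activating images that belong to the most common class among them.
    top_labels = labels[np.argsort(acts)[::-1][:top_n]]
    _, counts = np.unique(top_labels, return_counts=True)
    return counts.max() / float(top_n)

def ccmas_score(acts, labels):
    # CCMAS in the sense of Morcos et al. (2018): (mu_max - mu_rest) / (mu_max + mu_rest),
    # where mu_max is the largest class-conditional mean activation and mu_rest is
    # the mean activation over images from all other classes.
    classes = np.unique(labels)
    class_means = np.array([acts[labels == c].mean() for c in classes])
    best = class_means.argmax()
    mu_max = class_means[best]
    mu_rest = acts[labels != classes[best]].mean()
    return (mu_max - mu_rest) / (mu_max + mu_rest)

A jitterplot is simply a scatter of acts grouped by labels, which is what the signal-detection measures summarize numerically.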
We also assessed the selectivity of a few units in VGG-16 and GoogLeNet models trained on the ImageNet and Places-365 dataset that were highly selective according to the Network Dissection method (Zhou et al., 2018a) .
We show that the precision and CCMAS measures often provide misleadingly high estimates of object selectivity compared to other measures, and we do not find any units that can be reasonably described as 'object detectors' given that the most selective units show a low hit-rate or a high false-alarm rate (or both) when classifying images.
At best, the most selective units in CNNs are sensitive to some unknown feature that is weakly associated with the class in question.
[Figure 1 caption, partial] Top middle: jitterplot of a non-selective unit (unit 160) found in an RNN trained on words one-at-a-time, from Bowers et al. (2016). Top right: activation maximization image of AlexNet unit conv5 9 that resembles a lighthouse (Nguyen et al., 2016). Bottom: highest-activation images for a 'lamp' detector with 84% precision in layer conv5 of AlexNet, from Zhou et al. (2015).
In addition to these quantitative measures and jitterplots we assessed selectivity with a common qualitative measure, namely, human interpretation of images generated by a state-of-the-art activation maximization (AM) method (Nguyen et al., 2017) .
AM images are generated to strongly activate individual units, and some of them are interpretable by humans (e.g., a generated image that looks like a lighthouse, see Fig. 1 ).
For the first time, we systematically evaluated the interpretability of the AM images and compared these ratings with the selectivity measures for the corresponding units.
We show that the few hidden units with interpretable AM images are not highly selective.
Our central finding is that different measures of single-unit selectivity for objects support very different conclusions when applied to the same units in AlexNet.
In contrast with the precision (Zhou et al., 2015) and CCMAS (Morcos et al., 2018) measures that suggest some highly selective units for objects in layers conv5, fc6, and fc7, the recall with perfect precision and false alarm rates at maximum informedness show low levels of selectivity.
Indeed, the most selective units have a poor hit-rate or a high false-alarm rate (or both) for identifying an object class.
The same outcome was observed with units in VGG-16 and GoogLeNet trained on either ImageNet or the Places-365 dataset.
Not only do the different measures provide very different assessments of selectivity, the precision, CCMAS, and Network Dissection measures provide highly misleading estimates of selectivity that have led to mistaken conclusions.
For example, unit fc6 1199 in AlexNet trained on ImageNet is considered a Monarch Butterfly detector according to Zhou et al. (2015), with a precision score of 98% (and a CCMAS score of .93).
But the jitterplot in Fig. 3 and signal detection scores (e.g., high false alarm rate at maximum informedness) show this is a mischaracterisation of this unit.
In the same way, the Network Dissection method identified many object detectors in VGG-16 and GoogLeNet CNNs, but the jitterplots in Fig. 5 (and precision scores) show that this conclusion is unjustified.
For additional problems with the CCMAS score see Figure 5 in Appendix C. Similarly, the images generated by Activation Maximization also provided a misleading estimate of selectivity given that interpretable images were associated with very low selectivity scores.
This has led to confusions that have delayed theoretical progress.
For example, describing single units in CNNs as "object detectors" in response to high precision measures (Zhou et al.) suggests similar types of representations are learned in CNNs and RNNs.
Indeed, we are not aware of anyone in the machine learning community who has even considered the hypothesis that selectivity is reduced in CNNs compared to RNNs.
Our findings highlight the contrasting results.
What should be made of the finding that localist representations are sometimes learned in RNNs (units with perfect specificity and recall), but not in AlexNet and related CNNs?
The failure to observe localist units in the hidden layers of these CNNs is consistent with Bowers et al. (2014) 's claim that these units emerge in order to support the co-activation of multiple items at the same time in short-term memory.
That is, localist representations may be the solution to the superposition catastrophe, and these CNNs only have to identify one image at a time.
The pressure to learn highly selective representations in response to the superposition constraint may help explain the reports of highly selective neurons in cortex given that the cortex needs to co-activate multiple items at the same time in order to support short-term memory (Bowers et al., 2016) .
Note, the RNNs that learned localist units were very small in scale compared to CNNs we have studied here, and accordingly, it is possible that the contrasting results reflect the size of the networks rather than the superposition catastrophe per se.
Relevant to this issue a number of authors have reported the existence of selective units in larger RNNs with long-short term memory (LSTM) units (Karpathy et al., 2016; Radford et al., 2017; Lakretz et al., 2019; Na et al., 2019) .
Indeed, Lakretz et al. (2019) use the term 'grandmother cell' to describe the units they observed.
It will be interesting to apply our measures of selectivity to these larger RNNs and see whether these units are indeed 'grandmother units'.
It should also be noted that there are recent reports of impressively selective representations in Generative Adversarial Networks (Bau et al., 2019) and Variational Autoencoders (Burgess et al., 2018) where the superposition catastrophe is not an issue.
Again, it will be interesting to assess the selectivity of these units according to signal detection measures in order to see whether there are additional computational pressures to learn highly selective or even grandmother cells.
We will be exploring these issues in future work. | Looking for object detectors using many different selectivity measures; CNNs are slightly selective , but not enough to be termed object detectors. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:769 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We study the problem of safe adaptation: given a model trained on a variety of past experiences for some task, can this model learn to perform that task in a new situation while avoiding catastrophic failure?
This problem setting occurs frequently in real-world reinforcement learning scenarios such as a vehicle adapting to drive in a new city, or a robotic drone adapting a policy trained only in simulation.
While learning without catastrophic failures is exceptionally difficult, prior experience can allow us to learn models that make this much easier.
These models might not directly transfer to new settings, but can enable cautious adaptation that is substantially safer than naïve adaptation as well as learning from scratch.
Building on this intuition, we propose risk-averse domain adaptation (RADA).
RADA works in two steps: it first trains probabilistic model-based RL agents in a population of source domains to gain experience and capture epistemic uncertainty about the environment dynamics.
Then, when dropped into a new environment, it employs a pessimistic exploration policy, selecting actions that have the best worst-case performance as forecasted by the probabilistic model.
We show that this simple maximin policy accelerates domain adaptation in a safety-critical driving environment with varying vehicle sizes.
We compare our approach against other approaches for adapting to new environments, including meta-reinforcement learning.
An experienced human driving a rental car for the first time is initially very aware of her lack of familiarity with the car.
How sensitive is it to acceleration and braking?
How does it respond to steering?
How wide is the vehicle and what is its turning radius?
She drives mindfully, at low speeds, braking far ahead of desired stops, and making wide turns, all the while observing the car's responses and adapting to it.
Within minutes, once she is familiar with the car, she begins to drive more fluently and efficiently.
Humans draw upon their prior experiences to perform this kind of safe, quick adaptation to unfamiliar situations all the time, such as when playing with a new tennis racquet, or walking on a new slippery surface.
Such problems are critical to address in autonomous systems: such as when a self-driving car must learn to drive in a new country, or when a planetary rover might have to learn to explore a harsh new environment.
Missteps in real-world situations can cause real damage to robots and their environments.
An important bottleneck in applying today's standard machine learning approaches to control in these real-world situations is that they are trained without any notion of safe behavior under uncertainty.
Recent works have attempted to address this by proposing methods for safe exploration during reinforcement learning -in other words, how might an agent avoid risky actions during training time?
This still requires that the robot acquire its notions of uncertainty and risks at the same time as it is learning to perform tasks in the new environment, which is difficult and precarious.
Could we instead rely on transferring notions of uncertainty and risk acquired from prior experience in other related domains, such as in simulated environments, where safety may not be as much of a concern?
In other words, could we make the safe learning problem easier through knowledge transfer, relaxing the problem to safe adaptation, like the human driver?
How might the planetary rover draw on its experience in many varied terrains on Earth to perform meaningfully cautious actions during learning on the unknown terrain of a new planet?
Motivated by these questions, we propose a model-based reinforcement learning approach called risk averse domain adaptation (RADA).
RADA works by first pretraining a probabilistic dynamics model on a population of training domains with varied, unknown dynamics.
Through this experience over many environments, the model learns to estimate the epistemic uncertainty (model uncertainty) of unknown environment dynamics, thus permitting estimation of a distribution of outcomes for any action executed by the agent.
When introduced into a new target environment, RADA uses this estimated distribution of outcomes to select cautious actions that obey the following maximin notion of risk-aversion: among various candidate action sequences, it executes those that lead to the best worst-case performance, as predicted by the model.
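A minimal sketch of this maximin selection rule (ours, for illustration only; the ensemble interface, the random-shooting candidate search, and all names are assumptions rather than the paper's implementation):

import numpy as np

def risk_averse_action(state, ensemble, horizon=10, n_candidates=100, action_dim=2):
    # ensemble: hypothetical list of probabilistic dynamics/return models, each
    # mapping (state, action_sequence) -> predicted return for that sequence.
    candidates = np.random.uniform(-1.0, 1.0, size=(n_candidates, horizon, action_dim))
    best_seq, best_worst_case = None, -np.inf
    for seq in candidates:
        # The worst case across the ensemble is a proxy for epistemic uncertainty.
        worst_case = min(model(state, seq) for model in ensemble)
        if worst_case > best_worst_case:
            best_worst_case, best_seq = worst_case, seq
    return best_seq[0]  # execute the first action, then re-plan (MPC style)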
Much like the human driver in the example above, all the information collected during this cautious phase of exploration is fed back into the model to finetune it to the new domain, leading to increasingly confident predictions.
Over time, RADA steadily estimates lower risks and approaches optimality in the target environment.
As we demonstrate in experiments in a driving domain, the experience acquired during RADA's pretraining phase enables fast yet safe adaptation within only a handful of episodes.
We have proposed RADA, a new approach to model-based reinforcement learning for safe, quick adaptation of RL agents in new environments with unknown dynamics.
RADA relies on two key ideas: transferring knowledge from training in a variety of training environments, and using a maximin notion of risk-aversion during action selection in the target environment.
We show in a physically accurate driving environment that RADA performs fast, safe adaptation to learn to drive cars around corners, even when they are up to two times larger than any cars it has driven at pretraining time. | Adaptation of an RL agent in a target environment with unknown dynamics is fast and safe when we transfer prior experience in a variety of environments and then select risk-averse actions during adaptation. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:77 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The folding structure of the DNA molecule combined with helper molecules, also referred to as the chromatin, is highly relevant for the functional properties of DNA.
The chromatin structure is largely determined by the underlying primary DNA sequence, though the interaction is not yet fully understood.
In this paper we develop a convolutional neural network that takes an image-representation of primary DNA sequence as its input, and predicts key determinants of chromatin structure.
The method is developed such that it is capable of detecting interactions between distal elements in the DNA sequence, which are known to be highly relevant.
Our experiments show that the method outperforms several existing methods both in terms of prediction accuracy and training time.
DNA is perceived as a sequence over the letters {A,C,G,T }, the alphabet of nucleotides.
This sequence constitutes the code that acts as a blueprint for all processes taking place in a cell.
But beyond merely reflecting primary sequence, DNA is a molecule, which implies that DNA assumes spatial structure and shape.
The spatial organization of DNA is achieved by integrating ("recruiting") other molecules, the histone proteins, that help to assume the correct spatial configuration.
The combination of DNA and helper molecules is called chromatin; the spatial configuration of the chromatin, finally, defines the functional properties of local areas of the DNA BID9 . Chromatin
can assume several function-defining epigenetic states, where states vary along the genome BID12 . The key determinant
for spatial configuration is the underlying primary DNA sequence: sequential patterns are responsible for recruiting histone proteins and their chemical modifications, which in turn give rise to or even define the chromatin states. The exact configuration
of the chromatin and its interplay with the underlying raw DNA sequence are under active research. Despite many enlightening
recent findings (e.g. BID6 ; The ENCODE Project Consortium, 2012; BID11 ), comprehensive understanding has not yet been reached. Methods that predict chromatin
related states from primary DNA sequence are thus of utmost interest. In machine learning, many prediction
methods are available, of which deep neural networks have recently been shown to be promising in many applications BID17 . Also in biology deep neural networks
have been shown to be valuable (see BID3 for a review). Although DNA is primarily viewed as a
sequence, treating genome sequence data as just a sequence neglects its inherent and biologically relevant spatial configuration and the resulting interaction between distal sequence elements. We hypothesize that a deep neural network
designed to account for long-term interactions can improve performance. Additionally, the molecular spatial configuration
of DNA suggests the relevance of a higher-dimensional spatial representation of DNA. However, due to the lack of comprehensive understanding
with respect to the structure of the chromatin, sensible suggestions for such higher-dimensional representations of DNA do not exist. One way to enable a neural net to identify long-term interactions is the use of fully connected layers. However, when the number of input nodes to the fully connected
layer is large, this comes with a large number of parameters. We therefore use three other techniques to detect long-term interactions. First, most convolutional neural networks (CNNs) use small convolution
filters. Using larger filters already at an early stage in the network allows for
early detection of long-term interactions without the need of fully connected layers with a large input. Second, a deep network similar to the ResNet BID14 or Inception BID27 network
design prevents features found in early layers from vanishing. Also, they reduce the size of the layers such that the final fully connected
layers have a smaller input and don't require a huge number of parameters. Third, we propose a novel kind of DNA representation by mapping DNA sequences
to higher-dimensional images using space-filling curves. Space-filling curves map a 1-dimensional line to a 2-dimensional space by mapping
each element of the sequence to a pixel in the 2D image. By doing so, proximal elements of the sequence will stay in close proximity to one
another, while the distance between distal elements is reduced.The space-filling curve that will be used in this work is the Hilbert curve which has several advantages. (i): [Continuity] Hilbert curves optimally ensure that the pixels representing two
sequence elements that are close within the sequence are also close within the image BID4 BID1 . (ii): [Clustering property] Cutting out rectangular subsets of pixels (which is what
convolutional filters do) yields a minimum amount of disconnected subsequences BID20 . (iii): If a rectangular subimage cuts out two subsequences that are disconnected in
the original sequence, chances are maximal that the two different subsequences are relatively far apart (see our analysis in Appendix A).The combination of these points arguably renders Hilbert curves an interesting choice
for representing DNA sequence as two-dimensional images. (i) is a basic requirement for mapping short-term sequential relationships, which are
ubiquitous in DNA (such as codons, motifs or intron-exon structure).(ii) relates to the structure of the chromatin, which -without all details being fully
understood -is tightly packaged and organized in general. Results from BID10 indicate that when arranging DNA sequence based on Hilbert curves,
contiguous areas belonging to identical chromatin states cover rectangular areas. In particular, the combination of (i) and (ii) motivate the application of convolutional
layers on Hilbert curves derived
from DNA
sequence: rectangular subspaces, in other words, submatrices encoding the convolution operations, contain a minimum amount of disconnected pieces of DNA. (iii) finally is beneficial insofar as long-term interactions affecting DNA can also be
mapped. This in particular applies to so-called enhancers and silencers, which exert positive (
enhancer) or negative (silencer) effects on the activity of regions harboring genes, even though they may be far apart from those regions in terms of sequential distance.
In this paper we developed a CNN that outperforms the state-of-the-art for prediction of epigenetic states from primary DNA sequence.
Indeed, our methods show improved prediction accuracy and training time compared to the currently available chromatin state prediction methods from Pahm TAB1 in BID15 .
In the splice dataset, Seq-CNN performed best when using 4-mers, while for HCNN and seq-HCNN 1-mers yielded the best performance. (Figure 4 caption: HCNN with different mapping strategies.) [...] by BID21 and thus yields a huge number of parameters in the fully connected layer.
In HCNN on the other hand the number of nodes is strongly reduced before introducing a fully connected layer.
Third, the use of a two-dimensional input further enhances the model's capabilities of incorporating long-term interactions. We showed that seq-HCNN and HCNN are not only capable of predicting chromatin state, but can also predict the presence or absence of splice-junctions in DNA subsequences.
This suggests that our approach could be useful for DNA sequence classification problems in general. Hilbert curves have several properties that are desirable for DNA sequence classification.
The intuitive motivation for the use of Hilbert curves is supported by good results when comparing Hilbert curves to other space-filling curves.
Additionally, Hilbert curves have previously been shown to be useful for visualization of DNA sequences BID2 . The
main limitation of Hilbert curves is their fixed length, which implies that the generated image contains some empty spaces. These
spaces consume computation resources; nevertheless, the 2D representation still yields reduced training times compared to the 1D-sequence representation, presumably due to the high degree of optimization for 2D inputs present in standard CNN frameworks. Given that a substantial part of the improvements in performance is due to our novel architecture, we plan to investigate in more detail how the components of the architecture are intertwined with improvements in prediction performance. We also
plan to further investigate why Hilbert curves yield the particular advantages in terms of robustness and false discovery control we have observed here. | A method to transform DNA sequences into 2D images using space-filling Hilbert Curves to enhance the strengths of CNNs | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:770 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster.
The key insight is that, in addition to features, we can transfer similarity information and this is sufficient to learn a similarity function and clustering network to perform both domain adaptation and cross-task transfer learning.
We begin by reducing categorical information to pairwise constraints, which only considers whether two instances belong to the same class or not (pairwise semantic similarity).
This similarity is category-agnostic and can be learned from data in the source domain using a similarity network.
We then present two novel approaches for performing transfer learning using this similarity function.
First, for unsupervised domain adaptation, we design a new loss function to regularize classification with a constrained clustering loss, hence learning a clustering network with the transferred similarity metric generating the training inputs.
Second, for cross-task learning (i.e., unsupervised clustering with unseen categories), we propose a framework to reconstruct and estimate the number of semantic clusters, again using the clustering network.
Since the similarity network is noisy, the key is to use a robust clustering algorithm, and we show that our formulation is more robust than the alternative constrained and unconstrained clustering approaches.
Using this method, we first show state of the art results for the challenging cross-task problem, applied on Omniglot and ImageNet.
Our results show that we can reconstruct semantic clusters with high accuracy.
We then evaluate the performance of cross-domain transfer using images from the Office-31 and SVHN-MNIST tasks and present top accuracy on both datasets.
Our approach doesn't explicitly deal with domain discrepancy.
If we combine with a domain adaptation loss, it shows further improvement.
Supervised learning has made significant strides in the past decade, with substantial advancements arising from the use of deep neural networks.
However, a large part of this success has come from the existence of extensive labeled datasets.
In many situations, it is not practical to obtain such data due to the amount of effort required or when the task or data distributions change dynamically.
To deal with these situations, the fields of transfer learning and domain adaptation have explored how to transfer learned knowledge across tasks or domains.
Many approaches have focused on cases where the distributions of the features and labels have changed, but the task is the same (e.g., classification across datasets with the same categories).
Cross-task transfer learning strategies, on the other hand, have been widely adopted especially in the computer vision community where features learned by a deep neural network on a large classification task have been applied to a wide variety of other tasks (Donahue et al., 2014) . Most
of the prior cross-task transfer learning works, however, require labeled target data to learn classifiers for the new task. If labels
of the target data are absent, there is little choice other than to apply unsupervised approaches such as clustering on the target data with pre-trained feature representations. In this paper
, we focus on the question of what can be transferred (besides features) to support both cross-domain and cross-task transfer learning. We address it
with a learned similarity function as the fundamental component of clustering. Clustering can
then be realized using a neural network trained using the output of the similarity function, which can be successfully used to achieve both cross-task and cross-domain transfer.The key idea is to formulate the clustering objective to use a learnable (and transferable) term, which in our proposed work is a similarity prediction function. Our proposed objective
function can be easily combined with deep neural networks and optimized end-to-end. The features and clustering
are optimized jointly, hence taking advantage of such side information in a robust way. Using this method, we show
that unsupervised learning can benefit from learning performed on a distinct task, and demonstrate the flexibility of further combining it with a classification loss and domain discrepancy loss. In summary, we make several contributions. First, we propose to use predictive
pairwise similarity as the knowledge that is transferred and formulate a learnable objective function to utilize the pairwise information in a fashion similar to constrained clustering. We then provide the methodologies to
deploy the objective function in both cross-task and cross-domain scenarios with deep neural networks. The experimental results for cross-task
learning on Omniglot and ImageNet show that we can achieve state of the art clustering results with predicted similarities. On the standard domain adaptation benchmark
Office-31 dataset, we demonstrate improvements over state-of-art even when not performing any explicit domain adaptation, and further improvements if we do. Finally, on another domain adaptation task,
SVHN-to-MNIST, our approach using Omniglot as the auxiliary dataset achieves top performance with a large margin.
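One simple way to couple a clustering network to transferred similarity predictions is sketched below; this is our generic formulation of a pairwise constrained-clustering loss and not necessarily the exact objective used in the paper:

import torch
import torch.nn.functional as F

def pairwise_clustering_loss(cluster_logits, sim_probs):
    # cluster_logits: (B, K) unnormalised cluster scores for a mini-batch.
    # sim_probs: (B, B) pairwise similarity probabilities from the transferred
    # similarity network G; these are noisy pseudo-targets, not ground truth.
    p = F.softmax(cluster_logits, dim=1)
    same_cluster = (p @ p.t()).clamp(1e-6, 1.0 - 1e-6)  # P(i and j share a cluster)
    return F.binary_cross_entropy(same_cluster, sim_probs)

Enumerating all pairs in the mini-batch corresponds to the high-density setting analysed later, which is what makes such a loss robust to individual wrong predictions.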
We report the average performance over the 20 alphabets in table 1.
Our approach achieved the top performance on both metrics.
The CCN demonstrates strong robustness on the challenging scenario of unknown K. It achieved 78.1% average accuracy.
Compared with 82.4% when K is known, CCN has a relatively small drop.
Compared to the second best algorithm, CSP, which is 65.4%, CCN outperforms it with a large gap.
The classical approach MPCK-means works surprisingly well when the number of clusters is known, but its performance dropped dramatically from 81.9% to 53.9% when K = 100.
In the performance breakdown for the 20 individual alphabets, CCN achieved 94% clustering accuracy on Old Church Slavonic Cyrillic, which has 45 characters (appendix TAB4 ).
Therefore the results show the feasibility of reconstructing semantic clusters using only noisy similarity predictions. When to use the semantic similarity?
The experiments in table 1 show a clear trend that utilizing the pairwise constraints jointly for both metric learning and minimizing the clustering loss achieves the best performance, including both MPCK-means and CCN.
In the case of unknown number of clusters, where we set K = 100, the algorithms that use constraints to optimize clustering loss have better robustness, for example, CSP and CCN.
The group that only use constraints for metric learning (ITML, SKMS, SKKm, and SKLR) significantly outperform the group that does not use it (K-means, LPNMF, LSC).
However, their performance is still far behind CCN.
Our results confirm the importance of jointly optimizing the metric and clustering. The robustness against noisy similarity prediction is the key factor that enables the cross-task transfer framework.
To the best of our knowledge, table 1 is the first comprehensive robustness comparison using predicted constraints learned from real data instead of constraints converted from ground-truth labels.
The accuracy of G in our experiment is shown in appendix table 7 and demonstrates the reasonable performance of G which is on par with Matching-Net BID34 .
After binarizing the prediction at 0.5 probability, the similar pair precision, similar pair recall, dissimilar pair precision, and dissimilar pair recall among the 659 characters are (0.392, 0.927, 0.999, 0.995), respectively.
The binarized predictions are better than uniform random guess (0.002, 0.500, 0.998, 0.500), but are still noisy.
Therefore it is very challenging for constrained clustering.
The visualization of the robustness range of CCN is provided in appendix D, and shows that the robustness is related to the density of pairs involved in a mini-batch.
We hypothesize that during the optimization, the gradients from wrongly predicted pairs are canceled out by each other or by the correctly predicted pairs.
Therefore the overall gradient still moves the solution towards a better clustering result. How to predict K?
Inferring the number of clusters (NC) is a hard problem, but with the pairwise similarity information it becomes feasible.
For evaluation, we compute the difference between the number of dominant clusters (NDC) and the true number of categories (NC_gt) in a dataset.
We use a naive definition for NDC: the number of clusters whose size is larger than the expected size if the data were distributed uniformly across the output clusters (per-alphabet numbers are given in TAB5).
We compare this with the baseline approach SKMS BID1 , which does not require a given K and supports a pipeline to estimate K automatically (therefore we only put it into the column K = 100 in table 1); SKMS gets 16.3.
Furthermore, 10 out of 20 datasets from CCN's prediction have a difference between NDC and NC_gt smaller than or equal to 3, which shows the feasibility of estimating K with predicted similarity.
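A sketch of that counting rule as we read it (the uniform-share threshold and the names are our own):

import numpy as np

def num_dominant_clusters(assignments, num_outputs):
    # A cluster counts as dominant if it holds more samples than the uniform
    # share N / K, where K is the number of softmax outputs.
    n = len(assignments)
    _, sizes = np.unique(assignments, return_counts=True)
    return int((sizes > n / float(num_outputs)).sum())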
The results are summarized in table 2.
Our approach (CCN + ) demonstrates a strong performance boost for the unsupervised cross-domain transfer problem.
It reaches 77.5% average accuracy which gained 6.2 points from the 71.3% source-only baseline.
Although our approach merely transfers more information from the auxiliary dataset, it outperforms the strong approach DANN (75.7%), and state-of-the-art JAN (76.9%).
When combining ours with DANN (CCN ++ ), the performance is further boosted.
This indicates that LCO helps mitigate the transfer problem in a certain way that is orthogonal to minimizing the domain discrepancy.
We observe the same trend when using a deeper backbone network, i.e., ResNet-34.
In such a case the average accuracy achieved is 77.9%, 81.1% and 82% for source-only, CCN + and CCN ++ , respectively, though we used exactly the same G as before (with ResNet-18 backbone for G).
This indicates that the information carried in the similarity predictions is not equivalent to transferring features with deeper networks.
More discussions are in appendix C and the performance of G is provided in appendix table 11 to show that although the prediction has low precision for similar pairs (∼ 0.2), our approach still benefits from the dense similarity predictions.
In this paper, we demonstrated the usefulness of transferring information in the form of pairwise similarity predictions.
Such information can be transferred as a function and utilized by a loss formulation inspired from constrained clustering, but implemented more robustly within a neural network that can jointly optimize both features and clustering outputs based on these noisy predictions.
The experiments for both cross-task and cross-domain transfer learning show strong benefits of using the semantic similarity predictions resulting in new state of the art results across several datasets.
This is true even without explicit domain adaptation for the cross-domain task, and if we add a domain discrepancy loss the benefits increase further. There are two key factors that determine the performance of the proposed framework.
The first is the robustness of the constrained clustering and second is the performance of the similarity prediction function.
We show robustness of CCN empirically, but we do not explore situations where learning the similarity function is harder.
For example, such cases arise when there are a small number of categories in source or a large domain discrepancy between source and target.
One idea to deal with such a situation is learning G with domain adaptation strategies.
We leave these aspects for future work.
The resulting performance w.r.t. different values of recall, density, and number of clusters is visualized in FIG7 .
A bright color means high NMI score and is desired.
The larger the bright region, the more robust the clustering is against the noise of similarity prediction.
The ACC score shows almost the same trend and is thus not shown here. How does similarity prediction affect clustering?
Looking at the top-left heat map in FIG7 , which has D = 1 and 10 clusters, it can be observed that the NMI score is very robust to low similar pair recall, even lower than 0.5.
For recall of dissimilar pairs, the effect of recall is divided at the 0.5 value: the clustering performance can be very robust to noise in dissimilar pairs if the recall is greater than 0.5; however, it can completely fail if recall is below 0.5.
For similar pairs, the clustering works on a wide range of recalls when the recall of dissimilar pairs is high. In practical terms, robustness to the recall of similar pairs is desirable because it is much easier to predict dissimilar pairs than similar pairs in real scenarios.
In a dataset with 10 categories, e.g. Cifar-10, we can easily get 90% recall for dissimilar pairs with a purely random guess if the number of classes is known, while the recall for similar pairs will be 10%. How does the density of the constraints affect clustering? We
argue that the density of pairwise relationships is the key factor to improving the robustness of clustering. The
density D = 1 means that every pair in a mini-batch is utilized by the clustering loss. For
density D = 0.1, it means only 1 out of 10 possible constraints is used. We
could regard the higher density as better utilization of the pairwise information in a mini-batch, thus more learning instances contribute to the gradients at once. Consider
a scenario where there is one sample associated with 5 true similar pairs and 3 false similar pairs. In such
a case, the gradients introduced by the false similar pairs have a higher chance to be overridden by true similar pairs within the mini-batch, thus the loss can converge faster and is less affected by errors. In FIG7
, we can see that when density decreases, the size of the bright region shrinks significantly. In our implementation, enumerating the full pairwise relationships introduces negligible overhead in computation time using a GPU. Although
there is overhead for memory consumption, it is limited because only the vector of predicted distributions has to be enumerated for calculating the clustering loss. The effect of varying the number of clusters: In the MNIST experiments, the number of categories is 10. We augment
the softmax output number up to 100. The rows of
FIG7 show that even when the number of output categories is significantly larger than the number of true object categories, e.g. 100 > 10, the clustering performance (NMI score) only degrades slightly. | A learnable clustering objective to facilitate transfer learning across domains and tasks | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:771 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Recent advances in cross-lingual word embeddings have primarily relied on mapping-based methods, which project pretrained word embeddings from different languages into a shared space through a linear transformation.
However, these approaches assume word embedding spaces are isomorphic between different languages, which has been shown not to hold in practice (Søgaard et al., 2018), and fundamentally limits their performance.
This motivates investigating joint learning methods which can overcome this impediment, by simultaneously learning embeddings across languages via a cross-lingual term in the training objective.
Given the abundance of parallel data available (Tiedemann, 2012), we propose a bilingual extension of the CBOW method which leverages sentence-aligned corpora to obtain robust cross-lingual word and sentence representations.
Our approach significantly improves cross-lingual sentence retrieval performance over all other approaches, as well as convincingly outscores mapping methods while maintaining parity with jointly trained methods on word-translation.
It also achieves parity with a deep RNN method on a zero-shot cross-lingual document classification task, requiring far fewer computational resources for training and inference.
As an additional advantage, our bilingual method also improves the quality of monolingual word vectors despite training on much smaller datasets.
We make our code and models publicly available.
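As a rough illustration of the kind of cross-lingual prediction term such a bilingual CBOW extension can add (a schematic negative-sampling loss written by us; the exact BI-SENT2VEC objective, n-gram handling, and hyper-parameters may differ):

import numpy as np

def cross_lingual_cbow_loss(src_ids, tgt_word, neg_words, emb_in, emb_out):
    # For a sentence-aligned pair, the averaged source-sentence embedding is used
    # to predict a word of the target sentence, with sampled negatives; a
    # symmetric term swaps the roles of source and target.
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))
    ctx = emb_in[src_ids].mean(axis=0)
    loss = -np.log(sigmoid(ctx @ emb_out[tgt_word]))
    for neg in neg_words:
        loss -= np.log(sigmoid(-ctx @ emb_out[neg]))
    return loss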
Cross-lingual representations-such as embeddings of words and phrases into a single comparable feature space-have become a key technique in multilingual natural language processing.
They offer strong promise towards the goal of a joint understanding of concepts across languages, as well as for enabling the transfer of knowledge and machine learning models between different languages.
Therefore, cross-lingual embeddings can serve a variety of downstream tasks such as bilingual lexicon induction, cross-lingual information retrieval, machine translation and many applications of zero-shot transfer learning, which is particularly impactful from resource-rich to low-resource languages.
Existing methods can be broadly classified into two groups (Ruder et al., 2017) : mapping methods leverage existing monolingual embeddings which are treated as independent, and apply a postprocess step to map the embeddings of each language into a shared space, through a linear transformation (Mikolov et al., 2013b; Conneau et al., 2017; Joulin et al., 2018) .
On the other hand, joint methods learn representations concurrently for multiple languages, by combining monolingual and cross-lingual training tasks (Luong et al., 2015; Coulmance et al., 2015; Gouws et al., 2015; Vulic & Moens, 2015; Chandar et al., 2014; Hermann & Blunsom, 2013) .
While recent work on word embeddings has focused almost exclusively on mapping methods, which require little to no cross-lingual supervision, (Søgaard et al., 2018) establish that their performance is hindered by linguistic and domain divergences in general, and for distant language pairs in particular.
Principally, their analysis shows that cross-lingual hubness, where a few words (hubs) in the source language are nearest cross-lingual neighbours of many words in the target language, and structural non-isometry between embeddings do impose a fundamental barrier to the performance of linear mapping methods.
(Ormazabal et al., 2019) propose using joint learning as a means of mitigating these issues.
Given parallel data, such as sentences, a joint model learns to predict either the word or context in both source and target languages.
As we will demonstrate with results from our algorithm, joint methods yield compatible embeddings which are closer to isomorphic, less sensitive to hubness, and perform better on cross-lingual benchmarks.
Contributions.
We propose the BI-SENT2VEC algorithm, which extends the SENT2VEC algorithm (Pagliardini et al., 2018; Gupta et al., 2019) to the cross-lingual setting.
We also revisit TRANS-GRAM Coulmance et al. (2015) , another joint learning method, to assess the effectiveness of joint learning over mapping-based methods.
Our contributions are
• On cross-lingual sentence-retrieval and monolingual word representation quality evaluations, BI-SENT2VEC significantly outperforms competing methods, both jointly trained as well as mapping-based ones while preserving state-of-the-art performance on cross-lingual word retrieval tasks.
For dissimilar language pairs, BI-SENT2VEC outperforms its competitors by an even larger margin on all the tasks, hinting at the robustness of our method.
• BI-SENT2VEC performs on par with a multilingual RNN based sentence encoder, LASER (Artetxe & Schwenk, 2018) , on MLDoc (Schwenk & Li, 2018) , a zero-shot crosslingual transfer task on documents in multiple languages.
Compared to LASER, our method improves computational efficiency by an order of magnitude for both training and inference, making it suitable for resource or latency-constrained on-device cross-lingual NLP applications.
• We verify that joint learning methods consistently dominate state-of-the-art mapping methods on standard benchmarks, i.e., cross-lingual word and sentence retrieval.
• Training on parallel data additionally enriches monolingual representation quality, evident by the superior performance of BI-SENT2VEC over FASTTEXT embeddings trained on a 100× larger corpus.
We make our models and code publicly available.
In the following section, we discuss the results on monolingual and cross-lingual benchmarks, presented in Tables 1 -5 , and a data ablation study for how the model behaves with increasing parallel corpus size in Figure 2 -3.
The most impressive outcome of our experiments is improved crosslingual sentence retrieval performance, which we elaborate on along with word translation in the next subsection.
We introduce a cross-lingual extension of an existing monolingual word and sentence embedding method.
The proposed model is tested at three levels of linguistic granularity: words, sentences and documents.
The model outperforms all other methods by a wide margin on the cross-lingual sentence retrieval task while maintaining parity with the best-performing methods on word translation tasks.
Our method achieves parity with LASER on zero-shot document classification, despite being a much simpler model.
We also demonstrate that training on parallel data yields a significant improvement in the monolingual word representation quality.
The success of our model on the bilingual level calls for its extension to the multilingual level especially for pairs which have little or no parallel corpora.
While the amount of bilingual/multilingual parallel data has grown in abundance, the amount of monolingual data available is practically limitless.
Consequently, we would like to explore training cross-lingual embeddings with a large amount of raw text combined with a smaller amount of parallel data.
We used ParaCrawl v4.0 corpora for training BI-SENT2VEC, SENT2VEC,BIVEC,VECMAP and TRANSGRAM embeddings except for En-Ru pair for which we used OpenSubtitles and Tanzil corpora combined.
MUSE and RCSLS vectors were trained from FASTTEXT vectors obtained from Wikipedia dumps (Grave et al., 2018a).
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:772 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
To act and plan in complex environments, we posit that agents should have a mental simulator of the world with three characteristics:
(a) it should build an abstract state representing the condition of the world;
(b) it should form a belief which represents uncertainty on the world;
(c) it should go beyond simple step-by-step simulation, and exhibit temporal abstraction.
Motivated by the absence of a model satisfying all these requirements, we propose TD-VAE, a generative sequence model that learns representations containing explicit beliefs about states several steps into the future, and that can be rolled out directly without single-step transitions.
TD-VAE is trained on pairs of temporally separated time points, using an analogue of temporal difference learning used in reinforcement learning.
Generative models of sequential data have received a lot of attention, due to their wide applicability in domains such as speech synthesis BID18 , neural translation BID3 , image captioning BID22 , and many others.
Different application domains will often have different requirements (e.g. long-term coherence, sample quality, abstraction learning, etc.), which in turn will drive the choice of the architecture and training algorithm. Of particular interest to this paper is the problem of reinforcement learning in partially observed environments, where, in order to act and explore optimally, agents need to build a representation of the uncertainty about the world, computed from the information they have gathered so far.
While an agent endowed with memory could in principle learn such a representation implicitly through model-free reinforcement learning, in many situations the reinforcement signal may be too weak to quickly learn such a representation in a way which would generalize to a collection of tasks. Furthermore, in order to plan in a model-based fashion, an agent needs to be able to imagine distant futures which are consistent with the agent's past.
In many situations however, planning step-by-step is not a cognitively or computationally realistic approach. To successfully address an application such as the above, we argue that a model of the agent's experience should exhibit the following properties:
• The model should learn an abstract state representation of the data and be capable of making predictions at the state level, not just the observation level.
• The model should learn a belief state, i.e. a deterministic, coded representation of the filtering posterior of the state given all the observations up to a given time. A belief state contains all the information an agent has about the state of the world and thus about how to act optimally.
• The model should exhibit temporal abstraction, both by making 'jumpy' predictions (predictions several time steps into the future), and by being able to learn from temporally separated time points without backpropagating through the entire time interval.
To our knowledge, no model in the literature meets these requirements. In this paper, we develop a new model and associated training algorithm, called Temporal Difference Variational Auto-Encoder (TD-VAE), which meets all of the above requirements. We first develop TD-VAE in the sequential, non-jumpy case, by using a modified evidence lower bound (ELBO) for stochastic state-space models (Krishnan et al., 2015; BID12 ; BID8 ), which relies on jointly training a filtering posterior and a local smoothing posterior. We demonstrate that on a simple task, this new inference network and associated lower bound lead to improved likelihood compared to methods classically used to train deep state-space models. Following the intuition given by the sequential TD-VAE, we develop the full TD-VAE model, which learns from temporally extended data by making jumpy predictions into the future. We show it can be used to train consistent jumpy simulators of complex 3D environments. Finally, we illustrate how training a filtering posterior leads to the computation of a neural belief state with a good representation of the uncertainty about the state of the environment.
2 MODEL DESIDERATA
In this paper, we argued that an agent needs a model that is different from an accurate step-by-step environment simulator.
We discussed the requirements for such a model, and presented TD-VAE, a sequence model that satisfies all requirements.
TD-VAE builds states from observations by bridging time points separated by random intervals.
This allows the states to relate to each other directly over longer time stretches and explicitly encode the future.
Further, it allows rolling out in state-space and in time steps larger than, and potentially independent of, the underlying temporal environment/data step size.
In the future, we aim to apply TD-VAE to more complex settings, and investigate a number of possible uses in reinforcement learning such as representation learning and planning. | Generative model of temporal data, that builds online belief state, operates in latent space, does jumpy predictions and rollouts of states. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:773 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
This paper introduces an information theoretic co-training objective for unsupervised learning.
We consider the problem of predicting the future. Rather than predict future sensations (image pixels or sound waves) we predict ``hypotheses'' to be confirmed by future sensations. More formally, we assume a population distribution on pairs $(x,y)$ where we can think of $x$ as a past sensation and $y$ as a future sensation. We train both a predictor model $P_\Phi(z|x)$ and a confirmation model $P_\Psi(z|y)$ where we view $z$ as hypotheses (when predicted) or facts (when confirmed). For a population distribution on pairs $(x,y)$ we focus on the problem of measuring the mutual information between $x$ and $y$. By the data processing inequality this mutual information is at least as large as the mutual information between $x$ and $z$ under the distribution on triples $(x,z,y)$ defined by the confirmation model $P_\Psi(z|y)$.
The information theoretic training objective for $P_\Phi(z|x)$ and $P_\Psi(z|y)$ can be viewed as a form of co-training where we want the prediction from $x$ to match the confirmation from $y$.
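A common variational surrogate consistent with this description (our sketch; the paper's exact estimator may differ) maximises a cross-prediction term plus the entropy of the marginal over $z$:

import torch
import torch.nn.functional as F

def co_training_objective(pred_logits, conf_logits):
    # pred_logits: logits of the predictor P_Phi(z|x), shape (B, Z).
    # conf_logits: logits of the confirmation model P_Psi(z|y), shape (B, Z).
    p_conf = F.softmax(conf_logits, dim=1)
    log_p_pred = F.log_softmax(pred_logits, dim=1)
    # E_{z ~ P_Psi(z|y)}[log P_Phi(z|x)]: the prediction should match the confirmation.
    cross_term = (p_conf * log_p_pred).sum(dim=1).mean()
    # Entropy of the batch marginal over z keeps the hypotheses informative.
    marginal = p_conf.mean(dim=0)
    entropy = -(marginal * marginal.clamp_min(1e-9).log()).sum()
    return cross_term + entropy  # to be maximised; a lower-bound-style surrogate for I(x; z)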
We give experiments on applications to learning phonetics on the TIMIT dataset. | Presents an information theoretic training objective for co-training and demonstrates its power in unsupervised learning of phonetics. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:774 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Generative models for singing voice have been mostly concerned with the task of "singing voice synthesis," i.e., to produce singing voice waveforms given musical scores and text lyrics.
In this work, we explore a novel yet challenging alternative: singing voice generation without pre-assigned scores and lyrics, in both training and inference time.
In particular, we experiment with three different schemes:
1) free singer, where the model generates singing voices without taking any conditions;
2) accompanied singer, where the model generates singing voices over a waveform of instrumental music; and
3) solo singer, where the model improvises a chord sequence first and then uses that to generate voices.
We outline the associated challenges and propose a pipeline to tackle these new tasks.
This involves the development of source separation and transcription models for data preparation, adversarial networks for audio generation, and customized metrics for evaluation.
The task of computationally producing singing voices is usually referred to as singing voice synthesis (SVS) in the literature (Cook, 1996) .
Most researchers assume that the note sequence and the lyrics of the waveform to be generated are given as the model input, and aim to build synthesis engines that sound as natural and expressive as a real singer (Blaauw et al., 2019; Hono et al., 2019; Kaewtip et al., 2019; Lee et al., 2019a; Tamaru et al., 2019) .
As such, the content of the produced singing voice is largely determined by the given model input, which is usually assigned by humans.
And, accordingly, progress in SVS has followed closely with that in text-to-speech (TTS) synthesis (Umbert et al., 2015; Shen et al., 2017; Gibiansky et al., 2017) .
However, we argue that singing according to a pre-assigned musical score and lyrics is only a part of the human singing activities.
For human beings, singing can also be a spontaneous activity.
We learn to spontaneously sing when we were children (Dowling, 1984) .
We do not need a score to sing when we are humming on the road or in the bathroom.
The voices sung do not have to be intelligible.
Jazz vocalists can improvise according to a chord progression, an accompaniment, or even nothing.
We aim to explore such a new task in this paper: teaching a machine to sing with a training collection of singing voices, but without the corresponding musical scores and lyrics of the training data.
Moreover, the machine has to sing without pre-assigned score and lyrics as well even in the inference (generation) time.
This task is challenging in that, as the machine sees no lyrics at all, it hardly has any knowledge of the human language to pronounce or articulate either voiced or unvoiced sounds.
And, as the machine sees no musical scores at all, it has to find its own way learning the language of music in creating plausible vocal melodies.
It also makes the task different from TTS.
Specifically, we consider three types of such score-and lyrics-free singing voice generation tasks, as shown in Figures 1(b) - (d) .
A free singer sings with only random noises as the input.
An accompanied singer learns to sing over a piece of instrumental music, which is given as an audio waveform (again without score information).
Finally, a solo singer also sings with only noises as the input, but it uses the noises to firstly generate some kind of 'inner ideas' of what to sing.
From a technical point of view, we can consider SVS as a strongly conditioned task for generating singing voices, as the target output is well specified by the input.
In contrast, the proposed tasks are either unconditioned or weakly conditioned.
This work therefore contributes to expanding the "spectrum" (in terms of the strength of conditional signals) of singing voice generation.
Doing so has at least two implications.
First, while our models are more difficult to train than SVS models, they enjoy more freedom in the generation output.
Such freedom may be desirable considering the artistic nature of singing.
Second, we can more easily use a larger training set to train our model: due to the difficulty in preparing time-aligned scores and lyrics, the training set employed in existing work on SVS usually consists of only tens of songs (Lee et al., 2019a); in contrast, in our case we do not need labeled and aligned data and can therefore use more than hundreds of songs for training.
This may help establish a universal model based on which extensions can be made.
The proposed accompanied singer also represents one of the first attempts to produce singing voice given an accompaniment.
One intuitive approach to achieve this is to first generate a score according to an accompaniment in the symbolic domain and then synthesize the singing voices according to the score.
The second step of synthesis is relatively well-established, but the first step of generating a score given an accompaniment is not explored yet.
Extensive research has been done on generating scores for one or several instruments (Hadjeres et al., 2017; Huang et al., 2019; Payne, 2019).
However, to the best of our knowledge, very little research, if any, has been done on generating scores for singing voices given an accompaniment.
Our approach bypasses the step of generating scores by directly generating the mel-spectrogram representation.
We outline below the challenges associated with the proposed tasks and the solutions we investigate.
First, the tasks are unsupervised as we do not provide any labels (e.g., annotations of phonemes, pitches, or onset times) for the training singing files.
The machine has to learn the complex structure of music directly from audio signals.
We explore the use of generative adversarial network (GAN) (Goodfellow et al., 2014) to address this issue, for its demonstrated effectiveness for SVS (Hono et al., 2019) and pitch-conditioned instrument note synthesis (Engel et al., 2019) .
Specifically, we design a novel GAN-based architecture to learn to generate the mel-spectrogram of singing voice, and then use WaveRNN (Kalchbrenner et al., 2018) , a single-layer recurrent neural network, as the vocoder to generate the audio waveform.
Rather than considering the mel-spectrograms as a fixed-size image as done in recent work on audio generation (Engel et al., 2019; Marafioti et al., 2019), we use gated recurrent units (GRUs) and dilated convolutions (van den Oord et al., 2016) in both the generator and discriminator, to model both the local and sequential patterns in music and to facilitate the generation of variable-length waveforms.
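To make the shape of such a generator concrete, the following is a minimal, illustrative PyTorch sketch of a network that combines a GRU with dilated 1-D convolutions to emit mel-spectrogram frames from a noise sequence; it is not the paper's G3BEGAN implementation, and the layer widths, noise dimension, number of dilation stages, and 80 mel bins are assumptions chosen only for this example.

import torch
import torch.nn as nn

class MelGenerator(nn.Module):
    """Toy generator: a noise sequence in, a mel-spectrogram of the same length out."""
    def __init__(self, noise_dim=20, hidden=128, n_mels=80):
        super().__init__()
        self.gru = nn.GRU(noise_dim, hidden, batch_first=True)   # sequential patterns
        self.dilated = nn.Sequential(                            # local patterns
            nn.Conv1d(hidden, hidden, kernel_size=3, dilation=1, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, dilation=2, padding=2),
            nn.ReLU(),
        )
        self.out = nn.Conv1d(hidden, n_mels, kernel_size=1)

    def forward(self, z):                 # z: (batch, time, noise_dim)
        h, _ = self.gru(z)                # (batch, time, hidden)
        h = h.transpose(1, 2)             # (batch, hidden, time) for Conv1d
        return self.out(self.dilated(h))  # (batch, n_mels, time)

z = torch.randn(2, 200, 20)               # two clips, 200 noise frames each
print(MelGenerator()(z).shape)            # torch.Size([2, 80, 200])

Because neither the GRU nor the padded dilated convolutions fix the time dimension, the same network can produce 200 or 2,000 frames, which is the property needed for variable-length generation.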
Second, for training the free singer, unaccompanied vocal tracks are needed.
As for the accompanied singer, we need additionally an accompaniment track for each vocal track.
However, public-domain multi-track music data is hard to find.
We choose to implement a vocal source separation model with state-of-the-art separation quality (Liu & Yang, 2019) for data preparation.
The proposed pipeline for training and evaluating an accompanied singer is illustrated in Figure 2 .
The advantage of having a vocal separation model is that we can use as many audio files as we have to compile the training data.
The downside is that the singing voice generation models may suffer from the artifacts (Cano et al., 2018) of the source separation model, which is moderate but not negligible.
Third, for the accompanied singer, there is no single "ground truth" and the relationship between the model input and output may be one-to-many.
This is because there are plenty of valid ways to sing over an accompaniment track.
(Figure 2 caption: A pipeline for building the accompanied singer. We use source separation to get separated singing voice and accompaniment from professionally recorded audio files. Then, we use the separated tracks to train the generators and discriminators in the GAN. In inference time, we feed an unseen accompaniment to the trained singer model and let it "sing.")
For diversity and artistic freedom, we cannot ask the machine to generate any specific singing voice in response to an accompaniment track, even if we have paired data of vocal and accompaniment tracks.
We investigate using conditional GAN (Mirza & Osindero, 2014) to retain the possibility of generating singing voices with multiple modes.
Fourth, as the proposed tasks are new, there are no established ways for performance evaluation.
According to our setting, we desire our machine to generate audio waveforms with high quality and diversity, vocal-like timbre, plausible pitch contour, emotion expression, and, for the accompanied singer, that are in harmony with the given accompaniment track.
But, the singing does not have to be intelligible.
We propose customized objective and subjective metrics to evaluate our models in these aspects.
For example, we adapt the melody harmonization model proposed by Lim et al. (2017) to measure the matchness between the generated vocal track and the given accompaniment track.
Finally, reproducibility is a major issue, especially for a subjective task.
We intend to use publicly available, copyright-free instrumental music as the conditional signals for testing the accompanied singer, so that other researchers can use the same testing conditions for model comparison in the future.
We will also release the testing conditions for the solo singer, the generated singing voices for all our models, as well as open source our code through a public git repository [URL removed].
We focus on Jazz music in this work.
Samples of the generated singing voices can be found at https://bit.ly/2mIvoIc.
Our models have many possible use cases.
For example, we can use the accompanied singer as a backing vocalist.
In addition, we can use the free singer as a sound source; to demonstrate this, we make a song by hand in the style of Jazz Hiphop by sampling the output of our free singer.
This song can be listened to at https://bit.ly/2QkUJoJ.
In this paper, we have introduced a novel task of singing voice generation that does not use musical scores and lyrics.
Specifically, we proposed three singing schemes with different input conditions: free singer, accompanied singer, and solo singer.
We have also proposed a BEGAN based architecture that uses GRUs and grouped dilated convolutions to learn to generate singing voices in an adversarial way.
For evaluating such models, we proposed several objective metrics and implemented a model to measure the compatibility between a given accompaniment track and the generated vocal track.
The evaluation shows that the audio quality of the generated voices still leaves much room for improvement, but in terms of humanness and emotion expression our models work fine.
Score and lyrics-free singing voice generation is a new task, and this work represents only a first step tackling it.
There are many interesting ideas to pursue.
For example, we have chosen to extract pitch-related information only from the accompaniment track for the accompanied singer, but a more interesting way is to let the model learn to extract relevant information itself.
In the near future, we plan to investigate advanced settings that allow for timbre and expression control, and experiment with other network architectures, such as coupling a fine-grained auto-regressive model with a multiscale generation procedure as done in MelNet (Vasquez & Lewis, 2019), using a discriminator that examines different chunks of the generated audio as done in PatchGAN for the vision domain (Isola et al., 2017), or using multiple discriminators that evaluate the generated audio based on multi-frequency random windows as done in GAN-TTS (Bińkowski et al., 2019).
The generator in G3BEGAN is implemented with a stack of two G3 blocks.
Please see Table 4 for details of the network architecture. | Our models generate singing voices without lyrics and scores. They take accompaniment as input and output singing voices. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:775 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The carbon footprint of natural language processing (NLP) research has been increasing in recent years due to its reliance on large and inefficient neural network implementations.
Distillation is a network compression technique which attempts to impart knowledge from a large model to a smaller one.
We use teacher-student distillation to improve the efficiency of the Biaffine dependency parser which obtains state-of-the-art performance with respect to accuracy and parsing speed (Dozat & Manning, 2016).
When distilling to 20% of the original model’s trainable parameters, we only observe an average decrease of ∼1 point for both UAS and LAS across a number of diverse Universal Dependency treebanks while being 2.26x (1.21x) faster than the baseline model on CPU (GPU) at inference time.
We also observe a small increase in performance when compressing to 80% for some treebanks.
Finally, through distillation we attain a parser which is not only faster but also more accurate than the fastest modern parser on the Penn Treebank.
Ethical NLP research has recently gained attention (Kurita et al., 2019; Sun et al., 2019) .
For example, the environmental cost of AI research has become a focus of the community, especially with regards to the development of deep neural networks (Schwartz et al., 2019; Strubell et al., 2019) .
Beyond developing systems to be greener, increasing the efficiency of models makes them more cost-effective, which is a compelling argument even for people who might downplay the extent of anthropogenic climate change.
In conjunction with this push for greener AI, NLP practitioners have turned to the problem of developing models that are not only accurate but also efficient, so as to make them more readily deployable across different machines with varying computational capabilities (Strzyz et al., 2019; Clark et al., 2019; Junczys-Dowmunt et al., 2018) .
This is in contrast with the recently popular principle of make it bigger, make it better (Devlin et al., 2019; Radford et al., 2019) .
Here we explore teacher-student distillation as a means of increasing the efficiency of neural network systems used to undertake a core task in NLP, dependency parsing.
To do so, we take a state-of-the-art (SoTA) Biaffine parser from Dozat & Manning (2016).
The Biaffine parser is not only one of the most accurate parsers; it is also the fastest implementation, by almost an order of magnitude, among state-of-the-art parsers.
Contribution We utilise teacher-student distillation to compress Biaffine parsers trained on a diverse subset of Universal Dependency (UD) treebanks.
We find that distillation maintains accuracy close to that of the full model and obtains far better accuracy than simply implementing equivalent model-size reductions by changing the parser's network size and training it in the standard way.
Furthermore, we can compress a parser to 20% of its trainable parameters with minimal loss in accuracy and with a speed 2.26x (1.21x) faster than that of the original model on CPU (GPU).
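As an illustration of what teacher-student distillation of a graph-based parser can look like, the sketch below mixes a softened KL term against the teacher's head distributions with the usual cross-entropy against gold heads; the temperature, mixing weight, and tensor shapes are assumptions made for this example and not necessarily the exact loss used in the paper.

import torch
import torch.nn.functional as F

batch, seq = 8, 12
teacher_scores = torch.randn(batch, seq, seq)                 # arc scores over candidate heads
student_scores = torch.randn(batch, seq, seq, requires_grad=True)
gold_heads = torch.randint(0, seq, (batch, seq))

T, alpha = 2.0, 0.5                                           # temperature and mixing weight
soft_teacher = F.softmax(teacher_scores / T, dim=-1)
log_student = F.log_softmax(student_scores / T, dim=-1)
kd_loss = F.kl_div(log_student, soft_teacher, reduction="batchmean") * T * T

ce_loss = F.cross_entropy(student_scores.view(-1, seq), gold_heads.view(-1))
loss = alpha * kd_loss + (1 - alpha) * ce_loss                # distillation + gold supervision
loss.backward()

A smaller student trained this way sees the teacher's full distribution over candidate heads rather than only the one-hot gold arc, which is what helps it retain accuracy while shrinking.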
We have shown the efficacy of using the teacher-student distillation technique for dependency parsing by distilling a state-of-the-art parser implementation.
The parser used for our experiments was not only accurate but already fast, meaning it was a strong baseline from which to see improvements.
We obtained parsing speeds up to 2.26x (1.21x) faster on CPU (GPU) while only losing ∼1 point for both UAS and LAS when compared to the original sized model.
Furthermore, the smallest model which obtains these results only has 20% of the original model's trainable parameters, vastly reducing its environmental impact.
A APPENDIX | We increase the efficiency of neural network dependency parsers with teacher-student distillation. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:776 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
While autoencoders are a key technique in representation learning for continuous structures, such as images or wave forms, developing general-purpose autoencoders for discrete structures, such as text sequence or discretized images, has proven to be more challenging.
In particular, discrete inputs make it more difficult to learn a smooth encoder that preserves the complex local relationships in the input space.
In this work, we propose an adversarially regularized autoencoder (ARAE) with the goal of learning more robust discrete-space representations.
ARAE jointly trains both a rich discrete-space encoder, such as an RNN, and a simpler continuous space generator function, while using generative adversarial network (GAN) training to constrain the distributions to be similar.
This method yields a smoother contracted code space that maps similar inputs to nearby codes, and also an implicit latent variable GAN model for generation.
Experiments on text and discretized images demonstrate that the GAN model produces clean interpolations and captures the multimodality of the original space, and that the autoencoder produces improvements in semi-supervised learning as well as state-of-the-art results in unaligned text style transfer task using only a shared continuous-space representation.
Recent work on regularized autoencoders, such as variational BID15 BID29 and denoising BID37 variants, has shown significant progress in learning smooth representations of complex, high-dimensional continuous data such as images.
These codespace representations facilitate the ability to apply smoother transformations in latent space in order to produce complex modifications of generated outputs, while still remaining on the data manifold.
Unfortunately, learning similar latent representations of discrete structures, such as text sequence or discretized images, remains a challenging problem.
Initial work on VAEs for text has shown that optimization is difficult, as the decoder can easily degenerate into an unconditional language model BID2.
Recent work on generative adversarial networks (GANs) for text has mostly focused on getting around the use of discrete structures either through policy gradient methods BID40 or with the Gumbel-Softmax distribution BID17 .
However, neither approach can yet produce robust representations directly.
A major difficulty of discrete autoencoders is mapping a discrete structure to a continuous code vector while also smoothly capturing the complex local relationships of the input space.
Inspired by recent work combining pretrained autoencoders with deep latent variable models, we propose to target this issue with an adversarially regularized autoencoder (ARAE).
Specifically we jointly train a discrete structure encoder and continuous space generator, while constraining the two models with a discriminator to agree in distribution.
This approach allows us to utilize a complex encoder model, such as an RNN, and still constrain it with a very flexible, but more limited generator distribution.
The full model can be then used as a smoother discrete structure autoencoder or as a latent variable GAN model where a sample can be decoded, with the same decoder, to a discrete output.
Since the system produces a single continuous coded representation (in contrast to methods that act on each RNN state), it can easily be further regularized with problem-specific invariants, for instance to learn to ignore style, sentiment or other attributes for transfer tasks.
Experiments apply ARAE to discretized images and sentences, and demonstrate the key properties of the model.
Using the latent variable model (ARAE-GAN), the model is able to generate varied samples that can be quantitatively shown to cover the input spaces and to generate consistent image and sentence manipulations by moving around in the latent space via interpolation and offset vector arithmetic.
Using the discrete encoder, the model can be used in a semi-supervised setting to give improvement in a sentence inference task.
When the ARAE model is trained with task-specific adversarial regularization, the model improves the current best results on sentiment transfer reported in BID33 and produces compelling outputs on a topic transfer task using only a single shared code space.
All outputs are listed in the Appendix 9 and code is available at (removed for review).
We present adversarially regularized autoencoders, as a simple approach for training a discrete structure autoencoder jointly with a code-space generative adversarial network.
The model learns an improved autoencoder, as demonstrated by semi-supervised experiments and improvements on text transfer experiments.
It also learns a useful generative model for text that exhibits a robust latent space, as demonstrated by natural interpolations and vector arithmetic.
We do note that (as has been frequently observed when training GANs) our model seemed to be quite sensitive to hyperparameters.
Finally, while many useful models for text generation already exist, text GANs provide a qualitatively different approach influenced by the underlying latent variable structure.
We envision that such a framework could be extended to a conditional setting, combined with other existing decoding schemes, or used to provide a more interpretable model of language. | Adversarially Regularized Autoencoders learn smooth representations of discrete structures allowing for interesting results in text generation, such as unaligned style transfer, semi-supervised learning, and latent space interpolation and arithmetic. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:777 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
When data arise from multiple latent subpopulations, machine learning frameworks typically estimate parameter values independently for each sub-population.
In this paper, we propose to overcome these limits by considering samples as tasks in a multitask learning framework. | We present a method to estimate collections of regression models in which each model is personalized to a single sample. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:778 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this work, we first conduct mathematical analysis on the memory, which is defined as a function that maps an element in a sequence to the current output, of three RNN cells; namely, the simple recurrent neural network (SRN), the long short-term memory (LSTM) and the gated recurrent unit (GRU).
Based on the analysis, we propose a new design, called the extended-long short-term memory (ELSTM), to extend the memory length of a cell.
Next, we present a multi-task RNN model that is robust to previous erroneous predictions, called the dependent bidirectional recurrent neural network (DBRNN), for the sequence-in-sequence-out (SISO) problem.
Finally, the performance of the DBRNN model with the ELSTM cell is demonstrated by experimental results.
The recurrent neural network (RNN) has proved to be an effective solution for natural language processing (NLP) through the advancement in the last three decades BID8 BID11 BID2 BID1 .
At the cell level of a RNN, the long short-term memory (LSTM) BID10 and the gated recurrent unit (GRU) are often adopted by a RNN as its low-level building cell.
Being built upon these cells, various RNN models have been proposed to solve the sequence-in-sequence-out (SISO) problem.
To name a few, there are the bidirectional RNN (BRNN) BID14 , the encoder-decoder model BID15 BID16 BID0 and the deep RNN BID12 .
Although the LSTM and the GRU were designed to enhance the memory length of RNNs and avoid the gradient vanishing/exploding issue BID10 BID13 BID3 , a good understanding of their memory length is still lacking.
Here, we define the memory of an RNN model as a function that maps an element in a sequence to the current output.
The first objective of this research is to analyze the memory length of three RNN cells -the simple RNN (SRN) BID8 BID11 , the long short-term memory (LSTM) and the gated recurrent unit (GRU).
This will be conducted in Sec. 2.
Such analysis differs from the investigation of the gradient vanishing/exploding problem in the sense that the gradient vanishing/exploding problem occurs during the training process, whereas the memory analysis is done on a trained RNN model.
Based on the understanding from the memory analysis, we propose a new design, called the extended-long short-term memory (ELSTM), to extend the memory length of a cell in Sec. 3.
As for the macro RNN model, one popular choice is the BRNN.
Since the elements in BRNN output sequences should be independent of each other BID14, the BRNN cannot be used to solve the dependent output sequence problem alone.
Nevertheless, most language tasks do involve dependent output sequences.
The second choice is the encoder-decoder system, where the attention mechanism has been introduced BID16 BID0 to further improve its performance.
As shown later in this work, the encoder-decoder system is not an efficient learner.
Here, to take advantage of both the encoder-decoder and the BRNN and overcome their drawbacks, we propose a new multitask model called the dependent bidirectional recurrent neural network (DBRNN), which will be elaborated in Sec. 4.
Furthermore, we conduct a series of experiments on the part of speech (POS) tagging and the dependency parsing (DP) problems in Sec. 5 to demonstrate the performance of the DBRNN model with the ELSTM cell.
Finally, concluding remarks are given and future research direction is pointed out in Sec. 6.
The memory decay behavior of the LSTM and the GRU was investigated and explained by mathematical analysis.
Although the memory of the LSTM and the GRU fades slower than that of the SRN, it may not be long enough for complicated language tasks such as dependency parsing.
To enhance the memory length, two cells called the ELSTM-I and the ELSTM-II were proposed.
Furthermore, we introduced a new RNN model called the DBRNN that has the merits of both the BRNN and the encoder-decoder.
It was shown by experimental results that the ELSTM-I and the ELSTM-II outperform other designs by a significant margin for complex language tasks.
The DBRNN design is superior to the BRNN as well as to sequence-to-sequence models for both simple and complex language tasks.
There are interesting issues to be further explored.
For example, is the ELSTM cell also helpful in more sophisticated RNN models such as the deep RNN?
Is it possible to make the DBRNN deeper and better?
They are left for future study. | A recurrent neural network cell with extended-long short-term memory and a multi-task RNN model for sequence-in-sequence-out problems | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:779 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns by simply looking at images and reading paired questions and answers.
Our model builds an object-based scene representation and translates sentences into executable, symbolic programs.
To bridge the learning of two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation.
Analogous to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to.
Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences.
We use curriculum learning to guide the searching over the large compositional space of images and language.
Extensive experiments demonstrate the accuracy and efficiency of our model on learning visual concepts, word representations, and semantic parsing of sentences.
Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even new program domains.
It also empowers applications including visual question answering and bidirectional image-text retrieval.
Humans are capable of learning visual concepts by jointly understanding vision and language BID12 BID8 BID15 .
Consider the example shown in Figure 1 -I.
Imagine that someone with no prior knowledge of colors is presented with the images of the red and green cubes, paired with the questions and answers.
They can easily identify the difference in objects' visual appearance (in this case, color), and align it to the corresponding words in the questions and answers (Red and Green).
Other object attributes (e.g., shape) can be learned in a similar fashion.
Starting from there, humans are able to inductively learn the correspondence between visual concepts and word semantics (e.g., spatial relations and referential expressions, Figure 1-II), and unravel compositional logic from complex questions assisted by the learned visual concepts (Figure 1-III; also see BID0).
Motivated by this, we propose the neuro-symbolic concept learner (NS-CL), which jointly learns visual perception, words, and semantic language parsing from images and question-answer pairs.
NS-CL has three modules: a neural-based perception module that extracts object-level representations from the scene, a visually-grounded semantic parser for translating questions into executable programs, and a symbolic program executor that reads out the perceptual representation of objects, classifies their attributes/relations, and executes the program to obtain an answer.
(Figure 1 caption: Humans learn visual concepts, words, and semantic parsing jointly and incrementally. I. Learning visual concepts (red vs. green) starts from looking at simple scenes, reading simple questions, and reasoning over contrastive examples BID12. II. Afterwards, we can interpret referential expressions based on the learned object-based concepts, and learn relational concepts (e.g., on the right of, the same material as). III. Finally, we can interpret complex questions from visual cues by exploiting the compositional structure.)
NS-CL learns from natural supervision (i.e., images and QA pairs), requiring no annotations on images or semantic programs for sentences.
Instead, analogous to human concept learning, it learns via curriculum learning.
NS-CL starts by learning representations/concepts of individual objects from short questions (e.g., What's the color of the cylinder?) on simple scenes (≤3 objects).
By doing so, it learns object-based concepts such as colors and shapes.
NS-CL then learns relational concepts by leveraging these object-based concepts to interpret object referrals (e.g., Is there a box right of a cylinder?).
The model iteratively adapts to more complex scenes and highly compositional questions.
NS-CL's modularized design enables interpretable, robust, and accurate visual reasoning: it achieves state-of-the-art performance on the CLEVR dataset (Johnson et al., 2017a).
More importantly, it naturally learns disentangled visual and language concepts, enabling combinatorial generalization w.r.t. both visual scenes and semantic programs.
In particular, we demonstrate four forms of generalization.
First, NS-CL generalizes to scenes with more objects and longer semantic programs than those in the training set.
Second, it generalizes to new visual attribute compositions, as demonstrated on the CLEVR-CoGenT (Johnson et al., 2017a) dataset.
Third, it enables fast adaptation to novel visual concepts, such as learning a new color.
Finally, the learned visual concepts transfer to new tasks, such as image-caption retrieval, without any extra fine-tuning.
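As a purely illustrative example of what "executing a program on an object-based scene representation" means, the snippet below runs a two-step program on a toy scene. NS-CL's actual executor operates on probabilistic attribute scores rather than hard values, and the attribute names and the two operations shown here are assumptions made only for this illustration.

scene = [
    {"shape": "cube",     "color": "red",   "size": "large"},
    {"shape": "cube",     "color": "green", "size": "small"},
    {"shape": "cylinder", "color": "red",   "size": "small"},
]

def filter_attr(objects, attr, value):
    return [o for o in objects if o[attr] == value]

def count(objects):
    return len(objects)

# Program a semantic parser might produce for "How many red objects are there?"
program = [("filter", "color", "red"), ("count",)]

state = scene
for op, *args in program:
    state = filter_attr(state, *args) if op == "filter" else count(state)
print(state)   # 2

In NS-CL the filtering step is soft: it produces scores over objects from concept embeddings, so the answer signal can be used to train the perception module end to end.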
We presented a method that jointly learns visual concepts, words, and semantic parsing of sentences from natural supervision.
The proposed framework, NS-CL, learns by looking at images and reading paired questions and answers, without any explicit supervision such as class labels for objects.
Our model learns visual concepts with remarkable accuracy.
Based upon the learned concepts, our model achieves good results on question answering, and more importantly, generalizes well to new visual compositions, new visual concepts, and new domain specific languages.The design of NS-CL suggests multiple research directions.
First, constructing 3D object-based representations for realistic scenes needs further exploration BID1 BID5 .
Second, our model assumes a domain-specific language for describing formal semantics.
The integration of formal semantics into the processing of complex natural language would be meaningful future work BID4 Oh et al., 2017) .
We hope our paper could motivate future research in visual concept learning, language learning, and compositionality.
Our framework can also be extended to other domains such as video understanding and robotic manipulation.
Here, we would need to discover semantic representations for actions and interactions (e.g., push) beyond static spatial relations.
Along this direction, researchers have studied building symbolic representations for skills (Konidaris et al., 2018) and learning instruction semantics from interaction (Oh et al., 2017) in constrained setups.
Applying neuro-symbolic learning frameworks for concepts and skills would be meaningful future work toward robotic learning in complex interactive environments. | We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:78 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Many recently trained neural networks employ large numbers of parameters to achieve good performance.
One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem.
But how accurate are such notions?
How many parameters are really needed?
In this paper we attempt to answer this question by training networks not in their native parameter space, but instead in a smaller, randomly oriented subspace.
We slowly increase the dimension of this subspace, note at which dimension solutions first appear, and define this to be the intrinsic dimension of the objective landscape.
The approach is simple to implement, computationally tractable, and produces several suggestive conclusions.
Many problems have smaller intrinsic dimensions than one might suspect, and the intrinsic dimension for a given dataset varies little across a family of models with vastly different sizes.
This latter result has the profound implication that once a parameter space is large enough to solve a problem, extra parameters serve directly to increase the dimensionality of the solution manifold.
Intrinsic dimension allows some quantitative comparison of problem difficulty across supervised, reinforcement, and other types of learning where we conclude, for example, that solving the inverted pendulum problem is 100 times easier than classifying digits from MNIST, and playing Atari Pong from pixels is about as hard as classifying CIFAR-10.
In addition to providing new cartography of the objective landscapes wandered by parameterized models, the method is a simple technique for constructively obtaining an upper bound on the minimum description length of a solution.
A byproduct of this construction is a simple approach for compressing networks, in some cases by more than 100 times.
Training a neural network to model a given dataset entails several steps.
First, the network designer chooses a loss function and a network architecture for a given dataset.
The architecture is then initialized by populating its weights with random values drawn from some distribution.
Finally, the network is trained by adjusting its weights to produce a loss as low as possible.
We can think of the training procedure as traversing some path along an objective landscape.
Note that as soon as a dataset and network architecture are specified, the landscape in its entirety is completely determined.
It is instantiated and frozen; all subsequent parameter initialization, forward and backward propagation, and gradient steps taken by an optimizer are just details of how the frozen space is explored.
Consider a network parameterized by D weights.
We can picture its associated objective landscape as a set of "hills and valleys" in D dimensions, where each point in R^D corresponds to a value of the loss, i.e., the elevation of the landscape.
If D = 2, the map from two coordinates to one scalar loss can be easily imagined and intuitively understood by those living in a three-dimensional world with similar hills.
However, in higher dimensions, our intuitions may not be so faithful, and generally we must be careful, as extrapolating low-dimensional intuitions to higher dimensions can lead to unreliable conclusions.
The difficulty of understanding high-dimensional landscapes notwithstanding, it is the lot of neural network researchers to spend their efforts leading (or following?) networks over these multi-dimensional surfaces.
Therefore, any interpreted geography of these landscapes is valuable.
Several papers have shed valuable light on this landscape, particularly by pointing out flaws in common extrapolation from low-dimensional reasoning.
BID4 showed that, in contrast to conventional thinking about getting stuck in local optima (as one might be stuck in a valley in our familiar D = 2), local critical points in high dimension are almost never valleys but are instead saddle points: structures which are "valleys" along a multitude of dimensions with "exits" in a multitude of other dimensions.
The striking conclusion is that one has less to fear becoming hemmed in on all sides by higher loss but more to fear being waylaid nearly indefinitely by nearly flat regions.
BID9 showed another property: that paths directly from the initial point to the final point of optimization are often monotonically decreasing.
Though dimension is high, the space is in some sense simpler than we thought: rather than winding around hills and through long twisting corridors, the walk could just as well have taken a straight line without encountering any obstacles, if only the direction of the line could have been determined at the outset.
In this paper we seek further understanding of the structure of the objective landscape by restricting training to random slices through it, allowing optimization to proceed in randomly generated subspaces of the full parameter space.
Whereas standard neural network training involves computing a gradient and taking a step in the full parameter space (R^D above), we instead choose a random d-dimensional subspace of R^D, where generally d < D, and optimize directly in this subspace.
By performing experiments with gradually larger values of d, we can find the subspace dimension at which solutions first appear, which we call the measured intrinsic dimension of a particular problem.
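The reparameterization behind this procedure can be written in a few lines. The sketch below (not the authors' code) trains a toy 20-32-2 MLP on random data entirely inside a random d-dimensional slice of its 738-dimensional parameter space; the layer sizes, the unit-norm column normalization of the projection, and the training details are assumptions made for the example.

import torch

torch.manual_seed(0)
D_in, H, D_out = 20, 32, 2                      # toy MLP: 20 -> 32 -> 2
shapes = [(H, D_in), (H,), (D_out, H), (D_out,)]
sizes = [int(torch.tensor(s).prod()) for s in shapes]
D = sum(sizes)                                  # full (direct) parameter dimension

theta0 = torch.randn(D) * 0.1                   # frozen random initialization
d = 50                                          # subspace dimension being probed
P = torch.randn(D, d)
P = P / P.norm(dim=0, keepdim=True)             # frozen projection with unit-norm columns
theta_d = torch.zeros(d, requires_grad=True)    # the only trainable variables

def unflatten(theta):
    params, i = [], 0
    for shape, size in zip(shapes, sizes):
        params.append(theta[i:i + size].view(shape))
        i += size
    return params

def forward(x, theta):
    W1, b1, W2, b2 = unflatten(theta)
    return torch.relu(x @ W1.T + b1) @ W2.T + b2

x = torch.randn(256, D_in)
y = torch.randint(0, D_out, (256,))
opt = torch.optim.Adam([theta_d], lr=1e-2)
for step in range(200):
    theta = theta0 + P @ theta_d                # offset within a random d-dim slice of R^D
    loss = torch.nn.functional.cross_entropy(forward(x, theta), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

Sweeping d upward and recording where performance first reaches 90% of the directly trained baseline gives the measured intrinsic dimension described above.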
Examining intrinsic dimensions across a variety of problems leads to a few new intuitions about the optimization problems that arise from neural network models.
We begin in Sec. 2 by defining more precisely the notion of intrinsic dimension as a measure of the difficulty of objective landscapes.
In Sec. 3 we measure intrinsic dimension over a variety of network types and datasets, including MNIST, CIFAR-10, ImageNet, and several RL tasks.
Based on these measurements, we draw a few insights on network behavior, and we conclude in Sec. 4.
In this paper, we have defined the intrinsic dimension of objective landscapes and shown a simple method -random subspace training -of approximating it for neural network modeling problems.
We use this approach to compare problem difficulty within and across domains.
We find that in some cases the intrinsic dimension is much lower than the direct parameter dimension, which enables network compression, and in other cases the intrinsic dimension is similar to that of the best tuned models, suggesting those models are better suited to the problem.
Further work could also identify better ways of creating subspaces for reparameterization: here we chose random linear subspaces, but one might carefully construct other linear or non-linear subspaces to be even more likely to contain solutions.
Finally, as the field departs from single stack-of-layers image classification models toward larger and more heterogeneous networks BID11 BID14 often composed of many modules and trained by many losses, methods like measuring intrinsic dimension that allow some automatic assessment of model components might provide much-needed greater understanding of individual black-box module properties.
In the main paper, we attempted to find d_int90 across 20 FC networks with various depths and widths.
A grid sweep of number of hidden layers from {1,2,3,4,5} and width of each hidden layer from {50,100,200,400} is performed, and all 20 plots are shown in FIG7 .
For each d we take 3 runs and plot the mean and variance with blue dots and blue error bars.
d_int90 is indicated in the plots (darkened blue dots) by the dimension at which the median of the 3 runs passes the 90% performance threshold.
The variance of d_int90 is estimated using 50 bootstrap samples.
Note that the variance of both the accuracy and the measured d_int90 for a given hyper-parameter setting is generally small, and the mean performance monotonically increases (very similar to the single-run result) as d increases.
This illustrates that the difference between lucky vs. unlucky random projections has little impact on the quality of solutions, while the subspace dimensionality has a great impact.
We hypothesize that the variance due to different P matrices will be smaller than the variance due to different random initial parameter vectors θ_0, and that aspects of the network depending on smaller numbers of random samples will exhibit greater variance.
Hence, in some other experiments we rely on single runs to estimate the intrinsic dimension, though slightly more accurate estimates could be obtained via multiple runs.
In similar manner to the above, in FIG8 we show the relationship between d_int90 and D across 20 networks but using a per-model, directly trained baseline.
Most baselines are slightly below 100% accuracy.
This is in contrast to FIG3 , which used a simpler global baseline of 100% across all models.
Results are qualitatively similar but with slightly lower intrinsic dimension due to slightly lower thresholds. | We train in random subspaces of parameter space to measure how many dimensions are really needed to find a solution. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:780 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Graph Neural Networks as a combination of Graph Signal Processing and Deep Convolutional Networks shows great power in pattern recognition in non-Euclidean domains.
In this paper, we propose a new method to deploy two pipelines based on the duality of a graph to improve accuracy.
By exploring the primal graph and its dual graph where nodes and edges can be treated as one another, we have exploited the benefits of both vertex features and edge features.
As a result, we have arrived at a framework that has great potential in both semi-supervised and unsupervised learning.
Convolutional Neural Networks (CNNs) (Lecun et al. (1998)) have been used very successfully for automated feature extraction in Euclidean domains, especially for computer vision tasks such as 2D image classification, object detection, etc.
However, much real-life data has a non-Euclidean graph structure in nature, from which we want to investigate the underlying relations among different objects by utilizing the representation of nodes and edges.
Recently, research on applying the generalization of Convolutional Neural Networks to the non-Euclidean domains has attracted growing attention.
As a result, a branch of research on Geometric Deep Learning (Bruna et al. (2013) ) based on that has been ignited.
Previous works including ChebNet (Defferrard et al. (2016) ) and GCN (Kipf & Welling (2017) ) have demonstrated strong results in solving problems in semi-supervised learning where the labels of only a few objects are given, and we want to find out the labels of other objects through their inner connections.
Current methods generalizing convolution operations include both spatial and spectral domains (Bruna et al. (2013) ).
The spatial one deals with each node directly in the vertex domain while the spectral one takes a further step in converting signals via graph Fourier transform into the spectral domain.
However, one critical weakness is that the interchangeable and complementary nature of nodes and edges is generally ignored in previous research.
As a result, the duality of the graph is not fully utilized.
If we treat the edges of the original graph, known as the primal graph, as the nodes of a new graph, and the original nodes as edges, we arrive at a new graph that further exploits the benefits of edge features.
In such a way, we are able to get both the primal graph and the dual graph (Monti et al. (2018) ).
By combining both the vertex features and the edge features, we will be able to solve a wider range of problems and achieve better performance.
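The primal-to-dual construction can be illustrated in a few lines of plain Python (a toy example, not the paper's implementation): every primal edge becomes a dual node, and two dual nodes are connected whenever the corresponding primal edges share an endpoint.

from itertools import combinations

primal_edges = [(0, 1), (1, 2), (2, 0), (2, 3)]      # toy undirected primal graph

dual_nodes = list(range(len(primal_edges)))          # one dual node per primal edge

dual_edges = []
for i, j in combinations(dual_nodes, 2):
    if set(primal_edges[i]) & set(primal_edges[j]):  # shared endpoint in the primal graph
        dual_edges.append((i, j))

print(dual_edges)   # [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

Edge features attached to primal_edges would then be carried over as the node features of the dual graph, so a standard graph convolution can be run in each of the two pipelines.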
In this paper, we propose a new approach to transform the primal graph into its dual form and have implemented two pipelines based on these two forms of graph to improve the accuracy and the performance.
With two pipelines, we also exploited a path to make the model wider instead of merely deeper.
Meanwhile, we have developed a new framework that can later be applied to both semi-supervised and unsupervised learning.
In this work, we propose the TwinGCN with parallel pipelines working on both the primal graph and its dual graph, respectively.
TwinGCN achieves the state-of-the-art performance in semisupervised learning tasks.
Moreover, TwinGCN's ability is not limited to this; we can extend its power into unsupervised learning by altering its loss functions. | A primal dual graph neural network model for semi-supervised learning | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:781 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We describe two end-to-end autoencoding models for semi-supervised graph-based dependency parsing.
The first model is a Local Autoencoding Parser (LAP) encoding the input using continuous latent variables in a sequential manner; the second model is a Global Autoencoding Parser (GAP) encoding the input into dependency trees as latent variables, with exact inference.
Both models consist of two parts: an encoder enhanced by deep neural networks (DNN) that can utilize the contextual information to encode the input into latent variables, and a decoder which is a generative model able to reconstruct the input.
Both LAP and GAP admit a unified structure with different loss functions for labeled and unlabeled data with shared parameters.
We conducted experiments on WSJ and UD dependency parsing data sets, showing that our models can exploit the unlabeled data to boost the performance given a limited amount of labeled data.
Dependency parsing captures bi-lexical relationships by constructing directional arcs between words, defining a head-modifier syntactic structure for sentences, as shown in Figure 1 .
Dependency trees are fundamental for many downstream tasks such as semantic parsing (Reddy et al., 2016; , machine translation (Bastings et al., 2017; Ding & Palmer, 2007) , information extraction (Culotta & Sorensen, 2004; Liu et al., 2015) and question answering (Cui et al., 2005) .
As a result, efficient parsers (Kiperwasser & Goldberg, 2016; Ma et al., 2018) have been developed using various neural architectures.
While supervised approaches have been very successful, they require large amounts of labeled data, particularly when neural architectures are used.
Syntactic annotation is notoriously difficult and requires specialized linguistic expertise, posing a serious challenge for low-resource languages.
Semi-supervised parsing aims to alleviate this problem by combining a small amount of labeled data and a large amount of unlabeled data, to improve parsing performance over using labeled data alone.
Traditional semi-supervised parsers use unlabeled data to generate additional features, assisting the learning process (Koo et al., 2008) , together with different variants of self-training (Søgaard & Rishøj, 2010) .
However, these approaches are usually pipe-lined and error-propagation may occur.
In this paper, we propose two end-to-end semi-supervised parsers based on probabilistic autoencoder models illustrated in Figure 3 , Locally Autoencoding Parser (LAP) and Globally Autoencoding Parser (GAP).
In LAP, continuous latent variables are used to support tree inference by providing a better representation, while in GAP, the latent information forms a probability distribution over dependency trees corresponding to the input sentence.
A similar idea has been proposed by Corro & Titov (2018) , but our GAP model differs fundamentally from their parser, as GAP does not sample from the posterior of the latent tree structure to approximate the Evidence Lower Bound (ELBO).
Instead it relies on a tractable algorithm to directly compute the posterior to calculate the ELBO.
We summarize our contributions as follows:
1. We propose two autoencoding parsers for semi-supervised dependency parsing, with complementary strengths, trading off speed vs. accuracy;
2. We propose a tractable inference algorithm to compute the expectation and marginalization of the latent dependency-tree posterior analytically for GAP, avoiding sampling from the posterior to approximate the expectation (Corro & Titov, 2018); a sketch of this kind of computation is given below;
3. We show empirically improved performance of both LAP and GAP with unlabeled data on WSJ and UD data sets, and improved results of GAP compared to a recently proposed semi-supervised parser (Corro & Titov, 2018).
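For contribution 2, the kind of analytic computation involved can be sketched as follows (a hedged illustration, not the authors' code): the Matrix-Tree theorem gives the partition function of a non-projective dependency-tree distribution as a determinant, and automatic differentiation of its log recovers the arc marginals exactly. This is the common multi-root variant with toy random scores; GAP's exact scoring, root handling, and decoder may differ.

import torch

torch.manual_seed(0)
n = 4                                         # toy sentence length
arc = torch.randn(n, n, requires_grad=True)   # arc[h, m]: score of head h -> modifier m
root = torch.randn(n, requires_grad=True)     # root[m]: score of ROOT -> m

A = arc.exp() * (1 - torch.eye(n))            # exponentiated scores, no self-loops
rho = root.exp()
L = torch.diag(rho + A.sum(dim=0)) - A        # Laplacian minor with ROOT folded in
log_Z = torch.logdet(L)                       # log of the sum over all dependency trees

log_Z.backward()                              # d log_Z / d score = exact arc marginal
print(arc.grad)                               # P(h -> m) under the tree posterior
print(root.grad + arc.grad.sum(dim=0))        # head marginals of each word sum to 1

These marginals are the expectations needed for the objective, obtained in closed form rather than by sampling trees from the posterior.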
In this paper, we present two semi-supervised parsers, which are locally autoencoding parser (LAP) and globally autoencoding parser (GAP).
Both of them are end-to-end learning systems enhanced with neural architecture, capable of utilizing the latent information within the unlabeled data together with labeled data to improve the parsing performance, without using external resources.
More importantly, our GAP model outperforms the previously published (Corro & Titov, 2018) semi-supervised parsing system on the WSJ data set.
We attribute this success to two reasons. First, our GAP model consists of both a discriminative component and a generative component.
These two components constrain and supplement each other such that final parsing choices are made in a checked-and-balanced manner to avoid over-fitting.
Second, instead of sampling from the posterior of the latent variable (the dependency tree) (Corro & Titov, 2018), our model analytically computes the expectation and marginalization of the latent variable, such that the global optimum can be found for the decoder, which leads to improved performance.
A APPENDIX | We describe two end-to-end autoencoding parsers for semi-supervised graph-based dependency parsing. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:782 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We improve previous end-to-end differentiable neural networks (NNs) with fast weight memories.
A gate mechanism updates fast weights at every time step of a sequence through two separate outer-product-based matrices generated by slow parts of the net.
The system is trained on a complex sequence-to-sequence variation of the Associative Retrieval Problem with roughly 70 times more temporal memory (i.e. time-varying variables) than similar-sized standard recurrent NNs (RNNs).
In terms of accuracy and number of parameters, our architecture outperforms a variety of RNNs, including Long Short-Term Memory, Hypernetworks, and related fast weight architectures.
Recurrent Neural Networks (RNNs) are general parallel-sequential computers that can implement algorithms which map input sequences to output sequences.
One variation of it, the Long Short-Term Memory (LSTM), has achieved great success on a wide variety of Machine Learning tasks such as natural language translation, image caption generation, and speech recognition among others BID11; BID6; BID7.
In practical applications, most RNNs are actually LSTM networks, now used billions of times per day for automatic translation BID21, speech recognition Sak et al., and many other tasks BID13 BID17.
However, plain RNNs, but also LSTMs, are known to have difficulty in performing memorization, such as a simple copying task of outputting the same sequence as the input sequence BID22.
Other, more high-level cognitive tasks have also been shown to be difficult to master BID2.
In this work, we explore a generalization of the Associative Retrieval problem.
We follow a similar style as in BID2 but turned the task into a general sequence-to-sequence problem and also substantially increased its complexity.
The underlying mechanism is essentially a dictionary with a certain number of key-value pairs which is controlled using a simple syntax of storage and query tokens.
In order to overcome the limitation of current RNNs on this task, we propose a fast weight architecture that is able to learn and generalize using much fewer parameters.
Our architecture consists of two networks, s and f, which both operate on the input sequence in parallel.
The small network f predicts the targets while the big network s generates on-the-fly weight updates for f.
The big network s is called the slow network because its weights change only after every mini-batch according to the gradient-based learning algorithm.
f, on the other hand, is called the fast network because its weights can change after every time step.
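The exact parameterization of the update is not given in this excerpt. Purely as an illustration of the general shape described in the abstract (a gate and two separate outer-product-based matrices generated from the slow state), one possible form is sketched below; the convex-combination write, the projection heads, and all sizes are assumptions of this sketch, not the paper's equations.

import torch

torch.manual_seed(0)
hidden_slow, hidden_fast = 64, 16
F = torch.zeros(hidden_fast, hidden_fast)        # fast weight matrix (time-varying memory)
h_slow = torch.randn(hidden_slow)                # slow network state at the current step

# Projection heads of the slow net (random stand-ins for trained layers).
W_ga = torch.randn(hidden_fast, hidden_slow) * 0.1
W_gb = torch.randn(hidden_fast, hidden_slow) * 0.1
W_ua = torch.randn(hidden_fast, hidden_slow) * 0.1
W_ub = torch.randn(hidden_fast, hidden_slow) * 0.1

gate = torch.sigmoid(torch.outer(W_ga @ h_slow, W_gb @ h_slow))   # outer-product gate
update = torch.outer(W_ua @ h_slow, W_ub @ h_slow)                # outer-product content
F = gate * F + (1.0 - gate) * update             # gated write into the fast memory

h_fast = torch.tanh(F @ torch.randn(hidden_fast))   # fast net output computed with the fast weights

The point of such an arrangement is that the entries of F, not the state vectors, dominate the count of time-varying variables, which is where the roughly 70-fold increase in temporal memory quoted in the abstract comes from.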
It has been shown before how generalizing a memory mechanism, such as required in this task, is difficult for vanilla RNNs to learn.
Several previous works focused on integrating differentiable Figure 2 : The left figure represents the accuracy of non-trivial targets and the right figure the respective bits per character.
These are the validation set results of the best models of the four examined architectures due to our hyper parameter search.
Green is the LSTM, dark blue is the fast weights architecture as attention to the recent past, red is the hypernetwork, and cyan is our novel fast weight architecture.computer-like memory into the graph structure of the architecture such that the model wouldn't need to learn the mechanism itself but mainly how to use it.
Examples of such are differentiable stacks by BID3 ; BID12 , but also related storage types like those in LSTM-controlled Neural Turing Machines BID8 or memory nets BID20 .A
basic argument against a memory approach inspired by the Turing Machine or the von Neumann architecture is its biological plausibility, as well as, the fact that we know how the human memory system often doesn't really behave as computer memory does. It
is generally known to be much more nuanced and forcing an architecture to include a strong and possibly misleading bias would certainly limit its ability to learn and generalize to a more effective mechanism. We
think that learning high-level cognitive functions (i.e. high-level programs implemented under the constraints of some artificial neural substrate) is difficult and find the idea to search and reverse engineer every human capability necessary for intelligence in order to engineer it into an architecture to be undesirable. Instead
, we favour an approach which focuses on improving the capabilities of the artificial neural substrate which allows for the emergence of higher-level functions through training. We think
fast weights are such a component from which many models could benefit.Limitations Fast weights seem to have a positive effect when they are incorporated into an architecture but we experienced at least two practical limitations. While the
calculation of the gradient through these fast weight dynamics remains rather cheap, the number of values to be stored in the backward pass now encompasses all time-varying variables (i.e. all fast weights) at each relevant time step. This quadratically increases the memory consumption compared to a similarly sized RNN. At the moment, these memory limitations are the main reason why such fast weight networks remain rather small compared to state-of-the-art RNNs on some popular applications such as neural machine translation. Another noteworthy limitation is the wall-clock time necessary for computing a more complex architecture: reshaping tensors and other simple operations result in a significant increase in wall time. However, over 20 years ago it was pointed out that an RNN can also use additional, soft, end-to-end differentiable attention mechanisms to learn to control its own internal spotlights of attention BID15 , quickly associating self-defined patterns through fast weights (on connections between certain units) that can change quickly and dramatically from one time step to the next. This approach can essentially increase
the number of time-varying variables massively while keeping the model relatively small. We improved the update mechanism through which the slow network learns to write into its fast weight memory. This allows us to construct a model with a small but memory-expensive fast network in addition to the standard slow network. However, the fast weights are not just passive memory like the state, but rather active memory in the sense of a context-specific computation. We force the model to use this active memory at every step to predict the current output by delaying the weight updates from the slow network by one step. Consider the model introduced in the previous section. While the slow network is technically bigger, it contains only 40 time-varying variables, namely the state vector h_S. The fast network is much smaller but has 3840 time-varying variables (h_F, F^(1), and F^(2)), which significantly increases the total number of time-varying variables.
In this paper, we introduce a complex sequence to sequence variation of the Associative Retrieval problem.
In that problem, the model has to learn how to store a number of associations from the input sequence, retrieve them if necessary, and forget them to learn new associations.
We use a standard RNN to generate weight updates for a fast weight RNN.
This allows our model to store temporal information not only in the state of either RNN but also in the weights of the fast weight RNN.
Our contribution is a new way of updating the weight matrices of the fast weight RNN where we use a gate and two generated matrices instead of one.
Without our contribution, the model has never been shown to be able to learn any non-trivial predictions.
We compare it with other architectures on this general task and show how it outperforms them in convergence, accuracy, and number of parameters. | An improved Fast Weight network which shows better results on a general toy task. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:783 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The field of deep learning has been craving an optimization method that shows outstanding properties for both optimization and generalization.
We propose a method for mathematical optimization based on flows along geodesics, that is, the shortest paths between two points, with respect to the Riemannian metric induced by a non-linear function.
In our method, the flows refer to Exponentially Decaying Flows (EDF), as they can be designed to converge on the local solutions exponentially.
In this paper, we conduct experiments to show its high performance on optimization benchmarks (i.e., convergence properties), as well as its potential for producing good machine learning benchmarks (i.e., generalization properties).
Due to recent progress in the field of machine learning, it becomes more and more important to develop and sophisticate methods of solving hard optimization problems.
At the same time, in this field, such methods are additionally required to elicit decent generalization performance from statistical models.
An efficient method of mathematical optimization, however, does not always produce sufficient generalization properties, since these are involved with two distinct mathematical problems; The former is to find one of the solutions which minimize a given (possibly non-convex) objective function, and the latter is to adjust parameters so that a statistical estimator achieves its best result.
To address such a hard issue, we introduce a new mathematical perspective on optimization, and develop a method for machine learning based on this perspective.
We then empirically show its rapid convergence rate and high compatibility with deep learning techniques, as well as good statistical properties. In this field, many optimization methods have been proposed and modified so that they fit specific problems or models.
One of the current standard methods is the gradient descent method.
The method tends to converge slowly in general optimization problems.
However, with various specific techniques, such as mini-batch training and batch normalization BID9 ), it has been found to be efficient for state-of-the-art purposes in the field of deep learning.
Another class of methods that are now becoming popular and standard is adaptive methods, such as AdaGrad BID6 ) and Adam (Kingma & Ba (2015) ).
Compared to the gradient descent method, these methods have been shown to improve convergence rates with almost the same computational cost as the gradient descent method, but are reported to result in poor statistical outcomes in some cases of machine learning BID16 ). Another class of methods that has been thoroughly studied in the theory of mathematical optimization is second-order methods, such as the Newton method and the Gauss-Newton method. These
methods possess great convergence properties, and in particular, have a potential to overcome plateau's problems BID5 ). Furthermore
, when it comes to applications in stochastic settings, the method based on the Gauss-Newton Matrix (or Fisher information Matrix) is shown to asymptotically attain the best statistical result, which is called Fisher efficiency (see BID0 ). Despite these
attractive characteristics, the methods have not yet been spotlighted in the field of machine learning due to several severe drawbacks; They suffer from high computational cost in general and their useful properties are no longer guaranteed in practical settings (see Section 12 in BID12 ). One of the continuously
developing second-order methods in this field, K-FAC (BID1 , BID7 ), has empirically produced high convergence rates with relatively low computational cost. However, it still requires
much effort to become compatible with some deep learning techniques. In addition, it is unclear
whether the method has advantages in generalization performance. In our approach, by introducing a Riemannian metric induced by non-linear functions, we constitute dynamical systems which describe motions along the shortest route from arbitrary initial points to the zeros of non-linear functions on the corresponding Riemannian manifold, that is, geodesics with respect to the Riemannian metric. One of the remarkable characteristics
of our approach is that it enables us to flexibly design flows of such dynamical systems to control convergence rates. The results for the flows are then applicable
to mathematical optimization problems, in particular with deep neural network (DNN) models. In this paper, after providing the mathematical grounding
of our methods, we experimentally demonstrate their performance in various aspects, from convergence rates to statistical properties.
Obtaining good statistical results from limited available data is a critical goal in machine learning.
To reach this goal, while developing an effective model is an essential approach, eliciting the best performance from the fixed model through optimization is important as well.
In our study, to examine the performance of our optimization methods, Exponentially Decaying Flows (EDF) based methods, we explored their generalization properties pertaining to results of optimization.
Our experiments showed that EDF-based methods are more likely to achieve optimal solutions which generalize the test data well than other standard optimizers are.
Therefore, EDF-based methods are considered to be optimization methods that have a high potential in their application to various tasks, and thus, are worthwhile to be sophisticated through future studies.
In terms of computation of the EDF-based methods on a GPU, the Jacobian-vector product can be carried out at almost the same cost as the gradient of the loss function. In fact, multiplying a vector by the Jacobian and by its transpose (written as the R-op and L-op, respectively) is implemented in combination with gradients of scalar functions.
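As a hedged illustration of what the R-op and L-op compute, the NumPy sketch below approximates both products numerically for a toy non-linear map; the map, its sizes, and the finite-difference scheme are assumptions made only for this example. In an automatic-differentiation framework both products are exact, and the L-op in particular costs a single backward pass rather than the coordinate-wise loop used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy non-linear function F: R^10 -> R^5 standing in for the residual map.
W1, W2 = rng.normal(size=(32, 10)), rng.normal(size=(5, 32))
def F(x):
    return W2 @ np.tanh(W1 @ x)

def jvp(F, x, v, eps=1e-6):
    """Forward-difference approximation of J_F(x) @ v (the 'R-op')."""
    return (F(x + eps * v) - F(x - eps * v)) / (2 * eps)

def vjp(F, x, u, eps=1e-6):
    """Approximation of J_F(x)^T @ u (the 'L-op') via the gradient of the
    scalar function x -> u^T F(x); estimated coordinate-wise here purely for
    illustration, whereas autodiff obtains it with one backward pass."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (u @ F(x + e) - u @ F(x - e)) / (2 * eps)
    return g

x, v, u = rng.normal(size=10), rng.normal(size=10), rng.normal(size=5)
print(jvp(F, x, v).shape, vjp(F, x, u).shape)  # (5,) (10,)
```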
For the pseudo-code of the update scheme for EDF-G with the L/R-op, refer to Algorithm 1, and to Algorithm 2 for the particular case k = 1.
[Algorithm 1: Update scheme for EDF-G with non-preconditioned MINRES.]
FIG3 shows the results of experiments using EDF on simple examples that compare full-batch training with stochastic training.
In this example, a convolutional network similar to that used in Section 6.1 was employed on MNIST. The curve labeled "EDF F" depicts the result of full-batch training per step, and those labeled "EDF S" illustrate the results of stochastic training per epoch with a mini-batch of size 500.
| Introduction of a new optimization method and its application to deep learning. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:784 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We introduce Quantum Graph Neural Networks (QGNN), a new class of quantum neural network ansatze which are tailored to represent quantum processes which have a graph structure, and are particularly suitable to be executed on distributed quantum systems over a quantum network.
Along with this general class of ansatze, we introduce further specialized architectures, namely, Quantum Graph Recurrent Neural Networks (QGRNN) and Quantum Graph Convolutional Neural Networks (QGCNN).
We provide four example applications of QGNN's: learning Hamiltonian dynamics of quantum systems, learning how to create multipartite entanglement in a quantum network, unsupervised learning for spectral clustering, and supervised learning for graph isomorphism classification.
Variational Quantum Algorithms are a promising class of algorithms that is rapidly emerging as a central subfield of Quantum Computing (McClean et al., 2016; Farhi et al., 2014; Farhi & Neven, 2018) .
Similar to parameterized transformations encountered in deep learning, these parameterized quantum circuits are often referred to as Quantum Neural Networks (QNNs).
Recently, it was shown that QNNs that have no prior on their structure suffer from a quantum version of the no-free lunch theorem (McClean et al., 2018) and are exponentially difficult to train via gradient descent.
Thus, there is a need for better QNN ansatze.
One popular class of QNNs has been Trotter-based ansatze (Farhi et al., 2014; Hadfield et al., 2019) .
The optimization of these ansatze has been extensively studied in recent works, and efficient optimization methods have been found (Verdon et al., 2019b; Li et al., 2019) .
On the classical side, graph-based neural networks leveraging data geometry have seen some recent successes in deep learning, finding applications in biophysics and chemistry (Kearnes et al., 2016) .
Inspired by this success, we propose a new class of Quantum Neural Network ansatz which allows for both quantum inference and classical probabilistic inference for data with a graph-geometric structure.
In the sections below, we introduce the general framework of the QGNN ansatz as well as several more specialized variants and showcase four potential applications via numerical implementation.
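To give a rough sense of what a single graph-structured quantum layer does, the sketch below classically simulates one Trotter-style step on three qubits whose couplings follow a triangle graph: an exponential of a graph-defined ZZ Hamiltonian followed by single-qubit X mixing. The graph, the two angles, and the use of dense matrix exponentials are illustrative assumptions only; the actual QGNN ansatze, their parameter sharing, and execution on quantum hardware are described in the paper.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and helpers for a tiny n-qubit state-vector simulation.
I2 = np.eye(2); Z = np.diag([1.0, -1.0]); X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_chain(mats):
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

def zz_on(n, i, j):
    return kron_chain([Z if k in (i, j) else I2 for k in range(n)])

def x_on(n, i):
    return kron_chain([X if k == i else I2 for k in range(n)])

# A 3-node graph (triangle) defines which qubits are coupled.
n, edges = 3, [(0, 1), (1, 2), (0, 2)]
theta_edge, theta_node = 0.3, 0.7     # variational parameters (arbitrary here)

H_graph = theta_edge * sum(zz_on(n, i, j) for i, j in edges)
H_mix = theta_node * sum(x_on(n, i) for i in range(n))

# One Trotter-style layer: graph coupling followed by single-qubit mixing.
psi = np.zeros(2 ** n, dtype=complex); psi[0] = 1.0   # |000>
psi = expm(-1j * H_mix) @ (expm(-1j * H_graph) @ psi)
print(np.round(np.abs(psi) ** 2, 3))   # output probabilities sum to 1
```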
Results featured in this paper should be viewed as a promising set of first explorations of the potential applications of QGNNs.
Through our numerical experiments, we have shown the use of these QGNN ansatze in the context of quantum dynamics learning, quantum sensor network optimization, unsupervised graph clustering, and supervised graph isomorphism classification.
Given that there is a vast set of literature on the use of Graph Neural Networks and their variants to quantum chemistry, future works should explore hybrid methods where one can learn a graph-based hidden quantum representation (via a QGNN) of a quantum chemical process.
As the true underlying process is quantum in nature and has a natural molecular graph geometry, the QGNN could serve as a more accurate model for the hidden processes which lead to perceived emergent chemical properties.
We seek to explore this in future work.
Other future work could include generalizing the QGNN to include quantum degrees of freedom on the edges, include quantum-optimization-based training of the graph parameters via quantum phase backpropagation (Verdon et al., 2018) , and extending the QSGCNN to multiple features per node. | Introducing a new class of quantum neural networks for learning graph-based representations on quantum computers. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:785 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Data breaches involve information being accessed by unauthorized parties.
Our research concerns user perception of data breaches, especially issues relating to accountability.
A preliminary study indicated many people had weak understanding of the issues, and felt they themselves were somehow responsible.
We speculated that this impression might stem from organizational communication strategies.
We therefore compared texts from organizations with external sources, such as the news media.
This suggested that organizations use well-known crisis communication methods to reduce their reputational damage, and that these strategies align with repositioning of the narrative elements involved in the story.
We then conducted a quantitative study, asking participants to rate either organizational texts or news texts about breaches.
The findings of this study were in line with our document analysis, and suggest that organizational communication affects the users' perception of victimization, attitudes in data protection, and accountability.
Our study suggests some software design and legal implications supporting users to protect themselves and develop better mental models of security breaches.
A data breach is a successful malicious attack which leads to the compromise or the loss of data [18] .
Personally Identifiable Information (PII) is often stored in organization databases, and if disclosed is at risk of misuse.
Depending on the size, scale, and type of stolen information, the potential consequences of a data breach can be huge.
A data breach can put people at risk of identity theft, which often happens through fraudulent use of existing accounts like credit cards, online accounts, and insurance.
It can also lead to financial loss and emotional distress [22] .
Despite the increased awareness of organizations and great emphasis by experts on security mechanisms, many organizations still maintain insufficient security practices on data collection, processing, and storage, so are unable to prevent data breaches and consequent misuse of the data.
Several recent occurrences follow this pattern, and data breaches at major companies, like Equifax, have exposed a massive number of consumers' records [17] .
Although such events have become commonplace, there appears to be little indication that end-users feel urgency about holding companies to account.
A 2016 study reports that by far most consumers kept doing business with companies after breaches [1] , and some high-profile commentary suggests "breach fatigue" has "set a new normal and instill a sense of fatalism -and complacency" [16] .
In a small preliminary study, we even found that participants often thought that they themselves were somehow responsible for data breaches.
According to Coombs [5] , the reputation of a company is based on the evaluation customers make about it.
Customer evaluations can be affected by the behavior of a company when a crisis like a data breach happens.
So, due to the significant financial loss and reputational damage caused by data breaches, companies try to reduce the damage using communication strategies in the after-breach notifications [9, 19] .
The crisis response strategies aim to reduce the negative effects of the crisis by changing the level of crisis responsibility.
For example, if a company frames themselves as victims of the situation and therefore positioned in what crisis communication theorists call the "victim cluster", they are likely to incur little blame for the crisis [5] .
User understanding of data breach incidents are important because it allows development of mental models to support reasoning about behavior and accountability [3] .
The goal of our research is to explore how breached companies and news media communicate with users, and how that might affect users' perception of a data breach incident.
To do so, we apply Image Repair Theory (IRT) [2] and a narrative-semiotics method [8] to the analysis of Equifax crisis communications to see how this incident is reported in the company press releases and the news.
We first conducted a communication study based on collected data from 58 stories related to this security breach crisis.
We then conducted a questionnaire study with 100 participants testing the influence of companies' notifications and news on the general public.
To the best of our knowledge, testing communication strategies' influence on users mental models of a data breach is original, and it shows HCI efforts on building user understanding of security can be undermined by organizational communication.
Our results also suggest a need for the improvement of software design, and delicate attention of communication professionals and legal scholars to the notifications created during and after a data breach.
The primary goal of our work was to explore how organization communications about data breaches might affect user perception.
To do this, we first studied the nature of the communication itself.
Using Image Repair Theory, we analyzed press releases posted on official company websites.
We found that Equifax press releases had characteristics consistent with tactics to reduce reputational damage and therefore financial loss.
Recognizing that the way the news media frames a crisis might be different to the framing in an organization's press releases, we next explored that issue.
We used techniques from narrative semiotics to examine the structure of the stories being told, and found that the agents were not positioned the same way.
Considering the first narrative story studied, our comparison of the Equifax press releases with news and GAO reports shows important differences with respect to the positioning of Equifax (see Figure 4) .
In the press releases, there was emphasis on Equifax as a helper, presenting the company's protection actions.
In the news and GAO reports there was emphasis on Equifax as an opponent, presenting the company's weak security protection of consumer data.
Moreover, the news media had a focus on the company and its security failure, whereas the company appears to use scapegoating as its primary crisis response strategy [4] , suggesting responsibility lay with a single unnamed IT staff member. The ethics of scapegoating is doubtful [10] , suggesting a manipulative approach used to deflect responsibility.
The company's apparently lax attitude in crisis response was heavily criticized by the media.
The news text suggests Equifax shares responsibility for this incident.
However, Equifax positioned itself as a receiver to emphasize it is a victim, a strategy that is consistent with an attempt to reduce its responsibility [5] .
Equifax appears to map all their actions to the helper category, in a manner consistent with Image Repair Theory.
For example, a bolstering strategy places the company in a helper position, deflecting responsibility by shifting the blame, scapegoating puts the company in victim position, and compensation strategies stress the company acts as a helper.
However, when the news media narrates the story, the mapping of the actions and agents goes to the opponent category, since the media is not concerned with Image Repair.
Our second step was to explore how the strategies used in the company press releases might influence the public understanding of data breaches.
We conducted a questionnaire study to see if data breach incident descriptions from different sources, the company's and the news media, result in different perceptions of the incident.
After reading the text extracted from the company's press release, participants tended to rate the company's after-breach action and security measures higher.
They also thought that the company was helping their customers and did not put them at any risk.
However, we got different results from participants who read the news texts reporting the same incident; participants disagreed that the company took the security seriously and their after breach protective actions were not acceptable to help the customers.
The company was regarded as a victim after reading the company's description; however, the news approach in narrating the data breach resulted in a different perception of participants.
This therefore confirmed our speculations based on our text analysis.
It also confirms the effectiveness of IRT and its relevance in crisis communication.
Of course, it is not surprising that companies tend to present themselves in a better light than the news media does.
Nor is it surprising that they used strategies that have been developed to help them to do this.
However, our text study shows that their Image Repair strategies exhibit some important characteristics.
In particular, they show differences in how agency is presented, which, in turn affects readers' understanding of what happened.
In this paper, we presented our study on communication about data breach events which exposed private consumer data.
We first analysed Equifax press releases and notifications to identify their strategies, and then analysed news stories and government reports on the same events; we studied 58 stories in all.
We found that the company used crisis communication strategies to reduce its reputational damage and financial loss.
Our analysis also showed that there are differences between press releases, major newspaper and technical news when reporting the same data breach incident.
In our narrative-semiotic analysis, we found the company mapped their after-breach actions into helper category; but the narrator of news reports mapped them into the opponent category.
These narrative changes affected reader perception about these data breaches.
Our questionnaire study revealed that the dissimilar approach detected in document analysis when narrating the same story from a different point of view (companies and news) has a considerable influence on the general public's perception of a data breach incident.
Large scale data breaches are a serious matter, not just for organizations, but for the thousands or millions of users who have private data exposed, making them vulnerable to a range of consequences.
Despite this, it is unclear if users understand what exactly has happened, where accountability lies, and how to proceed.
In work on human factors in computer security, it has often been found that users have only weak mental models of online threats and defences, e.g. [20, 3] .
When user data is exposed by a large scale data breach, communication with the user may well be primarily from the organization itself.
Our research suggests that communication from organizations may misrepresent the data breach events leading to misleading perceptions of the crisis and the company's accountability.
Design of software that stores sensitive personal information should support users in maintaining better awareness of data breaches. | "In this paper, we tested communication strategies' influence on users mental models of a data breach." | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:786 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Goal recognition is the problem of inferring the correct goal towards which an agent executes a plan, given a set of goal hypotheses, a domain model, and a (possibly noisy) sample of the plan being executed.
This is a key problem in both cooperative and competitive agent interactions and recent approaches have produced fast and accurate goal recognition algorithms.
In this paper, we leverage advances in operator-counting heuristics computed using linear programs over constraints derived from classical planning problems to solve goal recognition problems.
Our approach uses additional operator-counting constraints derived from the observations to efficiently infer the correct goal, and serves as basis for a number of further methods with additional constraints.
Agents that act autonomously on behalf of a human user must choose goals independently of user input and generate plans to achieve such goals ).
When such agents have complex sets of goals and require interaction with multiple agents that are not under the user's control, the resulting plans are likely to be equally complex and non-obvious for human users to interpret BID0 .
In such environments, the ability to accurately and quickly identify the goals and plans of all involved agents is key to provide meaningful explanation for the observed behavior.
Goal recognition is the problem of inferring one or more goals from a set of hypotheses that best account for a sequence of observations, given a fixed initial state, a goal state, and a behavior model of the agent under observation.
Recent approaches to goal recognition based on classical planning domains have leveraged data-structures and heuristic information used to improve planner efficiency to develop increasingly accurate and faster goal recognition algorithms BID1 BID2 .
Specifically, BID2 use heuristics based on planning landmarks BID1 ) to accurately and efficiently recognize goals in a wide range of domains with various degrees of observability and noise.
This approach, however, does not deal with noise explicitly, relying on the implicit necessity of landmarks in valid plans for goal hypotheses to achieve competitive accuracy with other methods BID3 , while increasing the number of recognized goals (spread). Thus, goal recognition under partial observability (i.e., missing observations) in the presence of noisy observations is a difficult problem to address while achieving reasonable recognition time (i.e., a few seconds), high accuracy, and low spread. In this paper, we address these limitations by leveraging recent advances in operator-counting heuristics (Pommerening et al. 2014; BID4 ). Operator-counting heuristics provide a unifying framework for a variety of sources of information from planning heuristics BID1 ) that provide both an estimate of the total cost of a goal from any given state and an indication of the actual operators likely to be in such plans. This information proves to be effective at differentiating between goal hypotheses in goal recognition. Our contributions are threefold. First, we develop three increasingly more accurate goal recognition approaches using operator-counting heuristics. Second, we empirically show that these heuristics are very effective at goal recognition, overcoming existing approaches in almost all domains in terms of accuracy while diminishing the spread of recognized goals. Such approaches are
substantially more effective for noisy settings. Third, we discuss a
broad class of operator-counting heuristics for goal recognition that can use additional constraints to provide even finer handling of noise and missing observations.
We developed a novel class of goal recognition techniques based on operator-counting heuristics from classical planning (Pommerening et al. 2014), which themselves rely on ILP constraints to estimate which operators occur in valid optimal plans towards a goal.
The resulting approaches are competitive with the state of the art in terms of high accuracy and low false positive rate (i.e., the spread of returned goals), at a moderate computational cost.
We show empirically that the overall accuracy of our best approach is substantially superior to the state of the art over a large dataset.
Importantly, the values of the operator-counting constraints we compute for each of the heuristics can be used as explanations for recognized goals.
The techniques described in this paper use a set of simple additional constraints in the ILP formulation to achieve substantial performance, so we expect substantial future work towards further goal recognition approaches and heuristics that explore more refined constraints to improve accuracy and reduce spread, as well as deriving a probabilistic approach using operator-counting information.
Examples of such work include using the constraints to force the LP to generate the counterfactual operator-counts (i.e., non-compliant with the observations) used by the R&G approach, or, given an estimate of the noise, relax the observation constraints to allow a number of observations to not be included in the resulting operator-counts.
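As a toy illustration of the flavour of these LP-based heuristics, the sketch below scores two goal hypotheses in an invented one-dimensional "corridor" domain. The domain, the operator costs, the single net-change constraint, and the scoring rule (the increase in the LP optimum when the observed operator is forced into the count) are all assumptions for exposition; the operator-counting constraints used in practice are substantially richer.

```python
import numpy as np
from scipy.optimize import linprog

# Toy corridor domain: one numeric feature (position), three operators.
ops = ["left", "right", "jump_right"]
cost = np.array([1.0, 1.0, 3.0])         # objective: minimise total cost
net_change = np.array([-1.0, 1.0, 2.0])  # effect of each operator on position

def oc_heuristic(goal_delta, observed_counts=None):
    """LP lower bound on plan cost via operator counts (toy net-change
    constraint only; real operator-counting LPs use richer constraints)."""
    obs = observed_counts if observed_counts is not None else [0.0] * len(ops)
    A_eq = net_change.reshape(1, -1)
    b_eq = np.array([goal_delta])
    bounds = [(lo, None) for lo in obs]   # force observed operators into plan
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.fun

goals = {"G1 (pos=+4)": 4.0, "G2 (pos=-2)": -2.0}
observed = [0.0, 0.0, 1.0]   # we observed one "jump_right"

for name, delta in goals.items():
    h_plain = oc_heuristic(delta)
    h_obs = oc_heuristic(delta, observed)
    print(name, "unconstrained:", h_plain, "with observations:", h_obs,
          "score:", h_obs - h_plain)
# The goal whose LP value increases least when the observations are forced
# into the operator counts (here G1) would be returned as the recognised goal.
```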
| A goal recognition approach based on operator counting heuristics used to account for noise in the dataset. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:787 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
It can be challenging to train multi-task neural networks that outperform or even match their single-task counterparts.
To help address this, we propose using knowledge distillation where single-task models teach a multi-task model.
We enhance this training with teacher annealing, a novel method that gradually transitions the model from distillation to supervised learning, helping the multi-task model surpass its single-task teachers.
We evaluate our approach by multi-task fine-tuning BERT on the GLUE benchmark.
Our method consistently improves over standard single-task and multi-task training.
Building a single model that jointly learns to perform many tasks effectively has been a longstanding challenge in Natural Language Processing (NLP).
However, applying multi-task NLP remains difficult for many applications, with multitask models often performing worse than their single-task counterparts BID30 BID1 BID25 .
Motivated by these results, we propose a way of applying knowledge distillation BID3 BID0 BID14 so that single-task models effectively teach a multi-task model. Knowledge distillation transfers knowledge from a "teacher" model to a "student" model by training the student to imitate the teacher's outputs.
In "born-again networks" BID10 , the teacher and student have the same neural architecture and model size, but surprisingly the student is able to surpass the teacher's accuracy.
Intuitively, distillation is effective because the teacher's output distribution over classes provides more training signal than a one-hot label; BID14 suggest that teacher outputs contain "dark knowledge" capturing additional information about training examples.
Our work extends born-again networks to the multi-task setting.
We compare Single→Multi 1 born-again distillation with several other variants (Single→Single and Multi→Multi), and also explore performing multiple rounds of distillation (Single→Multi→Single→Multi) .
Furthermore, we propose a simple teacher annealing method that helps the student model outperform its teachers.
Teacher annealing gradually transitions the student from learning from the teacher to learning from the gold labels.
This method ensures the student gets a rich training signal early in training but is not limited to only imitating the teacher. Our experiments build upon recent success in self-supervised pre-training BID7 BID28 and multi-task fine-tuning of BERT BID8 to perform the tasks from the GLUE natural language understanding benchmark BID41 .
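A minimal sketch of one natural reading of teacher annealing is given below: the training target is a convex combination of the teacher's output distribution and the gold label, with the mixing weight moving linearly from the teacher to the gold labels over training. The linear schedule, the exact mixing form, and the toy logits are assumptions for illustration rather than the precise recipe used here.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def annealed_distillation_loss(student_logits, teacher_probs, gold_onehot,
                               step, total_steps):
    """Cross-entropy against a target that moves from the single-task
    teacher's distribution to the gold label as training progresses."""
    lam = step / float(total_steps)          # 0 -> imitate teacher, 1 -> gold
    target = lam * gold_onehot + (1.0 - lam) * teacher_probs
    log_p = np.log(softmax(student_logits) + 1e-12)
    return -(target * log_p).sum(axis=-1).mean()

# Tiny illustration with made-up logits for a 3-class task.
student = np.array([[2.0, 0.5, -1.0]])
teacher = softmax(np.array([[3.0, 0.0, -2.0]]))
gold = np.array([[1.0, 0.0, 0.0]])
for step in (0, 5_000, 10_000):
    print(step, annealed_distillation_loss(student, teacher, gold, step, 10_000))
```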
Our training method, which we call Born-Again Multi-tasking (BAM) 2 , consistently outperforms standard single-task and multi-task training.
Further analysis shows the multi-task models benefit from both better regularization and transfer between related tasks.
1 We use Single→Multi to indicate distilling single-task "teacher" models into a multi-task "student" model.
2 Code is available at https://github.com/google-research/google-research/tree/master/bam
We have shown that Single→Multi distillation combined with teacher annealing produces results consistently better than standard single-task or multi-task training.
Achieving robust multi-task gains across many tasks has remained elusive in previous research, so we hope our work will make multi-task learning more broadly useful within NLP.
However, with the exception of closely related tasks with small datasets (e.g., MNLI helping RTE), the overall size of the gains from our multi-task method are small compared to the gains provided by transfer learning from self-supervised tasks (i.e., BERT).
It remains to be fully understood to what extent "self-supervised pre-training is all you need" and where transfer/multi-task learning from supervised tasks can provide the most value. | distilling single-task models into a multi-task model improves natural language understanding performance | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:788 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The detection of out of distribution samples for image classification has been widely researched.
Safety critical applications, such as autonomous driving, would benefit from the ability to localise the unusual objects causing the image to be out of distribution.
This paper adapts state-of-the-art methods for detecting out of distribution images for image classification to the new task of detecting out of distribution pixels, which can localise the unusual objects.
It further experimentally compares the adapted methods on two new datasets derived from existing semantic segmentation datasets using PSPNet and DeeplabV3+ architectures, as well as proposing a new metric for the task.
The evaluation shows that the performance ranking of the compared methods does not transfer to the new task and every method performs significantly worse than their image-level counterparts.
Figure 1: Image from the LostAndFound dataset (Pinggera et al., 2016) , where two unlikely objects (storage crates) are almost entirely incorrectly predicted to be road.
The Max Softmax method clearly highlights these crates as OOD.
(best viewed in colour)
Many applications using machine learning (ML) may benefit from out of distribution (OOD) detection to improve safety.
When inputs are determined to be out of distribution, the output of an ML algorithm should not be trusted.
A large body of research exists for detecting entire images as OOD for the task of image classification.
Image-level OOD detection outputs a classification for the entire image; this coarse level of detection may be inadequate for many safety critical applications, including autonomous driving.
Most of the pixels in an image taken from an onboard camera will be in distribution (ID), i.e. an image of a road scene with cars, people, and roadway-but an unusual object that was not part of the training set may cause only a small number of OOD pixels.
Extending the framework to semantic segmentation networks will allow each pixel to have an "in" or "out of" distribution classification.
Applied to autonomous driving, groups of pixels classified as OOD would be considered as unknown objects.
Depending on the location of the unknown objects, a planner would then proceed with caution or hand over control to a safety driver.
Another application is automatic tagging of images with OOD objects, which would then be sent for human labelling.
Figure 1 shows a failure case where OOD detection is beneficial.
The two crates are predicted as road.
The right image of this figure shows the result of pixel-level OOD detection using one of the proposed methods, which clearly identifies the unusual objects.
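As a concrete example of the simplest such adaptation, the sketch below computes a per-pixel OOD score from the maximum softmax probability of a segmentation network's logits; the array shapes, the random logits, and the threshold are arbitrary placeholders, and the other compared methods (e.g. ODIN or Mahalanobis) are not shown.

```python
import numpy as np

def pixel_ood_scores(logits):
    """Per-pixel OOD score from the maximum softmax probability.

    logits: array of shape (C, H, W) from a semantic segmentation network.
    Returns an (H, W) map where higher values mean 'more out of distribution'.
    """
    z = logits - logits.max(axis=0, keepdims=True)       # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    return 1.0 - probs.max(axis=0)

# Dummy example: 19 Cityscapes-style classes on a small crop.
rng = np.random.default_rng(0)
logits = rng.normal(size=(19, 64, 128))
scores = pixel_ood_scores(logits)
ood_mask = scores > 0.8   # threshold is a free parameter, chosen per dataset
print(scores.shape, ood_mask.mean())
```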
This paper adapts existing state-of-the-art image-level OOD detection methods to the new task of pixel-level OOD classification and compares their performance on a new dataset designed for this task.
In addition to adapting the methods, we address the question of whether the best-performing image-level methods maintain their performance when adapted to the new task.
In order to answer this question, we also propose pixel-level OOD detection performance metrics, drawing both on existing image-level OOD detection and semantic segmentation performance metrics.
Further, we design two new datasets for pixel-level OOD detection with test images that contain both pixels that are in distribution and pixels that are out of distribution, evaluated with two different network architectures-PSPNet (Zhao et al., 2016) and DeeplabV3+ (Chen et al., 2018) .
Somewhat surprisingly, our evaluation shows that the best performing pixel-level OOD detection methods were derived from image-level OOD detection methods that were not necessarily the best performing on the image-level OOD detection task.
In summary, the contributions of this paper are the following:
• adaptation of image-level OOD detection methods to pixel-level OOD detection and their evaluation;
• training and evaluation datasets for pixel-level OOD detection evaluation derived from existing segmentation datasets; and
• a new metric for pixel-level OOD detection, called MaxIoU.
The drop in performance for pixel-level OOD detection is likely due to features that cause large disruptions at the pixel-level, but would not affect an entire image; for example, shadows, occlusion, and far away objects.
Figure 7 shows an example of shadows and far away objects in the bottom row.
At the end of the road, most pixels are high OOD values as well as the right side of the scene, which is in the shade of a building.
The top row of Figure 7 shows an interesting failure case of a flooded road being predicted as road with a low OOD value.
As can be seen in all example outputs, class boundaries are highlighted.
A classical computer vision algorithm was developed, using a series of erosion, dilation and other filters to remove these boundaries.
In general, performance was increased; however, the increase was on the order of 10^-3.
Several methods for detecting OOD pixels were adapted from image-level OOD detection, as well as a pixel uncertainty estimation.
These methods were compared using metrics previously established by OOD detection works, as well as a new metric that has roots in the semantic segmentation task.
This paper also contributed two new datasets for pixel-level OOD classification derived from semantic segmentation datasets that have common classes but also unique ones.
There is great room for improvement for pixel-level OOD detection.
One shortcoming for all the methods compared in this paper is the ability to distinguish between class boundary pixels and OOD pixels.
We tested classical computer vision techniques that could be used to visually fix this problem, but the performance increase was negligible.
The ODIN and Mahalanobis methods have the best performance with PSPNet and the SUN dataset, beating the VarSum, Mutual Information, and Confidence methods by a significant margin.
However, Mutual Information has the best performance with DeeplabV3+ and the IDD dataset, with the other methods following closely.
Therefore the ODIN, Mahalanobis, and Mutual Information methods should be considered the baseline for further research in pixel-level OOD detection.
Understanding the faults of pixel-level OOD detectors is crucial for progress.
This would include categorising the failure cases of a detector.
For example, understanding why a flooded road is not highlighted, and what makes that different to shadows falsely being highlighted. | Evaluating pixel-level out-of-distribution detection methods on two new real world datasets using PSPNet and DeeplabV3+. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:789 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Bayesian inference offers a theoretically grounded and general way to train neural networks and can potentially give calibrated uncertainty.
However, it is challenging to specify a meaningful and tractable prior over the network parameters, and deal with the weight correlations in the posterior.
To this end, this paper introduces two innovations:
(i) a Gaussian process-based hierarchical model for the network parameters based on recently introduced unit embeddings that can flexibly encode weight structures, and
(ii) input-dependent contextual variables for the weight prior that can provide convenient ways to regularize the function space being modeled by the network through the use of kernels.
We show these models provide desirable test-time uncertainty estimates, demonstrate cases of modeling inductive biases for neural networks with kernels and demonstrate competitive predictive performance on an active learning benchmark.
The question of which priors one should use for Bayesian neural networks is largely unanswered, as two considerations need to be balanced: First, we want to keep inference in the high dimensional weight posterior tractable; Second, we desire to express our beliefs about the properties of the modeled functions compactly by modeling the collection of weights.
Especially the latter is typically hard, as functional regularization for weight-based models is non-trivial.
In order to cope with richer posterior inference than mean-field typically achieves, a variety of structured posterior models have been proposed recently, for instance utilizing radial posteriors (Oh et al., 2019) , or rich weight posteriors based on Gaussian processes (Louizos and Welling, 2016) .
When it comes to modeling priors on weights with correlations, recent work has attempted to capture feature-level correlations using for instance a horseshoe prior (Ghosh et al., 2018) .
One interesting direction of inquiry has focused on utilizing hyper-networks in order to model distributions over weights for an entire network (Ha et al., 2016; Pradier et al., 2018) , or alternatively to utilize unit-level variables combined with compact hyper-networks to regress to single weights and capture weight correlations through the auxiliary variables (Karaletsos et al., 2018) .
We propose to tackle some of the challenges in modeling weight priors by extending the latter work and combining it with ideas from the Gaussian process literature to replace the hyper-network with a Gaussian process prior over weights.
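A minimal sketch of this basic idea, under assumptions of our own choosing (an RBF kernel, two-dimensional embeddings, a single layer), is shown below: each weight is indexed by the concatenated embeddings of the units it connects, and one draw from the resulting GP yields a correlated weight matrix. The input-dependent contextual variables and compositional kernels of the full model are not included.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

# A layer with 4 input units and 3 output units; each unit gets an embedding.
n_in, n_out, d_emb = 4, 3, 2
z_in = rng.normal(size=(n_in, d_emb))
z_out = rng.normal(size=(n_out, d_emb))

# Each weight w_ij is indexed by the concatenated embeddings [z_i, z_j].
pairs = np.array([np.concatenate([z_in[i], z_out[j]])
                  for i in range(n_in) for j in range(n_out)])

K = rbf(pairs, pairs) + 1e-6 * np.eye(len(pairs))   # GP covariance over weights
L = np.linalg.cholesky(K)
w = L @ rng.normal(size=len(pairs))                  # one sample from the prior
W = w.reshape(n_in, n_out)
print(W.shape)   # correlated weights induced by shared unit embeddings
```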
We explore the use of compositional kernels to add input-dependence to the prior for our model and obtain rich models with beneficial properties in tasks such as active learning, and generalization, while maintaining tractable inference properties. | We introduce a Gaussian Process Prior over weights in a neural network and explore its ability to model input-dependent weights with benefits to various tasks, including uncertainty estimation and generalization in the low-sample setting. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:79 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We present a domain adaptation method for transferring neural representations from label-rich source domains to unlabeled target domains.
Recent adversarial methods proposed for this task learn to align features across domains by ``fooling'' a special domain classifier network.
However, a drawback of this approach is that the domain classifier simply labels the generated features as in-domain or not, without considering the boundaries between classes.
This means that ambiguous target features can be generated near class boundaries, reducing target classification accuracy.
We propose a novel approach, Adversarial Dropout Regularization (ADR), which encourages the generator to output more discriminative features for the target domain.
Our key idea is to replace the traditional domain critic with a critic that detects non-discriminative features by using dropout on the classifier network.
The generator then learns to avoid these areas of the feature space and thus creates better features.
We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvements over the state of the art.
Transferring knowledge learned by deep neural networks from label-rich domains to new target domains is a challenging problem, especially when the source and target input distributions have different characteristics.
Such domain shifts occur in many practical applications.
For example, while simulated driving images rendered by games provide a rich source of labeled data for semantic segmentation BID19 , deep models trained on such source data do not transfer well to real target domains ( FIG0 ).
When target-domain labels are unavailable for fine-tuning, unsupervised domain adaptation must be applied to improve the source model.
Recent methods for unsupervised domain adaptation attempt to reduce the discrepancy between the source and target features via adversarial learning BID28 ; BID4 ).
They divide the base network into a feature encoder G and classifier C, and add a separate domain classifier (critic) network D. The critic takes the features generated by G and labels them as either source-or target-domain.
The encoder G is then trained with an additional adversarial loss that maximizes D's mistakes and thus aligns features across domains. However, a major drawback of this approach is that the critic simply predicts the domain label of the generated point and does not consider category information.
Thus the generator may create features that look like they came from the right domain, but are not discriminative.
In particular, it can generate points close to class boundaries, as shown in FIG0
(e), which are likely to be misclassified by the source model.
We argue that to achieve good performance on the target data, the adaptation model must take the decision boundaries between classes into account while aligning features across domains ( FIG0 ).
Moreover, since our setting is unsupervised adaptation, this must be accomplished without labels on target data. In this paper, we propose a novel adversarial alignment technique that overcomes the above limitation and preserves class boundaries.
We make the following observation: if the critic could detect points near the decision boundary, then the generator would have to avoid these areas of the feature space in order to fool the critic.
Thus the critic would force the generator to create more discriminative features.
How can we obtain such a critic?
If we alter the boundary of the classifier C slightly and measure the change in the posterior class probability p(y|x), where y and x denote the class and input respectively, then samples near the decision boundary are likely to have the largest change.
[Figure: We propose to use the boundary information to achieve low-density separation of aligned points.]
In fact, this posterior discrepancy is inversely proportional to the distance from the class boundary.
We thus propose to maximize this posterior discrepancy to turn C into a critic sensitive to nondiscriminative points.
We call this technique Adversarial Dropout Regularization.
Here, dropout is not used in the standard way, which is to regularize the main classifier and make it insensitive to noise.
Instead, we use dropout in an adversarial way, to transform the classifier into a critic sensitive to noise.
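The sketch below illustrates the quantity being fought over, under illustrative assumptions (a linear classifier head, an L1 distance between the two predictions, arbitrary feature sizes): two independent dropout masks are applied to the same features, and the discrepancy between the resulting class distributions serves as the critic's signal. The actual networks, divergence, and training procedure may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classifier(feat, W, drop_mask):
    """Linear classifier head applied to dropout-masked features."""
    return softmax((feat * drop_mask) @ W)

def dropout_discrepancy(feat, W, p=0.5):
    """Sensitivity of C's prediction to two independent dropout draws."""
    m1 = (rng.random(feat.shape) > p) / (1.0 - p)
    m2 = (rng.random(feat.shape) > p) / (1.0 - p)
    p1, p2 = classifier(feat, W, m1), classifier(feat, W, m2)
    return np.abs(p1 - p2).sum(axis=-1).mean()   # one simple choice of distance

# Dummy target-domain features from the generator G and a classifier C.
feat = rng.normal(size=(8, 32))
W = rng.normal(size=(32, 10))
print(dropout_discrepancy(feat, W))
# Training alternates: the critic (C with dropout) is updated to *maximise*
# this discrepancy on target features, while G is updated to *minimise* it,
# pushing target features away from the decision boundary.
```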
Compared to previous adversarial feature alignment methods, where the distributions p(x) are aligned globally, our method aligns target features away from decision boundaries, as illustrated in FIG0 (f). Our
ADR approach has several benefits. First
, we train the generator G with feedback from the classifier C, in contrast to existing methods, which use an unrelated critic D. Second, our method is general and straightforward to apply to a variety of domain adaptation problems, such as classification and semantic segmentation. Finally
, since ADR is trained to align distributions, it is also applicable to semi-supervised learning and training of generative models, such as Generative Adversarial Networks (GANs) BID6 ). Through
extensive experiments, we demonstrate the benefit of ADR over existing domain adaptation approaches, achieving state-of-the-art results in difficult domain shifts. We also
show an application to semi-supervised learning using GANs in the appendix.
In this paper, we introduced a novel approach for aligning deep representations, Adversarial Dropout Regularization, which learns to generate discriminative features for the target domain. The method consists of a critic network that can detect samples near the task decision boundary and a feature generator that fools the critic.
[Table 3: Results on adaptation from GTA5 → Cityscapes. DANN and FCN Wild denote methods proposed by BID4 and BID10, respectively.]
Our approach is general, applies to a variety of tasks, and does not require target domain labels.
In extensive domain adaptation experiments, our method outperformed baseline methods, including entropy minimization, and achieved state-of-the-art results on three datasets. We also show how to apply our method to train Generative Adversarial Networks for semi-supervised learning in the appendix. | We present a new adversarial method for adapting neural representations based on a critic that detects non-discriminative features. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:790 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
The use of deep learning for a wide range of data problems has increased the need for understanding and diagnosing these models, and deep learning interpretation techniques have become an essential tool for data analysts.
Although numerous model interpretation methods have been proposed in recent years, most of these procedures are based on heuristics with little or no theoretical guarantees.
In this work, we propose a statistical framework for saliency estimation for black box computer vision models.
We build a model-agnostic estimation procedure that is statistically consistent and passes the saliency checks of Adebayo et al. (2018).
Our method requires solving a linear program, whose solution can be efficiently computed in polynomial time.
Through our theoretical analysis, we establish an upper bound on the number of model evaluations needed to recover the region of importance with high probability, and build a new perturbation scheme for estimation of local gradients that is shown to be more efficient than the commonly used random perturbation schemes.
Validity of the new method is demonstrated through sensitivity analysis.
Deep learning models have achieved great predictive performance in many tasks.
However, these complex, often intractable models are difficult to interpret and understand.
This lack of interpretability is a major barrier for their wide adoption, especially in domains (e.g., medicine) where models need to be qualitatively understood and/or verified for robustness.
In order to address these issues, several interpretation approaches have been proposed in the last few years.
A group of methods are based on visualizations, either by quantifying the effect of particular neurons or features, or by creating new images that maximize the target score for specific classes (Erhan et al., 2009; Simonyan et al., 2013; Zeiler & Fergus, 2014) .
A large collection of the techniques build saliency maps by attributing the gradients of the neural network to the input image through various procedures or by finding perturbations that significantly change the output (Springenberg et al., 2014; Bach et al., 2015; Montavon et al., 2017; Shrikumar et al., 2017; Zhou et al., 2016; Selvaraju et al., 2017; Smilkov et al., 2017; Fong & Vedaldi, 2017; Adebayo et al., 2018a; Dumitru et al., 2018; Singla et al., 2019) .
Another class of approaches treat the deep learner as a black-box.
In this domain, Baehrens et al. (2010) use a Parzen window classifier to approximate the target classifier locally.
Ribeiro et al. (2016) propose the LIME procedure, where small perturbations on the instance are used to obtain additional samples with which a sparse linear model is fit.
Lundberg & Lee (2017) propose SHapley Additive exPlanations (SHAP), which combines the Shapley value from game theory with additive feature attribution methods.
They also make connections of the SHAP procedure with various existing methods including LRP, LIME and DeepLIFT.
Chen et al. (2019) propose L-and C-Shapley procedures which can reliably approximate the Shapley values in linear time with respect to the number of features.
The majority of the listed methods are heuristics constructed according to certain desirable qualities.
For these methods, it is not clear what the main estimand is, if it can be consistently estimated or if (and how) the estimand can be computed more efficiently.
In fact, according to recent research by Adebayo et al. (2018b), most methods that look convincing under visual inspection lack sensitivity to the model and the data-generating process.
Theoretical explanation for why guided back-propagation and deconvolutional methods perform image recovery is provided by Nie et al. (2018) .
In this work, we propose a statistically valid technique for model-agnostic saliency estimation, and prove its consistency under reasonable assumptions.
Furthermore, our method passes the sanity checks given by Adebayo et al. (2018b) .
Through our analysis, we obtain insights into how to improve the accuracy and reliability of our approach.
We note that there is recent work by Burns et al. (2019) where they provide a saliency estimation technique with theoretical guarantees -more specifically, FDR control.
Although their procedure is very promising from a statistical perspective, and theoretically valid under a very general set of assumptions, their technique requires human input and has a significant computational load as it uses a generative model for filling in certain regions of the target image.
Our main contributions are as follows:
• We introduce a new saliency estimation framework for CNNs and propose a new method based on input perturbation.
Our procedure requires solving a linear program, and hence the estimates can be computed very efficiently.
Furthermore, the optimization problem can be recast as a "parametric simplex" (Vanderbei, 2014) , which allows the computation of the full solution path in an expedient manner.
• We establish conditions under which the significant pixels in the input can be identified with high probability.
We present finite-sample convergence rates that can be used to determine the number of necessary model evaluations.
• We find that the noise distribution for the perturbation has a substantial effect on the convergence rate.
We propose a new perturbation scheme which uses a highly correlated Gaussian, instead of the widely used independent Gaussian distribution.
In the following section, we define the linearly estimated gradient (LEG), which is the saliency parameter of interest (i.e. the estimand), and introduce our statistical framework.
In section 3, we propose a regularized estimation procedure for LEG that penalizes the anisotropic total-variation.
We provide our theoretical results in Section 4 and the result of our numerical comparisons in Section 5.
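As a rough illustration of the perturbation-based local-linear estimation described above, the sketch below queries a black-box model at correlated Gaussian perturbations of an input and recovers a local gradient estimate by ordinary least squares. It is only a sketch under our own assumptions: the covariance choice, the plain least-squares fit (instead of the TV-penalized linear program proposed in the paper), and all names are illustrative.

```python
import numpy as np

def local_linear_saliency(f, x, n_samples=500, rho=0.9, seed=0):
    """Estimate a local linear approximation of a black-box model f around
    a (flattened, low-dimensional) input x using correlated Gaussian
    perturbations, then read the coefficients off as a saliency map."""
    rng = np.random.default_rng(seed)
    d = x.size
    # Highly correlated Gaussian perturbations: neighbouring coordinates
    # move together, a stand-in for the correlated scheme discussed above.
    idx = np.arange(d)
    cov = rho ** np.abs(idx[:, None] - idx[None, :])
    Z = rng.multivariate_normal(np.zeros(d), cov, size=n_samples)
    y = np.array([f(x.ravel() + z) - f(x.ravel()) for z in Z])
    # Least-squares fit y ~ Z @ g; g approximates the local gradient.
    g, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return g.reshape(x.shape)
```

A penalized variant would replace the least-squares step with the anisotropic-total-variation-regularized linear program proposed in the paper.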
We have proposed a statistical framework for saliency estimation that relies on local linear approximations.
Utilizing the new framework, we have built a computationally efficient saliency estimator that has theoretical guarantees.
Using our theoretical analysis, we have identified how the sample complexity of the estimator can be improved by altering the model evaluation scheme.
Finally, we have shown through empirical studies that
(i) unlike most of its competitors, our method passes the recently proposed sanity checks for saliency estimation; and
(ii) pixels identified through our approach are highly relevant for the predictions, and our method often chooses regions with higher saliency compared to regions suggested by its alternatives.
Our linear program can also be recast by a change of variables, setting $\alpha = D\gamma$.
In this case, the elements of $\alpha$ correspond to differences between adjacent pixels.
This program can be written in terms of $\alpha$, where $D^{+}$ is the pseudo-inverse of $D$ and $U_2$ is related to the left singular vectors of $D$. More precisely, letting $D = U\Theta V^{\top}$ denote the singular value decomposition of $D$, $U_2$ is the submatrix corresponding to the columns of $U$ for which $\Theta_j$ is zero.
The linearity constraint ensures that the differences between adjacent pixels are proper.
Derivation of the alternative formulation follows from Theorem 1 in Gaines et al. (2018) and is omitted.
This formulation can be expressed in the standard augmented form, i.e. $\min_{Ax=b,\,x\ge 0} c^{\top}x$, where $y = \frac{1}{n}\sum_{i=1}^{n} f(x_i)\,x_i$ and $m = 2p_1 p_2 - p_1 - p_2$.
The $\gamma$ coefficient in the original formulation can then be obtained from the solution.
A.2
PROOF OF THEOREM 1
Our proof depends on the following lemma.
Lemma 2.
For $L \ge 2\|D^{+}\|_{1}\sqrt{\log(p_1 p_2/\epsilon)/n}$, $\gamma^*$ is in the feasibility set with probability $1-\epsilon$, that is
Proof.
For ease of notation, let
We also assume that the images have been rescaled so that the maximum value of $x_i$ is 1 (without rescaling, the maximum would be given by the largest intensity, i.e. 255).
Since the function values are also in the range $[-2, 2]$, we can bound $|z_{i,j}|$, that is
The proof follows by applying McDiarmid's inequality (Vershynin, 2018) to each row of the difference and then taking the supremum over the terms.
By application of McDiarmid's inequality, we have that
Let $L = 2\|D^{+}\|_{1}\sqrt{\log(p_1 p_2/(2\epsilon))/n}$.
Then, taking a union bound over all variables, we have
Now note that the feasibility set for any $L' \ge L$ contains that of $L$, and thus $\gamma^*$ is automatically included.
We now present the proof of the theorem.
Note that the technique is based on the Confidence Set approach by Fan (2013) .
In the proof, we use γ to refer to vec(γ) for ease of presentation.
Proof.
First, let the high-probability set on which Lemma 2 holds be denoted by A. All of the following statements hold on A. We let $\Delta = D(\hat\gamma - \gamma^*)$.
We know that $\|D\hat\gamma\|_1 \le \|D\gamma^*\|_1$ since both are in the feasibility set, as stated in Lemma 2.
Let $\alpha^* = D\gamma^*$, $\hat\alpha = D\hat\gamma$, and define $S = \{j : \alpha^*_j \ne 0\}$, with complement $S^C$.
By assumption of the Theorem, the cardinality of $S$ is $s$, i.e. $|S| = s$.
Now let $\Delta_S$ denote the elements of $\Delta$ in $S$. Then, using the above statement, one can show that $\|\Delta_S\|_1 \ge \|\Delta_{S^C}\|_1$.
Note,
and $\|\Delta_S\|_1 \ge \|\Delta_{S^C}\|_1$ follows immediately.
Furthermore
where the last line uses the previous result.
Additionally, note that
where the first inequality follows by Hölder's inequality and the second follows from Lemma 2 and the fact that both $\hat\gamma$ and $\gamma^*$ are in the feasibility set for $L = 2\|D^{+}\|_{1}\sqrt{\log(p_1 p_2/\epsilon)/n}$.
We further bound the right-hand side of the inequality by using the previous result, which gives
Next, we bound $\|\Delta\|_2$ by combining the previous results.
Now, by assumption of the Theorem, we have that
Dividing both sides by $\|\Delta\|_2$, we obtain that | We propose a statistical framework and a theoretically consistent procedure for saliency estimation. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:791 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We explore the idea of compositional set embeddings that can be used to infer not just a single class, but the set of classes associated with the input data (e.g., image, video, audio signal).
This can be useful, for example, in multi-object detection in images, or multi-speaker diarization (one-shot learning) in audio.
In particular, we devise and implement two novel models consisting of (1) an embedding function f trained jointly with a "composite" function g that computes set union operations between the classes encoded in two embedding vectors; and (2) embedding f trained jointly with a "query" function h that computes whether the classes encoded in one embedding subsume the classes encoded in another embedding.
In contrast to prior work, these models must both perceive the classes associated with the input examples, and also encode the relationships between different class label sets.
In experiments conducted on simulated data, OmniGlot, and COCO datasets, the proposed composite embedding models outperform baselines based on traditional embedding approaches.
Embeddings, especially as enabled by advances in deep learning, have found widespread use in natural language processing, object recognition, face identification and verification, speaker verification and diarization (i.e., who is speaking when (Sell et al., 2018) ), and other areas.
What embedding functions have in common is that they map their input into a fixed-length distributed representation (i.e., continuous space) that facilitates more efficient and accurate (Scott et al., 2018) downstream analysis than simplistic representations such as one-of-k.
Moreover, they are amenable to one-shot and few-shot learning since the set of classes that can be represented does not depend directly on the dimensionality of the embedding space.
Previous research on embeddings has focused on cases where each example is associated with just one class (e.g., the image contains only one person's face).
In contrast, we investigate the case where each example is associated with not just one, but an entire subset of classes from a universe S. The goal is to embed each example so that questions of two types can be answered (see Figure 1(a)): (1) Is the set of classes in example $x_a$ equal to the union of the classes in examples $x_b$ and $x_c$?
(2) Does the set of classes in example $x_a$ subsume the set of classes in example $x_b$?
Importantly, we focus on settings in which the classes present in the example must be perceived automatically.
We approach this problem using compositional set embeddings.
Like traditional embeddings, we train a function f that maps each example x ∈ R n into an embedding space R m so that examples with the same classes are mapped close together and examples with different classes are mapped far apart.
Unlike traditional embeddings, our function f is trained to represent the set of classes that is associated with each example, so that questions about set union and subsumption can be answered by comparing vectors in the embedding space.
We do not assume that the mechanism by which examples (e.g., images, audio signals) are rendered from multiple classes is known.
Rather, the rendering process must be learned from training data.
We propose two models, whereby f is trained jointly with either a "composition" function g (Model I) that answers questions about set union, or a "query" function h (Model II) that answers questions about subsumption (see Figure 1(a)).
Figure 1: (a) Overview of the paper: embedding function f is trained jointly with either the composition function g or the query function h. In particular, the goal is for g to "compose" the embeddings of two examples, containing classes T and U respectively, to approximate the embedding of an example containing classes T ∪ U. (b) 2-D projection of the embedding space from Experiment 1 on test classes and examples not seen during training (one-shot learning). Function g composes two embeddings (two arrow tails) and maps the result back into the embedding space (arrow head). To a substantial if imperfect degree, the embedding space is compositional as described in (a).
To our knowledge, this computational problem is novel.
We see at least two use-cases: (1) Speaker recognition and diarization (i.e., infer who is talking within an audio signal) with multiple simultaneous speakers: Given an audio signal containing speakers who were not part of the training set and who may be speaking simultaneously, and given one example of each person speaking in isolation (one-shot learning), infer which set of speakers is talking.
(2) Multi-object recognition in images: Given just the embedding of an image $x_a$, answer whether $x_a$ contains the object(s) in another image $x_b$.
Storing just the embeddings but not the pixels could potentially be more space-efficient.
Because of the novelty of the problem, it was not obvious to what baselines we should compare.
When evaluating our models, we sought to assess the unique contribution of the compositional embedding above and beyond what a "traditional" embedding could achieve.
Hence, we created baselines by endowing a traditional embedding with some extra functionality to enable it to infer label sets.
Modeling assumptions and notation: For generality, we refer to the data to be embedded (images, videos, audio signals, etc.) simply as "examples".
Let the universe of classes be S. From any subset T ⊆ S, a ground-truth rendering function $r : 2^S \to \mathbb{R}^n$ "renders" an example, i.e., $r(T) = x$.
Inversely, there is also a ground-truth classification function $c : \mathbb{R}^n \to 2^S$ that identifies the label set from the rendered example, i.e., $c(x) = T$.
Neither r nor c is observed.
We let $e_T$ represent the embedding (i.e., output of f) associated with some example containing classes T.
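As a concrete illustration of how f and g might be trained jointly for the set-union case (Model I), the sketch below penalizes the distance between the composed embedding $g(f(x_T), f(x_U))$ and the embedding $f(x_{T\cup U})$ of an example rendered from the union of the two label sets. The architectures and the simple squared-error loss are our own assumptions for illustration, not the paper's exact objective.

```python
import torch
import torch.nn as nn

class Embedder(nn.Module):
    """f: maps an example (here a flat feature vector) into R^m."""
    def __init__(self, in_dim, m):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, m))
    def forward(self, x):
        return self.net(x)

class Composer(nn.Module):
    """g: combines two embeddings into an embedding of the union of classes."""
    def __init__(self, m):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * m, 128), nn.ReLU(),
                                 nn.Linear(128, m))
    def forward(self, e1, e2):
        return self.net(torch.cat([e1, e2], dim=-1))

def union_loss(f, g, x_t, x_u, x_tu):
    """Encourage g(f(x_T), f(x_U)) to land near f(x_{T union U})."""
    e_t, e_u, e_tu = f(x_t), f(x_u), f(x_tu)
    return ((g(e_t, e_u) - e_tu) ** 2).sum(dim=-1).mean()
```

In practice a contrastive or metric-learning loss over positive and negative pairs could replace the plain squared error.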
Contribution: To our knowledge, this is the first paper to explore how embedding functions can be trained both to perceive multiple objects in the example and to represent the set of detected objects so that set operations can be conducted among embedded vectors.
We instantiate this idea in two ways: Model I for set union (f & g) and Model II for set containment (f & h).
By evaluating on synthetic data, OmniGlot handwritten image data (Lake et al., 2015) , as well as the COCO dataset (Lin et al., 2014) , we provide a proof-of-concept that "compositional set embeddings" can work.
We proposed a new kind of embedding mechanism whereby the set of objects contained in the input data (e.g., image, video, audio) must be both perceived and then mapped into a space such that the set relationships -union (Model I) and subset (Model II) -between multiple embedded vectors can be inferred.
Importantly, the ground-truth rendering process for how examples are rendered from their component classes is not known and must implicitly be learned.
In our experiments, conducted on simulated data, OmniGlot, and COCO, the accuracy was far from perfect but outperformed several baselines, including one based on a traditional embedding approach.
The results provide a proof-of-concept of how an embedding function f , trained jointly with either the composition function g or the query function h, could be effectively optimized.
One possible direction for further research to increase accuracy is to take better advantage of the statistical structure of class co-occurrence in a specific application domain (e.g., which objects tend to co-occur in the same image).
A ALTERNATIVE TRAINING PROCEDURE
We also tried another method of training f and g with the explicit goal of encouraging g to map $e_T$ and $e_U$ to be close to $e_{T \cup U}$.
This can be done by training f and g alternately, or by training them jointly in the same backpropagation.
However, this approach yielded very poor results.
A possible explanation is that g could fulfill its goal by mapping all vectors to the same location (e.g., 0).
Hence, a trade-off arises between g's goal and f 's goal (separating examples with distinct label sets). | We explored how a novel method of compositional set embeddings can both perceive and represent not just a single class but an entire set of classes that is associated with the input data. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:792 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
In this work, we propose a goal-driven collaborative task that contains language, vision, and action in a virtual environment as its core components.
Specifically, we develop a Collaborative image-Drawing game between two agents, called CoDraw.
Our game is grounded in a virtual world that contains movable clip art objects.
The game involves two players: a Teller and a Drawer.
The Teller sees an abstract scene containing multiple clip art pieces in a semantically meaningful configuration, while the Drawer tries to reconstruct the scene on an empty canvas using available clip art pieces.
The two players communicate via two-way communication using natural language.
We collect the CoDraw dataset of ~10K dialogs consisting of ~138K messages exchanged between human agents.
We define protocols and metrics to evaluate the effectiveness of learned agents on this testbed, highlighting the need for a novel "crosstalk" condition which pairs agents trained independently on disjoint subsets of the training data for evaluation.
We present models for our task, including simple but effective baselines and neural network approaches trained using a combination of imitation learning and goal-driven training.
All models are benchmarked using both fully automated evaluation and by playing the game with live human agents.
Building agents that can interact with humans in natural language while perceiving and taking actions in their environments is one of the fundamental goals in artificial intelligence.
One of the required components, language understanding, has traditionally been studied in isolation and with tasks aimed at imitating human behavior (e.g. language modeling BID4 ; BID35 , machine translation BID2 ; BID42 , etc.) by learning from large text-only corpora.
To incorporate both vision and action, it is important to have the language grounded BID19 BID3 , where words like cat are connected to visual percepts and words like move relate to actions taken in an environment.
Additionally, judging language understanding purely based on the ability to mimic human utterances has limitations: there are many ways to express roughly the same meaning, and conveying the correct information is often more important than the particular choice of words.
An alternative approach, which has recently gained increased prominence, is to train and evaluate language generation capabilities in an interactive setting, where the focus is on successfully communicating information that an agent must share in order to achieve its goals.
In this paper, we propose the Collaborative Drawing (CoDraw) task, which combines grounded language understanding and learning effective goal-driven communication into a single, unified testbed.
This task involves perception, communication, and actions in a partially observable virtual environment.
As shown in FIG0 , our game is grounded in a virtual world constructed by clip art objects .
Two players, Teller and Drawer, play the game.
The Teller sees an abstract scene made from clip art objects in a semantically meaningful configuration, while the Drawer sees a drawing canvas that is initially empty.
The goal of the game is to have both players communicate so that the Drawer can reconstruct the image of the Teller, without ever seeing it. Our task requires effective communication because the two players cannot see each other's scenes.
The Teller has to describe the scene in sufficient detail for the Drawer to reconstruct it, which will require rich grounded language.
Moreover, the Drawer will need to carry out a series of actions from a rich action space to position, orient, and resize all of the clip art pieces required for the reconstruction.
Note that such actions are only made possible through clip art pieces: they can represent semantically meaningful configurations of a visual scene that are easy to manipulate, in contrast to low-level pixel-based image representations.
The performance of a pair of agents is judged based on the quality of reconstructed scenes.
We consider high-quality reconstructions as a signal that communication has been successful. As we developed models and protocols for CoDraw, we found it critical to train the Teller and the Drawer separately on disjoint subsets of the training data.
Otherwise, the two machine agents may conspire to successfully achieve the goal while communicating using a shared "codebook" that bears little resemblance to natural language.
We call this separate-training, joint-evaluation protocol crosstalk, which prevents learning of mutually agreed upon codebooks, while still checking for goal completion at test time.
We highlight crosstalk as one of our contributions, and believe it can be generally applicable to other related tasks BID41 BID14 BID11 BID9 BID27 .
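A minimal sketch of the crosstalk protocol just described, with placeholder training and evaluation routines standing in for the actual Teller/Drawer models: the dialog data is split into disjoint halves, each agent is trained on its own half, and the two independently trained agents are only paired at evaluation time. All function names here are stubs for illustration, not code from the paper.

```python
import random

def crosstalk_evaluate(dialogs, eval_scenes, train_teller, train_drawer, play_game, seed=0):
    """Train Teller and Drawer on disjoint splits, then pair them only at test time."""
    rng = random.Random(seed)
    dialogs = list(dialogs)
    rng.shuffle(dialogs)
    half = len(dialogs) // 2
    teller = train_teller(dialogs[:half])   # Teller never sees the Drawer's split
    drawer = train_drawer(dialogs[half:])   # Drawer never sees the Teller's split
    # Joint evaluation: scene-reconstruction quality of the paired agents.
    scores = [play_game(teller, drawer, scene) for scene in eval_scenes]
    return sum(scores) / max(len(scores), 1)
```

Because the agents never share training data, any success at test time must come from a communication strategy that generalizes, rather than from a co-adapted codebook.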
In this paper, we introduce CoDraw: a collaborative task designed to facilitate learning of effective natural language communication in a grounded context.
The task combines language, perception, and actions while permitting automated goal-driven evaluation both at the end and as a measure of intermediate progress.
We introduce a dataset and models for this task, and propose a crosstalk training + evaluation protocol that is more generally applicable to studying emergent communication.
The models we present in this paper show levels of task performance that are still far from what humans can achieve.
Long-term planning and contextual reasoning are two key challenges for this task that our models only begin to address.
We hope that the grounded, goal-driven communication setting that CoDraw is a testbed for can lead to future progress in building agents that can speak more naturally and better maintain coherency over a long dialog, while being grounded in perception and actions. | We introduce a dataset, models, and training + evaluation protocols for a collaborative drawing task that allows studying goal-driven and perceptually + actionably grounded language generation and understanding. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:793 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Presence of bias and confounding effects is inarguably one of the most critical challenges in machine learning applications that has alluded to pivotal debates in the recent years.
Such challenges range from spurious associations of confounding variables in medical studies to the bias of race in gender or face recognition systems.
One solution is to enhance datasets and organize them such that they do not reflect biases, which is a cumbersome and intensive task.
The alternative is to make use of available data and build models considering these biases.
Traditional statistical methods apply straightforward techniques such as residualization or stratification to precomputed features to account for confounding variables.
However, these techniques are not in general applicable to end-to-end deep learning methods.
In this paper, we propose a method based on the adversarial training strategy to learn discriminative features unbiased and invariant to the confounder(s).
This is enabled by incorporating a new adversarial loss function that encourages a vanished correlation between the bias and learned features.
We apply our method to a synthetic, a medical diagnosis, and a gender classification (Gender Shades) dataset.
Our results show that the learned features by our method not only result in superior prediction performance but also are uncorrelated with the bias or confounder variables.
The code is available at http://blinded_for_review/.
A central challenge in practically all machine learning applications is the consideration of confounding biases.
Confounders are extraneous variables that distort the relationship between the input (independent) and output (dependent) variables and hence lead to erroneous conclusions (Pourhoseingholi et al., 2012) .
In a variety of applications ranging from disease prediction to face recognition, where machine learning models are built to predict labels from images, demographic variables (such as age, sex, race) of the study may confound the training process if the distribution of image labels is skewed with respect to them.
In this situation, the predictor may learn the influence of the confounder and bias present in the data instead of actual discriminative cues.
It is a cumbersome task to account for all biases when curating large-scale datasets (Yang et al., 2019) .
An alternative approach is to account for the bias in the model.
Traditionally, confounding variables are often controlled by statistical methods in either design or analytical stages (Aschengrau & Seage, 2013) .
In the design stage, one can utilize randomization or matching of the confounding variables across different study groups.
In the analytical stage, confounding can be controlled by standardization or stratification (Pourhoseingholi et al., 2012; Aschengrau & Seage, 2013) .
Another common solution is to learn the influence of the confounding variables on the input (independent) variables by regression analysis.
Then, the residuals derived from the optimal regression model are regarded as the confounder-free input to train the predictor (Wodtke, 2018) .
The regression analysis works reasonably well under the assumption that the input variables represent deterministic features that are comparable across a population, e.g., morphometric measurements extracted from medical images or engineered features extracted from face images.
The method fails, however, when this assumption does not hold such as for the pixel intensity values in images.
Note, the raw intensities are only meaningful within a neighborhood but variant across images.
Therefore, these regression approaches cannot be used in connection with deep learning methods that are directly applied to images, such as convolutional neural networks (CNNs).
Figure 1: Average face images across each shade category (first row), average saliency map of the trained baseline (second row), and BR-Net (third row), color-coded with the normalized saliency value for each pixel. BR-Net results in more stable patterns across all 6 shade categories. The last column shows the tSNE projection of the learned features by each method. Our method results in a better feature space invariant to the bias variable (shade) while the baseline shows a clear pattern affected by the bias. Average accuracy of per-shade gender classification over 5 runs of 5-fold cross-validation is shown on each average map. The models are pre-trained on ImageNet and fine-tuned on GS-PPB. BR-Net is not only able to close the gap of accuracy for the darker shades but it also regularizes the model to improve per-category accuracy.
Removing confounding factors for CNNs is an open question we aim to address here.
We propose a feature learning scheme to produce features that are predictive of class labels while being unbiased to confounding variables.
The idea is inspired by the domain-adversarial training approaches (Ganin et al., 2016) with controllable invariance (Xie et al., 2017) within the context of generative adversarial networks (GANs) (Goodfellow et al., 2014 ), but we argue that generic and widely used loss functions are not designed for controlling the invariance with respect to bias variables.
Hence, we introduce an adversarial loss function that aims to quantify the statistical dependence between the learned features and bias variables with the correlation coefficient.
This strategy improves over the commonly used cross-entropy or mean-squared error (MSE) loss that only aims to predict the exact value of the bias variables and thereby achieves stabler results within the context of adversarial training.
Since our proposed model injects resilience towards the bias during training to produce confounder-invariant features, we refer to our approach as Bias-Resilient Neural Network (BR-Net).
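One way to read the correlation-based adversarial idea above is sketched below: an auxiliary head tries to predict the bias variable from the learned features, and the feature extractor is additionally penalized by the squared Pearson correlation between that prediction and the true bias, pushing the correlation toward zero. The particular loss combination and all names are our own illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def squared_pearson_corr(u, v, eps=1e-8):
    """Squared Pearson correlation between two 1-D tensors."""
    u = u - u.mean()
    v = v - v.mean()
    return (u * v).sum().pow(2) / (u.pow(2).sum() * v.pow(2).sum() + eps)

def feature_extractor_objective(task_loss, bias_pred, bias_true, lam=1.0):
    """Task loss plus a penalty that vanishes only when the bias predictor's
    output is uncorrelated with the true bias (e.g., age or skin shade)."""
    return task_loss + lam * squared_pearson_corr(bias_pred.squeeze(), bias_true.float())
```

The bias-prediction head itself would be trained in alternation to predict the bias as well as possible, giving the adversarial game.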
We evaluate BR-Net on three datasets to examine different aspects of the method and compare it with a wide range of baselines.
First, we test on a synthetic dataset to outline how the learned features by our method are unbiased to controlled confounding variables.
Then, we test it on a medical imaging application, i.e., predicting the human immunodeficiency virus (HIV) diagnosis directly from T1-weighted Magnetic Resonance Images (MRIs).
As widely explored in the HIV literature, HIV disease accentuates brain aging (Cole et al., 2017) and if a predictor is learned not considering age as a confounder, the predictor may actually be learning the brain aging patterns rather than actual HIV markers.
Lastly, we evaluate BR-Net for gender classification using the Gender Shades Pilot Parliaments Benchmark (GS-PPB) dataset (Buolamwini & Gebru, 2018) .
We use different backbones pre-trained on ImageNet (Deng et al., 2009 ) and fine-tune them for predicting gender from face images.
We show that prediction of the vanilla models is dependent on the race of the subject (alternatively we consider skin color quantified by the 'shade' variable) and show poor results for darker faces, while BR-Net can successfully close the gap.
Our comparison with methods based on multi-task prediction (Lu et al., 2017) (i.e., predicting gender and shade as two tasks) and categorical GAN (Springenberg, 2015) (i.e., predicting shade as a categorical variable in the adversarial component) shows that BR-Net is not only able to learn features impartial to the bias of race (verified by feature embedding and saliency visualization), it also results in better performance in gender prediction (see Fig. 1).
We proposed a method based on adversarial training strategies by encouraging vanished correlation to learn features for the prediction task while being unbiased to the confounding variables in the study.
We evaluated our bias-resilient neural network (BR-Net) on a synthetic, a medical diagnosis, and a gender prediction dataset.
In all experiments, BR-Net resulted in a feature embedding space that was agnostic to the bias in the data while all other methods failed to do so.
Based on our experiments we can conclude that, besides the attempt to improve datasets and curate unbiased ones (Yang et al., 2019) , it is crucial to build models that properly account for the bias in data during training.
Our bias-resilient model and some other recent works take first steps in this direction.
This is crucial as machine learning models are entering everyday life and are being developed for critical medical applications.
Failure to account for the underlying bias or confounding effects can lead to spurious associations and erroneous decisions.
As a direction for the future work, other strategies such as deep canonical correlation analysis (Andrew et al., 2013) can be explored to form the adversarial component. | We propose a method based on the adversarial training strategy to learn discriminative features unbiased and invariant to the confounder(s) by incorporating a loss function that encourages a vanished correlation between the bias and learned features. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:794 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Existing neural question answering (QA) models are required to reason over and draw complicated inferences from a long context for most large-scale QA datasets.
However, if we view QA as a combined retrieval and reasoning task, we can assume the existence of a minimal context which is necessary and sufficient to answer a given question.
Recent work has shown that a sentence selector module that selects a shorter context and feeds it to the downstream QA model achieves performance comparable to a QA model trained on full context, while also being more interpretable.
Recent work has also shown that most state-of-the-art QA models break when adversarially generated sentences are appended to the context.
While humans are immune to such distractor sentences, QA models get easily misled into selecting answers from these sentences.
We hypothesize that the sentence selector module can filter out extraneous context, thereby allowing the downstream QA model to focus and reason over the parts of the context that are relevant to the question.
In this paper, we show that the sentence selector itself is susceptible to adversarial inputs.
However, we demonstrate that a pipeline consisting of a sentence selector module followed by the QA model can be made more robust to adversarial attacks in comparison to a QA model trained on full context.
Thus, we provide evidence towards a modular approach for question answering that is more robust and interpretable. | A modular approach consisting of a sentence selector module followed by the QA model can be made more robust to adversarial attacks in comparison to a QA model trained on full context. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:795 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Multi-relational graph embedding, which aims at achieving effective representations with reduced low-dimensional parameters, has been widely used in knowledge base completion.
Although knowledge base data usually contains tree-like or cyclic structure, none of the existing approaches can embed these data into a compatible space that is in line with the structure.
To overcome this problem, a novel framework, called Riemannian TransE, is proposed in this paper to embed the entities in a Riemannian manifold.
Riemannian TransE models each relation as a move to a point and defines a specific novel distance dissimilarity for each relation, so that all the relations are naturally embedded in correspondence to the structure of data.
Experiments on several knowledge base completion tasks have shown that, based on an appropriate choice of manifold, Riemannian TransE achieves good performance even with significantly reduced parameters.
1.1 BACKGROUND
Multi-relational graphs, such as social networks and knowledge bases, have a variety of applications, and embedding methods for these graphs are particularly important for these applications.
For instance, multi-relational graph embedding has been applied to social network analysis (KrohnGrimberghe et al., 2012) and knowledge base completion BID2 .
A multi-relational graph consists of entities V, a set R of relation types, and a collection of real data triples, where each triple (h, r, t) ∈ V × R × V represents some relation r ∈ R between a head entity h ∈ V and a tail entity t ∈ V. Embedding a multi-relational graph refers to a map from the entity and the relation set to some space.
Mathematical operations in this space enable many tasks, including clustering of entities and completion, prediction, or denoising of triples.
Indeed, completion tasks for knowledge bases attract considerable attention, because knowledge bases are known to be far from complete, as discussed in (West et al., 2014) BID13 .
Multi-relational graph embedding can help its completion and improve the performance of applications that use the graph.
This is the reason why much work focuses on multi-relational graph embedding.
FIG0 shows an example of a multi-relational graph and a completion task. In multi-relational graph embedding, reducing the number of parameters is an important problem in the era of big data.
Many parameters are needed with tensor-factorization-based methods, such as Bayesian clustered tensor factorization (BCTF) (Sutskever et al., 2009), RESCAL (Nickel et al., 2011), and a neural tensor network (NTN) (Socher et al., 2013), where each relation has a dense matrix or tensors ($O(D^2)$ or more parameters, where $D$ is the dimensionality of the space).
Thus, TransE BID2 was proposed to reduce the number of parameters, to overcome this problem.
In TransE, each entity is mapped to a point in Euclidean space and each relation is no more than a vector addition ($O(D)$ parameters), rather than a matrix operation.
The successors to TransE, TransH (Wang et al., 2014) and TransD BID11 , also use only a small number of parameters.
Some methods succeeded in reducing parameters using diagonal matrices instead of dense matrices: e.g. DISTMULT (Yang et al., 2015) , ComplEx (Trouillon et al., 2016) , HolE (through the Fourier transform) (Nickel et al., 2016) , and ANALOGY BID15 .
In these methods, all relations share one space for embedding, but each relation uses its own dissimilarity criterion.
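For concreteness, a minimal sketch of the TransE-style dissimilarity that the above discussion builds on: each entity is a point, each relation a translation vector, and the score of a triple is the distance between the translated head and the tail. This is a standard illustration with toy data, not code from the paper.

```python
import numpy as np

def transe_dissimilarity(entity_emb, relation_emb, h, r, t):
    """TransE dissimilarity: translate the head embedding by the relation
    vector and measure the distance to the tail embedding."""
    return np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

# Toy usage: 5 entities and 2 relation types embedded in D = 8 dimensions,
# i.e., O(D) parameters per entity and per relation.
rng = np.random.default_rng(0)
E = rng.normal(size=(5, 8))
R = rng.normal(size=(2, 8))
score = transe_dissimilarity(E, R, h=0, r=1, t=3)  # lower = more plausible triple
```

Riemannian TransE replaces the Euclidean translation and distance here with a relation-specific move and distance defined on a Riemannian manifold.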
The success of these methods implies that one common space underlies the whole data, and each relation can be regarded as a dissimilarity criterion in that space. Whereas these methods use distances or inner products in Euclidean space as dissimilarity criteria, recent work has shown that using non-Euclidean space can further reduce the number of parameters.
One typical example of this is Poincaré Embedding (Nickel & Kiela, 2017) for hierarchical data, where a hyperbolic space is used as a space for embedding.
Here, the tree structure of hierarchical data has good compatibility with the exponential growth of hyperbolic space.
Recall that the circumference of a circle with radius R in a hyperbolic plane is 2π sinh R (≈ π exp R for large R).
As a result, Poincaré embedding achieved good graph completion accuracy, even in low dimensionality such as 5 or 10.
On the other hand, spheres (circumference: 2π sin R) are compatible with cyclic structures.
Since Poincaré embedding, several methods have been proposed for single-relational graph embedding in non-Euclidean space (e.g. BID8 , (Nickel & Kiela, 2018) ) and shown good results.
The success of these methods suggests that the appropriate choice of a manifold (i.e., space) can retain low dimensionality, although these methods are limited to single-relational graph embedding. Given the success of TransE and its derivatives and of Poincaré embedding, it is reasonable in multi-relational graph embedding to assume the existence of a single structure compatible with a non-Euclidean manifold.
For example, we can consider a single tree-like structure, where different choices of root give multiple hierarchical structures from the same tree, which is compatible with hyperbolic spaces (see Figure 2).
Therefore, embedding in a single shared non-Euclidean manifold with multiple dissimilarity criteria, as used in TransE, is promising.
Taking Poincaré embedding's success with low dimensionality into consideration, this approach should work well (e.g., in graph completion tasks) with a small number of parameters.
This is the main idea of this paper.
There are five entities and two kinds of relation (hypernym and synonym).
Graph completion refers to answering questions such as "is mammal a hypernym of canis?" Figure 2: Multiple hierarchical relations in a single tree.
As this example shows, it is possible that multiple relations are given by multiple dissimilarity criteria in a single structure.
We proposed Riemannian TransE, a novel framework for multi-relational graph embedding, by extending TransE to a Riemannian TransE.
Numerical experiments showed that Riemannian TransE outperforms baseline methods in low dimensionality, although its performance depends significantly on the choice of manifold.
Hence, future research shall clarify which manifolds work well with particular kinds of data, and develop a methodology for choosing the appropriate manifold.
This is important work not only for graph completion tasks but also for furthering our understanding of the global characteristics of a graph.
In other words, observing which manifold is effective can help us to understand the global "behavior" of a graph.
Other important work involves using "subspaces" in non-Euclidean space.
Although the notion of a subspace in a non-Euclidean manifold is nontrivial, it may be that our method offers advantages over TransH and TransD, which exploit linear subspaces. | Multi-relational graph embedding with Riemannian manifolds and TransE-like loss function. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:796 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Catastrophic forgetting in neural networks is one of the most well-known problems in continual learning.
Previous attempts on addressing the problem focus on preventing important weights from changing.
Such methods often require task boundaries to learn effectively and do not support backward transfer learning.
In this paper, we propose a meta-learning algorithm which learns to reconstruct the gradients of old tasks w.r.t. the current parameters and combines these reconstructed gradients with the current gradient to enable continual learning and backward transfer learning from the current task to previous tasks.
Experiments on standard continual learning benchmarks show that our algorithm can effectively prevent catastrophic forgetting and supports backward transfer learning.
The ability to learn continually without forgetting previously learned skills is crucial to artificial general intelligence (AGI) BID3 .
Addressing catastrophic forgetting in artificial neural networks (ANNs) has been the top priority of continual learning research.
Notable attempts on solving the problem include Elastic Weight Consolidation (EWC) by BID2 and the follow up work on Synaptic Intelligence (SI) by BID6 , and Memory Aware Synapse (MAS) by BID0 .
These algorithms share the same core idea: preventing important parameters from deviating from their old (presumably better) values.
In order to achieve that, EWC-like algorithms compute the importance of each parameter w.r.t. each task in the sequence and for each old task, a regularization term is added to the loss of the new task to prevent that task from being catastrophically forgotten.
The regular-
ization term for task T (i) in EWC-like algorithms takes the following form: DISPLAYFORM0 where λ (i) controls the relative importance of task i to the current task, θ is the current parameters, θ (i) * is the parameters found at the end of the training of T (i) , and ω DISPLAYFORM1 j is the importance of parameter
θ 1. The regularizer in Eqn.
1 prevent changes to important parameters regardless of the effect of these changes.
Unless θ DISPLAYFORM2 is the optimal value for the j-th parameter, either increasing or decreasing its value will result in better performance on task i.
Keeping θ close to θ (i) * only prevent the network from catastrophically forgetting T (i) but cannot help the network to leverage the information from the current task T (k) , k > i to improve its performance on T (i) and other previous tasks.
In other words, regularizers of the form in Eqn.
1 do not support backward transfer learning.2.
The number of old parameter and importance vectors, θ * and ω, grows linearly with the number of tasks, making EWC-like algorithms not scalable to a large number of tasks.
BID5 proposed the online EWC algorithm which maintains only one copy of θ * and ω.
The sizes of θ * and ω are equal to that of the network.
Therefore, the memory requirement of online EWC is still considerably large for large networks.To address these limitations of EWC-like algorithms, we propose a meta learning algorithm which:1.
Learns to approximate the gradient of a task w.r.t. the current parameters from the current parameters
2. Combines the approximated gradients of old tasks w.r.t. the current parameters and the current task's gradient to result in an update that improves the performance of the network on all tasks.By combining the gradients, our algorithm exploits the similarity between the current task and previous tasks to enable backward transfer learning.
As described in section 2.2 and 5.2, the size of a meta-network is typically orders of magnitude smaller than that of the main network and metanetworks for different tasks can be distilled into a single meta-network in an online manner.
That significantly reduces the memory requirement of our method.In the next section, we introduce our learning to learn algorithm for continual learning.
Experiments are presented in section
3. Conclusions and future work are located in section 4 and 5, respectively.
In this paper, we present a meta learning algorithm for continual learning.
Experiments on Permuted MNIST dataset show that our algorithm is effective in preventing catastrophic forgetting and is capable of supporting backward transfer learning. | We propose a meta learning algorithm for continual learning which can effectively prevent catastrophic forgetting problem and support backward transfer learning. | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:797 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
We give a formal procedure for computing preimages of convolutional network outputs using the dual basis defined from the set of hyperplanes associated with the layers of the network.
We point out the special symmetry associated with arrangements of hyperplanes of convolutional networks that take the form of regular multidimensional polyhedral cones.
We discuss the efficiency of a large number of layers of nested cones that result from incremental small-size convolutions, in order to give a good compromise between efficient contraction of data to low dimensions and shaping of preimage manifolds.
We demonstrate how a specific network flattens a non-linear input manifold to an affine output manifold and discuss its relevance to understanding classification properties of deep networks.
Deep convolutional networks for classification map input data domains to output domains that ideally correspond to various classes.
The ability of deep networks to construct various mappings has been the subject of several studies over the years (1; 3; 10) and in general resulted in various estimates of capacity given a network structure.
The actual mappings that are learnt by training a specific network however, often raise a set of questions such as why are increasingly deeper networks advantageous (13; 14) ?
What are the mechanisms responsible for the successful generalisation properties of deep networks ?
Also the basic question why deep learning over large datasets is so much more effective than earlier machine learning approaches is still essentially open, BID6 .
These questions are not in general answered by studies of capacity.
A more direct approach based on actual trained networks and the mappings they are efficiently able to produce seems needed in order to answer these questions.
It seems ever more likely, e.g., that the ability of deep networks to generalize is connected with some sort of restriction of the mappings that they can theoretically produce, and that these mappings are ideally adapted to the problem for which deep learning has proven successful. Due to the complexity of deep networks, the actual computation of how input domains are mapped to output classifiers has been considered prohibitively difficult.
From general considerations of networks with rectifier (ReLU) non linearities we know that these functions must be piecewise linear BID9 , but the relation between network parameters such as convolutional filter weights and fully connected layer parameters and the actual functions remains largely obscure.
In general, work has therefore been concentrated on empirical studies of actual trained networks (6; 8; 9) Recently however there have been attempts to understand the relation between networks and their mapping properties from a more general and theoretical point of view.
This has included specific procedures for generating preimages of network outputs BID3 and more systematic studies of the nature of piecewise linear functions and mappings involved in deep networks, (2; 11; 15) .In
this work we will make the assertion that understanding the geometry of deep networks and the manifolds of data they process is an effective way to understand the comparative success of deep networks. We
will consider convolutional networks with ReLU non linearities. These
can be completely characterised by the corresponding hyperplanes associated with individual convolutional kernels . We will
demonstrate that the individual arrangement of hyperplanes inside a layer and the relative arrangement between layers is crucial to the understanding the success of various deep network structures and how they map data from input domains to output classifiers.We will consider only the convolutional part of a deep network with a single channel. We will
assume no subsampling or max pooling. This will
allow us to get a clear understanding of the role of the convolutional part. A more complete
analysis involving multiple channels and fully connected layers is possible but more complex and will be left to future work.The focus of our study is to analyse how domains of input data are mapped through a deep network. A complete understanding
of this mapping and its inverse or preimage will give a detailed description of the workings of the network. Since we are not considering
the final fully connected layers we will demonstrate how to compute in detail the structure of input data manifold that can be mapped to a specified reduced dimensionality affine manifold in the activity space of the final convolutional output layer. This flattening of input data
is often considered as a necessary preprocessing step for efficient classification.The understanding of mappings between layers will be based on the specific understanding of how to compute preimages for networks activities. We will recapitulate and extend
the work in (4) based on the construction of a dual basis from an arrangement of hyperplanes. By specialising to convolutional
networks we will demonstrate that the arrangement of hyperplanes associated with a specific layer can be effectively described by a regular multidimensional polyhedral cone oriented in the identity direction in the input space of the layer. Cones associated with successive
layers are then in general partly nested inside their predecessor. This leads to efficient contraction
and shaping of the input domain data manifold. In general however contraction and
shaping are in conflict in the sense that efficient contraction implies less efficient shaping. We will argue that this' conflict
is resolved by extending the number of layers of the network with small incremental updates of filters at each layer.The main contribution of the paper is the exploitation of the properties of nested cones in order to explain how non linear manifolds can be shaped and contracted in order to comply with the distribution of actual class manifolds and to enable efficient preprocessing for the final classifier stages of the network. We will specifically demonstrate
the capability of the convolutional part of the network to flatten non linear input manifolds which has previously been suggested as an important preprocessing step in object recognition, (5; 12)
We have defined a formal procedure for computing preimages of deep linear transformation networks with ReLU non linearities using the dual basis extracted from the set of hyperplanes representing the transformation.
Specialising to convolutional networks we demonstrate that the complexity and the symmetry of the arrangement of corresponding hyperplanes is substantially reduced and we show that these arrangements can be modelled closely with multidimensisional regular polyhedral cones around the identity line in input space.
We point out the crucial property of nested cones which guarantees efficient contraction of data to lower dimensions and argue that this property could be relevant in the design of real networks.
By increasing the number of layers to shape input manifolds in the form of preimages we can retain the nested cone property that most efficiently exploits network data in order to construct input manifolds that comply with manifolds corresponding to real classes and would explain the success of ever deeper networks for deep learning.
The retaining of the nested cone property can be expressed as a limitation of the degrees of freedom of multidimensional rotation of the cones.
Since convolutional networks essentially always have limited spatial support convolutions, this is to a high degree built in to existing systems.
The desire to retain the property of nesting could however act as an extra constraint to further reduce the complexity of the convolutions.
This of course means that the degrees of freedom are reduced for a network, which could act as a regularization constraint and potentially explain the puzzling efficiency of generalisation of deep networks in spite of a high number of parameters. We demonstrate that it is in principle possible to compute non linear input manifolds that map to affine output manifolds.
This demonstrates the possibility of deep convolutional networks to achieve flattening of input data which is generally considered as an important preprocessing step for classification.
Since we do not consider a complete network with fully connected layers at the end we cannot give details how classification is achieved.
The explicit demonstration of non linear manifolds that map to affine outputs however indicates a possible basic structure of input manifolds for classes.
It is easy to see that a parallel translation of the affine output manifold would result in two linearly separable manifolds that would be generated by essentially parallel translated non linear manifolds in the input space.
This demonstrates that convolutional networks can be designed to exactly separate sufficiently "covariant" classes, and that this could be the reason for the relative success of convolutional networks over previous machine learning approaches to classification, and explain why using a large number of classes for training is advantageous, since they all contribute to very similar individual manifolds.
Disregarding these speculations, the fact remains that these manifolds will always exist since they are derived on purely formal grounds from the structure of the network.
If they have no role in classification their presence will have to be explained in other ways. | Analysis of deep convolutional networks in terms of associated arrangement of hyperplanes | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:798 |
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text.
Paper text:
Meta learning has been making impressive progress for fast model adaptation.
However, limited work has been done on learning fast uncertainty adaption for Bayesian modeling.
In this paper, we propose to achieve the goal by placing meta learning on the space of probability measures, inducing the concept of meta sampling for fast uncertainty adaption.
Specifically, we propose a Bayesian meta sampling framework consisting of two main components: a meta sampler and a sample adapter.
The meta sampler is constructed by adopting a neural-inverse-autoregressive-flow (NIAF) structure, a variant of the recently proposed neural autoregressive flows, to efficiently generate meta samples to be adapted.
The sample adapter moves meta samples to task-specific samples, based on a newly proposed and general Bayesian sampling technique, called optimal-transport Bayesian sampling.
The combination of the two components allows a simple learning procedure for the meta sampler to be developed, which can be efficiently optimized via standard back-propagation.
Extensive experimental results demonstrate the efficiency and effectiveness of the proposed framework, obtaining better sample quality and faster uncertainty adaption compared to related methods.
Meta learning (Schmidhuber, 1987; Andrychowicz et al., 2016) is an important topic in modern machine learning.
The goal is to learn some abstract concepts from different but related tasks, which can then be adapted and generalized to new tasks and environments that have never been encountered during training.
There has been a great deal of research on this topic.
A recent review classifies the methods as metric-based, model-based and optimization-based methods (Weng, 2018) .
Among these methods, learning-to-learn seeks to learn a meta optimizer that can be applied to different models, with some task-specific information such as current gradients as input (Andrychowicz et al., 2016) .
Model agnostic meta learning (MAML) aims to learn a meta parameter/model from a set of training tasks such that it can quickly adapt to models for new tasks (Finn et al., 2017) .
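To make the adaptation step concrete, the following is a minimal first-order sketch of a MAML-style inner/outer loop; it is an illustration rather than the implementation used in the works cited here, and grad_loss, the task splits, and the learning rates are placeholder assumptions.

```python
import numpy as np

def maml_outer_step(theta, tasks, grad_loss, inner_lr=0.01, outer_lr=0.001):
    """One first-order MAML update (a simplified sketch).

    theta: flat parameter vector of the meta model.
    tasks: list of (support, query) data splits, one pair per task.
    grad_loss(theta, data): returns the gradient of the task loss at theta.
    """
    meta_grad = np.zeros_like(theta)
    for support, query in tasks:
        # Inner adaptation: one SGD step on the task's support set.
        theta_task = theta - inner_lr * grad_loss(theta, support)
        # First-order approximation: query gradient evaluated at the adapted parameters.
        meta_grad += grad_loss(theta_task, query)
    # Outer update of the meta parameters, averaged over tasks.
    return theta - outer_lr * meta_grad / len(tasks)
```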
Many follow-up works have been proposed recently, including but not limited to the meta network (Munkhdalai & Yu, 2017), the meta learner (Ravi & Larochelle, 2017), the Reptile model (Nichol et al., 2018), and later extensions to an online setting (Finn et al., 2019), to modeling hierarchical relations (Yao et al., 2019) and sequential strategies (Ortega et al., 2019), to a stabilized version (Antoniou et al., 2019), and to some theoretical analysis (Khodak et al., 2019).
It is worth noting that all the aforementioned models are designed from an optimization perspective.
Bayesian modeling, in parallel with optimization, has also been gaining increasing attention and found various applications in deep learning.
Recent research has extended the above meta-learning methods to a Bayesian setting.
For example, Bayesian MAML (BMAML) replaces the stochasticgradient-descent (SGD) step with Stein variational gradient descent (SVGD) for posterior sampling (Yoon et al., 2018) .
Probabilistic MAML (PMAML) extends standard MAML by incorporating a parameter distribution of the adapted model trained via a variational lower bound (Finn et al., 2018) .
Amortized Bayesian Meta Learning extends the idea of MAML to amortized variational inference (Ravi & Beatson, 2019; Choi et al., 2019) .
VERSA (Gordon et al., 2019) uses an amortization network to approximate the posterior predictive distributions.
Meta particle flow realizes Bayes's rule based on an ODE neural operator that can be trained in a meta-learning framework.
Though methodologically elegant with many interesting applications, the above methods lack the ability to propagate or adapt uncertainty, in the sense that uncertainty is either not considered (e.g., in MAML) or only considered at the specific task level (e.g., BMAML).
This could slow down model adaption or even lead to inaccurate uncertainty modeling when considered from a Bayesian modeling perspective.
For example, suppose one is given samples from a set of Gaussians with different mean and covariance matrices, how can she/he efficiently leverage uncertainty in these samples to generate samples from a complex yet related distribution such as a Gaussian mixture?
To tackle this problem, we propose to perform meta learning on the space of probability measures, i.e., instead of adapting parameters to a new task, one adapts a meta distribution to new tasks.
When implementing distribution adaption in algorithms where distributions are approximated by samples, our distribution-adaptation framework becomes sample-to-sample adaption.
In other words, the meta parameter in standard MAML becomes meta samples in our method, where uncertainty can be well encoded.
For this reason, we call our framework Bayesian meta sampling.
Specifically, we propose a mathematically elegant framework for Bayesian meta sampling based on the theory of Wasserstein gradient flows (WGF) (Ambrosio et al., 2005) .
Our goal is to learn a meta sampler whose samples can be quickly adapted to new tasks.
Our framework contains two main components: a meta sampler and a sample adapter.
For the meta sampler, we adopt a state-of-the-art flow-based method to learn to transport noise samples to meta samples.
Our meta sampler is parameterized by a neural inverse-autoregressive flow (NIAF), an extension of the recently developed neural autoregressive flows (NAFs) (Huang et al., 2018) .
The NIAF consists of a meta-sample generator and an autoregressive conditioner model, which outputs the parameters of the meta-sample generator.
The NIAF takes some task-specific information (such as gradients of target distributions) and random noise as input and outputs meta samples from its generator.
These meta samples are then quickly adapted to task-specific samples of target distributions by feeding them to the sample adapter.
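As a rough illustration of this data flow, the snippet below sketches a single affine inverse-autoregressive layer driven by a conditioner that receives task context; the actual NIAF uses neural monotonic (NAF-style) transforms and a learned conditioner network, so both the affine form and the conditioner callable here are simplifying assumptions.

```python
import numpy as np

def affine_iaf_layer(z, conditioner, context):
    """One affine inverse-autoregressive transform (simplified stand-in for NIAF).

    z: (n, d) noise samples.
    conditioner(z, context): returns shift mu and log-scale s, each of shape (n, d),
        where (mu[:, i], s[:, i]) may depend only on z[:, :i] and the task context.
    """
    mu, s = conditioner(z, context)
    x = z * np.exp(s) + mu          # elementwise, so sampling is a single parallel pass
    log_det = s.sum(axis=1)         # log |det dx/dz| of the autoregressive affine map
    return x, log_det
```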
To ensure efficient and accurate adaptations to new task distributions, a novel optimal-transport Bayesian sampling (OP-sampling) scheme, based on Wasserstein gradient flows, is proposed as the adaptation mechanism of the sample adapter.
OP-sampling is general and ensures that samples are adapted in a way that makes the sample density evolve optimally to a target distribution, thus endowing the framework with fast uncertainty adaption.
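The excerpt does not spell out the OP-sampling update itself, so the sketch below uses unadjusted Langevin dynamics, the classical time discretization of the Wasserstein gradient flow of the KL divergence, purely as a generic stand-in for a density-evolving sample adapter; the step size, number of steps, and grad_log_p are assumptions.

```python
import numpy as np

def langevin_adapt(x, grad_log_p, step=1e-3, n_steps=100, rng=None):
    """Adapt meta samples x of shape (n, d) toward a task target with score grad_log_p.

    Unadjusted Langevin dynamics discretizes the Wasserstein gradient flow of
    KL(q || p); it is used here only as a generic illustration of sample adaptation.
    """
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_p(x) + np.sqrt(2 * step) * noise
    return x
```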
Finally, when one aims to perform specific tasks such as Bayesian classification with a task network, these samples are used to encode uncertainty into the model.
To this end, we further develop an efficient learning algorithm to optimize the task network based on variational inference.
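The excerpt does not give the concrete objective for the task network, so the following is a generic Monte Carlo estimate of a variational lower bound; sample_q, log_lik, and kl_q_p are placeholder callables standing in for whatever variational family and prior the paper actually uses.

```python
import numpy as np

def mc_elbo(x, y, sample_q, log_lik, kl_q_p, n_samples=8):
    """Monte Carlo estimate of a variational lower bound for a task network.

    sample_q(): draws task-network weights from the variational posterior.
    log_lik(w, x, y): data log-likelihood under weights w.
    kl_q_p(): KL divergence from the variational posterior to the prior.
    """
    lik = np.mean([log_lik(sample_q(), x, y) for _ in range(n_samples)])
    return lik - kl_q_p()
```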
Extensive experiments are conducted to test the advantages of the proposed meta-sampling framework, ranging from synthetic-distribution to posterior-distribution adaption and to k-shot learning in Bayesian neural networks and reinforcement learning.
Our results demonstrate a better performance of the proposed model compared to related methods.
We present a Bayesian meta-sampling framework, called DAMS, consisting of a meta sampler and a sample adapter for effective uncertainty adaption.
Our model is based on the recently proposed neural autoregressive flows and related theory from optimal transport, enabling a simple yet effective training procedure.
To make the proposed model scalable, an efficient uncertainty parameterization is proposed for the task network, which is trained by variational inference.
DAMS is general and can be applied to different scenarios with an ability for fast uncertainty adaptation.
Experiments on a series of tasks demonstrate the advantages of the proposed framework over other methods including the recently proposed meta SG-MCMC, in terms of both sample efficiency and fast uncertainty adaption. | We proposed a Bayesian meta sampling method for adapting the model uncertainty in meta learning | {
"domains": [
"artificial_intelligence"
],
"input_context": "multiple_paragraphs",
"output_context": "sentence",
"source_type": "single_source",
"task_family": "summarization"
} | scitldr_aic:train:799 |