Columns:
  input         string (length 286 to 19k)
  output        string (length 1 to 15.8k)
  metadata      dict
  _instance_id  string (length 15 to 62)
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Existing multi-agent reinforcement learning (MARL) communication methods have relied on a trusted third party (TTP) to distribute reward to agents, leaving them inapplicable in peer-to-peer environments. This paper proposes reward distribution using Neuron as an Agent (NaaA) in MARL without a TTP with two key ideas: (i) inter-agent reward distribution and (ii) auction theory. Auction theory is introduced because inter-agent reward distribution is insufficient for optimization. Agents in NaaA maximize their profits (the difference between reward and cost) and, as a theoretical result, the auction mechanism is shown to have agents autonomously evaluate counterfactual returns as the values of other agents. NaaA enables representation trades in peer-to-peer environments, ultimately regarding units in neural networks as agents. Finally, numerical experiments (a single-agent environment from OpenAI Gym and a multi-agent environment from ViZDoom) confirm that NaaA framework optimization leads to better performance in reinforcement learning.
Neuron as an Agent (NaaA) enables us to train multi-agent communication without a trusted third party.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:980
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations, longer training times, and unexpected model degradation. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large. Full network pre-training (Dai & Le, 2015; Radford et al., 2018; Howard & Ruder, 2018) has led to a series of breakthroughs in language representation learning. Many nontrivial NLP tasks, including those that have limited training data, have greatly benefited from these pre-trained models. One of the most compelling signs of these breakthroughs is the evolution of machine performance on a reading comprehension task designed for middle and high-school English exams in China, the RACE test (Lai et al., 2017): the paper that originally describes the task and formulates the modeling challenge reports then state-of-the-art machine accuracy at 44.1%; the latest published result reports their model performance at 83.2%; the work we present here pushes it even higher to 89.4%, a stunning 45.3% improvement that is mainly attributable to our current ability to build high-performance pretrained language representations. Evidence from these improvements reveals that a large network is of crucial importance for achieving state-of-the-art performance. It has become common practice to pre-train large models and distill them down to smaller ones (Sun et al., 2019; Turc et al., 2019) for real applications. Given the importance of model size, we ask: Is having better NLP models as easy as having larger models? An obstacle to answering this question is the memory limitations of available hardware. Given that current state-of-the-art models often have hundreds of millions or even billions of parameters, it is easy to hit these limitations as we try to scale our models. Training speed can also be significantly hampered in distributed training, as the communication overhead is directly proportional to the number of parameters in the model. We also observe that simply growing the hidden size of a model such as BERT-large can lead to worse performance. Table 1 and Fig. 1 show a typical example, where we simply increase the hidden size of BERT-large to be 2x larger and get worse results with this BERT-xlarge model.

Table 1: Increasing hidden size of BERT-large leads to worse performance on RACE.
Model               Hidden Size  Parameters  RACE (Accuracy)
BERT-large          1024         334M        72.0%
BERT-large (ours)   1024         334M        73.9%
BERT-xlarge (ours)  2048         1270M       54.3%

Existing solutions to the aforementioned problems include model parallelization (Shoeybi et al., 2019) and clever memory management (Chen et al., 2016).
These solutions address the memory limitation problem, but not the communication overhead and model degradation problem. In this paper, we address all of the aforementioned problems by designing A Lite BERT (ALBERT) architecture that has significantly fewer parameters than a traditional BERT architecture. ALBERT incorporates two parameter reduction techniques that lift the major obstacles in scaling pre-trained models. The first one is a factorized embedding parameterization. By decomposing the large vocabulary embedding matrix into two small matrices, we separate the size of the hidden layers from the size of vocabulary embedding. This separation makes it easier to grow the hidden size without significantly increasing the parameter size of the vocabulary embeddings. The second technique is cross-layer parameter sharing. This technique prevents the number of parameters from growing with the depth of the network. Both techniques significantly reduce the number of parameters for BERT without seriously hurting performance, thus improving parameter-efficiency. An ALBERT configuration similar to BERT-large has 18x fewer parameters and can be trained about 1.7x faster. The parameter reduction techniques also act as a form of regularization that stabilizes the training and helps with generalization. To further improve the performance of ALBERT, we also introduce a self-supervised loss for sentence-order prediction (SOP). SOP primarily focuses on inter-sentence coherence and is designed to address the ineffectiveness of the next sentence prediction (NSP) loss proposed in the original BERT. As a result of these design decisions, we are able to scale up to much larger ALBERT configurations that still have fewer parameters than BERT-large but achieve significantly better performance. We establish new state-of-the-art results on the well-known GLUE, SQuAD, and RACE benchmarks for natural language understanding. Specifically, we push the RACE accuracy to 89.4%, the GLUE benchmark to 89.4, and the F1 score of SQuAD 2.0 to 92.2. While ALBERT-xxlarge has fewer parameters than BERT-large and gets significantly better results, it is computationally more expensive due to its larger structure. An important next step is thus to speed up the training and inference speed of ALBERT through methods like sparse attention and block attention (Shen et al., 2018). An orthogonal line of research, which could provide additional representation power, includes hard example mining (Mikolov et al., 2013) and more efficient language modeling training. Additionally, although we have convincing evidence that sentence order prediction is a more consistently-useful learning task that leads to better language representations, we hypothesize that there could be more dimensions not yet captured by the current self-supervised training losses that could create additional representation power for the resulting representations. RACE: RACE is a large-scale dataset for multi-choice reading comprehension, collected from English examinations in China with nearly 100,000 questions. Each instance in RACE has 4 candidate answers. Following prior work, we use the concatenation of the passage, question, and each candidate answer as the input to models. Then, we use the representations from the "[CLS]" token for predicting the probability of each answer. The dataset consists of two domains: middle school and high school. We train our models on both domains and report accuracies on both the development set and test set.
A.2 HYPERPARAMETERS Hyperparameters for downstream tasks are shown in Table 15. We adapt these hyperparameters from prior work, including Yang et al. (2019).
A new pretraining method that establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:981
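The ALBERT record above describes the factorized embedding parameterization in enough detail to sanity-check the parameter savings. Below is a minimal PyTorch sketch of that counting argument; the sizes V, H, E are illustrative choices, not the paper's exact configuration.

```python
# Illustrative only: parameter counting for the factorized embedding idea
# described in the ALBERT record above. V, H, E are assumed example values.
import torch.nn as nn

V, H, E = 30000, 4096, 128   # vocab size, hidden size, small embedding size (assumed)

# BERT-style: a single V x H embedding matrix tied to the hidden size.
untied = nn.Embedding(V, H)
# ALBERT-style factorization: V x E lookup followed by an E x H projection.
factorized = nn.Sequential(nn.Embedding(V, E), nn.Linear(E, H, bias=False))

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(untied))      # 30000 * 4096               = 122,880,000 parameters
print(count(factorized))  # 30000 * 128 + 128 * 4096   =   4,364,288 parameters
```

The same counting argument is why cross-layer parameter sharing keeps the total parameter count flat as the network gets deeper.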
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Structured tabular data is the most commonly used form of data in industry according to a Kaggle ML and DS Survey. Gradient Boosting Trees, Support Vector Machine, Random Forest, and Logistic Regression are typically used for classification tasks on tabular data. The recent work on the Super Characters method, which uses two-dimensional word embeddings, achieved state-of-the-art results in text classification tasks, showcasing the promise of this new approach. In this paper, we propose the SuperTML method, which borrows the idea of the Super Characters method and two-dimensional embeddings to address the problem of classification on tabular data. For each input of tabular data, the features are first projected into two-dimensional embeddings like an image, and then this image is fed into fine-tuned ImageNet CNN models for classification. Experimental results have shown that the proposed SuperTML method has achieved state-of-the-art results on both large and small datasets. In data science, data is categorized into structured data and unstructured data. Structured data is also known as tabular data, and the terms will be used interchangeably. Anthony Goldbloom, the founder and CEO of Kaggle, observed that winning techniques have been divided by whether the data was structured or unstructured BID12. Currently, DNN models are widely applied for usage on unstructured data such as image, speech, and text. According to Anthony, "When the data is unstructured, its definitely CNNs and RNNs that are carrying the day" BID12. The successful CNN model in the ImageNet competition BID8 has outperformed humans on the image classification task with ResNet BID6 since 2015. On the other side of the spectrum, machine learning models such as Support Vector Machine (SVM), Gradient Boosting Trees (GBT), Random Forest, and Logistic Regression have been used to process structured data. According to a recent survey of 14,000 data scientists by Kaggle (2017), a subdivision of structured data known as relational data is reported as the most popular type of data in industry, with at least 65% working daily with relational data. Regarding structured data competitions, Anthony says that currently XGBoost is winning practically every competition in the structured data category BID4. XGBoost BID2 is one popular package implementing the Gradient Boosting method. Recent research has tried using one-dimensional embedding and implementing RNNs or one-dimensional CNNs to address the TML (Tabular Machine Learning) tasks, or tasks that deal with structured data processing BID7 BID11, and also categorical embedding for tabular data with categorical features BID5. However, this reliance upon one-dimensional embeddings may soon come to change. Recent NLP research has shown that the two-dimensional embedding of the Super Characters method BID9 is capable of achieving state-of-the-art results on large dataset benchmarks. The Super Characters method is a two-step method that was initially designed for text classification problems. In the first step, the characters of the input text are drawn onto a blank image. In the second step, the image is fed into two-dimensional CNN models for classification. The two-dimensional CNN models are trained by fine-tuning from pretrained models on a large image dataset, e.g.
ImageNet. In this paper, we propose the SuperTML method, which borrows the concept of the Super Characters method to address TML problems. For each input, tabular features are first projected onto a two-dimensional embedding and fed into fine-tuned two-dimensional CNN models for classification. The proposed SuperTML method handles the categorical type and missing values in tabular data automatically, without the need for explicit conversion into numerical values. The proposed SuperTML method borrows the idea of two-dimensional embedding from Super Characters and transfers the knowledge learned from computer vision to structured tabular data. Experimental results show that the proposed SuperTML method has achieved state-of-the-art results on both large and small tabular datasets.
Deep learning on structured tabular data using two-dimensional word embedding with fine-tuned ImageNet pre-trained CNN model.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:982
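The SuperTML record above describes the core pipeline concretely: render each tabular row as an image, then classify the image with a fine-tuned ImageNet CNN. The following is a rough sketch of that idea only; the text layout, the example feature values, and the ResNet-18 backbone are illustrative assumptions, not the paper's recipe.

```python
# A rough sketch of the tabular-to-image idea from the SuperTML record above.
# Layout, font handling, and the backbone are assumptions for illustration.
import torch
from PIL import Image, ImageDraw
from torchvision import models, transforms

def tabular_to_image(features, size=224):
    img = Image.new("RGB", (size, size), color="black")
    draw = ImageDraw.Draw(img)
    for i, value in enumerate(features):
        # One text cell per feature, stacked vertically on the blank image.
        draw.text((10, 10 + i * (size // len(features))), f"{value}", fill="white")
    return img

row = [5.1, 3.5, 1.4, 0.2]                      # e.g. one Iris-like sample (assumed)
img = tabular_to_image(row)

preprocess = transforms.Compose([transforms.ToTensor()])
backbone = models.resnet18()                    # in practice: ImageNet weights, then fine-tune
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 3)  # 3 target classes (assumed)
logits = backbone(preprocess(img).unsqueeze(0))
print(logits.shape)                             # torch.Size([1, 3])
```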
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Predictive coding, within theoretical neuroscience, and variational autoencoders, within machine learning, both involve latent Gaussian models and variational inference. While these areas share a common origin, they have evolved largely independently. We outline connections and contrasts between these areas, using their relationships to identify new parallels between machine learning and neuroscience. We then discuss specific frontiers at this intersection: backpropagation, normalizing flows, and attention, with mutual benefits for both fields. Perception has been conventionally formulated as hierarchical feature detection [52], similar to discriminative deep networks [34]. In contrast, predictive coding [48, 14] and variational autoencoders (VAEs) [31, 51] frame perception as a generative process, modeling data observations to learn and infer aspects of the external environment. Specifically, both areas model observations, x, using latent variables, z, through a probabilistic model, p_θ(x, z) = p_θ(x|z) p_θ(z). Both areas also use variational inference, introducing an approximate posterior, q(z|x), to infer z and learn the model parameters, θ. These similarities are the result of a common origin, with Mumford [45], Dayan et al. [9], and others [46] formalizing earlier ideas [59, 38]. However, since their inception, these areas have developed largely independently. We explore their relationships (see also [58, 37]) and highlight opportunities for the transfer of ideas. In identifying these ties, we hope to strengthen this promising, close connection between neuroscience and machine learning, prompting further investigation. We have identified commonalities between predictive coding and VAEs, discussing new frontiers resulting from this perspective. Reuniting these areas may strengthen the connection between neuroscience and machine learning. Further refining this connection could lead to mutual benefits: neuroscience can offer inspiration for investigation in machine learning, and machine learning can evaluate ideas on real-world datasets and environments. Indeed, despite some push back [17], if predictive coding and related theories [18] are to become validated descriptions of the brain and overcome their apparent generality, they will likely require the computational tools and ideas of modern machine learning to pin down and empirically compare design choices.
connections between predictive coding and VAEs + new frontiers
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:983
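The record above rests on one shared piece of machinery: a latent Gaussian model p_θ(x, z) = p_θ(x|z) p_θ(z) fit by variational inference with an approximate posterior q(z|x). A minimal VAE-style sketch of that objective follows; the architecture, sizes, and Gaussian reconstruction term are assumptions for illustration, not either field's canonical model.

```python
# Minimal sketch of the shared latent-Gaussian / variational-inference structure
# described above. Sizes and architecture are illustrative assumptions.
import torch
import torch.nn as nn

x_dim, z_dim = 784, 16

encoder = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 2 * z_dim))  # q(z|x)
decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))      # p(x|z)

def negative_elbo(x):
    mu, log_var = encoder(x).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()          # reparameterization trick
    recon = nn.functional.mse_loss(decoder(z), x, reduction="sum")  # -log p(x|z) up to a constant (Gaussian likelihood)
    kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum()    # KL(q(z|x) || N(0, I))
    return recon + kl

x = torch.rand(8, x_dim)
loss = negative_elbo(x)
loss.backward()   # both encoder (inference) and decoder (generative model) get gradients
```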
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Adaptive optimization methods such as AdaGrad, RMSprop and Adam have been proposed to achieve a rapid training process with an element-wise scaling term on learning rates. Though prevailing, they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates. Recent work has put forward some algorithms such as AMSGrad to tackle this issue but they failed to achieve considerable improvement over existing methods. In our paper, we demonstrate that extreme learning rates can lead to poor performance. We provide new variants of Adam and AMSGrad, called AdaBound and AMSBound respectively, which employ dynamic bounds on learning rates to achieve a gradual and smooth transition from adaptive methods to SGD and give a theoretical proof of convergence. We further conduct experiments on various popular tasks and models, which is often insufficient in previous work. Experimental results show that new variants can eliminate the generalization gap between adaptive methods and SGD and maintain higher learning speed early in training at the same time. Moreover, they can bring significant improvement over their prototypes, especially on complex deep networks. The implementation of the algorithm can be found at https://github.com/Luolc/AdaBound . There has been tremendous progress in first-order optimization algorithms for training deep neural networks. One of the most dominant algorithms is stochastic gradient descent (SGD) BID15 , which performs well across many applications in spite of its simplicity. However, there is a disadvantage of SGD that it scales the gradient uniformly in all directions. This may lead to poor performance as well as limited training speed when the training data are sparse. To address this problem, recent work has proposed a variety of adaptive methods that scale the gradient by square roots of some form of the average of the squared values of past gradients. Examples of such methods include ADAM BID7 , ADAGRAD BID2 and RMSPROP BID16 . ADAM in particular has become the default algorithm leveraged across many deep learning frameworks due to its rapid training speed BID17 .Despite their popularity, the generalization ability and out-of-sample behavior of these adaptive methods are likely worse than their non-adaptive counterparts. Adaptive methods often display faster progress in the initial portion of the training, but their performance quickly plateaus on the unseen data (development/test set) BID17 . Indeed, the optimizer is chosen as SGD (or with momentum) in several recent state-of-the-art works in natural language processing and computer vision BID11 BID18 , wherein these instances SGD does perform better than adaptive methods. BID14 have recently proposed a variant of ADAM called AMSGRAD, hoping to solve this problem. The authors provide a theoretical guarantee of convergence but only illustrate its better performance on training data. However, the generalization ability of AMSGRAD on unseen data is found to be similar to that of ADAM while a considerable performance gap still exists between AMSGRAD and SGD BID6 BID1 .In this paper , we first conduct an empirical study on ADAM and illustrate that both extremely large and small learning rates exist by the end of training. 
The results correspond with the perspective pointed out by BID17 that the lack of generalization performance of adaptive methods may stem from unstable and extreme learning rates. In fact, introducing non-increasing learning rates, the key point in AMSGRAD, may help abate the impact of huge learning rates, while it neglects possible effects of small ones. We further provide an example of a simple convex optimization problem to elucidate how tiny learning rates of adaptive methods can lead to undesirable non-convergence. In such settings, RMSPROP and ADAM provably do not converge to an optimal solution, and furthermore, however large the initial step size α is, it is impossible for ADAM to fight against the scale-down term. Based on the above analysis, we propose new variants of ADAM and AMSGRAD, named ADABOUND and AMSBOUND, which do not suffer from the negative impact of extreme learning rates. We employ dynamic bounds on learning rates in these adaptive methods, where the lower and upper bound are initialized as zero and infinity respectively, and they both smoothly converge to a constant final step size. The new variants can be regarded as adaptive methods at the beginning of training, and they gradually and smoothly transform to SGD (or with momentum) as the time step increases. In this framework, we can enjoy a rapid initial training process as well as good final generalization ability. We provide a convergence analysis for the new variants in the convex setting. We finally turn to an empirical study of the proposed methods on various popular tasks and models in computer vision and natural language processing. Experimental results demonstrate that our methods have higher learning speed early in training and in the meantime guarantee strong generalization performance compared to several adaptive and non-adaptive methods. Moreover, they can bring considerable improvement over their prototypes especially on complex deep networks. We investigate existing adaptive algorithms and find that extremely large or small learning rates can result in poor convergence behavior. A rigorous proof of non-convergence for ADAM is provided to demonstrate the above problem. Motivated by the strong generalization ability of SGD, we design a strategy to constrain the learning rates of ADAM and AMSGRAD to avoid a violent oscillation. Our proposed algorithms, ADABOUND and AMSBOUND, which employ dynamic bounds on their learning rates, achieve a smooth transition to SGD. They show great efficacy on several standard benchmarks while maintaining advantageous properties of adaptive methods such as rapid initial progress and hyperparameter insensitivity.
Novel variants of optimization methods that combine the benefits of both adaptive and non-adaptive methods.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:984
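The AdaBound record above hinges on one mechanism: clip the Adam-style per-coordinate step size between a lower and an upper bound that both converge to a fixed final step size, so training starts adaptive and ends SGD-like. The single-step sketch below is schematic, not the authors' reference implementation; the bound schedules and bias-correction handling are assumptions.

```python
# Schematic single update of the dynamic-bound idea from the AdaBound record above.
# The exact bound schedules and bias handling here are assumed for illustration.
import torch

def bounded_adam_step(p, grad, state, lr=1e-3, final_lr=0.1, betas=(0.9, 0.999),
                      gamma=1e-3, eps=1e-8):
    m, v, t = state["m"], state["v"], state["t"] + 1
    m.mul_(betas[0]).add_(grad, alpha=1 - betas[0])                # first moment
    v.mul_(betas[1]).addcmul_(grad, grad, value=1 - betas[1])      # second moment
    bias_c1, bias_c2 = 1 - betas[0] ** t, 1 - betas[1] ** t
    step_size = lr * (bias_c2 ** 0.5) / bias_c1
    # Dynamic bounds: wide at the start (adaptive regime), converging to final_lr (SGD regime).
    lower = final_lr * (1 - 1 / (gamma * t + 1))
    upper = final_lr * (1 + 1 / (gamma * t))
    per_coord_lr = torch.clamp(step_size / (v.sqrt() + eps), lower, upper)
    p.data.add_(-per_coord_lr * m)
    state.update(m=m, v=v, t=t)

p = torch.zeros(3, requires_grad=True)
state = {"m": torch.zeros(3), "v": torch.zeros(3), "t": 0}
bounded_adam_step(p, torch.tensor([0.5, -1.0, 2.0]), state)
print(p.data)   # one clipped adaptive step
```

At t = 1 the bounds are roughly (0, 1000 * final_lr), i.e. effectively unconstrained; as t grows both bounds squeeze toward final_lr, which is the smooth transition to an SGD-like constant step.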
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: An important problem that arises in reinforcement learning and Monte Carlo methods is estimating quantities defined by the stationary distribution of a Markov chain. In many real-world applications, access to the underlying transition operator is limited to a fixed set of data that has already been collected, without additional interaction with the environment being available. We show that consistent estimation remains possible in this scenario, and that effective estimation can still be achieved in important applications. Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions, derived from fundamental properties of the stationary distribution, and exploiting constraint reformulations based on variational divergence minimization. The resulting algorithm, GenDICE, is straightforward and effective. We prove the consistency of the method under general conditions, provide a detailed error analysis, and demonstrate strong empirical performance on benchmark tasks, including off-line PageRank and off-policy policy evaluation. Estimation of quantities defined by the stationary distribution of a Markov chain lies at the heart of many scientific and engineering problems. Famously, the steady-state distribution of a random walk on the World Wide Web provides the foundation of the PageRank algorithm (Langville & Meyer, 2004) . In many areas of machine learning, Markov chain Monte Carlo (MCMC) methods are used to conduct approximate Bayesian inference by considering Markov chains whose equilibrium distribution is a desired posterior (Andrieu et al., 2002 ). An example from engineering is queueing theory, where the queue lengths and waiting time under the limiting distribution have been extensively studied (Gross et al., 2018) . As we will also see below, stationary distribution quantities are of fundamental importance in reinforcement learning (RL) (e.g., Tsitsiklis & Van Roy, 1997) . Classical algorithms for estimating stationary distribution quantities rely on the ability to sample next states from the current state by directly interacting with the environment (as in on-line RL or MCMC), or even require the transition probability distribution to be given explicitly (as in PageRank). Unfortunately, these classical approaches are inapplicable when direct access to the environment is not available, which is often the case in practice. There are many practical scenarios where a collection of sampled trajectories is available, having been collected off-line by an external mechanism that chose states and recorded the subsequent next states. Given such data, we still wish to estimate a stationary quantity. One important example is off-policy policy evaluation in RL, where we wish to estimate the value of a policy different from that used to collect experience. Another example is off-line PageRank (OPR), where we seek to estimate the relative importance of webpages given a sample of the web graph. Motivated by the importance of these off-line scenarios, and by the inapplicability of classical methods, we study the problem of off-line estimation of stationary values via a stationary distribution corrector. 
Instead of having access to the transition probabilities or a next-state sampler, we assume only access to a fixed sample of state transitions, where states have been sampled from an unknown distribution and next-states are sampled according to the Markov chain's transition operator. This off-line setting is distinct from that considered by most MCMC or on-line RL methods, where it is assumed that new observations can be continually sampled on demand from the environment. The off-line setting is indeed more challenging than its more traditional on-line counterpart, given that one must infer an asymptotic quantity from finite data. Nevertheless, we develop techniques that still allow consistent estimation under general conditions, and provide effective estimates in practice. The main contributions of this work are:
• We formalize the problem of off-line estimation of stationary quantities, which captures a wide range of practical applications.
• We propose a novel stationary distribution estimator, GenDICE, for this task. The resulting algorithm is based on a new dual embedding formulation for divergence minimization, with a carefully designed mechanism that explicitly eliminates degenerate solutions.
• We theoretically establish consistency and other statistical properties of GenDICE, and empirically demonstrate that it achieves significant improvements on several behavior-agnostic off-policy evaluation benchmarks and an off-line version of PageRank.
The methods we develop in this paper fundamentally extend recent work in off-policy policy evaluation (Liu et al., 2018; Nachum et al., 2019) by introducing a new formulation that leads to a more general, and as we will show, more effective estimation method. In this paper, we proposed a novel algorithm, GenDICE, for general stationary distribution correction estimation, which can handle both the discounted and average stationary distribution given multiple behavior-agnostic samples. Empirical results on off-policy evaluation and offline PageRank show the superiority of the proposed method over existing state-of-the-art methods. Our discussion is based on the assumption of the existence of the stationary distribution. Assumption 1: Under the target policy, the resulting state-action transition operator T has a unique stationary distribution in terms of the divergence D(·||·). If the total variation divergence is selected, Assumption 1 requires the transition operator to be ergodic, as discussed in Meyn & Tweedie (2012).
In this paper, we proposed a novel algorithm, GenDICE, for general stationary distribution correction estimation, which can handle both discounted and average off-policy evaluation on multiple behavior-agnostic samples.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:985
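For the GenDICE record above, the quantity being estimated can be written compactly. The following is a schematic formulation in assumed notation (undiscounted, average case only), written by the editor for orientation and not copied from the paper.

```latex
% Schematic only; notation assumed. The stationary distribution \mu of the transition
% operator T, and the correction ratio \tau relative to the fixed data distribution d:
\[
  \mu(s') = \sum_{s} T(s' \mid s)\,\mu(s) \quad \text{for all } s',
  \qquad
  \tau(s) := \frac{\mu(s)}{d(s)} .
\]
% Substituting \mu = \tau \cdot d turns stationarity into a condition that involves only
% quantities estimable from the off-line transitions (states s \sim d, next states
% s' \sim T(\cdot \mid s)):
\[
  \sum_{s} d(s)\, T(s' \mid s)\, \tau(s) \;=\; d(s')\,\tau(s') \quad \text{for all } s',
  \qquad
  \mathbb{E}_{s \sim d}[\tau(s)] = 1 ,
\]
% where the normalization constraint rules out the degenerate solution \tau \equiv 0.
% A GenDICE-style estimator fits a parameterized \tau by minimizing a divergence
% between the two sides of the balance equation.
```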
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: The power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive. Numerous rigorous attempts have been made to explain generalization, but available bounds are still quite loose, and analysis does not always lead to true understanding. The goal of this work is to make generalization more intuitive. Using visualization methods, we discuss the mystery of generalization, the geometry of loss landscapes, and how the curse (or, rather, the blessing) of dimensionality causes optimizers to settle into minima that generalize well. Neural networks are a powerful tool for solving classification problems. The power of these models is due in part to their expressiveness; they have many parameters that can be efficiently optimized to fit nearly any finite training set. However, the real power of neural network models comes from their ability to generalize; they often make accurate predictions on test data that were not seen during training, provided the test data is sampled from the same distribution as the training data. The generalization ability of neural networks is seemingly at odds with their expressiveness. Neural network training algorithms work by minimizing a loss function that measures model performance using only training data. Because of their flexibility, it is possible to find parameter configurations for neural networks that perfectly fit the training data and minimize the loss function while making mostly incorrect predictions on test data.

Figure 1: A minefield of bad minima: we train a neural net classifier and plot the iterates of SGD after each tenth epoch (red dots). We also plot locations of nearby "bad" minima with poor generalization (blue dots). We visualize these using t-SNE embedding. All blue dots achieve near perfect train accuracy, but with test accuracy below 53% (random chance is 50%). The final iterate of SGD (yellow star) also achieves perfect train accuracy, but with 98.5% test accuracy. Miraculously, SGD always finds its way through a landscape full of bad minima, and lands at a minimizer with excellent generalization.

Miraculously, commonly used optimizers reliably avoid such "bad" minima of the loss function, and succeed at finding "good" minima that generalize well. Our goal here is to develop an intuitive understanding of neural network generalization using visualizations and experiments rather than analysis. We begin with some experiments to understand why generalization is puzzling, and how over-parameterization impacts model behavior. Then, we explore how the "flatness" of minima correlates with generalization, and in particular try to understand why this correlation exists. We explore how the high dimensionality of parameter spaces biases optimizers towards landing in flat minima that generalize well. Finally, we present some counterfactual experiments to validate the intuition we develop. Code to reproduce experiments is available at https://github.com/genviz2019/genviz. We explored the connection between generalization and loss function geometry using visualizations and experiments on classification margin and loss basin volumes, the latter of which does not appear in the literature.
While experiments can provide useful insights, they sometimes raise more questions than they answer. We explored why the "large margin" properties of flat minima promote generalization. But what is the precise metric for "margin" that neural networks respect? Experiments suggest that the small volume of bad minima prevents optimizers from landing in them. But what is a correct definition of "volume" in a space that is invariant to parameter re-scaling and other transforms, and how do we correctly identify the attraction basins for good minima? Finally and most importantly: how do we connect these observations back to a rigorous PAC learning framework? The goal of this study is to foster appreciation for the complex behaviors of neural networks, and to provide some intuitions for why neural networks generalize. We hope that the experiments contained here will provide inspiration for theoretical progress that leads us to rigorous and definitive answers to the deep questions raised by generalization.
An intuitive empirical and visual exploration of the generalization properties of deep neural networks.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:986
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We argue that symmetry is an important consideration in addressing the problem of systematicity and investigate two forms of symmetry relevant to symbolic processes. We implement this approach in terms of convolution and show that it can be used to achieve effective generalisation in three toy problems: rule learning, composition and grammar learning. Convolution (LeCun & Bengio, 1998) has been an incredibly effective element in making Deep Learning successful. Applying the same set of filters across all positions in an image captures an important characteristic of the processes that generate the objects depicted in them, namely the translational symmetry of the underlying laws of nature. Given the impact of these architectures, researchers are increasingly interested in finding approaches that can be used to exploit further symmetries (Cohen & Welling, 2016; Higgins et al., 2018) , such as rotation or scale. Here, we will investigate symmetries relevant to symbolic processing. We show that incorporating symmetries derived from symbolic processes into neural architectures allows them to generalise more robustly on tasks that require handling elements and structures that were not seen at training time. Specifically, we construct convolution-based models that outperform standard approaches on the rule learning task of Marcus et al. (1999) , a simplified form of the SCAN task (Lake & Baroni, 2018 ) and a simple context free language learning task. Symbolic architectures form the main alternative to conventional neural networks as models of intelligent behaviour, and have distinct characteristics and abilities. Specifically, they form representations in terms of structured combinations of atomic symbols. Their power comes not from the atomic symbols themselves, which are essentially arbitrary, but from the ability to construct and transform complex structures. This allows symbolic processing to happen without regard to the meaning of the symbols themselves, expressed in the formalist's motto as If you take care of the syntax, the semantics will take care of itself (Haugeland, 1985) . From this point of view, thought is a form of algebra (James, 1890; Boole, 1854) in which formal rules operate over symbolic expressions, without regard to the values of the variables they contain (Marcus, 2001) . As a consequence, those values can be processed systematically across all the contexts they occur in. So, for example, we do not need to know who Socrates is or even what mortal means in order to draw a valid conclusion from All men are mortal and Socrates is a man. However, connectionist approaches have been criticised as lacking this systematicity. Fodor & Pylyshyn (1988) claimed that neural networks lack the inherent ability to model the fact that cognitive capacities always exhibit certain symmetries, so that the ability to entertain a given thought implies the ability to entertain thoughts with semantically related contents. Thus, understanding these symmetries and designing neural architectures around them may enable us to build systems that demonstrate this systematicity. However, the concept of systematicity has itself drawn scrutiny and criticism from a range of researchers interested in real human cognition and behaviour. 
Pullum & Scholz (2007) argue that the definition is too vague. Nonetheless, understanding the symmetries of symbolic processes is likely to be fruitful in itself, even where human cognition fails to fully embody that idealisation. We investigate two kinds of symmetry, relating to permutations of symbols and to equivalence between memory slots. The relation between symbols and their referents is, in principle, arbitrary, and any permutation of this correspondence is therefore a symmetry of the system. More simply, the names we give to things do not matter, and we should be able to get equivalent results whether we call it rose or trandafir, as long as we do so consistently. Following on from that, a given symbol should be treated consistently wherever we find it. This can be thought of as a form of symmetry over the various slots within the data structures, such as stacks and queues, where symbols can be stored. We explore these questions using a number of small toy problems and compare the performance of architectures with and without the relevant symmetries. In each case, we use convolution as the means of implementing the symmetry, which, in practical terms, allows us to rely only on standard deep learning components. In addition, this approach opens up novel uses for convolutional architectures, and suggests connections between symbolic processes and spatial representations. One way to address the criticisms of distributed approaches raised by Fodor & Pylyshyn (1988) has been to focus on methods for binding and combining multiple representations (Smolensky, 1990; Hinton, 1990; Plate, 1991; Pollack, 1990; Hummel & Holyoak, 1997) in order to handle constituent structure more effectively. Here, we instead examined the role of symmetry in the systematicity of how those representations are processed, using a few simple proof-of-concept problems. We showed that imposing a symmetry on the architecture was effective in obtaining the desired form of generalisation when learning simple rules, composing representations and learning grammars. In particular, we discussed two forms of symmetry relevant to the processing of symbols, corresponding respectively to the fact that all atomic symbols are essentially equivalent and the fact that any given symbol can be represented in multiple places, yet retain the same meaning. The first of these gives rise to a symmetry under permutations of these symbols, which allows generalisation to occur from one symbol to another. The second gives rise to a symmetry across memory locations, which allows generalisation from simple structures to more complex ones. On all the problems, we implemented the symmetries using convolution. From a practical point of view, this allowed us to build networks using only long-accepted components from the standard neural toolkit. From a theoretical point of view, however, this implementation decision draws a connection between the cognition of space and the cognition of symbols. The translational invariance of space is probably the most significant and familiar example of symmetry we encounter in our natural environment. As such it forms a sensible foundation on which to build an understanding of other symmetries. In fact, Corcoran & Tarski (1986) use invariances under various spatial transformations within geometry as a starting point for their definition of logical notion in terms of invariance under all permutations. 
Moreover, from an evolutionary perspective, it is also plausible that there are common origins behind the mechanisms that support the exploitation of a variety of different symmetries, including potentially spatial and symbolic. In addition, recent research supports the idea that cerebral structures historically associated with the representation of spatial structure, such as the hippocampus and entorhinal cortex, also play a role in representing more general relational structures (Behrens et al., 2018; Duff & Brown-Schmidt, 2012) . Thus, our use of convolution is not merely a detail of implementation, but also an illustration of how spatial symmetries might relate to more abstract domains. In particular, the recursive push down automata, discussed in Section 4, utilises push and pop operations that relate fairly transparently to spatial translations. Of course, a variety of other symmetries, beyond translations, are likely to be important in human cognition, and an important challenge for future research will be to understand how symmetries are discovered and learned empirically, rather than being innately specified. A common theme in our exploration of symmetry, was the ability it conferred to separate content from structure. Imposing a symmetry across symbols or memory locations, allowed us to abstract away from the particular content represented to represent the structure containing it. So, for example the grammar rule learned by our network on the syllable sequences of Marcus et al. (1999) was able to generalise from seen to unseen syllables because it represented the abstract structure of ABB and ABA sequences, without reference to the particular syllables involved. We explored how this ability could also be exploited on composition and grammar learning tasks, but it is likely that there are many other situations where such a mechanism would be useful.
We use convolution to make neural networks behave more like symbolic systems.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:987
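The record above argues that weight sharing across the symbol dimension yields permutation symmetry over the alphabet. A minimal sketch of that idea, not the paper's architecture: a shared 1-D convolution applied across one-hot symbol channels, followed by pooling over symbols, so relabeling the alphabet leaves the output unchanged.

```python
# Minimal sketch of permutation symmetry over symbols via weight sharing.
# Sequence length, alphabet size, and the ABB/ABA task are illustrative assumptions.
import torch
import torch.nn as nn

n_symbols = 10          # size of the symbol alphabet (assumed)
seq_len = 3             # ABA / ABB style sequences

class SymbolSymmetricNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Treat the symbol axis like a spatial axis: the same filter is applied
        # to every symbol channel (kernel_size=1), so no symbol is special.
        self.shared = nn.Conv1d(in_channels=seq_len, out_channels=8, kernel_size=1)
        self.readout = nn.Linear(8, 2)   # e.g. classify ABB vs ABA

    def forward(self, x):
        # x: (batch, seq_len, n_symbols), one-hot along the last axis
        h = torch.relu(self.shared(x))   # (batch, 8, n_symbols), weights shared across symbols
        h = h.max(dim=-1).values         # pool over symbols -> invariant to relabeling
        return self.readout(h)

net = SymbolSymmetricNet()
x = torch.zeros(1, seq_len, n_symbols)
x[0, 0, 2] = x[0, 1, 5] = x[0, 2, 5] = 1.0       # an "ABB" sequence with A=2, B=5
perm = torch.randperm(n_symbols)
x_perm = x[:, :, perm]                            # same sequence, relabeled symbols
print(torch.allclose(net(x), net(x_perm)))        # True: output unchanged under relabeling
```

The structural point is the one made in the record: the network can only represent the abstract pattern (ABB versus ABA), because the symmetry prevents it from keying on particular symbol identities.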
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Key equatorial climate phenomena such as QBO and ENSO have never been adequately explained as deterministic processes. This is in spite of recent research showing growing evidence of predictable behavior. This study applies the fundamental Laplace tidal equations with simplifying assumptions along the equator — i.e. no Coriolis force and a small angle approximation. The solutions to the partial differential equations are highly non-linear (related to Navier-Stokes), and only search approaches can be used to fit them to the data.
Analytical Formulation of Equatorial Standing Wave Phenomena: Application to QBO and ENSO
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:988
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: In the field of Continual Learning, the objective is to learn several tasks one after the other without access to the data from previous tasks. Several solutions have been proposed to tackle this problem but they usually assume that the user knows which of the tasks to perform at test time on a particular sample, or rely on small samples from previous data, and most of them suffer from a substantial drop in accuracy when updated with batches of only one class at a time. In this article, we propose a new method, OvA-INN, which is able to learn one class at a time and without storing any of the previous data. To achieve this, for each class, we train a specific Invertible Neural Network to output the zero vector for its class. At test time, we can predict the class of a sample by identifying which network outputs the vector with the smallest norm. With this method, we show that we can take advantage of pretrained models by stacking an invertible network on top of a features extractor. This way, we are able to outperform state-of-the-art approaches that rely on features learning for the Continual Learning of MNIST and CIFAR-100 datasets. In our experiments, we reach 72% accuracy on CIFAR-100 after training our model one class at a time. A typical Deep Learning workflow consists in gathering data, training a model on this data and finally deploying the model in the real world (Goodfellow et al., 2016). If one needs to update the model with new data, it is necessary to merge the old and new data and train from scratch on this new dataset. Nevertheless, there are circumstances where this method may not apply. For example, it may not be possible to store the old data because of privacy issues (health records, sensitive data) or memory limitations (embedded systems, very large datasets). In order to address those limitations, recent works propose a variety of approaches in a setting called Continual Learning (Parisi et al., 2018). In Continual Learning, we aim to learn the parameters w of a model on a sequence of datasets with inputs x_i^j ∈ X_i and labels y_i^j ∈ Y_i, to predict p(y*|w, x*) for an unseen pair (x*, y*). The training has to be done on each dataset, one after the other, without the possibility to reuse previous datasets. The performance of a Continual Learning algorithm can then be measured with two protocols: multi-head or single-head. In the multi-head scenario, the task identifier i is known at test time. For evaluating performances on task i, the set of all possible labels is then Y = Y_i. In the single-head scenario, the task identifier is unknown; in that case we have Y = ∪_{i=1}^N Y_i, with N the number of tasks learned so far. For example, let us say that the goal is to learn MNIST sequentially with two batches: using only the data from the first five classes and then only the data from the remaining five other classes. In multi-head learning, one asks at test time to be able to recognize samples of 0-4 among the classes 0-4 and samples of 5-9 among classes 5-9. On the other hand, in single-head learning, one cannot assume which batch a sample is coming from, hence the need to be able to recognize any samples of 0-9 among classes 0-9.
Although the former one has received the most attention from researchers, the last one fits better to the desiderata of a Continual Learning system as expressed in Farquhar & Gal (2018) and (van de Ven & Tolias, 2019) . The single-head scenario is also notoriously harder than its multi-head counterpart (Chaudhry et al., 2018) and is the focus of the present work. Updating the parameters with data from a new dataset exposes the model to drastically deteriorate its performance on previous data, a phenomenon known as catastrophic forgetting (McCloskey & Cohen, 1989) . To alleviate this problem, researchers have proposed a variety of approaches such as storing a few samples from previous datasets (Rebuffi et al., 2017) , adding distillation regularization (Li & Hoiem, 2018) , updating the parameters according to their usefulness on previous datasets (Kirkpatrick et al., 2017) , using a generative model to produce samples from previous datasets (Kemker & Kanan, 2017) . Despite those efforts toward a more realistic setting of Continual Learning, one can notice that, most of the time, results are proposed in the case of a sequence of batches of multiple classes. This scenario often ends up with better accuracy (because the learning procedure highly benefits of the diversity of classes to find the best tuning of parameters) but it does not illustrate the behavior of those methods in the worst case scenario. In fact, Continual Learning algorithms should be robust in the size of the batch of classes. In this work, we propose to implement a method specially designed to handle the case where each task consists of only one class. It will therefore be evaluated in the single-head scenario. Our approach, named One-versus-All Invertible Neural Networks (OvA-INN), is based on an invertible neural network architecture proposed by Dinh et al. (2014) . We use it in a One-versus-All strategy : each network is trained to make a prediction of a class and the most confident one on a sample is used to identify the class of the sample. In contrast to most other methods, the training phase of each class can be independently executed from one another. The contributions of our work are : (i) a new approach for Continual Learning with one class per batch; (ii) a neural architecture based on Invertible Networks that does not require to store any of the previous data; (iii) state-of-the-art results on several tasks of Continual Learning for Computer Vision (CIFAR-100, MNIST) in this setting. We start by reviewing the closest methods to our approach in Section 2, then explain our method in Section 3, analyse its performances in Section 4 and identify limitations and possible extensions in Section 5. A limiting factor in our approach is the necessity to add a new network each time one wants to learn a new class. This makes the memory and computational cost of OvA-INN linear with the number of classes. Recent works in networks merging could alleviate the memory issue by sharing weights (Chou et al., 2018) or relying on weights superposition (Cheung et al., 2019) . This being said, we showed that Ova-INN was able to achieve superior accuracy on CIFAR-100 class-by-class training than approaches reported in the literature, while using less parameters. Another constraint of using Invertible Networks is to keep the size of the output equal to the size of the input. 
When one wants to apply a features extractor with a high number of output channels, it can have a very negative impact on the memory consumption of the invertible layers. Feature Selection or Feature Aggregation techniques may help to alleviate this issue (Tang et al., 2014). Finally, we can notice that our approach is highly dependent on the quality of the pretrained features extractor. In our CIFAR-100 experiments, we had to rescale the input to make it compatible with ResNet. Nonetheless, recent research works show promising results in training features extractors in very efficient ways (Asano et al., 2019). Because it does not require retraining its features extractor, we can foresee better performance in class-by-class learning with OvA-INN as new and more efficient features extractors are discovered. As a future research direction, one could try to incorporate our method in a Reinforcement Learning scenario where various situations can be learned separately in a first phase (each situation with its own Invertible Network). Then, during a second phase where any situation can appear without the agent being explicitly told which situation it is in, the agent could rely on previously trained Invertible Networks to improve its policy. This setting is closely related to Options in Reinforcement Learning. Also, in a regression setting, one can add a fully connected layer after an intermediate layer of an Invertible Network and use it to predict the output for the trained class. At test time, one only needs to read the output from the regression layer of the Invertible Network that had the highest confidence. In this paper, we proposed a new approach for the challenging problem of single-head Continual Learning without storing any of the previous data. On top of a fixed pretrained neural network, we trained for each class an Invertible Network to refine the extracted features and maximize the log-likelihood on samples from its class. This way, we show that we can predict the class of a sample by running each Invertible Network and identifying the one with the highest log-likelihood. This setting allows us to take full benefit of pretrained models, which results in very good performances on the class-by-class training of CIFAR-100 compared to prior works.

[Appendix fragment: parameter count of an iCaRL baseline on MNIST, using convolutional layers with 5 × 5 kernels, a fully-connected layer with 100 channels applied to an input of size 7 × 7, and a final layer of 10 channels: S_iCaRL,MNIST = 28 × 28 × 800 + (5 × 5 + 1) × 32 + (5 × 5 + 1) × 64 + (7 × 7 × 64 + 1) × 100 + (100 + 1) × 10 = 944,406]
We propose to train an Invertible Neural Network for each class to perform class-by-class Continual Learning.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:989
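The OvA-INN record above fully specifies the prediction rule: one invertible network per class, trained to push its own class's features toward the zero vector, with the smallest output norm deciding the class at test time. The sketch below is a condensed illustration, not the paper's architecture; the additive coupling layers, the squared-norm training loss, and all sizes are simplifying assumptions (the paper frames training as likelihood maximization).

```python
# Condensed sketch of the OvA-INN prediction rule described above; all design
# details are simplifying assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Invertible layer: (x1, x2) -> (x1, x2 + m(x1))."""
    def __init__(self, dim):
        super().__init__()
        self.m = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                               nn.Linear(64, dim - dim // 2))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat([x1, x2 + self.m(x1)], dim=-1)

def make_class_network(dim):
    # Two couplings with a feature flip in between so both halves get transformed.
    flip = lambda t: torch.flip(t, dims=[1])   # input is (batch, dim)
    c1, c2 = AdditiveCoupling(dim), AdditiveCoupling(dim)
    return lambda x: c2(flip(c1(x))), nn.ModuleList([c1, c2])

dim, n_classes = 64, 5
nets, params = zip(*[make_class_network(dim) for _ in range(n_classes)])

def train_step(class_id, features, lr=1e-3):
    # Push this class's (pretrained-extractor) features toward the zero vector.
    opt = torch.optim.Adam(params[class_id].parameters(), lr=lr)
    loss = nets[class_id](features).pow(2).sum(dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def predict(features):
    # Smallest output norm wins; each class network is run independently.
    norms = torch.stack([nets[c](features).norm(dim=-1) for c in range(n_classes)], dim=-1)
    return norms.argmin(dim=-1)

print(predict(torch.randn(3, dim)))   # tensor of 3 predicted class ids
```

Because each class has its own network and loss, classes can be trained in any order and one at a time, which is exactly the single-head, one-class-per-batch setting the record emphasizes.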
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Reinforcement learning (RL) with value-based methods (e.g., Q-learning) has shown success in a variety of domains such as games and recommender systems (RSs). When the action space is finite, these algorithms implicitly find a policy by learning the optimal value function, and are often very efficient. However, one major challenge of extending Q-learning to tackle continuous-action RL problems is that obtaining optimal Bellman backup requires solving a continuous action-maximization (max-Q) problem. While it is common to restrict the parameterization of the Q-function to be concave in actions to simplify the max-Q problem, such a restriction might lead to performance degradation. Alternatively, when the Q-function is parameterized with a generic feed-forward neural network (NN), the max-Q problem can be NP-hard. In this work, we propose the CAQL method, which minimizes the Bellman residual using Q-learning with one of several plug-and-play action optimizers. In particular, leveraging the strides of optimization theories in deep NN, we show that the max-Q problem can be solved optimally with mixed-integer programming (MIP)---when the Q-function has sufficient representation power, this MIP-based optimization induces better policies and is more robust than counterparts, e.g., CEM or GA, that approximate the max-Q solution. To speed up training of CAQL, we develop three techniques, namely (i) dynamic tolerance, (ii) dual filtering, and (iii) clustering. To speed up inference of CAQL, we introduce the action function that concurrently learns the optimal policy. To demonstrate the efficiency of CAQL, we compare it with state-of-the-art RL algorithms on benchmark continuous control problems that have different degrees of action constraints and show that CAQL significantly outperforms policy-based methods in heavily constrained environments. Reinforcement learning (RL) has shown success in a variety of domains such as games (Mnih et al., 2013) and recommender systems (RSs) (Gauci et al., 2018). When the action space is finite, value-based algorithms such as Q-learning (Watkins & Dayan, 1992), which implicitly find a policy by learning the optimal value function, are often very efficient because action optimization can be done by exhaustive enumeration. By contrast, in problems with continuous action spaces (e.g., robotics (Peters & Schaal, 2006)), policy-based algorithms, such as policy gradient (PG) (Sutton et al., 2000; Silver et al., 2014) or cross-entropy policy search (CEPS) (Mannor et al., 2003; Kalashnikov et al., 2018), which directly learn a return-maximizing policy, have proven more practical. Recently, methods such as ensemble critic (Fujimoto et al., 2018) and entropy regularization (Haarnoja et al., 2018) have been developed to improve the performance of policy-based RL algorithms. Policy-based approaches require a reasonable choice of policy parameterization. In some continuous control problems, Gaussian distributions over actions conditioned on some state representation are used. However, in applications such as RSs, where actions often take the form of high-dimensional item-feature vectors, policies cannot typically be modeled by common action distributions.
Furthermore, the admissible action set in RL is constrained in practice, for example, when actions must lie within a specific range for safety (Chow et al., 2018) . In RSs, the admissible actions are often random functions of the state (Boutilier et al., 2018) . In such cases, it is non-trivial to define policy parameterizations that handle such factors. On the other hand, value-based algorithms are wellsuited to these settings, providing potential advantage over policy methods. Moreover, at least with linear function approximation (Melo & Ribeiro, 2007) , under reasonable assumptions, Q-learning converges to optimality, while such optimality guarantees for non-convex policy-based methods are generally limited (Fazel et al., 2018) . Empirical results also suggest that value-based methods are more data-efficient and less sensitive to hyper-parameters (Quillen et al., 2018) . Of course, with large action spaces, exhaustive action enumeration in value-based algorithms can be expensive--one solution is to represent actions with continuous features (Dulac-Arnold et al., 2015) . The main challenge in applying value-based algorithms to continuous-action domains is selecting optimal actions (both at training and inference time). Previous work in this direction falls into three broad categories. The first solves the inner maximization of the (optimal) Bellman residual loss using global nonlinear optimizers, such as the cross-entropy method (CEM) for QT-Opt (Kalashnikov et al., 2018) , gradient ascent (GA) for actor-expert (Lim et al., 2018) , and action discretization (Uther & Veloso, 1998; Smart & Kaelbling, 2000; Lazaric et al., 2008) . However, these approaches do not guarantee optimality. The second approach restricts the Q-function parameterization so that the optimization problem is tractable. For instance, wire-fitting (Gaskett et al., 1999; III & Klopf, 1993) approximates Q-values piecewise-linearly over a discrete set of points, chosen to ensure the maximum action is one of the extreme points. The normalized advantage function (NAF) (Gu et al., 2016) constructs the state-action advantage function to be quadratic, hence analytically solvable. Parameterizing the Q-function with an input-convex neural network (Amos et al., 2017) ensures it is concave. These restricted functional forms, however, may degrade performance if the domain does not conform to the imposed structure. The third category replaces optimal Q-values with a "soft" counterpart (Haarnoja et al., 2018) : an entropy regularizer ensures that both the optimal Q-function and policy have closed-form solutions. However, the sub-optimality gap of this soft policy scales with the interval and dimensionality of the action space (Neu et al., 2017) . Motivated by the shortcomings of prior approaches, we propose Continuous Action Q-learning (CAQL), a Q-learning framework for continuous actions in which the Q-function is modeled by a generic feed-forward neural network. 1 Our contribution is three-fold. First, we develop the CAQL framework, which minimizes the Bellman residual in Q-learning using one of several "plug-andplay" action optimizers. We show that "max-Q" optimization, when the Q-function is approximated by a deep ReLU network, can be formulated as a mixed-integer program (MIP) that solves max-Q optimally. When the Q-function has sufficient representation power, MIP-based optimization induces better policies and is more robust than methods (e.g., CEM, GA) that approximate the max-Q solution. 
Second, to improve CAQL's practicality for larger-scale applications, we develop three speed-up techniques for computing max-Q values: (i) dynamic tolerance; (ii) dual filtering; and (iii) clustering. Third, we compare CAQL with several state-of-the-art RL algorithms on several benchmark problems with varying degrees of action constraints. Value-based CAQL is generally competitive, and outperforms policy-based methods in heavily constrained environments, sometimes significantly. We also study the effects of our speed-ups through ablation analysis. We proposed Continuous Action Q-learning (CAQL), a general framework for handling continuous actions in value-based RL, in which the Q-function is parameterized by a neural network. While generic nonlinear optimizers can be naturally integrated with CAQL, we illustrated how the inner maximization of Q-learning can be formulated as mixed-integer programming when the Q-function is parameterized with a ReLU network. CAQL (with action function learning) is a general Q-learning framework that includes many existing value-based methods such as QT-Opt and actor-expert. Using several benchmarks with varying degrees of action constraint, we showed that the policy learned by CAQL-MIP generally outperforms those learned by CAQL-GA and CAQL-CEM; and CAQL is competitive with several state-of-the-art policy-based RL algorithms, and often outperforms them (and is more robust) in heavily-constrained environments. Future work includes: extending CAQL to the full batch learning setting, in which the optimal Q-function is trained using only offline data; speeding up the MIP computation of the max-Q problem to make CAQL more scalable; and applying CAQL to real-world RL problems.
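A minimal sketch of the "max-Q as a MIP" idea above, assuming a tiny one-hidden-layer ReLU Q-network: each ReLU unit is encoded with the standard big-M constraints and a binary on/off variable, and the resulting mixed-integer program is solved with PuLP/CBC. The network sizes, action bounds, big-M constant, and the choice of PuLP are all illustrative assumptions; this is not the paper's exact formulation, and the dynamic-tolerance, dual-filtering, and clustering speed-ups are omitted.

```python
import numpy as np
import pulp

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, HIDDEN = 3, 2, 8
W1 = rng.normal(size=(HIDDEN, STATE_DIM + ACTION_DIM))   # hidden layer weights
b1 = rng.normal(size=HIDDEN)
w2 = rng.normal(size=HIDDEN)                              # output layer weights
b2 = 0.1

def max_q_mip(state, a_low=-1.0, a_high=1.0, big_m=100.0):
    """Return (argmax action, max Q-value) for the toy ReLU Q-network above."""
    prob = pulp.LpProblem("max_q", pulp.LpMaximize)
    a = [pulp.LpVariable(f"a{j}", lowBound=a_low, upBound=a_high)
         for j in range(ACTION_DIM)]
    z = [pulp.LpVariable(f"z{i}", lowBound=-big_m, upBound=big_m)
         for i in range(HIDDEN)]                           # pre-activations
    h = [pulp.LpVariable(f"h{i}", lowBound=0.0) for i in range(HIDDEN)]  # ReLU outputs
    d = [pulp.LpVariable(f"d{i}", cat="Binary") for i in range(HIDDEN)]  # ReLU on/off

    for i in range(HIDDEN):
        pre = float(b1[i]) + sum(float(W1[i, k]) * float(state[k])
                                 for k in range(STATE_DIM))
        pre = pre + pulp.lpSum(float(W1[i, STATE_DIM + j]) * a[j]
                               for j in range(ACTION_DIM))
        prob += z[i] == pre
        # Standard big-M encoding of h = max(z, 0), assuming big_m bounds |z|.
        prob += h[i] >= z[i]
        prob += h[i] <= z[i] + big_m * (1 - d[i])
        prob += h[i] <= big_m * d[i]

    prob += pulp.lpSum(float(w2[i]) * h[i] for i in range(HIDDEN)) + b2   # objective: Q(s, a)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [v.value() for v in a], pulp.value(prob.objective)

best_a, best_q = max_q_mip(state=[0.5, -0.2, 0.1])
print("argmax action:", best_a, " max Q:", round(best_q, 4))
```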
A general framework of value-based reinforcement learning for continuous control
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:99
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Human-computer conversation systems have attracted much attention in Natural Language Processing. Conversation systems can be roughly divided into two categories: retrieval-based and generation-based systems. Retrieval systems search a user-issued utterance (namely a query) in a large conversational repository and return a reply that best matches the query. Generative approaches synthesize new replies. Both ways have certain advantages but suffer from their own disadvantages. We propose a novel ensemble of retrieval-based and generation-based conversation system. The retrieved candidates, in addition to the original query, are fed to a reply generator via a neural network, so that the model is aware of more information. The generated reply together with the retrieved ones then participates in a re-ranking process to find the final reply to output. Experimental results show that such an ensemble system outperforms each single module by a large margin. Automatic human-computer conversation systems have long served humans in domain-specific scenarios. A typical approach for such systems is built by human engineering, for example, using manually constructed ontologies ), natural language templates ), and even predefined dialogue state tracking BID29 ).Recently , researchers have paid increasing attention to open-domain, chatbot-style human-computer conversations such as XiaoIce 1 and Duer 2 due to their important commercial values. For opendomain conversations, rules and templates would probably fail since they hardly can handle the great diversity of conversation topics and flexible representations of natural language sentences. With the increasing popularity of on-line social media and community question-answering platforms, a huge number of human-human conversation utterances are available on the public Web BID32 ; BID13 ). Previous studies begin to develop data-oriented approaches, which can be roughly categorized into two groups: retrieval systems and generative systems.When a user issues an utterance (called a query), the retrieval-based conversation systems search a corresponding utterance (called a reply) that best matches the query in a pre-constructed conversational repository BID10 ; BID11 ). Owing to the abundant web resources, the retrieval mechanism will always find a candidate reply given a query using semantic matching. The retrieved replies usually have various expressions with rich information. However, the retrieved replies are limited by the capacity of the pre-constructed repository. Even the best matched reply from the conversational repository is not guaranteed to be a good response since most cases are not tailored for the issued query.To make a reply tailored appropriately for the query, a better way is to generate a new one accordingly. With the prosperity of neural networks powered by deep learning, generation-based conversation systems are developing fast. Generation-based conversation systems can synthesize a new sentence as the reply, and thus bring the results of good flexibility and quality. A typical generationbased conversation model is seq2seq BID23 ; BID22 ; BID20 ), in which two recurrent neural networks (RNNs) are used as the encoder and the decoder. 
The encoder captures the semantics of the query with one or a few distributed and real-valued vectors (also known as embeddings); the decoder aims at decoding the query embeddings into a reply. Long short-term memory (LSTM) units BID8 and gated recurrent units (GRUs) BID3 could further enhance the RNNs to model longer sentences. (Table 1: Characteristics of retrieved and generated replies in two different conversational systems.) The advantage of generation-based conversation systems is that they can produce flexible and tailored replies. A well-known problem of the Seq2Seq-based generation systems is that they are prone to choose universal and common generations. These generated replies, such as "I don't know" and "Me too", suit many queries BID20, but they contain insufficient semantics and information. Such insufficiency leads to non-informative conversations in real applications. Previously, the retrieval-based and generation-based systems, each with their own characteristics as listed in Table 1, have been developed separately. We are seeking to absorb their merits. Hence, we propose an ensemble of retrieval-based and generation-based conversation systems. Specifically, given a query, we first apply the retrieval module to search for k candidate replies. We then propose a "multi sequence to sequence" (multi-seq2seq) model to integrate each retrieved reply into the Seq2Seq generation process so as to enrich the meaning of generated replies to respond to the query. We generate a reply via the multi-seq2seq generator based on the query and k retrieved replies. Afterwards, we construct a re-ranker to re-evaluate the retrieved replies and the newly generated reply so that more meaningful replies with abundant information would stand out. The highest ranked candidate (either retrieved or generated) is returned to the user as the final reply. To the best of our knowledge, we are the first to build a bridge between retrieval-based and generation-based modules and work out an ensemble solution for conversation systems. Experimental results show that our ensemble system consistently outperforms each single component in terms of subjective and objective metrics, and both retrieval-based and generation-based methods contribute to the overall approach. This also confirms the rationale for building model ensembles for conversation systems. Having verified that our model achieves the best performance, we are further curious how each gadget contributes to our final system. Specifically, we focus on the following research questions. In this paper, we propose a novel ensemble of retrieval-based and generation-based open-domain conversation systems. The retrieval part searches for the k best-matched candidate replies, which are, along with the original query, fed to an RNN-based multi-seq2seq reply generator. Then the generated replies and retrieved ones are re-evaluated by a re-ranker to find the final result. Although traditional generation-based and retrieval-based conversation systems are isolated, we have designed a novel mechanism to connect both modules. The proposed ensemble model clearly outperforms state-of-the-art conversation systems on the constructed large-scale conversation dataset.
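A minimal sketch of the ensemble's data flow described above (retrieve k candidates, generate a new reply conditioned on the query and the retrieved candidates, then re-rank everything), with simple stand-ins for each component. The bag-of-words retriever, the stub generator, and the similarity-based re-ranker are placeholder assumptions; the paper's system uses a neural multi-seq2seq generator and a learned re-ranker, so only the pipeline structure is the point here.

```python
from collections import Counter
import math

# Toy conversational repository of (query, reply) pairs.
REPOSITORY = [
    ("how is the weather today", "it is sunny and warm"),
    ("what are you doing now", "i am reading a book"),
    ("do you like science fiction movies", "yes, i watch them every weekend"),
]

def bow_cosine(a, b):
    """Cosine similarity between bag-of-words vectors of two utterances."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Return the replies of the k repository queries most similar to the query."""
    ranked = sorted(REPOSITORY, key=lambda qr: bow_cosine(query, qr[0]), reverse=True)
    return [reply for _, reply in ranked[:k]]

def generate(query, retrieved):
    """Stand-in for the multi-seq2seq generator, which would condition on the
    query and every retrieved reply to decode a new, tailored reply."""
    return "i am doing fine and " + retrieved[0]

def rerank(query, candidates):
    """Stand-in re-ranker: pick the candidate that best matches the query."""
    return max(candidates, key=lambda r: bow_cosine(query, r))

query = "what are you doing today"
candidates = retrieve(query, k=2)
candidates.append(generate(query, candidates))
print(rerank(query, candidates))
```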
A novel ensemble of retrieval-based and generation-based methods for open-domain conversation systems.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:990
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Deep neural networks trained on a wide range of datasets demonstrate impressive transferability. Deep features appear general in that they are applicable to many datasets and tasks. Such a property is in prevalent use in real-world applications. A neural network pretrained on large datasets, such as ImageNet, can significantly boost generalization and accelerate training if fine-tuned to a smaller target dataset. Despite its pervasiveness, little effort has been devoted to uncovering the reason for transferability in deep feature representations. This paper tries to understand transferability from the perspectives of improved generalization, improved optimization, and the feasibility of transferability. We demonstrate that 1) transferred models tend to find flatter minima, since their weight matrices stay close to the original flat region of pretrained parameters when transferred to a similar target dataset; 2) transferred representations make the loss landscape more favorable with improved Lipschitzness, which accelerates and stabilizes training substantially. The improvement is largely attributable to the fact that the principal component of the gradient is suppressed in the pretrained parameters, thus stabilizing the magnitude of the gradient in back-propagation. 3) The feasibility of transferability is related to the similarity of both inputs and labels. A surprising discovery is that feasibility is also affected by the training stage: transferability first increases during training, and then declines. We further provide a theoretical analysis to verify our observations. The last decade has witnessed the enormous success of deep neural networks in a wide range of applications. Deep learning has made unprecedented advances in many research fields, including computer vision, natural language processing, and robotics. Such great achievement is largely attributable to several desirable properties of deep neural networks. One of the most prominent properties is the transferability of deep feature representations. Transferability is basically the desirable phenomenon that deep feature representations learned from one dataset can benefit optimization and generalization on different datasets or even different tasks, e.g. from real images to synthesized images, and from image recognition to object detection (Yosinski et al., 2014). This is essentially different from traditional learning techniques and is often regarded as one of the parallels between deep neural networks and human learning mechanisms. In real-world applications, practitioners harness transferability to overcome various difficulties. Deep networks pretrained on large datasets are in prevalent use as general-purpose feature extractors for downstream tasks (Donahue et al., 2014). For small datasets, a standard practice is to fine-tune a model transferred from a large-scale dataset such as ImageNet (Russakovsky et al., 2015) to avoid over-fitting. For complicated tasks such as object detection, semantic segmentation and landmark localization, ImageNet pretrained networks accelerate the training process substantially (Oquab et al., 2014; He et al., 2018).
In the NLP field, advances in unsupervised pretrained representations have enabled remarkable improvement in downstream tasks (Vaswani et al., 2017; Devlin et al., 2019) . Despite its practical success, few efforts have been devoted to uncovering the underlying mechanism of transferability. Intuitively, deep neural networks are capable of preserving the knowledge learned on one dataset after training on another similar dataset (Yosinski et al., 2014; Li et al., 2018b; 2019) . This is even true for notably different datasets or apparently different tasks. Another line of works have observed several detailed phenomena in the transfer learning of deep networks (Kirkpatrick et al., 2016; Kornblith et al., 2019 ), yet it remains unclear why and how the transferred representations are beneficial to the generalization and optimization perspectives of deep networks. The present study addresses this important problem from several new perspectives. We first probe into how pretrained knowledge benefits generalization. Results indicate that models fine-tuned on target datasets similar to the pretrained dataset tend to stay close to the transferred parameters. In this sense, transferring from a similar dataset makes fine-tuned parameters stay in the flat region around the pretrained parameters, leading to flatter minima than training from scratch. Another key to transferability is that transferred features make the optimization landscape significantly improved with better Lipschitzness, which eases optimization. Results show that the landscapes with transferred features are smoother and more predictable, fundamentally stabilizing and accelerating training especially at the early stages of training. This is further enhanced by the proper scaling of gradient in back-propagation. The principal component of gradient is suppressed in the transferred weight matrices, controlling the magnitude of gradient and smoothing the loss landscapes. We also investigate a common concern raised by practitioners: when is transfer learning helpful to target tasks? We test the transferability of pretrained networks with varying inputs and labels. Instead of the similarity between pretrained and target inputs, what really matters is the similarity between the pretrained and target tasks, i.e. both inputs and labels are required to be sufficiently similar. We also investigate the relationship between pretraining epoch and transferability. Surprisingly, although accuracy on the pretrained dataset increases throughout training, transferability first increases at the beginning and then decreases significantly as pretraining proceeds. Finally, this paper gives a theoretical analysis based on two-layer fully connected networks. Theoretical results consistently justify our empirical discoveries. The analysis here also casts light on deeper networks. We believe the mechanism of transferability is the fundamental property of deep neural networks and the in-depth understanding presented here may stimulate further algorithmic advances. Why are deep representations pretrained from modern neural networks generally transferable to novel tasks? When is transfer learning feasible enough to consistently improve the target task performance? These are the key questions in the way of understanding modern neural networks and applying them to a variety of real tasks. This paper performs the first in-depth analysis of the transferability of deep representations from both empirical and theoretical perspectives. 
The results reveal that pretrained representations improve both the generalization and optimization performance of a target network, provided that the pretrained and target datasets are sufficiently similar in both inputs and labels. With this paper, we show that transfer learning, as an initialization technique for neural networks, exerts an implicit regularization that restricts the networks from escaping the flat region of the pretrained landscape.
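A small PyTorch sketch of two measurements suggested by the claims above: the distance of fine-tuned parameters from the pretrained initialization, and a crude flatness proxy based on the loss increase under random weight perturbations. The toy model, random data, perturbation scale, and number of trials are arbitrary assumptions; this is not the paper's experimental protocol.

```python
import copy
import torch
import torch.nn as nn

def param_distance(model, pretrained_state):
    """L2 distance between the model's current parameters and a saved state_dict."""
    sq = 0.0
    for name, p in model.named_parameters():
        sq += (p.detach() - pretrained_state[name]).pow(2).sum().item()
    return sq ** 0.5

@torch.no_grad()
def flatness(model, loss_fn, x, y, sigma=0.01, n_trials=10):
    """Average loss increase when every weight is jittered by N(0, sigma^2)."""
    base = loss_fn(model(x), y).item()
    increases = []
    for _ in range(n_trials):
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            p.add_(torch.randn_like(p) * sigma)
        increases.append(loss_fn(noisy(x), y).item() - base)
    return sum(increases) / n_trials

# Toy usage with a placeholder network and random data.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
pretrained = {k: v.clone() for k, v in model.state_dict().items()}
x, y = torch.randn(64, 16), torch.randint(0, 4, (64,))
# ... fine-tune `model` on the target task here ...
print("distance from initialization:", param_distance(model, pretrained))
print("flatness proxy (loss increase):", flatness(model, nn.CrossEntropyLoss(), x, y))
```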
Understanding transferability from the perspectives of improved generalization, improved optimization, and the feasibility of transferability.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:991
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We address the following question: How redundant is the parameterisation of ReLU networks? Specifically, we consider transformations of the weight space which leave the function implemented by the network intact. Two such transformations are known for feed-forward architectures: permutation of neurons within a layer, and positive scaling of all incoming weights of a neuron coupled with inverse scaling of its outgoing weights. In this work, we show for architectures with non-increasing widths that permutation and scaling are in fact the only function-preserving weight transformations. For any eligible architecture we give an explicit construction of a neural network such that any other network that implements the same function can be obtained from the original one by the application of permutations and rescaling. The proof relies on a geometric understanding of boundaries between linear regions of ReLU networks, and we hope the developed mathematical tools are of independent interest. Ever since its early successes, deep learning has been a puzzle for machine learning theorists. Multiple aspects of deep learning seem at first sight to contradict common sense: single-hidden-layer networks suffice to approximate any continuous function (Cybenko, 1989; Hornik et al., 1989), yet in practice deeper is better; the loss surface is highly non-convex, yet it can be minimised by first-order methods; the capacity of the model class is immense, yet deep networks tend not to overfit (Zhang et al., 2017). Recent investigations into these and other questions have emphasised the role of overparameterisation, or highly redundant function representation. It is now known that overparameterised networks enjoy both easier training (Allen-Zhu et al., 2019; Du et al., 2019; Frankle & Carbin, 2019), and better generalisation (Belkin et al., 2019; Neyshabur et al., 2019; Novak et al., 2018). However, the specific mechanism by which over-parameterisation operates is still largely a mystery. In this work, we study one particular aspect of over-parameterisation, namely the ability of neural networks to represent a target function in many different ways. In other words, we ask whether many different parameter configurations can give rise to the same function. Such a notion of parameterisation redundancy has so far remained unexplored, despite its potential connections to the structure of the loss landscape, as well as to the literature on neural network capacity in general. Specifically, we consider feed-forward ReLU networks, with weight matrices W_1, ..., W_L and biases b_1, ..., b_L. We study parameter transformations which preserve the output behaviour of the network h(z) = W_L σ(W_{L-1} σ(... σ(W_1 z + b_1) ...) + b_{L-1}) + b_L for all inputs z in some domain Z. Two such transformations are known for feed-forward ReLU architectures: 1. Permutation of units (neurons) within a layer, i.e. for some permutation matrix P applied to the units of layer l, (W_l, b_l) → (P W_l, P b_l) and W_{l+1} → W_{l+1} P^T. 2. Positive scaling of all incoming weights of a unit coupled with inverse scaling of its outgoing weights.
Applied to a whole layer, with potentially different scaling factors arranged into a positive diagonal matrix M, this can be written as (W_l, b_l) → (M W_l, M b_l) and W_{l+1} → W_{l+1} M^{-1}. Our main theorem applies to architectures with non-increasing widths, and shows that there are no other function-preserving parameter transformations besides permutation and scaling. Stated formally: Theorem 1. Consider a bounded open nonempty domain Z ⊆ R^{d_0} and any architecture (d_0, d_1, ..., d_L) with non-increasing widths d_0 ≥ d_1 ≥ ... ≥ d_{L-1} and output dimension d_L = 1. For this architecture, there exists a ReLU network h_θ : Z → R, or equivalently a setting of the weights θ = (W_1, b_1, ..., W_L, b_L), such that for any 'general' ReLU network h_η : Z → R (with the same architecture) satisfying h_θ(z) = h_η(z) for all z ∈ Z, there exist permutation matrices P_1, ..., P_{L-1} and positive diagonal matrices M_1, ..., M_{L-1} such that the parameters η = (W'_1, b'_1, ..., W'_L, b'_L) of h_η are obtained from θ by applying exactly these permutations and rescalings layer by layer, i.e. W'_l = M_l P_l W_l P_{l-1}^T M_{l-1}^{-1} and b'_l = M_l P_l b_l (with P_0, M_0, P_L, M_L taken to be identity matrices). In the above, 'general' networks are a class of networks meant to exclude degenerate cases. We give a more precise definition in Section 3; for now it suffices to note that almost all networks are general. The proof of the result relies on a geometric understanding of prediction surfaces of ReLU networks. These surfaces are piece-wise linear functions, with non-differentiabilities or 'folds' between linear regions. It turns out that folds carry a lot of information about the parameters of a network, so much so, in fact, that some networks are uniquely identified (up to permutation and scaling) by the function they implement. This is the main insight of the theorem. In the following sections, we introduce in more detail the concept of a fold-set, and describe its geometric structure for a subclass of ReLU networks. The paper culminates in a proof sketch of the main result. The full proof, including proofs of intermediate results, is included in the Appendix.
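A short NumPy check, on a small randomly initialized network, that the two transformations described above are indeed function-preserving: permuting the units of the first hidden layer and rescaling them by positive factors (with the inverse adjustment applied to the outgoing weights) leave the outputs unchanged up to floating-point error. The layer sizes are arbitrary; this only illustrates the transformations themselves, not the identifiability theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

def forward(z, params):
    """z has shape (d0, n_samples); returns shape (1, n_samples)."""
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = relu(W1 @ z + b1[:, None])
    h2 = relu(W2 @ h1 + b2[:, None])
    return W3 @ h2 + b3[:, None]

d0, d1, d2 = 5, 4, 3
params = [
    (rng.normal(size=(d1, d0)), rng.normal(size=d1)),
    (rng.normal(size=(d2, d1)), rng.normal(size=d2)),
    (rng.normal(size=(1, d2)), rng.normal(size=1)),
]

# Permutation of layer-1 units: W1 -> P W1, b1 -> P b1, W2 -> W2 P^T.
P = np.eye(d1)[rng.permutation(d1)]
# Positive scaling of layer-1 units: W1 -> M W1, b1 -> M b1, W2 -> W2 M^{-1}.
M = np.diag(rng.uniform(0.5, 2.0, size=d1))

(W1, b1), (W2, b2), (W3, b3) = params
transformed = [
    (M @ P @ W1, M @ P @ b1),
    (W2 @ P.T @ np.linalg.inv(M), b2),
    (W3, b3),
]

z = rng.normal(size=(d0, 10))                 # a batch of random inputs
out_a = forward(z, params)
out_b = forward(z, transformed)
print("max |difference|:", np.abs(out_a - out_b).max())   # ~1e-15
```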
We also hypothesise that our analysis could be extended to convolutional and recurrent networks, and to other piece-wise linear activation functions such as leaky ReLU. Definition A.1 (Partition). Let S ⊆ Z. We define the partition of Z induced by S, denoted P Z (S), as the set of connected components of Z \ S. Definition A.2 (Piece-wise hyperplane). Let P be a partition of Z. We say H ⊆ Z is a piece-wise hyperplane with respect to partition P, if H = ∅ and there exist (w, b) = (0, 0) and P ∈ P such that H = {z ∈ P | w z + b = 0}. Definition A.3 (Piece-wise linear surface / pwl. surface). A set S ⊆ Z is called a piece-wise linear surface on Z of order κ if it can be written as , and no number smaller than κ admits such a representation. Lemma A.1. If S 1 , S 2 are piece-wise linear surfaces on Z of order k 1 and k 2 , then S 1 ∪ S 2 is a piece-wise linear surface on Z of order at most max {k 1 , k 2 }. We can write H Given sets Z and S ⊆ Z, we introduce the notation (The dependence on Z is suppressed.) By Lemma A.1, i S is itself a pwl. surface on Z of order at most i. Lemma A.2. For i ≤ j and any set S, we have i j S = j i S = i S. Proof. We will need these definitions: j S = {S ⊆ S | S is a pwl. surface of order at most j}, i j S = {S ⊆ j S | S is a pwl. surface of order at most i}, surface of order at most j}. Consider first the equality j i S = i S. We know that j i S ⊆ i S because the square operator always yields a subset. At the same time, i S ⊆ j i S, because i S satisfies the condition for membership in (6). To prove the equality i j S = i S, we use the inclusion j S ⊆ S to deduce i j S ⊆ i S. Now let S ⊆ S be one of the sets under the union in (3), i.e. it is a pwl. surface of order at most i. Then it is also a pwl. surface of order at most j, implying S ⊆ j S. This means S is also one of the sets under the union in (5), proving that i S ⊆ i j S. Lemma A.3. Let Z and S ⊆ Z be sets. Then one can write k+1 S = k S ∪ i H i where H i are piece-wise hyperplanes wrt. P Z ( k S). Proof. At the same time, is a pwl. surface of order at most k + 1 because k S is a pwl. surface of order at most k and H k+1 i can be decomposed into piece-wise hyperplanes wrt. Definition A.4 (Canonical representation of a pwl. surface). Let S be a pwl. surface on Z. The pwl. is a pwl. surface in canonical form, then κ is the order of S. Proof. Denote the order of S by λ. By the definition of order, λ ≤ κ, and S = λ S. Then, since It follows that κ = λ. Lemma A.5. Every pwl. surface has a canonical representation. Proof. The inclusion l∈[k],i∈[n l ] H l i ⊆ k S holds for any representation. We will show the other inclusion by induction in the order of S. If S is order one, 1 S ⊆ S = i∈[n1] H 1 i holds for any representation and we are done. Now assume the lemma holds up to order κ − 1, and let S be order κ. Then by Lemma A.3, S = κ S = κ−1 S ∪ i H κ i , where H κ i are piece-wise hyperplanes wrt. P Z ( κ−1 S). By the inductive assumption, κ−1 S has a canonical representation, Proof. Let k ∈ [κ]. Because both representations are canonical, we have where H k i and G k j are piece-wise hyperplanes wrt. where on both sides above we have a union of hyperplanes on an open set. The claim follows. Definition A.5 (Dependency graph of a pwl. surface). Let S be a piece-wise linear surface on Z, and let S = l∈[κ],i∈[n l ] H l i be its canonical representation. We define the dependency graph of S as the directed graph that has the piece-wise hyperplanes H l i l,i as vertices, and has an edge
We prove that there exist ReLU networks whose parameters are almost uniquely determined by the function they implement.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:992
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: A general problem that received considerable recent attention is how to perform multiple tasks in the same network, maximizing both efficiency and prediction accuracy. A popular approach consists of a multi-branch architecture on top of a shared backbone, jointly trained on a weighted sum of losses. However, in many cases, the shared representation results in non-optimal performance, mainly due to an interference between conflicting gradients of uncorrelated tasks. Recent approaches address this problem by a channel-wise modulation of the feature-maps along the shared backbone, with task specific vectors, manually or dynamically tuned. Taking this approach a step further, we propose a novel architecture which modulate the recognition network channel-wise, as well as spatial-wise, with an efficient top-down image-dependent computation scheme. Our architecture uses no task-specific branches, nor task specific modules. Instead, it uses a top-down modulation network that is shared between all of the tasks. We show the effectiveness of our scheme by achieving on par or better results than alternative approaches on both correlated and uncorrelated sets of tasks. We also demonstrate our advantages in terms of model size, the addition of novel tasks and interpretability. Code will be released. The goal of multi-task learning is to improve the learning efficiency and increase the prediction accuracy of multiple tasks learned and performed together in a shared network. Over the years, several types of architectures have been proposed to combine multiple tasks training and evaluation. Most current schemes assume task-specific branches, on top of a shared backbone (Figure 1a) and use a weighted sum of tasks losses, fixed or dynamically tuned, to train them (Chen et al., 2017; Kendall et al., 2018; Sener & Koltun, 2018) . Having a shared representation is more efficient from the standpoint of memory and sample complexity and can also be beneficial in cases where the tasks are correlated to each other (Maninis et al., 2019) . However, in many other cases, the shared representation can also result in worse performance due to the limited capacity of the shared backbone and interference between conflicting gradients of uncorrelated tasks (Zhao et al., 2018) . The performance of the multi-branch architecture is highly dependent on the relative losses weights and the task correlations, and cannot be easily determined without a "trial and error" phase search (Kendall et al., 2018) . Another type of architecture (Maninis et al., 2019 ) that has been recently proposed uses task specific modules, integrated along a feed-forward backbone and producing task-specific vectors to modulate the feature-maps along it (Figure 1b) . Here, both training and evaluation use a single tasking paradigm: executing one task at a time, rather than getting all the task responses in a single forward pass of the network. A possible disadvantage of using task-specific modules and of using a fixed number of branches, is that it may become difficult to add additional tasks at a later time during the system life-time. Modulation-based architectures have been also proposed by Strezoski et al. (2019) and Zhao et al. (2018) (Figure 1c ). 
However, all of these works modulate the recognition network channel-wise, using the same modulation vector for all the spatial dimensions of the feature-maps. We propose a new type of architecture with no branching, which performs a single task at a time but with no task-specific modules (Figure 1d). The core component of our approach is a top-down (TD) modulation network, which carries the task information in combination with the image information, obtained from a first bottom-up (BU1) network, and modulates a second bottom-up (BU2) network common to all the tasks. (Figure 1: (a) Multi-branched architecture: task-specific branches on top of a shared backbone induce capacity and destructive-interference problems and force careful tuning. Recently proposed architectures: (b) using task-specific modules and (c) using channel-wise modulation modules. (d) Our architecture: a top-down image-aware full-tensor modulation network with no task-specific modules.) In our approach, the modulation is channel-wise as well as spatial-wise (a full tensor modulation), calculated sequentially along the TD stream. This allows us, for example, to modulate only specific spatial locations of the image depending on the current task, and to gain interpretability by visualizing the activations in the lowest feature-map of the TD stream. In contrast to previous works, our modulation mechanism is also "image-aware" in the sense that information from the image, extracted by the BU1 stream, is accumulated by the TD stream, and affects the modulation process. The main differences between our approach and previous approaches are the following: First, as mentioned, our approach does not use multiple branches or task-specific modules. We can scale the number of tasks with no additional layers. Second, our modulation scheme includes a spatial component, which allows attention to specific locations in the image, as illustrated in figure 2a for the Multi-MNIST tasks (Sabour et al., 2017). Third, the modulation in our scheme is also image dependent and can modulate regions of the image based on their content rather than location (relevant examples are demonstrated in figures 2b and 2c). We empirically evaluated the proposed approach on three different datasets. First, we demonstrated on-par accuracy with the single-task baseline on an uncorrelated set of tasks with MultiMNIST while using fewer parameters. Second, we examined the case of correlated tasks and outperformed all baselines on the CLEVR (Johnson et al., 2017) dataset. Third, we scaled the number of tasks and demonstrated our inherent attention mechanism on the CUB200 (Welinder et al., 2010) dataset. The choice of datasets includes cases where the tasks are uncorrelated (Multi-MNIST) and cases where the tasks are relatively correlated (CLEVR and CUB200). The results demonstrate that our proposed scheme handles both cases successfully and shows distinct advantages over the channel-wise modulation approach.
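A minimal PyTorch sketch contrasting channel-wise modulation with the full-tensor (channel- and spatial-wise) modulation described above: a small top-down head produces a modulation tensor with the same shape as the bottom-up feature map from the task embedding and the image features, so specific spatial locations can be emphasized or suppressed per task and per image. The tiny convolutional head, the sigmoid gating, and the tensor sizes are illustrative assumptions and not the paper's BU1/TD/BU2 architecture.

```python
import torch
import torch.nn as nn

class FullTensorModulation(nn.Module):
    def __init__(self, channels, task_dim):
        super().__init__()
        # Top-down head: maps task embedding + bottom-up features to a modulation tensor.
        self.td = nn.Sequential(
            nn.Conv2d(channels + task_dim, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, bu_feat, task_emb):
        # bu_feat: (N, C, H, W); task_emb: (N, task_dim)
        n, _, h, w = bu_feat.shape
        task_map = task_emb[:, :, None, None].expand(n, task_emb.shape[1], h, w)
        mod = self.td(torch.cat([bu_feat, task_map], dim=1))   # (N, C, H, W)
        return bu_feat * mod            # element-wise: channel- AND spatial-wise

def channel_wise(bu_feat, channel_gates):
    """Channel-wise-only baseline: one scalar gate per channel, broadcast spatially."""
    return bu_feat * channel_gates[:, :, None, None]           # (N, C, 1, 1) broadcast

feat = torch.randn(2, 16, 8, 8)
task = torch.randn(2, 4)
print(FullTensorModulation(16, 4)(feat, task).shape)   # torch.Size([2, 16, 8, 8])
print(channel_wise(feat, torch.rand(2, 16)).shape)     # torch.Size([2, 16, 8, 8])
```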
We propose a top-down modulation network for multi-task learning applications with several advantages over current schemes.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:993
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Clustering algorithms have wide applications and play an important role in data analysis fields including time series data analysis. The performance of a clustering algorithm depends on the features extracted from the data. However, in time series analysis, there has been a problem that the conventional methods based on the signal shape are unstable for phase shift, amplitude and signal length variations. In this paper, we propose a new clustering algorithm focused on the dynamical system aspect of the signal using recurrent neural network and variational Bayes method. Our experiments show that our proposed algorithm has a robustness against above variations and boost the classification performance. The rapid progress of IoT technology has brought huge data in wide fields such as traffic, industries, medical research and so on. Most of these data are gathered continuously and accumulated as time series data, and the extraction of features from a time series have been studied intensively in recent years. The difficulty of time series analysis is the variation of the signal in time which gives rise to phase shift, compress/stretch and length variation. Many methods have been proposed to solve these problems. Dynamic Time Warping (DTW) was designed to measure the distance between warping signals (Rabiner & Juang, 1993) . This method solved the compress/stretch problem by applying a dynamic planning method. Fourier transfer or wavelet transfer can extract the features based on the frequency components of signals. The phase shift independent features are obtained by calculating the power spectrum of the transform result. In recent years, the recurrent neural network (RNN), which has recursive neural network structure, has been widely used in time series analysis (Elman, 1990; 1991) . This recursive network structure makes it possible to retain the past information of time series. Furthermore, this architecture enables us to apply this algorithm to signals with different lengths. Although the methods mentioned above are effective solutions for the compress/stretch, phase shift and signal length variation issues respectively, little has been studied about these problems comprehensively. Let us turn our attention to feature extraction again. Unsupervised learning using a neural network architecture autoencoder (AE) has been studied as a feature extraction method (Hinton & Salakhutdinov, 2006; Vincent et al., 2008; Rifai et al., 2011) . AE using RNN structure (RNN-AE) has also been proposed (Srivastava et al., 2015) and it has been applied to real data such as driving data (Dong et al., 2017) and others. RNN-AE can be also interpreted as the discrete dynamical system: chaotic behavior and the deterrent method have been studied from this point of view (Zerroug et al., 2013; Laurent & von Brecht, 2016) . In this paper, we propose a new clustering algorithm for feature extraction focused on the dynamical system aspect of RNN-AE. In order to achieve this, we employed a multi-decoder autoencoder with multiple decoders to describe different dynamical systems. We also applied the variational Bayes method (Attias, 1999; Ghahramani & Beal, 2001; Kaji & Watanabe, 2011) as the clustering algorithm. 
This paper is organized as follows: in Section 4, we explain AE from a dynamical system view, then we define our model and, from this, derive its learning algorithm. In Section 5, we describe the application of our algorithm to actual time series to show its robustness, including running two experiments using periodic data and driving data. Finally, we summarize our study and describe our future work in Section 7. We verified the feature extraction performance of the MDRA using actual time series data. In Section 5.1, we saw that the periodic signals are completely classified by frequency using the clustering weights r_n. In this experiment, the elements of the averaged clustering weight vector are (3.31e-01, 8.31e-47, 8.31e-47, 3.46e-01, 8.31e-47, 3.19e-01, 8.31e-47), with only three components having effective weights. This weight narrowing-down is one of the advantages of VB learning. The left of Fig. 9 shows an enlarged view of the region around "freq 4" in Fig. 7 (right). We found that the distribution of "freq 4" is in fact spread linearly. The right of Fig. 9 shows the corresponding visualization result (Hinton, 2008). We found that the data of each frequency formed several spread-out clusters without overlapping. As we saw earlier, the distribution of r_n is more spread out than that of h_n. We inferred that the spread of the r_n distribution was caused by extracting the diversity of the driving scenes. In addition, the identification result shows that the combination of the features given by r_n and h_n can improve the performance. Dong et al. (2017), who studied a driver identification algorithm using the AE, proposed minimizing an error that integrates the reconstruction error of the AE and the classification error of a deep neural network. This algorithm can avoid over-fitting by using unlabeled data, whose collection cost is smaller than that of labeled data. From these results, we can expect that the MDRA can contribute not only to boosting identification performance but also to restraining over-fitting. In this paper, we proposed a new algorithm, MDRA, that can extract dynamical system features of time series data. We conducted experiments using periodic signals and actual driving data to verify the advantages of MDRA. The results show that our algorithm not only is robust to phase shift, amplitude and signal length variations, but also can boost classification performance.
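A simplified PyTorch sketch of a multi-decoder recurrent autoencoder in the spirit of the description above: one GRU encoder, K GRU decoders (one per candidate dynamical system), and soft assignment weights over decoders. The softmax over negative reconstruction errors is a stand-in assumption for the variational Bayes update that the paper actually derives, so this only illustrates the architecture and the roles of r_n and h_n, not the exact learning algorithm.

```python
import torch
import torch.nn as nn

class MultiDecoderRAE(nn.Module):
    def __init__(self, input_dim=1, hidden_dim=32, n_decoders=3):
        super().__init__()
        self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.decoders = nn.ModuleList(
            [nn.GRU(input_dim, hidden_dim, batch_first=True) for _ in range(n_decoders)]
        )
        self.readout = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        # x: (batch, time, input_dim); sequences are assumed padded to equal length.
        _, h = self.encoder(x)                        # h: (1, batch, hidden)
        errors = []
        for dec in self.decoders:
            out, _ = dec(torch.zeros_like(x), h)      # decode from the shared code h
            recon = self.readout(out)
            errors.append(((recon - x) ** 2).mean(dim=(1, 2)))   # (batch,)
        err = torch.stack(errors, dim=1)              # (batch, K)
        r = torch.softmax(-err, dim=1)                # soft assignment weights r_nk
        loss = (r * err).sum(dim=1).mean()            # responsibility-weighted error
        return loss, r, h.squeeze(0)                  # r_n and h_n are usable as features

model = MultiDecoderRAE()
x = torch.sin(torch.linspace(0, 6.28, 50)).reshape(1, 50, 1).repeat(4, 1, 1)
loss, r, features = model(x)
print(loss.item(), r.shape, features.shape)           # scalar, (4, 3), (4, 32)
```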
Novel time series data clustering algorithm based on dynamical system features.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:994
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Two important topics in deep learning both involve incorporating humans into the modeling process: Model priors transfer information from humans to a model by regularizing the model's parameters; Model attributions transfer information from a model to humans by explaining the model's behavior. Previous work has taken important steps to connect these topics through various forms of gradient regularization. We find, however, that existing methods that use attributions to align a model's behavior with human intuition are ineffective. We develop an efficient and theoretically grounded feature attribution method, expected gradients, and a novel framework, attribution priors, to enforce prior expectations about a model's behavior during training. We demonstrate that attribution priors are broadly applicable by instantiating them on three different types of data: image data, gene expression data, and health care data. Our experiments show that models trained with attribution priors are more intuitive and achieve better generalization performance than both equivalent baselines and existing methods to regularize model behavior. Recent work on interpreting machine learning models has focused on feature attribution methods. Given an input feature, a model, and a prediction on a particular sample, such methods assign a number to the input feature that represents how important the input feature was for making the prediction. Previous literature about such methods has focused on the axioms they should satisfy (Lundberg and Lee, 2017; Sundararajan et al., 2017; Štrumbelj and Kononenko, 2014; Datta et al., 2016) , and how attribution methods can give us insight into model behavior (Lundberg et al., 2018a; Sayres et al., 2019; Zech et al., 2018) . These methods can be an effective way of revealing problems in a model or a dataset. For example, a model may place too much importance on undesirable features, rely on many features when sparsity is desired, or be sensitive to high frequency noise. In such cases, we often have a prior belief about how a model should treat input features, but for neural networks it can be difficult to mathematically encode this prior in terms of the original model parameters. Ross et al. (2017b) introduce the idea of regularizing explanations to train models that better agree with domain knowledge. Given a binary variable indicating whether each feature should or should not be important for predicting on each sample in the dataset, their method penalizes the gradients of unimportant features. However, two drawbacks limit the method's applicability to real-world problems. First, gradients don't satisfy the theoretical guarantees that modern feature attribution methods do (Sundararajan et al., 2017) . Second, it is often difficult to specify which features should be important in a binary manner. More recent work has stressed that incorporating intuitive, human priors will be necessary for developing robust and interpretable models (Ilyas et al., 2019) . Still, it remains challenging to encode meaningful, human priors like "have smoother attribution maps" or "treat this group of features similarly" by penalizing the gradients or parameters of a model. 
In this work, we propose an expanded framework for encoding abstract priors, called attribution priors, in which we directly regularize differentiable functions of a model's axiomatic feature attributions during training. This framework, which can be seen as a generalization of gradient-based regularization (LeCun et al., 2010; Ross et al., 2017b; Yu et al., 2018; Jakubovitz and Giryes, 2018; Roth et al., 2018) , can be used to encode meaningful domain knowledge more effectively than existing methods. Furthermore, we introduce a novel feature attribution method -expected gradientswhich extends integrated gradients (Sundararajan et al., 2017) , is naturally suited to being regularized under an attribution prior, and avoids hyperparameter choices required by previous methods. Using attribution priors, we build improved deep models for three different prediction tasks. On images, we use our framework to train a deep model that is more interpretable and generalizes better to noisy data by encouraging the model to have piecewise smooth attribution maps over pixels. On gene expression data, we show how to both reduce prediction error and better capture biological signal by encouraging similarity among gene expression features using a graph prior. Finally, on a patient mortality prediction task, we develop a sparser model and improve performance when learning from limited training data by encouraging a skewed distribution of the feature attributions. The immense popularity of deep learning has driven its application in many domains with diverse, complicated prior knowledge. While it is in principle possible to hand-design network architectures to encode this knowledge, we propose a simpler approach. Using attribution priors, any knowledge that can be encoded as a differentiable function of feature attributions can be used to encourage a model to act in a particular way in a particular domain. We also introduce expected gradients, a feature attribution method that is theoretically justified and removes the choice of a single reference value that many existing feature attribution methods require. We further demonstrate that expected gradients naturally integrates with attribution priors via sampling during SGD. The combination allows us to improve model performance by encoding prior knowledge across several different domains. It leads to smoother and more interpretable image models, biological predictive models that incorporate graph-based prior knowledge, and sparser health care models that can perform better in data-scarce scenarios. Attribution priors provide a broadly applicable framework for encoding domain knowledge, and we believe they will be valuable across a wide array of domains in the future. Normally, training with a penalty on any function of the gradients would require solving a differential equation. To avoid this, we adopt a double back-propagation scheme in which the gradients are first calculated with respect to the training loss, and alternately calculated with the loss with respect to the attributions (Yu et al., 2018; Drucker and Le Cun, 1992) . Our attribution method, expected gradients, requires background reference samples to be drawn from the training data. More specifically, for each input in a batch of inputs, we need k additional inputs to calculate expected gradients for that input batch. As long as k is smaller than the batch size, we can avoid any additional data reading by re-using the same batch of input data as a reference batch, as in Zhang et al. (2017) . 
We accomplish this by shifting the input batch k times, such that each input in the batch uses k other inputs from the batch as its reference values.
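A compact PyTorch sketch of the expected-gradients attribution described above: for each input, sample reference points from the data and interpolation coefficients alpha from U(0, 1), and average (x - x') times the gradient of the model at x' + alpha(x - x'). The toy model, the background set, and the number of samples k are placeholder assumptions; in training, the shifted-minibatch trick described above supplies the references, and an attribution prior would add a differentiable penalty on the returned attributions to the loss.

```python
import torch
import torch.nn as nn

def expected_gradients(model, x, references, k=16):
    """x: (N, D); references: (M, D) drawn from the training data."""
    total = torch.zeros_like(x)
    for _ in range(k):
        idx = torch.randint(0, references.shape[0], (x.shape[0],))
        ref = references[idx]                            # one reference per sample
        alpha = torch.rand(x.shape[0], 1)
        point = (ref + alpha * (x - ref)).requires_grad_(True)
        out = model(point).sum()
        grad = torch.autograd.grad(out, point)[0]
        total += (x - ref) * grad
    return total / k                                     # (N, D) attributions

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.randn(4, 8)
background = torch.randn(128, 8)                         # stand-in for training data
attr = expected_gradients(model, x, background)
print(attr.shape)                                        # torch.Size([4, 8])
```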
A method for encouraging axiomatic feature attributions of a deep model to match human intuition.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:995
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Recurrent neural networks (RNNs) have shown excellent performance in processing sequence data. However, they are both complex and memory intensive due to their recursive nature. These limitations make RNNs difficult to embed on mobile devices requiring real-time processes with limited hardware resources. To address the above issues, we introduce a method that can learn binary and ternary weights during the training phase to facilitate hardware implementations of RNNs. As a result, using this approach replaces all multiply-accumulate operations by simple accumulations, bringing significant benefits to custom hardware in terms of silicon area and power consumption. On the software side, we evaluate the performance (in terms of accuracy) of our method using long short-term memories (LSTMs) and gated recurrent units (GRUs) on various sequential models including sequence classification and language modeling. We demonstrate that our method achieves competitive results on the aforementioned tasks while using binary/ternary weights during the runtime. On the hardware side, we present custom hardware for accelerating the recurrent computations of LSTMs with binary/ternary weights. Ultimately, we show that LSTMs with binary/ternary weights can achieve up to 12x memory saving and 10x inference speedup compared to the full-precision hardware implementation design. Convolutional neural networks (CNNs) have surpassed human-level accuracy in various complex tasks by obtaining a hierarchical representation with increasing levels of abstraction BID3 ; BID31 ). As a result, they have been adopted in many applications for learning hierarchical representation of spatial data. CNNs are constructed by stacking multiple convolutional layers often followed by fully-connected layers BID30 ). While the vast majority of network parameters (i.e. weights) are usually found in fully-connected layers, the computational complexity of CNNs is dominated by the multiply-accumulate operations required by convolutional layers BID46 ). Recurrent neural networks (RNNs), on the other hand, have shown remarkable success in modeling temporal data BID36 ; BID12 ; BID6 ; ; ). Similar to CNNs, RNNs are typically over-parameterized since they build on high-dimensional input/output/state vectors and suffer from high computational complexity due to their recursive nature BID45 ; BID14 ). As a result, the aforementioned limitations make the deployment of CNNs and RNNs difficult on mobile devices that require real-time inference processes with limited hardware resources.Several techniques have been introduced in literature to address the above issues. In BID40 ; BID22 ; BID29 ; BID42 ), it was shown that the weight matrix can be approximated using a lower rank matrix. In BID34 ; BID14 ; ; BID1 ), it was shown that a significant number of parameters in DNNs are noncontributory and can be pruned without any performance degradation in the final accuracy performance. Finally, quantization approaches were introduced in ; BID33 ; BID9 ; BID26 ; BID20 ; BID39 ; BID19 ; BID49 ; ; ) to reduce the required bitwidth of weights/activations. 
In this way, power-hungry multiply-accumulate operations are replaced by simple accumulations while also reducing the number of memory accesses to the off-chip memory.Considering the improvement factor of each of the above approaches in terms of energy and power reductions, quantization has proven to be the most beneficial for hardware implementations. However, all of the aforementioned quantization approaches focused on optimizing CNNs or fully-connected networks only. As a result, despite the remarkable success of RNNs in processing sequential data, RNNs have received the least attention for hardware implementations, when compared to CNNs and fully-connected networks. In fact, the recursive nature of RNNs makes their quantization difficult. In BID18 ), for example, it was shown that the well-known BinaryConnect technique fails to binarize the parameters of RNNs due to the exploding gradient problem ). As a result, a binarized RNN was introduced in BID18 ), with promising results on simple tasks and datasets. However it does not generalize well on tasks requiring large inputs/outputs BID45 ). In BID45 ; BID20 ), multi-bit quantized RNNs were introduced. These works managed to match their accuracy performance with their full-precision counterparts while using up to 4 bits for data representations.In this paper, we propose a method that learns recurrent binary and ternary weights in RNNs during the training phase and eliminates the need for full-precision multiplications during the inference time. In this way, all weights are constrained to {+1, −1} or {+1, 0, −1} in binary or ternary representations, respectively. Using the proposed approach, RNNs with binary and ternary weights can achieve the performance accuracy of their full-precision counterparts. In summary, this paper makes the following contributions:• We introduce a method for learning recurrent binary and ternary weights during both forward and backward propagation phases, reducing both the computation time and memory footprint required to store the extracted weights during the inference.• We perform a set of experiments on various sequential tasks, such as sequence classification, language modeling, and reading comprehension. We then demonstrate that our binary/ternary models can achieve near state-of-the-art results with greatly reduced computational complexity. In this section, we evaluate the performance of the proposed LSTMs with binary/ternary weights on different temporal tasks to show the generality of our method. We defer hyperparameters and tasks details for each dataset to Appendix C due to the limited space. As discussed in Section 4, the training models ignoring the quantization loss fail to quantize the weights in LSTM while they perform well on CNNs and fully-connected networks. To address this problem, we proposed the use of batch normalization during the quantization process. To justify the importance of such a decision, we have performed different experiments over a wide range of temporal tasks and compared the accuracy performance of our binarization/ternarization method with binaryconnect as a method that ignores the quantization loss. The experimental results showed that binaryconnect method fails to learn binary/ternary weights. On the other hand, our method not only learns recurrent binary/ternary weights but also outperforms all the existing quantization methods in literature. It is also worth mentioning that the models trained with our method achieve a comparable accuracy performance w.r.t. 
their full-precision counterpart. Figure 1(a) shows a histogram of the binary/ternary weights of the LSTM layer used for the character-level language modeling task on the Penn Treebank corpus. In fact, our model learns to use binary or ternary weights by steering the weights into the deterministic values of -1, 0 or 1. Unlike CNNs or fully-connected networks trained with binary/ternary weights, which can use either real-valued or binary/ternary weights at inference, the proposed LSTMs trained with binary/ternary weights can only perform the inference computations with binary/ternary weights. Moreover, the distribution of the weights is dominated by non-zero values for the model with ternary weights. To show the effect of the probabilistic quantization on the prediction accuracy of temporal tasks, we adopted the ternarized network trained for the character-level language modeling task on the Penn Treebank corpus (see Section 5.1). We measured the prediction accuracy of this network on the test set over 10000 samples and reported the distribution of the prediction accuracy in FIG0. FIG0(b) shows that the variance imposed by the stochastic ternarization on the prediction accuracy is very small and can be ignored. It is worth mentioning that we have also observed a similar behavior for other temporal tasks used in this paper. FIG1 illustrates the learning curves and generalization of our method to longer sequences on the validation set of the Penn Treebank corpus. In fact, the proposed training algorithm also tries to retain the main features of using batch normalization, i.e., fast convergence and good generalization over long sequences. FIG1(a) shows that our model converges faster than the full-precision LSTM for the first few epochs. After a certain point, the convergence rate of our method decreases, which prevents the model from early overfitting. FIG1(b) also shows that our training method generalizes well over longer sequences than those seen during training. Similar to the full-precision baseline, our binary/ternary models learn to focus only on information relevant to the generation of the next target character. In fact, the prediction accuracy of our models improves as the sequence length increases, since longer sequences provide more information from the past for the generation of the next target character. While we have only applied our binarization/ternarization method to LSTMs, our method can be used to binarize/ternarize other recurrent architectures such as GRUs. To show the versatility of our method, we repeat the character-level language modeling task performed in Section 5.1 using GRUs on the Penn Treebank, War & Peace and Linux Kernel corpora. We also adopted the same network configurations and settings used in Section 5.1 for each of the aforementioned corpora. TAB4 summarizes the performance of our binarized/ternarized models. The simulation results show that our method can successfully binarize/ternarize the recurrent weights of GRUs. As a final note, we have investigated the effect of using different batch sizes on the prediction accuracy of our binarized/ternarized models. To this end, we trained an LSTM of size 1000 over a sequence length of 100 and different batch sizes to perform the character-level language modeling task on the Penn Treebank corpus. Batch normalization cannot be used for a batch size of 1, as the output vector will be all zeros.
Moreover, using a small batch size leads to a high variance when estimating the statistics of the unnormalized vector, and consequently a lower prediction accuracy than the baseline model without batch normalization, as shown in Figure 3. On the other hand, the prediction accuracy of our binarization/ternarization models improves as the batch size increases, while the prediction accuracy of the baseline model decreases. Figure 3: Effect of different batch sizes on the prediction accuracy of the character-level language modeling task on the Penn Treebank corpus. The introduced binarized/ternarized recurrent models can be exploited by various dataflows such as DaDianNao (Chen et al. (2014)) and TPU (Jouppi et al. (2017)). In order to evaluate the effectiveness of LSTMs with recurrent binary/ternary weights, we build our binary/ternary architecture over DaDianNao as a baseline, which has proven to be the most efficient dataflow for DNNs with sigmoid/tanh functions. In fact, DaDianNao achieves a speedup of 656× and reduces the energy by 184× over a GPU (Chen et al. (2014)). Moreover, some hardware techniques can be adopted on top of DaDianNao to further speed up the computations. For instance, it has been shown that ineffectual computations of zero-valued weights can be skipped to improve the run-time performance of DaDianNao. In DaDianNao, a DRAM is used to store all the weights/activations and provide the required memory bandwidth for each multiply-accumulate (MAC) unit. For evaluation purposes, we consider two different application-specific integrated circuit (ASIC) architectures implementing Eq. (2): a low-power implementation and a high-speed inference engine. We build these two architectures based on the aforementioned dataflow. For the low-power implementation, we use 100 MAC units. We also use a 12-bit fixed-point representation for both weights and activations of the full-precision model as a baseline architecture. As a result, 12-bit multipliers are required to perform the recurrent computations. Note that using the 12-bit fixed-point representation for weights and activations guarantees no prediction accuracy loss in the full-precision models. For the LSTMs with recurrent binary/ternary weights, a 12-bit fixed-point representation is only used for activations, and the multipliers in the MAC units are replaced with low-cost multiplexers. Similarly, using a 12-bit fixed-point representation for activations guarantees no prediction accuracy loss in the introduced binary/ternary models. We implemented our low-power inference engine for both the full-precision and binary/ternary-precision models in TSMC 65-nm CMOS technology. The synthesis results, excluding the implementation cost of the DRAM, are summarized in TAB7. They show that using recurrent binary/ternary weights results in up to 9× lower power and 10.6× lower silicon area compared to the baseline when performing the inference computations at 400 MHz. For the high-speed design, we consider the same silicon area and power consumption for both the full-precision and binary/ternary-precision models. Since the MAC units of the binary/ternary-precision model require less silicon area and power consumption as a result of using multiplexers instead of multipliers, we can instantiate up to 10× more MAC units, resulting in up to 10× speedup compared to the full-precision model (see TAB7). It is also worth noting that the models using recurrent binary/ternary weights also require up to 12× less memory bandwidth than the full-precision models.
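To make the arithmetic concrete, here is a minimal sketch (not taken from the paper) of why ternary weights let the multiplexer-based MAC units described above avoid multipliers entirely: each weight only selects whether an activation is added, subtracted, or skipped. The matrix size, the 12-bit activation range, and the NumPy implementation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.choice([-1, 0, 1], size=(4, 8))   # ternary weights in {-1, 0, +1}
x = rng.integers(-2048, 2048, size=8)     # e.g. 12-bit fixed-point activations

def ternary_matvec(W, x):
    """Accumulate +x[j] where W[i, j] == +1, -x[j] where W[i, j] == -1, skip zeros."""
    y = np.zeros(W.shape[0], dtype=np.int64)
    for i in range(W.shape[0]):
        y[i] = x[W[i] == 1].sum() - x[W[i] == -1].sum()
    return y

# Same result as a full multiply-accumulate, but without any multiplications.
assert np.array_equal(ternary_matvec(W, x), W @ x)
```

In hardware this per-weight select corresponds to a small multiplexer feeding an adder, which is the source of the area and power savings reported for the low-power design.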
More details on the proposed architecture are provided in Appendix D. In this paper, we introduced a method that learns recurrent binary/ternary weights and eliminates most of the full-precision multiplications of the recurrent computations during the inference. We showed that the proposed training method generalizes well over long sequences and across a wide range of temporal tasks such as word/character language modeling and pixel-by-pixel classification tasks. We also showed that learning recurrent binary/ternary weights brings a major benefit to custom hardware implementations by replacing full-precision multipliers with hardware-friendly multiplexers and reducing the memory bandwidth. For this purpose, we introduced two ASIC implementations: a low-power and a high-throughput implementation. The former architecture can reduce power consumption by up to 9× and the latter speeds up the recurrent computations by a factor of 10. Figure 4: Probability density of states/gates for the BinaryConnect LSTM compared to its full-precision counterpart on the Penn Treebank character-level modeling task. Both models were trained for 50 epochs. The vertical axis denotes the time steps. Figure 4 shows the probability density of the gates and hidden states of the BinaryConnect LSTM and its full-precision counterpart, both trained with 1000 units and a sequence length of 100 on the Penn Treebank corpus BID35 for 50 epochs. The probability density curves show that the gates in the binarized LSTM fail to control the flow of information. More specifically, the input gate i and the output gate o tend to let all information through, the gate g tends to block all information, and the forget gate f cannot decide which information to let through; the probability density is centered around values of 1 for the input gate i. In fact, the binarization process changes the probability density of the gates and hidden states during the training phase.
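As a rough illustration of the overall training idea recapped in this excerpt (keep full-precision shadow weights for the update, use their ternarized values in the forward pass), here is a minimal straight-through-estimator sketch in PyTorch. The fixed threshold, the plain linear layer, and the omission of batch normalization are assumptions for brevity and do not reproduce the paper's exact recurrent formulation.

```python
import torch

class TernarizeSTE(torch.autograd.Function):
    """Forward: map real weights to {-1, 0, +1}; backward: straight-through gradient."""
    @staticmethod
    def forward(ctx, w, threshold=0.05):
        return torch.where(w.abs() < threshold, torch.zeros_like(w), torch.sign(w))

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # gradient flows unchanged to the real-valued weights

w_real = torch.randn(128, 128, requires_grad=True)  # full-precision "shadow" weights
x = torch.randn(32, 128)
h = x @ TernarizeSTE.apply(w_real).t()               # forward pass uses ternary weights
h.sum().backward()                                   # the gradient update targets w_real
```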
We propose high-performance LSTMs with binary/ternary weights that can greatly reduce implementation complexity
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:996
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: This paper addresses unsupervised few-shot object recognition, where all training images are unlabeled and do not share classes with labeled support images for few-shot recognition in testing. We use a new GAN-like deep architecture aimed at unsupervised learning of an image representation which will encode latent object parts and thus generalize well to unseen classes in our few-shot recognition task. Our unsupervised training integrates adversarial, self-supervision, and deep metric learning. We make two contributions. First, we extend the vanilla GAN with reconstruction loss to enforce the discriminator capture the most relevant characteristics of "fake" images generated from randomly sampled codes. Second, we compile a training set of triplet image examples for estimating the triplet loss in metric learning by using an image masking procedure suitably designed to identify latent object parts. Hence, metric learning ensures that the deep representation of images showing similar object classes which share some parts are closer than the representations of images which do not have common parts. Our results show that we significantly outperform the state of the art, as well as get similar performance to the common episodic training for fully-supervised few-shot learning on the Mini-Imagenet and Tiered-Imagenet datasets. This paper presents a new deep architecture for unsupervised few-shot object recognition. In training, we are given a set of unlabeled images. In testing, we are given a small number K of support images with labels sampled from N object classes that do not appear in the training set (also referred to as unseen classes). Our goal in testing is to predict the label of a query image as one of these N previously unseen classes. A common approach to this N -way K-shot recognition problem is to take the label of the closest support to the query. Thus, our key challenge is to learn a deep image representation on unlabeled data such that it would in testing generalize well to unseen classes, so as to enable accurate distance estimation between the query and support images. Our unsupervised few-shot recognition problem is different from the standard few-shot learning (Snell et al., 2017; Finn et al., 2017) , as the latter requires labeled training images (e.g., for episodic training (Vinyals et al., 2016) ). Also, our problem is different from the standard semi-supervised learning (Chapelle et al., 2009) , where both unlabeled and labeled data are typically allowed to share either all or a subset of classes. When classes of unlabeled and labeled data are different in semi-supervised learning (Chapelle et al., 2009) , the labeled dataset is typically large enough to allow transfer learning of knowledge from unlabeled to labeled data, which is not the case in our few-shot setting. There is scant work on unsupervised few-shot recognition. The state of the art (Hsu et al., 2018) first applies unsupervised clustering (Caron et al., 2018) for learning pseudo labels of unlabeled training images, and then uses the standard few-shot learning on these pseudo labels for episodic traininge.g. , Prototypical Network (Snell et al., 2017) or MAML (Finn et al., 2017) . 
However, performance of this method is significantly below that of counterpart approaches to supervised few-shot learning. Our approach is aimed at learning an image representation from unlabeled data that captures presence or absence of latent object parts. We expect that such a representation would generalize well to unseen classes in our few-shot recognition task. This is because of the common assumption in computer vision that various distinct object classes share certain parts. Thus, while our labeled and unlabeled images do not show the same object classes, there may be some parts that appear in both training and test image sets. Therefore, an image representation that would capture presence of these common parts in unlabeled images is expected to also be suitable for representing unseen classes, and thus facilitate our N-way K-shot recognition. Toward learning such an image representation, in our unsupervised training, we integrate adversarial, self-supervision, and deep metric learning. Figure 1: We use a GAN-like deep architecture to learn an image encoding z on unlabeled training data that will be suitable for few-shot recognition in testing. Our unsupervised training integrates adversarial, self-supervision, and metric learning. The figure illustrates our first contribution that extends the vanilla GAN (the red dashed line) with regularization so the encoding ẑ of a "fake" image is similar to the randomly sampled code z which has been used for generating the "fake" image. The self-supervision task is to predict the rotation angle of rotated real training images. Deep metric learning is illustrated in greater detail in Fig. 3. As shown in Fig. 1, we use a GAN-like architecture for training a discriminator network D to encode real images into their d-dimensional representations z = D_z(x), which will be later used for few-shot recognition in testing. We also consider a discrete encoding z = D_z(x) ∈ {−1, 1}^d, and empirically discover that it gives better performance than the continuous counterpart. Hence our interpretation that binary values in the discrete z indicate presence or absence of d latent parts in images. In addition to D_z, the discriminator has two other outputs (i.e., heads), D_r/f and D_rot, for adversarial and self-supervised learning, respectively, as illustrated in Fig. 2. D is adversarially trained to distinguish between real and "fake" images, where the latter x are produced by a generator network G, x = G(z), from image encodings z which are randomly sampled from the uniform distribution. Sampling from the uniform distribution is justified, because latent parts shared among a variety of object classes appearing in the unlabeled training set are likely to be uniformly distributed across the training set. We extend the vanilla GAN with regularization aimed at minimizing a reconstruction loss between the sampled z and the corresponding embedding ẑ = D(G(z)). As our experiments demonstrate, this reconstruction loss plays an important role in training both D and G in combination with the adversarial loss, as both losses enforce G to generate images that are as realistic as possible and D to capture the most relevant image characteristics for reconstruction and real/fake recognition.
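A minimal sketch of the code-reconstruction regularizer described above is given below, with toy stand-ins for the generator G and the encoder head D_z; the layer sizes, the uniform sampling range, and the choice of a mean-squared error are assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

d = 64                                                     # code length (assumed)
G = nn.Sequential(nn.Linear(d, 256), nn.ReLU(),
                  nn.Linear(256, 32 * 32))                 # toy generator
D_z = nn.Sequential(nn.Linear(32 * 32, 256), nn.ReLU(),
                    nn.Linear(256, d))                     # toy encoder head of D

z = torch.empty(16, d).uniform_(-1.0, 1.0)     # codes sampled from a uniform distribution
x_fake = G(z)                                  # "fake" images generated from the codes
z_hat = D_z(x_fake)                            # re-encode the fake images
recon_loss = nn.functional.mse_loss(z_hat, z)  # pull the encoding back toward the code
recon_loss.backward()  # in practice combined with the adversarial (real/fake) loss
```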
Furthermore, following recent advances in self-supervised learning (Doersch et al., 2015; Zhang et al., 2016; Noroozi & Favaro, 2016; Noroozi et al., 2017; Zhang et al., 2017) , we also augment our training set with rotated versions of the real images around their center, and train D to predict their rotation angles,α = D rot (Rotate(x, α)) ∈ {0, 1, 2, 3} * 90 • . As in other approaches that use self-supervised learning, our results demonstrate that this data augmentation strengthens our unsupervised training and improves few-shot recognition. Finally, we use deep metric learning toward making the image encoding z = D z (x) represent latent parts and in this way better capture similarity of object classes for our few-shot recognition. We expect that various object classes share parts, and that more similar classes have more common parts. Therefore, the encodings of images showing similar (or different) object classes should have a small (or large) distance. To ensure this property, we use metric learning and compile a new training set of triplet images for estimating the standard triple loss, as illustrated in Fig. 3 . Since classes in our training set are not annotated, we form the triplet training examples by using an image masking procedure which is particularly suitable for identifying latent object parts. In the triplet, the anchor is the original (unmasked) image, the positive is an image obtained from the original by masking rectangular patches at the image periphery (e.g., top corner), and the negative is an image obtained from the original by masking centrally located image patches. By design, the negative image masks an important object part, and thus the deep representations of the anchor and the negative should have a large distance. Conversely, masking peripheral corners in the positive image does not cover any important parts of the object, and thus the deep representation of the positive should be very close to that of the anchor. In this way, our metric learning on the triplet training examples ensures that the learned image representation z accounts for similarity of object classes in terms of their shared latent parts. As our results show, this component of our unsupervised training further improves few-shot recognition in testing, to the extent that not only do we significantly outperform the state of the art but also get a performance that is on par with the common episodic training for fullysupervised few-shot learning on the Mini-Imagenet (Vinyals et al., 2016; Ravi & Larochelle, 2016) and Tiered-Imagenet (Ren et al., 2018) datasets. Our contributions are twofold: • Extending the vanilla GAN with a reconstruction loss between uniformly sampled codes, , and embeddings of the corresponding "fake" images,ẑ = D(G(z )). • The masking procedure for compiling triplet image examples and deep metric learning of z so it accounts for image similarity in terms of shared latent parts. The rest of this paper is organized as follows. Sec. 2 reviews previous work, Sec. 3 specifies our proposed approach, Sec. 4 presents our implementation details and our experimental results, and finally, Sec. 5 gives our concluding remarks. We have addressed unsupervised few-shot object recognition, where all training images are unlabeled and do not share classes with test images. 
A new GAN-like deep architecture has been proposed for unsupervised learning of an image representation which respects image similarity in terms of shared latent object parts. We have made two contributions, by extending the vanilla GAN with a reconstruction loss and by integrating deep metric learning with the standard adversarial and self-supervision learning. Our results demonstrate that our approach generalizes well to unseen classes, outperforming the state of the art by more than 8% in both 1-shot and 5-shot recognition tasks on the benchmark Mini-Imagenet dataset. We have reported the first results of unsupervised few-shot recognition on the Tiered-Imagenet dataset. Our ablations show that our first contribution alone leads to superior performance relative to that of closely related approaches, and that the addition of the second contribution further improves our 1-shot and 5-shot recognition by 3%. We also outperform a recent fully-supervised approach to few-shot learning that uses the common episodic training on the same datasets. Figure 4: Our image masking with rectangular patches for Mini-Imagenet. In every row, the images are organized from left to right in the descending order by their estimated distance to the original (unmasked) image.
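The image-masking triplet construction described earlier in this excerpt can be illustrated with a short sketch; the patch sizes, the zero mask value, the margin, and the use of raw NumPy arrays are placeholder choices, not the paper's settings.

```python
import numpy as np

def mask_patch(img, top, left, size):
    """Zero out a square patch of the image."""
    out = img.copy()
    out[top:top + size, left:left + size] = 0.0
    return out

img = np.random.rand(84, 84)               # stand-in for an unlabeled training image
anchor = img                                # original, unmasked image
positive = mask_patch(img, 0, 0, 16)        # peripheral corner: no important part hidden
negative = mask_patch(img, 34, 34, 16)      # central patch: likely covers an object part

def triplet_loss(z_a, z_p, z_n, margin=1.0):
    """Standard triplet loss; z_a, z_p, z_n would be the encoder outputs D_z(.)."""
    d_ap = np.linalg.norm(z_a - z_p)
    d_an = np.linalg.norm(z_a - z_n)
    return max(0.0, d_ap - d_an + margin)
```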
We address the problem of unsupervised few-shot object recognition, where all training images are unlabeled and do not share classes with test images.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:997
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: Existing black-box attacks on deep neural networks (DNNs) so far have largely focused on transferability, where an adversarial instance generated for a locally trained model can “transfer” to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model’s class probabilities, which do not rely on transferability. We also propose strategies to decouple the number of queries required to generate each adversarial sample from the dimensionality of the input. An iterative variant of our attack achieves close to 100% adversarial success rates for both targeted and untargeted attacks on DNNs. We carry out extensive experiments for a thorough comparative evaluation of black-box attacks and show that the proposed Gradient Estimation attacks outperform all transferability based black-box attacks we tested on both MNIST and CIFAR-10 datasets, achieving adversarial success rates similar to well known, state-of-the-art white-box attacks. We also apply the Gradient Estimation attacks successfully against a real-world content moderation classifier hosted by Clarifai. Furthermore, we evaluate black-box attacks against state-of-the-art defenses. We show that the Gradient Estimation attacks are very effective even against these defenses. The ubiquity of machine learning provides adversaries with both opportunities and incentives to develop strategic approaches to fool learning systems and achieve their malicious goals. Many attack strategies devised so far to generate adversarial examples to fool learning systems have been in the white-box setting, where adversaries are assumed to have access to the learning model BID18 ; BID0 ; BID1 ; BID6 ). However, in many realistic settings, adversaries may only have black-box access to the model, i.e. they have no knowledge about the details of the learning system such as its parameters, but they may have query access to the model's predictions on input samples, including class probabilities. For example, we find this to be the case in some popular commercial AI offerings, such as those from IBM, Google and Clarifai. With access to query outputs such as class probabilities, the training loss of the target model can be found, but without access to the entire model, the adversary cannot access the gradients required to carry out white-box attacks.Most existing black-box attacks on DNNs have focused on transferability based attacks BID12 ; BID7 ; BID13 ), where adversarial examples crafted for a local surrogate model can be used to attack the target model to which the adversary has no direct access. The exploration of other black-box attack strategies is thus somewhat lacking so far in the literature. In this paper, we design powerful new black-box attacks using limited query access to learning systems which achieve adversarial success rates close to that of white-box attacks. These black-box attacks help us understand the extent of the threat posed to deployed systems by adversarial samples. The code to reproduce our results can be found at https://github.com/ anonymous 1 .New black-box attacks. We propose novel Gradient Estimation attacks on DNNs, where the adversary is only assumed to have query access to the target model. 
These attacks do not need any access to a representative dataset or any knowledge of the target model architecture. In the Gradient Estimation attacks, the adversary adds perturbations proportional to the estimated gradient, instead of the true gradient as in white-box attacks BID0 ; Kurakin et al. (2016) ). Since the direct Gradient Estimation attack requires a number of queries on the order of the dimension of the input, we explore strategies for reducing the number of queries to the target model. We also experimented with Simultaneous Perturbation Stochastic Approximation (SPSA) and Particle Swarm Optimization (PSO) as alternative methods to carry out query-based black-box attacks but found Gradient Estimation to work the best.Query-reduction strategies We propose two strategies: random feature grouping and principal component analysis (PCA) based query reduction. In our experiments with the Gradient Estimation attacks on state-of-the-art models on MNIST (784 dimensions) and CIFAR-10 (3072 dimensions) datasets, we find that they match white-box attack performance, achieving attack success rates up to 90% for single-step attacks in the untargeted case and up to 100% for iterative attacks in both targeted and untargeted cases. We achieve this performance with just 200 to 800 queries per sample for single-step attacks and around 8,000 queries for iterative attacks. This is much fewer than the closest related attack by . While they achieve similar success rates as our attack, the running time of their attack is up to 160× longer for each adversarial sample (see Appendix I.6).A further advantage of the Gradient Estimation attack is that it does not require the adversary to train a local model, which could be an expensive and complex process for real-world datasets, in addition to the fact that training such a local model may require even more queries based on the training data.Attacking real-world systems. To demonstrate the effectiveness of our Gradient Estimation attacks in the real world, we also carry out a practical black-box attack using these methods against the Not Safe For Work (NSFW) classification and Content Moderation models developed by Clarifai, which we choose due to their socially relevant application. These models have begun to be deployed for real-world moderation BID4 , which makes such black-box attacks especially pernicious. We carry out these attacks with no knowledge of the training set. We have demonstrated successful attacks ( FIG0 ) with just around 200 queries per image, taking around a minute per image. In FIG0 , the target model classifies the adversarial image as 'safe' with high confidence, in spite of the content that had to be moderated still being clearly visible. We note here that due to the nature of the images we experiment with, we only show one example here, as the others may be offensive to readers. The full set of images is hosted anonymously at https://www.dropbox.com/s/ xsu31tjr0yq7rj7/clarifai-examples.zip?dl=0.Comparative evaluation of black-box attacks. We carry out a thorough empirical comparison of various black-box attacks (given in TAB8 ) on both MNIST and CIFAR-10 datasets. We study attacks that require zero queries to the learning model, including the addition of perturbations that are either random or proportional to the difference of means of the original and targeted classes, as well as various transferability based black-box attacks. 
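A hedged sketch of the basic finite-difference gradient estimation underlying such query-based attacks is given below; `query_loss` is a hypothetical stand-in for a loss computed from the target model's returned class probabilities, the input is assumed to be a float image in [0, 1], and the step sizes are arbitrary. Note that this naive version needs roughly two queries per input dimension, which is exactly the cost that random-grouping and PCA-based query-reduction strategies are meant to avoid.

```python
import numpy as np

def estimate_gradient(query_loss, x, delta=1e-2):
    """Central finite differences: two queries to the target model per coordinate of x."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = delta
        grad.flat[i] = (query_loss(x + e) - query_loss(x - e)) / (2.0 * delta)
    return grad

def fgs_step(query_loss, x, eps=0.3):
    """FGSM-style perturbation using the estimated gradient instead of the true one."""
    return np.clip(x + eps * np.sign(estimate_gradient(query_loss, x)), 0.0, 1.0)
```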
We show that the proposed Gradient Estimation attacks outperform other black-box attacks in terms of attack success rate and achieve results comparable with white-box attacks. In addition, we also evaluate the effectiveness of these attacks on DNNs made more robust using adversarial training BID0 BID18 and its recent variants, including ensemble adversarial training BID21 and iterative adversarial training BID9 . We find that although standard and ensemble adversarial training confer some robustness against single-step attacks, they are vulnerable to iterative Gradient Estimation attacks, with adversarial success rates in excess of 70% for both targeted and untargeted attacks. We find that our methods outperform other black-box attacks and achieve performance comparable to white-box attacks. Related Work. Existing black-box attacks that do not use a local model were first proposed for convex inducing two-class classifiers by BID11 . For malware data, Xu et al. (2016) use genetic algorithms to craft adversarial samples, while Dang et al. (2017) use hill climbing algorithms. These methods are prohibitively expensive for non-categorical and high-dimensional data such as images. BID13 proposed using queries to a target model to train a local surrogate model, which was then used to generate adversarial samples. This attack relies on transferability. To the best of our knowledge, the only previous literature on query-based black-box attacks in the deep learning setting is independent work by BID10 and . BID10 propose a greedy local search to generate adversarial samples by perturbing randomly chosen pixels and using those which have a large impact on the output probabilities. Their method uses 500 queries per iteration, and the greedy local search is run for around 150 iterations for each image, resulting in a total of 75,000 queries per image, which is much higher than any of our attacks. Further, we find that our methods achieve higher targeted and untargeted attack success rates on both MNIST and CIFAR-10 as compared to their method. The latter work proposes a black-box attack method named ZOO, which also uses the method of finite differences to estimate the derivative of a function. However, while we propose attacks that compute an adversarial perturbation, approximating FGSM and iterative FGS, ZOO approximates the Adam optimizer while trying to perform coordinate descent on the loss function proposed by BID1 . Neither of these works demonstrates the effectiveness of their attacks on real-world systems or on state-of-the-art defenses. Overall, in this paper, we conduct a systematic analysis of new and existing black-box attacks on state-of-the-art classifiers and defenses. We propose Gradient Estimation attacks which achieve high attack success rates comparable with even white-box attacks and outperform other state-of-the-art black-box attacks. We apply random grouping and PCA based methods to reduce the number of queries required to a small constant and demonstrate the effectiveness of the Gradient Estimation attack even in this setting. We also apply our black-box attack against a real-world classifier and state-of-the-art defenses. All of our results show that Gradient Estimation attacks are extremely effective in a variety of settings, making the development of better defenses against black-box attacks an urgent task.
Query-based black-box attacks on deep neural networks with adversarial success rates matching white-box attacks
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:998
Below are the abstract, introduction, and conclusion of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary and no additional text. Paper text: We propose a novel subgraph image representation for classification of network fragments with the target being their parent networks. The graph image representation is based on 2D image embeddings of adjacency matrices. We use this image representation in two modes. First, as the input to a machine learning algorithm. Second, as the input to a pure transfer learner. Our conclusions from multiple datasets are that 1. deep learning using structured image features performs the best compared to graph kernel and classical features based methods; and, 2. pure transfer learning works effectively with minimum interference from the user and is robust against small data. Our experiments overwhelmingly show that the structured image representation of graphs achieves successful graph classification with ease. The image representation is lossless, that is the image embeddings contain all the information in the corresponding adjacency matrix. Our results also show that even with very little information about the parent network, Deep network models are able to extract network signatures. Specifically, with just 64-node samples from networks with up to 1 million nodes, we were able to predict the parent network with > 90% accuracy while being significantly better than random with only 8-node samples. Further, we demonstrated that the image embedding approach provides many advantages over graph kernel and feature-based methods.We also presented an approach to graph classification using transfer learning from a completely different domain. Our approach converts graphs into 2D image embeddings and uses a pre-trained image classifier (Caffe) to obtain label-vectors. In a range of experiments with real-world data sets, we have obtained accuracies from 70% to 94% for 2-way classification and 61% for multi-way classification. Further, our approach is highly resilient to training-to-test ratio, that is, can work with sparse training samples. Our results show that such an approach is very promising, especially for applications where training data is not readily available (e.g. terrorist networks).Future work includes improvements to the transfer learning by improving the distance function between label-vectors, as well as using the probabilities from Caffe. Further , we would also look to generalize this approach to other domains, for example classifying radio frequency map samples using transfer learning.
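As a toy illustration of the general idea of treating a sampled subgraph's adjacency matrix as a 2D image, consider the sketch below; the random node sampling, the degree-based ordering, and the grayscale scaling are assumptions, since the excerpt does not spell out the exact embedding procedure.

```python
import numpy as np
import networkx as nx

parent = nx.barabasi_albert_graph(1000, 3)          # stand-in for a parent network
nodes = np.random.choice(list(parent.nodes()), size=64, replace=False)
sub = parent.subgraph(nodes)

# One possible canonical node ordering (by degree) before reading off the matrix.
order = sorted(sub.nodes(), key=sub.degree, reverse=True)
A = nx.to_numpy_array(sub, nodelist=order)           # 64x64 adjacency matrix
image = (A * 255).astype(np.uint8)                   # grayscale "image" of the subgraph
```

Such an image could then be fed to a standard CNN classifier or to a pre-trained image model used as a transfer learner, mirroring the two modes discussed above.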
We convert subgraphs into structured images and classify them using 1. deep learning and 2. transfer learning (Caffe) and achieve stunning results.
{ "domains": [ "artificial_intelligence" ], "input_context": "multiple_paragraphs", "output_context": "sentence", "source_type": "single_source", "task_family": "summarization" }
scitldr_aic:train:999